* server: fix resource count of primary storage if some volumes are Expunged but not removed
Steps to reproduce the issue:
(1) Create a VM and stop it. Check the resource count of primary storage.
(2) Download the volume. The resource count of primary storage is unchanged.
(3) Expunge the VM. The volume goes to the Expunged state because there is a volume snapshot on secondary storage. The resource count of primary storage decreases.
(4) Update the resource count of the account (or domain); the resource count of primary storage is wrongly reset to the value from step (2) (see the sketch below).
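A minimal sketch of the intended recount, with field and state names assumed from the description above: volumes that are already Expunged (but not yet removed from the DB) must not be added back when the resource count is recalculated.
```python
# Sketch only: recompute primary storage usage for an account/domain while
# skipping volumes that are Expunged but still present (not removed) in the DB,
# so the recount keeps the value observed after step (3).
def primary_storage_used(volumes):
    return sum(
        v["size"]
        for v in volumes
        if v.get("removed") is None and v["state"] != "Expunged"
    )
```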
* New feature: Add support to destroy/recover volumes
* Add integration test for volume destroy/recover
* marvin: check resource count of more types
* Translate messages to Japanese (JP)
* Update messages for Chinese (CN)
* Translate messages for Dutch (NL)
* fix two issues per Daan's comments
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
After a local template is uploaded via the browser, the generated usage event of type "TEMPLATE.CREATE" is persisted with the data store ID instead of the zone ID in the zone_id column, because the upload monitor sets the data store ID on that column once the upload completes. The fix refactors the upload monitor logic to query the DB for the data store and set its associated zone ID on the usage event.
The fix produces the same behaviour as registering a template from a URL.
The fix also covers volumes uploaded locally via the browser.
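A rough sketch of the corrected lookup, assuming hypothetical db helpers and column names inferred from the description (the actual change lives in the upload monitor code):
```python
# Illustrative only: resolve the zone of the data store before persisting the
# TEMPLATE.CREATE usage event, instead of writing the data store ID into zone_id.
def persist_template_create_event(db, template_id, data_store_id):
    row = db.query_one(
        "SELECT data_center_id FROM image_store WHERE id = %s", (data_store_id,)
    )
    zone_id = row["data_center_id"]  # assumed column name for the store's zone
    db.execute(
        "INSERT INTO usage_event (type, zone_id, resource_id) VALUES (%s, %s, %s)",
        ("TEMPLATE.CREATE", zone_id, template_id),
    )
```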
When we restart the VPC after destroying the master VR, the backup VR becomes the master and the site-to-site connections are not in the connected state: the passive VPN connection will be in the connected state, but the active VPN connection will be in a disconnected state.
The VM ingestion feature allows CloudStack to discover, on-board, and import existing VMs in an infrastructure. The feature currently works only for VMware, with a hypervisor-agnostic framework that may be extended to KVM and XenServer in the future.
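As an illustration only, a sketch of how the feature could be driven through the API; `call_api` stands in for a signed CloudStack API request helper, and the parameter set is assumed from the feature description rather than the exact API reference:
```python
# Hypothetical helper usage: list unmanaged VMware instances in a cluster and
# import one of them with a chosen service offering.
def import_existing_vm(call_api, cluster_id, vm_name, service_offering_id):
    listed = call_api("listUnmanagedInstances", clusterid=cluster_id)
    names = [vm["name"] for vm in listed.get("unmanagedinstance", [])]
    if vm_name not in names:
        raise ValueError("VM %s was not discovered in cluster %s" % (vm_name, cluster_id))
    return call_api(
        "importUnmanagedInstance",
        clusterid=cluster_id,
        name=vm_name,
        serviceofferingid=service_offering_id,
    )
```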
Associating static NAT on an IP to a VM fails even though the IP is not allocated. When we try to enable static NAT on a second IP address for the same VM, the operation fails, but the IP address remains allocated in the DB and cannot be used to enable static NAT on a different VM.
Steps to reproduce the issue:
(1) Create a VPC (vpc-001) and a VPC tier (vpc-001-001).
(2) Create a VM (vm-001-001) in vpc-001-001.
(3) Acquire a public IP (ip-1) and enable static NAT to vm-001-001; the operation succeeds.
(4) Acquire a public IP (ip-2) and enable static NAT to vm-001-001; the operation fails, but the IP is still assigned to VPC tier vpc-001-001. Note down the IP address and its ID.
(5) Create another VPC tier (vpc-001-002) and a VM (vm-001-002) in that tier.
(6) Enable static NAT from ip-2 to vm-001-002; the operation should succeed (see the sketch below).
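A sketch of step (6) as an API call; `call_api` is the same hypothetical signed-request helper used in the earlier sketches, and the response handling is simplified:
```python
# With the fix, ip-2 (left assigned by the failed step (4)) can be reused for a
# different VM in another tier instead of being stuck in the DB.
def verify_ip_reusable(call_api, ip2_id, tier2_network_id, vm2_id):
    return call_api(
        "enableStaticNat",
        ipaddressid=ip2_id,
        networkid=tier2_network_id,
        virtualmachineid=vm2_id,
    )
```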
Add a global setting to disable creating networks with the same name in an account.
Add a global setting to disable creating a network without
specifying the start and end IPv4 or IPv6 addresses.
By default, networks with the same name can be created in an account.
Sometimes networks with the same name should not be created.
This change adds a global setting which prevents creating networks with the same name.
The default value is true; set it to false to prevent creating networks with the same name.
It is also possible to create a shared network without specifying the
start and end IPv4 or IPv6 addresses.
This change adds a global setting which prevents creating a shared
network without specifying the start and end IPv4 or IPv6 addresses.
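A sketch of how an operator could toggle these behaviours via updateConfiguration; the setting keys below are placeholders, since the text does not state the exact configuration names:
```python
# Placeholder setting names, illustrative only.
def tighten_network_creation(call_api):
    call_api("updateConfiguration",
             name="allow.duplicate.network.name",       # placeholder key
             value="false")
    call_api("updateConfiguration",
             name="allow.shared.network.without.range",  # placeholder key
             value="false")
```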
If the disk size of the VM to be created is greater
than the volume size, the exception message should
display the numeric value instead of the variable name.
Fixes #3783
As reported in the issue, creating volumes from a pure snapshot fails with an NPE. This is due to the order of calls: disk offering access is checked before checking whether the disk offering value is present. This PR fixes that.
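A minimal sketch of the corrected ordering (names are illustrative, not the actual server code):
```python
# Check whether a disk offering was resolved at all before checking whether the
# caller may access it; a pure snapshot may carry no disk offering.
def validate_disk_offering(account, disk_offering, check_access):
    if disk_offering is None:
        return  # nothing to validate; previously the access check ran first and hit the NPE
    check_access(account, disk_offering)
```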
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
During VM migration on KVM, the libvirt qemu hook script rewrites bridge names, but only for guest networks. This works for user VMs. However, a virtual router also has NICs on the control and public networks; if control/public traffic uses different physical networks than the guest network, the virtual router cannot be migrated.
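A sketch of the idea behind the hook behaviour, assuming the standard libvirt qemu hook contract (domain XML on stdin for the migrate operation, rewritten XML on stdout); the bridge names in the map are examples, not CloudStack's actual ones:
```python
#!/usr/bin/env python3
# Only bridges known to belong to guest networks are renamed; control/public
# bridges are left untouched so a virtual router can still be migrated.
import sys
import xml.etree.ElementTree as ET

GUEST_BRIDGE_MAP = {"breth0-64": "brbond0-64"}  # example old -> new guest bridge

def rewrite_guest_bridges(domain_xml):
    root = ET.fromstring(domain_xml)
    for source in root.findall("./devices/interface[@type='bridge']/source"):
        bridge = source.get("bridge", "")
        if bridge in GUEST_BRIDGE_MAP:
            source.set("bridge", GUEST_BRIDGE_MAP[bridge])
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__" and len(sys.argv) > 2 and sys.argv[2] == "migrate":
    sys.stdout.write(rewrite_guest_bridges(sys.stdin.read()))
```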
Fixes: #2783
Fixes #3191
When a template is registered, the code stores the md5sum of the downloaded file in the vm_template table. However, this downloaded file may be deleted after template installation if it is not an actual template (.qcow2, .ova, etc.) file. When the user copies a template using the copyTemplate API, the actual template file is copied across the image stores, so matching the checksum of the copied file against the stored value from the vm_template table results in a mismatch.
The change sets an empty checksum value for the copied template when passing it to the download service, which skips the (incorrect) checksum check for the copy during install.
However, this results in a changed checksum value for the concerned template entry in the vm_template table after the template is installed.
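A minimal sketch of the install-time behaviour this enables: an empty checksum on the copied template skips verification instead of tripping over the stale md5sum.
```python
import hashlib

def verify_checksum(path, expected_md5):
    if not expected_md5:  # copied templates are handed over with an empty checksum
        return True
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest() == expected_md5
```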
Co-authored-by: dahn <daan.hoogland@gmail.com>
* [CLOUDSTACK-10408] Fix String.replaceAll() to replace() for better performance
* Further improve by replacing with a char rather than a String where possible
Co-authored-by: Rohit Yadav <rohit@apache.org>
* marvin: check resource count of more types
* New feature: add flag resource.count.running.vms.only to count resource consumption of only running vms
Stopped VMs do not actually use CPU/RAM.
A new global configuration, resource.count.running.vms.only, is added to determine whether only running VMs (including those in Starting/Stopping state) are taken into account when calculating CPU/memory resource consumption (see the sketch below).
* Add integration test for resource count of only running vms
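A sketch of the counting rule, with VM state names as commonly used and illustrative field names:
```python
ACTIVE_STATES = {"Running", "Starting", "Stopping"}

def count_cpu_ram(vms, running_only):
    # When the flag is on, only VMs that actually consume CPU/RAM are counted.
    counted = [vm for vm in vms
               if not running_only or vm["state"] in ACTIVE_STATES]
    return (sum(vm["cpu"] for vm in counted),
            sum(vm["ram_mb"] for vm in counted))
```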
The List Management Servers API returns a list of all the management servers but fails when trying to list by id or name. This change ensures that it fetches the details according to the parameters passed.
Fixes: #3833
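A sketch of the listing behaviour this fixes; `call_api` is the same hypothetical request helper as above, and the id/name parameters mirror the ones mentioned in the description:
```python
def get_management_server(call_api, ms_id=None, name=None):
    params = {}
    if ms_id:
        params["id"] = ms_id
    if name:
        params["name"] = name
    # With the fix, the response contains only the matching server(s)
    # instead of failing or returning the full list.
    return call_api("listManagementServers", **params)
```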
The metrics API has a few properties missing that are present in the corresponding resource response.
Fixes #3831
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit@apache.org>
Since 4.11.3, haproxy is always restarted when an LB rule is added or deleted.
When haproxy is started, the processes are:
```
root@r-854-VM:~# ps aux |grep haproxy
root 22272 0.0 0.2 4036 668 ? Ss 07:52 0:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy 22274 0.0 2.3 38444 5856 ? S 07:52 0:00 /usr/sbin/haproxy-master
haproxy 22275 0.0 0.3 38444 880 ? Ss 07:52 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
```
When haproxy is reloaded, the processes are:
```
root@r-854-VM:~# ps aux |grep haproxy
root 22272 0.0 0.2 4168 632 ? Ss 07:52 0:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy 22283 0.0 2.3 38444 5884 ? S 07:53 0:00 /usr/sbin/haproxy-master
haproxy 22286 0.0 0.3 38444 880 ? Ss 07:53 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 22275
```
We need to change the pid file path from /var/run/haproxy.pid to /run/haproxy.pid so that haproxy is reloaded instead of restarted, as sketched below.
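A sketch of the reload logic the pid-file path fix enables: if old pids are found in /run/haproxy.pid, the new process is started with -sf so it takes over gracefully instead of haproxy being restarted.
```python
import os

PID_FILE = "/run/haproxy.pid"  # the path the systemd wrapper actually writes

def build_haproxy_command():
    cmd = ["/usr/sbin/haproxy", "-f", "/etc/haproxy/haproxy.cfg", "-p", PID_FILE, "-Ds"]
    if os.path.isfile(PID_FILE):
        with open(PID_FILE) as f:
            old_pids = f.read().split()
        if old_pids:
            cmd += ["-sf"] + old_pids  # hand over from the old worker pids
    return cmd
```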
Stop asking the user (in the upgrade documentation) to remove a trailing slash from the local KVM pool path; do it in the upgrade path instead, so it is no longer needed in the docs for the upgrade to 4.14 and onwards.
Steps to reproduce the issue
(1) Create a custom service offering.
(2) Create a VM with the offering.
(3) Update the VM with displayvm=false; an error is returned:
(local) > update virtualmachine id=f33fd06a-7643-40d1-833f-272845d9ba09 displayvm=false
Error 530: {"updatevirtualmachineresponse":{"uuidList":[],"errorcode":530,"cserrorcode":9999}}
When starting a VM or migrating a VM (away from a host in maintenance), CloudStack checks the capacity of all hosts and chooses one. If there are hundreds of hosts on the platform, this can take several seconds. By the time CloudStack has chosen a host and starts/migrates the VM on it, the resource consumption of that host may have changed. This typically happens when starting/migrating multiple VMs.
It would be better to double-check the host capacity when starting a VM on a host.
This PR includes the fix for CPU-core capacity when starting/migrating a VM.
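A sketch of the double-check: re-validate the chosen host's free CPU cores just before the start/migrate is committed, since capacity may have drifted since planning (field names are illustrative):
```python
def recheck_cpu_core_capacity(host, vm):
    free_cores = host["total_cores"] - host["reserved_cores"] - host["used_cores"]
    if vm["cpu_cores"] > free_cores:
        raise RuntimeError(
            "Host %s no longer has %d free CPU cores; replan the deployment"
            % (host["name"], vm["cpu_cores"])
        )
```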