These changes are related to PR #3194, but also suspend/resume the VM when taking a VM snapshot and when deleting a VM snapshot, since the same operations are performed via Libvirt in both cases. There was also an issue with the UI/localization changes in the prior PR: it altered the Volume snapshot behavior but changed the VM snapshot wording. Both are corrected in this PR.
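For illustration only, this is the rough virsh equivalent of the suspend/snapshot/resume sequence performed via Libvirt (domain and snapshot names are placeholders, not from this PR):
```
virsh suspend <vm-name>                      # pause the guest before the snapshot operation
virsh snapshot-delete <vm-name> <snapshot>   # or snapshot-create-as when taking a snapshot
virsh resume <vm-name>                       # resume the guest afterwards
```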
Issuing this in response to the work happening in PR #4029.
In previous CloudStack versions, qcow2 images are created without a backing file format.
However, newer qemu versions require it, for example qemu 4.2 on Ubuntu 20.04.
Steps to reproduce the issue:
(1) Install CloudStack 4.14 or an earlier version on Ubuntu 19.04 or 18.04/16.04 LTS.
(2) Create VMs.
(3) Upgrade to 4.15 and upgrade the OS to Ubuntu 20.04, or install a new server with Ubuntu 20.04.
(4) Migrate a VM from the old Ubuntu version to Ubuntu 20.04; it fails with the exception below:
```
2021-02-04 13:43:07,397 DEBUG [resource.wrapper.LibvirtMigrateCommandWrapper] (agentRequest-Handler-1:null) (logid:93da9385) ExecutionException : org.libvirt.LibvirtException: Requested operation is not valid: format of backing image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/66990fcc-fd98-4932-9649-989bf6583d59' of image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/a3dd1f0f-2557-4e07-951c-e4eb7b3f38b2' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)
```
(5) Stop the VM and start it on the Ubuntu 20.04 server; it fails with the exception below:
```
2021-02-04 13:46:29,766 WARN [resource.wrapper.LibvirtStartCommandWrapper] (agentRequest-Handler-5:null) (logid:b54745a7) LibvirtException
org.libvirt.LibvirtException: Requested operation is not valid: format of backing image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/66990fcc-fd98-4932-9649-989bf6583d59' of image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/a3dd1f0f-2557-4e07-951c-e4eb7b3f38b2' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)
```
To make testing easier, steps 1 and 2 can be replaced by
```
qemu-img create -f qcow2 -b <backing file> <qcow2 image>
```
so that the qcow2 image does not have a backing file format recorded.
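With newer qemu the backing format can be recorded explicitly: pass -F when creating the overlay, or rewrite the metadata of an existing image with an unsafe rebase (paths are placeholders):
```
# record the backing file format when creating the overlay
qemu-img create -f qcow2 -F qcow2 -b <backing file> <qcow2 image>

# or add the backing file format to an existing image in place
qemu-img rebase -u -f qcow2 -F qcow2 -b <backing file> <qcow2 image>
```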
* 4.14:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
Adding RBD primary storage through the UI fails when no host port parameter is given, because the port was not added to the pool source XML when the pool was created.
This fixes an issue introduced in c3554ec31dafbdfaa0ed646afb17a6f3378571f5, which enabled a block of code that double-escapes the RADOS host/monitor port.
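For reference, a libvirt RBD pool definition carries the monitor port on the <host> element of the pool source; a minimal sketch (pool name, monitor address and secret UUID are placeholders):
```
<pool type='rbd'>
  <name>cloudstack-rbd</name>
  <source>
    <name>rbd</name>
    <host name='10.0.0.1' port='6789'/>
    <auth username='admin' type='ceph'>
      <secret uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
  </source>
</pool>
```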
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR fixes a regression introduced in #4497.
In CloudStack 4.14 and earlier, the CPU topology is set only when CPU cores per socket is set (to 4 or 6).
In other cases, there is no CPU topology in the VM XML definition.
With #4497, the VM gets a CPU topology in its XML definition even when CPU cores per socket is not set:
<topology sockets='<vm cpu cores>' cores='1' threads='1'/>
It is not clear whether this causes any issue, but it would be better not to add this element to the VM XML definition when CPU cores per socket is not set.
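For comparison, when CPU cores per socket is set (a hypothetical 8-vCPU VM with 4 cores per socket), the generated element would look like:
<topology sockets='2' cores='4' threads='1'/>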
XenServer 7.1 has a file descriptor/tapdisk ISO-caching issue where a new systemvm.iso is not recognised and file I/O errors are seen inside the VR/SSVM/CPVM. This was only reproducible (intermittently) with XS 7.1; the fix is to check for and eject the old/stale/cached systemvm.iso, insert the new systemvm.iso, and then eject it.
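For illustration only, the manual xe equivalent of forcing the fresh systemvm.iso to be picked up (UUIDs are placeholders, not taken from this change):
```
xe vbd-list vm-uuid=<vr-uuid> type=CD params=uuid
xe vbd-eject uuid=<cd-vbd-uuid>                                  # drop the stale/cached ISO
xe vbd-insert uuid=<cd-vbd-uuid> vdi-uuid=<new-systemvm-iso-vdi>
xe vbd-eject uuid=<cd-vbd-uuid>                                  # eject again after it has been read
```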
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem:
When the migrateVMwithVolumes API is invoked on a VM with two volumes to migrate it to a different host while migrating only one of the volumes, CloudStack migrates both volumes but marks only one of them as migrated. This makes a volume inaccessible due to an inconsistency between the volume's path in CloudStack and in vSphere.
Solution:
Set the target datastore in the relocate spec properly for each volume.
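As an illustration only (not the actual CloudStack code), a minimal vim25-style sketch of giving each disk its own target datastore in the relocate spec; the variable names and managed object references are hypothetical, and exact getter/setter names depend on the vim25 bindings in use:
```
// Hypothetical sketch: each disk to be moved gets an explicit target datastore
// instead of all disks inheriting a single default one.
VirtualMachineRelocateSpec relocateSpec = new VirtualMachineRelocateSpec();
relocateSpec.setDatastore(defaultDatastoreMor);            // default for disks not listed below

VirtualMachineRelocateSpecDiskLocator diskLocator = new VirtualMachineRelocateSpecDiskLocator();
diskLocator.setDiskId(volumeDeviceKey);                    // device key of the volume being moved
diskLocator.setDatastore(targetDatastoreMor);              // datastore chosen for this volume
relocateSpec.getDisk().add(diskLocator);
```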