(1) add support to create/delete/revert vm snapshots on running vms with QCOW2 format
(2) add new API to create volume snapshot from vm snapshot
(3) delete metadata of vm snapshots before stopping/migrating and recover vm snapshots after starting/migrating
(4) enable deletion of a VM snapshot when the VM is stopped or when the snapshot is not listed in the QCOW2 image
(5) enable smoke tests for VM snapshots on KVM
While remotely executing commands/scripts through SSH on the VR, ACS uses the system VM keyfile.
ACS fetches this key file using the VMwareContext object, which encapsulates a vCenter connection handle.
This is inefficient because the dependency on getServiceContext() implies a vCenter connection
handle, which is not required just to fetch a file from the management server's namespace.
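A minimal sketch of the direction of this fix, reading the keyfile straight from the management server's filesystem; the path and class name below are assumptions for illustration, not the actual ACS code:
```java
// Hypothetical sketch: fetch the system VM keyfile from the local
// filesystem instead of acquiring a vCenter connection handle first.
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class SystemVmKeyFileReader {
    // Assumed keyfile location on the management server (illustrative only).
    private static final String KEY_PATH = "/var/cloudstack/management/.ssh/id_rsa";

    public static byte[] readKeyFile() throws IOException {
        File key = new File(KEY_PATH);
        if (!key.exists()) {
            throw new IOException("System VM keyfile not found at " + KEY_PATH);
        }
        return Files.readAllBytes(key.toPath());
    }
}
```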
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
Issue
=====
While listing datacenters associated with a zone, only zone Id validation is required.
There is no need for additional checks, such as whether the zone is a legacy zone.
Fix
===
Removed the unnecessary checks on the zone ID; now we only check whether a zone with the specified ID exists.
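A minimal sketch of the simplified validation; the DAO and exception names are CloudStack-style names used here illustratively:
```java
// Only validate that the zone exists; no legacy-zone check is needed
// just to list the datacenters associated with it.
private void validateZone(long zoneId) {
    DataCenterVO zone = dataCenterDao.findById(zoneId); // assumed DAO accessor
    if (zone == null) {
        throw new InvalidParameterValueException("Unable to find a zone with ID " + zoneId);
    }
}
```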
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
CLOUDSTACK-9456: Migrate master to Spring 4.x
This change makes CloudStack use Spring 4:
```
- Bump spring-framework version to 4.x and Jetty to a version that runs with JDK7
- Bump servlet dependency version
- Migrate various XMLs to use version-independent schema URIs
```
Outstanding issue:
- Testing of various non-standard plugins such as network and storage plugins etc.
Since this is a big change, pinging for review -- @jburwell @karuturi @wido @murali-reddy @abhinandanprateek @DaanHoogland @GaborApatiNagy @JayapalUradi @kishankavala @K0zka @nvazquez @rafaelweingartner @pyr and others
@blueorangutan package
* pr/1638:
CLOUDSTACK-9456: Update Spring version in maven poms
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
XenServer 7 Support
This PR adds support for XenServer 7. I have manually performed the following tests:
- Create a new cluster with XenServer7
- Add Primary storage: Should create an SR on XS7
- Add another XS7 host to the Pool
- Add host2 to CloudStack
- Create VM1 from template
- Create VM2 from template
- Ping/SSH VM1 to VM2 and vice-versa
- Stop/Delete/Expunge VM2
- Create Data disk
- Attach it to VM1
- Create VM snapshot of VM1
- Restore VM snapshot of VM1
- Delete VM snapshot of VM1
- Create Volume snapshot of Datadisk
- Create volume snapshot of Root disk
- Create new template from snapshot of root disk
- Create volume from snapshot of datadisk
- Detach datadisk volume
- Delete datadisk volume
- Acquire a public IP
- Create a static nat to VM1
- Live migrate VM1 while there is traffic on the VM
- Delete VM1
* pr/1711:
[CLOUDSTACK-9662] Add support for XenServer 7
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Issue
====
Exception occurred while creating the CPVM in a VMware setup using standard vSwitches.
StartCommand failed due to Exception: com.vmware.vim25.AlreadyExists
message: [] com.vmware.vim25.AlreadyExistsFaultMsg: The specified key, name, or identifier already exists
Fix
===
Ensure synchronization while attempting to create a port group, so that simultaneous attempts are not made with the same port group name on the same ESXi host.
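A minimal sketch of the serialization described above; the helper methods are illustrative assumptions, not the actual CloudStack VMware-resource code:
```java
// Serialize port group creation per (host, name) pair so two threads can
// never race to create the same port group on the same ESXi host.
private void createPortGroupIfAbsent(HostMO host, String portGroupName) throws Exception {
    // intern() yields a canonical lock object per unique key string
    String lockKey = (host.getHostName() + "/" + portGroupName).intern();
    synchronized (lockKey) {
        if (!hasPortGroup(host, portGroupName)) {   // assumed helper
            createPortGroup(host, portGroupName);   // assumed helper
        }
    }
}
```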
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
- Bump spring-framework version to 4.x and Jetty to a version that runs with JDK8
- Bump servlet dependency version
- Migrate Spring XMLs to version 4; fixes schema locations in various XMLs that
are 3.0-dependent.
- Fix failing tests due to spring upgrade
(Thanks @marcaurele Marc-Aurèle Brothier for fixing them)
* Fix test DeploymentPlanningManagerImplTest
* Fix GloboDNS test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
-when processing a static nat rule, add a mangle table rule to mark the traffic
from the guest vm when it has an associated static nat rule, so that traffic gets
routed using the route table of the device which has the public ip associated
-fix the case where nic_device_id is empty when the ip is getting disassociated,
resulting in an empty deviceid in ips.json
-add utility methods in CsRule and CsRoute to add 'ip rule' and 'ip route' rules respectively
-ensure traffic from all public interfaces is connection-marked with the device number, and restored
for the reverse traffic; use the connection mark number to do a device-specific routing table lookup,
and fill the device-specific routing table with a default route
-component tests for testing multiple public interfaces of the VR
Cloudstack 9586: When using local storage with Xenserver prepareTemplate does not work with multiple primary stores
The race condition happens whenever there are multiple primary storages and CS tries to mount the secondary store on the XenServer host simultaneously.
Due to the synchronised block, one mount succeeds and the other thread gets the already-mounted SR. Without the fix, the two threads try to mount it in parallel and one fails on XenServer.
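A minimal sketch of the synchronized mount described above; the helper method names are hypothetical, not CloudStack's actual XenServer code:
```java
// Only one thread mounts the secondary store; later threads find and
// reuse the existing SR instead of racing to mount it in parallel.
private final Object mountLock = new Object();

SR getOrMountSecondaryStorage(Connection conn, String secondaryStorageUrl) {
    synchronized (mountLock) {
        SR existing = findSecondaryStorageSr(conn, secondaryStorageUrl); // assumed helper
        if (existing != null) {
            return existing;
        }
        return mountSecondaryStorageSr(conn, secondaryStorageUrl);       // assumed helper
    }
}
```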
* pr/1765:
Cloudstack 9586: When using local storage with Xenserver prepareTemplate does not work with multiple primary stores
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool
In a recent management server crash, it was found that the largest contributor
to the memory leak was VmwareContextPool, where a registry (an ArrayList) is held
that grows indefinitely. The list itself is not used or consumed anywhere. There
exists a HashMap (pool) that returns a list of contexts for an existing poolkey
(address/username), which is used instead.
This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.
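A minimal sketch of the bounded pool described above; the class shape and the max-size constant are illustrative assumptions, not the actual VmwareContextPool code:
```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

public class BoundedContextPool<T> {
    private static final int MAX_CONTEXTS_PER_KEY = 10; // assumed limit

    // poolkey (address/username) -> idle contexts for that endpoint
    private final Map<String, LinkedList<T>> pool = new HashMap<>();

    public synchronized void returnContext(String poolKey, T context) {
        LinkedList<T> contexts = pool.computeIfAbsent(poolKey, k -> new LinkedList<>());
        contexts.addLast(context);
        // Cap the list so idle contexts cannot accumulate indefinitely.
        while (contexts.size() > MAX_CONTEXTS_PER_KEY) {
            contexts.removeFirst(); // oldest context dropped (caller closes it)
        }
    }

    public synchronized T borrowContext(String poolKey) {
        LinkedList<T> contexts = pool.get(poolKey);
        return (contexts == null || contexts.isEmpty()) ? null : contexts.removeLast();
    }
}
```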
@blueorangutan package
* pr/1729:
CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9566 instance-id metadata for baremetal VM returns ID
There is a difference in instance-id metadata between baremetal and other hypervisors.
On Baremetal
[root@ip-172-17-0-144 ~]# curl http://8.37.203.221/latest/meta-data/instance-id
6021
On Xen
[root@ip-172-17-2-103 ~]# curl http://172.17.0.252/latest/meta-data/instance-id
cbeb517a-e833-4a0c-b1e8-9ed70200fbbf
In both cases it should be the VM's UUID.
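A minimal sketch of the intended behaviour; the accessor names are illustrative, not the actual baremetal metadata code:
```java
// Always serve the UUID for instance-id, consistent across hypervisors.
private String getInstanceIdMetadata(VMInstanceVO vm) {
    // before (baremetal): String.valueOf(vm.getId()) -> e.g. "6021"
    // after: the UUID, e.g. cbeb517a-e833-4a0c-b1e8-9ed70200fbbf
    return vm.getUuid();
}
```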
* pr/1738:
CLOUDSTACK-9566 instance-id metadata for baremetal VM returns ID
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This commit adds an additional VirtIO channel with the name
'org.qemu.guest_agent.0' to all Instances.
With the Qemu Guest Agent the Hypervisor gains more control over the Instance if
these tools are present inside the Instance, for example:
* Power control
* Flushing filesystems
* Fetching Network information
In the future this should allow safer snapshots on KVM since we can instruct the
Instance to flush the filesystems prior to snapshotting the disk.
More information: http://wiki.qemu.org/Features/QAPI/GuestAgent
Keep in mind that on Ubuntu AppArmor still needs to be disabled since the default
AppArmor profile doesn't allow libvirt to write into /var/lib/libvirt/qemu
This commit does not add any communication methods through API-calls, it merely
adds the channel to the Instances and installs the Guest Agent in the SSVMs.
With the addition of the Qemu Guest Agent channel a second channel appears in /dev
on a SSVM as a VirtIO port.
The order in which the ports are defined in the XML matters for the naming inside
the SSVM. By not relying on /dev/vportXX but instead looking for a static name, the
SSVM still boots properly if the order in the XML definition is changed.
An SSVM with both ports attached will have something like this:
root@v-215-VM:~# ls -l /dev/virtio-ports
total 0
lrwxrwxrwx 1 root root 11 May 13 21:41 org.qemu.guest_agent.0 -> ../vport0p2
lrwxrwxrwx 1 root root 11 May 13 21:41 v-215-VM.vport -> ../vport0p1
root@v-215-VM:~# ls -l /dev/vport*
crw------- 1 root root 251, 1 May 13 21:41 /dev/vport0p1
crw------- 1 root root 251, 2 May 13 21:41 /dev/vport0p2
root@v-215-VM:~#
In this case the SSVM port points to /dev/vport0p1, but if the order in the XML
is different it might point to /dev/vport0p2
By looking for a portname with a pre-defined pattern in /dev/virtio-ports we
do not rely on the order in the XML definition.
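A minimal sketch of that name-based lookup; the suffix and paths mirror the example above, but this is not the actual SSVM boot code:
```java
import java.io.File;

// Resolve the SSVM port by its static name under /dev/virtio-ports; the
// symlink points to the right /dev/vportXpY regardless of the order in
// which the channels appear in the libvirt XML.
public static File findSsvmPort(String vmName) {
    File port = new File("/dev/virtio-ports", vmName + ".vport"); // e.g. v-215-VM.vport
    return port.exists() ? port : null;
}
```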
Signed-off-by: Wido den Hollander <wido@widodh.nl>
CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor
## Introduction
[JIRA TICKET](https://issues.apache.org/jira/browse/CLOUDSTACK-9379)
It is desired to support nested virtualization at the VM level for the VMware hypervisor. The current behaviour supports enabling/disabling nested virtualization globally by modifying the global config `'vmware.nested.virtualization'`. The goal is to improve this feature by providing control at the VM level instead of only a global control.
A new global configuration is added, to enable/disable VM nested virtualization control: `'vmware.nested.virtualization.perVM'`. Default value=false
After a vm deployment or start command, vm params include `'nestedVirtualizationFlag'` key and its value is:
- true -> nested virtualization enabled
- false -> nested virtualization disabled
**We determine whether nested virtualization is enabled or disabled by examining these 3 values:**
- **(1)** global configuration `'vmware.nested.virtualization'` value
- **(2)** global configuration `'vmware.nested.virtualization.perVM'` value
- **(3)** `'nestedVirtualizationFlag'` value in `user_vm_details` if present, `null` if not.
Using these 3 values, there are different use cases (see the sketch after this list):
- **(1)** = TRUE, **(2)** = TRUE, **(3)** is null -> _ENABLED_
- **(1)** = TRUE, **(2)** = TRUE, **(3)** = TRUE -> _ENABLED_
- **(1)** = TRUE, **(2)** = TRUE, **(3)** = FALSE -> _DISABLED_
- **(1)** = TRUE, **(2)** = FALSE, **(3)** indifferent -> _ENABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** is null -> _DISABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** = TRUE -> _ENABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** = FALSE -> _DISABLED_
- **(1)** = FALSE, **(2)** = FALSE, **(3)** indifferent -> _DISABLED_
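A minimal sketch of the decision table above; the method and parameter names are illustrative, not the actual implementation:
```java
// (1) globalNv: 'vmware.nested.virtualization'
// (2) globalNvPerVm: 'vmware.nested.virtualization.perVM'
// (3) vmNvFlag: 'nestedVirtualizationFlag' from user_vm_details, null if absent
static boolean isNestedVirtualizationEnabled(boolean globalNv, boolean globalNvPerVm, Boolean vmNvFlag) {
    if (globalNvPerVm) {
        // Per-VM control enabled: the VM detail wins when present,
        // otherwise fall back to the global flag.
        return vmNvFlag != null ? vmNvFlag : globalNv;
    }
    // Per-VM control disabled: only the global flag matters.
    return globalNv;
}
```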
* pr/1542:
CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9491: incorrect parsing of device list to find ethernet index of plugged NIC
In VmwareResource, the findRouterEthDeviceIndex() method finds the ethernet interface index for a given
MAC address. This method is used, once a NIC is plugged, to determine its ethernet interface.
It reads "/proc/sys/net/ipv4/conf" from the VR and loops through the devices to find the right
ethernet interface. However, the current logic reads it once and then loops through the device list.
It is observed that a device may not show up in '/proc/sys/net/ipv4/conf' immediately once a NIC is
plugged into the VM from vCenter.
The fix ensures that, while waiting for 15 sec in the loop, the latest content is read from
/proc/sys/net/ipv4/conf, so that the right device list is processed.
Manually tested VPC scenarios of adding new tiers, which use findRouterEthDeviceIndex to find the guest/public network ethernet index.
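A minimal sketch of the retry loop described above; sshExec and the exact wait budget are illustrative assumptions, not the actual VmwareResource code:
```java
// Re-read the device list on every iteration: a freshly plugged NIC may
// not appear in /proc/sys/net/ipv4/conf immediately.
int findEthDeviceIndex(String macAddress) throws Exception {
    long deadline = System.currentTimeMillis() + 15_000L; // 15 sec budget
    while (System.currentTimeMillis() < deadline) {
        String[] devices = sshExec("ls /proc/sys/net/ipv4/conf").split("\\s+"); // assumed helper
        for (String dev : devices) {
            String mac = sshExec("cat /sys/class/net/" + dev + "/address").trim();
            if (macAddress.equalsIgnoreCase(mac)) {
                return Integer.parseInt(dev.replaceAll("\\D", "")); // e.g. "eth2" -> 2
            }
        }
        Thread.sleep(1000);
    }
    return -1; // not found within the wait budget
}
```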
* pr/1681:
CLOUDSTACK-9491: incorrect parsing of device list to find ethernet index of plugged NIC
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9503: Increased the VR script timeout. Most of the changes are about converting int/long time values to joda Duration.
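For illustration, a minimal example of the int/long-to-Duration conversion pattern this commit applies (variable names assumed):
```java
import org.joda.time.Duration;

// before: long vrScriptTimeout = 600000; // raw millis, unit easy to misread
Duration vrScriptTimeout = Duration.standardMinutes(10); // unit is explicit
long timeoutMillis = vrScriptTimeout.getMillis();        // interop with long-based APIs
```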
* pr/1745:
CLOUDSTACK-9503: Increased the VR script timeout. Most of the changes are about converting int/long time values to joda Duration.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9502: DS template copies don't get deleted in VMware ESXi with multiple clusters and zone wide storage (include CLOUDSTACK-9386 into 4.9 release branch)
Include #1560 into 4.9 release branch
* pr/1676:
CLOUDSTACK-9502: DS template copies don’t get deleted in VMware ESXi with multiple clusters and zone wide storage
Signed-off-by: John Burwell <meaux@cockamamy.net>
* 4.9:
CLOUDSTACK-8830: Fix for VM snapshots in VMware: could not create a VM snapshot until 12 minutes after VM creation, due to vCenter sending a null name on the snapshot recent task
Made the changes to improve logging.
CLOUDSTACK-9465: Several log refactoring/improvement suggestions.
There are two scenarios of logging which need refactoring/improvement:
Method invocation replaced by variable
This means that in the logging code, the result of the method invocation is already stored in a variable; for simplicity, the method invocation should be replaced by the variable.
Delete variable which must be null
The variable in the logging code must be null at that point, so there is no need to include it.
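Illustrative before/after for the two scenarios above; the types, DAO, and logger are assumptions, not the commit's actual code:
```java
// Scenario 1: method invocation replaced by variable.
VMInstanceVO vm = vmDao.findById(vmId);
// before: s_logger.debug("Found VM " + vmDao.findById(vmId));
s_logger.debug("Found VM " + vm); // after: reuse the existing variable

// Scenario 2: delete a variable which must be null.
if (host == null) {
    // before: s_logger.warn("No host found for VM " + vm + ": " + host);
    s_logger.warn("No host found for VM " + vm); // after: drop the always-null variable
}
```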
* pr/1705:
Made the changes to improve logging.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR adds the ability to pass a new parameter, locationType,
to the “createSnapshot” API command. Depending on the locationType,
we decide where the snapshot should go in case of managed storage.
There are two possible values for the locationType param:
1) `Standard`: The standard operation for managed storage is to
keep the snapshot on the device. For non-managed storage, this will
be to upload it to secondary storage. This option will be the
default.
2) `Archive`: Applicable only to managed storage. This will
keep the snapshot on the secondary storage. For non-managed
storage, this will result in an error.
The reason for implementing this feature is to avoid a single
point of failure for primary storage. Right now in case of managed
storage, if the primary storage goes down, there is no easy way
to recover data as all snapshots are also stored on the primary.
This feature allows us to mitigate that risk.
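A minimal sketch of the locationType handling described above; the enum and method names are illustrative assumptions, not the actual API code:
```java
enum LocationType { STANDARD, ARCHIVE }

void validateLocationType(LocationType locationType, boolean managedStorage) {
    if (locationType == LocationType.ARCHIVE && !managedStorage) {
        // Archive only applies to managed storage; reject otherwise.
        throw new IllegalArgumentException("locationType=Archive is only supported for managed storage");
    }
    // STANDARD (the default): managed storage keeps the snapshot on the
    // device; non-managed storage uploads it to secondary storage.
}
```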