Test scenarios:
- Enable cluster HA after the VR is created. Then stop and start the VR and check its restart priority; it should be High.
- Enable cluster HA before the VR is created. Then create a VM and verify that the VR created has High restart priority.
CLOUDSTACK-9720: [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.
Updated the template_spool_ref table with the correct template (VMware - OVA file) size.
* pr/1880:
CLOUDSTACK-9720: [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Summary: CLOUDSTACK-8921
snapshot_store_ref table should store the actual size of the backed-up snapshot in secondary storage
Calling SR scan to make sure the size is updated correctly
Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying VM. free memory = total memory - (sum of all VM memory)
With memory over-provisioning set to 1, when the management server starts VMs in parallel on one host, the memory allocated on that KVM host can exceed the host's actual physical memory.
Fixed by checking free memory on the host before starting a VM.
Added a test case to check memory usage on the host.
Verified VM deployment on a host with enough capacity and on one without.
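A minimal sketch of the check, under the assumption that a host's allocated memory is simply the sum of the memory of all VMs placed on it; the names here are illustrative, not CloudStack's capacity-manager API:
```
import java.util.List;

// Sketch: reject a deployment when the host cannot hold the new VM.
public class MemoryCapacityCheck {

    // free memory = total memory - (sum of memory of all VMs on the host)
    static long freeMemory(long totalHostMemory, List<Long> vmMemorySizes) {
        long allocated = vmMemorySizes.stream().mapToLong(Long::longValue).sum();
        return totalHostMemory - allocated;
    }

    static boolean canDeploy(long totalHostMemory, List<Long> vmMemorySizes, long requestedMemory) {
        return freeMemory(totalHostMemory, vmMemorySizes) >= requestedMemory;
    }
}
```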
* pr/847:
Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying VM. free memory = total memory - (sum of all VM memory)
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM
Updated the hardcoded value with the max data volumes limit from hypervisor capabilities.
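A hedged sketch of the fix; the method and names below are stand-ins for the hypervisor-capabilities lookup described above, not the actual attach-volume code:
```
// Illustrative: the attach path consults the per-hypervisor limit instead of
// the former hardcoded value of 14.
public class DataVolumeLimitCheck {

    static void checkDataVolumeLimit(int attachedDataVolumes, int maxDataVolumesFromCapabilities) {
        if (attachedDataVolumes >= maxDataVolumesFromCapabilities) {
            throw new IllegalStateException("The VM already has the maximum number of data disks ("
                    + maxDataVolumesFromCapabilities + ") attached");
        }
    }
}
```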
* pr/1953:
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug configurable
Jira
===
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug configurable
Description
=========
Currently ACS waits 15 seconds (hard-coded) for a hot-plugged NIC in a VR running on VMware to be detected by the guest OS.
The time taken to detect a hot-plugged NIC in the guest OS depends on the type of VMware NIC adapter (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable helps admins in such scenarios. This is specific to VRs running on the VMware hypervisor.
Also, if VMware introduces new NIC adapter types in the future that take longer to be detected by the guest OS, it is good to have the flexibility
of configuring the wait timeout as a fallback mechanism.
Fix
===
Introduce a new configuration parameter (via ConfigKey), "vmware.nic.hotplug.wait.timeout": "Wait timeout (milli seconds) for hot plugged NIC of VM to be detected by guest OS." It replaces the hard-coded timeout and gives admins flexibility in the scenarios listed above.
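A sketch of how the parameter could be declared, assuming ConfigKey's common (category, type, name, default value, description, isDynamic) constructor; the exact declaration in the VMware plugin may differ:
```
import org.apache.cloudstack.framework.config.ConfigKey;

public class VmwareNicHotplugConfig {
    // Default of 15000 ms preserves the previous hard-coded behaviour.
    public static final ConfigKey<Integer> VmwareNicHotplugWaitTimeout = new ConfigKey<Integer>(
            "Advanced", Integer.class, "vmware.nic.hotplug.wait.timeout", "15000",
            "Wait timeout (milli seconds) for hot plugged NIC of VM to be detected by guest OS.",
            true);
}
```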
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
* pr/1861:
CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug configurable
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
moved logrotate from cron.daily to cron.hourly for vpcrouter in cloud-early-config
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
[4.9] CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
The router.aggregation.command.each.timeout global configuration setting is only applied to newly created KVM hosts.
For existing KVM hosts, changing the value has no effect.
We need to propagate the configuration to existing hosts when cloudstack-agent connects.
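A purely illustrative sketch of the propagation idea; `AgentConnection` and `sendParams` are hypothetical stand-ins for the real agent-manager plumbing:
```
import java.util.HashMap;
import java.util.Map;

// Hypothetical: push the current global value to an agent when it (re)connects,
// so existing KVM hosts pick up changes without being re-added.
interface AgentConnection {
    void sendParams(Map<String, String> params);
}

class RouterAggregationTimeoutPropagator {
    void onAgentConnect(AgentConnection agent, String configuredTimeoutSeconds) {
        Map<String, String> params = new HashMap<>();
        params.put("router.aggregation.command.each.timeout", configuredTimeoutSeconds);
        agent.sendParams(params); // agent keeps the value for later aggregation commands
    }
}
```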
* pr/1856:
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This adds support for virtio-scsi on KVM hosts, either
for guests that are associated with the new os_type 'Other PV Virtio-SCSI (64-bit)',
or when a VM or template is registered with the detail parameter rootDiskController=scsi.
Update cloudstack add template dialog to allow for selecting rootDiskController with KVM
Update cloudstack kvm virtio-scsi to enable discard=unmap
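A sketch of the selection logic implied above; `DiskBus` and `resolveDiskBus` are illustrative names, not the KVM plugin's actual API:
```
import java.util.Map;

class DiskBusResolver {
    enum DiskBus { VIRTIO, SCSI }

    static DiskBus resolveDiskBus(Map<String, String> vmDetails, String osTypeName) {
        if ("scsi".equalsIgnoreCase(vmDetails.get("rootDiskController"))) {
            return DiskBus.SCSI; // virtio-scsi; also enables discard=unmap
        }
        if (osTypeName != null && osTypeName.startsWith("Other PV Virtio-SCSI")) {
            return DiskBus.SCSI;
        }
        return DiskBus.VIRTIO; // default bus unchanged
    }
}
```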
Currently ACS waits 15 seconds (hard-coded) for a hot-plugged NIC in the VR to be detected by the guest OS.
The time taken to detect a hot-plugged NIC in the guest OS depends on the type of NIC adapter (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable helps admins in such scenarios.
Also, if VMware introduces new NIC adapter types in the future that take longer to be detected by the guest OS, it is good to have the flexibility
of configuring the wait timeout as a fallback mechanism.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
* 4.9:
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
CLOUDSTACK-9788: Fix exception listNetworks with pagesize=0
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Fix HVM VM restart bug in XenServer
CLOUDSTACK-9363: Fix HVM VM restart bug in XenServer
Here is the longer description of the problem:
By default XenServer limits HVM guests to only 4 disks. Two of those are reserved for the ROOT disk (deviceId=0) and the CD-ROM (deviceId=3), which means that we can only attach 2 data disks. This limit is removed, however, once XenTools is installed on the guest. The information that a guest has XenTools installed and can handle more than 4 disks is stored in the VM metadata on XenServer. When a VM is shut down, CloudStack removes the VM and all the metadata associated with it from XenServer. Now, when you start the VM again, even if it has XenTools installed, it defaults to only 4 attachable disks.
This problem manifests itself when you have an HVM VM and you stop and start it with more than 2 data disks attached. The VM fails to start, and the only way to start it is to detach the extra disks and reattach them after the VM starts.
In this fix, I am removing the check which is done before creating a `VBD` and which enforces this limit. This will not affect the current workflow and will fix the HVM issue.
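A self-contained sketch of the guard being removed; the names are illustrative, not the CitrixResourceBase code:
```
// Before the fix: VBD creation was rejected past the 4-device HVM limit, even
// though XenServer lifts that limit once XenTools is installed. Because the VM
// metadata is deleted on stop, the tools flag was lost and valid attaches were
// blocked. The fix deletes this check entirely.
class HvmDeviceLimitCheck {
    static void checkVbdSlot(boolean isHvm, boolean xenToolsInstalled, int deviceId) {
        final int HVM_LIMIT_WITHOUT_TOOLS = 4; // deviceId 0 = ROOT, 3 = CD-ROM
        if (isHvm && !xenToolsInstalled && deviceId >= HVM_LIMIT_WITHOUT_TOOLS) {
            throw new IllegalStateException("Device " + deviceId + " exceeds the HVM disk limit");
        }
    }
}
```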
@koushik-das this is related to the "autodetect" feature that you introduced a while back (https://issues.apache.org/jira/browse/CLOUDSTACK-8826). I would love your review on this fix.
* pr/1829:
Fix HVM VM restart bug in XenServer
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Reverting VM to disk only snapshot in Xenserver corrupts VM
Stale NFS secondary storage on XS leads to volume creation failure from snapshot
[4.10] CLOUDSTACK-8746: VM Snapshotting implementation for KVM
* pr/977:
Fixes for testing VM Snapshots on KVM. Related to PR 977
CLOUDSTACK-8746: vm snapshot implementation for KVM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9359: IPv6 for Basic Networking
This PR is a proposal for adding very basic IPv6 to Basic Networking. The main goal of this PR is that the API returns a valid IPv6 address over which the Instance is reachable.
The GUI will show the IPv6 address after deployment of the Instance.

If the vlan table has a proper IPv6 CIDR configured, the DirectPodBasedNetworkGuru will calculate the IPv6 address the Instance will obtain using EUI-64 and SLAAC: https://tools.ietf.org/search/rfc4862
In this case the _vlan_ table contained:
<pre>mysql> select * from vlan \G
*************************** 1. row ***************************
id: 1
uuid: 90e0716c-5261-4992-bb9d-0afd3006f476
vlan_id: vlan://untagged
vlan_gateway: 172.16.0.1
vlan_netmask: 255.255.255.0
description: 172.16.0.10-172.16.0.250
vlan_type: DirectAttached
data_center_id: 1
network_id: 204
physical_network_id: 200
ip6_gateway: 2001:980:7936:112::1
ip6_cidr: 2001:980:7936:112::/64
ip6_range: NULL
removed: NULL
created: 2016-07-19 20:39:41
1 row in set (0.00 sec)
mysql></pre>
It will then log:
<pre>2016-10-04 11:42:44,998 DEBUG [c.c.n.g.DirectPodBasedNetworkGuru] (Work-Job-Executor-1:ctx-1975ec54 job-186/job-187 ctx-0d967d88) (logid:275c4961) Found IPv6 CIDR 2001:980:7936:112::/64 for VLAN 1
2016-10-04 11:42:45,009 INFO [c.c.n.g.DirectPodBasedNetworkGuru] (Work-Job-Executor-1:ctx-1975ec54 job-186/job-187 ctx-0d967d88) (logid:275c4961) Calculated IPv6 address 2001:980:7936:112:4ba:80ff:fe00:e9 using EUI-64 for NIC 6a05deab-b5d9-4116-80da-c94b48333e5e</pre>
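The EUI-64 derivation itself is mechanical. Below is a self-contained sketch (not the DirectPodBasedNetworkGuru code) of how the /64 prefix plus the NIC's 48-bit MAC yields the address; the MAC used here is inferred from the logged address:
```
import java.net.InetAddress;
import java.net.UnknownHostException;

// EUI-64 (RFC 4291/4862): split the 48-bit MAC, insert 0xFFFE in the middle,
// flip the universal/local bit, and append the result to the /64 prefix.
public class Eui64 {
    static InetAddress fromMac(byte[] prefix, byte[] mac) throws UnknownHostException {
        byte[] addr = prefix.clone();           // 16 bytes, low 64 bits zero
        addr[8]  = (byte) (mac[0] ^ 0x02);      // flip universal/local bit
        addr[9]  = mac[1];
        addr[10] = mac[2];
        addr[11] = (byte) 0xFF;                 // inserted FFFE
        addr[12] = (byte) 0xFE;
        addr[13] = mac[3];
        addr[14] = mac[4];
        addr[15] = mac[5];
        return InetAddress.getByAddress(addr);
    }

    public static void main(String[] args) throws Exception {
        byte[] prefix = InetAddress.getByName("2001:980:7936:112::").getAddress();
        byte[] mac = {0x06, (byte) 0xba, (byte) 0x80, 0x00, 0x00, (byte) 0xe9};
        // Prints 2001:980:7936:112:4ba:80ff:fe00:e9, matching the log line above.
        System.out.println(fromMac(prefix, mac).getHostAddress());
    }
}
```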
The template has to be configured accordingly:
- No IPv6 Privacy Extensions
- Use SLAAC
- Follow RFC4862
This is also described in: https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+in+Basic+Networking
The next steps after this will be:
- Security Grouping to prevent IPv6 Address Spoofing
- Security Grouping to filter ICMP, UDP and TCP traffic
* pr/1700:
CLOUDSTACK-676: IPv6 In- and Egress filtering for Basic Networking
CLOUDSTACK-676: IPv6 Basic Security Grouping for KVM
CLOUDSTACK-9359: IPv6 for Basic Networking with KVM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9619: Updates for SAN-assisted snapshots
This PR is to address a few issues in #1600 (which was recently merged to master for 4.10).
In StorageSystemDataMotionStrategy.performCopyOfVdi we call getSnapshotDetails. In one such scenario, the source snapshot in question comes from secondary storage (when we are creating a new volume on managed storage from a snapshot of ours that's on secondary storage).
This usually worked in the regression tests due to a bit of "luck": We retrieve the ID of the snapshot (which is on secondary storage) and then try to pull out its StorageVO object (which is for primary storage). If you happen to have a primary storage that matches the ID (which is the ID of a secondary storage), then getSnapshotDetails populates its Map<String, String> with inapplicable data (that is later ignored) and you don't easily see a problem. However, if you don't have a primary storage that matches that ID (which I didn't today because I had removed that primary storage), then a NullPointerException is thrown.
I have fixed that issue by skipping getSnapshotDetails if the source is coming from secondary storage.
While fixing that, I noticed a couple more problems:
1) We can invoke grantAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
2) We can invoke revokeAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
I have corrected those issues as well.
I then came across one more problem:
When using a SAN snapshot and copying it to secondary storage, or creating a new managed-storage volume from a snapshot of ours on secondary storage, we attach to the SR in the XenServer code but detach from it in the StorageSystemDataMotionStrategy code (by sending a message to the XenServer code to perform an SR detach). Since we know to detach from the SR after the copy is done, we should detach from the SR in the XenServer code (without that code having to be explicitly called from outside of the XenServer logic).
I went ahead and changed that as well.
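A hedged sketch of the first fix; `StoreRole` and the lookup function are stand-ins for the real snapshot and store types:
```
import java.util.Collections;
import java.util.Map;
import java.util.function.LongFunction;

// Illustrative: only consult primary-storage snapshot details when the source
// snapshot actually lives on primary storage.
class SnapshotAccessHelper {
    enum StoreRole { PRIMARY, IMAGE /* secondary storage */ }

    static Map<String, String> snapshotDetailsIfPrimary(StoreRole role, long snapshotId,
            LongFunction<Map<String, String>> detailsLookup) {
        if (role != StoreRole.PRIMARY) {
            // Secondary-storage IDs must not be used to look up a primary-storage
            // StorageVO: that mismatch caused the bogus data / NullPointerException.
            return Collections.emptyMap();
        }
        return detailsLookup.apply(snapshotId);
    }
}
```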
JIRA Ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-9619
* pr/1749:
CLOUDSTACK-9619: Updates for SAN-assisted snapshots
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This commit implements basic Security Grouping for KVM in
Basic Networking.
It does not implement full Security Grouping yet, but it does:
- Prevent IP-Address source spoofing
- Allow DHCPv6 clients, but disallow DHCPv6 servers
- Disallow Instances to send out Router Advertisements
The Security Grouping allows ICMPv6 packets as described by RFC4890
as they are essential for IPv6 connectivity.
Following RFC4890 it allows:
- Router Solicitations
- Router Advertisements (incoming only)
- Neighbor Advertisements
- Neighbor Solicitations
- Packet Too Big
- Time Exceeded
- Destination Unreachable
- Parameter Problem
- Echo Request
ICMPv6 is an essential part of IPv6; without it, connectivity will break or be very
unreliable.
For now it allows any UDP and TCP packet to be sent in to the Instance, which
effectively opens up the firewall completely.
Future commits will implement Security Grouping further, allowing control of UDP and TCP
ports for IPv6 as can be done with IPv4.
Regardless of the egress filtering (which can't be done yet), it will always allow outbound DNS
to port 53 over UDP or TCP.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
(1) add support to create/delete/revert VM snapshots on running VMs with QCOW2 format
(2) add a new API to create a volume snapshot from a VM snapshot
(3) delete the metadata of VM snapshots before stopping/migrating, and recover VM snapshots after starting/migrating
(4) enable deleting a VM snapshot on a stopped VM, or when the VM snapshot is not listed in the qcow2 image
(5) enable smoke tests for VM snapshots on KVM
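For point (1), a minimal sketch using the libvirt Java bindings of an internal (in-qcow2) snapshot of a running domain; the domain name and snapshot XML are illustrative, and the real agent code also handles memory state and error paths:
```
import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.DomainSnapshot;

public class VmSnapshotSketch {
    public static void main(String[] args) throws Exception {
        Connect conn = new Connect("qemu:///system");
        Domain dom = conn.domainLookupByName("i-2-10-VM"); // example instance name
        // Internal snapshot: state is stored inside the qcow2 image itself.
        String snapshotXml = "<domainsnapshot><name>vmsnap-1</name></domainsnapshot>";
        DomainSnapshot snap = dom.snapshotCreateXML(snapshotXml);
        System.out.println("snapshot created: " + (snap != null));
        dom.free();
        conn.close();
    }
}
```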
While remotely executing commands/scripts through SSH on the VR, ACS uses the system VM keyfile.
ACS fetches this key file using the VMwareContext object, which encapsulates a vCenter connection handle.
This is inefficient because of the dependency on getServiceContext(), which implies a vCenter connection
handle that is not required just to fetch a file from the management server's namespace.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
Issue
=====
While listing datacenters associated with a zone, only zone ID validation is required.
There is no need for additional checks such as whether the zone is a legacy zone.
Fix
===
Removed unnecessary checks on the zone ID; now we only check whether a zone with the specified ID exists.
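A minimal sketch of the remaining validation, with illustrative names:
```
import java.util.function.LongFunction;

class ZoneValidation {
    // The only precondition for listing associated datacenters: the zone exists.
    static void validateZoneExists(Long zoneId, LongFunction<Object> zoneLookup) {
        if (zoneId == null || zoneLookup.apply(zoneId) == null) {
            throw new IllegalArgumentException("Unable to find a zone with ID " + zoneId);
        }
    }
}
```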
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
CLOUDSTACK-9456: Migrate master to Spring 4.x
This change makes CloudStack use Spring 4:
```
- Bump spring-framework version to 4.x and Jetty to version that runs with JDK7
- Bump servlet dependency version
- Migrates various xmls to use version independent schema uris
```
Outstanding issue:
- Testing of various non-standard plugins such as network and storage plugins etc.
Since this is a big change, pinging for review -- @jburwell @karuturi @wido @murali-reddy @abhinandanprateek @DaanHoogland @GaborApatiNagy @JayapalUradi @kishankavala @K0zka @nvazquez @rafaelweingartner @pyr and others
@blueorangutan package
* pr/1638:
CLOUDSTACK-9456: Update Spring version in maven poms
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
XenServer 7 Support
This PR adds support for XenServer 7. I have manually done the following tests:
- Create a new cluster with XenServer7
- Add Primary storage: Should create an SR on XS7
- Add another XS7 host to the Pool
- Add host2 to Cloudstack
- Create VM1 from template
- Create VM2 from template
- Ping/SSH VM1 to VM2 and vice-versa
- Stop/Delete/Expunge VM2
- Create Data disk
- Attach it to VM1
- Create VM snapshot of VM1
- Restore VM snapshot of VM1
- Delete VM snapshot of VM1
- Create Volume snapshot of Datadisk
- Create volume snapshot of Root disk
- Create new template from snapshot of root disk
- Create volume from snapshot of datadisk
- Detach datadisk volume
- Delete datadisk volume
- Acquire a public IP
- Create a static nat to VM1
- Live migrate VM1 while traffic on VM
- Delete VM1
* pr/1711:
[CLOUDSTACK-9662] Add support for XenServer 7
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Issue
====
An exception occurred while creating the CPVM in a VMware setup using standard vSwitches.
StartCommand failed due to Exception: com.vmware.vim25.AlreadyExists
message: [] com.vmware.vim25.AlreadyExistsFaultMsg: The specified key, name, or identifier already exists
Fix
===
Ensure synchronization while attempting to create a port group, so that simultaneous attempts are not made with the same port group name on the same ESXi host.
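One way to get that serialization, sketched with hypothetical names (interned-string locking per host and port group name):
```
// Illustrative: serialize port group creation per (host, port group name) so
// concurrent system-VM starts cannot race into vim25 AlreadyExists faults.
class PortGroupCreator {
    void createPortGroupIfAbsent(String hostMorValue, String portGroupName, Runnable create) {
        String lockKey = (hostMorValue + "/" + portGroupName).intern();
        synchronized (lockKey) { // one creator at a time for this host + name
            create.run();        // caller re-checks existence before creating
        }
    }
}
```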
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
- Bump spring-framework version to 4.x and Jetty to version that runs with JDK8
- Bump servlet dependency version
- Migrate spring xmls to version 4, fixes schema locations that are 3.0
dependent in various xmls.
- Fix failing tests due to spring upgrade
(Thanks @marcaurele Marc-Aurèle Brothier for fixing them)
* Fix test DeploymentPlanningManagerImplTest
* Fix GloboDNS test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- when processing a static nat rule, add a mangle table rule to mark the traffic
from the guest VM when it has an associated static nat rule, so that traffic gets
routed using the route table of the device which has the public IP associated
- fix the case where nic_device_id is empty when an IP is getting disassociated,
resulting in an empty deviceid in ips.json
- add utility methods in CsRule and CsRoute to add 'ip rule' and 'ip route' rules respectively
- ensure traffic from all public interfaces is connection-marked with the device number, and restored
for the reverse traffic; use the connection mark number to do a device-specific routing table lookup
- fill the device-specific routing table with a default route
- component tests for testing multiple public interfaces of the VR