This removes network details from the guest VM template's OVF XML before deploying
a VM; leaving them in would make deployment fail in dvSwitch-based VMware environments
that have no dummy/existing vSwitch.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This removes NIC/network-specific details when exporting the systemvmtemplate
for VMware (OVA file). Keeping these details causes the SSVMs to fail to deploy in
dvSwitch-based VMware environments that have no vSwitch portgroups (dummy etc.).
Tested this on a local Trillian env.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
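For illustration, a minimal sketch of the OVF clean-up described above, assuming a hypothetical helper rather than the actual exporter/deployment code: it drops the <NetworkSection> elements from the descriptor so deployment does not reference a portgroup that may not exist.
    // Hypothetical helper, not the actual CloudStack code: strip <NetworkSection>
    // elements from an OVF descriptor in place.
    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    public class OvfNetworkStripper {
        public static void stripNetworkSections(String ovfPath) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File(ovfPath));
            // Match NetworkSection in any namespace; iterate backwards while removing.
            NodeList sections = doc.getElementsByTagNameNS("*", "NetworkSection");
            for (int i = sections.getLength() - 1; i >= 0; i--) {
                Node section = sections.item(i);
                section.getParentNode().removeChild(section);
            }
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(new File(ovfPath)));
        }
    }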
Fix for test_snapshots.py using nfs2 instead of nfs template
Fix for marvin test failure introduced in #1847
Cc: @borisstoyanov @rhtyd @karuturi
* pr/1961:
Fix for test failure
Fix for test_snapshots.py using nfs2 instead of nfs template
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Displays a VR tab that lists the VRs for the network in the detail views of
isolated networks, shared networks and VPCs.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9811: fix duplicated nics on VR caused by nic name p<slot_number>p<port_number>
* pr/2011:
CLOUDSTACK-9811: fix duplicated nics on VR caused by nic name p<slot_number>p<port_number>
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9811: fixed an issue if the dev is not in the databag
Defend against the specified dev not being in the databag.
* pr/2003:
changed the order fix to be closer to the original code
CLOUDSTACK-9811: fixed an issue if the dev is not in the databag
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
There are some VM deployment failures when multiple VMs are deployed at a time, mainly due to NetworkModel code that iterates over all the VLANs in the pod. This causes each deployVM thread to hold the global lock on Network longer and introduces delays. The delay in turn causes more threads to choose the same host and fail, since capacity is no longer available on that host.
The following changes reduce the delays during VM deployment that cause some of these failures when multiple VMs are launched at a time.
In the planner, remove clusters that do not contain a host with a matching service offering tag. This saves iterations over clusters that have no matching tagged host (a minimal sketch of this filtering follows below).
In NetworkModel, do not query the VLANs for the pod inside the loop. Also optimize the logic that queries the IP/IPv6 addresses.
In DeploymentPlanningManagerImpl, do not process the affinity group if the plan already has a hostId provided.
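A minimal sketch of the planner-side filtering mentioned above; the names and data shapes are illustrative, not the actual DeploymentPlanner/DAO code.
    // Hypothetical sketch: keep only clusters that have at least one host carrying the
    // service offering's host tag, so the placement loop never visits the others.
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    class ClusterTagFilter {
        static List<Long> keepClustersWithTaggedHost(List<Long> clusterIds,
                Map<Long, List<String>> hostTagsByCluster, String offeringHostTag) {
            if (offeringHostTag == null) {
                return clusterIds; // offering has no host tag, nothing to filter
            }
            return clusterIds.stream()
                    .filter(id -> hostTagsByCluster
                            .getOrDefault(id, Collections.<String>emptyList())
                            .contains(offeringHostTag))
                    .collect(Collectors.toList());
        }
    }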
Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying Vm. free memory = total memory - (all vm memory)
With memory over-provisioning set to 1, when the management server starts VMs in parallel on one host, the memory allocated on that KVM host can become larger than its actual physical memory.
Fixed by checking free memory on the host before starting a VM.
Added a test case to check memory usage on the host.
Verified VM deployment on a host with enough capacity and also without capacity.
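A minimal sketch of the check, assuming hypothetical names rather than the actual CapacityManager code:
    // Free memory = total host memory - memory of all VMs already placed on the host;
    // the new VM is admitted only if its requested memory fits in what is left.
    class HostMemoryCheck {
        static boolean hasEnoughFreeMemory(long hostTotalMemoryBytes,
                Iterable<Long> memoryOfVmsOnHostBytes, long requestedMemoryBytes) {
            long allocated = 0;
            for (long vmMemory : memoryOfVmsOnHostBytes) {
                allocated += vmMemory;
            }
            long free = hostTotalMemoryBytes - allocated;
            return requestedMemoryBytes <= free;
        }
    }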
* pr/847:
Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying Vm. free memory = total memory - (all vm memory)
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM
Updated hardcoded value with max data volumes limit from hypervisor capabilities.
* pr/1953:
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
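A sketch of the CLOUDSTACK-9794 change above, with a hypothetical validation method; the real code reads the limit from the hypervisor capabilities of the host's hypervisor version.
    // Hypothetical: compare the device id against the hypervisor's max data volumes
    // limit instead of the previously hardcoded value of 14.
    class DataVolumeLimitCheck {
        static void validateDeviceId(int deviceId, int maxDataVolumesFromCapabilities) {
            if (deviceId > maxDataVolumesFromCapabilities) {
                throw new IllegalArgumentException("Device id " + deviceId
                        + " exceeds the hypervisor's max data volumes limit of "
                        + maxDataVolumesFromCapabilities);
            }
        }
    }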
CLOUDSTACK-5806: add presetup to storage types that support over provisioning
Ideally this should be configurable via global settings
* pr/1958:
CLOUDSTACK-5806: add presetup to storage types that support over provisioning Ideally this should be configurable via global settings
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
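A small illustrative sketch of the idea, with a local enum standing in for CloudStack's StoragePoolType; the other entries in the set are examples, not the exact pre-existing list.
    // PreSetup is added to the set of pool types treated as supporting over-provisioning.
    import java.util.EnumSet;

    class OverProvisioningSupport {
        enum StoragePoolType { NetworkFilesystem, Filesystem, VMFS, PreSetup, Iscsi }

        static final EnumSet<StoragePoolType> SUPPORTS_OVER_PROVISIONING =
                EnumSet.of(StoragePoolType.NetworkFilesystem, StoragePoolType.Filesystem,
                           StoragePoolType.VMFS, StoragePoolType.PreSetup);

        static boolean supportsOverProvisioning(StoragePoolType type) {
            return SUPPORTS_OVER_PROVISIONING.contains(type);
        }
    }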
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug as configurable
Jira
===
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug as configurable
Description
=========
Currently ACS waits 15 seconds (hard-coded) for a hot-plugged NIC in a VR running on VMware to be detected by the guest OS.
The time taken to detect a hot-plugged NIC in the guest OS depends on the type of VMware NIC adapter (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable instead will be helpful for admins in such scenarios. This is specific to VRs running on the VMware hypervisor.
Also, if VMware introduces new NIC adapter types in future that take longer to be detected by the guest OS, it is good to have the flexibility of
configuring the wait timeout as a fallback mechanism in such scenarios.
Fix
===
Introduce a new configuration parameter (via ConfigKey), "vmware.nic.hotplug.wait.timeout", described as "Wait timeout (milli seconds) for hot plugged NIC of VM to be detected by guest OS.", as a fallback instead of the hard-coded timeout, to give admins flexibility in the scenarios listed above.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
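A sketch of how such a setting can be declared with CloudStack's ConfigKey; the category, default value, and reading code here are assumptions, not copied from the actual change.
    import org.apache.cloudstack.framework.config.ConfigKey;

    public class VmwareNicHotplugConfig {
        // Default of 15000 ms mirrors the previously hard-coded 15 second wait.
        public static final ConfigKey<Integer> VmwareNicHotplugWaitTimeout =
                new ConfigKey<Integer>("Advanced", Integer.class,
                        "vmware.nic.hotplug.wait.timeout", "15000",
                        "Wait timeout (milli seconds) for hot plugged NIC of VM to be detected by guest OS.",
                        true);

        public static long waitTimeoutMillis() {
            return VmwareNicHotplugWaitTimeout.value();
        }
    }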
* pr/1861:
CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug as configurable
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
moved logrotate from cron.daily to cron.hourly for vpcrouter in cloud-early-config
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
[4.9] CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
The router.aggregation.command.each.timeout in global configuration is only applied on newly created KVM hosts.
For existing KVM hosts, changing the value does not take effect.
We need to propagate the configuration to existing hosts when the cloudstack-agent connects.
* pr/1856:
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
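A hypothetical sketch of the propagation (the interface and method names below are illustrative, not CloudStack's actual agent/listener API): on agent connect, push the current global value down as a parameter.
    import java.util.HashMap;
    import java.util.Map;

    class AgentConfigPropagator {
        // Stand-in for whatever channel the management server uses to send
        // parameters to a connected KVM agent.
        interface AgentChannel {
            void sendParams(Map<String, String> params);
        }

        private final int routerAggregationCommandEachTimeout;

        AgentConfigPropagator(int timeoutSeconds) {
            this.routerAggregationCommandEachTimeout = timeoutSeconds;
        }

        // Invoked from the host-connect path so existing hosts also pick up the value.
        void onAgentConnect(AgentChannel agent) {
            Map<String, String> params = new HashMap<>();
            params.put("router.aggregation.command.each.timeout",
                    String.valueOf(routerAggregationCommandEachTimeout));
            agent.sendParams(params);
        }
    }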
This adds support for virtio-scsi on KVM hosts, either
for guests associated with the new os_type 'Other PV Virtio-SCSI (64-bit)',
or when a VM or template is registered with a detail parameter rootDiskController=scsi.
Update cloudstack add template dialog to allow for selecting rootDiskController with KVM
Update cloudstack kvm virtio-scsi to enable discard=unmap
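For illustration only, the shape of the libvirt disk definition involved, rendered as a plain string; this is not the LibvirtVMDef API CloudStack actually uses.
    // A scsi-bus disk with discard=unmap, plus the virtio-scsi controller the bus needs.
    class VirtioScsiDiskXmlSketch {
        static String rootDiskXml(String imagePath) {
            return "<disk type='file' device='disk'>\n"
                 + "  <driver name='qemu' type='qcow2' cache='none' discard='unmap'/>\n"
                 + "  <source file='" + imagePath + "'/>\n"
                 + "  <target dev='sda' bus='scsi'/>\n"
                 + "</disk>\n"
                 + "<controller type='scsi' model='virtio-scsi'/>\n";
        }
    }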
CLOUDSTACK-9821: Fixed issue in deploying vm in basic zone
Fixed an issue in deploying a VM in a basic zone.
There is an issue with the ipset command on XenServer 6.5: in util.pread2, the 'ipset' and '-N' arguments are passed in a way that causes the command to fail:
util.pread2(['/bin/bash', '-c', 'ipset', '-N ', tmpname , type])
* pr/1991:
CLOUDSTACK-9821: Fixed issue in deploying vm in basic zone
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
In case of a VMware host failure, all the VMs, including stopped VMs, migrate
to the new host. For the stopped VMs the power host gets updated. This was
triggering HandlePowerStateReport, which finally calls updatePowerState,
updating update_time for the VM. This caused capacity to stay reserved
for stopped VMs.
Currently ACS waits 15 seconds (hard-coded) for a hot-plugged NIC in the VR to be detected by the guest OS.
The time taken to detect a hot-plugged NIC in the guest OS depends on the type of NIC adapter (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable instead will be helpful for admins in such scenarios.
Also, if VMware introduces new NIC adapter types in future that take longer to be detected by the guest OS, it is good to have the flexibility of
configuring the wait timeout as a fallback mechanism in such scenarios.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
Issue
=====
Exception observed while creating CPVM in VMware Setup with DVS.
Observed error "StartCommand failed due to Exception: com.vmware.vim25.AlreadyExists."
This is due to concurrent attempts to create the same dvPortGroup on the same dvSwitch by the
manager threads of the CPVM and SSVM when both are started at the same time.
Fix
===
Synchronize api calls to create/update dvportgroup.
Also maintain a local cache to avoid multiple fetch API calls to vCenter
when multiple threads try to create the same object.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
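A hypothetical sketch of the synchronization plus caching described in the fix; the names are illustrative, not the actual VMware resource code.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class DvPortGroupRegistry {
        // Stand-ins for the vCenter lookup and create calls.
        interface DvSwitchApi {
            Object fetchPortGroup(String name);
            Object createPortGroup(String name);
        }

        private final Map<String, Object> cache = new ConcurrentHashMap<>();
        private final DvSwitchApi api;

        DvPortGroupRegistry(DvSwitchApi api) {
            this.api = api;
        }

        Object getOrCreate(String portGroupName) {
            // computeIfAbsent serializes work per key, so CPVM and SSVM manager threads
            // starting at the same time cannot both issue the create call and hit
            // com.vmware.vim25.AlreadyExists; later callers reuse the cached object
            // instead of re-fetching from vCenter.
            return cache.computeIfAbsent(portGroupName, name -> {
                Object existing = api.fetchPortGroup(name);
                return existing != null ? existing : api.createPortGroup(name);
            });
        }
    }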