This PR improves the volume file search on the datastore while computing the VM snapshot chain size in VMware. The file search is now performed in the VM directory on the datastore instead of across all directories on the datastore (searching all directories led to incorrect VM snapshot chain size computation when the VM id has 6 digits).
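For illustration, a minimal sketch of the narrowed search using the raw vim25 bindings; `vimPort`, `dsBrowser`, and the parameter names are assumptions, not the actual CloudStack code:

```java
import com.vmware.vim25.HostDatastoreBrowserSearchSpec;
import com.vmware.vim25.ManagedObjectReference;
import com.vmware.vim25.VimPortType;

public final class VmFolderSearchSketch {
    // Search only inside the VM's own folder, e.g. "[ds1] i-2-123456-VM",
    // instead of the datastore root "[ds1]", which scans every directory.
    static ManagedObjectReference searchVmFolder(VimPortType vimPort,
            ManagedObjectReference dsBrowser, String dsName, String vmDirName,
            String pattern) throws Exception {
        HostDatastoreBrowserSearchSpec spec = new HostDatastoreBrowserSearchSpec();
        spec.getMatchPattern().add(pattern); // e.g. "ROOT-42*.vmdk"
        String vmFolderPath = String.format("[%s] %s", dsName, vmDirName);
        // Returns a Task MOR; the caller waits on it and reads the search results.
        return vimPort.searchDatastoreSubFoldersTask(dsBrowser, vmFolderPath, spec);
    }
}
```

Restricting the search path matters because a root-level pattern such as `i-2-12345*` also matches files belonging to `i-2-123456-VM`, which is exactly the 6-digit-id collision described above.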
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Change in constructors
* Naming changes
* Remove commented code
* Refactor code for more readability
* Addressed review comments on code refactor
* Fix export snapshot and export template to secondary storage in VMware to export only one required disk
* Move clone operation into virtual machine mo
* Code refactored for readability
* Added disk key check even for successful clone operation
* Delete detached disks from cloned VM and add a few logs
* vmware: delete snapshot disk after backup to secondary storage
WIP - This ensures that the worker VM is destroyed along with any of its own
disks that are backed up to secondary storage.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix for volume backup and confusing vm var name
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* change
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* tag as worker vm
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix copy systemvm.iso for same version
For VMware, systemvm.iso is copied from the management server to the secondary store. Currently, the server only checks whether the desired file is present on the secondary store; the ISO is copied only if it is missing.
This change adds a checksum comparison between the source and destination ISO, which allows a new ISO to be copied whenever there is a mismatch.
This is useful when the file on the secondary store is corrupted or when a new systemvm.iso is generated for a dev environment.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
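For the checksum comparison above, a self-contained sketch (MD5 and all class/method names are illustrative assumptions):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public final class IsoChecksumSketch {
    // Hex-encoded MD5 of a file, computed in a streaming fashion.
    static String md5(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Copy only when the destination is missing or its checksum differs.
    static boolean needsCopy(Path src, Path dst) throws Exception {
        return !Files.exists(dst) || !md5(src).equals(md5(dst));
    }
}
```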
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Worker VM tags are missing on a few cloned VMs in VMware, so these VMs are skipped during tracking and cleanup of worker VMs. Adding proper worker VM tags to these VMs makes them trackable from CloudStack.
* Added support for removing unused port groups on VMware
* Fixed error handling around unavailable portgroup name
* Review changes: defaulted global var to false, added warning to description, changed if statement.
* Cleanup unused network port groups on all the hosts.
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Co-authored-by: nicolas <nicovazquez90@gmail.com>
* CLOUDSTACK-9175: [VMware DRS] Adding new host to DRS cluster does not participate in load balancing.
Summary: When a new host is added to a cluster, CloudStack doesn't create the port groups (created earlier by CloudStack on other hosts) that are present in the cluster. Since the new host doesn't have all the necessary CloudStack networking port groups, it is not eligible to participate in DRS load balancing or HA.
Solution: When adding a host to the cluster in CloudStack, use the VMware API to find the list of unique port groups on a previously added host (an older host in the cluster), if one exists, and then create them on the new host.
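A sketch of the solution with the vim25 bindings; resolving the new host's `HostNetworkSystem` MOR and reading the reference host's port groups through the property collector are left to the caller, and all names are illustrative:

```java
import com.vmware.vim25.HostNetworkPolicy;
import com.vmware.vim25.HostPortGroup;
import com.vmware.vim25.HostPortGroupSpec;
import com.vmware.vim25.ManagedObjectReference;
import com.vmware.vim25.VimPortType;
import java.util.List;
import java.util.Set;

public final class PortGroupSyncSketch {
    // Recreate the reference host's port groups on the newly added host.
    // 'existingOnNewHost' holds the port group names already present there.
    static void syncPortGroups(VimPortType vimPort,
            ManagedObjectReference newHostNetworkSystem,
            List<HostPortGroup> referenceHostPortGroups,
            Set<String> existingOnNewHost) throws Exception {
        for (HostPortGroup pg : referenceHostPortGroups) {
            String name = pg.getSpec().getName();
            if (existingOnNewHost.contains(name)) {
                continue; // already present, nothing to do
            }
            HostPortGroupSpec spec = new HostPortGroupSpec();
            spec.setName(name);
            spec.setVlanId(pg.getSpec().getVlanId());
            spec.setVswitchName(pg.getSpec().getVswitchName());
            spec.setPolicy(pg.getSpec().getPolicy() != null
                    ? pg.getSpec().getPolicy() : new HostNetworkPolicy());
            vimPort.addPortGroup(newHostNetworkSystem, spec);
        }
    }
}
```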
* Added a few checks for cluster details
* remove hot-add enable for CPU and memory in case of reservation (see the sketch below)
RAM and CPU reservation have no relation to RAM and CPU hot add
* add custom ram_reservation and add it to VM details
* system VMs don't have this property, so add an additional check
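A sketch of the idea with the vim25 `VirtualMachineConfigSpec`; the surrounding reconfigure task call is omitted, and the method name is illustrative:

```java
import com.vmware.vim25.ResourceAllocationInfo;
import com.vmware.vim25.VirtualMachineConfigSpec;

public final class HotAddVsReservationSketch {
    // Build a reconfigure spec that sets reservations and, since the
    // reservation path does not need hot add, switches hot add off.
    static VirtualMachineConfigSpec specWithReservation(long memMb, long cpuMhz) {
        VirtualMachineConfigSpec spec = new VirtualMachineConfigSpec();

        ResourceAllocationInfo mem = new ResourceAllocationInfo();
        mem.setReservation(memMb);
        spec.setMemoryAllocation(mem);

        ResourceAllocationInfo cpu = new ResourceAllocationInfo();
        cpu.setReservation(cpuMhz);
        spec.setCpuAllocation(cpu);

        // Hot add is not enabled when a reservation is requested.
        spec.setMemoryHotAddEnabled(Boolean.FALSE);
        spec.setCpuHotAddEnabled(Boolean.FALSE);
        return spec;
    }
}
```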
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* replace 0.0 with NumberUtils
* remove default value and remove return MinRam (seems unnecessary)
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/guru/VmwareVmImplementer.java
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
* disable hot add memory and cpu via vm settings
* add alternative implementation for hot add memory and cpu
* add log entry
* Modify and add log entry for hotadd
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* server: skip zone check for PERHOST iso during attachIso
Hypervisor tools ISOs (vmware-tools.iso, xs-tools.iso) are marked as PERHOST in the DB. They are active but not downloaded to the secondary storages, hence they have no template-zone entry.
This skips the template-zone check for such templates.
Fixes #5265
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
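For the skipped check above, a self-contained sketch; `TemplateType` mirrors CloudStack's `Storage.TemplateType`, and `hasZoneEntry` stands in for the template-zone lookup:

```java
public final class AttachIsoZoneCheckSketch {
    enum TemplateType { USER, SYSTEM, PERHOST }

    static void checkZoneAvailability(TemplateType type, long templateId,
            long zoneId, boolean hasZoneEntry) {
        if (type == TemplateType.PERHOST) {
            return; // hypervisor tools ISOs are never downloaded per zone
        }
        if (!hasZoneEntry) {
            throw new IllegalArgumentException("Template " + templateId
                    + " is not available in zone " + zoneId);
        }
    }
}
```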
* inverted check
* use constants in TemplateManager
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Added disk provisioning type support for VMware (a provisioning sketch follows this change list)
* Review changes
* Fixed unit test
* Review changes
* Added missing licenses
* Review changes
* Update StoragePoolInfo.java
Removed white space
* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id
* Delete __init__.py
* Merge fix
* Fixed failing test
* Added comment about parameters
* Added error log when update fails
* Added exception when using API
* Ordering storage pool selection to prefer thick disk capable pools if available
* Removed unused parameter
* Reordering changes
* Returning storage pool details after update
* Removed multiple pool update, updated marvin test, removed duplicate enum
* Removed comment
* Removed unused import
* Removed for loop
* Added missing return statements for failed checks
* Class name change
* Null pointer
* Added more info when a deployment fails
* Null pointer
* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Small bug fix on API response and added missing bracket
* Removed datastore cluster code
* Removed unused imports, added missing signature
* Removed duplicate config key
* Revert "Added more info when a deployment fails"
This reverts commit 2486db78dca8e034d8ad2386174dfb11004ce654.
Co-authored-by: dahn <daan.hoogland@gmail.com>
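The provisioning sketch referenced above: with the vim25 bindings, the requested provisioning type maps onto the flat-disk backing flags. The `DiskProvisioningType` enum here is illustrative, not CloudStack's actual type:

```java
import com.vmware.vim25.VirtualDiskFlatVer2BackingInfo;

public final class DiskProvisioningSketch {
    enum DiskProvisioningType { THIN, THICK_EAGER_ZEROED, THICK_LAZY_ZEROED }

    // Apply the requested provisioning type to a new disk's backing info.
    static void apply(VirtualDiskFlatVer2BackingInfo backing, DiskProvisioningType type) {
        switch (type) {
            case THIN:
                backing.setThinProvisioned(Boolean.TRUE);
                break;
            case THICK_EAGER_ZEROED:
                backing.setThinProvisioned(Boolean.FALSE);
                backing.setEagerlyScrub(Boolean.TRUE); // zeroed up front
                break;
            default: // lazy-zeroed thick
                backing.setThinProvisioned(Boolean.FALSE);
                backing.setEagerlyScrub(Boolean.FALSE);
        }
    }
}
```

Preferring thick-capable pools during selection then becomes an ordering concern at allocation time, which is what the "Ordering storage pool selection" change in the list above addresses.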
* vmware: fix migrate vm with volume
A recent forward merge of the 4.15 branch accidentally introduced a bug in the VM relocation method for VMware when finding a datastore for the migrated volume.
This PR fixes it by using whichever of the target or source host is available.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
This PR fixes the problem of the chain info not being updated, or being set to null, after volume migrations.
Problem: While fetching the volume chain info, the management server assumes the datastore name is a UUID (true only for NFS storage added by CloudStack), but a datastore can have any name.
Solution: To fetch the volume chain info, use the datastore name instead of the UUID.
The fix applies to the flow of the following API operations:
migrateVirtualMachine
migrateVirtualMachineWithVolume
migrateVolume
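A small sketch of the idea: the datastore path used for chain info is built from the datastore's actual name rather than an assumed pool UUID (all names below are illustrative):

```java
public final class ChainInfoPathSketch {
    // e.g. datastorePath("my-nfs-ds", "i-2-10-VM", "ROOT-10-000002.vmdk")
    // yields "[my-nfs-ds] i-2-10-VM/ROOT-10-000002.vmdk"
    static String datastorePath(String datastoreName, String vmDirName, String diskFileName) {
        return String.format("[%s] %s/%s", datastoreName, vmDirName, diskFileName);
    }
}
```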
This PR fixes the issue of a missing fcd folder in local storage in the case of VMware vSphere.
With this fix, a folder named fcd is created whenever local storage is initialized (sketch below).
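A sketch of the folder creation with the vim25 `FileManager`; resolving the `fileManager` and `datacenter` MORs is assumed to happen elsewhere:

```java
import com.vmware.vim25.ManagedObjectReference;
import com.vmware.vim25.VimPortType;

public final class FcdFolderSketch {
    // Ensure the "fcd" folder exists at the root of the datastore.
    static void ensureFcdFolder(VimPortType vimPort, ManagedObjectReference fileManager,
            ManagedObjectReference datacenter, String dsName) throws Exception {
        String path = String.format("[%s] fcd", dsName);
        try {
            vimPort.makeDirectory(fileManager, path, datacenter, true);
        } catch (Exception e) {
            // A FileAlreadyExists fault means the folder is present;
            // real code would inspect the fault and rethrow anything else.
        }
    }
}
```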
Datastore cluster support as primary storage already exists, but any change made to the datastore cluster at vCenter, such as the addition or removal of a datastore, is not synchronized with CloudStack directly; it previously required removing the primary storage from CloudStack and adding it again.
Here, synchronization of the datastore cluster is fixed without the need to remove or re-add it (a reconciliation sketch follows below).
1. A new API, syncStoragePool, is introduced, which takes the datastore cluster storage pool UUID as its parameter. This API checks whether there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronization, if a new child datastore is found in the datastore cluster, the management server creates a new child storage pool in the database under the datastore cluster. If the new child storage pool was already added as an individual storage pool, the existing storage pool entry is converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronization, if an existing child datastore in CloudStack is found to have been removed on vCenter, the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with vCenter's behaviour when adding and removing child datastores.
Fixes: #4808, #4941
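The reconciliation sketch referenced above, with a hypothetical `PoolStore` standing in for the storage pool DAO; it mirrors points 2 and 3:

```java
import java.util.Set;

public final class DatastoreClusterSyncSketch {
    // Hypothetical stand-in for the storage pool DAO.
    interface PoolStore {
        boolean existsAsIndividualPool(String dsUuid);
        void convertToChildPool(String dsUuid, long parentPoolId);
        void createChildPool(String dsUuid, long parentPoolId);
        void promoteToIndividualPool(String dsUuid);
    }

    // Reconcile vCenter's view of the cluster's children with the DB view.
    static void sync(Set<String> childrenInVcenter, Set<String> childrenInDb,
            long parentPoolId, PoolStore store) {
        for (String ds : childrenInVcenter) {
            if (childrenInDb.contains(ds)) {
                continue; // already known as a child pool
            }
            if (store.existsAsIndividualPool(ds)) {
                store.convertToChildPool(ds, parentPoolId); // reuse the existing entry
            } else {
                store.createChildPool(ds, parentPoolId);
            }
        }
        for (String ds : childrenInDb) {
            if (!childrenInVcenter.contains(ds)) {
                store.promoteToIndividualPool(ds); // removed from the cluster on vCenter
            }
        }
    }
}
```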
This PR adds a force flag to the attachIso / detachIso commands, especially for VMware, where it was noticed that detaching an ISO, or attaching an ISO when another one is already present, fails to perform the necessary operation: from the ACS end we either answer the question returned by ESXi for the CDRom disconnect operation with 'No' (for the detach operation) or do not answer the question at all (for the attach operation).
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
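A sketch of how a forced operation could answer the pending ESXi question via the vim25 `AnswerVM` call; reading `runtime.question` and choosing the 'yes' choice key are assumptions left to the caller:

```java
import com.vmware.vim25.ManagedObjectReference;
import com.vmware.vim25.VimPortType;
import com.vmware.vim25.VirtualMachineQuestionInfo;

public final class CdromQuestionSketch {
    // If the VM is blocked on the CDROM disconnect question, answer it
    // affirmatively so a forced detach can proceed.
    static void answerCdromQuestion(VimPortType vimPort, ManagedObjectReference vm,
            VirtualMachineQuestionInfo question, String yesChoiceKey) throws Exception {
        if (question != null && question.getText() != null
                && question.getText().contains("CD-ROM")) {
            vimPort.answerVM(vm, question.getId(), yesChoiceKey);
        }
    }
}
```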
* prevent other VM disks from getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
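A minimal sketch with the vim25 `VirtualMachineRelocateSpec`: the fix amounts to setting an explicit host from the target cluster on the relocate spec (MOR resolution assumed done elsewhere):

```java
import com.vmware.vim25.ManagedObjectReference;
import com.vmware.vim25.VirtualMachineRelocateSpec;

public final class InterClusterRelocateSketch {
    // Without shared storage, vCenter needs an explicit target host
    // for an inter-cluster relocate.
    static VirtualMachineRelocateSpec buildSpec(ManagedObjectReference targetHost,
            ManagedObjectReference targetPool, ManagedObjectReference targetDatastore) {
        VirtualMachineRelocateSpec spec = new VirtualMachineRelocateSpec();
        spec.setHost(targetHost);      // host in the destination cluster
        spec.setPool(targetPool);      // resource pool of the destination cluster
        spec.setDatastore(targetDatastore);
        return spec;
    }
}
```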
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find VM cluster ID with multiple ROOT volumes
A VM can have multiple ROOT volumes and some can be on a zone-wide store; therefore, iterate over all of them until a cluster ID is found.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
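A self-contained sketch of the lookup; the `Volume` interface is a stand-in for CloudStack's volume and pool objects:

```java
import java.util.List;

public final class VmClusterLookupSketch {
    interface Volume {
        Long getPoolClusterId(); // null when the volume sits on a zone-wide pool
    }

    // Walk all ROOT volumes; the first pool with a cluster ID wins.
    static Long findClusterId(List<Volume> rootVolumes) {
        for (Volume v : rootVolumes) {
            Long clusterId = v.getPoolClusterId();
            if (clusterId != null) {
                return clusterId;
            }
        }
        return null; // all ROOT volumes are on zone-wide storage
    }
}
```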
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake; VMware won't have pods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>