Datastore clusters are already supported as primary storage. However, changes made to a datastore cluster at vCenter, such as the addition or removal of a datastore, are not synchronised with CloudStack directly; the primary storage has to be removed from CloudStack and added again.
This PR fixes the synchronisation of datastore clusters so that removing and re-adding the datastore cluster is no longer needed.
1. A new API, syncStoragePool, is introduced; it takes the datastore cluster storage pool UUID as its parameter. The API checks whether there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, the management server creates a new child storage pool in the database under the datastore cluster. If the datastore has already been added as an individual storage pool, the existing storage pool entry is converted to a child storage pool instead of creating a new entry.
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, the management server removes that child storage pool from the datastore cluster and makes it an individual storage pool.
The above behaviour matches vCenter's behaviour when child datastores are added or removed.
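To make the reconciliation concrete, here is a minimal sketch of what the sync does. All class, field, and method names below are illustrative stand-ins, not the actual CloudStack types:

```java
import java.util.*;

// Illustrative sketch of the datastore-cluster reconciliation described above.
// All type and method names here are hypothetical, not actual CloudStack classes.
public class DatastoreClusterSync {

    // A minimal stand-in for a storage pool row in the management server DB.
    static class StoragePoolRecord {
        String uuid;
        String parentUuid; // null => individual pool, non-null => child of a cluster
        StoragePoolRecord(String uuid, String parentUuid) {
            this.uuid = uuid;
            this.parentUuid = parentUuid;
        }
    }

    static void sync(String clusterUuid,
                     Set<String> vcenterChildren,            // datastores vCenter reports in the cluster
                     Map<String, StoragePoolRecord> dbPools) // all pools known to the management server
    {
        // 1. New or already-known-as-individual datastores become child pools.
        for (String ds : vcenterChildren) {
            StoragePoolRecord rec = dbPools.get(ds);
            if (rec == null) {
                dbPools.put(ds, new StoragePoolRecord(ds, clusterUuid)); // create a new child pool
            } else if (rec.parentUuid == null) {
                rec.parentUuid = clusterUuid; // convert an existing individual pool to a child
            }
        }
        // 2. Children no longer present on vCenter become individual pools again.
        for (StoragePoolRecord rec : dbPools.values()) {
            if (clusterUuid.equals(rec.parentUuid) && !vcenterChildren.contains(rec.uuid)) {
                rec.parentUuid = null; // detach from the cluster, keep as an individual pool
            }
        }
    }
}
```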
This PR makes sure no orphaned snapshot details are considered in the cleanup-at-startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed, this would be a bit complicated.
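As an illustration of the interim fix, the startup cleanup can simply skip detail rows whose parent snapshot is soft-deleted. The types and names below are hypothetical, not the real DAO code:

```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical sketch: filter out snapshot detail rows whose parent snapshot
// row is already marked removed, so the startup cleanup never touches them.
public class SnapshotDetailCleanup {

    static class Snapshot { long id; Date removed; }      // removed != null => soft-deleted
    static class SnapshotDetail { long snapshotId; String name; }

    static List<SnapshotDetail> detailsEligibleForCleanup(
            List<SnapshotDetail> details, Map<Long, Snapshot> snapshotsById) {
        return details.stream()
                // keep a detail only if its parent exists and is not soft-deleted
                .filter(d -> {
                    Snapshot parent = snapshotsById.get(d.snapshotId);
                    return parent != null && parent.removed == null;
                })
                .collect(Collectors.toList());
    }
}
```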
Co-authored-by: Daan Hoogland <dahn@onecht.net>
When the migrateVirtualMachineWithVolume API call is invoked and no migration strategy is found, the volumes are left in the Migrating state.
This PR puts the volumes back into the Ready state.
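A minimal sketch of the rollback idea, using simplified stand-in types rather than the actual CloudStack entities:

```java
import java.util.*;

// Hypothetical sketch of the fix: if no migration strategy is found, roll the
// volumes back from Migrating to Ready instead of leaving them stuck.
public class VolumeMigrationRollback {

    enum VolumeState { READY, MIGRATING }

    static class Volume {
        String name;
        VolumeState state = VolumeState.READY;
    }

    static void migrateWithVolumes(List<Volume> volumes, Object strategy) {
        volumes.forEach(v -> v.state = VolumeState.MIGRATING);
        if (strategy == null) {
            // No strategy can handle this migration: restore the previous state
            // so the volumes stay usable, then fail the API call.
            volumes.forEach(v -> v.state = VolumeState.READY);
            throw new IllegalStateException("No migration strategy found");
        }
        // ... perform the migration with the chosen strategy ...
    }
}
```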
Fixes #4201
This PR addresses the issue of a VM snapshot being indefinitely stuck in the Expunging state when deletion fails.
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* server: create DB entry for storage pool capacity when create storage pool
* Revert "server: create DB entry for storage pool capacity when create storage pool"
This reverts commit e790167bfe8cdebc80c8a51cb0191184edc40afd.
* server: create DB entry for storage pool capacity when create zone-wide storage pools
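As a sketch of the final behaviour (after the revert), the capacity row can be created in the same step that creates a zone-wide pool. Everything here, including CapacityRecord and createZoneWidePool, is hypothetical:

```java
import java.util.*;

// Illustrative only: when a zone-wide storage pool is created, insert its
// capacity row in the same step so capacity views are consistent immediately.
public class ZoneWidePoolCreation {

    static class CapacityRecord {
        long zoneId; String poolUuid; long totalBytes; long usedBytes;
        CapacityRecord(long zoneId, String poolUuid, long totalBytes) {
            this.zoneId = zoneId; this.poolUuid = poolUuid;
            this.totalBytes = totalBytes; this.usedBytes = 0;
        }
    }

    static final List<CapacityRecord> capacityTable = new ArrayList<>();

    static String createZoneWidePool(long zoneId, long totalBytes) {
        String poolUuid = UUID.randomUUID().toString();
        // ... persist the storage pool row itself ...
        // Fix: also create the capacity DB entry at creation time, instead of
        // waiting for a later capacity-sync cycle to fill it in.
        capacityTable.add(new CapacityRecord(zoneId, poolUuid, totalBytes));
        return poolUuid;
    }
}
```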
While finding pools for volume migration, list the following compatible storage pools (see the sketch after this list):
- all zone-wide storage pools of the same hypervisor.
- when the volume is attached to a VM, all storage pools from the same cluster as the VM.
- for a detached volume, all storage pools that belong to clusters of the same hypervisor.
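A sketch of these selection rules, with simplified stand-ins for the real CloudStack entities:

```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical sketch of the pool-selection rules listed above.
public class MigrationPoolFinder {

    enum Scope { ZONE, CLUSTER }

    static class Pool { Scope scope; String hypervisor; Long clusterId; }
    static class Volume { String hypervisor; Long attachedVmClusterId; } // null if detached

    static List<Pool> compatiblePools(Volume vol, List<Pool> allPools) {
        return allPools.stream().filter(p -> {
            if (p.scope == Scope.ZONE) {
                // zone-wide pools: the hypervisor must match
                return p.hypervisor.equals(vol.hypervisor);
            }
            if (vol.attachedVmClusterId != null) {
                // attached volume: only pools in the VM's cluster
                return Objects.equals(p.clusterId, vol.attachedVmClusterId);
            }
            // detached volume: any cluster-scoped pool of the same hypervisor
            return p.hypervisor.equals(vol.hypervisor);
        }).collect(Collectors.toList());
    }
}
```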
Fixes #4692
Fixes #4400
* Fix for mapping guest OS type read from OVF to existing guest OS in CloudStack database while registering VMware template
* Added unit tests to String Utils methods and updated the code
* Updated the Javadoc section
* Updated the OS description logic to keep an equals-ignore-case match with the guest OS display name
The guest OS is updated from the OVF file after the upload is completed.
This PR fixes template upload from a local file on VMware.
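A hypothetical sketch of the matching step: the OS description read from the OVF is compared against CloudStack guest OS display names with a case-insensitive equals. Names and types are illustrative:

```java
import java.util.*;

// Hypothetical sketch of mapping an OVF guest OS description to a CloudStack
// guest OS entry by case-insensitive display-name match.
public class OvfGuestOsMapper {

    static class GuestOs {
        long id; String displayName;
        GuestOs(long id, String displayName) { this.id = id; this.displayName = displayName; }
    }

    static Optional<GuestOs> mapFromOvf(String ovfOsDescription, List<GuestOs> knownGuestOses) {
        return knownGuestOses.stream()
                .filter(os -> os.displayName.equalsIgnoreCase(ovfOsDescription))
                .findFirst();
        // The caller falls back to a default guest OS type when no match is found.
    }
}
```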
Co-authored-by: dahn <daan.hoogland@gmail.com>
This PR addresses an error that appears when adding a new host. I don't even understand why there was a cast to String in the first place; I assume some classes send a HypervisorType and some send a string (empty or otherwise). Shouldn't this be addressed by using the same type everywhere? With this fix, adding a new XenServer host works fine.
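A sketch of the defensive handling, assuming the parameter may arrive as either the enum or a string; all names here are illustrative:

```java
// Illustrative sketch: accept either a HypervisorType or its String name
// instead of blindly casting the parameter to String.
public class HypervisorTypeParam {

    enum HypervisorType { XENSERVER, KVM, VMWARE, NONE }

    static HypervisorType resolve(Object param) {
        if (param instanceof HypervisorType) {
            return (HypervisorType) param;        // caller already passed the enum
        }
        if (param instanceof String && !((String) param).isEmpty()) {
            return HypervisorType.valueOf(((String) param).toUpperCase());
        }
        return HypervisorType.NONE;               // empty or missing value
    }
}
```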
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Setting snapshot state to error on timeout
* Setting removed field so snapshot record is ignored by garbage collection
* Removed explicitly setting error status, renamed method from markFailed to markRemoved
* Renamed method, moved code a few lines down
* Moved remove logic
* Removed unused service
* Moved removed logic - last time, promise
* Support for handling incremental snapshots (on DB entries) on Xen
* Addressed comments
* Update NfsSecondaryStorageResource.java: adjusted spacing in comment/log
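A hypothetical sketch of where these commits end up: on timeout the snapshot row gets its removed field set, and the garbage collector skips any row that has it. The types are stand-ins, not the real CloudStack entities:

```java
import java.util.Date;

// Hypothetical sketch: a timed-out snapshot is soft-deleted by setting its
// removed field, which takes it out of garbage-collection consideration.
public class SnapshotTimeoutHandler {

    static class SnapshotRecord {
        long id;
        Date removed; // non-null means the garbage collector must ignore this row
    }

    static void markRemoved(SnapshotRecord snapshot) {
        snapshot.removed = new Date();
    }

    static boolean eligibleForGarbageCollection(SnapshotRecord snapshot) {
        return snapshot.removed == null;
    }
}
```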
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This feature enables the following (a sketch of the balanced placement idea follows the list):
- Balanced migration of data objects from a source image store to destination image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores
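As an illustration of the balanced-migration idea (not the actual implementation), each object could be placed on the destination store with the most free capacity at that moment:

```java
import java.util.*;

// Illustrative sketch of "balanced" placement: pick the writable destination
// image store with the most free space that can hold the object.
public class BalancedImageStoreMigration {

    static class ImageStore {
        String name; long freeBytes; boolean readOnly;
        ImageStore(String name, long freeBytes) { this.name = name; this.freeBytes = freeBytes; }
    }

    static ImageStore pickDestination(List<ImageStore> destinations, long objectSize) {
        return destinations.stream()
                .filter(s -> !s.readOnly && s.freeBytes >= objectSize)
                .max(Comparator.comparingLong(s -> s.freeBytes))
                .orElseThrow(() -> new IllegalStateException("no destination store has room"));
    }
}
```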
Related Primate PR: apache/cloudstack-primate#326