* Migrate volume improvements, to bypass secondary storage when a direct copy of the volume between pools is allowed
* Bypass secondary storage when copying volumes between zone-wide pools and
- local storage on a host in the same zone
- cluster-wide pools in the same zone
* Bypass secondary storage for volumes on a ceph/rbd pool when the scope permits
* Fix the destination disk format while migrating a volume from ceph/rbd to nfs, and some code improvements
* Unit tests
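A minimal sketch of the kind of scope check behind the bypass above; the types and method names here are hypothetical illustrations, not CloudStack's actual classes:

```java
// Illustrative sketch only (hypothetical types, not CloudStack's actual classes):
// decide whether a volume copy between two primary storage pools can bypass
// secondary storage, based on pool scope and zone.
public class DirectCopyCheck {

    enum ScopeType { ZONE, CLUSTER, HOST }

    static class PoolInfo {
        final long zoneId;
        final ScopeType scope;
        PoolInfo(long zoneId, ScopeType scope) { this.zoneId = zoneId; this.scope = scope; }
    }

    // A direct copy is possible when both pools are in the same zone and at least
    // one of them is zone-wide, so a single host can reach both ends of the copy.
    static boolean canBypassSecondaryStorage(PoolInfo source, PoolInfo destination) {
        if (source.zoneId != destination.zoneId) {
            return false; // cross-zone copies still stage through secondary storage
        }
        return source.scope == ScopeType.ZONE || destination.scope == ScopeType.ZONE;
    }

    public static void main(String[] args) {
        PoolInfo zoneWide = new PoolInfo(1, ScopeType.ZONE);
        PoolInfo clusterWide = new PoolInfo(1, ScopeType.CLUSTER);
        System.out.println(canBypassSecondaryStorage(zoneWide, clusterWide)); // true
    }
}
```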
* Update to suitable disk offering(s) for volume(s) after migrating a VM with volumes when the pool type (shared or local) changes
Currently, migrate VM with volume(s) bypasses the service and disk offerings of the volumes, as the target pools for the
migration are specified explicitly, which ignores the offerings. An offering change is required when the pool type (shared or local) changes, mainly
- when a volume on a shared pool is migrated to a local pool
- when a volume on a local pool is migrated to a shared pool
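A minimal sketch of the shared/local mismatch check that triggers an offering change; the types are hypothetical, not the management-server implementation:

```java
// Illustrative sketch (hypothetical types): after migrating a VM's volume to a new
// pool, switch the volume to a disk offering whose storage type matches the pool.
public class OfferingTypeCheck {

    static class Pool { final boolean local; Pool(boolean local) { this.local = local; } }
    static class DiskOffering { final boolean useLocalStorage; DiskOffering(boolean l) { this.useLocalStorage = l; } }

    // An offering change is needed only when the shared/local type of the target
    // pool no longer matches the storage type declared by the current offering.
    static boolean offeringChangeRequired(Pool targetPool, DiskOffering currentOffering) {
        return targetPool.local != currentOffering.useLocalStorage;
    }
}
```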
* Show a proper message while migrating a volume when the target pool and offering type mismatch (one is shared and the other local)
* Consider host scope first during endpoint selection while copying between primary storage pools
* Update the disk offering count (for the listDiskOfferings API) when removing offerings whose tags do not match the storage tags
* storage: change the storage pool to the Up state when cancelling storage migration
* Update 11773: connect the host to the shared pool after cancelling storage migration
* Update 11773: update the db only
* Update 11773: skip the capacity update for StorPool
* Return details of the storage pool in the response, including the url, and update capacityBytes and capacityIops if applicable while creating a storage pool
* Added the capacitybytes parameter to the storage pool response, in sync with the capacityiops response parameter and the createStoragePool cmd request parameter (the existing disksizetotal parameter in the storage pool response can be deprecated)
* Don't keep the url in details
* Persist the capacityBytes and capacityIops in the storage_pool_details table while creating a storage pool as well, for consistency - as these are updated during update storage pool
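A rough sketch of persisting those values as pool details at creation time; this is a hypothetical helper, not the actual management-server code:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (hypothetical; not the actual management-server code): persist
// capacityBytes and capacityIops alongside the other storage pool details at creation
// time, but keep the url out of the details map.
public class StoragePoolDetailsSketch {

    static Map<String, String> buildDetails(Map<String, String> requestDetails,
                                            Long capacityBytes, Long capacityIops) {
        Map<String, String> details = new HashMap<>(requestDetails);
        details.remove("url"); // the url is returned in the response, not stored as a detail
        if (capacityBytes != null) {
            details.put("capacityBytes", String.valueOf(capacityBytes));
        }
        if (capacityIops != null) {
            details.put("capacityIops", String.valueOf(capacityIops));
        }
        return details;
    }
}
```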
* Rebase with main fixes
* ScaleIO/PowerFlex smoke tests improvements, and some fixes
* Fix test_volumes.py, encrypted volume size check (for powerflex volumes)
* Fix test_over_provisioning.py (over provisioning supported for powerflex)
* Update vm snapshot tests
* Update the volume size delta in the primary storage resource count for user VM volumes only (see the sketch after this list)
Previously, the resource count for VR volumes on PowerFlex was also updated here, resulting in a resource count discrepancy
(the count is re-calculated through ResourceCountCheckTask later, which skips the VR volumes)
* Fix test_import_unmanage_volumes.py (unsupported for powerflex)
* Fix test_sharedfs_lifecycle.py (volume size check for powerflex)
* Update powerflex.connect.on.demand config default to true
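The sketch referenced above, showing the idea of accounting the size delta only for user VM volumes; the types are hypothetical, not the resource-limit code itself:

```java
// Illustrative sketch (hypothetical types): only account the volume size delta
// against the owner's primary storage resource count for user VM volumes, so
// system VM (e.g. virtual router) volumes do not skew the count.
public class ResourceCountSketch {

    enum VmType { USER, DOMAIN_ROUTER, SECONDARY_STORAGE_VM }

    interface ResourceCounter { void incrementPrimaryStorage(long accountId, long deltaBytes); }

    static void accountVolumeDelta(VmType ownerVmType, long accountId, long deltaBytes,
                                   ResourceCounter counter) {
        if (ownerVmType != VmType.USER) {
            return; // VR and other system VM volumes are skipped, matching ResourceCountCheckTask
        }
        counter.incrementPrimaryStorage(accountId, deltaBytes);
    }
}
```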
We didn't account for caching the volume stats separately for each Linstor
cluster in use, so the first Linstor cluster queried would prevent caching
for all the others, and null was returned.
Now there are invalidation counters for each Linstor cluster, and the
cache result is stored with the Linstor cluster address as a prefix.
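A minimal sketch of that per-cluster keying and invalidation; it is hypothetical and not the actual Linstor plugin code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch (hypothetical; not the actual Linstor plugin code): keep an
// invalidation counter per Linstor cluster and key cached volume stats with the
// cluster address as a prefix, so one cluster's cache no longer affects the others.
public class PerClusterStatsCache {

    private final Map<String, long[]> statsCache = new ConcurrentHashMap<>();
    private final Map<String, AtomicLong> invalidations = new ConcurrentHashMap<>();

    public void put(String clusterAddress, String volumeName, long[] stats) {
        statsCache.put(clusterAddress + "/" + volumeName, stats);
    }

    public long[] get(String clusterAddress, String volumeName) {
        return statsCache.get(clusterAddress + "/" + volumeName);
    }

    // Invalidate only this cluster's entries and bump its own counter;
    // cached results for the other clusters stay available.
    public long invalidate(String clusterAddress) {
        statsCache.keySet().removeIf(key -> key.startsWith(clusterAddress + "/"));
        return invalidations.computeIfAbsent(clusterAddress, k -> new AtomicLong()).incrementAndGet();
    }
}
```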
* Fix deploying a VM from a snapshot that was copied to another zone
* Fix creating a StorPool volume from a snapshot when the size in the
offering is bigger than the snapshot size
Ensure bucket.getSecretKey() is used when building the S3 client.
Previously, only getAccessKey() was passed for both key and secret,
causing V4 signature validation failures during operations such as
bucket creation and policy updates.
Co-authored-by: Jean Vetorello <jean@paneas.com>
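A minimal illustration of the corrected client construction using the AWS SDK for Java v1; the actual CloudStack code may be structured differently, but the point is that the bucket's secret key supplies the secret instead of the access key being passed twice:

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Illustrative sketch using the AWS SDK for Java v1 (the surrounding CloudStack code
// may differ): pass the bucket's secret key, not the access key twice, so that
// SigV4 request signing validates correctly.
public class S3ClientFactory {

    public static AmazonS3 buildClient(String accessKey, String secretKey,
                                       String endpoint, String region) {
        BasicAWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, region))
                .build();
    }
}
```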
* Find system VM templates for a CKS cluster honouring the preferred architecture (see the sketch after this list)
* Fix unit tests
* Fix checkstyle
* Sort instead of filtering by preferred arch
* Remove unnecessary stubs
* Restore Java version
* Address review comments
* Fail and display an error message in case the CKS ISO arch doesn't match the selected template arch
* Prefer CKS ISO arch instead of the system VM setting
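The sketch referenced above, showing the "sort instead of filter" idea for the preferred architecture; the types are hypothetical, not CloudStack's template classes:

```java
import java.util.Comparator;
import java.util.List;

// Illustrative sketch (hypothetical types): sort candidate system VM templates so the
// ones matching the preferred architecture come first, instead of filtering out all
// other architectures entirely.
public class TemplateArchSort {

    static class Template {
        final String name;
        final String arch;
        Template(String name, String arch) { this.name = name; this.arch = arch; }
    }

    static void sortByPreferredArch(List<Template> templates, String preferredArch) {
        // false sorts before true, so matching templates move to the front
        templates.sort(Comparator.comparing((Template t) -> !t.arch.equalsIgnoreCase(preferredArch)));
    }
}
```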
This feature adds the ability to create a new instance from a VM backup for the dummy, NAS and Veeam backup providers. It works even if the original instance used to create the backup was expunged or unmanaged. There are two parts to this functionality:
Saving all configuration details that the VM had at the time of taking the backup, and using them to create an instance from the backup.
Enabling a user to expunge/unmanage an instance that has backups.
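A rough sketch of capturing the VM configuration with the backup so it survives expunge/unmanage; the field names below are hypothetical, not the actual backup framework keys:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (hypothetical field names; not the actual backup framework code):
// capture the VM configuration at backup time so an instance can later be created from
// the backup even after the original VM is expunged or unmanaged.
public class BackupVmDetails {

    static Map<String, String> captureVmConfig(String serviceOfferingUuid, String templateUuid,
                                               String networkUuids, long cpuNumber, long memoryMb) {
        Map<String, String> details = new HashMap<>();
        details.put("serviceofferingid", serviceOfferingUuid);
        details.put("templateid", templateUuid);
        details.put("networkids", networkUuids);
        details.put("cpunumber", String.valueOf(cpuNumber));
        details.put("memory", String.valueOf(memoryMb));
        return details; // persisted with the backup record and read back when creating an instance
    }
}
```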
* Use special icon for sharedfs instance and prefix for sharedfs volumes
* Give custom icon precedence over shared fs icon
* Fix sharedfsvm icon size
* Fix UT failure in StorageVmSharedFSLifeCycleTest
* [PowerFlex/ScaleIO] Added wait time after SDC service start/restart/stop, and retries to fetch SDC id/guid
* Added agent property 'powerflex.sdc.service.wait' for the time (in secs) to wait after SDC service start/restart/stop
* Code improvements
* Cumulative enhancement fixes for ScaleIO: MDM add/remove, host prepare/unprepare, and validation that the storage pool can be created in the agent.
- Implemented validation to fail host disconnect from a storage pool if there are volumes attached, as SDC client MDM removal requires the scini service to be restarted
- Implemented storage pool validation by checking whether the MDM addresses from the configuration file and from memory (using the CLI) match, otherwise fail the ModifyStoragePool command.
- Introduced a configuration key for the timeout applied after making MDM changes for ScaleIO: powerflex.mdm.change.apply.timeout.ms (default 1000ms)
- Implemented logic to apply this timeout after making MDM changes for ScaleIO in the prepare and unprepare logic
- Added detection of MDM removal support via the CLI
- If MDM removal via the CLI is supported, use the CLI; otherwise fall back to editing drv_cfg.txt and restarting scini (see the sketch after this list)
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
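The sketch referenced above, outlining the CLI-first MDM removal with a file-edit fallback; the helpers are hypothetical, not the actual agent code:

```java
// Illustrative sketch (hypothetical helpers; not the actual agent code): prefer
// removing an MDM entry through the SDC CLI when that capability is detected, and
// fall back to rewriting drv_cfg.txt plus a scini service restart otherwise.
public abstract class MdmRemovalSketch {

    abstract boolean cliSupportsMdmRemoval();           // detected via the SDC CLI
    abstract void removeMdmViaCli(String mdmAddresses); // CLI-based removal
    abstract void rewriteDrvCfg(String mdmAddresses);   // edit drv_cfg.txt directly
    abstract void restartSciniService();                // restart scini to pick up the change

    void removeMdm(String mdmAddresses, long applyTimeoutMs) throws InterruptedException {
        if (cliSupportsMdmRemoval()) {
            removeMdmViaCli(mdmAddresses);
        } else {
            rewriteDrvCfg(mdmAddresses);
            restartSciniService();
        }
        // wait for the MDM change to apply, per powerflex.mdm.change.apply.timeout.ms (default 1000ms)
        Thread.sleep(applyTimeoutMs);
    }
}
```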