* Support creation of PVs (persistent volumes) in CloudStack projects
* Add support for snapshot APIs for the project role
* Add support to set up the CSI driver on Kubernetes cluster creation
* Fix deploy script
* Update response
* Fix table name
* Fix linter
* Show whether the CSI driver is set up in the cluster
* Delete PVs whose reclaim policy is Delete when the cluster is destroyed
* Update ref
* Move changes to 4.22
* Fix variables
* Fix EOF
* Add createCrossZoneInstanceEnabled to BackupOfferingResponse
* Show the "Use IP Address from Backup" button when the original instance is expunged
* Fix NPE in takeBackup when the VM template is deleted
* Add since to cross-zone instance creation in BackupOfferingResponse.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Store and show the guest OS type in the backup metadata
* Show a warning in the create-instance-from-backup form if the guest OS type differs
* Rename backupvmexpunged to isbackupvmexpunged
* Address review comments
* Fix NPE
* Improve error message
* Tweak error message
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Add the source VM name to virt-v2v migration log entries
* Improve feedback by displaying the running import tasks
* Add the source VM name prefix to more conversion logs
* Improve listing and also list completed tasks
* Pass extra parameters to virt-v2v if the administrator allows it via a global setting (see the sketch after this group)
* Add an option to force converting directly to a storage pool
* Refactor based on review comments
* Add properties for environment variables used in the instance conversion
* Add a separate component for Import VM Tasks
* Apply Copilot suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fix importing unmanaged instances caused by an incorrect internal name
* Add the VM prefix to each conversion log operation
* Log the original VM name instead of the cloned VM's in case of cloning
* Allow searching for the storage pool by UUID after conversion to support SharedMountPoint
* Fix the pool search logic
* Improve the UI and add checks for the force-convert-to-pool parameter
* Support local storage when forceconverttopool is set to true
* Add a config key for allowed extra params and add validation
* Fix parameter lists
* Fix compile error
* Remove extraneous stubbings
* Fix execution of extra params
---------
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
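As a minimal sketch of the allow-list idea behind the extra-params commits above: the administrator exposes a comma-separated list of permitted virt-v2v flags via a global setting, and anything outside it is rejected. The class, method and setting format here are hypothetical illustrations, not the actual CloudStack code.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ExtraParamsValidator {
    // Hypothetical global setting value: comma-separated flags the admin permits.
    private final Set<String> allowedFlags;

    public ExtraParamsValidator(String allowedSetting) {
        allowedFlags = Arrays.stream(allowedSetting.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toSet());
    }

    /** Returns the params unchanged, or throws if a flag is not allow-listed. */
    public List<String> validate(List<String> requestedParams) {
        for (String param : requestedParams) {
            // Only option flags are checked; values (no leading '-') ride along.
            if (param.startsWith("-") && !allowedFlags.contains(param)) {
                throw new IllegalArgumentException(
                        "virt-v2v extra param not allowed by administrator: " + param);
            }
        }
        return requestedParams;
    }

    public static void main(String[] args) {
        ExtraParamsValidator v = new ExtraParamsValidator("--mac, -oo, --compressed");
        System.out.println(v.validate(List.of("--compressed"))); // passes
        try {
            v.validate(List.of("--network", "br0"));             // rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```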
* API: Add support to list all snapshot policies & backup schedules
* Add support for backup policy listing without tying it to the vmid
* Add tests for snapshot policy listing
* Update tests for listbackupschedules
* Remove trailing spaces and fix lint failure
* Add upgrade test
* Remove unused import
* Add create policy (snapshot/backup) in the list view with resource (volume/VM) selection
* Add translations
* Refresh parent list
* Remove unnecessary alert info
* Fix checks for the UI backup schedule list view
* Add back access checks
* Add since param
* Fix failing test
* Update snapshot policy and backup schedule ownership when a VM is moved
* Fix issue with showing VM selection
* Fix unit test failure
* Update the snapshot policy & backup schedule listing logic to list only entries that belong to the caller's project (or, for the root admin, those that belong to it), unless listall and projectid are passed (see the sketch below)
* Fix test
* Support snapshot/backup policy search using a keyword
* Fix tests
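A small sketch of the listing rule described above: callers see only policies/schedules in their own project, the root admin sees only its own, and listall together with a project ID widens the view to that project. The types and field names are hypothetical stand-ins for the real DAO-level logic.

```java
import java.util.List;
import java.util.stream.Collectors;

public class PolicyListScope {
    // Hypothetical minimal stand-ins for CloudStack's caller and policy types.
    record Caller(long accountId, Long projectId, boolean isRootAdmin) {}
    record Policy(long id, long accountId, Long projectId) {}

    /** Applies the visibility rule described above to an in-memory list. */
    static List<Policy> list(Caller caller, List<Policy> all,
                             boolean listAll, Long requestedProjectId) {
        if (listAll && requestedProjectId != null) {
            // Explicit listall + projectid widens the view to that project.
            return all.stream()
                    .filter(p -> requestedProjectId.equals(p.projectId()))
                    .collect(Collectors.toList());
        }
        if (caller.isRootAdmin()) {
            // By default the root admin sees only entries that belong to it.
            return all.stream()
                    .filter(p -> p.accountId() == caller.accountId())
                    .collect(Collectors.toList());
        }
        // Regular callers see only entries in their own project.
        return all.stream()
                .filter(p -> caller.projectId() != null
                        && caller.projectId().equals(p.projectId()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Caller user = new Caller(7L, 42L, false);
        List<Policy> all = List.of(new Policy(1, 7, 42L), new Policy(2, 8, 99L));
        System.out.println(list(user, all, false, null)); // only policy 1
    }
}
```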
* Migrate volume improvements: bypass secondary storage when a volume can be copied directly between pools (see the sketch below)
* Bypass secondary storage when copying a volume between zone-wide pools and
- local storage on a host in the same zone
- cluster-wide pools in the same zone
* Bypass secondary storage for volumes on a Ceph/RBD pool when the scope permits
* Fix the destination disk format when migrating a volume from Ceph/RBD to NFS, and some code improvements
* Add unit tests
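A rough sketch of the direct-copy decision the bullets above describe, assuming simplified pool descriptors; the real data-motion strategy involves more checks (host connectivity, hypervisor support), so treat this purely as an illustration.

```java
public class DirectCopyCheck {
    enum Scope { ZONE, CLUSTER, HOST }

    // Hypothetical minimal pool descriptor; the real CloudStack types differ.
    record Pool(Scope scope, long zoneId, String poolType) {}

    /**
     * Decide whether a volume copy may bypass secondary storage, per the cases
     * above: a zone-wide pool paired with another zone-wide, cluster-wide or
     * host-local pool in the same zone, or Ceph/RBD when the scope permits.
     */
    static boolean canBypassSecondaryStorage(Pool src, Pool dest) {
        if (src.zoneId() != dest.zoneId()) {
            return false; // direct copy is only considered within one zone
        }
        if (src.scope() == Scope.ZONE || dest.scope() == Scope.ZONE) {
            return true;  // a zone-wide pool is reachable by the other end
        }
        // Ceph/RBD pools can copy directly between themselves.
        return "RBD".equalsIgnoreCase(src.poolType())
                && "RBD".equalsIgnoreCase(dest.poolType());
    }

    public static void main(String[] args) {
        Pool zoneWide = new Pool(Scope.ZONE, 1, "NetworkFilesystem");
        Pool hostLocal = new Pool(Scope.HOST, 1, "Filesystem");
        System.out.println(canBypassSecondaryStorage(zoneWide, hostLocal)); // true
    }
}
```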
* Update suitable disk offering(s) for volume(s) after migrating a VM with volumes when the pool type changes (shared or local)
Currently, migrating a VM with volume(s) bypasses the service and disk offerings of the volumes, because the target pools for migration are specified explicitly, which ignores the offerings. An offering change is required when the pool type (shared or local) changes, mainly in two cases (see the sketch after this list):
- when a volume on a shared pool is migrated to a local pool
- when a volume on a local pool is migrated to a shared pool
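A minimal sketch of the shared/local reconciliation described above: keep the current disk offering when the target pool's locality matches it, otherwise pick a candidate offering with the matching locality. All types here are hypothetical stand-ins.

```java
import java.util.List;

public class OfferingPoolTypeCheck {
    // Hypothetical stand-ins: pool and disk offering only track locality here.
    record Pool(boolean localStorage) {}
    record DiskOffering(String name, boolean useLocalStorage) {}

    /**
     * After migrating a VM with volumes, keep the current offering if the
     * target pool's locality (shared vs local) still matches it; otherwise
     * pick a candidate offering with the matching locality.
     */
    static DiskOffering reconcile(DiskOffering current, Pool target,
                                  List<DiskOffering> candidates) {
        if (current.useLocalStorage() == target.localStorage()) {
            return current; // pool type unchanged, offering stays valid
        }
        return candidates.stream()
                .filter(o -> o.useLocalStorage() == target.localStorage())
                .findFirst()
                .orElseThrow(() -> new IllegalStateException(
                        "No suitable " + (target.localStorage() ? "local" : "shared")
                        + "-storage disk offering for the migrated volume"));
    }

    public static void main(String[] args) {
        DiskOffering shared = new DiskOffering("shared-thin", false);
        DiskOffering local = new DiskOffering("local-fast", true);
        System.out.println(reconcile(shared, new Pool(true), List.of(local)).name());
    }
}
```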
* Show a proper message when migrating a volume and the target pool and offering type mismatch (one is shared, the other local)
* Consider host scope first during endpoint selection when copying between primary storages
* Update the disk offering count (for the listDiskOfferings API) when removing offerings whose tags mismatch the storage tags
* Return details of the storage pool in the response, including the URL, and update capacityBytes and capacityIops if applicable when creating a storage pool
* Add a capacitybytes parameter to the storage pool response, in sync with the capacityiops response parameter and the createStoragePool cmd request parameter (the existing disksizetotal parameter in the storage pool response can be deprecated)
* Don't keep the URL in details
* Persist capacityBytes and capacityIops in the storage_pool_details table when creating a storage pool as well, for consistency, as these are also updated during storage pool update
* Rebase with main fixes
* Add UUID field for LDAP configuration
* Move DB changes to the latest schema file
* Add an ID param to the list ldapConf and delete ldapConf APIs
* Fix UI test
* Fix one UI test
* Fix test
* Fix API description
---------
Co-authored-by: dahn <daan@onecht.net>
* DRaaS initial changes
* Add an option to enable disaster recovery on a backup repository; add the UpdateBackupRepositoryCmd API
* Add a timeout for the mount operation in backup restore, configurable via a global setting
* Address review comments
* Fix simulator test failures
* Add unit tests for coverage
* Fix the create-instance-from-backup UI for other providers
* Add events for adding/updating a backup repository
* Fix race in fetchZones
* One more fix in fetchZones in DeployVMFromBackup.vue
* Fix zone selection in createNetwork via the Create Instance from Backup form
* Allow template/ISO selection in the create-instance-from-backup UI
* Rename draasenabled to crosszoneinstancecreation
* Add cross-zone instance creation to test_backup_recovery_nas.py
* Add unit tests in BackupManagerTest and UserVmManagerImplTest
* Add an integration test for cross-zone instance creation in test_backup_recovery_nas.py
* Remove allocated snapshots / VM snapshots on start
* Check and clean up snapshots / VM snapshots on management server start
* Rebase fixes
* Update the volume state (from Snapshotting) on management server start when its snapshot job did not finish and the snapshot is still in the Creating state
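A sketch of the start-up reconciliation the last few commits describe, using in-memory stand-ins for the DB rows: allocated snapshots are discarded, and a snapshot stuck in Creating with an unfinished job releases its volume from the Snapshotting state. The class and field names are illustrative only.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class SnapshotStartupCleanup {
    enum SnapshotState { ALLOCATED, CREATING, BACKED_UP }
    enum VolumeState { READY, SNAPSHOTTING }

    // Hypothetical in-memory stand-ins for the DB rows involved.
    static class Snapshot { long id; long volumeId; SnapshotState state; boolean jobFinished; }
    static class Volume { long id; VolumeState state; }

    /**
     * On management server start, discard snapshots stuck in Allocated and,
     * for snapshots stuck in Creating whose job never finished, move the
     * owning volume back from Snapshotting to Ready.
     */
    static void reconcile(List<Snapshot> snapshots, Map<Long, Volume> volumesById) {
        Iterator<Snapshot> it = snapshots.iterator();
        while (it.hasNext()) {
            Snapshot s = it.next();
            if (s.state == SnapshotState.ALLOCATED) {
                it.remove(); // never progressed; safe to discard
            } else if (s.state == SnapshotState.CREATING && !s.jobFinished) {
                Volume v = volumesById.get(s.volumeId);
                if (v != null && v.state == VolumeState.SNAPSHOTTING) {
                    v.state = VolumeState.READY; // release the volume
                }
                it.remove(); // the half-created snapshot is cleaned up too
            }
        }
    }

    public static void main(String[] args) {
        Volume vol = new Volume(); vol.id = 1; vol.state = VolumeState.SNAPSHOTTING;
        Snapshot snap = new Snapshot(); snap.volumeId = 1;
        snap.state = SnapshotState.CREATING; snap.jobFinished = false;
        List<Snapshot> snaps = new ArrayList<>(List.of(snap));
        reconcile(snaps, Map.of(1L, vol));
        System.out.println(vol.state + ", snapshots left: " + snaps.size()); // READY, 0
    }
}
```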
* [routers] Distinguish between fatal failures and warnings or unknown results on health checks
* UI status for router health checks
* Handle varied status values returned by scripts
* Handle errors signalled by automation
* Revert removal of update SQL
* Bump the upgrade version
* Move the config item and further cleanup
* Handle services better
* Make the response backwards compatible
---------
Co-authored-by: Daan Hoogland <dahn@apache.org>
* Find system VM templates for CKS cluster honouring the preferred architecture
* Fix unit tests
* Fix checkstyle
* Sort instead of filtering by preferred arch
* Remove unnecessary stubs
* Restore Java version
* Address review comments
* Fail and display an error message in case the CKS ISO arch doesn't match the selected template arch
* Prefer CKS ISO arch instead of the system VM setting
This feature adds the ability to create a new instance from a VM backup for the dummy, NAS and Veeam backup providers. It works even if the original instance used to create the backup was expunged or unmanaged. There are two parts to this functionality:
* Saving all configuration details that the VM had at the time the backup was taken, and using them to create an instance from the backup (see the sketch below).
* Enabling a user to expunge/unmanage an instance that has backups.
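A minimal sketch of the first part, assuming a simple key/value metadata map rather than CloudStack's actual backup schema: capture the instance configuration at backup time and compare the guest OS type on restore, as the UI warning commits above do.

```java
import java.util.HashMap;
import java.util.Map;

public class BackupVmMetadata {
    /**
     * Capture the instance configuration at backup time so an instance can be
     * recreated later even if the original was expunged or unmanaged. The keys
     * are illustrative, not CloudStack's actual backup metadata schema.
     */
    static Map<String, String> captureVmDetails(String serviceOffering, String template,
                                                String guestOsType, int nicCount) {
        Map<String, String> details = new HashMap<>();
        details.put("serviceofferingid", serviceOffering);
        details.put("templateid", template);
        details.put("guestostype", guestOsType); // compared on restore, see below
        details.put("niccount", String.valueOf(nicCount));
        return details;
    }

    /** Warn, as the form does, when the chosen guest OS type differs from the backup's. */
    static void checkGuestOsType(Map<String, String> backupDetails, String chosenGuestOs) {
        String original = backupDetails.get("guestostype");
        if (original != null && !original.equalsIgnoreCase(chosenGuestOs)) {
            System.out.println("Warning: guest OS type '" + chosenGuestOs
                    + "' differs from the backed-up instance's '" + original + "'");
        }
    }

    public static void main(String[] args) {
        Map<String, String> details = captureVmDetails("small", "ubuntu-22.04", "Ubuntu", 1);
        checkGuestOsType(details, "Debian"); // prints the mismatch warning
    }
}
```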
This PR allows attaching GPU devices to an Instance via PCI passthrough, mdev or VF for KVM. It allows the operator to discover the GPU devices on a KVM host and create a Compute Offering with GPU support based on the GPU devices available on the host. Once the operator has created the Compute Offering, users can use it to launch Instances with GPU devices (see the sketch below).
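A rough sketch of how the deploy-time matching could look, under the assumption of a simple host-side GPU inventory: pick a free device of the kind and model the Compute Offering asks for. The enum, record and method names are hypothetical, not the PR's actual API.

```java
import java.util.List;

public class GpuDeviceMatcher {
    // The device kinds the PR supports for KVM.
    enum GpuKind { PCI, MDEV, VF }

    // Hypothetical host-side inventory entry, e.g. as discovered by the agent.
    record GpuDevice(String busAddress, GpuKind kind, String model, boolean allocated) {}

    /**
     * Find a free host GPU device matching what the Compute Offering asks for:
     * discover devices, define an offering, then match at deploy time.
     */
    static GpuDevice allocate(List<GpuDevice> hostDevices, GpuKind wantedKind, String wantedModel) {
        return hostDevices.stream()
                .filter(d -> !d.allocated())
                .filter(d -> d.kind() == wantedKind)
                .filter(d -> d.model().equalsIgnoreCase(wantedModel))
                .findFirst()
                .orElse(null); // caller falls back to another host
    }

    public static void main(String[] args) {
        List<GpuDevice> host = List.of(
                new GpuDevice("0000:3b:00.0", GpuKind.PCI, "NVIDIA A40", false),
                new GpuDevice("0000:3b:00.4", GpuKind.VF, "NVIDIA A40", true));
        System.out.println(allocate(host, GpuKind.PCI, "NVIDIA A40"));
    }
}
```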