* Add source VM name on virt-v2v migration log entries
* Improve the feedback by displaying the running importing tasks
* Add source VM name prefix on more conversion logs
* Improve listing and also list completed tasks
* Pass extra parameters to virt-v2v if administrator allows via global setting
* Add Force converting directly to storage pool option
* Refactor based on review comments
* Add properties for env vars for the instance conversion
* Add separate component for Import VM Tasks
* applying copilot suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fix importing unmanaged instances due to incorrect internal name
* Add VM prefix on each log operation for conversion
* Log the original VM name instead of the cloned VM in case of cloning
* Allow searching storage pool by UUID after conversion to support SharedMountPoint
* Fix search pools logic
* Improve UI and add checks for force convert to pool parameter
* Support Local storage when forceconverttopool is set to true
* Add config key for allowed extra params and add validation
* Fix params lists
* Fix compile error
* Remove extra stubbings
* Fix extra params execution
---------
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* API: Add support to list all snapshot policies & backup schedules
* Add support for backup policy listing without tying it to the vmid
* add tests for snapshot policy listing
* update tests for listbackupschedules
* remove trailing spaces and fix lint failure
* Add upgrade test
* remove unused import
* add create policy - snap/backup in the list view with resource (volume/vm) selection
* add translations
* refresh parent list
* remove unnecessary alert info
* fix checks for UI backup schedule list view
* fix checks for UI backup schedule list view
* add back access checks
* add since param
* fix failing test
* update snapshot policy and backup schedule ownership when VM is moved
* fix issue with showing vm selection
* fix unit test failure
* Update list snapshot policy & backup schedule logic to list only the entries that belong to the caller's project, or for root admin only its own, unless listall & projectid are passed (see the sketch after this list)
* fix test
* support snap / backup policy search using keyword
* fix tests
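As a rough CloudMonkey sketch of the listing behaviour described above (parameter names follow the commit messages; the UUID is a placeholder):
```
# List all snapshot policies visible to the caller, no longer tied to a volumeid
cmk list snapshotpolicies listall=true

# Root admin listing a specific project's policies (otherwise only its own are returned)
cmk list snapshotpolicies listall=true projectid=<project-uuid>

# Keyword search for snapshot policies, as added above
cmk list snapshotpolicies keyword=daily
```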
* Migrate volume improvements, to bypass secondary storage when copying a volume directly between pools is allowed
* Bypass secondary storage when copying a volume between zone-wide pools and
- local storage on a host in the same zone
- cluster-wide pools in the same zone
* Bypass secondary storage for volumes on a ceph/rbd pool when the scope permits
* Fix dest disk format while migrating volume from ceph/rbd to nfs, and some code improvements
* unit tests
* Update suitable disk offering(s) for volume(s) after migrating a VM with volumes when the pool type (shared or local) changes (see the sketch after this list)
Currently, migrating a VM with volume(s) bypasses the service and disk offerings of the volumes, as the target pools for migration are specified explicitly,
which ignores the offerings. An offering change is required when the pool type (shared or local) changes, mainly
- when a volume on a shared pool is migrated to a local pool
- when a volume on a local pool is migrated to a shared pool
* Show a proper message when migrating a volume and the target pool and offering type mismatch (one is shared and the other local)
* Consider host scope first during endpoint selection while copying between primary storages
* Update disk offering count (for listDiskOfferings api) while removing offerings with tags mismatch with storage tags
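A rough sketch of the migration call this offering update applies to, assuming the standard migrateVirtualMachineWithVolume map syntax (all identifiers are placeholders):
```
# Migrate a VM and map its volume from a shared pool to a local pool; after
# this change a suitable local-storage disk offering is assigned to the volume
# instead of silently keeping the shared-storage offering.
cmk migrate virtualmachinewithvolume virtualmachineid=<vm-uuid> hostid=<dest-host-uuid> \
    migrateto[0].volume=<volume-uuid> migrateto[0].pool=<local-pool-uuid>
```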
* storage: change storage pool to Up state when cancelling storage migration
* Update 11773: connect host to shared pool after cancelling storage migration
* Update 11773: update db only
* Update 11773: skip capacity update for storpool
* Return details of the storage pool in the response including url, and update capacityBytes and capacityIops if applicable while creating storage pool
* Added capacitybytes parameter to the storage pool response, in sync with the capacityiops response parameter and the createStoragePool request parameter (the existing disksizetotal response parameter can be deprecated); see the sketch after this list
* Don't keep url in details
* Persist the capacityBytes and capacityIops in the storage_pool_details table while creating the storage pool as well, for consistency - as these are updated during update storage pool
* rebase with main fixes
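A hedged sketch of the request side, based on the commit messages above (the UUIDs and URL are placeholders; only capacitybytes/capacityiops are the parameters this change touches):
```
# Create a cluster-scope NFS pool and seed its capacity figures; capacitybytes
# and capacityiops are persisted in storage_pool_details and returned in the
# storage pool response (placeholder identifiers, not a tested command).
cmk create storagepool zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<cluster-uuid> \
    name=primary-nfs url=nfs://10.0.0.10/export/primary \
    capacitybytes=10995116277760 capacityiops=50000
```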
This PR adds support for specifying user data (cloud-init) for system VMs via zone-scoped global settings. This allows operators to customize the system VMs and set up monitoring, logging, or execute any custom commands.
We set the user data from the global setting in /var/cache/cloud/cmdline, and use the NoCloud datasource to process it. The cloud-init service is still disabled in the system VMs; cloud-init is instead run as part of the cloud-postinit service, which executes the postinit.sh script.
Added global settings:
systemvm.userdata.enabled - Disabled by default. Needs to be enabled to utilize the feature.
console.proxy.vm.userdata - UUID of the User data to be used for Console Proxy
secstorage.vm.userdata - UUID of the User data to be used for Secondary Storage VM
virtual.router.userdata - UUID of the User data to be used for Virtual Routers
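As a minimal sketch, the feature could be wired up per zone along these lines, assuming a user data entry has already been registered and its UUID is at hand (all UUIDs are placeholders):
```
# Enable system VM user data for a zone, then point each system VM type at a
# registered user data UUID (zone-scoped settings, per the description above).
cmk update configuration name=systemvm.userdata.enabled value=true zoneid=<zone-uuid>
cmk update configuration name=console.proxy.vm.userdata value=<userdata-uuid> zoneid=<zone-uuid>
cmk update configuration name=secstorage.vm.userdata value=<userdata-uuid> zoneid=<zone-uuid>
cmk update configuration name=virtual.router.userdata value=<userdata-uuid> zoneid=<zone-uuid>
```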
CS creates a transient KVM domain XML. When an instance is unmanaged from CS, an explicit dump of the domain has to be taken to manage it outside of CS.
With this PR:
- The domain XML gets backed up and becomes persistent for further management of the instance.
- A stopped instance can also be unmanaged; the last host of the instance is considered for defining the domain.
- The hostid param is supported in the unmanageVirtualMachine API for the KVM hypervisor and for stopped instances.
- A hostid field is added to the unmanageVirtualMachine response, representing the host used for the unmanage operation.
- Unmanaging an instance with a config drive is disabled; it can be unmanaged from the API using the forced=true param for KVM.
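A minimal sketch of the new parameters (UUIDs are placeholders):
```
# Unmanage a stopped KVM instance, pinning the host used to define and persist
# the domain XML via the new hostid parameter.
cmk unmanage virtualmachine id=<vm-uuid> hostid=<host-uuid>

# Instances with a config drive are blocked by default; forced=true overrides
# this for KVM, as described above.
cmk unmanage virtualmachine id=<vm-uuid> forced=true
```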
* Add UUID field for LDAP configuration
* move db changes to the latest schema file
* Add ID param to list ldapConf API & delete ldapConf API
* fix ui test
* fix 1 ui test
* fix test
* fix api description
---------
Co-authored-by: dahn <daan@onecht.net>
This PR introduces console access support for instances deployed using Orchestrator Extensions, available via either VNC or a direct URL.
- CloudStack queries the extension using the getconsole action.
- For VNC-based access, the extension must return host/port/ticket details. CloudStack then forwards these to the Console Proxy VM (CPVM) in the instance’s zone. It is assumed that the CPVM can reach the specified host and port.
- For direct URL access, the extension returns a console URL with the protocol set to `direct`. The URL is then provided directly to the user.
- The built-in Proxmox Orchestrator Extension now supports console access via VNC. The extension calls the Proxmox API to fetch console details and returns them in the required format.
It also adds changes to send caller details in the extension payload:
```
# cat /var/lib/cloudstack/management/extensions/Proxmox/02b650f6-bb98-49cb-8cac-82b7a78f43a2.json | jq
{
  "caller": {
    "roleid": "6b86674b-7e61-11f0-ba77-1e00c8000158",
    "rolename": "Root Admin",
    "name": "admin",
    "roletype": "Admin",
    "id": "93567ed9-7e61-11f0-ba77-1e00c8000158",
    "type": "ADMIN"
  },
  "virtualmachineid": "126f4562-1f0f-4313-875e-6150cabeb72f",
  ...
```
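For context, a VNC-style getconsole result from an extension might look roughly like the sketch below; the field names are only illustrative of the host/port/ticket details mentioned above, so refer to the documentation PR for the actual contract:
```
{
  "protocol": "vnc",
  "host": "<hypervisor-host-or-proxy>",
  "port": 5900,
  "ticket": "<one-time-ticket>"
}
```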
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/560
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* draas initial changes
* Added option to enable disaster recovery on a backup repository. Added UpdateBackupRepositoryCmd api (see the sketch after this list).
* Added timeout for mount operation in backup restore configurable via global setting
* Addressed review comments
* fix for simulator test failures
* Added UT for coverage
* Fix create instance from backup ui for other providers
* Added events to add/update backup repository
* Fix race in fetchZones
* One more fix in fetchZones in DeployVMFromBackup.vue
* Fix zone selection in createNetwork via Create Instance from backup form.
* Allow template/iso selection in create instance from backup ui
* rename draasenabled to crosszoneinstancecreation
* Added Cross-zone instance creation in test_backup_recovery_nas.py
* Added UT in BackupManagerTest and UserVmManagerImplTest
* Integration test added for Cross-zone instance creation in test_backup_recovery_nas.py
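A hedged sketch of the new repository flag, assuming UpdateBackupRepositoryCmd is exposed as updateBackupRepository (the parameter name follows the rename commit above; the UUID is a placeholder):
```
# Allow instances to be created from this repository's backups in other zones;
# the flag was renamed from draasenabled to crosszoneinstancecreation.
cmk update backuprepository id=<repository-uuid> crosszoneinstancecreation=true
```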
* Remove allocated snapshots / vm snapshots on start
* Check and Cleanup snapshots / vm snapshots on MS start
* rebase fixes
* Update volume state (from Snapshotting) on MS start when its snapshot job has not finished and the snapshot is in Creating state
* [routers] distinction between fatal failure and warning or unknown on health checks
* UI status for router health checks
* status from scripts varied
* automation signalled errors
* revert removal of update sql
* upgradeversion
* move config item and further cleanup
* handling services better
* backwards compatible response
---------
Co-authored-by: Daan Hoogland <dahn@apache.org>
* ScaleIO/PowerFlex smoke tests improvements, and some fixes
* Fix test_volumes.py, encrypted volume size check (for powerflex volumes)
* Fix test_over_provisioning.py (over provisioning supported for powerflex)
* Update vm snapshot tests
* Update volume size delta in primary storage resource count for user vm volumes only
The VR volumes resource count for PowerFlex volumes was also being updated here, resulting in a resource count discrepancy
(the count is later re-calculated through ResourceCountCheckTask, which skips the VR volumes)
* Fix test_import_unmanage_volumes.py (unsupported for powerflex)
* Fix test_sharedfs_lifecycle.py (volume size check for powerflex)
* Update powerflex.connect.on.demand config default to true
* Fix creating a template from a snapshot in another zone
When a snapshot has a copy on StorPool primary storage in another zone, but the original snapshot resides on secondary storage, creating a template from the copied snapshot results in the template being created in the first zone.
If the snapshot.backup.to.secondary setting is disabled, and a user creates a volume or template from a snapshot, the snapshot is temporarily backed up to secondary storage during the operation. After the operation, this backup should be deleted. However, the snapshot currently remains on both primary and secondary storage.
* update snapshot info depending on the data store role