* storage: change storage pool to Up state when cancelling storage migration
* Update 11773: connect host to shared pool after cancelling storage migration
* Update 11773: update db only
* Update 11773: skip capacity update for storpool
* Return details of the storage pool in the response, including url, and update capacityBytes and capacityIops if applicable while creating the storage pool
* Added a capacitybytes parameter to the storage pool response, in sync with the capacityiops response parameter and the createStoragePool cmd request parameter (the existing disksizetotal parameter in the storage pool response can be deprecated)
* Don't keep url in details
* Persist the capacityBytes and capacityIops in the storage_pool_details table while creating the storage pool as well, for consistency, as these are updated during update storage pool (see the sketch after this list)
* rebase with main fixes
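As a hedged sketch of how the request and response parameters line up after these changes, assuming the CloudMonkey (cmk) CLI; parameter names follow the commits above, while the UUIDs, URL, and sizes are placeholders rather than values from this PR:
```
# Illustrative sketch only; UUIDs, URL, and sizes are placeholders.
# createStoragePool accepts capacitybytes/capacityiops, and the pool response
# now carries capacitybytes alongside capacityiops (and the pool url).
cmk create storagepool zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<cluster-uuid> \
    name=primary-1 url=nfs://10.0.32.10/export/primary \
    capacitybytes=1099511627776 capacityiops=50000
# Trimmed response:
# { "storagepool": { "name": "primary-1", "url": "nfs://10.0.32.10/export/primary",
#     "capacitybytes": 1099511627776, "capacityiops": 50000, ... } }
```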
This PR adds support for specifying user data (cloud-init) for system VMs via zone-scoped global settings. This allows operators to customize the system VMs and set up monitoring, logging, or execute custom commands.
We set the user data from the global setting in /var/cache/cloud/cmdline and use the NoCloud datasource to process it. The cloud-init service is still disabled in the system VMs; the user data is executed as part of the cloud-postinit service, which runs the postinit.sh script. A usage sketch follows the settings list below.
Added global settings:
systemvm.userdata.enabled - Disabled by default. Needs to be enabled to utilize the feature.
console.proxy.vm.userdata - UUID of the User data to be used for Console Proxy
secstorage.vm.userdata - UUID of the User data to be used for Secondary Storage VM
virtual.router.userdata - UUID of the User data to be used for Virtual Routers
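As a rough usage sketch, assuming the CloudMonkey (cmk) CLI: registerUserData and updateConfiguration are existing CloudStack APIs, while the cloud-config content, names, and UUIDs below are illustrative placeholders, not part of this PR.
```
# Illustrative sketch only; cloud-config content, names, and UUIDs are placeholders.
cat > monitoring.yml <<'EOF'
#cloud-config
packages:
  - htop
runcmd:
  - systemctl restart rsyslog
EOF

# Register the user data (registerUserData expects base64-encoded content).
cmk register userdata name=systemvm-monitoring userdata=$(base64 -w0 monitoring.yml)

# Enable the feature and point the CPVM setting at the registered user data UUID
# (the per-VM-type settings are zone-scoped, hence zoneid).
cmk update configuration name=systemvm.userdata.enabled value=true
cmk update configuration name=console.proxy.vm.userdata value=<userdata-uuid> zoneid=<zone-uuid>
```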
CS creates a transient KVM domain XML. When an Instance is unmanaged from CS, an explicit dump of the domain has to be taken to manage it outside of CS.
With this PR:
- the domain XML gets backed up and becomes persistent for further management of the Instance (see the sketch after this list)
- a stopped Instance can also be unmanaged; the last host of the Instance is used for defining the domain
- the hostid param is supported in the unmanageVirtualMachine API for the KVM hypervisor and for stopped Instances
- a hostid field is added to the unmanageVirtualMachine response, representing the host used for the unmanage operation
- unmanaging an Instance with a config drive is disabled; for KVM it can still be unmanaged from the API using the forced=true param
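Conceptually, what the PR automates corresponds to a libvirt dump-and-define of the transient domain. Below is a hedged sketch of the equivalent manual steps plus the extended API call, assuming the CloudMonkey (cmk) CLI; the domain name and UUIDs are placeholders:
```
# Illustrative sketch only; the domain name and UUIDs are placeholders.
# Roughly what persisting the transient domain means in libvirt terms:
virsh dumpxml i-2-10-VM > /root/i-2-10-VM.xml
virsh define /root/i-2-10-VM.xml

# Unmanaging a stopped Instance, specifying the host and forcing past the
# config-drive restriction on KVM:
cmk unmanage virtualmachine id=<vm-uuid> hostid=<host-uuid> forced=true
```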
* Add UUID field for LDAP configuration
* move db changes to the latest schema file
* Add ID param to list ldapConf API & delete ldapConf API (see the sketch after this list)
* fix ui test
* fix 1 ui test
* fix test
* fix api description
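As a small hedged example of the new ID parameter (listLdapConfigurations/deleteLdapConfiguration are the underlying APIs, assuming the CloudMonkey CLI; the UUID is a placeholder):
```
# Illustrative sketch only; the UUID is a placeholder.
# List a specific LDAP configuration by its new ID, then delete it by ID.
cmk list ldapconfigurations id=<ldap-configuration-uuid>
cmk delete ldapconfiguration id=<ldap-configuration-uuid>
```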
---------
Co-authored-by: dahn <daan@onecht.net>
This PR introduces console access support for instances deployed using Orchestrator Extensions, available via either VNC or a direct URL.
- CloudStack queries the extension using the getconsole action.
- For VNC-based access, the extension must return host/port/ticket details (a sketch of such a result follows this list). CloudStack then forwards these to the Console Proxy VM (CPVM) in the instance’s zone. It is assumed that the CPVM can reach the specified host and port.
- For direct URL access, the extension returns a console URL with the protocol set to `direct`. The URL is then provided directly to the user.
- The built-in Proxmox Orchestrator Extension now supports console access via VNC. The extension calls the Proxmox API to fetch console details and returns them in the required format.
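For illustration only, a VNC-style result from the getconsole action could take roughly this shape; the field names below are assumptions for the sketch, not the published extension contract:
```
{
  "host": "10.2.35.21",
  "port": 5900,
  "ticket": "<vnc-ticket>",
  "protocol": "vnc"
}
```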
Also, this adds changes to include caller details in the extension payload, as shown below:
```
# cat /var/lib/cloudstack/management/extensions/Proxmox/02b650f6-bb98-49cb-8cac-82b7a78f43a2.json | jq
{
"caller": {
"roleid": "6b86674b-7e61-11f0-ba77-1e00c8000158",
"rolename": "Root Admin",
"name": "admin",
"roletype": "Admin",
"id": "93567ed9-7e61-11f0-ba77-1e00c8000158",
"type": "ADMIN"
},
"virtualmachineid": "126f4562-1f0f-4313-875e-6150cabeb72f",
...
```
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/560
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* draas initial changes
* Added option to enable disaster recovery on a backup repository. Added UpdateBackupRepositoryCmd api (see the sketch after this list).
* Added timeout for mount operation in backup restore configurable via global setting
* Addressed review comments
* fix for simulator test failures
* Added UT for coverage
* Fix create instance from backup ui for other providers
* Added events to add/update backup repository
* Fix race in fetchZones
* One more fix in fetchZones in DeployVMFromBackup.vue
* Fix zone selection in createNetwork via Create Instance from backup form.
* Allow template/iso selection in create instance from backup ui
* rename draasenabled to crosszoneinstancecreation
* Added Cross-zone instance creation in test_backup_recovery_nas.py
* Added UT in BackupManagerTest and UserVmManagerImplTest
* Integration test added for Cross-zone instance creation in test_backup_recovery_nas.py
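As a hedged sketch of setting the renamed flag on a repository (updateBackupRepository per the UpdateBackupRepositoryCmd above, assuming the CloudMonkey CLI; the UUID is a placeholder and the parameter name follows the rename commit):
```
# Illustrative sketch only; the UUID is a placeholder.
# Enable cross-zone instance creation on an existing backup repository.
cmk update backuprepository id=<repository-uuid> crosszoneinstancecreation=true
```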
* Remove allocated snapshots / vm snapshots on start
* Check and Cleanup snapshots / vm snapshots on MS start
* rebase fixes
* Update volume state (from Snapshotting) on MS start when its snapshot job has not finished and the snapshot is in Creating state
* [routers] distinction between fatal failure and warning or unknown on healthchecks (see the usage sketch after this list)
* UI status for router health checks
* status from scripts varied
* automation signalled errors
* revert removal of update sql
* upgradeversion
* move config item and further cleanup
* handling services better
* backwards compatible response
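As a hedged usage note, the health check results (with the fatal-failure vs. warning distinction above) can be inspected through the existing getRouterHealthCheckResults API, assuming the CloudMonkey CLI; the router UUID below is a placeholder:
```
# Illustrative sketch only; the router UUID is a placeholder.
# Fetch health check results for a virtual router, optionally forcing fresh checks.
cmk get routerhealthcheckresults routerid=<router-uuid> performfreshchecks=true
```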
---------
Co-authored-by: Daan Hoogland <dahn@apache.org>
* ScaleIO/PowerFlex smoke test improvements, and some fixes
* Fix test_volumes.py, encrypted volume size check (for powerflex volumes)
* Fix test_over_provisioning.py (over provisioning supported for powerflex)
* Update vm snapshot tests
* Update volume size delta in primary storage resource count for user VM volumes only
The VR volumes resource count for PowerFlex volumes was also being updated here, resulting in a resource count discrepancy
(the count is re-calculated through ResourceCountCheckTask later, which skips the VR volumes)
* Fix test_import_unmanage_volumes.py (unsupported for powerflex)
* Fix test_sharedfs_lifecycle.py (volume size check for powerflex)
* Update powerflex.connect.on.demand config default to true
* Fix creating a template from a snapshot in another zone
When a snapshot has a copy on StorPool primary storage in another zone, but the original snapshot resides on secondary storage, creating a template from the copied snapshot results in the template being created in the first zone.
If the snapshot.backup.to.secondary setting is disabled, and a user creates a volume or template from a snapshot, the snapshot is temporarily backed up to secondary storage during the operation. After the operation, this backup should be deleted. However, the snapshot currently remains on both primary and secondary storage.
* update snapshot info depending on the data store role