* ui: support bulk action for various resources
* Bulk actions support - progress view
* Extract common code + suppress error notification with bulk actions
* cleanup + suppress notification
* add progress view
* Add routes to notification + add async jobs + refactor progress view
* minor tweaks
* fix group action for vpn users
* Refactor code
* Unique row key
* remove redundant cols
* address comments
* Added the following:
1. Make Cancel the default button for bulk actions
2. Add a filter on the Operation Status column in the progress view
3. For stop and delete bulk operations, add an alert message (in red) to inform users that it is a destructive operation
* Add dynamism to column filtering
This PR introduces new granularity levels to configure VM dynamic scalability. Previously, a VM was configured to be dynamically scalable based only on the template and a global setting. This option can now also be configured at the service offering and VM levels.
A VM can dynamically scale only when the flag is ON at the VM, template, service offering and global setting levels. If any of these flags is set to false, the VM cannot scale. The resulting value is persisted in the DB for each VM and honoured for that VM until it is updated.
We are introducing the 'dynamicscalingenabled' parameter with permitted values of true or false for the deployVirtualMachine and createServiceOffering APIs.
Following are the API parameter changes:
createServiceOffering API:
dynamicscalingenabled: an optional parameter of type Boolean with default value "true".
deployVirtualMachine API:
dynamicscalingenabled: an optional parameter of type Boolean with default value "true".
Following are the UI changes:
Service offering creation has ON/OFF switch for dynamic scaling enabled with default value true
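For illustration, a minimal sketch of passing the new parameter through the API, using the third-party Python "cs" client; the endpoint, keys and IDs are placeholders:

    # Illustrative sketch only: uses the third-party "cs" Python client
    # (pip install cs); endpoint, keys and all IDs are placeholders.
    from cs import CloudStack

    api = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                     key="API_KEY", secret="SECRET_KEY")

    # Service offering that permits dynamic scaling (the default is true).
    offering = api.createServiceOffering(
        name="scalable-offering", displaytext="Dynamic scaling allowed",
        cpunumber=2, cpuspeed=1000, memory=2048,
        dynamicscalingenabled=True)

    # VM-level flag: the VM can scale only when the flag is true at the
    # VM, template, service offering and global-setting levels.
    vm = api.deployVirtualMachine(
        zoneid="ZONE_ID", templateid="TEMPLATE_ID",
        serviceofferingid=offering["serviceoffering"]["id"],
        dynamicscalingenabled=True)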
Fixes: #4990
When a VM associated with a backup offering is destroyed/expunged, the backup offering isn't unassigned, and backup usage is generated even though the VM has no backups present. This PR prevents usage record generation when there are no backups present for a VM with a backup offering associated to it, by ensuring that the usage event for backups is generated only when the backup size > 0.
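A minimal sketch of that guard; the function and field names below are hypothetical stand-ins, not CloudStack's actual Java usage code:

    # Hypothetical sketch of the guard; publish_usage_event and the
    # backup object are illustrative stand-ins for the Java usage code.
    def publish_backup_usage(backup, publish_usage_event):
        # Emit a backup usage event only when real backup data exists; a VM
        # with an offering assigned but no backups generates no usage.
        if backup.size and backup.size > 0:
            publish_usage_event("BACKUP.USAGE", backup)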
Fixes: #4808, #4941
This PR adds a force flag to the attachIso / detachIso commands, primarily for VMware, where detaching an ISO, or attaching an ISO when another one is already present, fails to perform the necessary operation because ACS either answers the ESXi question for the CDRom disconnect operation with No (for detach) or does not answer the question at all (for attach).
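For illustration, how the flag could be passed (third-party Python "cs" client; the boolean parameter name is assumed here to be 'forced', and IDs are placeholders):

    # Illustrative only: third-party "cs" Python client; the parameter
    # name "forced" is an assumption, and the IDs are placeholders.
    from cs import CloudStack

    api = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                     key="API_KEY", secret="SECRET_KEY")

    # Force-detach an ISO that VMware would otherwise refuse to release.
    api.detachIso(virtualmachineid="VM_ID", forced=True)

    # Force-attach a new ISO even though another one is already mounted.
    api.attachIso(id="ISO_ID", virtualmachineid="VM_ID", forced=True)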
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* Updated libvirt's native reboot operation for VMs on KVM to use an ACPI event, and added a 'forced' reboot option that stops and then starts the VM (using the rebootVirtualMachine API)
* Added 'forced' reboot option for System VM and Router
- New parameter 'forced' in rebootSystemVm API, to stop and then start System VM
- New parameter 'forced' in rebootRouter API, to force stop and then start Router
* Added force reboot tests for User VM, System VM and Router
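For illustration, the new 'forced' parameter on the three reboot APIs (third-party Python "cs" client; IDs are placeholders):

    # Illustrative only: third-party "cs" Python client; IDs are placeholders.
    from cs import CloudStack

    api = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                     key="API_KEY", secret="SECRET_KEY")

    api.rebootVirtualMachine(id="VM_ID", forced=True)   # stop, then start the VM
    api.rebootSystemVm(id="SYSTEM_VM_ID", forced=True)  # force stop, then start
    api.rebootRouter(id="ROUTER_ID", forced=True)       # force stop, then start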
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ
Documentation PR: apache/cloudstack-documentation#169
This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack
Other improvements addressed in addition to PowerFlex/ScaleIO support:
- Added support for config drives in host cache for KVM
=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path; config drives are created under the "/config" directory on the host cache path
=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
- Detect the virtual size from the template URL while registering direct download qcow2 (KVM hypervisor) templates (a header-parsing sketch follows this list)
- Updated full deployment destination for preparing the network(s) on VM start
- Propagate the direct download certificates uploaded to the newly added KVM hosts
- Discover the template size for direct download templates using any available host from the zones specified on template registration
=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones
- Release the VM resources when a VM is synced to the Stopped state on PowerReportMissing (after the grace period)
- Retry VM deployment/start when the host cannot grant access to volume/template
- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist on the datastore and cannot be deleted from it
- Check whether the router filesystem is writable before performing health checks
=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are on different partitions, so test both locations
=> Added new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable
- Fixed an NPE where the template is null for DATA disks: copy the template to target storage for the ROOT disk (with template id) and skip DATA disk(s)
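As a rough illustration of the qcow2 virtual-size detection mentioned in this list (CloudStack's real implementation is in Java; this sketch only shows the header layout): the virtual size is a big-endian 64-bit field at offset 24 of the qcow2 header, so a ranged HTTP request over the template URL is enough.

    # Sketch: read the qcow2 virtual size from a template URL. The real
    # CloudStack code is Java; this only illustrates the header layout.
    import struct
    import urllib.request

    def qcow2_virtual_size(url):
        req = urllib.request.Request(url, headers={"Range": "bytes=0-31"})
        header = urllib.request.urlopen(req).read(32)
        magic, version = struct.unpack(">II", header[:8])
        assert magic == 0x514649FB, "not a qcow2 image"
        # Virtual size: big-endian u64 at offset 24 of the qcow2 header.
        return struct.unpack(">Q", header[24:32])[0]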
* Addressed some issues for a few operations on PowerFlex storage pool.
- Updated the volume migration operation to sync the status and wait for the migration to complete.
- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.
- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).
- Updated resize volume error message string.
- Blocked the below operations on PowerFlex storage pool:
-> Extract Volume
-> Create Snapshot for VMSnapshot
* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.
- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
Other fixes included:
- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)
- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.
- Perform basic tests (for connectivity and file system) on router before updating the health check config data
=> Validate the basic tests (connectivity and file system check) on router
=> Cleanup the health check results when router is destroyed
* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool
- Minor improvements in PowerFlex/ScaleIO storage plugin code
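For illustration, the kind of pool URL the UI could compose from those inputs; the exact URL layout shown is an assumption, not a documented format:

    # Sketch of composing the PowerFlex pool URL from the UI inputs
    # (Gateway, Username, Password, Storage Pool); the URL layout shown
    # here is an assumption for illustration only.
    from urllib.parse import quote

    def powerflex_pool_url(gateway, username, password, storage_pool):
        # URL-encode credentials so special characters survive the URL.
        return "powerflex://{}:{}@{}/{}".format(
            quote(username, safe=""), quote(password, safe=""),
            gateway, quote(storage_pool, safe=""))

    # e.g. powerflex://admin:s3cret@10.1.1.2:443/pool1
    print(powerflex_pool_url("10.1.1.2:443", "admin", "s3cret", "pool1"))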
* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
- The findStoragePoolsForMigration API returns PowerFlex pool(s) of a different instance as suitable pool(s) for volume(s) on a PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to a different PowerFlex instance.
- Volume(s) of a running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported.
* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.
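A sketch of how a client could consume the new flag (third-party Python "cs" client; CloudStack JSON response keys are lowercase, hence "supportsstoragesnapshot"):

    # Illustrative only: third-party "cs" Python client; the volume ID is
    # a placeholder. Response keys are lowercase in CloudStack JSON.
    from cs import CloudStack

    api = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                     key="API_KEY", secret="SECRET_KEY")

    volume = api.listVolumes(id="VOLUME_ID")["volume"][0]
    if volume.get("supportsstoragesnapshot"):
        # Storage-side snapshot support: the UI hides the async backup
        # option for such volumes when taking a snapshot.
        api.createSnapshot(volumeid=volume["id"])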
* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration
* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure
* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
- Fixes inter-cluster migration of VMs
- Allows migration of stopped VM with disks attached to different and suitable pools
- Improves inter-cluster detached volume migration
- Allows inter-cluster migration (between clusters of the same Pod) for system VMs and VRs on VMware
- Allows storage migration for stopped system VMs and VRs on VMware within the same Pod, if the StoragePool's scope type is cluster
Linked Primate PR: https://github.com/apache/cloudstack-primate/pull/789 [Changes merged in this PR after new UI merge]
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/170
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* compute: add button and modal `take VM volume snapshot`
* add Font Awesome camera-retro icon plugin
* modified to use a component
* fix for quiescevm
* add quiescevm to api params
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Fixing instance view
* Changing from IP to SSH port
* Fixing html tags in text
* Adding messages to kube actions
* Removing redundant code
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Revert "Not relying on erroneous count returned by findHostsForMigration (#774)"
This reverts commit c6624403556df286f1b61e9d34f948a089af8326.
* Fixing host count for migratevm
* Sorting based on suitability
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Language name fix
* Adding more translations on the dashboard
* New instance wizard
* Instance details view
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes:
- Fixing scale router
- Fixing account actions
- Fixing user actions
- Adding message for create vm backup
- Fix default allowuserdrivenbackups in ImportBackupOfferings
- Fix typo in TakeSnapshot
- Ensuring zone mandatory in upload template
- Adding securitygroup to instance tab
- Adding related vms to routers
- Adding makeredundant to restart network
- Fixing no key in listview
- Link to ipaddress only if router path is publicip
- Show vpc routers only to admin
- Fix restartVPC args
- Fix storage action visibility
- Reorder routes to match legacy
- Reorder cluster tabs
- Fix number input width
- Fix create vpc
- List events also on fetchlatest
- Fix show domain actions
- Removing resource admin from default roles
- Fix missing store
- Adding createVPC view
- Adding attachiso view
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes:
- Don't allow users in UI to delete/archive events
- Fix button name in VM deployment form in its network section to Create new network
- Refresh after template / iso upload
- Making external-id mandatory in ImportBackupOffering
- Fixing visibility of assignVirtualMachineToBackupOffering
- Removing link on traffic label
- Defensive check in TrafficTypesTab
- Ensuring we get the user info so that store.getters.user is never empty when the page is freshly loaded
- Changing from report bug to report issue
- Ordering projects in menu
- Changing router get health check results
- Show configureHAForHost based on hypervisor
- Fix scale and migrate router
- Fix scale and migrate systemvm
- Fix show actionbutton for assignVirtualMachineToBackupOffering
- Fix show actionbutton for stopKubernetesCluster
- Fix show actions for volumes
- Fix show actions for snapshots
- Fix show actions for vm snapshots
- Fix show actions for backups
- Adding loading for tags and annotations
- Enter to submit advanced search
- Fixing: show project instead of account when projectaccount is passed in the account field
- Show project name instead of displaytext
- Fixing template and iso actions
- Fixing tags with projectid
- Fix security groups ingress/egress rules view
- Removing redundant allocationstate from zones
- Adding managedstate to clusters
- Adding capacity tab to clusters and pods
- Adding routerlink to events in dashboard
- Set autofocus to username in login
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The API response leaks account and domain information, which for templates
and ISOs may be considered an information leak. This change at least limits
that in the list views for templates, ISOs and a few other views, for
accounts of role type User.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This adds a new search view component that allows users to do a custom
search, using a popover component, for the vm, storage, network, image,
event, project and router views.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>