* Added an API argument validator for RFC-compliant domain names, used to validate a VM's host name (see the sketch below this list)
* Added unit tests for vm host/domain name validation
* Don't propagate the SQL exception/query from the DAO to upper layers; log it and return only the error message
* Updated the charset of user resources' name/displayname(/text) columns to utf8mb4 to support emojis and other Unicode characters
* Check and update the charset of the affinity group name column to utf8mb4, via the data migration in the upgrade path
* Added a smoke test to check resource names for VM, volume, service & disk offering, template, ISO, and account (first/last name)
* Updated resource annotation charset to utf8mb4
* Updated the charset of some resources' description columns to utf8mb4
* Updated SQL statement to use a constant
* Updated the column charset modifications to use an idempotent procedure
* Removed delimiter (for creating procedures)
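
As a rough illustration of the host name validation added above (not the actual CloudStack validator), an RFC 1123-style check could look like the sketch below; the class and method names are hypothetical:

```java
import java.util.regex.Pattern;

// Minimal sketch of an RFC 1123-style host/domain name check (hypothetical class,
// not CloudStack's actual validator). Each label is 1-63 characters of letters,
// digits and hyphens, must not start or end with a hyphen, and the full name is
// limited to 253 characters.
public final class HostNameValidator {

    private static final Pattern LABEL =
            Pattern.compile("^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$");

    public static boolean isValidHostName(String name) {
        if (name == null || name.isEmpty() || name.length() > 253) {
            return false;
        }
        for (String label : name.split("\\.", -1)) {
            if (!LABEL.matcher(label).matches()) {
                return false;
            }
        }
        return true;
    }
}
```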
Fixes domain-admin access check to prevent unauthorized access.
Introduces a new non-dynamic global setting, api.allow.internal.db.ids, to control whether internal DB IDs may be used as API parameters. The default value of the setting is false.
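
As a purely illustrative sketch (the resolver class and its names below are assumptions, not the actual CloudStack code), an API layer honouring such a flag might look like this:

```java
// Illustrative sketch only: honouring an "api.allow.internal.db.ids"-style flag
// before accepting a raw numeric DB id as an API parameter. Names and structure
// are assumptions, not the actual CloudStack implementation.
public class EntityIdResolver {

    // Read once from global settings; the setting is non-dynamic, default false.
    private final boolean allowInternalDbIds;

    public EntityIdResolver(boolean allowInternalDbIds) {
        this.allowInternalDbIds = allowInternalDbIds;
    }

    // UUIDs are always accepted; a raw numeric DB id is accepted only when allowed.
    public String resolve(String idParam) {
        boolean looksLikeInternalId = idParam != null && idParam.matches("\\d+");
        if (looksLikeInternalId && !allowInternalDbIds) {
            throw new IllegalArgumentException(
                    "Internal DB ids are not allowed as API parameters (api.allow.internal.db.ids=false)");
        }
        return idParam;
    }
}
```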
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* test: improve purge expunged resources b/g task testcase
Failures were seen during purging of expunged resources via the background task in different test runs. This PR makes sure the background task execution is not skipped after a management server restart in a multi-MS environment. It also updates expunged.resources.purge.interval to allow running the task again in 60s.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* new string formatting
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* test_primary_storage_scope should only run with kvm, vmware and simulator
* move cluster create and storage pool create from setup to test so that they are cleaned up in case of failure
* fixed lint failure
* using super class' tearDown
* New feature: Change storage pool scope
* Added checks for Ceph/RBD
* Update op_host_capacity table on primary storage scope change
* Storage pool scope change integration test
* pull 8875 : Addressed review comments
* Pull 8875: remove storage checks, AbstractPrimaryStorageLifeCycleImpl class
* Pull 8875: Fixed integration test failure
* Pull 8875: Review comments
* Pull 8875: review comments + broke changeStoragePoolScope into smaller functions
* Added UT for changeStoragePoolScope
* Rename AbstractPrimaryDataStoreLifeCycleImpl to BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Dao review comments
* Pull 8875: Rename changeStoragePoolScope.vue to ChangeStoragePoolScope.vue
* Pull 8875: Created a new smokes test file + A single warning msg in ui
* Pull 8875: Added cleanup in test_primary_storage_scope.py
* Pull 8875: Typo in en.json
* Pull 8875: cleanup array in test_primary_storage_scope.py
* Pull 8875: Removed extra whitespace at EOF of StorageManagerImplTest
* Pull 8875: Added UT for PrimaryDataStoreHelper and BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Added license header
* Pull 8875: Fixed SQL query for VM states
* Pull 8875: Changed icon plus info on disabled mode in apidoc
* Pull 8875: Change scope should not work for local storage
* Pull 8875: Change scope completion event
* Pull 8875: Added api findAffectedVmsForStorageScopeChange
* Pull 8875: Added UT for findAffectedVmsForStorageScopeChange and removed listByPoolIdVMStatesNotInCluster
* Pull 8875: Review comments + Vm name in response
* Pull 8875: listByVmsNotInClusterUsingPool was returning duplicate VM entries because of multiple volumes in the VM satisfying the criteria
* Pull 8875: fixed listAffectedVmsForStorageScopeChange UT
* listAffectedVmsForStorageScopeChange should work if the pool is not disabled
* Fix listAffectedVmsForStorageScopeChangeTest UT
* Pull 8875: add volume.removed not null check in VmsNotInClusterUsingPool query
* Pull 8875: minor refactoring in changeStoragePoolScopeToCluster
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
* fix eof
* changeStoragePoolScopeToZone should connect the pool to all Up hosts (see the sketch below)
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
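
A simplified sketch of the zone-scope path described in the commits above: the pool must be disabled first, and moving to ZONE scope connects it to every Up host in the zone. The interfaces below are hypothetical stand-ins, not the real StorageManagerImpl types:

```java
import java.util.List;

// Simplified sketch of the cluster -> zone scope change described above.
// StoragePool, Host, HostDao etc. are hypothetical stand-ins, not the real
// CloudStack types; the actual logic lives in StorageManagerImpl.
class StoragePoolScopeChanger {

    interface Host { boolean isUp(); }
    interface StoragePool { boolean isDisabled(); void setScope(String scope); }
    interface HostDao { List<Host> listByZone(long zoneId); }
    interface PoolHostConnector { void connect(StoragePool pool, Host host); }

    private final HostDao hostDao;
    private final PoolHostConnector connector;

    StoragePoolScopeChanger(HostDao hostDao, PoolHostConnector connector) {
        this.hostDao = hostDao;
        this.connector = connector;
    }

    // The pool must be disabled before its scope can change; it is then
    // connected to every Up host in the zone.
    void changeScopeToZone(StoragePool pool, long zoneId) {
        if (!pool.isDisabled()) {
            throw new IllegalStateException("Storage pool must be disabled before changing its scope");
        }
        pool.setScope("ZONE");
        for (Host host : hostDao.listByZone(zoneId)) {
            if (host.isUp()) {
                connector.connect(pool, host);
            }
        }
    }
}
```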
* Ability to specify NFS mount options while adding a primary storage and to modify them later (see the sketch after this list)
* Pull 8947: Rename all occurrence of nfsopt to nfsMountOpt and added nfsMountOpts to ApiConstants
* Pull 8947: Refactor code - move into separate methods
* Pull 8947: CollectionUtils.isNotEmpty and switch statement in LibvirtStoragePoolDef.java
* Pull 8947: UI - cancel maintenance will remount the storage pool and apply the options
* Pull 8947: UI - moved edit NFS mount options to edit Primary Storage form
* Pull 8947: UI - moved 'NFS Mount Options' to below 'Type' in dataview
* Pull 8947: Fixed message in AddPrimaryStorage.vue
* Pull 8947: Convert _nfsmountOpts to a Set in LibvirtStoragePoolDef
* Pull 8947: Throw exception and log error if mount fails due to incorrect mount option
* Pull 8947: Added UT and moved integration test to component/maint
* Pull 8947: Review comments
* Pull 8947: Removed password from integration test
* Pull 8947: Move details allocation inside the if block in getStoragePoolNFSMountOpts
* Pull 8947: Fixed a bug in AddPrimaryStorage.vue
* Pull 8947: Pool should remain in maintenance mode if mount fails
* Pull 8947: Removed password from integration test
* Pull 8947: Added UT
* Pull 8875: Fixed a bug in CloudStackPrimaryDataStoreLifeCycleImplTest
* Pull 8875: Fixed a bug in LibvirtStoragePoolDefTest
* Pull 8947: minor code restructuring
* Pull 8947: Added some UTs for coverage
* Fix LibvirtStorageAdapterTest UT
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/398
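
A rough sketch of how a set of NFS mount options could be folded into the mount invocation on the KVM host, modelled on the `_nfsmountOpts` Set mentioned above; the builder class below is hypothetical, not the actual LibvirtStoragePoolDef code:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Rough sketch: joining a set of NFS mount options into a mount command,
// modelled on the "_nfsmountOpts as a Set" change above. Hypothetical class,
// not the actual LibvirtStoragePoolDef code.
class NfsMountCommandBuilder {

    private final Set<String> nfsMountOpts = new LinkedHashSet<>();

    NfsMountCommandBuilder addOption(String opt) {
        nfsMountOpts.add(opt);
        return this;
    }

    // e.g. mount -t nfs -o vers=4.1,nconnect=4 10.1.1.2:/export/primary /mnt/pool-uuid
    String build(String host, String exportPath, String mountPoint) {
        StringBuilder cmd = new StringBuilder("mount -t nfs ");
        if (!nfsMountOpts.isEmpty()) {
            cmd.append("-o ").append(String.join(",", nfsMountOpts)).append(' ');
        }
        cmd.append(host).append(':').append(exportPath).append(' ').append(mountPoint);
        return cmd.toString();
    }
}
```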
Currently, an administrator can break host tag compatibility for a VM through certain operations:
* deploy/start VM on a specific host
* migrate VM
* restore VM
* scale VM
This PR allows the user to specify tags which must be checked during these operations.
Global Settings
1. `vm.strict.host.tags` - A comma-separated list of tags for strict host check (Default - empty)
2. `vm.strict.resource.limit.host.tag.check` - Determines whether the resource limit host tags are considered strict (Default - true)
During the above operations, we now check and throw an error if host tag compatibility would be broken for tags specified in `vm.strict.host.tags`. If `vm.strict.resource.limit.host.tag.check` is set to `true`, tags set in `resource.limit.host.tags` are also checked during these operations.
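
A compact sketch of the strict host tag check, under one plausible reading of the rule above (a strict tag present on the current host must also be present on the destination host); the helper below is illustrative, not the actual allocator code:

```java
import java.util.List;
import java.util.Set;

// Illustrative sketch of the strict host tag check. Assumption: for every tag
// named in vm.strict.host.tags, a tag present on the current host must also be
// present on the destination host. Not the actual CloudStack allocator code.
class StrictHostTagChecker {

    private final List<String> strictTags; // parsed from vm.strict.host.tags

    StrictHostTagChecker(List<String> strictTags) {
        this.strictTags = strictTags;
    }

    void checkCompatibility(Set<String> currentHostTags, Set<String> destinationHostTags) {
        for (String tag : strictTags) {
            if (currentHostTags.contains(tag) && !destinationHostTags.contains(tag)) {
                throw new IllegalStateException(
                        "Operation would break host tag compatibility for strict tag: " + tag);
            }
        }
    }
}
```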
This PR introduces the functionality of purging removed DB entries for CloudStack entities (currently only for VirtualMachine). There are three mechanisms for purging removed resources:
Background task - CloudStack runs a background task at a defined interval. Other parameters for this task can be controlled with new global settings.
API - New admin-only API purgeExpungedResources. It allows passing the following parameters: resourcetype, batchsize, startdate, enddate. Currently, the API is not supported in the UI.
Service offering config - Service offerings can be created with the purgeresources parameter, which allows purging resources immediately on expunge.
The following new global settings have been added:
expunged.resources.purge.enabled: Default: false. Whether to run a background task to purge the expunged resources
expunged.resources.purge.resources: Default: (empty). A comma-separated list of resource types that will be considered by the background task to purge the expunged resources. Currently only VirtualMachine is supported. An empty value results in all resource types being considered for purging.
expunged.resources.purge.interval: Default: 86400. Interval (in seconds) for the background task to purge the expunged resources
expunged.resources.purge.delay: Default: 300. Initial delay (in seconds) to start the background task to purge the expunged resources task.
expunged.resources.purge.batch.size: Default: 50. Batch size to be used during expunged resources purging.
expunged.resources.purge.start.time: Default: (empty). Start time to be used by the background task to purge the expunged resources. Use format yyyy-MM-dd or yyyy-MM-dd HH:mm:ss.
expunged.resources.purge.keep.past.days: Default: 30. Resources expunged within this many days before the background task's execution time are not purged. To purge expunged resources right up to the task's execution time, set the value to zero (see the sketch after this list).
expunged.resource.purge.job.delay: Default: 180. Delay (in seconds) before purging an expunged resource when purging is initiated by the offering configuration. The minimum value is 180 seconds; if a lower value is set, the minimum is used.
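
As a small illustration of how the background task's purge cutoff could follow from `expunged.resources.purge.keep.past.days` and how `expunged.resources.purge.start.time` could be parsed (names and structure below are illustrative assumptions, not the actual task code):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch of deriving the purge cutoff from the settings above; illustrative
// assumption, not the actual CloudStack background task code.
class PurgeWindowCalculator {

    // Resources expunged after this cutoff are kept. With keepPastDays = 30 and an
    // execution time of 2024-06-30T00:00, the cutoff is 2024-05-31T00:00; with 0,
    // everything expunged before the execution time is eligible for purging.
    static LocalDateTime purgeCutoff(LocalDateTime executionTime, int keepPastDays) {
        return executionTime.minusDays(keepPastDays);
    }

    // Parses expunged.resources.purge.start.time, which accepts the formats
    // yyyy-MM-dd or yyyy-MM-dd HH:mm:ss.
    static LocalDateTime parseStartTime(String startTimeSetting) {
        if (startTimeSetting == null || startTimeSetting.isEmpty()) {
            return null; // setting left empty
        }
        DateTimeFormatter full = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        return startTimeSetting.length() > 10
                ? LocalDateTime.parse(startTimeSetting, full)
                : LocalDate.parse(startTimeSetting).atStartOfDay();
    }
}
```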
Documentation PR: apache/cloudstack-documentation#397
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>