This fixes the following exception in the smoke test test_affinity_groups:
```
2024-08-19T08:34:15,132 ERROR [c.c.a.ApiAsyncJobDispatcher] (API-Job-Executor-87:[ctx-f7804a8e, job-9232]) (logid:b71ddec8) Unexpected exception while executing org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin com.cloud.utils.exception.CloudRuntimeException: Unable to find on DB, due to: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ') FOR UPDATE' at line 1
at com.cloud.utils.db.GenericDaoBase.searchIncludingRemoved(GenericDaoBase.java:441)
at com.cloud.utils.db.GenericDaoBase.searchIncludingRemoved(GenericDaoBase.java:368)
at com.cloud.utils.db.GenericDaoBase.search(GenericDaoBase.java:357)
at com.cloud.utils.db.GenericDaoBase.lockRows(GenericDaoBase.java:343)
at org.apache.cloudstack.affinity.dao.AffinityGroupDaoImpl.listByIds(AffinityGroupDaoImpl.java:171)
```
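The `') FOR UPDATE'` fragment in the error indicates the query was built with an empty `IN ()` list, i.e. `lockRows()` was reached with no IDs. The actual fix lives in the DAO layer; the standalone JDBC sketch below only illustrates the failure mode and the guard (table and method names are illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import java.util.StringJoiner;

// Building "SELECT ... WHERE id IN () FOR UPDATE" from an empty id list is
// the invalid SQL MySQL rejects in the log above, so short-circuit first.
final class LockRowsSketch {
    static List<Long> lockRows(Connection conn, List<Long> ids) throws Exception {
        List<Long> locked = new ArrayList<>();
        if (ids == null || ids.isEmpty()) {
            return locked; // the guard: never emit an empty IN clause
        }
        StringJoiner placeholders = new StringJoiner(",", "(", ")");
        ids.forEach(id -> placeholders.add("?"));
        String sql = "SELECT id FROM affinity_group WHERE id IN " + placeholders + " FOR UPDATE";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < ids.size(); i++) {
                ps.setLong(i + 1, ids.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    locked.add(rs.getLong(1));
                }
            }
        }
        return locked;
    }
}
```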
* refactor quotaTariffList process and add listOnlyRemoved parameter
* add unit tests for createQuotaTariffResponse and isUserAllowedToSeeActivationRules methods
* update QuotaTariffListCmdTest
* refactor quota tariffs creation
* refactor quota tariffs update
* fix unit test in JsInterpreter
* remove unused import
* refactor quota listing and add quota deletion
* add functionality to create tariff from UI, not working when specifying dates
* fix date parsing
* add labels
* fix details view of tariffs
* new update tariff view
* fix filter placeholder
* remove debug html
* add labels
* make the value field required when updating a tariff
* add labels
* add portuguese labels
* remove unused label
* fix updating tariff when there was no enddate specified
* refactor dates
* refactor dates
* clear code
* update disabled dates in date picker
* clear ListView component
* fix unnecessary updates when the new end date was equal to the existing end date
* fix when today was selected as the start date
* add keyword to filter
* change usage type response
* add keyword and usagetype filter on UI
* fix disabled end dates in date picker
* modify datepickers to use datetime
* small fixes
* make value an optional (not required) field on the update form
* remove duplicate import
* remove unused css classes
* add UI support for position parameter
* resize input fields to fill all available horizontal space
* remove console.log()
* remove unnecessary fully qualified names
* replace `usagetypeid` property name to `id` on `listUsageTypes` API call
If the libvirt mount point is still busy and can't be unmounted
right away, we wait 5 seconds and try a plain unmount,
without cleaning up the libvirt storage pool.
This left libvirt thinking the storage pool
was active and mounted (which it wasn't).
Now, after the plain unmount call,
the libvirt storage pool is also destroyed.
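A minimal sketch of that flow, assuming the libvirt Java bindings (org.libvirt) and a plain shell umount; names and error handling are illustrative, not the actual agent code:

```java
import java.io.IOException;

import org.libvirt.Connect;
import org.libvirt.LibvirtException;
import org.libvirt.StoragePool;

// Sketch: if normal teardown fails because the mount point is busy, wait
// 5 seconds, retry a plain unmount, then destroy the libvirt storage pool
// so libvirt no longer reports it as active and mounted.
public final class PoolTeardownSketch {
    public static void forceUnmountAndDestroy(Connect conn, String poolName, String mountPoint)
            throws LibvirtException, IOException, InterruptedException {
        StoragePool pool = conn.storagePoolLookupByName(poolName);
        try {
            pool.destroy(); // normal path: libvirt unmounts and deactivates the pool
        } catch (LibvirtException busy) {
            Thread.sleep(5000); // mount point still busy: wait 5 seconds
            new ProcessBuilder("umount", mountPoint).inheritIO().start().waitFor();
            pool.destroy(); // the fix: destroy the pool after the plain unmount too
        }
    }
}
```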
Administrators should ensure that the IDP configuration has a signing certificate so the actual signature check can be performed. In addition, this change introduces a new global setting, saml2.check.signature, with the default value of true, which deliberately fails a SAML login attempt when the SAML response is missing a signature.
Purges the SAML token upon handling the first SAML response.
Authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
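A minimal sketch of the check, assuming OpenSAML's Response type; the setting name saml2.check.signature comes from this entry, while the surrounding method is illustrative:

```java
import org.opensaml.saml2.core.Assertion;
import org.opensaml.saml2.core.Response;

// Sketch: when saml2.check.signature is true (the default), reject SAML
// responses that carry no signature at the response or assertion level.
public final class SamlSignatureCheckSketch {
    public static void requireSignature(Response response, boolean checkSignature) {
        if (!checkSignature) {
            return; // operator has deliberately relaxed the check
        }
        boolean signed = response.getSignature() != null;
        for (Assertion assertion : response.getAssertions()) {
            signed = signed || assertion.getSignature() != null;
        }
        if (!signed) {
            throw new SecurityException("SAML response is not signed, failing login attempt");
        }
    }
}
```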
* saml: purge token after first response and improve setting description
This improves the description of a saml signature checking global
setting, and purges the SAML token upon handling the first SAML
response.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix failing unit test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
---------
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This improves upon #9219, making the signature checks mandatory by
default while allowing users to relax the setting if they really must.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- mTLS implementation for cluster service communication
- Listen only on the specified cluster node IP address instead of all interfaces
- Validate that incoming cluster service requests are from peer management servers, based on the server's certificate DNS name, which can be set through the global config ca.framework.cert.management.custom.san
- Hardening of KVM command wrapper script execution
- Improve API server integration port check
- cloudstack-management.default: don't include JMX configuration if it is not needed. JMX is used for instrumentation; users who need it should enable it explicitly
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
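A minimal sketch of the peer-validation idea from the list above, using only the JDK certificate API; the config key ca.framework.cert.management.custom.san is from this entry, while the helper itself is illustrative:

```java
import java.security.cert.CertificateParsingException;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

// Sketch: accept a cluster-service request only when the peer management
// server's certificate carries the expected dNSName SAN (type 2) configured
// via ca.framework.cert.management.custom.san.
public final class PeerSanCheckSketch {
    public static boolean isTrustedPeer(X509Certificate peerCert, String expectedSan)
            throws CertificateParsingException {
        Collection<List<?>> sans = peerCert.getSubjectAlternativeNames();
        if (sans == null) {
            return false;
        }
        for (List<?> san : sans) {
            // each entry is [type, value]; type 2 is dNSName per RFC 5280
            if (Integer.valueOf(2).equals(san.get(0)) && expectedSan.equalsIgnoreCase((String) san.get(1))) {
                return true;
            }
        }
        return false;
    }
}
```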
The client.setBasePath() call would overwrite the Linstor controller IP/host
for all current client users. This is essentially a race condition
that was triggered as soon as you had configured 2 different primary storages
with different Linstor controllers.
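A hypothetical sketch of the fix direction: keep one immutable client per controller URL instead of mutating a shared client's base path. LinstorClient below is a stand-in for the generated API client, not the real type:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Instead of one shared, mutable client, keep an immutable client per
// configured controller so two primary storages with different Linstor
// controllers can no longer race on the base path.
public final class LinstorClientRegistrySketch {
    private final Map<String, LinstorClient> clientsByController = new ConcurrentHashMap<>();

    public LinstorClient clientFor(String controllerUrl) {
        // computeIfAbsent builds each controller's client once; no shared
        // base path is ever rewritten, so concurrent pools stay isolated.
        return clientsByController.computeIfAbsent(controllerUrl, LinstorClient::new);
    }

    /** Stand-in for the generated client type; holds its base path immutably. */
    public static final class LinstorClient {
        private final String basePath;
        public LinstorClient(String basePath) { this.basePath = basePath; }
        public String getBasePath() { return basePath; }
    }
}
```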
* New feature: Change storage pool scope
* Added checks for Ceph/RBD
* Update op_host_capacity table on primary storage scope change
* Storage pool scope change integration test
* Pull 8875: Addressed review comments
* Pull 8875: remove storage checks, AbstractPrimaryStorageLifeCycleImpl class
* Pull 8875: Fixed integration test failure
* Pull 8875: Review comments
* Pull 8875: review comments + broke changeStoragePoolScope into smaller functions
* Added UT for changeStoragePoolScope
* Rename AbstractPrimaryDataStoreLifeCycleImpl to BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Dao review comments
* Pull 8875: Rename changeStoragePoolScope.vue to ChangeStoragePoolScope.vue
* Pull 8875: Created a new smokes test file + A single warning msg in ui
* Pull 8875: Added cleanup in test_primary_storage_scope.py
* Pull 8875: Typo in en.json
* Pull 8875: cleanup array in test_primary_storage_scope.py
* Pull 8875: Remove extra whitespace at EOF of StorageManagerImplTest
* Pull 8875: Added UT for PrimaryDataStoreHelper and BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Added license header
* Pull 8875: Fixed sql query for vmstates
* Pull 8875: Changed icon plus info on disabled mode in apidoc
* Pull 8875: Change scope should not work for local storage
* Pull 8875: Change scope completion event
* Pull 8875: Added api findAffectedVmsForStorageScopeChange
* Pull 8875: Added UT for findAffectedVmsForStorageScopeChange and removed listByPoolIdVMStatesNotInCluster
* Pull 8875: Review comments + Vm name in response
* Pull 8875: listByVmsNotInClusterUsingPool was returning duplicate VM entries because of multiple volumes in the VM satisfying the criteria
* Pull 8875: fixed listAffectedVmsForStorageScopeChange UT
* listAffectedVmsForStorageScopeChange should work if the pool is not disabled
* Fix listAffectedVmsForStorageScopeChangeTest UT
* Pull 8875: add volume.removed not null check in VmsNotInClusterUsingPool query
* Pull 8875: minor refactoring in changeStoragePoolScopeToCluster
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
* fix eof
* changeStoragePoolScopeToZone should connect pool to all Up hosts
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
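A minimal sketch of the last item above (connecting a pool promoted to zone scope to every Up host); the types are simplified stand-ins for CloudStack's host/pool machinery:

```java
import java.util.List;

// Illustrative sketch: when a pool's scope changes to ZONE, walk every host
// in the zone that is Up and connect it to the now zone-wide pool.
interface HostLister {
    List<Host> listHostsInZone(long zoneId);
}

record Host(long id, boolean isUp) {}

final class ChangeScopeToZoneSketch {
    interface PoolConnector {
        void connectHostToPool(long hostId, long poolId);
    }

    static void connectPoolToUpHosts(long poolId, long zoneId, HostLister hosts, PoolConnector connector) {
        for (Host host : hosts.listHostsInZone(zoneId)) {
            if (host.isUp()) { // only Up hosts can mount the pool right away
                connector.connectHostToPool(host.id(), poolId);
            }
        }
    }
}
```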
* Mitigation for non-scalable Powerflex/ScaleIO clients
- Added ScaleIOSDCManager to manage SDC connections: check the clients limit, and prepare/unprepare the SDC on the hosts.
- Added commands to prepare and unprepare storage clients, which start and stop the SDC service respectively on the hosts.
- Introduced config 'storage.pool.connected.clients.limit' at the storage level for client limits, currently supported for Powerflex only (a minimal sketch of this gate follows after this list).
* tests issue fixed
* refactor / improvements
* lock with powerflex systemid while checking connections limit
* updated powerflex systemid lock to hold till sdc preparation
* Added custom stats support for storage pool, through listStoragePools API
* code improvements, and unit tests
* unit tests fixes
* Update config 'storage.pool.connected.clients.limit' to dynamic, and some improvements
* Stop SDC on host after migration if no volumes mapped to host
* Wait for SDC to connect after scini service start, and some log improvements
* Do not throw exception (log it) when SDC is not connected while revoking access for the powerflex volume
* some log improvements
* Restart agent when host comes out of maintenance
* Don't send CreateStoragePoolCommand to hosts in maintenance mode
* CreateStoragePoolCommand can run when the host is in maintenance. Reverted the change to restart the agent when the host was already up and in maintenance
* Reverted changes done to ResourceManagerImplTest
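The connected-clients gate mentioned above, as a hedged sketch: take a lock keyed by the Powerflex system id, compare the current SDC connection count against storage.pool.connected.clients.limit, and hold the lock until SDC preparation finishes. All names besides the setting are stand-ins:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the limit check: the per-systemId lock is held through SDC
// preparation so concurrent hosts cannot both slip under the limit.
final class SdcLimitGateSketch {
    private final Map<String, ReentrantLock> locksBySystemId = new ConcurrentHashMap<>();

    boolean tryPrepareSdc(String systemId, int connectedClients, int clientsLimit, Runnable prepareSdcOnHost) {
        ReentrantLock lock = locksBySystemId.computeIfAbsent(systemId, id -> new ReentrantLock());
        lock.lock(); // hold the systemId lock until SDC preparation finishes
        try {
            if (clientsLimit > 0 && connectedClients >= clientsLimit) {
                return false; // limit reached: do not prepare another client
            }
            prepareSdcOnHost.run();
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```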
* Create/Export OVA file of the VM on external vCenter host, to temporary conversion location (NFS)
* Fixed ova issue on untar/extract ovf from ova file
"tar -xf" cmd on ova fails with "ovf: Not found in archive" while extracting ovf file
* Updated VMware to KVM instance migration using OVA
* Refactoring and cleanup
* test fixes
* Consider zone wide pools in the destination cluster for instance conversion
* Remove local storage pool support as temporary conversion location
- OVA export is not possible as the pool is not accessible outside the host; NFS pools are supported.
* cleanup unused code
* some improvements, and refactoring
* import nic unit tests
* vmware guru unit tests
* Separate clone VM and create template file for VMware migration
- Exporting the OVA (of the cloned VM) to the conversion location takes time.
- Do any validations with the cloned VM before creating the template (and fail early).
- Updated unit tests.
* Check conversion support on host before clone vm / create template on vmware (and fail early)
* minor code improvements
* Auto select the host with instance conversion capability
* Skip instance conversion supported response param for non-KVM hosts
* Show supported conversion hosts in the UI
* Skip persistence map update if network doesn't exist
* Added support to export OVA from KVM host, through ovftool (when installed in KVM host)
* Updated importvm api param 'usemsforovaexport' to 'forcemstodownloadvmfiles', to be generic
* Updated hardcoded UI messages with message labels
* Updated UI to support importvm api param - forcemstodownloadvmfiles
* Improved instance conversion support checks on ubuntu hosts, and for windows guest vms
* Use OVF template (VM disks and spec files) for instance conversion from VMware, instead of OVA file
- this would further increase the migration performance (as it reduces the time for OVA preparation / archiving of the VM files into a single file)
* OVF export tool parallel threads code improvements
* Updated 'convert.vmware.instance.to.kvm.timeout' config default value to 3 hrs
* Config values check & code improvements
* Updated import log, with time taken and vm details
* Support for parallel downloads of VMware VM disk files while exporting OVF from the MS (sketched after this list), and other changes below.
- Skip clone for powered off VMs
- Fixes to support standalone host (with its default datacenter)
- Some code improvements
* rebase fixes
* rebase fixes
* minor improvement
* code improvements - threads configuration, and api parameter changes to import vm files
* typo fix in error msg
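A minimal sketch of the parallel download item above: fetch each VMware disk file on its own worker thread, bounded by a configurable thread count. The downloadDiskFile callback is a stand-in for the actual transfer logic:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Sketch: bounded thread pool downloading all disk files concurrently, then
// waiting for completion before the conversion proceeds.
final class ParallelDiskDownloadSketch {
    static void downloadAll(List<String> diskFileUrls, int threads, Consumer<String> downloadDiskFile)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, threads));
        try {
            for (String url : diskFileUrls) {
                pool.submit(() -> downloadDiskFile.accept(url));
            }
        } finally {
            pool.shutdown(); // wait for every disk to finish before continuing
            pool.awaitTermination(3, TimeUnit.HOURS); // cf. the 3 hr conversion timeout above
        }
    }
}
```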
* xenserver: attach regular iso with configdrive
Fixes #7902
This PR allows attaching a regular ISO to a VM when it already has the
config drive ISO attached.
The config-drive ISO is now attached with the SR name-label
<VM-NAME>-CONFIGDRIVE-ISO, while regular ISOs continue to attach with the SR
name-label <VM-NAME>-ISO. VMs which already had the config-drive ISO
attached before this fix will return an appropriate error and will need
a stop-start.
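A tiny sketch of the naming split described above, so both SRs can coexist on the same VM:

```java
// Config-drive ISOs attach under "<VM-NAME>-CONFIGDRIVE-ISO" while regular
// ISOs keep "<VM-NAME>-ISO", as described in this entry.
final class IsoSrLabelSketch {
    static String srNameLabel(String vmName, boolean isConfigDrive) {
        return isConfigDrive ? vmName + "-CONFIGDRIVE-ISO" : vmName + "-ISO";
    }
}
```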
* Veeam: find storage pool by path for PreSetup and VMFS
* Veeam: support VMware distributed virtual switch
* Veeam: sync volumes on Solidfire after backup restoration
Users faced an issue where the backup was restored but the DATA disk was gone (the ROOT disk was OK):
```
2024-05-03 12:00:32,868 ERROR [o.a.c.b.BackupManagerImpl] (API-Job-Executor-13:ctx-aa8a1d85 job-149661 ctx-73328567) (logid:6510cf06) Failed to import VM [vmInternalName: i-169-9679-VM] from backup restoration [{"backupType":"Full","externalId":"821ca400-a5da-4282-bf3f-7c7e38a6cdb4","id":257,"uuid":"69399101-5cbd-461c-8a48-f0c70eac0b24","vmId":9679}] with hypervisor [type: VMware] due to: [Couldn't find storage pool -iqn.2010-01.com.solidfire:3p53.data-9679.221-0].
```
On managed storage, the datastore name of the DATA disk is determined by the iscsi_name of the volume (see the sketch after this list).
* Veeam: set correct path for DATA disks on solidfire
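A hedged sketch of that naming rule, assuming the datastore name is the volume's iSCSI path with '/' swapped for '-' (which matches the name in the failed lookup above); the helper is illustrative:

```java
// Assumption: on managed storage the datastore name is derived from the
// volume's iSCSI path by a simple separator swap, e.g.
// "/iqn.2010-01.com.solidfire:3p53.data-9679.221/0"
//   -> "-iqn.2010-01.com.solidfire:3p53.data-9679.221-0"
final class ManagedDatastoreNameSketch {
    static String datastoreNameFromIscsiPath(String iscsiPath) {
        return iscsiPath.replace("/", "-");
    }

    public static void main(String[] args) {
        // prints the datastore name the log above was searching for
        System.out.println(datastoreNameFromIscsiPath("/iqn.2010-01.com.solidfire:3p53.data-9679.221/0"));
    }
}
```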
* Temporarily backup StorPool volume before expunge
Sometimes users delete volumes by mistake. This enhancement
provides a way to back up the volume before it is deleted. The user
will be able to see the snapshot in the CloudStack UI/CLI and create only a
volume from it.
A task will check (by default every 5 minutes) whether the snapshots have
been deleted from StorPool (a sketch of this task follows at the end of this entry).
Global settings to enable the delayed-delete option:
`storpool.delete.after.interval` - The interval (in seconds) after which the StorPool snapshot will be deleted
`storpool.list.snapshots.delete.after.interval` - The interval (in seconds) at which to fetch the StorPool snapshots with the deleteAfter flag
Minor fix when deleting snapshots
* added Apache licence
* addressed comments
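A minimal sketch of the periodic task described in this entry: every storpool.list.snapshots.delete.after.interval seconds, list the StorPool snapshots carrying a deleteAfter timestamp and expunge the expired ones. Snapshot and both callbacks are stand-ins for the StorPool plugin's API:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Sketch: scheduled cleanup that deletes snapshots whose delay has elapsed.
final class DelayedSnapshotCleanupSketch {
    record Snapshot(String name, long deleteAfterEpochSeconds) {}

    static ScheduledExecutorService start(long listIntervalSeconds,
                                          Supplier<List<Snapshot>> listSnapshotsWithDeleteAfter,
                                          Consumer<Snapshot> deleteSnapshot) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis() / 1000;
            for (Snapshot snapshot : listSnapshotsWithDeleteAfter.get()) {
                if (snapshot.deleteAfterEpochSeconds() <= now) {
                    deleteSnapshot.accept(snapshot); // delay elapsed: expunge for real
                }
            }
        }, listIntervalSeconds, listIntervalSeconds, TimeUnit.SECONDS);
        return scheduler;
    }
}
```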