Added capacity.calculate.workers config to control the number of threads
that can be used to calculate capacities.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
- Enables using worker threads for parallel connection of hosts to a
storage pool
- HostDaoImpl refactorings
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Mitigation for non-scalable Powerflex/ScaleIO clients
- Added ScaleIOSDCManager to manage SDC connections: it checks the clients limit and prepares/unprepares SDC on the hosts.
- Added commands to prepare and unprepare storage clients, which prepare/start and stop the SDC service respectively on the hosts.
- Introduced config 'storage.pool.connected.clients.limit' at the storage level for client limits, currently supported for PowerFlex only (a sketch of the gate follows this list).
* fixed test issues
* refactor / improvements
* lock on the PowerFlex system id while checking the connections limit
* updated the PowerFlex system id lock to hold until SDC preparation completes
* Added custom stats support for storage pool, through listStoragePools API
* code improvements, and unit tests
* Update config 'storage.pool.connected.clients.limit' to dynamic, and some improvements
* Stop SDC on host after migration if no volumes mapped to host
* Wait for SDC to connect after scini service start, and some log improvements
* Do not throw exception (log it) when SDC is not connected while revoking access for the powerflex volume
* some log improvements
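A minimal sketch of the client-limit gate described above (hypothetical names and semantics, not the actual ScaleIOSDCManager code):

public class SdcLimitGate {
    // Called while holding a lock on the PowerFlex system id, held until
    // SDC preparation completes (per the items above).
    static boolean canPrepareSdc(int connectedClients, int clientsLimit) {
        if (clientsLimit <= 0) {
            return true; // assumption: a non-positive limit means "no limit"
        }
        // 'storage.pool.connected.clients.limit' bounds concurrent SDC connections
        return connectedClients < clientsLimit;
    }
}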
* kvm: Add support for cgroupv2 (#8252)
1. Problem description
In Apache CloudStack (ACS), when a VM is deployed on a host with the KVM hypervisor, an XML file is created on the assigned host with a property, shares, that defines the weight of the VM for accessing the host's CPU. The value of this property has no unit; it is a relative measure used to calculate how much CPU a given VM will get on the host. However, this value has a limit, which depends on the cgroup version used by the host's kernel. The problem lies in the range of shares, which varies between the two versions: [2, 262144] for cgroups version 1, and [1, 10000] for cgroups version 2. Currently, ACS calculates the value of shares using Equation 1, presented below, where CPU is the number of cores and speed is the CPU frequency; both are specified in the VM's compute offering. Therefore, if a compute offering has, for example, 6 cores at 2 GHz, the shares value will be 12000, and libvirt will throw an exception if the host uses cgroup v2. The second version is becoming the default in current Linux distributions; thus, it is necessary to address this limitation.
Equation 1
shares = CPU * speed
Fixes: #6744
2. Proposed changes
To address the problem described, we propose applying a scale conversion that considers the max shares of the host. Using the same formula currently utilized by ACS, it is possible to calculate the maximum shares for a VM on a given host; in other words, the number of cores and the nominal speed of the host's CPU give the upper limit of shares allowed to a VM. This value is then scaled into the allowed cgroup v2 interval of [1, 10000] by a linear scale conversion.
The VM shares would be calculated as in Equation 2, presented below, where VM requested shares is the requested shares value calculated using Equation 1, cgroup upper limit is fixed at 10000 (the cgroups v2 upper limit), and host max shares is the maximum shares value of the host, also calculated using Equation 1. Using Equation 2, the only case where a VM exceeds the cgroup v2 limit is when the user requests more resources than the host has, which is not possible with the current implementation of ACS.
Equation 2
shares = (VM requested shares * cgroup upper limit)/host max shares
To implement the proposal, the following APIs will be updated: deployVirtualMachine, migrateVirtualMachine and scaleVirtualMachine. When a VM is being deployed, a new verification will be added to find a suitable host: the max shares of each host will be calculated, and it will be verified that the VM's calculated shares do not surpass the host's value. Likewise, VM migration will have a similar verification. Lastly, VM scaling will have the same verification for the VM's host.
To determine the max shares of a given host, we will use the same equation currently used in ACS for calculating the shares of VMs, presented in Section 1. When Equation 1 is used to determine the maximum shares of a host, CPU is the number of cores of the host, and speed is the nominal CPU speed, i.e., considering the CPU's base frequency.
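A minimal sketch of the two equations and the scale conversion (names are illustrative, not the actual ACS code; assumes host max shares > 0):

public class CgroupShares {
    static final int CGROUP_V2_UPPER_LIMIT = 10000;

    // Equation 1: shares = CPU * speed (cores and MHz)
    static int shares(int cpuCores, int cpuSpeedMhz) {
        return cpuCores * cpuSpeedMhz;
    }

    // Equation 2: linear scale conversion into the cgroup v2 interval
    static int scaledShares(int vmRequestedShares, int hostMaxShares) {
        return (int) ((long) vmRequestedShares * CGROUP_V2_UPPER_LIMIT / hostMaxShares);
    }

    public static void main(String[] args) {
        int vmShares = shares(6, 2000);  // 12000: exceeds the raw cgroup v2 limit
        int hostMax = shares(32, 2500);  // e.g. a host with 32 cores at 2.5 GHz
        System.out.println(scaledShares(vmShares, hostMax)); // 1500, within [1, 10000]
    }
}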
It is important to note that these changes are only for hosts with the KVM hypervisor using cgroup v2 for now.
* Update overcommit ratio during live VM migration
* minor refactoring
---------
Co-authored-by: Bryan Lima <42067040+BryanMLima@users.noreply.github.com>
This PR introduces the functionality of purging removed DB entries for CloudStack entities (currently only for VirtualMachine).
There are three mechanisms for purging removed resources:
- Background task - CloudStack runs a background task at a defined interval. Other parameters for this task can be controlled with new global settings.
- API - New API `purgeExpungedResources`. It will allow passing the following parameters - resourcetype, batchsize, startdate, enddate
- Config for service offering - Service offerings can be created with a purgeresources parameter which allows purging resources immediately on expunge.
Following new global settings have been added:
- `expunged.resources.purge.enabled`: Default: false. Whether to run a background task to purge the DB records of the expunged resources.
- `expunged.resources.purge.resources`: Default: (empty). A comma-separated list of resource types that will be considered by the background task to purge the DB records of the expunged resources. Currently only VirtualMachine is supported. An empty value will result in considering all resource types for purging.
- `expunged.resources.purge.interval`: Default: 86400. Interval (in seconds) for the background task to purge the DB records of the expunged resources.
- `expunged.resources.purge.delay`: Default: 300. Initial delay (in seconds) to start the background task to purge the DB records of the expunged resources task.
- `expunged.resources.purge.batch.size`: Default: 50. Batch size to be used during purging of the DB records of the expunged resources.
- `expunged.resources.purge.start.time`: Default: (empty). Start time to be used by the background task to purge the DB records of the expunged resources. Use format `yyyy-MM-dd` or `yyyy-MM-dd HH:mm:ss`.
- `expunged.resources.purge.keep.past.days`: Default: 30. The number of days in the past, counted from the execution time of the background task, for which the DB records of expunged resources must not be purged. To purge DB records of expunged resources right up to the execution of the background task, set the value to zero (illustrated in the sketch below).
- `expunged.resource.purge.job.delay`: Default: 180. Delay (in seconds) to execute the purging of the DB records of an expunged resource initiated by the configuration in the offering. The minimum value is 180 seconds; if a lower value is set, the minimum will be used.
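For instance, the cutoff implied by `expunged.resources.purge.keep.past.days` can be pictured as below (a minimal sketch with assumed names, not the actual implementation):

import java.util.Calendar;
import java.util.Date;

public class PurgeCutoff {
    // Expunged-resource records removed after this cutoff are kept;
    // older ones are eligible for purging by the background task.
    static Date purgeCutoff(Date taskExecutionTime, int keepPastDays) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(taskExecutionTime);
        cal.add(Calendar.DAY_OF_MONTH, -keepPastDays); // 0 purges right up to execution time
        return cal.getTime();
    }

    public static void main(String[] args) {
        System.out.println(purgeCutoff(new Date(), 30)); // default: keep the last 30 days
    }
}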
Upstream PRs:
https://github.com/apache/cloudstack/pull/8999
https://github.com/apache/cloudstack-documentation/pull/397
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
* Validate user data with actual length, and some code improvements
* Ignore if user data is not set (don't fail)
* Validate user data after finalizing it
* Updated registerUserData API to use a POST call from the UI, to support user data up to 1048576 bytes
* Apply suggestions from code review
* Added logs for user data
* Addressed review comments
* Check user data length with base64 encoded data, and some code improvements
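A rough sketch of the encoded-length validation noted above; the 1048576-byte limit comes from the registerUserData change, the rest is assumed:

import java.util.Base64;

public class UserDataCheck {
    static final int MAX_ENCODED_LENGTH = 1048576; // registerUserData POST limit above

    static void validate(String base64UserData) {
        if (base64UserData == null || base64UserData.isEmpty()) {
            return; // ignore if user data is not set (don't fail)
        }
        if (base64UserData.length() > MAX_ENCODED_LENGTH) {
            throw new IllegalArgumentException("User data exceeds " + MAX_ENCODED_LENGTH + " bytes");
        }
        Base64.getDecoder().decode(base64UserData); // must decode cleanly as base64
    }
}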
This refactors the ResourceManager::listAvailHypervisorInZone method,
which should return the unique hypervisors for which existing hosts are Up
and processed. We can approximate this by assuming that those hosts
would have set up their hypervisor-specific systemvm templates. In a given
environment there won't be thousands of systemvm templates, but there can
be thousands of hosts. So, instead of scanning the entire cloud.host
table, we can make a calculated guess by returning the unique hypervisors of
systemvm templates which are ready. This method is used in
::processConnect() when an agent joins, so this speeds up its handling.
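Conceptually, the approximation looks like this (a sketch with an assumed DAO shape, not the actual patch):

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class AvailHypervisors {
    // Unique hypervisor types of READY systemvm templates stand in for the
    // hypervisors of Up hosts, avoiding a scan of the large cloud.host table.
    static Set<String> listAvailHypervisorsInZone(List<String> readySystemVmTemplateHypervisors) {
        return new LinkedHashSet<>(readySystemVmTemplateHypervisors); // dedupe, keep order
    }
}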
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Add a global setting to control whether redirection is allowed while
downloading templates and volumes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
- Move allow.additional.vm.configuration.list.kvm from Global to Account setting
- Disallow VM details starting with "extraconfig" when deploying VMs
- Skip changes to VM details starting with "extraconfig" when updating VM settings
- Allow only extraconfig for DPDK in service offering details
- Check if extraconfig values in VM details are supported when starting VMs
- Check if extraconfig values in service offering details are supported when starting VMs
- Disallow adding/editing/updating VM settings for extraconfig in the UI
(cherry picked from commit e6e4fe16fb1ee428c3664b6b57384514e5a9252e)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Introduced a new API "checkVolumeAndRepair" that allows users or admins to check volumes and repair them if any leaks are observed.
Currently this is supported only for KVM
* some fixes
* Added unit tests
* addressed review comments
* add repair volume while granting access
* Changed repair parameter to accept both leaks/all values (a sketch follows this list)
* Introduced new global setting volume.check.and.repair.before.use to do volume check and repair before VM start or volume attach operations
* Added volume check and repair changes only during VM start and volume attach operations
* Refactored the names to look similar across the code
* Some code fixes
* remove unused code
* Renamed repair values
* Addressed review comments
* code refactored
* used volume name in logs
* Changed the API to Async and the setting scope to storage pool
* Fixed exit value handling with check volume command
* Fixed storage scope to the setting
* Fixed volume format issues
* Refactored the log messages
* Fix formatting
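On KVM, the leaks/all repair values line up with qemu-img's check mode; a sketch of what the host-side call could look like (the command construction is an assumption, not the actual agent code):

import java.io.IOException;

public class VolumeCheck {
    // repair is "leaks" or "all", matching the API's repair parameter values
    static int checkAndRepair(String volumePath, String repair)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder("qemu-img", "check", "-r", repair, volumePath)
                .redirectErrorStream(true)
                .start();
        return p.waitFor(); // exit value decides whether the volume is usable
    }
}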
* Fix host stuck in connecting state (#8502)
There are a lot of test failures from test_vm_life_cycle.py in multiple PRs, due to no host being available for migration of VMs.
#8438 (comment)
#8433 (comment)
#7344 (comment)
While debugging I noticed that the hosts get stuck in the Connecting state because the MS is waiting for a response to the ReadyCommand from the agent. Since we take a lock on connection and disconnection, restarting the agent doesn't work. To fix this, we have to restart the MS or wait for ~1 hour (the default timeout).
On the agent side, it gets stuck waiting for a response from the Script execution.
To reproduce, run smoke/test_vm_life_cycle.py (the TestSecuredVmMigration test class, to be specific). Once the tests are complete, you will notice that some hosts are stuck in the Connecting state, and restarting the agent fails due to the named lock. Locks in the DB can be checked using the query below.
SELECT *
FROM performance_schema.metadata_locks
INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID
WHERE PROCESSLIST_ID <> CONNECTION_ID()\G
This PR adds a wait for the ready command and a timeout for the Script execution, to ensure that the thread doesn't get stuck and the named lock in the database is released.
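The shape of the Script-timeout part of the fix, sketched in plain Java (CloudStack's own Script utility differs; this only illustrates bounding the wait so the named lock gets released):

import java.util.concurrent.TimeUnit;

public class BoundedScript {
    // Run an external script but give up after timeoutSeconds so the agent
    // thread (and the DB named lock behind it) cannot hang forever.
    static boolean runWithTimeout(ProcessBuilder pb, long timeoutSeconds) throws Exception {
        Process p = pb.start();
        if (!p.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
            p.destroyForcibly(); // timed out: kill the script instead of blocking
            return false;
        }
        return p.exitValue() == 0;
    }
}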
* Externalise a few timeouts & fix timeout for hostSupportsUefi in libvirt ready command wrapper (#8547)
This PR fixes a bug introduced in #8502: the timeout for script execution was set to 60 ms instead of 60 s, which resulted in hosts not getting UEFI enabled. This is a blocker for the 4.19 release.
We do this by introducing a new agent parameter `agent.script.timeout` (default - 60 seconds) to use as a timeout for the script checking host's UEFI status.
We also externalize the timeout for the ReadyCommand by introducing a new global setting `ready.command.wait` (default - 60 seconds).
For ModifyStoragePoolCommand, we don't externalize the timeout, to avoid confusing the user: the required timeout can vary depending on the provider in use, and we are only setting the wait for the default host listener for now. Instead, we reuse the global `wait` setting divided by 5, making the default value 6 minutes (1800/5 = 360s) for ModifyStoragePoolCommand.
Note: the actual time the MS waits is twice the wait set for a Command. Check the reference code below.
19250403e6/engine/orchestration/src/main/java/com/cloud/agent/manager/AgentAttache.java (L406-L442)
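Worked through with the defaults above (illustrative arithmetic only, using the global `wait` default of 1800 seconds):

public class TimeoutDefaults {
    public static void main(String[] args) {
        int globalWait = 1800;                    // global 'wait' setting, seconds
        int modifyPoolWait = globalWait / 5;      // 360s = 6 minutes for ModifyStoragePoolCommand
        int effectiveMsWait = 2 * modifyPoolWait; // the MS actually waits twice the command wait
        System.out.println(modifyPoolWait + "s wait -> up to " + effectiveMsWait + "s at the MS");
    }
}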
* fixup
Introduces the concept of tagged resource limits. Limits can be enforced on accounts and domains for the deployment of entities that use a tagged resource. Currently, tagged resource limits can be used for the following resource types:
- Host limits: user_vm, cpu, memory
- Storage limits: volume, primary_storage
The following global settings can be used to specify the tags for which limits need to be enforced:
- Host: resource.limit.host.tags
- Storage: resource.limit.storage.tags
Options for specifying tagged resource limits and viewing tagged resource usage are available in the UI (a conceptual sketch of the enforcement follows).
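Conceptually, enforcement works along these lines (hypothetical helper names, not the actual implementation):

import java.util.List;
import java.util.Map;
import java.util.Set;

public class TaggedLimitCheck {
    // Throws if the requested amount would exceed the tagged limit for any
    // tag enforced via resource.limit.host.tags / resource.limit.storage.tags.
    static void check(Set<String> enforcedTags, List<String> offeringTags,
                      Map<String, Long> usedByTag, Map<String, Long> limitByTag, long requested) {
        for (String tag : offeringTags) {
            if (!enforcedTags.contains(tag)) {
                continue; // only tags listed in the settings above are enforced
            }
            long limit = limitByTag.getOrDefault(tag, Long.MAX_VALUE);
            if (usedByTag.getOrDefault(tag, 0L) + requested > limit) {
                throw new IllegalStateException("Tagged resource limit exceeded for tag " + tag);
            }
        }
    }
}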
Enhances use of templatetag for VM deployment and template creation
Adds an option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API which, when passed, returns the suitableforvirtualmachine param in the response.
Fixes case of appending userdata when both template and vm data are either shellscript or cloudconfig
Fixes error when appending gzip userdata
Fixes case where userdata manual text from the VM is not getting decoded/encoded correctly.
Fixes case of appending multipart data when both template and vm data contain same format types.
Refactor - moved validateUserData method to UserDataManager class
Refactor userdata test to check resultant multipart userdata thoroughly
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
(cherry picked from commit 729e6d144655bd26e6453dcc01a7e6f0d5c8f50e)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Auto Enable Disable KVM hosts
* Improve health check result
* Fix corner cases
* Script path refactor
* Fix sonar cloud reports
* Fix last code smells
* Add marvin tests
* Fix new line on agent.properties to prevent host add failures
* Send alert on auto-enable-disable and add annotations when the setting is enabled
* Address reviews
* Add a reason for enabling or disabling a host when the automatic feature is enabled
* Fix comment on the marvin test description
* Fix for disabling the feature if the admin has manually updated the host resource state before any health check result
(cherry picked from commit be66eb2a35bd6a5aae74f97a9140a9ec01f2a838)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fix ensures that when a datastore cluster in VMware is added as a primary storage pool in CloudStack, all the child datastores (which already exist in CS) are in the Up state.
For example:
1. Datastore Cluster DS has two child datastores A and B in vCenter. (B is already added as a storage pool in CloudStack)
2. Now try to add datastore cluster DS into CloudStack as a primary storage pool
3. CloudStack tries to add child datastores A and B in CloudStack; since B is already there in CloudStack, it will reuse the existing storage pool entry and keep it under the parent storage pool DS.
During step 3 we now check whether B is in the Up state or not.
The alert.email.addresses description is ambiguous and can cause doubts for operators. This description has been altered to avoid confusion. In addition, typos in alert.smtp.useStartTLS and project.smtp.useStartTLS have been fixed.
Co-authored-by: Stephan Krug <stephan.krug@scclouds.com.br>
This PR creates a new API createConsoleAccess to create a VM console URL, allowing other UI implementations to connect. To avoid replay attacks, console access is enhanced to use a one-time token per session.
New configuration added:
consoleproxy.extra.security.validation.enabled: Enable/disable extra security validation for console proxy using a token
Documentation PR: apache/cloudstack-documentation#284
Adds an option to provide custom DNS servers for isolated networks, shared networks and VPC tiers.
New API parameters added in createNetwork API along with the corresponding response parameters.
Doc PR: apache/cloudstack-documentation#276
* add global setting to allow parallel execution on vmware
* cleanup setting distribution for vmware.create.full.clone
* query setting in vmware guru
* don't touch other hypervisors' commands
* guru hierarchy cleanup
This PR enhances the existing PowerFlex/ScaleIO storage plugin to support separate (storage) network for Hosts(KVM)/Storage connection, mainly the SDC (ScaleIo Data Client) connection.
* refactor and log trace
* tracelogs
* shuffle pools with real randomiser
* single retrieval of async job context
* some review comments addressed
* Apply suggestions from code review
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* log formatting
* integration test for distribution of volumes over storages
* move test to smoke tests
* imports
* sonarcloud issue # AYCOmVntKzsfKlhz0HDh
* spelling fixes
* review comments
* review comments
* sonarcloud issues
* unittest
* import
* Update AbstractStoragePoolAllocatorTest.java
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Persistent Network feature & Marvin component tests
* Cleaned up comments and imports
* fixed small error
* add support to set up persistent networks' resources when a disabled host is enabled
* small fix
* use wildcard instead of hard-coding the bridge name
* allow clean up of resources when removing a host in maintenance mode
* skip test for simulator hypervisor
Co-authored-by: shatoboar <sang-woo.bae@campus.tu-berlin.de>
* Mount disabled storage pool on host reboot
Add a global setting so that disabled pools will be mounted
again on host reboot
* fix build error
* Update description
* add cluster-wide support
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
Currently, our compute offerings and disk offerings are tightly coupled in many aspects. For example, if a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Creating a compute offering also takes a few disk-related parameters which, in any case, only go to the corresponding disk offering. I think this design was initially made to address the compute offering for the root volume created from a template. Also, changing the offering of a volume is tightly coupled with storage tags and has to be done through different APIs, either migrateVolume or resizeVolume. Changing a disk offering should be seamless: it should consider the new storage tags and new size, and place the volume in the appropriate state as defined in the disk offering.
More details are mentioned here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring
* Schema changes and disk offering column change from "type" to "compute_only"
* Few more changes
* Decoupled service offering and disk offering
* Remove diskofferingid from vminstance VO
* Decouple service offering and disk offering states
* diskoffering getsize() is only for strict disk offerings
* Fix deployVM flow
* Added new API params to compute offering creation
* Add diskofferingstrictness to serviceoffering vo under quota
* Added overrideDiskOfferingId parameter in deploy VM API which will override the disk offering for the root disk in both the template and ISO cases
Added diskSizeStrictness parameter in create Disk offering API which decides whether to restrict resizing or changing the disk offering of a volume
* Fix User vm response to show proper service offering and disk offerings
* Added disk size strictness in disk offering response
* Added disk offering strictness to the service offering response
* Remove comments
* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form
* Added diskoffering details to the service offering response
* Added UI changes in deployvm wizard to accept override disk offering id
* Fix delete compute offering
* Fix VM deployment from custom service offering
* Move uselocalstorage column access from service offering to disk offering
* UI: Separated compute- and disk-related parameters in the add compute offering wizard, also added association to disk offering
* Fixed diskoffering automatic selection on add compute offering wizard
* UI: move compute only toggle button outside the box in add compute offering wizard
* Added volumeId parameter to listDiskOfferings API; the disksizestrictness flag of the current disk offering is honored while listing disk offerings
* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration
* Added disk offering change checks during resize volume operation
* Added new API changeofferingforVolume API and corresponding changes
* Add UI form for changeOfferingForVolume API
* Fix UI conflicts
* Fix service offering usage as disk offering
* Fix unit test failures
* fix user_vm_view
* Addressed review comments
* Fixed service_offering_view
* Fix service offering edit flow
* Fix service offering constructor to address custom offering
* Fix domain_router_view to get proper service offering id
* Removed unused import
* Addressed review comments and fixed update service offering flow with storage tags
* Added marvin test cases for checking disk offering strictness
* review comments addressed
* Remove system_use column from disk offering join
* update volume_view to update system_use column from service offering and not disk offering
* Fix changeOfferingForVolume API for custom disk offering
* Fix global setting implementation
* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view
* Changes for override root disk offering in deployvm wizard in case of custom offering
* Fix a unit test case
* Fixed recent unit test cases with new serviceofferingvo constructor
* Fix unit test in VolumeApiServiceImpl
* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow
* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering
* Fix smoke test failures
* Added tool tip for migrate volume UI form
* Address review comments and fix UI form of deploy VM in case of ISO.
* Fixed resize volume UI form for data disk
* UI changes to disable override root disk size when override root disk offering is enabled
* UI fix in deploy vm wizard
* Fix listdiskoffering after rebasing with main
* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list
* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form
* Fix false response on updateDiskOffering API
* Added search field for changeofferingforvolume UI form
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Removed DB changes from 4.16 upgrade file
* Resolving merge conflicts with main 4.17
* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.
* UI: Added automigrate checkbox in scale VM form
* Added since attributes to new API params
* Added shrinkOK parameter to changeofferingforvolume API
* Added shrinkOk param to UI in changeOfferingforVolume form
* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form
* Removed old foreign key constraint on IDs of service offering and disk offering
* Allow resize and automigrate of root volume if required in all cases of service offering change
* Allow only resize to higher disk size from UI
* Fixing vue syntax error
* Make UI changes to provide root disk size box when the linked disk offering is of custom
* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms
* Fix resize volume operation to update the VM settings
* Fix migratevolume form to pick selected storage pool id in list diskofferings API
* Improve logs
* Remove unnecessary comments
* Use diamond inference
* Fix some logs
* Remove unnecessary unboxing
* Create method to handle job result
* Remove unused vars and fix some logic
* Extract code to method and a few adjustments
* Use CollectionUtils
* Extract pending work job validation to method
* Create new constructors
* Extract work job and info creation to a method
* Extract submit async job to a method
* Extract find vm by id to a method
* Change log level from trace to debug
* Remove unused methods and add logs
* Undo code removal
* Remove asserts and fix conditionals
* Address @GabrielBrascher reviews
* Remove double quotes from keys in manual json
* Undo code removal
* Add object to log
* Remove statement from try/catch
* Implement toString with ReflectionToStringBuilderUtils
* Fix errors related to merge main
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>