Added caching for ConfigKey value retrievals based on the Caffeine
in-memory caching library.
https://github.com/ben-manes/caffeine
Currently, the expiry time for a cached value is 1 minute, and each update of a
config key invalidates its cache entry.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
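A minimal sketch of the caching pattern described above, using the Caffeine API; the class and method names here are illustrative, not the actual CloudStack code:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;

public class ConfigKeyValueCache {
    // Entries expire 1 minute after being written.
    private final Cache<String, String> cache = Caffeine.newBuilder()
            .expireAfterWrite(1, TimeUnit.MINUTES)
            .build();

    public String getValue(String key) {
        // Hit the DB only on a cache miss or after expiry.
        return cache.get(key, this::loadValueFromDb);
    }

    public void updateValue(String key, String value) {
        persistValueToDb(key, value);
        cache.invalidate(key); // each update of the config key invalidates the cached entry
    }

    private String loadValueFromDb(String key) { /* DB lookup elided */ return null; }
    private void persistValueToDb(String key, String value) { /* DB write elided */ }
}
```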
* Don't send the SQL exception/query from the DAO to upper layers; log it and send only the error message
* Updated charset to utf8mb4 for the display_name column of the user_vm table and the job_result column of the async_job table, to support Unicode chars & emojis (see the sketch after this list)
* Added API arg validator for RFC-compliant domain names, to validate the VM's host name
* Updated the charset of user resources' name / display name columns to utf8mb4
* Check and update the charset of the affinity group name to utf8mb4, in the data migration of the upgrade path
* Updated the backup offering name column charset to utf8mb4
* Added unit tests for VM host/domain name validation
* Added smoke test to check resource names for VM, volume, service & disk offering, template, ISO, account (first/last name)
* Updated the resource annotation charset to utf8mb4
* Updated the description charset of some resources to utf8mb4
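An illustrative sketch of what these charset updates amount to at the schema level, run here via plain JDBC; the column types and connection details are assumptions, not the actual migration code:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class Utf8mb4Migration {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/cloud", "cloud", "cloud");
             Statement stmt = conn.createStatement()) {
            // utf8mb4 uses up to 4 bytes per character, so it can store emojis
            // and other characters outside the Basic Multilingual Plane.
            stmt.executeUpdate("ALTER TABLE user_vm MODIFY display_name VARCHAR(255) "
                    + "CHARACTER SET utf8mb4");
            stmt.executeUpdate("ALTER TABLE async_job MODIFY job_result TEXT "
                    + "CHARACTER SET utf8mb4");
        }
    }
}
```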
* Mitigation for non-scalable Powerflex/ScaleIO clients
- Added ScaleIOSDCManager to manage SDC connections: it checks the client limit, and prepares/unprepares the SDC on hosts.
- Added commands to prepare and unprepare storage clients, which start and stop the SDC service respectively on the hosts.
- Introduced config 'storage.pool.connected.clients.limit' at the storage level for client limits, currently supported for PowerFlex only.
* Fixed tests issue
* refactor / improvements
* Lock on the PowerFlex system ID while checking the connections limit (see the sketch after this list)
* Updated the PowerFlex system ID lock to be held until SDC preparation completes
* Added custom stats support for storage pool, through listStoragePools API
* code improvements, and unit tests
* Update config 'storage.pool.connected.clients.limit' to dynamic, and some improvements
* Stop SDC on host after migration if no volumes mapped to host
* Wait for SDC to connect after scini service start, and some log improvements
* Do not throw exception (log it) when SDC is not connected while revoking access for the powerflex volume
* some log improvements
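A sketch of the locking pattern from the items above, using CloudStack's GlobalLock utility; the identifiers, timeout, and surrounding logic are illustrative assumptions:

```java
import com.cloud.utils.db.GlobalLock;

public class ScaleIoSdcConnectionGuard {
    private static final int LOCK_TIMEOUT_SECONDS = 300; // assumed timeout

    public boolean prepareSdcWithinLimit(String systemId, int connectedClients, int clientLimit) {
        // Serialise client-limit checks per PowerFlex storage system
        // by locking on its system ID.
        GlobalLock lock = GlobalLock.getInternLock("powerflex.sdc." + systemId);
        try {
            if (!lock.lock(LOCK_TIMEOUT_SECONDS)) {
                return false; // could not serialise with concurrent preparations
            }
            try {
                if (connectedClients >= clientLimit) {
                    return false; // 'storage.pool.connected.clients.limit' reached
                }
                prepareSdcOnHost(); // start scini and wait for the SDC to connect
                return true;
            } finally {
                lock.unlock(); // released only after SDC preparation completes
            }
        } finally {
            lock.releaseRef();
        }
    }

    private void prepareSdcOnHost() { /* elided */ }
}
```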
* kvm: Add support for cgroupv2 (#8252)
1. Problem description
In Apache CloudStack (ACS), when a VM is deployed on a host with the KVM hypervisor, an XML file is created on the assigned host, which has a property, shares, that defines the weight of the VM for access to the host's CPU. The value of this property has no unit; it is a relative measure used to calculate how much CPU a given VM will get on the host. However, this value has a limit, which depends on the version of cgroup used by the host's kernel. The problem lies in the range of shares, which varies between the two versions: [2, 262144] for cgroups version 1; and [1, 10000] for cgroups version 2. Currently, ACS calculates the value of shares using Equation 1, presented below, where CPU is the number of cores and speed is the CPU frequency; both are specified in the VM's compute offering. Therefore, if a compute offering has, for example, 6 cores at 2 GHz, the shares value will be 12000, and an exception will be thrown by libvirt if the host uses cgroup v2. The second version is becoming the default in current Linux distributions; thus, it is necessary to address this limitation.
Equation 1
shares = CPU * speed
Fixes: #6744
2. Proposed changes
To address the problem described, we propose to apply a scale conversion considering the max shares of the host. Using the same formula currently utilized by ACS, it is possible to calculate the maximum shares of a VM for a given host; in other words, the number of cores and the nominal speed of the host's CPU are used as the upper limit of shares allowed to a VM. Then, this value is scaled to the allowed interval of [1, 10000] of cgroup v2 by using a linear scale conversion.
The VM shares would be calculated as Equation 2, presented below, where VM requested shares is the requested shares value calculated using Equation 1, cgroup upper limit is fixed with a value of 10000 (cgroups v2 upper limit), and host max shares is the maximum shares value of the host, calculated using Equation 1. Using Equation 2, the only case where a VM passes the cgroup v2 limit is when the user requests more resources than the host has, which is not possible with the current implementation of ACS.
Equation 2
shares = (VM requested shares * cgroup upper limit)/host max shares
To implement the proposal, the following APIs will be updated: deployVirtualMachine, migrateVirtualMachine and scaleVirtualMachine. When a VM is being deployed, a new verification will be added to find a suitable host: the max shares of each host will be calculated, and the VM's calculated shares will be verified not to surpass the host's value. Likewise, VM migration will have a similar new verification. Lastly, VM scaling will also have the same verification for the VM's host.
To determine the max shares of a given host, we will use the same equation currently used in ACS for calculating the shares of VMs, presented in Section 1. When Equation 1 is used to determine the maximum shares of a host, CPU is the number of cores of the host, and speed is the nominal CPU speed, i.e., considering the CPU's base frequency.
It is important to note that these changes are only for hosts with the KVM hypervisor using cgroup v2 for now.
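A worked sketch of Equations 1 and 2; the class and method names are illustrative:

```java
public class CgroupV2Shares {
    static final int CGROUP_V2_UPPER_LIMIT = 10000;

    // Equation 1: shares = CPU * speed
    static int shares(int cpuCores, int speedMhz) {
        return cpuCores * speedMhz;
    }

    // Equation 2: shares = (VM requested shares * cgroup upper limit) / host max shares
    static int scaledVmShares(int vmCores, int vmSpeedMhz, int hostCores, int hostSpeedMhz) {
        int requested = shares(vmCores, vmSpeedMhz);
        int hostMax = shares(hostCores, hostSpeedMhz);
        // Widen to long before multiplying to avoid int overflow on large hosts.
        return (int) ((long) requested * CGROUP_V2_UPPER_LIMIT / hostMax);
    }

    public static void main(String[] args) {
        // A 6-core @ 2000 MHz offering on a 16-core @ 2500 MHz host:
        // requested = 12000 (over the cgroup v2 limit), host max = 40000,
        // scaled = 12000 * 10000 / 40000 = 3000, within [1, 10000].
        System.out.println(scaledVmShares(6, 2000, 16, 2500)); // prints 3000
    }
}
```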
* Update overcommit ratio during live VM migration
* minor refactoring
---------
Co-authored-by: Bryan Lima <42067040+BryanMLima@users.noreply.github.com>
* Enforce strict host tag checking
* Add e2e tests
* Add more information to error log
* Fix e2e test
* Update global settings description
* fixup
* Fix e2e test teardown
This PR introduces the functionality of purging removed DB entries for CloudStack entities (currently only for VirtualMachine).
There would be three mechanisms for purging removed resources:
- Background task - CloudStack will run a background task at a defined interval. Other parameters for this task can be controlled with new global settings.
- API - New API `purgeExpungedResources`. It allows passing the following parameters: resourcetype, batchsize, startdate, enddate (see the example after this list).
- Config for service offerings. Service offerings can be created with a purgeresources parameter, which allows purging resources immediately on expunge.
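For example, with the CloudMonkey CLI an operator could trigger an on-demand purge along the lines of `purgeExpungedResources resourcetype=VirtualMachine batchsize=50 startdate=2024-01-01 enddate=2024-02-01`; the parameter values here are illustrative, only the parameter names come from the API above.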
The following new global settings have been added:
- `expunged.resources.purge.enabled`: Default: false. Whether to run a background task to purge the DB records of the expunged resources.
- `expunged.resources.purge.resources`: Default: (empty). A comma-separated list of resource types that will be considered by the background task to purge the DB records of the expunged resources. Currently only VirtualMachine is supported. An empty value will result in considering all resource types for purging.
- `expunged.resources.purge.interval`: Default: 86400. Interval (in seconds) for the background task to purge the DB records of the expunged resources.
- `expunged.resources.purge.delay`: Default: 300. Initial delay (in seconds) to start the background task to purge the DB records of the expunged resources task.
- `expunged.resources.purge.batch.size`: Default: 50. Batch size to be used during purging of the DB records of the expunged resources.
- `expunged.resources.purge.start.time`: Default: (empty). Start time to be used by the background task to purge the DB records of the expunged resources. Use format `yyyy-MM-dd` or `yyyy-MM-dd HH:mm:ss`.
- `expunged.resources.purge.keep.past.days`: Default: 30. The number of days in the past, counted from the execution time of the background task, for which the DB records of expunged resources must not be purged. To allow purging DB records of expunged resources right up to the execution time of the background task, set the value to zero.
- `expunged.resource.purge.job.delay`: Default: 180. Delay (in seconds) before executing the purge of the DB records of an expunged resource when initiated by the configuration in the offering. The minimum value is 180 seconds; if a lower value is set, the minimum is used.
Upstream PRs:
https://github.com/apache/cloudstack/pull/8999
https://github.com/apache/cloudstack-documentation/pull/397
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
* Validate user data with actual length, and some code improvements
* Ignore if user data is not set (don't fail)
* Validate user data after finalizing it
* Updated the registerUserData API to use a POST call from the UI, to support user data up to 1048576 bytes
* Apply suggestions from code review
* Added logs for user data
* Addressed review comments
* Check user data length with base64 encoded data, and some code improvements
* Merge two HostTagVO and HostTagDaoImpl
* Apple FR76: dynamic host tags
* Revert "Apple FR76: dynamic host tags"
This reverts commit 01b93a873f167018c4fafd0744c0de07ae4de4ed.
* Apple FR76: Implicit host tags
* Apple FR76: address Abhishek's comments
* Apple FR76: move updateImplicitTags
* Apple FR76: add since to other two responses
* Update 8929: add unit test in LibvirtComputingResourceTest
* Update variable names
* Update FR76: add explicithosttags in response
* Update FR76 UI: Update explicit host tags
* Update 8929: remove host tags and change labels on UI
* Update: ui polish for host tags
* fix since in responses
* Update 8929: fix UI error if no host tags
In environments with a large number of shared networks or IP addresses (10k+),
this causes millions of table scans on the user_ip_address table, which in turn
causes severe slowness in listVM APIs etc.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Using a function in the view was causing too many scans: as many as the number
of domains and zones. This change reduces table scans where left joins happen,
by using sub-queries. The effect is a somewhat faster createNetwork API.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Found these CPU and DB hotspots in the handling of agent ping commands; they
add idle load when there is a high number of hosts. By design, there isn't any
quick win here. However, the power sync report/handling could be improved so
that it doesn't need to kick in for every ping command received.
A few more areas are marked in the codebase.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This refactors the ResourceManager::listAvailHypervisorInZone method, which
should return the unique hypervisors for which existing hosts are Up and
processed. We can approximate this by assuming that those hosts would have set
up their hypervisor-specific systemvm templates. In a given environment there
won't be thousands of systemvm templates, but there can be thousands of hosts.
So, instead of scanning the entire cloud.host table, we can make a calculated
guess by returning the unique hypervisors of systemvm templates which are ready
(sketched below). This method is used in ::processConnect() when an agent
joins, so the change speeds up its handling.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
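A sketch of the approximation described above; the exact table and column names are assumptions, not the actual DAO code:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class AvailableHypervisors {
    // Derive the distinct hypervisors from ready systemvm templates instead of
    // scanning the much larger cloud.host table.
    public List<String> listAvailHypervisorInZone(Connection conn) throws Exception {
        List<String> hypervisors = new ArrayList<>();
        String sql = "SELECT DISTINCT hypervisor_type FROM vm_template "
                + "WHERE type = 'SYSTEM' AND state = 'Ready' AND removed IS NULL";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                hypervisors.add(rs.getString(1));
            }
        }
        return hypervisors;
    }
}
```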
In this example commit, we look at:
- Adding missing indexes to speed up queries
- Reducing table scans by optimising SQL queries and using indexes
- Optimising SQL queries to remove duplicate rows (use of DISTINCT)
- Reducing CPU and DB load by using JProfiler to optimise both SQL queries and CPU hotspots
server: reduce CPU and DB load caused by systemvm ::isZoneReady()
For this case, the SQL query was causing a large number of table scans only to
determine whether a zone has any available pool+host to launch systemvms.
Accordingly, the code and SQL queries, along with index optimisations, were
changed to lower both DB scans and mgmt server CPU load (see the sketch below).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
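An illustrative example of the listed techniques; the index and the query below are made up for demonstration, not the actual statements from the commit:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryOptimisations {
    void optimise(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            // Add a missing index so lookups by name no longer scan the whole table.
            stmt.executeUpdate("CREATE INDEX i_example__name ON example_details (name)");
            // Use DISTINCT to drop duplicate rows produced by a join.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT DISTINCT pool.id FROM storage_pool pool "
                    + "JOIN cluster ON pool.cluster_id = cluster.id "
                    + "WHERE cluster.allocation_state = 'Enabled'")) {
                while (rs.next()) {
                    long poolId = rs.getLong(1); // pool usable for systemvm deployment
                }
            }
        }
    }
}
```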
Also add a flag to disable on-the-fly metrics computation when the
list metrics APIs for zones and clusters are called.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This optimises the SQL query and iterator to simply return the list of VMs
excluding those in the received report.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Optimises a DB query that appears to run for every Ping command, where all
columns are fetched but only the `id` column is used.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This refactors hotspot code to fetch just the count of hosts rather than all
the host VOs for a zone, during capacity scans for systemvms. This reduces CPU
and DB load in really large (10k+ hosts) environments (both this and the
id-only fetch above are sketched below).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
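A sketch of the two fetch-narrowing refactors above; the queries are illustrative, not the actual DAO code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NarrowFetches {
    // Instead of "SELECT * FROM host ..." materialising full host VOs,
    // fetch only the id column when only ids are used (ping handling) ...
    long firstHostId(Connection conn, long zoneId) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id FROM host WHERE data_center_id = ?")) {
            ps.setLong(1, zoneId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : -1;
            }
        }
    }

    // ... and fetch only the count when only the count is needed (capacity scans).
    long hostCount(Connection conn, long zoneId) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT COUNT(*) FROM host WHERE data_center_id = ?")) {
            ps.setLong(1, zoneId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }
}
```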
The upgrade from 4.18.1.0 to 4.18.2.0-SNAPSHOT failed with this error:
```
2023-09-12 16:12:19,003 INFO [c.c.u.DatabaseUpgradeChecker] (main:null) (logid:) DB version = 4.18.1.0 Code Version = 4.18.2.0
2023-09-12 16:12:19,004 INFO [c.c.u.DatabaseUpgradeChecker] (main:null) (logid:) Database upgrade must be performed from 4.18.1.0 to 4.18.2.0
2023-09-12 16:12:19,036 DEBUG [c.c.u.DatabaseUpgradeChecker] (main:null) (logid:) Running upgrade Upgrade41800to41810 to upgrade from 4.18.0.0-4.18.1.0 to 4.18.1.0
...
2023-09-12 16:12:19,041 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:) -- Schema upgrade from 4.18.0.0 to 4.18.1.0
...
2023-09-12 16:12:21,602 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) (logid:) Statement: CREATE INDEX i_cluster_details__name on cluster_details (name)
2023-09-12 16:12:21,663 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) (logid:) Created index i_cluster_details__name
2023-09-12 16:12:21,673 DEBUG [c.c.u.d.T.Transaction] (main:null) (logid:) Rolling back the transaction: Time = 2632 Name = Upgrade; called by -TransactionLegacy.rollback:888-TransactionLegacy.removeUpTo:831-TransactionLegacy.close:655-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:175-ExposeInvocationInterceptor.invoke:97-ReflectiveMethodInvocation.proceed:186-JdkDynamicAopProxy.invoke:215-$Proxy30.persist:-1-DatabaseUpgradeChecker.upgrade:319-DatabaseUpgradeChecker.check:403-CloudStackExtendedLifeCycle.checkIntegrity:64
```
It succeeded with this change.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit a88a47989369af204ea6ee8a5fd190311f43c74c)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Add a global setting to control whether redirection is allowed while
downloading templates and volumes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
- Move allow.additional.vm.configuration.list.kvm from a Global to an Account setting
- Disallow VM details starting with "extraconfig" when deploying VMs
- Skip changes to VM details starting with "extraconfig" when updating VM settings
- Allow only extraconfig for DPDK in service offering details
- Check whether extraconfig values in VM details are supported when starting VMs
- Check whether extraconfig values in service offering details are supported when starting VMs
- Disallow adding/editing/updating VM settings for extraconfig in the UI (see the sketch at the end)
(cherry picked from commit e6e4fe16fb1ee428c3664b6b57384514e5a9252e)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
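A minimal sketch of the "disallow extraconfig details" check described above; the method name and exception choice are illustrative, not the actual CloudStack code:

```java
import java.util.Map;

public class ExtraConfigValidator {
    // Mirrors the rule above: user-supplied VM details must not start with
    // "extraconfig"; only offering-level DPDK extraconfig entries are allowed.
    public static void validateVmDetails(Map<String, String> details) {
        for (String name : details.keySet()) {
            if (name.toLowerCase().startsWith("extraconfig")) {
                throw new IllegalArgumentException(
                        "VM details starting with 'extraconfig' are not allowed: " + name);
            }
        }
    }
}
```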