* draas initial changes
* Added option to enable disaster recovery on a backup repository. Added UpdateBackupRepositoryCmd API.
* Added a timeout for the mount operation in backup restore, configurable via a global setting
* Addressed review comments
* fix for simulator test failures
* Added UT for coverage
* Fix create instance from backup ui for other providers
* Added events to add/update backup repository
* Fix race in fetchZones
* One more fix in fetchZones in DeployVMFromBackup.vue
* Fix zone selection in createNetwork via Create Instance from backup form.
* Allow template/iso selection in create instance from backup ui
* rename draasenabled to crosszoneinstancecreation
* Added Cross-zone instance creation in test_backup_recovery_nas.py
* Added UT in BackupManagerTest and UserVmManagerImplTest
* Integration test added for Cross-zone instance creation in test_backup_recovery_nas.py
This feature adds the ability to create a new instance from a VM backup for the dummy, NAS and Veeam backup providers. It works even if the original instance used to create the backup was expunged or unmanaged. There are two parts to this functionality:
- Saving all the configuration details that the VM had at the time the backup was taken, and using them to create an instance from the backup.
- Allowing a user to expunge or unmanage an instance that has backups.
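To illustrate the first part, here is a minimal sketch of the kind of configuration snapshot that could be persisted alongside a backup and later replayed to create an instance; the class and field names are illustrative, not the actual CloudStack schema.
```java
// Illustrative only: a simplified snapshot of VM configuration persisted with a backup,
// so an instance can be recreated even after the original VM is expunged or unmanaged.
import java.util.List;
import java.util.Map;

public record BackedUpVmConfig(
        String vmName,
        String templateUuid,          // template/ISO the VM was created from
        String serviceOfferingUuid,   // compute offering at backup time
        List<String> networkUuids,    // networks the VM was attached to
        Map<String, String> details   // misc settings, e.g. boot type, CPU/memory for custom offerings
) {
    // Values captured here drive the "create instance from backup" flow when the
    // original VM no longer exists in CloudStack.
}
```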
This PR allows attaching GPU devices to an Instance on KVM via PCI, mdev or VF.
It allows the operator to discover the GPU devices on a KVM host and create a Compute Offering with GPU support based on the GPU devices available on that host. Once the operator has created the Compute Offering, users can use it to launch Instances with GPU devices.
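As a rough illustration of the discovery output described above, here is a minimal sketch of a per-host GPU device descriptor covering the three passthrough modes named (PCI, mdev, VF); the class and field names are hypothetical, not CloudStack's actual data model.
```java
// Illustrative only: the three GPU passthrough modes named above and a minimal
// descriptor a host-level discovery step might produce per device.
public class GpuDeviceExample {
    enum PassthroughMode { PCI, MDEV, VF }   // full PCI passthrough, mediated device, virtual function

    record GpuDevice(String hostName, String pciAddress, PassthroughMode mode, String profile) { }

    public static void main(String[] args) {
        // A discovered device like this could then back a GPU-enabled Compute Offering.
        GpuDevice dev = new GpuDevice("kvm-host-01", "0000:3b:00.0", PassthroughMode.MDEV, "nvidia-63");
        System.out.println(dev);
    }
}
```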
The Extensions Framework in Apache CloudStack is designed to provide a flexible and standardised mechanism for integrating external systems and custom workflows into CloudStack’s orchestration process. By defining structured hook points during key operations—such as virtual machine deployment, resource preparation, and lifecycle events—the framework allows administrators and developers to extend CloudStack’s behaviour without modifying its core codebase.
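As a rough illustration of the hook-point idea, here is a hypothetical hook interface; this is not the actual Extensions Framework API, just the shape of a before/after hook around an orchestration step.
```java
// Illustrative only: a hypothetical extension hook invoked around orchestration
// steps such as VM deployment, resource preparation, and lifecycle events.
public interface ExtensionHookExample {
    // Called before CloudStack performs the named operation (e.g. VM deployment),
    // allowing an external system to prepare resources or veto the action.
    void beforeOperation(String operation, java.util.Map<String, String> context);

    // Called after the operation completes, with its outcome.
    void afterOperation(String operation, boolean succeeded, java.util.Map<String, String> context);
}
```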
The Netris Plugin introduces Netris as a network service provider in CloudStack, making it possible to create and manage Virtual Private Clouds (VPCs) and to orchestrate the following network functionality:
- Network segmentation with Netris-VXLAN isolation method
- Routing between "public" IP and network segments with an ACS ROUTED mode offering
- SourceNAT, DNAT, 1:1 NAT between "public" IP and network segments with an ACS NATTED mode offering
- Routing between VPC network segments (tiers in ACS nomenclature)
- Access Lists (ACLs) between VPC tiers and "public" network (TCP, UDP, ICMP) both as global egress rules and "public" IP specific ingress rules.
- ACLs between VPC network tiers (TCP, UDP, ICMP)
- External load balancing – between VPC network tiers and "public" IP
- Internal load balancing – between VPC network tiers
- CloudStack Virtual Router services (DHCP, DNS, UserData, Password Injection, etc…)
* directdownload: fix keytool importcert
```
$ /usr/bin/keytool -importcert file /etc/cloudstack/agent/CSCERTIFICATE-full -keystore /etc/cloudstack/agent/cloud.jks -alias full -storepass DAWsfkJeeGrmhta6
Illegal option: file
keytool -importcert [OPTION]...
Imports a certificate or a certificate chain
Options:
-noprompt do not prompt
-trustcacerts trust certificates from cacerts
-protected password through protected mechanism
-alias <alias> alias name of the entry to process
-file <file> input file name
-keypass <arg> key password
-keystore <keystore> keystore name
-cacerts access the cacerts keystore
-storepass <arg> keystore password
-storetype <type> keystore type
-providername <name> provider name
-addprovider <name> add security provider by name (e.g. SunPKCS11)
[-providerarg <arg>] configure argument for -addprovider
-providerclass <class> add security provider by fully-qualified class name
[-providerarg <arg>] configure argument for -providerclass
-providerpath <list> provider classpath
-v verbose output
Use "keytool -?, -h, or --help" for this help message
```
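The failure above is caused by passing `file` instead of the `-file` option listed in keytool's own help output. Below is a minimal sketch of building the corrected invocation; the paths, alias and store password are the example values from the output above, and shelling out via ProcessBuilder here is illustrative, not the actual agent code.
```java
// Illustrative only: invoke keytool with the -file option (the original command
// passed "file" without the leading dash, which keytool rejects as "Illegal option").
import java.io.IOException;

public class ImportCertExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "/usr/bin/keytool", "-importcert",
                "-file", "/etc/cloudstack/agent/CSCERTIFICATE-full",
                "-keystore", "/etc/cloudstack/agent/cloud.jks",
                "-alias", "full",
                "-storepass", "DAWsfkJeeGrmhta6")
                .inheritIO()
                .start();
        System.exit(p.waitFor());
    }
}
```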
* DirectDownload: drop HttpsMultiTrustManager
* Management Server - Prepare for Maintenance and Cancel Maintenance improvements:
- Added new setting 'management.server.maintenance.ignore.maintenance.hosts' to ignore hosts in maintenance states while preparing a management server for maintenance. This skips the agent transfer and the agent count check for hosts in maintenance.
- Rebalance indirect agents after cancel maintenance, using the rebalance parameter in the cancelMaintenance API
- Force maintenance after the maintenance window timeout, using the forced parameter in the prepareForMaintenance API.
- Propagate 'indirect.agent.lb.check.interval' setting change to the host agents.
* rebases fixes
* code improvements, cleanup
* [UI] Set rebalance true by default in cancel maintenance dialog
* Update MS state after executing cluster cmd in the target MS, and some code improvements
* code improvements
* Ensure the host lb algorithm 'shuffle' is applied once before disabling the indirect agent lb check background task
CKS Enhancements:
* Ability to specify different compute or service offerings for different types of CKS cluster nodes – worker, master or etcd
* Ability to use CKS ready custom templates for CKS cluster nodes
* Add and Remove external nodes to and from a kubernetes cluster
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Update remove node timeout global setting
* CKS/NSX : Missing variables in worker nodes
* CKS: Fix ISO attach logic
* CKS: Fix ISO attach logic
* address comment
* Fix Port - Node mapping when cluster is scaled in the presence of external node(s)
* CKS: Externalize control and worker node setup wait time and installation attempts
* Fix logger
* Add missing headers and fix end of line on files
* CKS Mark Nodes for Manual Upgrade and Filter Nodes to add to CKS cluster from the same network
* Add support to deploy CKS cluster nodes on hosts dedicated to a domain
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* Support unstacked ETCD
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix CKS cluster scaling and minor UI improvement
* Reuse k8s cluster public IP for etcd nodes and rename etcd nodes
* Fix DNS resolver issue
* Update UDP active monitor to ICMP
* Add hypervisor type to CKS cluster creation to fix CKS cluster creation when External hosts added
* Fix build
* Fix logger
* Modify hypervisor param description in the create CKS cluster API
* CKS delete fails when external nodes are present
* CKS delete fails when external nodes are present
* address comment
* Improve network rules cleanup on failure adding external nodes to CKS cluster
* UI: Fix etcd template was not honoured
* UI: Fix etcd template was not honoured
* Refactor
* CKS: Exclude etcd nodes when calculating port numbers
* Fix network cleanup in case of CKS cluster failure
* Externalize retries and interval for NSX segment deletion
* Fix CKS scaling when external node(s) present in the cluster
* CKS: Fix port numbers displayed against ETCD nodes
* Add node version details to every node of k8s cluster - as we now support manual upgrade
* Add node version details to every node of k8s cluster - as we now support manual upgrade
* update column name
* CKS: Exclude etcd nodes when calculating port numbers
* update param name
* update param
* UI: Fix CKS cluster creation templates listing for non admins
* CKS: Prevent etcd node start port number from coinciding with k8s cluster start port numbers
* CKS: Set default kubernetes cluster node version to the kubernetes cluster version on upgrade
* CKS: Set default kubernetes cluster node version to the kubernetes cluster version on upgrade
* consolidate query
* Fix upgrade logic
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix CKS cluster version upgrade
* CKS: Fix etcd port numbers being skipped
* Fix CKS cluster with etcd nodes on VPC
* Move schema and upgrade for 4.20
* Fix logger
* Fix after rebasing
* Add support for using different CNI plugins with CKS
* Add support for using different CNI plugins with CKS
* remove unused import
* Add UI support and list cni config API
* necessary UI changes
* add license
* changes to support external cni
* UI changes
* Fix NPE on restarting VPC with additional public IPs
* fix merge conflict
* add asnumber to create k8s svc layer
* support cni framework to use as-numbers
* update code
* condition to ignore undefined jinja template variables
* CKS: Do not pass AS number when network ID is passed
* Fix deletion of Userdata / CNI Configuration in projects
* CKS: Add CNI configuration details to the response and UI
* Explicit events for registering cni configuration
* Add Delete cni configuration API
* Fix CKS deployment when using VPC tiers with custom ACLs
* Fix DNS list on VR
* CKS: Use Network offering of the network passed during CKS cluster creation to get the AS number
* CKS cluster with guest IP
* Fix: Use control node guest IP as join IP for external nodes addition
* Fix DNS resolver issue
* Improve etcd indexing - start from 1
* CKS: Add external node to a CKS cluster deployed with etcd node(s) successfully
* CKS: Add external node to a CKS cluster deployed with etcd node(s) successfully
* simplify logic
* Tweak setup-kube-system script for baremetal external nodes
* Consider cordoned nodes while getting ready nodes
* Fix CKS cluster scale calculations
* Set token TTL to 0 (no expire) for external etcd
* Fix missing quotes
* Fix build
* Revert PR 9133
* Add calico commands for ens35 interface
* Address review comments: plan CKS cluster deployment based on the node type
* Add qemu-guest-agent dependency for kvm based templates
* Add marvin test for CKS clusters with different offerings per node type
* Remove test tag
* Add marvin test and fix update template for cks and since annotations
* Fix marvin test for adding and removing external nodes
* Fix since version on API params
* Address review comments
* Fix unit test
* Address review comments
* UI: Make CKS public templates visible to non-admins on CKS cluster creation
* Fix linter
* Fix merge error
* Fix positional parameters on the create kubernetes ISO script and make the ETCD version optional
* fix etcd port displayed
* Further improvements to CKS (#118)
* Multiple nics support on Ubuntu template
* Multiple nics support on Ubuntu template
* supports allocating IP to the nic when VM is added to another network - no delay
* Add option to select DNS or VR IP as resolver on VPC creation
* Add API param and UI to select option
* Add column on vpc and pass the value on the databags for CsDhcp.py to fix accordingly
* Externalize the CKS Configuration, so that end users can tweak the configuration before deploying the cluster
* Add new directory to c8 packaging for CKS config
* Remove k8s configuration from resources and make it configurable
* Revert "Remove k8s configuration from resources and make it configurable"
This reverts commit d5997033ebe4ba559e6478a64578b894f8e7d3db.
* copy conf to mgmt server and consume them from there
* Remove node from cluster
* Add missing /opt/bin directory required by external nodes
* Login to a specific Project view
* add indents
* Fix CKS HA clusters
* Fix build
---------
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
* Add missing headers
* Fix linter
* Address more review comments
* Fix unit test
* Fix scaling case for the same offering
* Revert "Login to a specific Project view"
This reverts commit 95e37563f48573780b07a038a7f48c0bc04e9b64.
* Revert "Fix CKS HA clusters" (#120)
This reverts commit 8dac16aa359faa6500ea1e1ce548169cfd08331a.
* Apply suggestions from code review about user data
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Update api/src/main/java/org/apache/cloudstack/api/command/user/userdata/BaseRegisterUserDataCmd.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Refactor column names and schema path
* Fix scaling for non existing previous offering per node type
* Update node offering entry if there was an existing offering but a global service offering has been provided on scale
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Introducing Storage Access Groups to define the host and storage pool connections
In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.
Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.
A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, including those with or without a storage access group.
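A minimal sketch of that connectivity rule expressed as a predicate follows; the method and group names are illustrative, and CloudStack resolves a host's effective groups from the host and its cluster/pod/zone as described above.
```java
// Illustrative only: the host/pool connectivity rule described above.
import java.util.Collections;
import java.util.Set;

public class StorageAccessGroupRule {
    /**
     * A pool with access groups connects only to hosts sharing at least one of them;
     * a pool without access groups connects to every host.
     */
    static boolean canConnect(Set<String> hostGroups, Set<String> poolGroups) {
        if (poolGroups == null || poolGroups.isEmpty()) {
            return true;
        }
        return !Collections.disjoint(hostGroups == null ? Set.of() : hostGroups, poolGroups);
    }

    public static void main(String[] args) {
        System.out.println(canConnect(Set.of("fast-nvme"), Set.of("fast-nvme"))); // true: shared group
        System.out.println(canConnect(Set.of(), Set.of("fast-nvme")));            // false: restricted pool
        System.out.println(canConnect(Set.of("fast-nvme"), Set.of()));            // true: unrestricted pool
    }
}
```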
* Add & Remove PowerFlex/ScaleIO MDMs while preparing & unpreparing the storage SDC connections (instead of start & stop scini)
* Add/Remove MDM IP addresses during Host connection/disconnection to/from storage pool when powerflex.connect.on.demand is false
* unit test fixes
* Don't remove MDM IPs from SDC when any volumes mapped to SDC
* Don't remove MDM IPs when other pools of same ScaleIO/PowerFlex cluster are connected
* rebase fixes
* update changes, to not remove/disconnect MDMs on maintenance
* import fixes after rebase
* KVM incremental snapshot feature
* fix log
* fix merge issues
* fix creation of folder
* fix snapshot update
* Check for hypervisor type during parent search
* fix some small bugs
* fix tests
* Address reviews
* do not remove storPool snapshots
* add support for downloading diff snaps
* Add multiple zones support
* make copied snapshots have normal names
* address reviews
* Fix in progress
* continue fix
* Fix bulk delete
* change log to trace
* Start fix on multiple secondary storages for a single zone
* Fix multiple secondary storages for a single zone
* Fix tests
* fix log
* remove bitmaps when deleting snapshots
* minor fixes
* update sql to new file
* Fix merge issues
* Create new snap chain when changing configuration
* add verification
* Fix snapshot operation selector
* fix bitmap removal
* fix chain on different storages
* address reviews
* fix small issue
* fix test
---------
Co-authored-by: João Jandre <joao@scclouds.com.br>
* Update last agents during ms maintenance, and some code improvements
* Send 503 (Service Unavailable) response status when maintenance or shutdown is initiated
[Any load balancer in the clustered environment can avoid routing requests to this MS node]
* Migrate systemvm agents before routing host agents, and some code improvements
* Added events for ms maintenance and shutdown operations
* Added the following ms maintenance and shutdown improvements
- block new agent connections during prepare for maintenance of ms
- maintain avoids ms list
- propagate updated management servers list and lb algorithm in host and indirect.agent.lb.algorithm settings respectively, to systemvm (non-routing) agents
- updated setup ms list and migrate agent connections to executor service
- migrate agent connection through executor, and send the answer to the ms host that initiated the migration
- re-initialize ssl handshake executor if it is shutdown
- don't allow prepare for maintenance or shutdown when other management server nodes are in preparing states
- don't allow trigger shutdown when management server is up and other management server nodes are in preparing states
- stop agent connections monitor on ms maintenance
- update avoid ms list in ready command
- updated connected host from the client connection
- update last agents in ms metrics from the database
- updated some agent config descriptions
- update last management server in the hosts during shutdown
- added agents and lastagents in management server response
- updated management server maintenance & shutdown unit tests
- some code improvements
* refactored code / addressed comments
* removed shutdown testcase (maybe, calling System.exit)
* Revert "removed shutdown testcase (maybe, calling System.exit)"
This reverts commit e14b0717152ef6c8be102d61c80f42803a53172e.
* avoid system.exit during shutdown test
* code improvements
* testcase fix
* Fix cutoff time in agent connections monitor thread
* api,agent,server,engine-schema: scalability improvements
The following changes and improvements have been added:
- Improvements in handling of PingRoutingCommand
1. Added global config - `vm.sync.power.state.transitioning`, default value: true, to control syncing of power states for transitioning VMs. This can be set to false to prevent computation of transitioning state VMs.
2. Improved VirtualMachinePowerStateSync to allow power state sync for host VMs in a batch
3. Optimized scanning stalled VMs
- Added option to set worker threads for capacity calculation using config - `capacity.calculate.workers`
- Added a caching framework based on the Caffeine in-memory caching library, https://github.com/ben-manes/caffeine (a minimal usage sketch follows this change list)
- Added caching for account/user role API access, with expiration after write configurable using config - `dynamic.apichecker.cache.period` (in seconds). If set to zero then there will be no caching; default is 0.
- Added caching for some recurring DB retrievals
1. CapacityManager - listing service offerings - beneficial in host capacity calculation
2. LibvirtServerDiscoverer - existing hosts for the cluster - beneficial for host joins
3. DownloadListener - hypervisors for zone - beneficial for host joins
4. VirtualMachineManagerImpl - VMs in progress - beneficial for processing stalled VMs during PingRoutingCommands
- Optimized MS list retrieval for agent connect
- Optimized finding a ready systemvm template for the zone
- Database retrieval optimisations - fixed and refactored cases where only IDs or counts are needed, mainly for hosts and other infra entities; similar cases for VMs and other host-related entities used by background tasks
- Changes in agent-agentmanager connection with NIO client-server classes
1. Optimized the use of the executor service
2. Refactored the Agent class to better handle connections
3. Do SSL handshakes within worker threads
4. Added global configs to control the behaviour depending on the infra. The SSL handshake could be a bottleneck during agent connections. Configs `agent.ssl.handshake.min.workers` and `agent.ssl.handshake.max.workers` can be used to control the number of new connections the management server handles at a time. `agent.ssl.handshake.timeout` can be used to set the number of seconds after which the SSL handshake times out at the MS end.
5. On the agent side, backoff and SSL handshake timeout can be controlled by the agent properties `backoff.seconds` and `ssl.handshake.timeout`.
- Improvements in StatsCollection - minimize DB retrievals.
- Improvements in DeploymentPlanner to retrieve only the desired host fields and do fewer retrievals.
- Improvements in host connections for a storage pool. Added config - `storage.pool.host.connect.workers` to control the number of worker threads used to connect hosts to a storage pool. The worker-thread approach is currently followed only for NFS and ScaleIO pools.
- Minor improvements in resource limit calculations with respect to DB retrievals
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
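As referenced in the change list above, here is a minimal sketch of expire-after-write caching with Caffeine in the spirit of `dynamic.apichecker.cache.period`; the key/value types and wiring are illustrative, not the actual implementation.
```java
// Illustrative only: a Caffeine cache with expire-after-write, skipped entirely when
// the configured period is zero ("if set to zero then there will be no caching").
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

import java.time.Duration;
import java.util.function.Supplier;

public class ApiAccessCacheExample {
    private final Cache<String, Boolean> cache;

    ApiAccessCacheExample(int cachePeriodSeconds) {
        if (cachePeriodSeconds > 0) {
            this.cache = Caffeine.newBuilder()
                    .expireAfterWrite(Duration.ofSeconds(cachePeriodSeconds))
                    .maximumSize(10_000)
                    .build();
        } else {
            this.cache = null;   // caching disabled
        }
    }

    boolean isAllowed(String roleAndApi, Supplier<Boolean> checker) {
        if (cache == null) {
            return checker.get();                          // always recompute
        }
        return cache.get(roleAndApi, k -> checker.get());  // compute on miss, reuse until expiry
    }
}
```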
* test1, domaindetails, capacitymanager fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* test2 - agent tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* capacitymanagertest fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* change
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address comments
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* revert marvin/setup.py
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix indent
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* use space in sql
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address duplicate
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* update host logs
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* revert e36c6a5d07
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix npe in capacity calculation
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* move schema changes to 4.20.1 upgrade
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* build fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address comments
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix build
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* add some more tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* checkstyle fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove unnecessary mocks
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* build fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* replace statics
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* engine/orchestration,utils: limit number of concurrent new agent connections
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor - remove unused
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unregister closed connections, monitor & cleanup
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* add check for outdated vm filter in power sync
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* agent: synchronize sendRequest wait
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Support for Management Server Maintenance
- New APIs: prepareForMaintenance and cancelMaintenance, with required parameter - managementserverid.
- New management server states for maintenance: PreparingForMaintenance, Maintenance.
- listHosts API with optional parameter – managementserverid, to list the hosts connected to the management server.
- Support management server maintenance when more than one active management server is available.
- Transfers agents to other available management servers for maintenance; a new agent command, MigrateAgentConnectionCommand, initiates the transfer of indirect agents.
- New global config 'management.server.maintenance.timeout' sets the timeout (in mins) for the management server maintenance window; default: 60 mins (see the sketch after this list).
- UI changes: Prepare and Cancel Maintenance in the Management Server section, a Connected Agents tab, and new fields for hosts and management servers.
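As referenced above, here is a minimal sketch of how the maintenance states and window timeout could interact with the forced flag; the state names and timeout setting come from this change set, but the logic shown is illustrative only, not the actual implementation.
```java
// Illustrative only: a simplified view of the management server maintenance window.
import java.time.Duration;
import java.time.Instant;

public class MsMaintenanceExample {
    enum State { Up, PreparingForMaintenance, Maintenance }

    private State state = State.Up;
    private Instant windowStart;
    // management.server.maintenance.timeout, default 60 mins
    private final Duration window = Duration.ofMinutes(60);

    void prepareForMaintenance(boolean forced) {
        if (state == State.Up) {
            state = State.PreparingForMaintenance;
            windowStart = Instant.now();
            // agent connections are transferred to other management servers here...
        } else if (forced && windowTimedOut()) {
            state = State.Maintenance;   // operator forces maintenance after the window expired
        }
    }

    void onAllAgentsTransferred() {
        if (state == State.PreparingForMaintenance) {
            state = State.Maintenance;
        }
    }

    private boolean windowTimedOut() {
        return windowStart != null && Instant.now().isAfter(windowStart.plus(window));
    }
}
```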
* Updated pending jobs check timer task with ScheduledExecutorService
* keep maintenance state on trigger shutdown call when ms is in maintenance
* add pending jobs count to ms response
* during ms heartbeat, update state to up only when it's down
* allow vm work jobs of async job created before prepare for maintenance
* Revert "keep maintenance state on trigger shutdown call when ms is in maintenance"
This reverts commit 607e13364679eac897f4d146bb3325ea7a61ba17.
* skip maintenance test when multiple management servers are not available, and not configured in host setting for kvm
* 4.20:
merge errors fixed
Restrict the migration of volumes attached to VMs in Starting state (#9725)
server, plugin: enhance storage stats for IOPS (#10034)
Introducing granular command timeouts global setting (#9659)
Improve logging to include more identifiable information (#9873)
Adds a framework-layer change to allow retrieving and storing IOPS stats for storage pools. A custom `PrimaryStoreDriver` can implement the method `getStorageIopsStats` to return IOPS stats. The existing method `getUsedIops` can also be overridden by such plugins when only used IOPS is returned.
For testing purpose, implementation has been added for simulator hypervisor plugin to return capacity and used IOPS for a pool.
For local storage pool, implementation has been added using iostat to return currently used IOPS.
StoragePoolResponse class has been updated to return IOPS values which allows showing IOPS values in UI for different storage pool related views and APIs.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
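A minimal sketch of how a custom driver could surface these stats follows; only the method names `getStorageIopsStats` and `getUsedIops` come from the description above, while the interface shape and types are stand-ins.
```java
// Illustrative only: a stand-in for a custom primary store driver exposing IOPS stats.
public class IopsStatsExample {
    record IopsStats(Long capacityIops, Long usedIops) { }

    interface PrimaryStoreDriverLike {
        // Return capacity and used IOPS for the pool, or null if the backend cannot report them.
        default IopsStats getStorageIopsStats(String poolUuid) {
            return null;
        }

        // Plugins that can only report used IOPS may override just this method.
        default Long getUsedIops(String poolUuid) {
            IopsStats stats = getStorageIopsStats(poolUuid);
            return stats == null ? null : stats.usedIops();
        }
    }
}
```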
* Improve logging to include more identifiable information for kvm plugin
* Update logging for scaleio plugin
* Improve logging to include more identifiable information for default volume storage plugin
* Improve logging to include more identifiable information for agent managers
* Improve logging to include more identifiable information for Listeners
* Replace ids with objects or uuids
* Improve logging to include more identifiable information for engine
* Improve logging to include more identifiable information for server
* Fixups in engine
* Improve logging to include more identifiable information for plugins
* Improve logging to include more identifiable information for Cmd classes
* Fix toString method for StorageFilterTO.java