* Add source VM name on virt-v2v migration log entries
* Improve feedback by displaying the running import tasks
* Add source VM name prefix on more conversion logs
* Improve task listing and also list completed tasks
* Pass extra parameters to virt-v2v if the administrator allows it via a global setting
* Add option to force converting directly to a storage pool
* Refactor based on review comments
* Add properties for env vars for the instance conversion
* Add separate component for Import VM Tasks
* Apply Copilot suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fix importing unmanaged instances due to incorrect internal name
* Add VM prefix on each log operation for conversion
* Log the original VM name instead of the cloned VM in case of cloning
* Allow searching storage pool by UUID after conversion to support SharedMountPoint
* Fix search pools logic
* Improve UI and add checks for force convert to pool parameter
* Support Local storage when forceconverttopool is set to true
* Add config key for allowed extra params and add validation (see the sketch after this commit list)
* Fix params lists
* Fix compile error
* Remove extra stubbings
* Fix extra params execution
---------
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
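A minimal sketch of the extra-parameter allow-list validation referenced in the virt-v2v commits above, assuming the global setting stores a comma-separated list of permitted virt-v2v flags; the class and setting values here are illustrative, not the actual implementation:
```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class VirtV2vExtraParamsSketch {
    // Illustrative values standing in for the two global settings
    static final String ALLOWED_FLAGS = "-of, --mac, --compressed";
    static final String REQUESTED = "-of qcow2 --compressed";

    // Reject any flag token that is not on the administrator's allow-list
    static List<String> validate(String requested, String allowedCsv) {
        Set<String> allowed = Arrays.stream(allowedCsv.split(","))
                .map(String::trim).collect(Collectors.toSet());
        List<String> tokens = Arrays.asList(requested.trim().split("\\s+"));
        for (String token : tokens) {
            if (token.startsWith("-") && !allowed.contains(token)) {
                throw new IllegalArgumentException("virt-v2v parameter not allowed: " + token);
            }
        }
        return tokens; // safe to append to the virt-v2v command line
    }

    public static void main(String[] args) {
        System.out.println(validate(REQUESTED, ALLOWED_FLAGS));
    }
}
```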
* Migrate volume improvements: bypass secondary storage when copying a volume directly between pools is allowed (eligibility sketched after this group of commits)
* Bypass secondary storage when copying volumes between zone-wide pools and
- local storage on a host in the same zone
- cluster-wide pools in the same zone
* Bypass secondary storage for volumes on a Ceph/RBD pool when the scope permits
* Fix destination disk format while migrating a volume from Ceph/RBD to NFS, and some code improvements
* Add unit tests
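The eligibility check can be pictured with a small sketch; the names below are hypothetical stand-ins, not CloudStack's actual classes, and only encode the conditions listed above:
```java
public class DirectCopySketch {
    enum Scope { ZONE, CLUSTER, HOST }

    // Direct copy (no secondary storage) per the conditions above: a zone-wide
    // source to local or cluster-wide pools in the same zone, or a Ceph/RBD
    // source when the scope permits (simplified here).
    static boolean canBypassSecondaryStorage(Scope srcScope, long srcZone, boolean srcIsRbd,
                                             Scope dstScope, long dstZone) {
        if (srcZone != dstZone) {
            return false; // only within the same zone
        }
        boolean zoneWideToLocalOrCluster = srcScope == Scope.ZONE
                && (dstScope == Scope.HOST || dstScope == Scope.CLUSTER);
        return zoneWideToLocalOrCluster || srcIsRbd;
    }

    public static void main(String[] args) {
        // zone-wide -> local storage on a host in the same zone: bypass allowed
        System.out.println(canBypassSecondaryStorage(Scope.ZONE, 1, false, Scope.HOST, 1));
    }
}
```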
* Update suitable disk offering(s) for volume(s) after migrating a VM with volumes when the pool type (shared or local) changes
Currently, migrating a VM with volume(s) bypasses the volumes' service and disk offerings: the target pools for migration are specified explicitly, so the offerings are ignored. An offering change is required when the pool type (shared or local) changes, mainly
- when a volume on a shared pool is migrated to a local pool
- when a volume on a local pool is migrated to a shared pool
* Show a proper message when migrating a volume if the target pool and the offering type mismatch (one is shared, the other local)
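As a sketch of the suitability rule (illustrative names, not the actual implementation): an offering only remains suitable when its storage type matches the pool the volume landed on.
```java
public class OfferingSuitabilitySketch {
    static boolean offeringSuitsPool(boolean offeringUsesLocalStorage, boolean poolIsLocal) {
        return offeringUsesLocalStorage == poolIsLocal;
    }

    public static void main(String[] args) {
        // Volume migrated from a shared pool to a local pool: the shared-storage
        // offering no longer suits, so a local-storage offering must be found
        System.out.println(offeringSuitsPool(false, true)); // false -> offering change required
    }
}
```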
* Consider host scope first during endpoint selection while copying between primary storages
* Update disk offering count (for listDiskOfferings api) while removing offerings with tags mismatch with storage tags
* Update gson date format for serializing/deserializing Date in MS stats (across multiple management servers)
* Address review
* Address review comments, and add unit tests
* Added a unit test with a different date format
* Use separate Gson for MS stats serialization/deserialization
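A minimal sketch of the dedicated-Gson approach (requires the Gson library); the exact date pattern used by the code is an assumption here:
```java
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import java.util.Date;

public class MsStatsGsonSketch {
    // Dedicated Gson with an explicit date pattern so Date fields round-trip
    // identically on every management server; the pattern is an assumption.
    private static final Gson MS_STATS_GSON = new GsonBuilder()
            .setDateFormat("yyyy-MM-dd HH:mm:ss")
            .create();

    public static void main(String[] args) {
        String json = MS_STATS_GSON.toJson(new Date());
        Date back = MS_STATS_GSON.fromJson(json, Date.class);
        System.out.println(json + " -> " + back);
    }
}
```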
This PR allows attaching GPU devices via PCI passthrough, mdev, or VF to an Instance on KVM.
It allows the operator to discover the GPU devices on a KVM host and create a Compute Offering with GPU support based on the GPU devices available on the host. Once the operator has created the Compute Offering, users can use it to launch Instances with GPU devices.
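As a rough illustration of how such devices are typically discovered on a KVM host (a generic sysfs walk, not the actual agent code):
```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class GpuDiscoverySketch {
    public static void main(String[] args) throws IOException {
        Path pciRoot = Paths.get("/sys/bus/pci/devices");
        if (!Files.isDirectory(pciRoot)) return;
        try (DirectoryStream<Path> devices = Files.newDirectoryStream(pciRoot)) {
            for (Path dev : devices) {
                String pciClass = Files.readString(dev.resolve("class")).trim();
                // 0x03xxxx is the PCI display controller class (VGA/3D)
                if (pciClass.startsWith("0x03")) {
                    boolean mdevCapable = Files.isDirectory(dev.resolve("mdev_supported_types"));
                    boolean sriovVf = Files.exists(dev.resolve("physfn")); // VFs link back to their PF
                    System.out.printf("%s class=%s mdev=%b vf=%b%n",
                            dev.getFileName(), pciClass, mdevCapable, sriovVf);
                }
            }
        }
    }
}
```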
The Extensions Framework in Apache CloudStack is designed to provide a flexible and standardised mechanism for integrating external systems and custom workflows into CloudStack’s orchestration process. By defining structured hook points during key operations—such as virtual machine deployment, resource preparation, and lifecycle events—the framework allows administrators and developers to extend CloudStack’s behaviour without modifying its core codebase.
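The hook-point idea can be pictured with a purely illustrative interface; this is not the framework's real API, only the shape of a structured extension callback around an orchestration step:
```java
import java.util.Map;

// Illustrative only: a structured hook point around instance deployment.
public interface InstanceLifecycleHook {
    record HookResult(boolean proceed, Map<String, String> extraDetails) {}

    // Called before CloudStack proceeds with deployment; an external system
    // can veto the operation or enrich it with extra details.
    HookResult beforeDeploy(String instanceUuid, Map<String, String> details);

    // Called after the operation completes, for external bookkeeping.
    void afterDeploy(String instanceUuid, boolean success);
}
```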
The Netris Plugin introduces Netris as a network service provider in CloudStack, enabling the creation and management of Virtual Private Clouds (VPCs) and orchestrating the following network functionalities:
- Network segmentation with Netris-VXLAN isolation method
- Routing between "public" IP and network segments with an ACS ROUTED mode offering
- SourceNAT, DNAT, 1:1 NAT between "public" IP and network segments with an ACS NATTED mode offering
- Routing between VPC network segments (tiers in ACS nomenclature)
- Access Lists (ACLs) between VPC tiers and "public" network (TCP, UDP, ICMP) both as global egress rules and "public" IP specific ingress rules.
- ACLs between VPC network tiers (TCP, UDP, ICMP)
- External load balancing – between VPC network tiers and "public" IP
- Internal load balancing – between VPC network tiers
- CloudStack Virtual Router services (DHCP, DNS, UserData, Password Injection, etc.)
* Cumulative enhancements for ScaleIO: MDM add/remove, host prepare/unprepare, and validation that the Storage Pool can be created in the agent.
- Implemented validation to fail host disconnect from a storage pool if there are volumes attached, since SDC client MDM removal requires the scini service to be restarted
- Implemented storage pool validation by checking whether the MDM addresses from the configuration file and from memory (using the CLI) match; otherwise the ModifyStoragePool command fails
- Introduced a configuration key for the wait applied after making MDM changes for ScaleIO: powerflex.mdm.change.apply.timeout.ms (default 1000 ms; see the sketch after this block)
- Implemented logic to apply this timeout after making MDM changes for ScaleIO in the prepare and unprepare logic
- Added detection of MDM removal support via the CLI
- If MDM removal via the CLI is supported, use the CLI; otherwise fall back to editing drv_cfg.txt and restarting scini
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
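A minimal sketch of applying the configured wait; the helper and the way the agent reads its configuration are assumptions, only the key name and default come from the notes above:
```java
import java.util.Map;

public class MdmChangeWaitSketch {
    // Key name and default are from the notes above; the helper is hypothetical
    static final String KEY = "powerflex.mdm.change.apply.timeout.ms";
    static final long DEFAULT_MS = 1000L;

    // Give scini/the SDC time to settle after an MDM add or removal
    static void waitAfterMdmChange(Map<String, String> agentConfig) throws InterruptedException {
        long timeoutMs = Long.parseLong(agentConfig.getOrDefault(KEY, String.valueOf(DEFAULT_MS)));
        Thread.sleep(timeoutMs);
    }

    public static void main(String[] args) throws InterruptedException {
        waitAfterMdmChange(Map.of()); // falls back to the 1000 ms default
        System.out.println("MDM change applied");
    }
}
```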
* Management Server - Prepare for Maintenance and Cancel Maintenance improvements:
- Added new setting 'management.server.maintenance.ignore.maintenance.hosts' to ignore hosts in maintenance states while preparing management server for maintenance. This skips agent transfer and agents count check for hosts in maintenance.
- Rebalance indirect agents after cancel maintenance, using the rebalance parameter in the cancelMaintenance API
- Force maintenance after the maintenance window timeout, using the forced parameter in the prepareForMaintenance API (see the usage sketch after this commit list)
- Propagate 'indirect.agent.lb.check.interval' setting change to the host agents.
* Rebase fixes
* code improvements, cleanup
* [UI] Set rebalance true by default in cancel maintenance dialog
* Update MS state after executing cluster cmd in the target MS, and some code improvements
* code improvements
* Ensure the host lb algorithm 'shuffle' is applied once before disabling the indirect agent lb check background task
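A hedged usage sketch of the maintenance APIs named above; the managementserverid parameter name and endpoint are assumptions, and CloudStack's API signing/session handling is omitted:
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MsMaintenanceCallsSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://mgmt:8080/client/api?response=json"; // placeholder endpoint
        String msId = "MS_UUID";                                   // placeholder management server id
        // Force maintenance after the maintenance window timed out
        call(client, base + "&command=prepareForMaintenance&managementserverid=" + msId + "&forced=true");
        // Cancel maintenance and rebalance indirect agents back to this node
        call(client, base + "&command=cancelMaintenance&managementserverid=" + msId + "&rebalance=true");
    }

    static void call(HttpClient client, String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        System.out.println(client.send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```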
Adds a new interface for image selection (template/ISO) for an instance in the UI.
The old interface can still be used, and this can be configured using the UI configuration (config.json).
Guest OS categories have been improved with the ability to create new categories, delete an existing category, and mark a category as featured so that it shows up in the image selection interface in the UI.
New APIs added:
- addOsCategory
- deleteOsCategory
- updateOsCategory
APIs updated:
- updateOsType
- listTemplates
- listOsCategories
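A hedged usage sketch of the OS category APIs listed above; parameter names other than the command names ('name', 'featured', 'id') are assumptions, and API signing/session handling is omitted:
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OsCategoryCallsSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://mgmt:8080/client/api?response=json"; // placeholder endpoint
        String id = "CATEGORY_UUID";                               // placeholder category id
        String[] queries = {
                "&command=addOsCategory&name=Rocky%20Linux&featured=true",
                "&command=updateOsCategory&id=" + id + "&featured=false",
                "&command=deleteOsCategory&id=" + id,
                "&command=listOsCategories"
        };
        for (String q : queries) {
            HttpRequest req = HttpRequest.newBuilder(URI.create(base + q)).GET().build();
            System.out.println(client.send(req, HttpResponse.BodyHandlers.ofString()).statusCode());
        }
    }
}
```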
Several improvements in the UI, especially related to forms: DeployVM, ReinstallVM, CreateVnfAppliance, AddAutoscaleGroup.
The DeployVM form can now be opened from the template/ISO details view with query params.
Reorganized (removed and added some) OS categories to the following (in the same order):
```
1. Ubuntu
2. Debian
3. Fedora
4. CentOS
5. Rocky Linux
6. Alma Linux
7. Oracle
8. RedHat
9. SUSE
10. Windows
11. Other
```
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/500
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Introducing Storage Access Groups to define the host and storage pool connections
In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.
Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.
A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, including those with or without a storage access group.
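A small self-contained sketch of this connectivity rule (illustrative names, not CloudStack's actual classes):
```java
import java.util.Collections;
import java.util.Set;

public class StorageAccessGroupsSketch {
    // A pool without access groups connects to every host; a pool with groups
    // connects only to hosts that share at least one group (including groups
    // inherited from the host's cluster, pod, or zone).
    static boolean canConnect(Set<String> poolGroups, Set<String> hostGroups) {
        return poolGroups.isEmpty() || !Collections.disjoint(poolGroups, hostGroups);
    }

    public static void main(String[] args) {
        System.out.println(canConnect(Set.of("nvme"), Set.of("nvme", "hdd"))); // true: shared group
        System.out.println(canConnect(Set.of("nvme"), Set.of()));              // false: host lacks the group
        System.out.println(canConnect(Set.of(), Set.of("hdd")));               // true: ungrouped pool
    }
}
```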
* KVM incremental snapshot feature
* Fix log
* Fix merge issues
* Fix creation of folder
* Fix snapshot update
* Check for hypervisor type during parent search
* Fix some small bugs
* Fix tests
* Address reviews
* Do not remove StorPool snapshots
* Add support for downloading diff snapshots
* Add multiple zones support
* Make copied snapshots have normal names
* Address reviews
* Fix in progress
* Continue fix
* Fix bulk delete
* Change log to trace
* Start fix on multiple secondary storages for a single zone
* Fix multiple secondary storages for a single zone
* Fix tests
* Fix log
* Remove bitmaps when deleting snapshots
* Minor fixes
* Update SQL to new file
* Fix merge issues
* Create new snap chain when changing configuration
* Add verification
* Fix snapshot operation selector
* Fix bitmap removal
* Fix chain on different storages
* Address reviews
* Fix small issue
* Fix test
---------
Co-authored-by: João Jandre <joao@scclouds.com.br>
* Update last agents during ms maintenance, and some code improvements
* Send a 503 (Service Unavailable) response status when maintenance or shutdown is initiated
[Any load balancer in the clustered environment can then avoid routing requests to this MS node; see the sketch at the end of this list]
* Migrate systemvm agents before routing host agents, and some code improvements
* Added events for ms maintenance and shutdown operations
* Added the following MS maintenance and shutdown improvements:
- block new agent connections during prepare for maintenance of the MS
- maintain an avoid-MS list
- propagate the updated management servers list and LB algorithm (in the host and indirect.agent.lb.algorithm settings, respectively) to systemvm (non-routing) agents
- move the setup-MS-list and migrate-agent-connections operations to an executor service
- migrate agent connections through the executor, and send the answer to the MS host that initiated the migration
- re-initialize the SSL handshake executor if it is shut down
- don't allow prepare for maintenance or shutdown when other management server nodes are in preparing states
- don't allow trigger shutdown when the management server is up and other management server nodes are in preparing states
- stop the agent connections monitor on MS maintenance
- update the avoid-MS list in the ready command
- update the connected host from the client connection
- update last agents in MS metrics from the database
- update some agent config descriptions
- update the last management server in the hosts during shutdown
- add agents and lastagents to the management server response
- update management server maintenance & shutdown unit tests
- some code improvements
* refactored code / addressed comments
* removed shutdown testcase (maybe, calling System.exit)
* Revert "removed shutdown testcase (maybe, calling System.exit)"
This reverts commit e14b0717152ef6c8be102d61c80f42803a53172e.
* avoid system.exit during shutdown test
* code improvements
* testcase fix
* Fix cutoff time in agent connections monitor thread
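The 503 behaviour from this list can be pictured with an illustrative servlet-filter sketch; the actual wiring in CloudStack differs, and this assumes the javax.servlet API on the classpath:
```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class MaintenanceFilterSketch implements Filter {
    // Flipped by the maintenance/shutdown manager when preparation starts
    private volatile boolean maintenanceOrShutdownInitiated;

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (maintenanceOrShutdownInitiated) {
            // 503 lets an upstream load balancer stop routing to this MS node
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            return;
        }
        chain.doFilter(req, res);
    }
}
```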