59 Commits

Author SHA1 Message Date
Abhisar Sinha
671d8ad704
Track volume usage data at a VM granularity as well (#11531)
Co-authored-by: Vishesh <8760112+vishesh92@users.noreply.github.com>
2025-11-12 09:32:01 +01:00
slavkap
e5f61164b3
Support snapshot copy to primary storage in different zones (#9478)
* Support copying snapshots between zones to a different StorPool primary storage
2025-08-04 16:35:16 +05:30
Daan Hoogland
0b3959221b Merge branch '4.20' 2025-07-29 16:50:55 +02:00
Abhisar Sinha
1b74c2dd3f
Fix restore from NAS backup when the data disk is older than the root disk. (#11258) 2025-07-23 12:45:47 +02:00
João Jandre
53eb2c5b9b
File-based disk-only VM snapshot with KVM as hypervisor (#10632)
Co-authored-by: João Jandre <joao@scclouds.com.br>
Co-authored-by: Fabricio Duarte <fabricio.duarte.jr@gmail.com>
2025-07-16 08:54:07 +02:00
Harikrishna
b17808bfba
Introducing Storage Access Groups for better management of host and storage connections (#10381)
* Introducing Storage Access Groups to define the host and storage pool connections

In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.

Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.

A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, whether or not they have a storage access group.
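
The matching rule above can be pictured as a small predicate; the following is a minimal sketch with invented names (`hostGroups`, `poolGroups`), not CloudStack's actual implementation:

```java
import java.util.Collections;
import java.util.Set;

/** Illustrative only: models the host/pool connectivity rule described above. */
public class StorageAccessGroupRuleSketch {
    static boolean canConnect(Set<String> hostGroups, Set<String> poolGroups) {
        if (poolGroups.isEmpty()) {
            return true; // a pool without a storage access group connects to all hosts
        }
        // a pool with groups connects only to hosts sharing at least one group
        return !Collections.disjoint(hostGroups, poolGroups);
    }
}
```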
2025-05-19 11:33:29 +05:30
Daan Hoogland
08ad1c70ba Merge branch '4.19' into 4.20 2025-02-24 14:21:14 +01:00
Suresh Kumar Anaparti
b6cebe22f9
Fixed VMware import issue - check and update pools in the order of the disks (do not update by position) (#10409) 2025-02-18 08:47:05 +01:00
Daan Hoogland
4f3e8e8c5a Merge branch '4.19' into 4.20 2025-02-12 15:00:51 +01:00
Abhishek Kumar
a627ab67c2
server: fix pod retrieval during volume attach (#10324)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-02-07 12:59:23 +01:00
dahn
802bf5fce7
Revert "server: fix attach uploaded volume (#10267)" (#10323)
This reverts commit 1c84ce4e23e3fd243022c6c533fc14c10439c6f3.
2025-02-06 11:33:26 +01:00
Abhishek Kumar
ae2ffbe40b Merge remote-tracking branch 'apache/4.19' into 4.20 2025-01-31 16:54:22 +05:30
Abhishek Kumar
1c84ce4e23
server: fix attach uploaded volume (#10267)
* server: fix attach uploaded volume

Fixes #10120

When an uploaded volume is attached to a VM for which no existing volume
can be found, the operation resulted in an error. For such volumes, the server
needs to find a suitable pool first and copy them to it from the secondary
store.
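
As a rough sketch of that flow — all type and helper names below are invented stand-ins, not the server's actual code:

```java
import java.util.Optional;

/** Hypothetical, simplified model of the fix described above. */
public class AttachUploadedVolumeSketch {
    enum VolumeState { UPLOADED, READY }

    record Volume(String name, VolumeState state, String poolId) {}

    static Volume prepareForAttach(Volume vol) {
        if (vol.state() == VolumeState.UPLOADED) {
            // An uploaded volume has no primary-storage copy yet, so allocate a
            // pool first instead of failing because no existing volume is found...
            String poolId = allocateSuitablePool(vol)
                    .orElseThrow(() -> new IllegalStateException("no suitable pool"));
            // ...then copy it from the secondary store onto the chosen pool.
            return new Volume(vol.name(), VolumeState.READY, poolId);
        }
        return vol;
    }

    static Optional<String> allocateSuitablePool(Volume vol) {
        return Optional.of("pool-1"); // placeholder allocator
    }
}
```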

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* add unit tests

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

---------

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Boris Stoyanov - a.k.a Bobby <bss.stoyanov@gmail.com>
2025-01-29 13:14:16 +02:00
Vishesh
a4224e58cc
Improve logging to include more identifiable information (#9873)
* Improve logging to include more identifiable information for kvm plugin

* Update logging for scaleio plugin

* Improve logging to include more identifiable information for default volume storage plugin

* Improve logging to include more identifiable information for agent managers

* Improve logging to include more identifiable information for Listeners

* Replace ids with objects or uuids

* Improve logging to include more identifiable information for engine

* Improve logging to include more identifiable information for server

* Fixups in engine

* Improve logging to include more identifiable information for plugins

* Improve logging to include more identifiable information for Cmd classes

* Fix toString method for StorageFilterTO.java
2025-01-06 16:42:37 +05:30
Wei Zhou
8a1da3804c
Resize volume: add pool capacity disablethreshold for resize and allow volume auto migration (#9761)
* server: add global settings for volume resize

* resizeVolume: support automigrate

* Address Suresh's comments

* Update api/src/main/java/org/apache/cloudstack/api/command/user/volume/ResizeVolumeCmd.java

Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>

* address Suresh's comments

* UI: add autoMigrate to resizeVolume

* resizevolume: add unit tests

* resizevolume: add unit test for Allocated volume

---------

Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
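
A toy model of the behaviour this commit describes — resize in place while the pool stays under its capacity disable threshold, otherwise auto-migrate if allowed; the names and the candidate check are assumptions, not the actual settings or code:

```java
import java.util.List;

/** Illustrative only; simplified (candidate check ignores the volume's existing size). */
public class ResizeDecisionSketch {
    record Pool(String id, long capacityBytes, long usedBytes) {
        double usedFraction(long extraBytes) {
            return (double) (usedBytes + extraBytes) / capacityBytes;
        }
    }

    /** Returns the pool the volume should live on after the resize. */
    static Pool planResize(Pool current, long growBytes, double disableThreshold,
                           boolean autoMigrate, List<Pool> candidates) {
        if (current.usedFraction(growBytes) <= disableThreshold) {
            return current; // resize in place: pool stays below the disable threshold
        }
        if (!autoMigrate) {
            throw new IllegalStateException("pool over disable threshold and automigrate=false");
        }
        // Auto-migrate: pick a candidate pool that can absorb the resized volume.
        return candidates.stream()
                .filter(p -> p.usedFraction(growBytes) <= disableThreshold)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no pool below threshold"));
    }
}
```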
2024-12-02 10:28:14 +05:30
Daan Hoogland
abfa92928c merge conflicts 4.19 -> main 2024-09-09 14:48:20 +02:00
Suresh Kumar Anaparti
ebaf064d92
Fix root disk resize: don't allow it when the service offering has a root disk size; only allow it through a service offering change (#9428) 2024-09-06 10:45:28 +05:30
Rohit Yadav
85765c3125
backup: simple NAS backup plugin for KVM (#9451)
This is a simple NAS backup plugin for KVM which may later be expanded to other hypervisors. The plugin uses shared NAS storage on KVM hosts, such as NFS (or CephFS and others in future), to back up fully cloned VMs for backup & restore operations. This may NOT be as efficient and performant as some of the other B&R providers, but it may be useful for KVM environments that are okay with only full-instance backups and limited functionality.

Design & implementation follow the `networker` B&R plugin, and are simply:

- Implement B&R plugin interfaces
- Use the cmd-answer pattern to execute backup and restore operations on the KVM host when the VM is running (or needs to be restored); instead of a B&R API client, it relies on answers from the KVM agent, which executes the operations
- Backups are full VM domain snapshots, copied to a VM-specific folder on the NAS target (NFS) along with the domain XML
- Backup uses the libvirt live full-disk backup feature (https://libvirt.org/kbase/live_full_disk_backup.html), orchestrated via a virsh/bash script (nasbackup.sh) as libvirt-java lacks the bindings
- Supported instance volume storage for restore operations: NFS & local storage

Refer to the doc PR for feature limitations and usage details:
https://github.com/apache/cloudstack-documentation/pull/429
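
To make the cmd-answer point concrete, here is a bare-bones illustration of such a round-trip; the class and field names are invented for this sketch, not the plugin's real classes:

```java
/** Invented names: a minimal command/answer round-trip like the one the plugin uses. */
public class NasBackupSketch {
    // The management server sends a command describing what to back up...
    record TakeBackupCommand(String vmName, String nasMountPoint) {}

    // ...and the KVM agent replies with an answer, instead of a B&R API client call.
    record BackupAnswer(boolean succeeded, String backupPath, String details) {}

    /** What the agent side conceptually does on receipt of the command. */
    static BackupAnswer execute(TakeBackupCommand cmd) {
        // The real plugin orchestrates libvirt's live full-disk backup via a shell
        // script (nasbackup.sh) and copies disks + domain XML to a VM folder on NAS.
        String dest = cmd.nasMountPoint() + "/" + cmd.vmName();
        boolean ok = runBackupScript(cmd.vmName(), dest);
        return new BackupAnswer(ok, dest, ok ? "backup complete" : "script failed");
    }

    static boolean runBackupScript(String vmName, String dest) {
        return true; // placeholder for the virsh/bash orchestration
    }
}
```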

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
2024-09-05 22:19:13 +05:30
Abhisar Sinha
605534b417
feature: Shared Storage Filesystem as a First Class Feature (#9208)
This PR implements Storage filesystem as a first class feature.
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Filesystem+as+a+First+Class+Feature

Documentation PR: apache/cloudstack-documentation#420

Co-authored-by: Wei Zhou <weizhou@apache.org>
2024-09-05 17:22:32 +05:30
Abhishek Kumar
b29ec2bf12 Merge remote-tracking branch 'apache/4.19' 2024-03-01 17:40:58 +05:30
Harikrishna
c462be1412
New API "checkVolume" to check and repair any leaks or issues reported by qemu-img check (#8577)
* Introduced a new API checkVolumeAndRepair that allows users or admins to check a volume and repair any leaks observed.
Currently, this is supported only for KVM

* some fixes

* Added unit tests

* addressed review comments

* add repair volume while granting access

* Changed repair parameter to accept both leaks/all

* Introduced new global setting volume.check.and.repair.before.use to do volume check and repair before VM start or volume attach operations

* Added volume check and repair changes only during VM start and volume attach operations

* Refactored the names to look similar across the code

* Some code fixes

* remove unused code

* Renamed repair values

* Fixed unit tests

* changed version

* Address review comments

* Code refactored

* used volume name in logs

* Changed the API to Async and the setting scope to storage pool

* Fixed exit value handling with check volume command

* Fixed storage scope to the setting

* Fix volume format issues

* Refactored the log messages

* Fix formatting
2024-02-29 14:41:49 +05:30
Abhishek Kumar
592038a304
api,server,ui: granular resource limit management (#8362)
Feature spec: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Granular+Resource+Limit+Management

Introduces the concept of tagged resource limits for granular resource limit management. Limits can be enforced on accounts and domains for the deployment of entities that use a tagged resource. Currently, tagged resource limits can be used for the following resource types:

Host limits
- user_vm
- cpu
- memory

Storage limits
- volume
- primary_storage

The following global settings can be used to specify the tags for which limits need to be enforced:

Host: `resource.limit.host.tags`
Storage: `resource.limit.storage.tags`

Options for specifying tagged resource limits and viewing tagged resource usage are available in the UI.

Enhances the use of templatetag for VM deployment and template creation

Adds option to list service/compute offerings that can be used with a given template. A new parameter named templateid has been added.

Adds an option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API which, when passed, returns the suitableforvirtualmachine param in the response.
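
A simplified model of how such tagged limits compose — a request must fit under the untagged limit and under every limit whose tag matches the resource; all names are illustrative, not the server's accounting code:

```java
import java.util.Map;
import java.util.Set;

/** Illustrative only: "" is the untagged limit; other keys come from the tag settings. */
public class TaggedLimitSketch {
    static boolean withinLimits(Map<String, Long> limits, Map<String, Long> usage,
                                Set<String> resourceTags, long requested) {
        for (Map.Entry<String, Long> e : limits.entrySet()) {
            String tag = e.getKey();
            boolean applies = tag.isEmpty() || resourceTags.contains(tag);
            if (applies && usage.getOrDefault(tag, 0L) + requested > e.getValue()) {
                return false; // would exceed the account/domain limit for this tag
            }
        }
        return true;
    }
}
```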
2024-02-19 14:17:34 +05:30
Vishesh
399bd0a067
Upgrade to mockito 4 and handle Mockito deprecations (#8427) 2024-02-06 14:20:37 +01:00
Abhishek Kumar
05b0a8ae86 Merge remote-tracking branch 'apache/4.18' 2023-12-12 16:48:21 +05:30
Abhishek Kumar
ce586e3eca
server: fix resource count during assign volume (#8171)
ResourceType.volume stores the count of volumes, not their size, so the increment/decrement should be just 1 when assigning a volume to a different account.
2023-12-11 15:45:42 +05:30
João Jandre
26b01f6f3b
Flexible tags for hosts and storage pools (#7489)
Co-authored-by: João Jandre <joao@scclouds.com.br>
2023-11-30 09:36:47 +01:00
Abhishek Kumar
543c54c718
api,server,ui: snapshot copy, multi-zone replica (#7873)
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.

Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The copySnapshot response returns zone and download details from the first destination zone of the request; this behaviour is similar to the `copyTemplate` API.

In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot is taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.

As snapshots can be present in multiple zones (secondary stores), a new parameter `zoneid` has been added to the `deleteSnapshot` API to delete the snapshot copy in a specific zone.

`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.

Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.

`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.
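
In sketch form, the flow described above — snapshot in the base zone, then copy to each additional zone the way template copy works; the method names are placeholders:

```java
import java.util.List;

/** Placeholder names throughout; models only the described flow. */
public class MultiZoneSnapshotSketch {
    record Snapshot(String id, String zoneId) {}

    static Snapshot createWithZones(String volumeId, String baseZone, List<String> zoneIds) {
        // The snapshot is always taken in the zone that holds the volume...
        Snapshot base = takeSnapshot(volumeId, baseZone);
        // ...then copied to each requested zone, with the source zone acting as
        // the web server the destination downloads from (like template copy).
        for (String zone : zoneIds) {
            copySnapshot(base, zone);
        }
        return base;
    }

    static Snapshot takeSnapshot(String volumeId, String zone) {
        return new Snapshot("snap-" + volumeId, zone);
    }

    static void copySnapshot(Snapshot s, String destZone) { /* download from source zone */ }
}
```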

----
New API added
`copySnapshot`

Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form

Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
2023-10-23 09:01:58 +02:00
Vishesh
e721f3b379
Remove powermock from server (#7986) 2023-09-22 14:07:08 +02:00
Harikrishna
80ca3acf15
Allow encrypted volume migration for PowerFlex volumes (#7757) 2023-07-21 10:08:21 +03:00
Nicolas Vazquez
c809201247
Fix: Volumes on lost local storage cannot be removed (#7594) 2023-06-23 12:22:15 +02:00
Abhishek Kumar
8849e0f464
server: fix volume detach operation when no vm host (#7526)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2023-05-25 15:48:27 +02:00
Wei Zhou
54606dc965 server: fix 4.18/main build error after merge forward 2023-04-05 18:29:46 +02:00
nvazquez
83c2bfacd8
Merge branch '4.17' 2023-01-30 07:53:58 -03:00
Nicolas Vazquez
c78a777d3a
Fix: memory leak on volume allocation (#7136) 2023-01-30 09:44:50 +01:00
João Jandre
61a722548f
Create API to reassign volume (#6938) 2023-01-27 11:10:56 +01:00
SadiJr
f5b3cb59ee
[Veeam] enable volume attach/detach in VMs with Backup Offerings (#6581) 2023-01-23 09:34:46 +01:00
João Jandre
d6044fb5a6
Fix to make recovered volumes be accounted for by Usage (#6772) 2022-10-11 14:05:14 +02:00
Harikrishna
713a236843
UserData as first class resource (#6202)
This PR introduces a new feature that makes userdata a first-class resource, much like the existing SSH keys.

Detailed feature specification document:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Userdata+as+a+first+class+resource
2022-10-05 17:34:59 +05:30
Marcus Sorensen
697e12f8f7
kvm: volume encryption feature (#6522)
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is handled transparently to the guest OS, and is intended to handle VM guest data encryption at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.

In some cases cloud customers may still prefer to maintain their own guest-level volume encryption if they don't trust the cloud provider. However, for private cloud cases this greatly simplifies the guest OS experience: volume encryption runs for guests without the user having to manage keys, deal with key servers, or make guest booting dependent on network connectivity to them (e.g. Tang), especially in cases where users occasionally attach/detach data disks and move them between VMs.

The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.

This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.

NOTE: While not required, operations can be significantly sped up by ensuring that the `rng-tools` package and service are installed and running on the management server and hypervisor hosts. For EL hosts the service is `rngd` and for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, and otherwise would only affect users who actually use the encryption feature. If you find tests or volume creates blocking on encryption, check this first.

### Management Server

##### API

* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is added here in case there is ever any other need to encrypt something related to the guest configuration, like the RAM of a VM.  This has been refactored to deal with the new separation of service offering from disk offering internally.
* listDiskOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off, rather the offering should be referenced. This follows the same pattern as other disk offering based settings such as the IOPS of the volume.

##### Volume functions

A decent effort has been made to ensure that the most common volume functions have either been cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.

Many of these features could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change is already overwhelming.

Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume

Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).

##### Primary Storage Support

For storage developers, adding encryption support involves:

1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during allocation of storage to match storage types that support encryption to storage that supports it.

2. Implementing encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that are requesting encryption. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume. Or (as is the case with the KVM storage types), as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases, if encryption is requested.
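
For illustration, the two integration points might look roughly like this; the types below are invented analogues, not CloudStack's actual SPI:

```java
/** Rough analogue of the two steps above; invented types only. */
public class EncryptionCapableDriverSketch {
    // 1. The pool type advertises encryption support, so allocation can match
    //    encryption-requesting volumes to capable storage.
    record PoolType(String name, boolean supportsEncryption) {}

    // The passphrase travels with the volume object handed to the driver.
    record VolumeSpec(String name, byte[] passphrase) {
        boolean wantsEncryption() { return passphrase != null; }
    }

    // 2. The driver acts on the passphrase during volume lifecycle calls.
    interface DriverSketch {
        void createVolume(VolumeSpec spec);
    }

    static final DriverSketch EXAMPLE = spec -> {
        if (spec.wantsEncryption()) {
            // e.g. pass an encryption flag to the backend array, or format LUKS
            // on the hypervisor host -- whatever the storage supports.
        }
        // ...create the volume as usual...
    };
}
```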

##### Scheduling

For the KVM implementations specified above, we are dependent on the KVM hosts having support for volume encryption tools. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is done via a probe during agent startup that looks for functioning `cryptsetup` and support in `qemu-img`. This is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.

The `EndPointSelector` interface and `DefaultEndpointSelector` have had new methods added, which allow the caller to ask for endpoints that support encryption.  This can be used by storage drivers to find the proper hosts to send storage commands that involve encryption. Not all volume activities will require a host to support encryption (for example a snapshot backup is a simple file copy), and this is the reason why the interface has been modified to allow for the storage driver to decide, rather than just passing the data objects to the EndpointSelector and letting the implementation decide.

VM scheduling has also been modified. When a VM start is requested, if any volume that requires encryption is attached, it will filter out hosts that don't support encryption.

##### DB Changes

A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table, and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table, referencing this passphrase, and a foreign key added to ensure passphrases that are referenced can't be removed from the database.  The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.

#### KVM Agent

For the KVM storage pool types supported, the encryption has been implemented at Qemu itself, using the built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process, and decrypted before the block device is visible to the guest.  This may not be necessary in order to implement encryption for /your/ storage pool type, maybe you have a kernel driver that decrypts before the block device on the system, or something like that. However, it seemed like the simplest, common place to terminate the encryption, and provides the lowest surface area for decrypted guest data.

For qcow2 based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block based (currently just ScaleIO storage), the `cryptsetup` utility is used to format the block device as LUKS for data disks, but `qemu-img` and its LUKS support is used for template copy.

Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:

1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.

2. In cases where `qemu-img` needs to use this passphrase for volume operations, it is written to a `KeyFile` on the cloudstack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable` and when it is closed, it is deleted. This allows us to try-with-resources any volume operations and get the KeyFile removed regardless.
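
The `KeyFile` idea in miniature — a `Closeable` that deletes the secret on close, so try-with-resources guarantees cleanup; this is a re-creation for illustration, not the agent's actual class:

```java
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/** Re-created for illustration: a temporary key file that is deleted when closed. */
public class KeyFileSketch implements Closeable {
    private final Path path;

    KeyFileSketch(Path dir, byte[] passphrase) throws IOException {
        // The real agent writes to its configured tmpfs so the secret stays in memory.
        this.path = Files.createTempFile(dir, "keyfile", null);
        Files.write(path, passphrase);
    }

    Path path() { return path; }

    @Override
    public void close() throws IOException {
        Files.deleteIfExists(path); // closing removes the passphrase file
    }

    public static void main(String[] args) throws IOException {
        byte[] secret = {1, 2, 3};
        try (KeyFileSketch key = new KeyFileSketch(Path.of("/tmp"), secret)) {
            // qemu-img would be invoked here with the key file's path; the file
            // is removed no matter how this block exits.
            System.out.println("key at " + key.path());
        }
    }
}
```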

In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`.  These `qemu-img` flags have been designed to supersede some of the existing, older flags being used today (such as choosing file formats and paths), and an effort could be made to switch over to these wholesale. However, for now we have instead opted to keep existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose to use either way.

It should be noted that there are also a few different Enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, the encryption format for the `cryptsetup` utility is `LuksType.LUKS`, while `QemuImg` has `QemuImg.PhysicalDiskFormat.LUKS`.

Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details. These may require changes all the way up through the control plane. However, in practice Libvirt and Qemu only support LUKS1 today. Additionally, the cipher details aren't required in order to use an encrypted volume; since they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the one encryption format upon volume creation, which is persisted in the volumes table and then available later as needed. In the future when LUKS2 is standard and fully supported, we could move to it as the default; old volumes will still reference LUKS1 and have the headers on-disk to ensure they remain usable. We could also possibly support an automatic upgrade of the headers down the road, or a volume migration mechanism.

Every version of cryptsetup and qemu-img tested on variants of EL7 and Ubuntu that support encryption use the XTS-AES 256 cipher, which is the leading industry standard and widely used cipher today (e.g. BitLocker and FileVault).

Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
2022-09-27 10:20:59 +05:30
dahn
90a0ee0b6c
fix pseudo random behaviour in pool selection (#6307)
* refactor and log trace

* tracelogs

* shuffle pools with real randomiser (see the sketch after this list)

* single retrieval of async job context

* some review comments addressed

* Apply suggestions from code review

Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>

* log formatting

* integration test for distribution of volumes over storages

* move test to smoke tests

* imports

* sonarcloud issue # AYCOmVntKzsfKlhz0HDh

* spellos

* review comments

* review comments

* sonarcloud issues

* unittest

* import

* Update AbstractStoragePoolAllocatorTest.java

Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
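
The "real randomiser" bullet above boils down to shuffling the candidate pools with an explicit RNG; a minimal sketch, not the allocator's actual code:

```java
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Sketch of shuffling pools with a real randomiser. */
public class PoolShuffleSketch {
    private static final SecureRandom RNG = new SecureRandom();

    static List<String> shuffledCopy(List<String> pools) {
        List<String> copy = new ArrayList<>(pools);
        // Collections.shuffle with an explicit RNG yields a uniform permutation,
        // avoiding pseudo-random selection that keeps favouring the same pools.
        Collections.shuffle(copy, RNG);
        return copy;
    }
}
```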
2022-06-10 08:06:23 -03:00
Gabriel Beims Bräscher
50b2dc2789
server: Fix #6263 Cannot scale VM with custom offering (#6267)
* When scaling with a custom offering that changes only CPU/memory and keeps the same disk offering, an exception was thrown.

This commit fixes such cases by checking if the operation is happening on a custom service offering.

* Improve the unit tests that cover null objects.
2022-04-15 20:28:31 +05:30
JoaoJandre
5f07ddaca9
Refactor account type (#6048)
* Refactor account type

* Added license.

* Address reviews

* Address review.

Co-authored-by: João Paraquetti <joao@scclouds.com.br>
Co-authored-by: Joao <JoaoJandre@gitlab.com>
2022-03-09 11:14:19 -03:00
Harikrishna
f15cab16da
server: Decouple service (compute) offering and disk offering (#5008)
Currently, our compute offerings and disk offerings are tightly coupled in many aspects. For example, when a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Creating a compute offering also takes a few disk-related parameters, which in any case apply only to the corresponding disk offering. I think this design was initially made to address the compute offering for the root volume created from a template. Changing the offering of a volume is also tightly coupled with storage tags and has to be done through different APIs, either migrateVolume or resizeVolume. Changing the disk offering should be seamless: it should consider the new storage tags and size, and place the volume in the appropriate state as defined by the disk offering.

more details are mentioned here https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring
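
In outline, the seamless change argued for above — resize in place when the current pool satisfies the new offering's storage tags, otherwise migrate; all names below are invented:

```java
import java.util.Set;

/** Invented names; models only the decision the paragraph argues for. */
public class ChangeOfferingSketch {
    record Pool(String id, Set<String> tags) {}
    record Offering(Set<String> requiredTags, long sizeBytes) {}

    enum Action { RESIZE_IN_PLACE, MIGRATE_THEN_RESIZE }

    static Action plan(Pool current, Offering newOffering) {
        // One API, two outcomes: the caller is not forced to choose between
        // resizeVolume and migrateVolume up front.
        return current.tags().containsAll(newOffering.requiredTags())
                ? Action.RESIZE_IN_PLACE
                : Action.MIGRATE_THEN_RESIZE;
    }
}
```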

* Schema changes and disk offering column change from "type" to "compute_only"

* Few more changes

* Decoupled service offering and disk offering

* Remove diskofferingid from vminstance VO

* Decouple service offering and disk offering states

* diskoffering getsize() is only for strict disk offerings

* Fix deployVM flow

* Added new API params to compute offering creation

* Add diskofferingstrictness to serviceoffering vo under quota

* Added overrideDiskOfferingId parameter in deploy VM API which will override disk offering for the root disk both in template and ISO case

Added diskSizeStrictness parameter in create Disk offering API which will decide whether to restrict resize or disk offering change of a volume

* Fix User vm response to show proper service offering and disk offerings

* Added disk size strictness in disk offering response

* Added disk offering strictness to the service offering response

* Remove comments

* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form

* Added diskoffering details to the service offering response

* Added UI changes in deployvm wizard to accept override disk offering id

* Fix delete compute offering

* Fix VM deployment from custom service offering

* Move uselocalstorage column access from service offering to disk offering

* UI: Separated compute and disk releated parameters in add compute offering wizard, also added association to disk offering

* Fixed diskoffering automatic selection on add compute offering wizard

* UI: move compute only toggle button outside the box in add compute offering wizard

* Added volumeId parameter to listDiskOfferings API and the disksizestrictness flag of the current disk offering is honored while list disk offerings

* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration

* Added disk offering change checks during resize volume operation

* Added new API changeofferingforVolume API and corresponding changes

* Add UI form for changeOfferingForVolume API

* Fix UI conflicts

* Fix service offering usage as disk offering

* Fix unit test failures

* fix user_vm_view

* Addressed review comments

* Fixed service_offering_view

* Fix service offering edit flow

* Fix service offering constructor to address custom offering

* Fix domain_router_view to get proper service offering id

* Removed unused import

* Addressed review comments and fixed update service offering flow with storage tags

* Added marvin test cases for checking disk offering strictness

* review comments addressed

* Remove system_use column from disk offering join

* update volume_view to update system_use column from service offering and not disk offering

* Fix changeOfferingForVolume API for custom disk offering

* Fix global setting implementation

* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view

* Changes for override root disk offering in deployvm wizard in case of custom offering

* Fix a unit test case

* Fixed recent unit test cases with new serviceofferingvo constructor

* Fix unit test in VolumeApiServiceImpl

* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow

* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering

* Fix smoke test failures

* Added tool tip for migrate volume UI form

* Address review comments and fix UI form of deploy VM in case of ISO.

* Fixed resize volume UI form for data disk

* UI changes to disable override root disk size when override root disk offering is enabled

* UI fix in deploy vm wizard

* Fix listdiskoffering after rebasing with main

* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list

* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form

* Fix false response on updateDiskOffering API

* Added search field for changeofferingforvolume UI form

* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Removed DB changes from 4.16 upgrade file

* Resolving merge conflicts with main 4.17

* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.

* UI: Added automigrate checkbox in scale VM form

* Addes since attributes to new API params

* Added shrinkOK parameter to changeofferingforvolume API

* Added shrinkOk param to UI in changeOfferingforVolume form

* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form

* Removed old foreign key constraint on IDs of service offering and disk offering

* Allow resize and automigrate of root volume if required in all cases of service offering change

* Allow only resize to higher disk size from UI

* Fixing vue syntax error

* Make UI changes to provide root disk size box when the linked disk offering is of custom

* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms

* Fix resize volume operation to update the VM settings

* Fix migratevolume form to pick selected storage pool id in list diskofferings API
2022-01-27 15:08:42 +05:30
Gabriel Beims Bräscher
bc12833ccf
server: Failed to scale between Service Offerings with the same root disk size (#5095)
* Cover a case where resizing root disk failed; add isNotPossibleToResize method.

* remove format from resize validation

* Revert if-conditional changes that removed ImageFormat.ISO validation

* Add JUnit tests for VolumeApiServiceImpl.isNotPossibleToResize

* Fix checkstyle of test Class

* Use _templateDao.findByIdIncludingRemoved instead of _templateDao.findById

* Prevent null serviceOfferingView and Mock findByIdIncludingRemoved instead of findById
2021-06-14 12:49:55 +05:30
Rohit Yadav
318924d801
CloudStack Backup & Recovery Framework (#3553) 2020-03-03 13:27:58 +01:00
Rohit Yadav
d90341ebf1
cloudstack: add JDK11 support (#3601)
This adds support for JDK11 in CloudStack 4.14+:

- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-02-12 12:58:25 +05:30
Wei Zhou
fd5bea838b
New feature: Add support to destroy/recover volumes (#3688)
* server: fix resource count of primary storage if some volumes are Expunged but not removed

Steps to reproduce the issue:
(1) create a VM and stop it; check the resource count of primary storage
(2) download the volume; the resource count of primary storage is not changed
(3) expunge the VM; the volume will be in the Expunged state as there is a volume snapshot on secondary storage, and the resource count of primary storage decreases
(4) update the resource count of the account (or domain); the resource count of primary storage is reset to the value in step (2)

* New feature: Add support to destroy/recover volumes

* Add integration test for volume destroy/recover

* marvin: check resource count of more types

* translate messages to JP

* Update messages for CN

* translate message for NL

* fix two issues per Daan's comments

Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
2020-02-07 11:25:10 +01:00
Nicolas Vazquez
38f97c65b8 server: Fix hardcoded max data volumes when VM has been created but not started before (#3494)
When a VM is created but not yet started on any host, attaching a volume to it caused the maximum number of allowed volumes to be hardcoded to 6.
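
A guess at the shape of the fix, for illustration only — fall back to the hypervisor capability's limit rather than a hardcoded 6 when the VM has no host yet; all names are invented:

```java
import java.util.Optional;

/** Invented names; illustrates replacing a hardcoded limit with a looked-up one. */
public class MaxDataVolumesSketch {
    static int maxDataVolumes(Optional<Integer> hostLimit,
                              Optional<Integer> hypervisorCapabilityLimit) {
        // Before the fix, a VM with no host fell through to a hardcoded 6.
        return hostLimit
                .or(() -> hypervisorCapabilityLimit)
                .orElse(6); // last-resort default
    }
}
```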
2019-07-22 17:33:00 +05:30
Abhishek Kumar
58474530f6 api: snapshot, snapshotpolicy tag support (#3228)
Problem: Currently, tags cannot be applied to a snapshot at creation time, only through separate “create tags” API calls. For snapshot policies, tags cannot be set either at creation or through the “create tags” API.

Root Cause: The “create snapshots” API does not support adding tags during creation; this can only be done through the “create tags” API. Snapshot policies as a resource do not support tags, and no tags can be set on them through any API.

Solution: Tag support for snapshot policies has been added. A snapshot policy with tags, when executed, will produce snapshots containing the same tags as the policy.

Following APIs have been updated:

Both “create snapshotpolicy” and “create snapshot” now accept “tags” as a new parameter. The expected format for the “tags” parameter is similar to the “tags” parameter in the “create tags” API.
Deletion support for tags associated with snapshot policies has been added to the “delete snapshotpolicies” API.
Tags set for snapshot policies are added to the response of the “list snapshotpolicies” API.
UI support for setting tags on snapshots and snapshot policies is provided through the corresponding menus, with a new section in each form to set tags.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2019-06-27 09:21:08 +05:30
Gabriel Beims Bräscher
6323aac01b server: Fix migration target has no matching tags (#3329)
The code prior to this commit was looking for storage tags in the
storage_pool_details table, which is empty, and therefore got null. It should
select from storage_pool_tags, which contains the tags of each tagged storage
pool; the matching code then compares the volume's tags (e.g. 'aTag') against
the storage pool tags.
2019-06-07 11:44:00 +05:30