373 Commits

Author SHA1 Message Date
Marcus Sorensen
697e12f8f7
kvm: volume encryption feature (#6522)
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is transparent to the guest OS, and is intended to protect VM guest data at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.

In some cases cloud customers may still prefer to maintain their own guest-level volume encryption, if they don't trust the cloud provider. For private cloud cases, however, this greatly simplifies the guest OS experience: volumes are encrypted without the user having to manage keys, run key servers, or make guest booting dependent on network connectivity to them (e.g. Tang). This is especially useful where users occasionally attach/detach data disks and move them between VMs.

The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.

This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.

NOTE: While not required, operations can be significantly sped up by ensuring that the management server and hypervisor hosts have the `rng-tools` package installed and its service running. For EL hosts the service is `rngd`, and for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, but otherwise only affects users who actually use the encryption feature. If you find tests or volume creation blocking on encryption, check this first.
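
For a quick way to gauge whether a Linux host is entropy-starved, something like the following can be used. This is a hedged sketch, not part of this PR; the 256 threshold is an arbitrary illustration.

```java
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Minimal sketch for checking a Linux host's entropy pool before
 * exercising the encryption feature. The threshold is illustrative.
 */
public class EntropyCheck {
    public static void main(String[] args) throws Exception {
        Path entropyAvail = Path.of("/proc/sys/kernel/random/entropy_avail");
        int available = Integer.parseInt(Files.readString(entropyAvail).trim());
        // With rngd running this value is typically in the thousands; a low
        // value suggests SecureRandom passphrase generation may block.
        System.out.println("entropy_avail = " + available
                + (available < 256 ? " (low - consider installing rng-tools)" : " (ok)"));
    }
}
```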

### Management Server

##### API

* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is used in case there is ever any other need to encrypt something related to the guest configuration, like the RAM of a VM. The parameter handling has been refactored to deal with the new internal separation of service offerings from disk offerings.
* listDiskOfferings shows encryption support on each offering, and accepts an 'encrypt' boolean to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and accepts an 'encrypt' boolean to list only offerings that do or do not support encryption
* listHosts now shows the encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off; instead, the offering should be referenced. This follows the same pattern as other disk-offering-based settings, such as the IOPS of the volume. An example API call follows below.
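
As a quick illustration, the new parameter can be exercised through the regular CloudStack HTTP API. This is a hedged sketch: the endpoint and offering values are placeholders, and real requests must be signed with an API/secret key pair (signing is elided here).

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Hedged sketch: creating an encryption-enabled disk offering through the
 * CloudStack HTTP API. Host, offering values, and the apiKey/signature
 * pair are placeholders; a real call must be signed.
 */
public class CreateEncryptedOffering {
    public static void main(String[] args) throws Exception {
        String url = "http://mgmt.example.com:8080/client/api"
                + "?command=createDiskOffering"
                + "&name=encrypted-20GB&displaytext=Encrypted+20GB"
                + "&disksize=20&encrypt=true&response=json"
                + "&apiKey=...&signature=...";  // signing omitted in this sketch
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());
    }
}
```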

##### Volume functions

A decent effort has been made to ensure that the most common volume functions are either cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.

Many of the blocked functions could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change are already substantial.

Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume

Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).

##### Primary Storage Support

For storage developers, adding encryption support involves:

1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during storage allocation to match volumes that request encryption to storage that supports it.

2. Implementing the encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that request encryption. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume, or (as is the case with the KVM storage types) as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases if encryption is requested; see the sketch below.
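
To make the driver-side contract concrete, here is a minimal sketch. It is not actual code from this PR: `VolumeInfo` and the helper methods are simplified stand-ins for CloudStack's real interfaces.

```java
/**
 * Hedged sketch of how a PrimaryDataStoreDriver implementation might honor
 * an encryption request, branching on the presence of a passphrase.
 */
public class EncryptionAwareDriverSketch {

    interface VolumeInfo {          // stand-in for the real data object
        byte[] getPassphrase();     // per this PR, set when encryption is requested
        String getName();
    }

    void createVolume(VolumeInfo volume) {
        byte[] passphrase = volume.getPassphrase();
        if (passphrase != null && passphrase.length > 0) {
            // Storage-specific path: e.g. pass an "encrypted" flag to the
            // array's API, or format the device as LUKS on the KVM host.
            createEncryptedVolume(volume, passphrase);
        } else {
            createPlainVolume(volume);
        }
    }

    void createEncryptedVolume(VolumeInfo v, byte[] passphrase) { /* driver-specific */ }
    void createPlainVolume(VolumeInfo v) { /* driver-specific */ }
}
```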

##### Scheduling

For the KVM implementations specified above, we depend on the KVM hosts having the volume encryption tools available. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is determined by a probe during agent startup that looks for a functioning `cryptsetup` and LUKS support in `qemu-img`. It is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.
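
The probe could look roughly like the following - a hedged sketch assuming the agent shells out to the two tools; the actual commands and parsing in the agent may differ.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

/**
 * Hedged sketch of a startup probe: check that cryptsetup runs and that
 * qemu-img advertises LUKS support among its formats.
 */
public class EncryptionSupportProbe {

    static boolean commandWorks(String... cmd) {
        try {
            return new ProcessBuilder(cmd).start().waitFor() == 0;
        } catch (IOException | InterruptedException e) {
            return false;   // binary missing or not executable
        }
    }

    static boolean qemuImgSupportsLuks() {
        try {
            Process p = new ProcessBuilder("qemu-img", "--help").start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                boolean luks = r.lines().anyMatch(l -> l.contains("luks"));
                p.waitFor();
                return luks;
            }
        } catch (IOException | InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        boolean supported = commandWorks("cryptsetup", "--version") && qemuImgSupportsLuks();
        System.out.println("host supports encryption: " + supported);
    }
}
```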

The `EndPointSelector` interface and `DefaultEndpointSelector` have new methods which allow the caller to ask for endpoints that support encryption. Storage drivers can use these to find the proper hosts to send storage commands that involve encryption. Not all volume activities require a host to support encryption (for example, a snapshot backup is a simple file copy), which is why the interface was modified to let the storage driver decide, rather than just passing the data objects to the `EndPointSelector` and letting the implementation decide.
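
Conceptually, the per-operation choice looks like this sketch. The interfaces and the `selectForEncryption` method name are hypothetical stand-ins for the methods added in this PR.

```java
/**
 * Hedged sketch of per-operation endpoint selection; the driver, not the
 * selector, decides whether the target host must support encryption.
 */
public class EndpointChoiceSketch {
    interface EndPoint { void sendMessage(Object cmd); }
    interface Selector {
        EndPoint select(Object dataObject);               // any eligible host
        EndPoint selectForEncryption(Object dataObject);  // encryption-capable host
    }

    void send(Selector selector, Object volume, Object cmd, boolean needsEncryption) {
        // A snapshot backup is a plain file copy and needs no encryption
        // support, so this flag is operation-specific.
        EndPoint ep = needsEncryption
                ? selector.selectForEncryption(volume)
                : selector.select(volume);
        ep.sendMessage(cmd);
    }
}
```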

VM scheduling has also been modified. When a VM start is requested and any attached volume requires encryption, the scheduler filters out hosts that don't support encryption.

##### DB Changes

A volume whose disk offering enables encryption gets a passphrase generated for it before its first use. This is stored in the new 'passphrase' table and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table referencing this passphrase, with a foreign key ensuring that referenced passphrases can't be removed from the database. The volumes table also gains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.
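
Generation might look like this sketch; the 32-byte length and Base64 encoding are illustrative assumptions, not necessarily the PR's actual choices.

```java
import java.security.SecureRandom;
import java.util.Base64;

/**
 * Hedged sketch of volume passphrase generation: a SecureRandom-derived
 * secret that would be stored encrypted in the new 'passphrase' table.
 */
public class PassphraseSketch {
    public static void main(String[] args) {
        byte[] raw = new byte[32];            // 256 bits of randomness (illustrative)
        new SecureRandom().nextBytes(raw);    // may block if entropy is scarce (see rng-tools note)
        String passphrase = Base64.getEncoder().encodeToString(raw);
        // In CloudStack this value would be encrypted with the installation's
        // configured DB encryption before being persisted, and referenced
        // from the volumes table via foreign key.
        System.out.println("generated " + passphrase.length() + "-char passphrase");
    }
}
```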

#### KVM Agent

For the supported KVM storage pool types, encryption has been implemented in QEMU itself, using its built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process and is only decrypted where the block device becomes visible to the guest. This may not be necessary in order to implement encryption for *your* storage pool type - maybe you have a kernel driver that presents an already-decrypted block device on the system, or something similar. However, QEMU seemed like the simplest common place to terminate the encryption, and it provides the lowest surface area for decrypted guest data.

For qcow2-based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block-based storage (currently just ScaleIO), the `cryptsetup` utility is used to format the block device as LUKS for data disks, while `qemu-img` and its LUKS support are used for template copy.

Any volume that requires encryption carries its passphrase as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:

1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.

2. In cases where `qemu-img` needs the passphrase for volume operations, it is written to a `KeyFile` on the CloudStack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable`, and when it is closed, it is deleted. This allows us to wrap any volume operation in try-with-resources and have the `KeyFile` removed regardless of the outcome, as in the sketch below.
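
A minimal sketch of the pattern follows; the tmpfs path is illustrative, and the real `KeyFile` differs in detail.

```java
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Hedged sketch of the KeyFile pattern: a Closeable that writes a
 * passphrase to tmpfs and deletes it on close, so qemu-img invocations
 * can be wrapped in try-with-resources.
 */
public class KeyFileSketch implements Closeable {
    private final Path path;

    KeyFileSketch(byte[] passphrase) throws IOException {
        // tmpfs keeps the secret off persistent disk; path is illustrative.
        this.path = Files.createTempFile(Path.of("/dev/shm"), "keyfile", null);
        Files.write(path, passphrase);
    }

    Path getPath() { return path; }

    @Override
    public void close() throws IOException {
        Files.deleteIfExists(path);   // removed regardless of operation outcome
    }

    public static void main(String[] args) throws IOException {
        try (KeyFileSketch key = new KeyFileSketch("secret".getBytes())) {
            // e.g. qemu-img ... --object secret,id=sec0,file=<key.getPath()>
            System.out.println("passphrase available at " + key.getPath());
        } // file deleted here, even on exception
    }
}
```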

In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`. These `qemu-img` flags were designed to supersede some of the older flags in use today (such as those choosing file formats and paths), and an effort could be made to switch over to them wholesale. For now, however, we have opted to keep the existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose either style.
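
For reference, one command shape this enables looks like the following sketch. The wrapper classes in this PR model these arguments rather than raw strings; the paths and sizes here are illustrative, though the flag syntax matches qemu-img's documented secret/LUKS options.

```java
import java.util.List;

/**
 * Hedged sketch of a qemu-img invocation for creating a LUKS-encrypted
 * qcow2 image, using --object to pass the secret from a key file.
 */
public class QemuImgLuksSketch {
    public static void main(String[] args) {
        String keyFile = "/dev/shm/keyfile123";   // see KeyFile sketch above
        List<String> cmd = List.of(
                "qemu-img", "create",
                "--object", "secret,id=sec0,file=" + keyFile,
                "-f", "qcow2",
                "-o", "encrypt.format=luks,encrypt.key-secret=sec0",
                "/var/lib/libvirt/images/encrypted.qcow2", "10G");
        System.out.println(String.join(" ", cmd));
        // A real caller would hand this to ProcessBuilder and check exit status.
        // An existing encrypted image can then be referenced with --image-opts, e.g.:
        //   qemu-img info --object secret,id=sec0,file=<keyfile> \
        //       --image-opts driver=qcow2,encrypt.key-secret=sec0,file.filename=<image>
    }
}
```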

It should be noted that there are also a few different enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, among the supported encryption format strings the `cryptsetup` utility has `LuksType.LUKS`, while `QemuImg` has `QemuImg.PhysicalDiskFormat.LUKS`.

Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details; these may require changes all the way up through the control plane. In practice, however, libvirt and QEMU currently only support LUKS1. Additionally, the cipher details aren't required in order to use an encrypted volume: since they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the encryption format once upon volume creation; it is persisted in the volumes table and available later as needed. In the future, when LUKS2 is standard and fully supported, we could move to it as the default, and old volumes will still reference LUKS1 and have the headers on-disk to ensure they remain usable. We could also support an automatic upgrade of the headers down the road, or a volume migration mechanism.

Every version of cryptsetup and qemu-img tested on encryption-capable variants of EL7 and Ubuntu uses the XTS-AES-256 cipher, which is the leading industry standard and a widely used cipher today (it is also used by BitLocker and FileVault, for example).

Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
2022-09-27 10:20:59 +05:30
Rohit Yadav
840c3f6a7a Merge remote-tracking branch 'origin/4.17' 2022-08-10 23:11:09 +02:00
slavkap
76f52af8f3
removed the use of SharedMountPoint storage type for the StorPool plugin (#6552)
Fixes #6455

The default storage adaptor - LibvirtStorageAdaptor - is used by several storage types and doesn't use the @StorageAdaptorInfo annotation. In this case, a storage plugin that wants to adopt one of the predefined storage pool types overrides the default behaviour. Fixing the issue in general (for new storage plugins, or current ones that want to reuse the existing storage pool types) would affect all volume/snapshot/VM cases, which would require extensive testing for each storage plugin - resources we don't have. That's why this patch restores the old behaviour for SharedMountPoint by adding a new storage pool type for the StorPool plugin.
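
For context, binding an adaptor to its own pool type looks roughly like this sketch. The annotation and enum here are simplified stand-ins for CloudStack's real ones, and the class bodies are elided.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

/**
 * Hedged sketch of the mechanism described above: a plugin binds its
 * adaptor to a dedicated pool type via @StorageAdaptorInfo instead of
 * reusing the predefined SharedMountPoint type.
 */
public class AdaptorBindingSketch {
    enum StoragePoolType { SharedMountPoint, StorPool }   // stand-in

    @Retention(RetentionPolicy.RUNTIME)
    @interface StorageAdaptorInfo { StoragePoolType storagePoolType(); }

    interface StorageAdaptor { /* lifecycle methods elided */ }

    @StorageAdaptorInfo(storagePoolType = StoragePoolType.StorPool)
    static class StorPoolAdaptor implements StorageAdaptor {
        // StorPool-specific overrides of the default LibvirtStorageAdaptor
        // behaviour would go here.
    }
}
```
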
2022-08-10 14:41:32 +05:30
Rohit Yadav
c4c4c71591 cherry-pick ce7c3694c82232b5fa08f5a3fa8d5ff2b95542a3
This fixes a cherry-pick issue while merging 4.17.0.1 on the 4.17 branch

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2022-07-20 16:18:32 +05:30
Rohit Yadav
661956cc60 Merge remote-tracking branch 'origin/4.17' 2022-07-20 11:52:26 +05:30
Harikrishna
2c05b63495
kvm: Fix for Revert volume snapshot (#6527)
This PR fixes issue #6209, where the snapshot revert operation fails after certain volume operations like migrating a VM with volume, migrating a volume, or reinstalling a VM.

The root cause is that after these volume operations, the primary storage entry for the volume gets deleted. The fix is to look up the primary datastore entry for the volume and continue the operation.
2022-07-20 11:34:02 +05:30
Rohit Yadav
7a3e97d67e Tagging release 4.17.0.1 on branch b30a4a99d1b530efbf652373eda229f2cd5133b1.

Merge tag '4.17.0.1' into 4.17

Tagging release 4.17.0.1 on branch b30a4a99d1b530efbf652373eda229f2cd5133b1.
2022-07-18 19:40:53 +05:30
Rohit Yadav
1c7efcbd0d Updating pom.xml version numbers for release 4.17.0.1
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2022-07-15 18:18:40 +05:30
Rohit Yadav
ce7c3694c8 storpool: fix mvn pom.xml build issue
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2022-07-14 17:36:02 +05:30
Rohit Yadav
35b5315dae
maven: update dependencies (#6539)
This upgrades mvn dependencies for the project.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2022-07-11 20:41:39 +05:30
Daan Hoogland
a470f3353a Merge branch '4.17' 2022-07-05 09:11:45 +02:00
John Bampton
7d23a0a759
Fix spelling (#6272) 2022-07-05 09:08:53 +02:00
Wei Zhou
ff7831d751 Merge remote-tracking branch 'apache/4.17' 2022-06-28 08:27:36 +02:00
Suresh Kumar Anaparti
c70bc9d69c
kvm: Updated PowerFlex/ScaleIO storage plugin to support separate (storage) network for Hosts(KVM)/Storage connection. (#6367)
This PR enhances the existing PowerFlex/ScaleIO storage plugin to support a separate (storage) network for the Hosts(KVM)/Storage connection, mainly the SDC (ScaleIO Data Client) connection.
2022-06-27 14:42:51 +05:30
nvazquez
0bcc609f05
Updating pom.xml version numbers for release 4.18.0.0-SNAPSHOT
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-06-06 12:25:35 -03:00
nvazquez
038a669d6b
Updating pom.xml version numbers for release 4.17.1.0-SNAPSHOT
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-06-06 12:19:44 -03:00
nvazquez
c56220fcf2
Updating pom.xml version numbers for release 4.17.0.0
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-05-31 14:33:47 -03:00
slavkap
453bb57fd2
Disable creating StorPool logs when there isn't StorPool primary storage (#6317)
There is no need to create log files for the StorPool driver when there
isn't a StorPool primary storage
2022-04-27 07:24:44 -03:00
slavkap
4004dfcfd8
StorPool storage plugin (#6007)
* StorPool storage plugin

Adds volume storage plugin for StorPool SDS

* Added support for alternative endpoint

Added option to switch to alternative endpoint for SP primary storage

* renamed all classes from Storpool to StorPool

* Address review

* removed unnecessary else

* Removed check about the storage provider

We don't need this check; we can tell whether the snapshot is on StorPool
by its name in the path

* Check that the current plugin supports all functionality before upgrading CloudStack

* Smoke tests for StorPool plug-in

* Fixed conflicts

* Fixed conflicts and added missed Apache license header

* Removed whitespaces in smoke tests

* Added StorPool plugin jar for Debian

the StorPool jar will be included in the cloudstack-agent package for
Debian/Ubuntu
2022-04-14 11:12:01 -03:00
Daniel Augusto Veronezi Salvador
39fad2d9d7
KVM disk-only based snapshot of volumes instead of taking VM's full snapshot and extracting disks (#5297)
* Refactor create volume snapshot with running VM

* Refactor create volume snapshot with stopped VM

* Refactor create volume from snapshot

* Refactor create template from snapshot

* Refactor volume migration (migrateVolume/ migrateVirtualMachineWithVolume)

* Refactor snapshot deletion

* Refactor snapshot reversion

* Adjusts and fix cherry-pick conflicts

* Remove diffuse tests

* Add validation to add the flag '--delete' on the 'virsh blockcommit' command only if the libvirt version is equal to or higher than 6.0.0

* Expunge temporary snapshot only if template creation is from snapshot

* Extract strings to constant

* Remove unused imports

* Fix error on revert backed up snapshot

* Turn method's return to void as it is not used

* Rename method in SnapshotHelper

* Fix folder creation when using SharedMountPoint pool

* Remove static import

* Remove unused method

* Cover take snapshot in centos 7

* Handle right snapshot flag according to qemu version

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2022-04-12 08:14:27 -03:00
Pearl Dsilva
431c352a6d
Synchronization of network devices on newly added hosts for Persistent Networks (#5977)
* Persistent Network feature & Marvin component tests

* Cleaned up comments and imports

* fixed small error

* add support to set up persistent networks' resources when a disabled host is enabled

* small fix

* use wildcard instead of hard-coding the bridge name

* allow clean up of resources when removing a host in maintenance mode

* skip test for simulator hypervisor

Co-authored-by: shatoboar <sang-woo.bae@campus.tu-berlin.de>
2022-04-11 23:12:05 -03:00
John Bampton
6401c850b7
Fix spelling (#6064)
* Fix spelling

- `interupted` to `interrupted`
- `paramter` to `parameter`

* Fix more typos
2022-03-08 13:02:35 -03:00
Pearl Dsilva
9f23fbe7b6
[Simulator] Fix NPE in SolidFireHostListener (#5955)
* [Simulator] Fix NPE in SolidFireHostListener

* address comments

* Update plugins/storage/volume/solidfire/src/main/java/org/apache/cloudstack/storage/datastore/provider/SolidFireSharedHostListener.java

Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>

Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
2022-02-12 08:32:19 -03:00
Harikrishna
f15cab16da
server: Decouple service (compute) offering and disk offering (#5008)
Currently, our compute offerings and disk offerings are tightly coupled in many respects. For example, if a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Creating a compute offering also takes a few disk-related parameters, which in any case end up in the corresponding disk offering only. I think this design was initially made to address the compute offering for the root volume created from a template. Also, changing the offering of a volume is tightly coupled with storage tags and has to be done through different APIs, either migrateVolume or resizeVolume. Changing a disk offering should be seamless: it should consider new storage tags and the new size, and place the volume in the appropriate state as defined in the disk offering.

More details are mentioned here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring

* Schema changes and disk offering column change from "type" to "compute_only"

* Few more changes

* Decoupled service offering and disk offering

* Remove diskofferingid from vminstance VO

* Decouple service offering and disk offering states

* diskoffering getsize() is only for strict disk offerings

* Fix deployVM flow

* Added new API params to compute offering creation

* Add diskofferingstrictness to serviceoffering vo under quota

* Added overrideDiskOfferingId parameter in deploy VM API which will override disk offering for the root disk both in template and ISO case

Added diskSizeStrictness parameter in create Disk offering API which will decide whether to restrict resize or disk offering change of a volume

* Fix User vm response to show proper service offering and disk offerings

* Added disk size strictness in disk offering response

* Added disk offering strictness to the service offering response

* Remove comments

* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form

* Added diskoffering details to the service offering response

* Added UI changes in deployvm wizard to accept override disk offering id

* Fix delete compute offering

* Fix VM deployment from custom service offering

* Move uselocalstorage column access from service offering to disk offering

* UI: Separated compute and disk related parameters in the add compute offering wizard, also added association to disk offering

* Fixed diskoffering automatic selection on add compute offering wizard

* UI: move compute only toggle button outside the box in add compute offering wizard

* Added volumeId parameter to the listDiskOfferings API; the disksizestrictness flag of the current disk offering is honored while listing disk offerings

* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration

* Added disk offering change checks during resize volume operation

* Added new API changeofferingforVolume API and corresponding changes

* Add UI form for changeOfferingForVolume API

* Fix UI conflicts

* Fix service offering usage as disk offering

* Fix unit test failures

* fix user_vm_view

* Addressed review comments

* Fixed service_offering_view

* Fix service offering edit flow

* Fix service offering constructor to address custom offering

* Fix domain_router_view to get proper service offering id

* Removed unused import

* Addressed review comments and fixed update service offering flow with storage tags

* Added marvin test cases for checking disk offering strictness

* review comments addressed

* Remove system_use column from disk offering join

* update volume_view to update system_use column from service offering and not disk offering

* Fix changeOfferingForVolume API for custom disk offering

* Fix global setting implementation

* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view

* Changes for override root disk offering in deployvm wizard in case of custom offering

* Fix a unit test case

* Fixed recent unit test cases with new serviceofferingvo constructor

* Fix unit test in VolumeApiServiceImpl

* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow

* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering

* Fix smoke test failures

* Added tool tip for migrate volume UI form

* Address review comments and fix UI form of deploy VM in case of ISO.

* Fixed resize volume UI form for data disk

* UI changes to disable override root disk size when override root disk offering is enabled

* UI fix in deploy vm wizard

* Fix listdiskoffering after rebasing with main

* Fixed UI in migrate and changeOfferingForVolume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list

* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form

* Fix false response on updateDiskOffering API

* Added search field for changeofferingforvolume UI form

* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Removed DB changes from 4.16 upgrade file

* Resolving merge conflicts with main 4.17

* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.

* UI: Added automigrate checkbox in scale VM form

* Added since attributes to new API params

* Added shrinkOK parameter to changeofferingforvolume API

* Added shrinkOk param to UI in changeOfferingforVolume form

* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form

* Removed old foreign key constraint on IDs of service offering and disk offering

* Allow resize and automigrate of root volume if required in all cases of service offering change

* Allow only resize to higher disk size from UI

* Fixing vue syntax error

* Make UI changes to provide a root disk size box when the linked disk offering is custom

* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms

* Fix resize volume operation to update the VM settings

* Fix migratevolume form to pick selected storage pool id in list diskofferings API
2022-01-27 15:08:42 +05:30
Rohit Yadav
c84198d76d Merge remote-tracking branch 'origin/4.16'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-12-30 16:52:30 +05:30
Marcus Sorensen
dcdcd09058
Randomize managed volume copy host (#5789)
* Randomize managed volume copy host

* Managed volume copy was always returning the first host that could see the storage pools

* Fix null value in logging for ScaleIOPrimaryDataStoreDriver due to if/else logic error

Signed-off-by: Marcus Sorensen <mls@apple.com>

* Use String.format for ScaleIO debug message

Signed-off-by: Marcus Sorensen <mls@apple.com>

* Update debug message for ScaleIO copy methods

Signed-off-by: Marcus Sorensen <mls@apple.com>

Co-authored-by: Marcus Sorensen <mls@apple.com>
2021-12-30 16:11:00 +05:30
Daniel Augusto Veronezi Salvador
b4aabadc4d
Replace string libraries with org.apache.commons.lang3.StringUtils (#5386)
* Replace google lib for lang3 and adjust methods calls

* Replace string libs by lang3

* Prohibit other string libs

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-11-18 13:41:48 +05:30
nicolas
3f79436840
Updating pom.xml version numbers for release 4.17.0.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:55:52 -03:00
nicolas
93c3c3b9ac
Updating pom.xml version numbers for release 4.16.1.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:50:22 -03:00
nicolas
44c08b5acc
Updating pom.xml version numbers for release 4.16.0.0
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-04 14:14:57 -03:00
Harikrishna
cd4e7e031a
Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster (#5539)
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Change in constructors

* Naming changes

* Remove commented code

* Refactor code for more readability

* Addressed review comments on code refactor
2021-10-04 20:58:25 -03:00
Peinthor Rene
66c39c1589
storage: Linstor volume plugin (#4994)
This adds a (primary) volume storage plugin for the Linstor SDS.
Currently it can create/delete/migrate volumes; snapshots should be possible,
but currently don't work for RAW volume types in CloudStack.

* plugin-storage-volume-linstor: notify libvirt guests about the resize
2021-09-16 10:50:58 +05:30
Abhishek Kumar
56f4da6dce Merge remote-tracking branch 'apache/4.15' into main 2021-09-02 16:13:33 +05:30
Wei Zhou
4e53997ca2
server: do not remove volume from DB if it fails to be expunged from primary or secondary storage (#5373)
* server: do not remove volume from DB if it fails to be expunged from primary or secondary storage

* server/VolumeApiServiceImpl.java: move to method

* update #5373
2021-08-31 13:48:58 -03:00
Rohit Yadav
a1a3aff2b5 Merge remote-tracking branch 'origin/4.15' into main 2021-08-31 14:29:30 +05:30
slavkap
961e85eb60
Fix of creating volumes from snapshots without backup to secondary storage (#5349)
* Fix of creating volumes from snapshots without backup

When a few snapshots are created only on primary storage and one tries to create
a volume or a template from a snapshot, only the first operation is
successful. This is because the snapshot is backed up to secondary storage with a
wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage,
and it didn't cover the functionality to create a volume from a snapshot
which is kept only on Ceph

* Address review
2021-08-31 12:46:57 +05:30
Spaceman1984
96c9c5a5e2
Added disk provisioning type support for VMWare (#4640)
* Added disk provisioning type support for VMWare

* Review changes

* Fixed unit test

* Review changes

* Added missing licenses

* Review changes

* Update StoragePoolInfo.java

Removed white space

* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id

* Delete __init__.py

* Merge fix

* Fixed failing test

* Added comment about parameters

* Added error log when update fails

* Added exception when using API

* Ordering storage pool selection to prefer thick disk capable pools if available

* Removed unused parameter

* Reordering changes

* Returning storage pool details after update

* Removed multiple pool update, updated marvin test, removed duplicate enum

* Removed comment

* Removed unused import

* Removed for loop

* Added missing return statements for failed checks

* Class name change

* Null pointer

* Added more info when a deployment fails

* Null pointer

* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

* Small bug fix on API response and added missing bracket

* Removed datastore cluster code

* Removed unused imports, added missing signature

* Removed duplicate config key

* Revert "Added more info when a deployment fails"

This reverts commit 2486db78dca8e034d8ad2386174dfb11004ce654.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2021-07-16 22:37:42 -03:00
Rohit Yadav
d916e416ec Updating pom.xml version numbers for release 4.15.2.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-07-02 22:59:07 +05:30
Rohit Yadav
379454caae Updating pom.xml version numbers for release 4.15.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-06-28 15:27:27 +05:30
sureshanaparti
07cabbe7ac
scaleio: Updated PowerFlex/ScaleIO gateway client with some improvements. (#5037)
- Added a connection manager to the gateway client.
- Renew the client session on '401 Unauthorized' responses.
- Refactored the gateway client calls for GET and POST methods.
- Consume the HTTP entity content after login/(re)authentication and close the content stream if it exists.
- Updated the storage pool client connection timeout configuration 'storage.pool.client.timeout' to non-dynamic.
- Added the storage pool client max connections configuration 'storage.pool.client.max.connections' (default: 100) to specify the maximum connections for the ScaleIO storage pool client.
- Updated unit tests.
- Blocked the attach volume operation for uploaded volumes on ScaleIO/PowerFlex storage pools. A sketch of the session-renewal pattern follows below.
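
The pooled-connection-plus-renewal pattern could look like this hedged sketch using Apache HttpClient 4.x; the ScaleIO client in this PR differs in detail, and `renewSession()` and the auth header are illustrative.

```java
import org.apache.http.HttpStatus;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

/**
 * Hedged sketch of a gateway client with a pooled connection manager that
 * renews its session on a 401 response and retries once.
 */
public class GatewayClientSketch {
    private final CloseableHttpClient client;
    private String sessionToken;

    GatewayClientSketch(int maxConnections) {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(maxConnections);   // cf. storage.pool.client.max.connections
        this.client = HttpClients.custom().setConnectionManager(cm).build();
    }

    CloseableHttpResponse get(String url) throws Exception {
        CloseableHttpResponse resp = doGet(url);
        if (resp.getStatusLine().getStatusCode() == HttpStatus.SC_UNAUTHORIZED) {
            resp.close();     // consume/close the stale response before retrying
            renewSession();   // re-authenticate against the gateway
            resp = doGet(url);
        }
        return resp;          // caller is responsible for closing
    }

    private CloseableHttpResponse doGet(String url) throws Exception {
        HttpGet req = new HttpGet(url);
        req.setHeader("Authorization", "Basic " + sessionToken);
        return client.execute(req);
    }

    private void renewSession() { /* login call elided in this sketch */ }
}
```
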
2021-06-16 12:45:27 +05:30
Abhishek Kumar
426f14b6ed Merge remote-tracking branch 'apache/4.15'
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-05-18 15:19:20 +05:30
Abhishek Kumar
dc91a1fd4d
server: destroy ssvm, cpvm on last host maintenance (#4644)
* server: destroy ssvm, cpvm on last host maintenance

When a single or the last UP host enters maintenance, just stopping the SSVM and CPVM will leave VMs behind on the hypervisor side. As these system VMs will be recreated, they can be destroyed.
Fixes #3719

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix methods

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* immediately destroy systemvms

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix destroy

Added bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance.
The flag is set true only for delete commands for the SSVM and CPVM.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* unit test fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix missing return statement

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix

The VM should be stopped with cleanup before calling expunge, else the server may throw an error with the host in PrepareForMaintenance state.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* rename

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-05-14 23:16:15 +05:30
sureshanaparti
eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist and cannot be deleted from the datastore

- Check the router filesystem is writable or not, before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check the filesystem is writable or not
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
	=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not

- Fixed an NPE issue where the template is null for DATA disks. Copy the template to target storage for the ROOT disk (with template id), skip DATA disk(s)

* Addressed some issues for a few operations on PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and Minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter “supportsStorageSnapshot” (true/false) to volume response, and Updated UI to hide the async backup option while taking snapshot for volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
Rohit Yadav
b482da8c91 Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-01-11 13:58:30 +05:30
Daan Hoogland
280c13a4bb Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-05 15:51:02 +00:00
Daan Hoogland
81e9e6809b Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:34:46 +00:00
Daan Hoogland
e26202f23e Updating pom.xml version numbers for release 4.16.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:32:10 +00:00
Daan Hoogland
01b3e361c7 Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2020-12-23 16:32:25 +00:00
Alexandru Bagu
fdb2ee3165
storage: Fix hypervisor type cast to string (#4516)
This PR addresses an error that appears when you try to add a new host. I don't even understand why there was a cast to String in the first place. I will assume some classes send HypervisorType and some send a string (empty or otherwise). Shouldn't this be addressed to use the same type everywhere? With this fix adding a new xenserver host works fine.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2020-12-14 11:56:44 +05:30
lujiefsi
2aa7fac9ac
CLOUDSTACK-10423: Potential sensitive information disclosure (#4536)
* fixing CLOUDSTACK-10423

* make the message clear

Co-authored-by: lujie <lujie@foxmail.com>
2020-12-14 11:40:23 +05:30