393 Commits

Author SHA1 Message Date
Suresh Kumar Anaparti
8ea9fc911d
StoragePoolType as class (#8544)
* StoragePoolType as a class

* Fix agent side StoragePoolType enum to class

* Handle StoragePoolType for StoragePoolJoinVO

* Since StoragePoolType is a class, it cannot be converted by the @Enumerated annotation.
Implemented a converter class and logic to utilize the @Convert annotation (see the sketch after this list).

* Fix UserVMJoinVO for StoragePoolType

* fixed missing imports

* Fixed equals for the enum.

* removed not needed try/catch for prepareAttribute

* Added license to the file.

* Implemented "supportsPhysicalDiskCopy" for storage adaptor.

Co-authored-by: mprokopchuk <mprokopchuk@apple.com>

* Add javadoc to StoragePoolType class

* Add unit test for StoragePoolType comparisons

* StoragePoolType "==" and ".equals()" fix.

* Fix StoragePoolType for FiberChannelAdapter

* Fix for abstract storage adaptor set up issue

* review comments

* Pass StoragePoolType object for poolType dao attribute
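A minimal sketch of the converter approach described above, assuming the StoragePoolType class exposes a name() accessor and a valueOf-style lookup; class and package names here are illustrative, not the PR's exact code:

```
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Illustrative JPA converter: @Enumerated only handles real Java enums, so a
// class-typed StoragePoolType needs an AttributeConverter wired via @Convert.
@Converter
public class StoragePoolTypeConverter implements AttributeConverter<StoragePoolType, String> {
    @Override
    public String convertToDatabaseColumn(StoragePoolType type) {
        return type == null ? null : type.name();
    }

    @Override
    public StoragePoolType convertToEntityAttribute(String dbValue) {
        return dbValue == null ? null : StoragePoolType.valueOf(dbValue);
    }
}
```

An entity field would then be annotated `@Convert(converter = StoragePoolTypeConverter.class)` instead of `@Enumerated`.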

---------

Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@gmail.com>
2024-02-05 13:27:15 +05:30
Abhishek Kumar
7dffbc6e47 Updating pom.xml version numbers for release 4.20.0.0-SNAPSHOT
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-02-02 18:16:37 +05:30
Abhishek Kumar
a7b97ff3b0 Updating pom.xml version numbers for release 4.19.1.0-SNAPSHOT
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-02-02 18:06:04 +05:30
Abhishek Kumar
2746225b99 Updating pom.xml version numbers for release 4.19.0.0
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-01-29 10:21:52 +05:30
Vishesh
fedcf66de0
Externalise a few timeouts & fix timeout for hostSupportsUefi in libvirt ready command wrapper (#8547)
This PR fixes a bug introduced in #8502. The timeout for script execution was set to 60 ms instead of 60 s, which resulted in hosts not getting UEFI enabled. This is a blocker for the 4.19 release.

We do this by introducing a new agent parameter `agent.script.timeout` (default - 60 seconds) to use as the timeout for the script checking the host's UEFI status.

We also externalize the timeout for the ReadyCommand by introducing a new global setting `ready.command.wait` (default - 60 seconds).

For ModifyStoragePoolCommand, we don't externalize the timeout, to avoid confusing the user: the required timeout can vary depending on the provider in use, and we are only setting the wait for the default host listener for now. Instead, we reuse the global `wait` setting by dividing it by 5, making the default value for ModifyStoragePoolCommand 6 minutes (1800/5 = 360 s).

Note: the actual time the MS waits is twice the wait set for a Command. Check the reference code below.
19250403e6/engine/orchestration/src/main/java/com/cloud/agent/manager/AgentAttache.java (L406-L442)
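A minimal sketch of the parameter-with-default pattern described above, using plain java.util.Properties rather than CloudStack's actual agent configuration plumbing:

```
import java.util.Properties;
import java.util.concurrent.TimeUnit;

// Illustrative only: read agent.script.timeout in seconds (default 60) and
// convert to milliseconds explicitly, avoiding the 60 ms vs 60 s mix-up
// that this PR fixes.
static long scriptTimeoutMillis(Properties agentProps) {
    long seconds = Long.parseLong(agentProps.getProperty("agent.script.timeout", "60"));
    return TimeUnit.SECONDS.toMillis(seconds);
}
```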
2024-01-27 23:36:13 +05:30
kishankavala
80bbb29abf
CleanUp Async Jobs after mgmt server maintenance (#8394)
This PR fixes resources stuck in transitional states during async job cleanup.

Problem:
During maintenance of the management server, other servers in the cluster or the same server after a restart initiate async job cleanup. However, this process leaves resources in a transitional state. The only recovery option currently available is to make direct database changes.

Solution:
This PR introduces a resolution by transitioning Volume, Virtual Machine, and Network resources out of their transitional states, as sketched below. This adjustment enables failed operations to be retried without the need for manual database modifications.
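A hedged, self-contained illustration of the idea; the real cleanup goes through CloudStack's per-resource state machines, and the states and recovery target below are assumptions:

```
// Hypothetical sketch, not CloudStack's actual state machine: return a volume
// stuck in a transitional state to a retryable one during async job cleanup,
// instead of leaving it for manual DB edits.
enum VolumeState { Ready, Migrating, Snapshotting, Resizing, Destroyed }

static VolumeState cleanupState(VolumeState current) {
    switch (current) {
        case Migrating:
        case Snapshotting:
        case Resizing:
            return VolumeState.Ready; // assumption: lets the failed operation be retried
        default:
            return current; // stable states need no cleanup
    }
}
```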
2024-01-19 13:26:25 +05:30
Vishesh
c3b77cb7b8
Fix host stuck in connecting state (#8502)
There are a lot of test failures in test_vm_life_cycle.py across multiple PRs because no host is available for migration of VMs.
#8438 (comment)
#8433 (comment)
#7344 (comment)

While debugging I noticed that the hosts get stuck in the Connecting state because the MS is waiting for a response to the ReadyCommand from the agent. Since we take a lock on connection and disconnection, restarting the agent doesn't work. To fix this, we have to restart the MS or wait for ~1 hour (the default timeout).

On the agent side, it gets stuck waiting for a response from the Script execution.

To reproduce, run smoke/test_vm_life_cycle.py (the TestSecuredVmMigration test class, to be specific). Once the tests are complete, you will notice that some hosts are stuck in the Connecting state, and restarting the agent fails due to the named lock. Locks in the DB can be checked using the query below.

```
SELECT *
FROM performance_schema.metadata_locks
INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID
WHERE PROCESSLIST_ID <> CONNECTION_ID() \G;
```

This PR adds a wait for the ReadyCommand and a timeout to the Script execution to ensure that the thread doesn't get stuck and the named lock in the database is released.
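A self-contained sketch of bounding a script run with a hard timeout, using plain java.lang.ProcessBuilder rather than CloudStack's Script utility:

```
import java.io.IOException;
import java.util.List;
import java.util.concurrent.TimeUnit;

// Illustrative: kill the child process if it overruns, so the calling thread
// (and any named DB lock it holds) is released instead of hanging indefinitely.
static int runWithTimeout(List<String> cmd, long timeoutSeconds)
        throws IOException, InterruptedException {
    Process p = new ProcessBuilder(cmd)
            .inheritIO() // avoids pipe-buffer deadlock in this minimal sketch
            .start();
    if (!p.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
        p.destroyForcibly();
        throw new InterruptedException("command timed out after " + timeoutSeconds + "s");
    }
    return p.exitValue();
}
```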
2024-01-15 13:56:34 +05:30
Rene Glover
1031c31e6a
FiberChannel Multipath for KVM + Pure Flash Array and HPE-Primera Support (#7889)
This PR provides a new primary storage volume type called "FiberChannel" that allows access to volumes connected to hosts over fiber channel connections; it requires Multipath to provide path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated from the connector used to communicate with the primary storage provider, via a ProviderAdapter interface; this keeps the code interacting with the primary storage provider APIs simpler, with no direct dependencies on CloudStack code. Lastly, the PR provides an implementation of the ProviderAdapter classes for the HP Enterprise Primera and the Pure Flash Array lines of storage solutions.
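A hedged sketch of the ProviderAdapter idea: a contract built only from plain Java types, so the array-specific clients (Primera, Pure) stay independent of CloudStack internals. All method names below are hypothetical:

```
// Hypothetical sketch of a ProviderAdapter-style contract: primitive/POJO inputs
// only, no CloudStack classes, so a storage vendor client can implement it in
// isolation and be unit-tested without the orchestration layer.
public interface ProviderAdapter {
    String createVolume(String name, long sizeInBytes); // returns provider-side volume id
    void deleteVolume(String externalId);
    void attachVolume(String externalId, String hostWwn); // fiber channel WWN of the host
    void detachVolume(String externalId, String hostWwn);
}
```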
2023-12-09 11:31:33 +05:30
Abhishek Kumar
c599011ef5 Merge remote-tracking branch 'apache/4.18' 2023-12-08 18:06:15 +05:30
Harikrishna
7eb36367c9
Add lock mechanism considering template id, pool id, host id in PowerFlex Storage (#8233)
Observed a failure to start a new virtual machine with PowerFlex storage. Traced it to concurrent VM starts using the same template and the same host to copy; the second mapping attempt failed.

While creating the volume clone from the seeded template in primary storage, adding a lock keyed on a string containing the IDs of the template, storage pool, and destination host avoids concurrent mapping attempts with the same host (see the sketch below).
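A sketch of the locking idea using CloudStack's GlobalLock named-lock utility; the key format, timeout handling, and helper method are assumptions, not the PR's exact code:

```
import com.cloud.utils.db.GlobalLock;

// Illustrative: serialize template-copy attempts per (template, pool, host) triple.
static boolean copyTemplateWithLock(long templateId, long poolId, long hostId, int timeoutSecs) {
    GlobalLock lock = GlobalLock.getInternLock(
            "templateCopy-" + templateId + "-" + poolId + "-" + hostId); // hypothetical key format
    try {
        if (!lock.lock(timeoutSecs)) {
            return false; // a concurrent start is already mapping this template on the host
        }
        try {
            copyTemplateToVolume(templateId, poolId, hostId); // hypothetical helper
            return true;
        } finally {
            lock.unlock();
        }
    } finally {
        lock.releaseRef();
    }
}
```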
2023-12-08 13:21:16 +05:30
Abhishek Kumar
543c54c718
api,server,ui: snapshot copy, multi-zone replica (#7873)
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.

Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The copySnapshot response returns zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.

In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot is taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.

As snapshots can be present in multiple zones (secondary stores), a new parameter - `zoneid` - has been added to `deleteSnapshot` to delete the snapshot copy in a specific zone.

`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.

Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.

`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.

----
New API added
`copySnapshot`

Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form

Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
2023-10-23 09:01:58 +02:00
Wei Zhou
246bb24b0f Updating pom.xml version numbers for release 4.18.2.0-SNAPSHOT
Signed-off-by: Wei Zhou <weizhou@apache.org>
2023-09-12 17:26:53 +02:00
Wei Zhou
4bdff06acd Updating pom.xml version numbers for release 4.18.1.0
Signed-off-by: Wei Zhou <weizhou@apache.org>
2023-09-07 08:50:50 +02:00
John Bampton
6f4503488b
pre-commit: apply end-of-file-fixer to all files (#7551) 2023-08-02 13:47:21 +02:00
Wei Zhou
09a4a252d7 Merge remote-tracking branch 'apache/4.18' into HEAD 2023-06-21 15:08:56 +02:00
Harikrishna
40cc10a73d
Allow volume migrations in ScaleIO within and across ScaleIO storage clusters (#7408)
* Live storage migration of a volume in ScaleIO within the same ScaleIO storage cluster

* Added migrate command

* Recent changes of migration across clusters

* Fixed uuid

* recent changes

* Pivot changes

* working blockcopy api in libvirt

* Checking block copy status

* Formatting code

* Fixed failures

* code refactoring and some changes

* Removed unused methods

* removed unused imports

* Unit tests to check if volume belongs to same or different storage scaleio cluster

* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver

* Fixed offline volume migration case and allowed encrypted volume migration

* Added more integration tests

* Support for migration of encrypted volumes across different scaleio clusters

* Fix UI notifications for migrate volume

* Data volume offline migration: save encryption details to destination volume entry

* Offline storage migration for scaleio encrypted volumes

* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API

* Removed unused unittests

* Removed duplicate keys in migrate volume vue file

* Fix Unit tests

* Add volume secrets if they do not exist during volume migrations; secrets are cleared on package upgrades.

* Fix secret UUID for encrypted volume migration

* Added a null check for secret before removing

* Added more unit tests

* Fixed passphrase check

* Add image options to the encrypted volume conversion
2023-06-21 11:57:05 +05:30
Rohit Yadav
8a42ab9ce4 Merge remote-tracking branch 'origin/4.18' 2023-04-14 21:49:12 +05:30
Harikrishna
b774ee5d11
vmware: Datastore cluster synchronization should check if the child datastores are in UP state or not (#7385)
This fix ensures that when a datastore cluster in VMware is added as a primary storage pool in CloudStack, all the child datastores (which already exist in CS) are in the Up state.

For example:

1. Datastore Cluster DS has two child datastores A and B in vCenter. (B is already added as a storage pool in CloudStack)
2. Now try to add datastore cluster DS into CloudStack as a primary storage pool
3. CloudStack tries to add child datastores A and B in CloudStack; since B is already there in CloudStack, it reuses the existing storage pool entry and keeps it under the parent storage pool DS.

During step 3 we now check whether B is in the Up state.
2023-04-11 22:23:12 +05:30
Daan Hoogland
fb4f6a334d Updating pom.xml version numbers for release 4.19.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-15 19:46:01 +01:00
Daan Hoogland
05cda2729f Updating pom.xml version numbers for release 4.18.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-15 19:38:14 +01:00
Daan Hoogland
0574087284 Updating pom.xml version numbers for release 4.18.0.0
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-11 09:35:41 +01:00
João Jandre
61a722548f
Create API to reassign volume (#6938) 2023-01-27 11:10:56 +01:00
João Jandre
14937e1adb
Fixed NPE on volume creation from snapshot (#6839)
Co-authored-by: João Jandre <joao@scclouds.com.br>
2022-10-26 08:44:01 +02:00
Marcus Sorensen
697e12f8f7
kvm: volume encryption feature (#6522)
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is handled transparently to the guest OS, and is intended to handle VM guest data encryption at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.

In some cases cloud customers may still prefer to maintain their own guest-level volume encryption, if they don't trust the cloud provider. However, for private cloud cases this greatly simplifies the guest OS experience of running volume encryption for guests: the user doesn't have to manage keys, deal with key servers, or make guest booting dependent on network connectivity to them (i.e. Tang), which matters especially where users occasionally attach/detach data disks and move them between VMs.

The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.

This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.

NOTE: While not required, operations can be significantly sped up by ensuring that hosts have the `rng-tools` package and service installed and running on the management server and hypervisors. For EL hosts the service is `rngd` and for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, and otherwise would only affect users who actually use the encryption feature. If you find tests or volume creates blocking on encryption, check this first.
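A minimal illustration of the entropy concern (not CloudStack's exact passphrase code): with a blocking entropy source, random-byte generation can stall on a starved host, which is what rngd alleviates.

```
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative volume-passphrase generation. With a blocking source (e.g.
// /dev/random via getInstanceStrong), nextBytes() can stall on entropy-starved
// hosts; rng-tools/rngd keeps the kernel entropy pool fed.
static String newPassphrase() throws java.security.NoSuchAlgorithmException {
    byte[] buf = new byte[32]; // length is an assumption for the sketch
    SecureRandom.getInstanceStrong().nextBytes(buf);
    return Base64.getEncoder().encodeToString(buf);
}
```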

### Management Server

##### API

* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is added here in case there is ever any other need to encrypt something related to the guest configuration, like the RAM of a VM.  This has been refactored to deal with the new separation of service offering from disk offering internally.
* listDiskOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off, rather the offering should be referenced. This follows the same pattern as other disk offering based settings such as the IOPS of the volume.

##### Volume functions

A decent effort has been made to ensure that the most common volume functions have either been cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.

Many of these features could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change is already overwhelming.

Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume

Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).

##### Primary Storage Support

For storage developers, adding encryption support involves:

1. Updating the `StoragePoolType` for your primary storage to advertise encryption support (see the sketch after this list). This is used during storage allocation to match volumes that request encryption to storage that supports it.

2. Implementing the encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that request encryption. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume, or (as is the case with the KVM storage types) as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases if encryption is requested.
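For step 1, a sketch of how a pool type can advertise encryption support to the allocator; the flag name, constructor shape, and example values are illustrative:

```
// Illustrative: a pool type carries a flag that the storage allocator consults
// when matching encryption-requesting volumes to capable pools.
public enum StoragePoolType {
    NetworkFilesystem(true), // example: supports encryption
    VMFS(false);             // example: does not

    private final boolean supportsEncryption;

    StoragePoolType(boolean supportsEncryption) {
        this.supportsEncryption = supportsEncryption;
    }

    public boolean supportsEncryption() {
        return supportsEncryption;
    }
}
```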

##### Scheduling

For the KVM implementations specified above, we are dependent on the KVM hosts having support for volume encryption tools. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is done via a probe during agent startup that looks for a functioning `cryptsetup` and support in `qemu-img`. This is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.

The `EndPointSelector` interface and `DefaultEndpointSelector` have had new methods added, which allow the caller to ask for endpoints that support encryption.  This can be used by storage drivers to find the proper hosts to send storage commands that involve encryption. Not all volume activities will require a host to support encryption (for example a snapshot backup is a simple file copy), and this is the reason why the interface has been modified to allow for the storage driver to decide, rather than just passing the data objects to the EndpointSelector and letting the implementation decide.

VM scheduling has also been modified. When a VM start is requested, if any volume that requires encryption is attached, it will filter out hosts that don't support encryption.

##### DB Changes

A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table, and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table, referencing this passphrase, and a foreign key added to ensure passphrases that are referenced can't be removed from the database.  The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.

#### KVM Agent

For the KVM storage pool types supported, the encryption has been implemented at Qemu itself, using the built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process, and is decrypted before the block device is visible to the guest. This may not be necessary in order to implement encryption for /your/ storage pool type; maybe you have a kernel driver that decrypts before the block device appears on the system, or something like that. However, it seemed like the simplest common place to terminate the encryption, and it provides the lowest surface area for decrypted guest data.

For qcow2 based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block based (currently just ScaleIO storage), the `cryptsetup` utility is used to format the block device as LUKS for data disks, but `qemu-img` and its LUKS support is used for template copy.

Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:

1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.

2. In cases where `qemu-img` needs to use this passphrase for volume operations, it is written to a `KeyFile` on the CloudStack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable`, and when it is closed, it is deleted. This allows us to try-with-resources any volume operation and have the KeyFile removed regardless (sketched below).
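A sketch of the `KeyFile` pattern from (2), with hypothetical constructor and naming details: a `Closeable` that holds the passphrase on disk (ideally tmpfs) only while in scope, so try-with-resources guarantees deletion:

```
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of a KeyFile-style helper: the passphrase exists on disk
// only for the duration of the qemu-img invocation.
public class KeyFile implements Closeable {
    private final Path path;

    public KeyFile(Path tmpfsDir, byte[] passphrase) throws IOException {
        this.path = Files.createTempFile(tmpfsDir, "keyfile", null);
        Files.write(path, passphrase);
    }

    public String getAbsolutePath() {
        return path.toAbsolutePath().toString();
    }

    @Override
    public void close() throws IOException {
        Files.deleteIfExists(path); // removed even if the qemu-img call throws
    }
}
```

Usage would be `try (KeyFile kf = new KeyFile(tmpfsDir, passphrase)) { ... }`, so the file is deleted even when the volume operation fails.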

In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`.  These `qemu-img` flags have been designed to supersede some of the existing, older flags being used today (such as choosing file formats and paths), and an effort could be made to switch over to these wholesale. However, for now we have instead opted to keep existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose to use either way.
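A hedged illustration of the flag shapes that `QemuObject` and `QemuImageOptions` model; the option strings follow standard qemu-img syntax for LUKS-encrypted qcow2, while the assembly code and paths are illustrative:

```
import java.util.List;

// Illustrative argument assembly for the kind of invocation described above:
//   qemu-img create --object secret,id=sec0,file=/run/cs/keyfile \
//       -f qcow2 -o encrypt.format=luks,encrypt.key-secret=sec0 /path/disk.qcow2 10G
static List<String> luksCreateArgs(String keyFilePath, String imagePath, String size) {
    return List.of(
        "qemu-img", "create",
        "--object", "secret,id=sec0,file=" + keyFilePath,
        "-f", "qcow2",
        "-o", "encrypt.format=luks,encrypt.key-secret=sec0",
        imagePath, size);
}
// For commands against an existing encrypted image, --image-opts carries the same
// secret reference, e.g.:
//   qemu-img info --object secret,id=sec0,file=/run/cs/keyfile \
//       --image-opts driver=qcow2,file.filename=/path/disk.qcow2,encrypt.key-secret=sec0
```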

It should be noted that there are also a few different Enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, the supported encryption formats for the `cryptsetup` utility include `LuksType.LUKS`, while `QemuImg` has `QemuImg.PhysicalDiskFormat.LUKS`.

Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details; these may require changes all the way up through the control plane. However, in practice Libvirt and Qemu only support LUKS1 today. Additionally, the cipher details aren't required in order to use an encrypted volume: as they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the one encryption format upon volume creation, which is persisted in the volumes table and then available later as needed. In the future, when LUKS2 is standard and fully supported, we could move to it as the default, and old volumes will still reference LUKS1 and have the headers on-disk to ensure they remain usable. We could also possibly support an automatic upgrade of the headers down the road, or a volume migration mechanism.

Every version of cryptsetup and qemu-img tested on variants of EL7 and Ubuntu that support encryption use the XTS-AES 256 cipher, which is the leading industry standard and widely used cipher today (e.g. BitLocker and FileVault).

Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
2022-09-27 10:20:59 +05:30
Abhishek Kumar
e720b72e15 Merge remote-tracking branch 'apache/4.17' into main 2022-08-31 17:38:30 +05:30
Abhishek Kumar
a21efe75df
vmware: fix vm snapshot with datastore cluster, drs (#6643)
Fixes #6595
Sync volume datastore, path, and chain info while calculating snapshot chain size after the snapshot operation is complete in vCenter.
2022-08-31 16:00:14 +05:30
Suresh Kumar Anaparti
75da982d73
Updated resource counter to include correct size after volume creation/resize and other improvements (#6587)
* Updated resource counter to include correct size after volume creation/resize and other improvements
- Recalculate resource counters for root domain in the periodic task
- Update correct size in the primary_storage resource counter after volume creation/resize
- Some code improvements

* review and sonarcloud issues

Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Co-authored-by: Daan Hoogland <daan@onecht.net>
2022-08-16 10:41:42 +02:00
Paula Oliveira
9717ed9af2
Improve log messages on VolumeOrchestrator class (#6408)
Co-authored-by: Paula Zomignani Oliveira <paula@scclouds.com.br>
2022-08-12 09:17:06 +02:00
John Bampton
f9347ecf2c
Fix spelling (#6597) 2022-08-03 15:43:47 +05:30
nvazquez
0bcc609f05
Updating pom.xml version numbers for release 4.18.0.0-SNAPSHOT
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-06-06 12:25:35 -03:00
nvazquez
038a669d6b
Updating pom.xml version numbers for release 4.17.1.0-SNAPSHOT
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-06-06 12:19:44 -03:00
nvazquez
c56220fcf2
Updating pom.xml version numbers for release 4.17.0.0
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-05-31 14:33:47 -03:00
DK101010
ccac1a383f
Feat/add vdisk UUID to list volume (#5848)
* get vdisk uuid from vcenter and store it into database

* add vdisk uuid as external_uuid to listVolume response

* add sql upgrade file

* Update vmware-base/src/main/java/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java

Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>

* update sql add column external_uuid

* Update server/src/main/java/com/cloud/storage/VolumeApiServiceImpl.java

Co-authored-by: Wei Zhou <weizhou@apache.org>

* adapt param description for externalUuid

* add 'idempotent column add' to create external_uuid col

* rename method to getExternalDiskUUID

* remove line disk_offering.system_use

Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
2022-04-19 23:34:09 -03:00
Pearl Dsilva
431c352a6d
Synchronization of network devices on newly added hosts for Persistent Networks (#5977)
* Persistent Network feature & Marvin component tests

* Cleaned up comments and imports

* fixed small error

* add support to set up persistent networks' resources when a disabled host is enabled

* small fix

* use wildcard instead of hard-coding the bridge name

* allow clean up of resources when removing a host in maintenance mode

* skip test for simulator hypervisor

Co-authored-by: shatoboar <sang-woo.bae@campus.tu-berlin.de>
2022-04-11 23:12:05 -03:00
nvazquez
e3132af64e
Merge branch '4.16' 2022-03-10 08:49:43 -03:00
Wei Zhou
3a456f1b31
server: mark volume snapshots as Destroyed if they do not exist on primary and secondary storage when deleting a volume (#6057)
* server: mark volume snapshots as Destroyed in some cases when deleting a volume in QCOW2 format

When deleting a volume in QCOW2 format, if a volume snapshot does not exist on primary or secondary storage, mark the snapshot as Destroyed.

* Update #6057: remove check on volume format
2022-03-10 08:49:03 -03:00
Suresh Kumar Anaparti
bc70535ee5
Updating pom.xml version numbers for release 4.16.2.0-SNAPSHOT
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2022-03-03 18:15:33 +05:30
Suresh Kumar Anaparti
cad9332082
Updating pom.xml version numbers for release 4.16.1.0
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2022-02-25 19:01:16 +05:30
Suresh Kumar Anaparti
5c02f6d507
Merge branch '4.16' into main 2022-01-06 17:47:37 +05:30
dahn
2774bc156f
use physical size instead of virtual size for migration. (#5750)
* Use Physical size to evaluate if migration is possible

* Improve logging and consider files skipped as failure in complete migration

* skipped can't be negative

* remove useless method

* group multidisk templates for secstor migration

* use enum

* Update engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/DataMigrationUtility.java

Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl d'Silva <pearl.dsilva@shapeblue.com>
2022-01-06 17:18:50 +05:30
Daniel Augusto Veronezi Salvador
b4aabadc4d
Replace string libraries with org.apache.commons.lang3.StringUtils (#5386)
* Replace google lib for lang3 and adjust methods calls

* Replace string libs by lang3

* Prohibit others string libs

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-11-18 13:41:48 +05:30
nicolas
3f79436840
Updating pom.xml version numbers for release 4.17.0.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:55:52 -03:00
nicolas
93c3c3b9ac
Updating pom.xml version numbers for release 4.16.1.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:50:22 -03:00
nicolas
44c08b5acc
Updating pom.xml version numbers for release 4.16.0.0
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-04 14:14:57 -03:00
Nicolas Vazquez
a5372a98dc
Fix storage cleanup corner case preventing VM deletion (#5575)
* Fix storage cleanup corner case

* Improve deletion

* Refactor
2021-10-16 00:09:54 -03:00
Daniel Augusto Veronezi Salvador
8ffba83214
Keep volume policies after migrating a volume to another primary storage (#5067)
* Add commons-lang3 to Utils

* Create an util to provide methods that ReflectionToStringBuilder does not have yet

* Create method to retrieve map of tags from resource

* Enable tests on volume components and remove useless tests

* Refactor VolumeObject and add unit tests

* Extract createPolicy in several methods

* Create method to copy policies between volumes and add unit tests

* Copy policies to new volume before removing old volume on volume migration

* Extract "destroySourceVolumeAfterMigration" to a method and test it

* Remove javadoc @param with no sensible information

* Rename method name to a generic name

Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-09-08 09:13:41 -03:00
Nicolas Vazquez
413d10dd81
server: Extend the Annotations framework (#5103)
* Extend addAnnotation and listAnnotations APIs

* Allow users to add, list and remove comments

* Add adminsonly UI and allow admins or owners to remove comments

* New annotations tab

* In progress: new comments section

* Address review comments

* Fix

* Fix annotationfilter and comments section

* Add keyword and delete action

* Fix and rename annotations tab

* Update annotation visibility API and update comments table accordingly

* Allow users to see all the comments for their owned resources

* Extend comments for volumes and snapshots

* Extend comments to multiple entities

* Add uuid to ssh keypairs

* SSH keypair UI refactor

* Extend comments to the infrastructure entities

* Add missing entities

* Fix upgrade version for ssh keypairs

* Fix typo on DB upgrade schema

* Fix annotations table columns when there is no data

* Extend the list view of items to show whether they have comments

* Remove extra test

* Add annotation permissions

* Address review comments

* Extend marvin tests for annotations

* updating ui stuff

* addition to toggle visibility

* Fix pagination on comments section

* Extend to kubernetes clusters

* Fixes after last review

* Change default value for adminsonly column

* Remove the required field for the annotationfilter parameter

* Small fixes on visibility and other fixes

* Cleanup to reduce files changed

* Rollback extra line

* Address review comments

* Fix cleanup error on smoke test

* Fix sending incorrect parameter to checkPermissions method

* Add check domain access for the calling account for domain networks

* Fix: only display the annotations icon if there are comments the user can see

* Simply change the Save button label to Submit

* Change the order of the Tools menu to prevent users getting a 404 error on clicking the text instead of expanding

* Remove comments when removing entities

* Address review comments on marvin tests

* Allow users to list annotations for an entity ID

* Allow users to see all comments for allowed entities

* Fix search filters

* Remove username from search filter

* Add pagination to the annotations tab

* Display username for user comments

* Fix add permissions for domain and resource admins

* Fix for domain admins

* Trivial but important UI fix

* Replace pagination for annotations tab

* Add confirmation for delete comment

* Lint warnings

* Fix reduced list as domain admin

* Fix display remove comment button for non admins

* Improve display remove action button

* Remove unused parameter on groupShow

* Include a clock icon to the all comments filter except for root admin

* Move cleanup SQL to the correct file after rebasing main

Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
2021-09-08 10:14:06 +05:30
Spaceman1984
96c9c5a5e2
Added disk provisioning type support for VMWare (#4640)
* Added disk provisioning type support for VMWare

* Review changes

* Fixed unit test

* Review changes

* Added missing licenses

* Review changes

* Update StoragePoolInfo.java

Removed white space

* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id

* Delete __init__.py

* Merge fix

* Fixed failing test

* Added comment about parameters

* Added error log when update fails

* Added exception when using API

* Ordering storage pool selection to prefer thick disk capable pools if available

* Removed unused parameter

* Reordering changes

* Returning storage pool details after update

* Removed multiple pool update, updated marvin test, removed duplicate enum

* Removed comment

* Removed unused import

* Removed for loop

* Added missing return statements for failed checks

* Class name change

* Null pointer

* Added more info when a deployment fails

* Null pointer

* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

* Small bug fix on API response and added missing bracket

* Removed datastore cluster code

* Removed unused imports, added missing signature

* Removed duplicate config key

* Revert "Added more info when a deployment fails"

This reverts commit 2486db78dca8e034d8ad2386174dfb11004ce654.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2021-07-16 22:37:42 -03:00
Rohit Yadav
cb167072a1 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-05-07 16:37:42 +05:30
Harikrishna
32e3bbdcc5
VMware Datastore Cluster primary storage pool synchronisation (#4871)
Datastore cluster as a primary storage is already supported, but changes to the datastore cluster at vCenter, like the addition/removal of a datastore, are not synchronised with CloudStack directly; that requires removing the primary storage from CloudStack and adding it again.

Here, synchronisation of the datastore cluster is fixed without the need to remove and re-add the datastore cluster.
1. A new API, syncStoragePool, is introduced, which takes the datastore cluster storage pool UUID as its parameter. This API checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, the management server creates a new child storage pool in the database under the datastore cluster. If the new child storage pool has already been added as an individual storage pool, the existing storage pool entry is converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastores.
2021-05-07 16:30:54 +05:30