1140 Commits

Author SHA1 Message Date
Suresh Kumar Anaparti
9f4c895974
Updating pom.xml version numbers for release 4.19.1.0
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2024-07-15 17:19:29 +05:30
Rene Glover
32cc1d46a5
Copy on pool host when storage pool has ScopeType.HOST (#9356) 2024-07-10 12:30:47 +05:30
Abhisar Sinha
063dc60114
Change storage pool scope from Cluster to Zone and vice versa (#8875)
* New feature: Change storage pool scope

* Added checks for Ceph/RBD

* Update op_host_capacity table on primary storage scope change

* Storage pool scope change integration test

* Pull 8875: Addressed review comments

* Pull 8875: remove storage checks, AbstractPrimaryStorageLifeCycleImpl class

* Pull 8875: Fixed integration test failure

* Pull 8875: Review comments

* Pull 8875: review comments + broke changeStoragePoolScope into smaller functions

* Added UT for changeStoragePoolScope

* Rename AbstractPrimaryDataStoreLifeCycleImpl to BasePrimaryDataStoreLifeCycleImpl

* Pull 8875: Dao review comments

* Pull 8875: Rename changeStoragePoolScope.vue to ChangeStoragePoolScope.vue

* Pull 8875: Created a new smokes test file + A single warning msg in ui

* Pull 8875: Added cleanup in test_primary_storage_scope.py

* Pull 8875: Typo in en.json

* Pull 8875: cleanup array in test_primary_storage_scope.py

* Pull 8875: Removing extra whitespace at EOF of StorageManagerImplTest

* Pull 8875: Added UT for PrimaryDataStoreHelper and BasePrimaryDataStoreLifeCycleImpl

* Pull 8875: Added license header

* Pull 8875: Fixed SQL query for vmstates

* Pull 8875: Changed icon plus info on disabled mode in apidoc

* Pull 8875: Change scope should not work for local storage

* Pull 8875: Change scope completion event

* Pull 8875: Added api findAffectedVmsForStorageScopeChange

* Pull 8875: Added UT for findAffectedVmsForStorageScopeChange and removed listByPoolIdVMStatesNotInCluster

* Pull 8875: Review comments + Vm name in response

* Pull 8875: listByVmsNotInClusterUsingPool was returning duplicate VM entries because of multiple volumes in the VM satisfying the criteria

* Pull 8875: fixed listAffectedVmsForStorageScopeChange UT

* listAffectedVmsForStorageScopeChange should work if the pool is not disabled

* Fix listAffectedVmsForStorageScopeChangeTest UT

* Pull 8875: add volume.removed not null check in VmsNotInClusterUsingPool query

* Pull 8875: minor refactoring in changeStoragePoolScopeToCluster

* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java

* fix eof

* changeStoragePoolScopeToZone should connect pool to all Up hosts

Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
2024-06-29 10:03:34 +05:30
Vishesh
bcbf152a05
Merge branch '4.18' into 4.19 2024-06-28 20:14:21 +05:30
dahn
6b25ed7a02
prevent an NPE on an uninitialised TemplateObject (#8898)
* prevent an NPE on an uninitialised TemplateObject

* move npe handler up-stack

* Update engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/store/TemplateObject.java

* catch yet one level up

* Update engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java

* Update engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/store/TemplateObject.java

* extra guard

* Revert "prevent an NPE on an uninitialised TemplateObject"

This reverts commit e602a65ea62e4707828483a4ddea288d81ff06f5.
2024-06-26 21:02:08 +05:30
slavkap
6c06e85c80
Temporarily backup StorPool volume before expunge (#8843)
* Temporarily backup StorPool volume before expunge

Sometimes users delete volumes by mistake. This enhancement
provides a way to back up the volume before it is deleted. The user
will be able to see the snapshot in the CloudStack UI/CLI and can only
create a volume from it.
A task will check (by default every 5 minutes) whether the snapshots
have been deleted from StorPool.

Global settings to enable the delay delete option:

`storpool.delete.after.interval` - The interval (in seconds) after which the StorPool snapshot will be deleted
`storpool.list.snapshots.delete.after.interval` - The interval (in seconds) at which to fetch the StorPool snapshots with the deleteAfter flag

Minor fix when deleting snapshots

* added Apache licence

* addressed comments
2024-06-26 13:58:04 +05:30
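
Illustrative sketch of the flow those two settings control (not the plugin's actual code; all class and method names here are hypothetical): the expunge path keeps a StorPool snapshot with a deleteAfter deadline instead of destroying data immediately, and a scheduled task reconciles CloudStack's records once StorPool removes the expired snapshots.

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedDeleteSketch {
    // Stand-in for the plugin's StorPool API wrapper; hypothetical methods.
    interface StorPoolClient {
        void snapshotWithDeleteAfter(String volumeName, long deleteAfterSecs);
        List<String> trackedBackupSnapshots();
        boolean snapshotExists(String snapshotName);
        void forgetSnapshotRecord(String snapshotName);
    }

    // storpool.delete.after.interval: how long the backup snapshot survives.
    void expungeVolume(StorPoolClient client, String volume, long deleteAfterSecs) {
        client.snapshotWithDeleteAfter(volume, deleteAfterSecs);
        // The volume itself is removed; only the snapshot stays visible in the UI/CLI.
    }

    // storpool.list.snapshots.delete.after.interval: poll period (default 5 minutes).
    void startCleanupTask(StorPoolClient client, long pollSecs) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(() -> {
            // StorPool deletes snapshots itself once deleteAfter passes; this task
            // only drops CloudStack's record for snapshots that are already gone.
            for (String snap : client.trackedBackupSnapshots()) {
                if (!client.snapshotExists(snap)) {
                    client.forgetSnapshotRecord(snap);
                }
            }
        }, pollSecs, pollSecs, TimeUnit.SECONDS);
    }
}
```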
Abhisar Sinha
4eb43651e2
Ability to specify NFS mount options while adding a primary storage and modify them on a pre-existing primary storage (#8947)
* Ability to specify NFS mount options while adding a primary storage and modify it later

* Pull 8947: Rename all occurrence of nfsopt to nfsMountOpt and added nfsMountOpts to ApiConstants

* Pull 8947: Refactor code - move into separate methods

* Pull 8947: CollectionsUtils.isNotEmpty and switch statement in LibvirtStoragePoolDef.java

* Pull 8947: UI - cancelling maintenance will remount the storage pool and apply the options

* Pull 8947: UI - moved edit NFS mount options to edit Primary Storage form

* Pull 8947: UI - moved 'NFS Mount Options' to below 'Type' in dataview

* Pull 8947: Fixed message in AddPrimaryStorage.vue

* Pull 8947: Convert _nfsmountOpts to Set in libvirtStoragePoolDef

* Pull 8947: Throw exception and log error if mount fails due to incorrect mount option

* Pull 8947: Added UT and moved integration test to component/maint

* Pull 8947: Review comments

* Pull 8947: Removed password from integration test

* Pull 8947: move details allocation to inside the if block in getStoragePoolNFSMountOpts

* Pull 8947: Fixed a bug in AddPrimaryStorage.vue

* Pull 8947: Pool should remain in maintenance mode if mount fails

* Pull 8947: Removed password from integration test

* Pull 8947: Added UT

* Pull 8875: Fixed a bug in CloudStackPrimaryDataStoreLifeCycleImplTest

* Pull 8875: Fixed a bug in LibvirtStoragePoolDefTest

* Pull 8947: minor code restructuring

* Pull 8947 : added some ut for coverage

* Fix LibvirtStorageAdapterTest UT
2024-06-25 23:45:35 +05:30
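
For context on where such options land (illustrative only, not the PR's code): the configured options are ultimately joined into the -o argument of the NFS mount command for the pool, roughly as in this sketch with example values.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class NfsMountOptsSketch {
    // Join the configured options into the -o flag of the mount command.
    static String buildMountCommand(String host, String export, String target,
                                    Set<String> mountOpts) {
        StringBuilder cmd = new StringBuilder("mount -t nfs ");
        if (!mountOpts.isEmpty()) {
            cmd.append("-o ").append(String.join(",", mountOpts)).append(' ');
        }
        return cmd.append(host).append(':').append(export)
                  .append(' ').append(target).toString();
    }

    public static void main(String[] args) {
        Set<String> opts = new LinkedHashSet<>(Arrays.asList("vers=4.1", "nconnect=4"));
        System.out.println(buildMountCommand("10.0.0.5", "/export/primary",
                "/mnt/pool-1", opts));
        // -> mount -t nfs -o vers=4.1,nconnect=4 10.0.0.5:/export/primary /mnt/pool-1
    }
}
```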
Suresh Kumar Anaparti
620ed164d8
VMware: Improve error messaging / logs when starting non-user VMs, and secondary storage not available or doesn't have enough capacity (#9207) 2024-06-25 12:25:42 +05:30
Rene Glover
6ee6603359
Updates to HPE-Primera and Pure FlashArray Drivers to use Host-based VLUN Assignments (#8889)
* Updates to change Pure and Primera to host-centric vlun assignments; various small bug fixes

* update to add timestamp when deleting pure volumes to avoid future conflicts

* update to migrate to properly check disk offering is valid for the target storage pool

* Updates to change Pure and Primera to host-centric vlun assignments; various small bug fixes

* update to add timestamp when deleting pure volumes to avoid future conflicts

* update to migrate to properly check disk offering is valid for the target storage pool

* improve error handling when copying volumes to add precision to which step failed

* rename pure volume before delete to avoid conflicts if the same name is used before it's expunged on the array

* remove dead code in AdaptiveDataStoreLifeCycleImpl.java

* Fix issues found in PR checks

* fix session refresh TTL logic

* updates from PR comments

* logic to delete by path ONLY on supported OUI

* fix to StorageSystemDataMotionStrategy compile error

* change noisy debug message to trace message

* fix double callback call in handleVolumeMigrationFromNonManagedStorageToManagedStorage

* fix for flash array delete error

* fix typo in StorageSystemDataMotionStrategy

* change copyVolume to use writeback to speed up copy ops

* remove returning PrimaryStorageDownloadAnswer when connectPhysicalDisk returns false during KVMStorageProcessor template copy

* remove change to only set UUID on snapshot if it is a vmSnapshot

* reverting change to UserVmManagerImpl.configureCustomRootDiskSize

* add error checking/simplification per comments from @slavkap

* Update engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java

Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>

* address PR comments from @sureshanaparti

---------

Co-authored-by: GLOVER RENE <rg9975@cs419-mgmtserver.rg9975nprd.app.ecp.att.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
2024-06-25 10:35:39 +05:30
João Jandre
3e30283500
Fix migration from local storage to NFS in KVM (#8909)
* fix migration from local to nfs

* remove unused imports

* remove dead code
2024-06-24 13:02:48 +05:30
Pearl Dsilva
f792684b9c
Support migration of VM imported from a remote host (#9259) 2024-06-24 12:46:21 +05:30
Harikrishna
bb0c1f93af
Add volume encryption checks during the disk offering change (#9209) 2024-06-17 10:36:47 +02:00
dahn
e520525fe7
Use parameter dcId as wrapper to prevent NPE (#8986) 2024-05-01 09:12:36 +02:00
Vishesh
80a8b80a9d
Update volume's passphrase to null if diskOffering doesn't support encryption (#8904) 2024-04-29 12:18:09 +05:30
João Jandre
8a101fbbc1 Updating pom.xml version numbers for release 4.18.3.0-SNAPSHOT
Signed-off-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
2024-04-17 11:11:57 -03:00
João Jandre
154566f914 Updating pom.xml version numbers for release 4.18.2.0
Signed-off-by: João Jandre <48719461+JoaoJandre@users.noreply.github.com>
2024-04-12 08:25:04 -03:00
Abhishek Kumar
ff3e9bd821 engine-storage: control download redirection
Add a global setting to control whether redirection is allowed while
downloading templates and volumes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-04-04 14:11:05 +05:30
Wei Zhou
939d0b9011 engine-storage: control download redirection
Add a global setting to control whether redirection is allowed while
downloading templates and volumes

core: some changes on SimpleHttpMultiFileDownloader
similar as HttpTemplateDownloader

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
(cherry picked from commit b1642bc3bf58ccde9f56f632b5a9fe46a3eb5356)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2024-04-04 11:19:20 +05:30
Harikrishna
c462be1412
New API "checkVolume" to check and repair any leaks or issues reported by qemu-img check (#8577)
* Introduced a new API, checkVolumeAndRepair, that allows users or admins to check for and repair any leaks observed.
Currently this is supported only for KVM

* some fixes

* Added unit tests

* addressed review comments

* add repair volume while granting access

* Changed repair parameter to accept both leaks/all

* Introduced new global setting volume.check.and.repair.before.use to do volume check and repair before VM start or volume attach operations

* Added volume check and repair changes only during VM start and volume attach operations

* Refactored the names to look similar across the code

* Some code fixes

* remove unused code

* Renamed repair values

* Fixed unit tests

* changed version

* Address review comments

* Code refactored

* used volume name in logs

* Changed the API to Async and the setting scope to storage pool

* Fixed exit value handling with check volume command

* Fixed storage scope to the setting

* Fix volume format issues

* Refactored the log messages

* Fix formatting
2024-02-29 14:41:49 +05:30
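
At the hypervisor level the feature boils down to qemu-img check with an optional repair mode; a self-contained sketch of that underlying call (example path and timeout, not the PR's code):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.concurrent.TimeUnit;

public class QemuImgCheck {
    public static void main(String[] args) throws Exception {
        String volumePath = "/var/lib/libvirt/images/example.qcow2"; // example path
        // "-r leaks" repairs leaked clusters only; "-r all" repairs all issues
        ProcessBuilder pb = new ProcessBuilder("qemu-img", "check", "-r", "leaks", volumePath);
        pb.redirectErrorStream(true);
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            r.lines().forEach(System.out::println);
        }
        if (!p.waitFor(60, TimeUnit.SECONDS)) { // example timeout
            p.destroyForcibly();
            throw new RuntimeException("qemu-img check timed out");
        }
        // qemu-img check exit codes: 0 = consistent, 1 = check failed,
        // 2 = corruption found, 3 = leaked clusters but no corruption
        System.out.println("exit value: " + p.exitValue());
    }
}
```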
Daan Hoogland
f4987bf8ee Merge release branch 4.18 to 4.19
* 4.18:
  Storage plugin support to check if volume on datastore requires access for migration (#8655)
  CKS: fix /opt/bin/deploy-cloudstack-secret in CKS control nodes (#8697)
2024-02-26 15:53:11 +01:00
Suresh Kumar Anaparti
f731fe882c
Storage plugin support to check if volume on datastore requires access for migration (#8655)
* Check if volume on datastore requires access for migration, and grant/revoke volume access if required

* Updated default implementation for requiresAccessForMigration method in PrimaryDataStoreDriver
2024-02-26 20:16:31 +05:30
GaOrtiga
6f3e4e6302
fix_filter_and_pagination (#8306)
Co-authored-by: Gabriel <gabriel.fernandes@scclouds.com.br>
2024-02-16 11:15:55 +01:00
Abhishek Kumar
a7b97ff3b0 Updating pom.xml version numbers for release 4.19.1.0-SNAPSHOT
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-02-02 18:06:04 +05:30
Abhishek Kumar
2746225b99 Updating pom.xml version numbers for release 4.19.0.0
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-01-29 10:21:52 +05:30
Vishesh
fedcf66de0
Externalise a few timeouts & fix timeout for hostSupportsUefi in libvirt ready command wrapper (#8547)
This PR fixes a bug introduced in #8502. The timeout for script execution was set to 60 ms instead of 60 s, which resulted in the host not getting UEFI enabled. This is a blocker for the 4.19 release.

We do this by introducing a new agent parameter `agent.script.timeout` (default - 60 seconds) to use as a timeout for the script checking the host's UEFI status.

We also externalize the timeout for the ReadyCommand by introducing a new global setting `ready.command.wait` (default - 60 seconds).

For ModifyStoragePoolCommand, we don't externalize the timeout, to avoid confusing the user, since the required timeout can vary depending on the provider in use and we are only setting the wait for the default host listener for now. Instead, we reuse the global `wait` setting by dividing it by `5`, making the default value for ModifyStoragePoolCommand 6 minutes (1800/5 = 360 s).

Note: the actual time the MS waits is twice the wait set for a Command. Check the reference code below.
19250403e6/engine/orchestration/src/main/java/com/cloud/agent/manager/AgentAttache.java (L406-L442)
2024-01-27 23:36:13 +05:30
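
The timeout arithmetic in one place (a minimal sketch; the setting names come from the PR text, while the Properties plumbing is just a stand-in for agent.properties/global settings):

```java
import java.util.Properties;

public class TimeoutDefaults {
    public static void main(String[] args) {
        Properties props = new Properties(); // stand-in for however settings are read
        // agent.script.timeout: timeout for the UEFI-status check script (seconds)
        int scriptTimeoutSecs = Integer.parseInt(
                props.getProperty("agent.script.timeout", "60"));
        // ready.command.wait: wait for the ReadyCommand (seconds)
        int readyCommandWaitSecs = Integer.parseInt(
                props.getProperty("ready.command.wait", "60"));
        // ModifyStoragePoolCommand reuses the global `wait` setting divided by 5:
        int globalWaitSecs = 1800;
        int modifyStoragePoolWaitSecs = globalWaitSecs / 5; // 360 s = 6 minutes
        // Per the PR note, the MS actually waits twice the wait set on a Command.
        System.out.printf("script=%ds ready=%ds modifyPool=%ds (effective %ds)%n",
                scriptTimeoutSecs, readyCommandWaitSecs,
                modifyStoragePoolWaitSecs, 2 * modifyStoragePoolWaitSecs);
    }
}
```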
kishankavala
80bbb29abf
CleanUp Async Jobs after mgmt server maintenance (#8394)
This PR fixes resources stuck in a transitional state during async job cleanup.

Problem:
During maintenance of the management server, other servers in the cluster or the same server after a restart initiate async job cleanup. However, this process leaves resources in a transitional state. The only recovery option currently available is to make direct database changes.

Solution:
This PR introduces a resolution by transitioning Volume, Virtual Machine, and Network resources out of their transitional states. This adjustment enables failed operations to be retried without the need for manual database modifications.
2024-01-19 13:26:25 +05:30
Vishesh
c3b77cb7b8
Fix host stuck in connecting state (#8502)
There are a lot of test failures in test_vm_life_cycle.py across multiple PRs because no host is available for migration of VMs.
#8438 (comment)
#8433 (comment)
#7344 (comment)

While debugging, I noticed that the hosts get stuck in the Connecting state because the MS is waiting for a response to the ReadyCommand from the agent. Since we take a lock on connection and disconnection, restarting the agent doesn't work. To fix this, we have to restart the MS or wait for ~1 hour (the default timeout).

On the agent side, the thread gets stuck waiting for the Script execution to return.

To reproduce, run smoke/test_vm_life_cycle.py (the TestSecuredVmMigration test class, to be specific). Once the tests are complete, you will notice that some hosts are stuck in the Connecting state, and restarting the agent fails due to the named lock. Locks in the DB can be checked using the query below.

```sql
SELECT *
FROM performance_schema.metadata_locks
INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID
WHERE PROCESSLIST_ID <> CONNECTION_ID() \G;
```

This PR adds a wait for the ReadyCommand and a timeout to the Script execution to ensure that the thread doesn't get stuck and the named lock in the database is released.
2024-01-15 13:56:34 +05:30
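
The essence of the fix, a hard timeout on the script execution so the agent thread (and with it the named lock) is always released, sketched here with plain ProcessBuilder; the actual change uses CloudStack's Script utility:

```java
import java.util.concurrent.TimeUnit;

public class BoundedScriptRun {
    public static void main(String[] args) throws Exception {
        // Example command standing in for the host-status check script.
        Process p = new ProcessBuilder("/bin/sh", "-c", "sleep 120").start();
        if (!p.waitFor(60, TimeUnit.SECONDS)) { // 60 s bound instead of waiting forever
            p.destroyForcibly();
            System.err.println("script timed out; thread and lock are released");
        } else {
            System.out.println("exit: " + p.exitValue());
        }
    }
}
```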
Suresh Kumar Anaparti
e87ce0c723
Fix reorder/list pools when cluster details are not set, while deploying vm / attaching volume (#8373)
This PR fixes reorder/list pools when cluster details are not set, while deploying vm / attaching volume.

Problem:
Attaching a volume to a VM fails on infra with zone-wide pools and vm.allocation.algorithm=userdispersing, as the cluster details are not set (passed as null) while reordering/listing pools by volumes.

Solution:
Ignore cluster details when not set, while reordering / listing pools by volumes.
2024-01-10 18:13:32 +05:30
Abhishek Kumar
82f7abddb3 Merge remote-tracking branch 'apache/4.18' 2023-12-13 11:24:15 +05:30
Bryan Lima
3bb318bab9
kvm: Add support for cgroupv2 (#8252)
1. Problem description

In Apache CloudStack (ACS), when a VM is deployed on a host with the KVM hypervisor, an XML file is created on the assigned host; it has a property, shares, that defines the weight of the VM for access to the host CPU. The value of this property has no unit: it is a relative measure used to calculate how much CPU a given VM will get on the host. However, this value has a limit, which depends on the version of cgroups used by the host's kernel. The problem lies in the valid range of shares, which varies between the two versions: [2, 262144] for cgroups version 1; and [1, 10000] for cgroups version 2. Currently, ACS calculates the value of shares using Equation 1, presented below, where CPU is the number of cores and speed is the CPU frequency, both specified in the VM's compute offering. Therefore, if a compute offering has, for example, 6 cores at 2 GHz, the shares value will be 12000, and an exception will be thrown by libvirt if the host uses cgroups v2. The second version is becoming the default in current Linux distributions; thus, it is necessary to address this limitation.

    Equation 1
    shares = CPU * speed

Fixes: #6744
2. Proposed changes

To address the problem described, we propose applying a scale conversion that considers the max shares of the host. Using the same formula currently utilized by ACS, it is possible to calculate the maximum shares a host can offer; in other words, the number of cores and the nominal speed of the host's CPU give the upper limit of shares allowed to a VM. This value is then scaled to cgroup v2's allowed interval of [1, 10000] using a linear scale conversion.

The VM shares would be calculated as in Equation 2, presented below, where VM requested shares is the requested shares value calculated using Equation 1, cgroup upper limit is fixed at 10000 (the cgroups v2 upper limit), and host max shares is the maximum shares value of the host, calculated using Equation 1. Using Equation 2, the only case where a VM exceeds the cgroup v2 limit is when the user requests more resources than the host has, which is not possible with the current implementation of ACS.

    Equation 2
    shares = (VM requested shares * cgroup upper limit)/host max shares

To implement the proposal, the following APIs will be updated: deployVirtualMachine, migrateVirtualMachine and scaleVirtualMachine. When a VM is being deployed, a new verification will be added to find a suitable host: the max shares of each host will be calculated, and the VM's calculated shares will be checked so that they do not surpass the host's value. Likewise, the migration of VMs will have a similar new verification. Lastly, the scaling of VMs will also have the same verification for the VM's host.

To determine the max shares of a given host, we will use the same equation currently used in ACS for calculating the shares of VMs, presented in Section 1. When Equation 1 is used to determine the maximum shares of a host, CPU is the number of cores of the host, and speed is the nominal CPU speed, i.e., considering the CPU's base frequency.

It is important to note that these changes are only for hosts with the KVM hypervisor using cgroup v2 for now.
2023-12-13 10:51:24 +05:30
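
Equation 2 as runnable arithmetic (a minimal sketch; the host figures below are made-up examples and the helper names are not from the PR):

```java
public class CgroupSharesScaler {
    private static final int CGROUP_V2_UPPER_LIMIT = 10000; // cgroups v2 max shares

    // Equation 1: shares = CPU cores * speed (MHz)
    static int requestedShares(int cpuCores, int speedMhz) {
        return cpuCores * speedMhz;
    }

    // Equation 2: linearly scale the requested shares into cgroups v2's [1, 10000]
    static int scaledShares(int vmRequestedShares, int hostMaxShares) {
        // widen to long so large hosts cannot overflow the multiplication
        return (int) ((long) vmRequestedShares * CGROUP_V2_UPPER_LIMIT / hostMaxShares);
    }

    public static void main(String[] args) {
        int vmShares = requestedShares(6, 2000); // the PR's example: 6 cores @ 2 GHz -> 12000
        int hostMax = requestedShares(32, 2400); // example host: 32 cores @ 2.4 GHz -> 76800
        System.out.println(scaledShares(vmShares, hostMax)); // 12000 * 10000 / 76800 = 1562
    }
}
```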
Rene Glover
1031c31e6a
FiberChannel Multipath for KVM + Pure Flash Array and HPE-Primera Support (#7889)
This PR provides a new primary storage volume type called "FiberChannel" that allows access to volumes connected to hosts over fiber channel connections; it requires Multipath to provide path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated away from the connector that communicates with the primary storage provider, using a ProviderAdapter interface; this allows the code interacting with the primary storage provider APIs to be simpler and to have no direct dependencies on CloudStack code. Lastly, the PR provides an implementation of the ProviderAdapter classes for the HP Enterprise Primera line of storage solutions and the Pure Flash Array line of storage solutions.
2023-12-09 11:31:33 +05:30
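
A hypothetical sketch of the ProviderAdapter idea described above (the real interface in the PR differs; this only shows the shape of a CloudStack-independent adapter):

```java
// Array-specific code (Primera, Pure FlashArray) would implement this small,
// CloudStack-independent contract; names and signatures are illustrative only.
public interface ProviderAdapter {
    String createVolume(String name, long sizeInBytes); // returns provider-side id
    void deleteVolume(String providerVolumeId);
    // Host-based VLUN assignment: export the volume to a host by its WWN.
    void attachVolumeToHost(String providerVolumeId, String hostWwn);
    void detachVolumeFromHost(String providerVolumeId, String hostWwn);
}
```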
Abhishek Kumar
c599011ef5 Merge remote-tracking branch 'apache/4.18' 2023-12-08 18:06:15 +05:30
Harikrishna
7eb36367c9
Add lock mechanism considering template id, pool id, host id in PowerFlex Storage (#8233)
Observed a failure to start a new virtual machine with PowerFlex storage. Traced it to concurrent VM starts using the same template and the same host for the copy; the second mapping attempt failed.

While creating the volume clone from the seeded template in primary storage, taking a lock on a string containing the IDs of the template, storage pool and destination host avoids concurrent mapping attempts with the same host.
2023-12-08 13:21:16 +05:30
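
Sketch of the locking described above, assuming CloudStack's com.cloud.utils.db.GlobalLock utility; the key format and the clone helper are hypothetical:

```java
import com.cloud.utils.db.GlobalLock;
import com.cloud.utils.exception.CloudRuntimeException;

public class TemplateCopyLockSketch {
    // Serialize concurrent copies of the same template to the same pool via the
    // same host; the key format is hypothetical, the idea matches the commit above.
    void copyTemplateToPool(long templateId, long poolId, long hostId) {
        String key = String.format("template-%d-pool-%d-host-%d", templateId, poolId, hostId);
        GlobalLock lock = GlobalLock.getInternLock(key);
        try {
            if (!lock.lock(300)) { // wait up to 300 s for a concurrent copy to finish
                throw new CloudRuntimeException("Unable to acquire lock " + key);
            }
            try {
                cloneVolumeFromSeededTemplate(templateId, poolId, hostId); // hypothetical helper
            } finally {
                lock.unlock();
            }
        } finally {
            lock.releaseRef();
        }
    }

    void cloneVolumeFromSeededTemplate(long templateId, long poolId, long hostId) {
        // placeholder for the actual clone/mapping work
    }
}
```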
Bryan Lima
b0910fc61d
Add dynamic secondary storage selection (#7659) 2023-12-04 09:52:32 +01:00
kishankavala
5651eab49c
ObjectStore Framework with MinIO and Simulator plugins (#7752)
This PR adds an Object Storage feature to CloudStack.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/%5BDRAFT%5D+CloudStack+Object+Store
2023-12-01 17:51:00 +05:30
João Jandre
26b01f6f3b
Flexible tags for hosts and storage pools (#7489)
Co-authored-by: João Jandre <joao@scclouds.com.br>
2023-11-30 09:36:47 +01:00
Daan Hoogland
05b9b6e2e7 Merge branch '4.18' into main 2023-11-13 11:36:51 +01:00
Abhishek Kumar
d0f3233fda
edge-zone,kvm,iso,cks: allow k8s deployment with direct-download iso (#8142)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2023-11-10 13:56:05 +01:00
John Bampton
f090c77f41
misc: fix spelling (#7549)
Co-authored-by: Stephan Krug <stekrug@icloud.com>
2023-11-02 09:23:53 +01:00
Vishesh
5362bad442
Storage Management (#7949) 2023-11-01 10:46:22 +01:00
Daan Hoogland
587d1d7dba Merge remote-tracking branch 'apache/4.18' into main 2023-10-26 09:37:38 +02:00
slavkap
6ae3b73ca2
Create snapshot from VM snapshot without memory for NFS/Local storage (#8117) 2023-10-26 08:46:14 +02:00
Abhishek Kumar
543c54c718
api,server,ui: snapshot copy, multi-zone replica (#7873)
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.

Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The copySnapshot response will return zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.

In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot will be taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.

As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.

`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.

Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.

`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.

----
New API added
`copySnapshot`

Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form

Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
2023-10-23 09:01:58 +02:00
sato03
e437d1016f
Snapshot removal and storage cleanup logs (#8031) 2023-10-16 16:20:09 +02:00
Rohit Yadav
8a34afa8ab Merge remote-tracking branch 'origin/4.18' 2023-10-11 21:00:06 +05:30
Rohit Yadav
8350ce5aa4
storage: allow VM snapshots without memory for KVM when global setting allows (#8062)
This removes the conditional logic whose comment notes to remove it
after PR #5297 is merged, which is applicable for ACS 4.18+. Only when
the global setting is enabled and memory isn't selected can a VM
snapshot be allowed for VMs on KVM that have qemu-guest-agent running.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2023-10-11 20:56:45 +05:30
SadiJr
1bda2343f3
Improve logs when searching one storage pool to allocate a new volume (#7212)
Co-authored-by: SadiJr <sadi@scclouds.com.br>
2023-09-28 13:42:42 +02:00
Vishesh
c69e3c5f42
Remove powermock from engine/storage/configdrive (#7988) 2023-09-22 14:07:29 +02:00
Vishesh
84277e783b
remove powermock from engine (#7975) 2023-09-20 10:11:28 +02:00
Wei Zhou
246bb24b0f Updating pom.xml version numbers for release 4.18.2.0-SNAPSHOT
Signed-off-by: Wei Zhou <weizhou@apache.org>
2023-09-12 17:26:53 +02:00