* 4.18:
Storage plugin support to check if volume on datastore requires access for migration (#8655)
CKS: fix /opt/bin/deploy-cloudstack-secret in CKS control nodes (#8697)
* Check if a volume on the datastore requires access for migration, and grant/revoke volume access if required
* Updated the default implementation of the requiresAccessForMigration method in PrimaryDataStoreDriver
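A minimal sketch of the grant/revoke-around-migration flow this describes, using simplified stand-in types; the real PrimaryDataStoreDriver signatures in CloudStack differ, so treat the names and parameters below as assumptions:
```
// Illustrative sketch only: simplified stand-ins for the driver API, showing how
// requiresAccessForMigration gates granting and revoking volume access around a migration.
interface StorageDriverSketch {
    // Drivers whose volumes must be exposed to the target host before migration
    // return true here; the default implementation returns false.
    default boolean requiresAccessForMigration(String volumeUuid) {
        return false;
    }

    void grantAccess(String volumeUuid, String hostUuid);

    void revokeAccess(String volumeUuid, String hostUuid);
}

class MigrationHelperSketch {
    // Grant access only when the driver asks for it, and always revoke it afterwards.
    static void migrateVolume(StorageDriverSketch driver, String volumeUuid, String hostUuid, Runnable doMigrate) {
        boolean needsAccess = driver.requiresAccessForMigration(volumeUuid);
        if (needsAccess) {
            driver.grantAccess(volumeUuid, hostUuid);
        }
        try {
            doMigrate.run();
        } finally {
            if (needsAccess) {
                driver.revokeAccess(volumeUuid, hostUuid);
            }
        }
    }
}
```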
* linstor: Extract getting storage pools from a resource group into a function
* linstor: move getHostname() to kvm/Pool and reimplement
* linstor: implement CloudStack HA support
* linstor: Add util method getBestErrorMessage from main
* linstor: a failed removal of allow-two-primaries is not a fatal error
* linstor: Fix failure if a Linstor node is down while migrating
If a Linstor node is down while migrating a resource, setting allow-two-primaries
will fail because the downed node cannot be reached. The property is still set on
the other nodes, though, and the migration should work.
We now just report an error instead of failing completely.
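A minimal sketch of that behaviour, assuming a hypothetical client helper; this is not the actual Linstor plugin code:
```
import java.util.logging.Level;
import java.util.logging.Logger;

class AllowTwoPrimariesSketch {
    private static final Logger LOG = Logger.getLogger(AllowTwoPrimariesSketch.class.getName());

    // Hypothetical stand-in for the java-linstor client call used by the driver.
    interface LinstorClientSketch {
        void setResourceProperty(String resource, String key, String value) throws Exception;
    }

    // Tries to enable allow-two-primaries before a live migration. If a Linstor
    // node is down the call throws, but the property is still applied on the
    // reachable nodes, so the error is logged and the migration proceeds.
    static void enableAllowTwoPrimaries(LinstorClientSketch client, String resourceName) {
        try {
            client.setResourceProperty(resourceName, "allow-two-primaries", "yes");
        } catch (Exception e) {
            LOG.log(Level.SEVERE, "Could not set allow-two-primaries on all nodes for " + resourceName
                    + "; continuing, the property may still be set on the reachable nodes", e);
        }
    }
}
```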
This PR provides a new primary storage volume type called "FiberChannel" that allows access to volumes connected to hosts over Fibre Channel connections; it requires multipath to provide path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated away from the connector that communicates with the primary storage provider, using a ProviderAdapter interface, so the code interacting with the primary storage provider APIs stays simpler and has no direct dependencies on CloudStack code. Lastly, the PR provides an implementation of the ProviderAdapter classes for the HPE Primera and Pure Storage FlashArray lines of storage solutions.
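A minimal sketch of what such a provider-facing adapter contract could look like; the method names below are illustrative assumptions, not the actual ProviderAdapter interface shipped in the PR:
```
// Illustrative only: a simplified adapter contract in the spirit of the
// ProviderAdapter abstraction described above. The orchestration layer talks to
// this interface; an array-specific class (e.g. for Primera or FlashArray)
// implements it against the vendor API, with no CloudStack types leaking in.
interface StorageProviderAdapterSketch {

    // Create a volume of the requested size on the backing array and return its
    // provider-side identifier.
    String createVolume(String name, long sizeInBytes);

    // Expose the volume to a host, identified here by its initiator WWNs for a
    // Fibre Channel connection.
    void attachVolume(String volumeId, java.util.List<String> hostWwns);

    void detachVolume(String volumeId, java.util.List<String> hostWwns);

    void resizeVolume(String volumeId, long newSizeInBytes);

    void deleteVolume(String volumeId);
}
```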
If there is no direct access to the storage nodes, we now create a temporary resource from the snapshot and copy that data to secondary storage. Revert works the same way, except that we now additionally look for any Linstor agent node.
Snapshot backup is now also enabled by default.
This whole BackupSnapshot functionality was introduced in 4.19,
so I would be happy if this still could be merged.
This PR adds a config option for the Linstor primary storage driver that allows you to automatically back up
volume snapshots to secondary storage.
Additionally, it no longer bundles the needed java-linstor dependency into client.jar, but instead just copies
the java-linstor.jar into lib.
The config option is called lin.backup.snapshots and defaults to false.
The scope of this change should be limited, as it only touches the Linstor driver; the relevant part of copyAsync
was implemented with two new Linstor-specific commands.
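A minimal sketch of how such a setting is typically declared as a ConfigKey in CloudStack; only the option name and its default come from the description above, while the category, description text and constructor form are assumptions:
```
// Illustrative only: a boolean ConfigKey named "lin.backup.snapshots" that
// defaults to false. Requires the CloudStack framework-config module; the
// category and description below are assumptions for this sketch.
import org.apache.cloudstack.framework.config.ConfigKey;

public class LinstorBackupSnapshotSettingSketch {

    public static final ConfigKey<Boolean> LinstorBackupSnapshots = new ConfigKey<>(
            "Storage",
            Boolean.class,
            "lin.backup.snapshots",
            "false",
            "Automatically back up Linstor volume snapshots to secondary storage",
            true); // dynamic, so it can be toggled without restarting the management server

    static boolean backupEnabled() {
        // The driver would consult this before copying a freshly taken snapshot
        // to secondary storage.
        return LinstorBackupSnapshots.value();
    }
}
```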
This extends the current KVM Host HA functionality to the StorPool storage plugin and provides an easy integration path for the remaining storage plugins to support Host HA.
The extension works like the current NFS storage implementation. It can be used simultaneously with NFS and StorPool storage, or with StorPool primary storage only.
If different primary storages such as NFS and StorPool are used and one of the storage health checks fails, there is an option to report the failure to the management server via the global config kvm.ha.fence.on.storage.heartbeat.failure. By default this option is disabled; when enabled, the Host HA service will continue with the checks on the host and eventually fence it.
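A minimal sketch of that decision, using hypothetical helper names; the only identifier taken from the description above is the kvm.ha.fence.on.storage.heartbeat.failure setting:
```
// Illustrative only: how a failed storage heartbeat could be escalated to Host HA.
// The method and map below are hypothetical; the boolean flag corresponds to the
// global setting kvm.ha.fence.on.storage.heartbeat.failure described above.
import java.util.Map;

class StorageHeartbeatSketch {

    static void evaluateHost(boolean fenceOnStorageHeartbeatFailure, Map<String, Boolean> storageHealth) {
        boolean anyFailed = storageHealth.containsValue(Boolean.FALSE);
        if (anyFailed && fenceOnStorageHeartbeatFailure) {
            // Escalate: Host HA runs its remaining checks on the host and may
            // eventually fence it.
            System.out.println("Reporting storage heartbeat failure to Host HA");
        } else if (anyFailed) {
            // Default behaviour (setting disabled): the failure is not escalated.
            System.out.println("Storage heartbeat failed, but escalation to Host HA is disabled");
        }
    }
}
```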
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.
Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The response for copySnapshot will return zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.
In a similar manner, multiple zones can be selected when taking snapshots or creating snapshot policies. In this case the snapshot is taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.
As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.
`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.
Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.
`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.
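A minimal sketch of how a client could assemble a multi-zone `createSnapshot` call using the new `zoneids` parameter; the endpoint, UUIDs and the omission of request signing are assumptions for illustration:
```
// Illustrative only: builds the query string for a createSnapshot call that also
// copies the snapshot to additional zones via the new "zoneids" parameter.
// UUIDs are placeholders and API key/signature handling is omitted.
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class CreateSnapshotRequestSketch {
    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("command", "createSnapshot");
        params.put("volumeid", "VOLUME-UUID");            // volume in the base zone
        params.put("zoneids", "ZONE-2-UUID,ZONE-3-UUID"); // additional zones to copy the snapshot to
        params.put("response", "json");

        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (query.length() > 0) {
                query.append('&');
            }
            query.append(e.getKey()).append('=')
                 .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        // Sign and send this to <management-server>/client/api
        System.out.println(query);
    }
}
```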
----
New API added
`copySnapshot`
Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form
Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
Making a diskful resource was meant as an optimization,
but it cannot work on non-hyperconverged setups,
as the (diskful) storage nodes are not part of the CloudStack cluster.
A TODO was overlooked and never implemented,
which could trigger the following bug:
If Linstor did not create a resource (diskless or diskful) on
the node chosen by CloudStack, the template data could not be
copied there; it even seems no error was triggered and the new
template file silently became empty/corrupt.
This fixes the following cases in which the Solidfire storage integration
caused issues when using Solidfire data disks with VMware:
1. Take Volume Snapshot of Solidfire data disk
2. Delete an active Instance with Solidfire data disk attached
3. Attach used existing Solidfire data disk to a running/stopped VM
4. Stop and Start an instance with Solidfire data disks attached
5. Expand disk by resizing Solidfire data disk by providing size
6. Expand disk by changing disk offering for the Solidfire data disk
Additional changes:
- Use VMFS6 as the managed datastore type if the host supports it
- Refactor detection and splitting of the managed storage datastore name in the
  storage processor
- Restrict storage rescanning for managed datastore when resizing
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Steps to reproduce the issue:
- git clone https://github.com/apache/cloudstack.git
- cd cloudstack
- rm -rf .git/
- run `mvn -P developer,systemvm clean install`
Without this PR, it fails with the following error:
```
> [ 8/10] RUN mvn -Pdeveloper -Dsimulator -DskipTests clean install:
668.1 [ERROR] Failed to execute goal pl.project13.maven:git-commit-id-plugin:4.9.10:revision (get-the-git-infos) on project cloud-plugin-storage-volume-storpool: .git directory is not found! Please specify a valid [dotGitDirectory] in your pom.xml -> [Help 1]
```
* 4.18:
UI: allow new keys for VM details (#7793)
Refactoring StorPool's smoke tests (#7392)
UI: decode userdata in EditVM dialog (#7796)
packaging: unalias cp before package upgrade (#7722)
make NoopDbUpgrade do a systemvm template check (#7564)
UI unit test: fix expected values (#7792)
* Removed the hardcoded StorPool endpoint from tests
- removed the hardcoded endpoint of StorPool primary storage from tests
- added the git commit information into the maven build
* Convert indents to spaces
* update git-commit-id-plugin version
* 4.18:
SSVM: 'allow from' private IP in other SSVMs if the public IP is in allowed internal sites cidrs (#7288)
eof added to StorPoolStatsCollector (#7754)
* 4.18:
Storage and volumes statistics tasks for StorPool primary storage (#7404)
proper storage construction (#6797)
guarantee MAC uniqueness (#7634)
server: allow migration of all VMs with local storage on KVM (#7656)
Add L2 networks to Zones with SG (#7719)