We didn't account for caching the volume stats of each used Linstor
cluster, so the first Linstor cluster that was queried prevented caching
for all the others, and null was returned for them.
Now there is an invalidation counter for each Linstor cluster, and
the cached result is stored with the Linstor cluster address as a key prefix.
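A minimal sketch of that idea, using hypothetical names (LinstorVolumeStatsCache, getCapacity) rather than the plugin's actual classes: cache entries are keyed with the controller address as a prefix, and each cluster keeps its own invalidation counter, so invalidating one cluster no longer affects the others.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical sketch: per-cluster cached volume stats with per-cluster invalidation. */
public class LinstorVolumeStatsCache {
    private static final class Entry {
        final long counterSnapshot;
        final long capacityBytes;
        Entry(long counterSnapshot, long capacityBytes) {
            this.counterSnapshot = counterSnapshot;
            this.capacityBytes = capacityBytes;
        }
    }

    // one invalidation counter per Linstor controller address
    private final Map<String, AtomicLong> invalidateCounters = new ConcurrentHashMap<>();
    // cache key is prefixed with the controller address, e.g. "http://ctrl-a:3370/vol-123"
    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public void invalidate(String controllerAddress) {
        invalidateCounters.computeIfAbsent(controllerAddress, k -> new AtomicLong()).incrementAndGet();
    }

    public Long getCapacity(String controllerAddress, String volumeName) {
        long current = invalidateCounters.computeIfAbsent(controllerAddress, k -> new AtomicLong()).get();
        Entry e = cache.get(controllerAddress + "/" + volumeName);
        // stale entries of this cluster are ignored; other clusters keep their own counters
        return (e != null && e.counterSnapshot == current) ? e.capacityBytes : null;
    }

    public void putCapacity(String controllerAddress, String volumeName, long capacityBytes) {
        long current = invalidateCounters.computeIfAbsent(controllerAddress, k -> new AtomicLong()).get();
        cache.put(controllerAddress + "/" + volumeName, new Entry(current, capacityBytes));
    }
}
```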
Somehow deleteDatastore was never implemented, which meant that
templates were not cleaned up on datastore delete and
agents were never informed about the storage pool removal.
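A hedged sketch of the two steps that were missing; the class and the Backend callbacks are purely illustrative stand-ins for CloudStack's DAOs and agent manager, not the plugin's real API:

```java
import java.util.List;

/** Illustrative sketch only: the two steps a deleteDatastore implementation has to cover. */
public class LinstorDatastoreCleaner {
    /** Hypothetical callbacks standing in for CloudStack's DAOs and agent manager. */
    public interface Backend {
        List<String> listTemplatesOnPool(long poolId);
        void removeTemplateReference(long poolId, String templateUuid);
        List<Long> listHostsUsingPool(long poolId);
        void sendDeleteStoragePoolCommand(long hostId, long poolId);
    }

    public boolean deleteDatastore(long poolId, Backend backend) {
        // 1) clean up template copies that were left on the datastore
        for (String templateUuid : backend.listTemplatesOnPool(poolId)) {
            backend.removeTemplateReference(poolId, templateUuid);
        }
        // 2) tell every agent that still knows the pool that it is gone
        for (Long hostId : backend.listHostsUsingPool(poolId)) {
            backend.sendDeleteStoragePoolCommand(hostId, poolId);
        }
        return true;
    }
}
```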
If a -rst resource wasn't deleted because of a failed copy,
a recurring snapshot attempt couldn't succeed, because the
"old" -rst resource was still there. To prevent this, always
try to remove the -rst resource beforehand; if it doesn't exist, that is a no-op.
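A minimal sketch of this "delete first, ignore not-found" pattern, using a hypothetical client facade instead of the real Linstor Java API:

```java
/** Hypothetical Linstor client facade used only for illustration. */
interface LinstorClient {
    boolean resourceDefinitionExists(String rscName);
    void deleteResourceDefinition(String rscName);
    void createSnapshotRestoreResource(String rscName, String snapshotName);
}

public class RstResourceHelper {
    /**
     * Remove a possibly leftover "-rst" resource from an earlier failed copy,
     * then create it again. Deleting a non-existent resource is treated as a no-op,
     * so recurring snapshot attempts no longer fail on stale leftovers.
     */
    public static void recreateRstResource(LinstorClient client, String baseRscName, String snapshotName) {
        String rstName = baseRscName + "-rst";
        if (client.resourceDefinitionExists(rstName)) {
            client.deleteResourceDefinition(rstName);
        }
        client.createSnapshotRestoreResource(rstName, snapshotName);
    }
}
```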
This typo is found in some PRs:
plugins/storage/volume/linstor/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStorageAdaptor.java:510: poperties ==> properties
* Improve logging to include more identifiable information for kvm plugin
* Update logging for scaleio plugin
* Improve logging to include more identifiable information for default volume storage plugin
* Improve logging to include more identifiable information for agent managers
* Improve logging to include more identifiable information for Listeners
* Replace ids with objects or uuids
* Improve logging to include more identifiable information for engine
* Improve logging to include more identifiable information for server
* Fixups in engine
* Improve logging to include more identifiable information for plugins
* Improve logging to include more identifiable information for Cmd classes
* Fix toString method for StorageFilterTO.java
In particular, Linstor can use this information to allow
dual volume access only for live migration and not enable it in general,
which can and will lead to data corruption if for some reason
2 VMs get started on 2 different hosts.
If a node doesn't have a DRBD connection to another node,
additionally ask the Linstor-Controller whether the node is alive.
Otherwise we would simply have answered no, even though the node might still be alive.
This is always the case in a non-hyperconverged setup.
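A hedged sketch of that decision, with hypothetical helpers (hasDrbdConnectionTo, controllerSaysNodeOnline) standing in for the real heartbeat code:

```java
/** Illustrative only: HA liveness check for a peer node. */
public class LinstorNodeLivenessCheck {
    interface Cluster {
        /** true if this host has a DRBD connection to the peer (hyperconverged case). */
        boolean hasDrbdConnectionTo(String peerNode);
        /** true if DRBD reports the peer as connected/healthy. */
        boolean drbdPeerLooksAlive(String peerNode);
        /** ask the Linstor controller whether the node is registered as online. */
        boolean controllerSaysNodeOnline(String peerNode);
    }

    public static boolean isNodeAlive(Cluster cluster, String peerNode) {
        if (cluster.hasDrbdConnectionTo(peerNode)) {
            return cluster.drbdPeerLooksAlive(peerNode);
        }
        // no DRBD connection (e.g. non-hyperconverged setup): don't just answer "no",
        // fall back to asking the Linstor controller about the node state
        return cluster.controllerSaysNodeOnline(peerNode);
    }
}
```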
If a secondary storage pool is used by e.g.
2 concurrent snapshot->template actions,
the first action to finish removed the netfs mount
point for the other action.
Now the storage pools are usage ref-counted and are only
deleted once there are no more users.
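A minimal ref-counting sketch (hypothetical names, not the actual KVM storage pool manager): the mount point is only removed when the last user releases the pool.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of usage ref-counting for secondary storage mounts. */
public class SecondaryPoolRefCounter {
    /** Hypothetical mount/umount hooks standing in for the real netfs handling. */
    interface Mounter {
        void mount(String poolUuid);
        void umount(String poolUuid);
    }

    private final Map<String, Integer> users = new HashMap<>();

    public synchronized void acquire(String poolUuid, Mounter mounter) {
        int count = users.getOrDefault(poolUuid, 0);
        if (count == 0) {
            mounter.mount(poolUuid);   // first user creates the mount point
        }
        users.put(poolUuid, count + 1);
    }

    public synchronized void release(String poolUuid, Mounter mounter) {
        int count = users.getOrDefault(poolUuid, 0);
        if (count <= 1) {
            users.remove(poolUuid);
            mounter.umount(poolUuid);  // last user removes the mount point
        } else {
            users.put(poolUuid, count - 1);
        }
    }
}
```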
In non-hyperconverged setups, diskless nodes don't have a DRBD connection
to each other, so setting properties on that connection had no effect.
Now it is checked whether a connection exists
between the live-migration nodes, and if not,
allow-two-primaries is set at the resource-definition level.
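A hedged sketch of that fallback; the helper names are hypothetical and the property key is only assumed to follow DRBD's options naming:

```java
/** Illustrative only: where to set allow-two-primaries for a live migration. */
public class AllowTwoPrimariesHelper {
    interface Linstor {
        /** true if a DRBD connection exists between the two nodes for this resource. */
        boolean connectionExists(String rscName, String nodeA, String nodeB);
        void setConnectionProperty(String rscName, String nodeA, String nodeB, String key, String value);
        void setResourceDefinitionProperty(String rscName, String key, String value);
    }

    // assumed property key, modeled on DRBD net options naming
    static final String ALLOW_TWO_PRIMARIES = "DrbdOptions/Net/allow-two-primaries";

    public static void enableForLiveMigration(Linstor api, String rscName, String srcNode, String dstNode) {
        if (api.connectionExists(rscName, srcNode, dstNode)) {
            // connection exists: limit dual access to exactly this node pair
            api.setConnectionProperty(rscName, srcNode, dstNode, ALLOW_TWO_PRIMARIES, "yes");
        } else {
            // diskless nodes without a connection: fall back to the resource-definition level
            api.setResourceDefinitionProperty(rscName, ALLOW_TWO_PRIMARIES, "yes");
        }
    }
}
```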
qemu versions prior to 7.0 have a bug with discard enabled while using the IDE bus.
It would crash the qemu process and kill the virtual machine;
this is most noticeable when installing a Windows guest from the
Windows ISO installer.
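A small sketch of the kind of guard this implies (hypothetical method names, not the actual KVM plugin code): discard is only enabled when the bus is not IDE or the qemu version is at least 7.0.

```java
/** Illustrative guard: avoid discard on IDE with qemu older than 7.0. */
public class DiscardGuard {
    /** qemu version encoded as major * 1_000_000 + minor * 1_000 + patch, as libvirt reports it. */
    public static boolean canEnableDiscard(String diskBus, long qemuVersion) {
        boolean ideBus = "ide".equalsIgnoreCase(diskBus);
        boolean qemuAtLeast7 = qemuVersion >= 7_000_000L;
        // qemu before 7.0 may crash the VM when discard is used on the IDE bus
        return !ideBus || qemuAtLeast7;
    }

    public static void main(String[] args) {
        System.out.println(canEnableDiscard("virtio", 6_002_000L)); // true
        System.out.println(canEnableDiscard("ide", 6_002_000L));    // false
        System.out.println(canEnableDiscard("ide", 7_000_000L));    // true
    }
}
```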
* linstor: enable discard for Linstor storage pools
All Linstor storage backends support discard, so it can be safely enabled.
* linstor: enable discard for Linstor storage pools (CHANGELOG.md)