When a Kubernetes cluster is deployed in a project, a separate service account is created and added to the project if it does not already exist.
This account is given a role with limited API access.
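
For illustration, a minimal sketch of the get-or-create flow described above; the `ServiceAccountStore`, `KubernetesServiceAccountHelper`, and role/account names below are hypothetical stand-ins rather than the actual CloudStack types:

```java
// Minimal sketch of the get-or-create flow; all names here are illustrative,
// not actual CloudStack classes or signatures.
interface ServiceAccountStore {
    Long findServiceAccountId(long projectId);                 // null if none exists yet
    long createAccountWithRole(String name, String roleName);  // returns the new account id
    void addAccountToProject(long accountId, long projectId);
}

class KubernetesServiceAccountHelper {
    private final ServiceAccountStore store;

    KubernetesServiceAccountHelper(ServiceAccountStore store) {
        this.store = store;
    }

    /** Idempotently returns the project's k8s service account id. */
    long getOrCreateServiceAccount(long projectId, String projectUuid) {
        Long existing = store.findServiceAccountId(projectId);
        if (existing != null) {
            return existing; // reuse on subsequent cluster deployments
        }
        // A fresh account gets a role limited to the APIs the cluster needs.
        long accountId = store.createAccountWithRole(
                "kubeadmin-" + projectUuid, "KubernetesServiceRole");
        store.addAccountToProject(accountId, projectId);
        return accountId;
    }
}
```
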
Clean up clusters on owner account cleanup; delete the service account if needed
When the owner account of k8s clusters is deleted, its node VMs get expunged but
the cluster entries remain in the DB. This fixes the issue by cleaning up all
clusters of the deleted account.
The project k8s service account will be deleted on account cleanup, or when no
active k8s cluster remains.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
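
A rough sketch of the cleanup order implied by the notes above, with hypothetical names (`ClusterStore`, `onAccountCleanup`, etc.); the real CloudStack hooks and DAOs differ:

```java
// Illustrative only: names and structure are assumptions, not CloudStack's actual code.
import java.util.List;

class KubernetesClusterCleanupSketch {

    interface ClusterStore {
        List<Long> listClusterIdsOwnedBy(long accountId);
        void destroyCluster(long clusterId);                // expunge node VMs, remove the DB entry
        boolean hasActiveClustersInProject(long projectId);
        void deleteProjectServiceAccount(long projectId);
    }

    private final ClusterStore store;

    KubernetesClusterCleanupSketch(ClusterStore store) {
        this.store = store;
    }

    /**
     * Called from account cleanup: destroy the account's clusters first, then drop
     * the project service account if no active cluster still needs it.
     */
    void onAccountCleanup(long accountId, long projectId) {
        for (long clusterId : store.listClusterIdsOwnedBy(accountId)) {
            store.destroyCluster(clusterId);
        }
        if (!store.hasActiveClustersInProject(projectId)) {
            store.deleteProjectServiceAccount(projectId);
        }
    }
}
```
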
* Reset the pool id when create volume fails on the allocated pool
- The pool id is persisted while creating the volume, but it is not reverted when the operation fails. On the next create-volume attempt, CloudStack cannot find any suitable primary storage, even though pools with enough capacity are available, because the pool is already assigned to the volume in Allocated state (and the storage pool compatibility check fails). Ensure the volume is not assigned to any pool if creating it fails, so that the next creation job picks a suitable pool (see the sketch after this list).
* Endpoint check for resize
* Update the resize error through the callback result instead of an exception
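
As referenced in the first bullet, a minimal sketch of resetting the persisted pool id when volume creation fails; the `VolumeRecord`/`VolumeStore` types and method names are assumptions made for illustration, not the actual CloudStack DAO API:

```java
// Sketch of the failure path described above; the types and calls here are
// simplified stand-ins, not the exact CloudStack code.
class VolumeAllocationSketch {

    interface VolumeRecord {
        Long getPoolId();
        void setPoolId(Long poolId);
    }

    interface VolumeStore {
        void update(VolumeRecord volume);
    }

    private final VolumeStore volumeStore;

    VolumeAllocationSketch(VolumeStore volumeStore) {
        this.volumeStore = volumeStore;
    }

    /**
     * Attempt to create the volume on the allocated pool; on failure, clear the
     * persisted pool id so the next attempt can pick another suitable pool.
     */
    boolean createVolume(VolumeRecord volume, Runnable createOnPool) {
        try {
            createOnPool.run();
            return true;
        } catch (RuntimeException e) {
            volume.setPoolId(null);   // revert the allocation instead of leaving the volume pinned
            volumeStore.update(volume);
            return false;
        }
    }
}
```
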
Somehow deleteDatastore was never implemented, which meant that templates were
not cleaned up on datastore delete and agents were never informed about
storage pool removal.
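
An outline of what a deleteDatastore implementation needs to cover according to the note above: cleaning up template references on the pool and notifying agents that the pool is gone. The interfaces below are stand-ins for illustration, not the actual driver or agent API:

```java
// Outline of the two responsibilities described above; names are illustrative only.
import java.util.List;

class DatastoreDeleteSketch {

    interface TemplateStore {
        List<Long> listTemplateRefsOnPool(long poolId);
        void removeTemplateRef(long templateRefId);        // clean up template copies on the pool
    }

    interface AgentNotifier {
        void notifyStoragePoolRemoved(long hostId, long poolId);   // tell the agent to forget the pool
    }

    private final TemplateStore templates;
    private final AgentNotifier agents;

    DatastoreDeleteSketch(TemplateStore templates, AgentNotifier agents) {
        this.templates = templates;
        this.agents = agents;
    }

    /**
     * Delete a datastore: clean up its template references, then inform every
     * host that had the pool attached.
     */
    void deleteDatastore(long poolId, List<Long> attachedHostIds) {
        for (long ref : templates.listTemplateRefsOnPool(poolId)) {
            templates.removeTemplateRef(ref);
        }
        for (long hostId : attachedHostIds) {
            agents.notifyStoragePoolRemoved(hostId, poolId);
        }
    }
}
```
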
* engine-storage-datamotion: Set SecretConsumerDetail for VM live migration with storage on shared NFS
* VM live migration - powerflex encrypted volume
* rename isPowerFlex
```
2024-06-10T12:24:50,711 INFO [o.a.c.s.i.TemplateServiceImpl] (qtp776484396-20:[ctx-eb090c22, ctx-5fa5579c]) (logid:7a783000) Template Sync did not find 211-2-d536fb03-5f89-3e77-8dea-323315bcbfab on image store 3, may request download based on available hypervisor types
...
2024-06-10T12:24:51,043 INFO [o.a.c.s.i.TemplateServiceImpl] (qtp776484396-20:[ctx-eb090c22, ctx-5fa5579c]) (logid:7a783000) Skip sync downloading private template 211-2-d536fb03-5f89-3e77-8dea-323315bcbfab to a new image store
```
With XenServer as the hypervisor, deleting a snapshot that has a parent also erases that parent on storage, causing data loss. This behavior was introduced in #7873, where the list of snapshot states that can be deleted was changed to include BackedUp snapshots.
This PR changes the states list back to the original one, and swaps the while loop for a do-while loop to account for the changes in #7873.
Fixes #9446
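
The idea seems to be that a do-while keeps the first deletion unconditional, while parents are only erased when they are in one of the restored, narrower set of deletable states. A simplified sketch of that loop shape, using stand-in types and state names rather than the actual XenServer storage code:

```java
// Simplified sketch of the chain-deletion loop shape; the Snapshot type and
// state names below are illustrative stand-ins only.
import java.util.EnumSet;
import java.util.Set;

class SnapshotChainDeleteSketch {

    enum State { Destroyed, Error /* , BackedUp, ... */ }

    static class Snapshot {
        State state;
        Snapshot parent;
        boolean hasChildren;
    }

    /** States in which a snapshot on the chain may be erased from storage;
     *  the narrower list deliberately excludes BackedUp parents. */
    private static final Set<State> DELETABLE_STATES = EnumSet.of(State.Destroyed, State.Error);

    void eraseOnStorage(Snapshot s) {
        // placeholder for the actual storage-side removal
    }

    /**
     * Erase the requested snapshot, then walk up the parent chain; the do-while
     * makes the first erase unconditional, while parents are only erased when
     * they are unreferenced and in a deletable state.
     */
    void deleteChain(Snapshot snapshot) {
        Snapshot current = snapshot;
        do {
            eraseOnStorage(current);
            current = current.parent;
        } while (current != null
                && !current.hasChildren
                && DELETABLE_STATES.contains(current.state));
    }
}
```
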