Somehow commit 5a52ca78ae5e165211c618525613c3d62cfd1b28 was reverted,
so cloud-init templates no longer work on arm64 :(
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Use special icon for sharedfs instance and prefix for sharedfs volumes
* Give custom icon precedence over shared fs icon
* Fix sharedfsvm icon size
* Fix UT failure in StorageVmSharedFSLifeCycleTest
* [PowerFlex/ScaleIO] Added wait time after SDC service start/restart/stop, and retries to fetch SDC id/guid
* Added the agent property 'powerflex.sdc.service.wait' for the time (in seconds) to wait after SDC service start/restart/stop (see the example below)
* code improvements
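A minimal illustration of setting the new property in the KVM agent configuration; the value below is only an example, not a documented default:
```
# /etc/cloudstack/agent/agent.properties
# seconds to wait after the PowerFlex/ScaleIO SDC service is started/restarted/stopped
# (illustrative value)
powerflex.sdc.service.wait=15
```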
* directdownload: fix keytool importcert
```
$ /usr/bin/keytool -importcert file /etc/cloudstack/agent/CSCERTIFICATE-full -keystore /etc/cloudstack/agent/cloud.jks -alias full -storepass DAWsfkJeeGrmhta6
Illegal option: file
keytool -importcert [OPTION]...
Imports a certificate or a certificate chain
Options:
-noprompt do not prompt
-trustcacerts trust certificates from cacerts
-protected password through protected mechanism
-alias <alias> alias name of the entry to process
-file <file> input file name
-keypass <arg> key password
-keystore <keystore> keystore name
-cacerts access the cacerts keystore
-storepass <arg> keystore password
-storetype <type> keystore type
-providername <name> provider name
-addprovider <name> add security provider by name (e.g. SunPKCS11)
[-providerarg <arg>] configure argument for -addprovider
-providerclass <class> add security provider by fully-qualified class name
[-providerarg <arg>] configure argument for -providerclass
-providerpath <list> provider classpath
-v verbose output
Use "keytool -?, -h, or --help" for this help message
```
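The failure above is the missing dash on the `-file` option; presumably the fix invokes keytool along these lines (paths, alias and store password copied from the failing command, `-noprompt` added here only for illustration):
```
$ /usr/bin/keytool -importcert -noprompt -file /etc/cloudstack/agent/CSCERTIFICATE-full \
    -keystore /etc/cloudstack/agent/cloud.jks -alias full -storepass DAWsfkJeeGrmhta6
```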
* DirectDownload: drop HttpsMultiTrustManager
* [Vmware to KVM Migration] Display virt-v2v and ovftool versions for supported hosts for migration
* Fix UI display
* Address review comments
* Fix ovftool and version display - also display versions on host details view
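For reference, the tool versions can be checked manually on a KVM host as below; how the agent actually probes them may differ:
```
# illustrative manual checks on the host
$ virt-v2v --version
$ ovftool --version
```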
A separate service account will be created and added to the project, if
it does not already exist, when a Kubernetes cluster is deployed in a
project. This account will have a role with limited API access.
Clean up clusters on owner account cleanup; delete the service account
if needed.
When the owner account of k8s clusters is deleted, its node VMs get
expunged but the cluster entries remain in the DB. This fixes the issue
by cleaning up all clusters for the deleted account.
The project k8s service account will be deleted on account cleanup or
when no active k8s cluster remains.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
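Conceptually, the flow resembles the following CloudMonkey (cmk) calls; the role name, API rule, account and project identifiers below are purely illustrative and are not the values the code uses:
```
# Illustrative only - the actual role, rules and account naming are decided by the code
cmk create role name=K8sProjectServiceRole type=User description="limited API access for k8s"
cmk create rolepermission roleid=<role-uuid> rule=listKubernetesClusters permission=allow
cmk create account username=k8s-project-svc roleid=<role-uuid> ...
cmk add accounttoproject projectid=<project-uuid> account=k8s-project-svc
```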
* VMware - Ignore disk not found error on cleanup when the VM disk doesn't exist
* VMware - Retry powerOn on lock issues
* addressed comments
* Update CPVM reboot tests - wait for the agent to Disconnect and come back Up
* Retry moveDatastoreFile on any file access issue while creating a volume from a snapshot
* Update the full clone flag when restoring a VM using a root disk offering larger than the template size
* refactored (mainly for diskInfo, which caused an NPE in some cases)
* Retry moveDatastoreFile when there is any file access issue
* Reset the pool id when create volume fails on the allocated pool
- the pool id is persisted while creating the volume, and it is not reverted when creation fails. On the next create volume attempt, CloudStack cannot find any suitable primary storage even though there are pools with enough capacity, because the failed volume is still in the Allocated state with the pool assigned to it (and the storage pool compatibility check fails). Ensure the volume is not assigned to any pool if create volume fails, so that the next creation job can pick a suitable pool (an illustrative check is sketched after this list).
* endpoint check for resize
* update the resize error through callback result instead of exception
* Consider the clusters with allocation state 'Enabled' for EndPoint selection (in addition to Host state)
* logger fix
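An illustrative way to spot volumes affected by the original issue (a volume stuck in the Allocated state with a pool already assigned); this query is for inspection only and is not part of the fix:
```
mysql> SELECT id, name, state, pool_id FROM cloud.volumes
       WHERE state = 'Allocated' AND pool_id IS NOT NULL;
```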