* ScaleIO/PowerFlex smoke test improvements and some fixes
* Fix test_volumes.py, encrypted volume size check (for powerflex volumes)
* Fix test_over_provisioning.py (over provisioning supported for powerflex)
* Update vm snapshot tests
* Update volume size delta in primary storage resource count for user VM volumes only
Previously the VR volume resource count for PowerFlex volumes was also updated here, resulting in a resource count discrepancy
(the count is re-calculated later by ResourceCountCheckTask, which skips VR volumes)
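A rough sketch of the intended guard, assuming CloudStack's VirtualMachine type and ResourceLimitService (the actual call site and signatures may differ):

```java
import com.cloud.configuration.Resource.ResourceType;
import com.cloud.user.ResourceLimitService;
import com.cloud.vm.VirtualMachine;

// Illustrative only: apply the primary storage size delta for user VM volumes and
// leave VR (system VM) volumes to the periodic ResourceCountCheckTask.
class UserVmStorageDeltaSketch {
    private final ResourceLimitService resourceLimitService;

    UserVmStorageDeltaSketch(ResourceLimitService resourceLimitService) {
        this.resourceLimitService = resourceLimitService;
    }

    void updatePrimaryStorageDelta(VirtualMachine vm, long accountId, long sizeDelta) {
        if (vm == null || vm.getType() != VirtualMachine.Type.User) {
            return; // skip VR/system VM volumes here
        }
        // hypothetical call site; the real code path may differ
        resourceLimitService.incrementResourceCount(accountId, ResourceType.primary_storage, sizeDelta);
    }
}
```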
* Fix test_import_unmanage_volumes.py (unsupported for powerflex)
* Fix test_sharedfs_lifecycle.py (volume size check for powerflex)
* Update powerflex.connect.on.demand config default to true
Volume stats were not cached per Linstor cluster, so the first Linstor cluster
queried prevented caching for all the others and null was returned for them.
Now there is an invalidation counter per Linstor cluster, and the cached result
is stored with the Linstor cluster address as a key prefix.
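A minimal sketch of the per-cluster caching idea; all class and field names here are illustrative, not the Linstor plugin's actual code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: one invalidation counter per Linstor cluster, and cache keys
// prefixed with the cluster address, so one cluster no longer blocks the others.
class PerClusterStatsCache {
    private static final class Entry {
        final long counter;
        final long stats;
        Entry(long counter, long stats) { this.counter = counter; this.stats = stats; }
    }

    private final Map<String, AtomicLong> invalidationCounters = new ConcurrentHashMap<>();
    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    private long counterOf(String clusterAddress) {
        return invalidationCounters.computeIfAbsent(clusterAddress, k -> new AtomicLong()).get();
    }

    Long get(String clusterAddress, String volumeName) {
        Entry e = cache.get(clusterAddress + "/" + volumeName);
        // Entries written before the cluster's last invalidation are treated as misses.
        return (e != null && e.counter == counterOf(clusterAddress)) ? e.stats : null;
    }

    void put(String clusterAddress, String volumeName, long stats) {
        cache.put(clusterAddress + "/" + volumeName, new Entry(counterOf(clusterAddress), stats));
    }

    void invalidate(String clusterAddress) {
        invalidationCounters.computeIfAbsent(clusterAddress, k -> new AtomicLong()).incrementAndGet();
    }
}
```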
* Fix deploying a VM from a snapshot that was copied to another zone
* Fix creating a StorPool volume from a snapshot when the size in the
offering is bigger than the snapshot size
Ensure bucket.getSecretKey() is used when building the S3 client.
Previously, only getAccessKey() was passed for both key and secret,
causing V4 signature validation failures during operations such as
bucket creation and policy updates.
Co-authored-by: Jean Vetorello <jean@paneas.com>
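A hedged sketch of the corrected client construction, assuming an AWS-SDK-v1-style builder (the plugin may use a different S3 client library; the BucketCredentials stand-in is hypothetical):

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

class S3ClientFactorySketch {
    // Stand-in for the bucket entity from the source, which exposes both keys.
    interface BucketCredentials {
        String getAccessKey();
        String getSecretKey();
    }

    static AmazonS3 buildS3Client(String endpoint, String region, BucketCredentials bucket) {
        BasicAWSCredentials credentials = new BasicAWSCredentials(
                bucket.getAccessKey(),
                bucket.getSecretKey()); // the bug passed getAccessKey() here too, breaking V4 signing
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, region))
                .build();
    }
}
```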
* Find system VM templates for CKS cluster honouring the preferred architecture
* Fix unit tests
* Fix checkstyle
* Sort instead of filtering by preferred arch
* Remove unnecessary stubs
* Restore java version
* Address review comments
* Fail and display an error message in case the CKS ISO arch doesn't match the selected template arch
* Prefer CKS ISO arch instead of the system VM setting
This feature adds the ability to create a new instance from a VM backup for the dummy, NAS and Veeam backup providers. It works even if the original instance used to create the backup was expunged or unmanaged. There are two parts to this functionality:
Saving all configuration details that the VM had at the time the backup was taken, and using them to create an instance from that backup.
Enabling a user to expunge/unmanage an instance that has backups.
* Use special icon for sharedfs instance and prefix for sharedfs volumes
* Give custom icon precedence over shared fs icon
* Fix sharedfsvm icon size
* Fix UT failure in StorageVmSharedFSLifeCycleTest
* [PowerFlex/ScaleIO] Added wait time after SDC service start/restart/stop, and retries to fetch SDC id/guid
* Added agent property 'powerflex.sdc.service.wait' for the time (in secs) to wait after SDC service start/restart/stop
* code improvements
* Cumulative enhancements and fixes for ScaleIO: MDM add/remove, Host prepare/unprepare, and validation that the Storage Pool can be created in the Agent.
- Implemented validation to fail Host disconnect from a Storage Pool if there are Volumes attached and SDC client MDM removal would require the scini service to be restarted
- Implemented Storage Pool validation by checking whether the MDM addresses from the configuration file and from memory (using the CLI) match; otherwise fail the ModifyStoragePool command.
- Introduced a configuration key for the timeout applied after making MDM changes for ScaleIO: powerflex.mdm.change.apply.timeout.ms (default 1000 ms)
- Implemented logic to apply this timeout after making MDM changes for ScaleIO in the prepare and unprepare flows
- Added detection of MDM removal support via the CLI
- If MDM removal via the CLI is supported, use the CLI; otherwise fall back to editing drv_cfg.txt and restarting scini (see the sketch below)
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
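A rough sketch of the CLI-first MDM removal with the drv_cfg.txt fallback and the post-change wait; every helper name below is hypothetical, not the actual ScaleIO/PowerFlex agent code:

```java
import java.util.List;

// Illustrative flow only: prefer CLI-based MDM removal; otherwise edit drv_cfg.txt
// and restart scini, then wait powerflex.mdm.change.apply.timeout.ms before proceeding.
class MdmRemovalSketch {
    private final long mdmChangeApplyTimeoutMs; // from powerflex.mdm.change.apply.timeout.ms (default 1000)

    MdmRemovalSketch(long mdmChangeApplyTimeoutMs) {
        this.mdmChangeApplyTimeoutMs = mdmChangeApplyTimeoutMs;
    }

    void removeMdms(List<String> mdmAddresses) throws InterruptedException {
        if (supportsCliMdmRemoval()) {
            mdmAddresses.forEach(this::removeMdmViaCli); // no scini restart needed
        } else {
            rewriteDrvCfgWithoutMdms(mdmAddresses);      // edit drv_cfg.txt
            restartScini();
        }
        Thread.sleep(mdmChangeApplyTimeoutMs);           // let the SDC apply the change
    }

    // The helpers below are placeholders for the real CLI/file/service operations.
    boolean supportsCliMdmRemoval() { return false; }
    void removeMdmViaCli(String mdm) { }
    void rewriteDrvCfgWithoutMdms(List<String> mdms) { }
    void restartScini() { }
}
```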
* Option to deploy a VM with existing volume/snapshot
* smoke test changes
check if the hypervisor is KVM
check if the primary storage's scope is ZONE wide
* skip all tests if the storage isn't Zone-Wide and the hypervisor isn't KVM
* support StorPool tags
add StorPool tags to a volume created from a snapshot, or to a volume which
will be attached as ROOT to a new VM
* Add StorPool tags on the new ROOT volume
* Add the StorPool tags when a volume is created from a snapshot or a
volume is attached as ROOT to a VM
* Addressed review
CKS Enhancements:
* Ability to specify different compute or service offerings for different types of CKS cluster nodes – worker, master or etcd
* Ability to use CKS ready custom templates for CKS cluster nodes
* Add and remove external nodes to and from a Kubernetes cluster
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Update remove node timeout global setting
* CKS/NSX : Missing variables in worker nodes
* CKS: Fix ISO attach logic
* CKS: Fix ISO attach logic
* address comment
* Fix Port - Node mapping when cluster is scaled in the presence of external node(s)
* CKS: Externalize control and worker node setup wait time and installation attempts
* Fix logger
* Add missing headers and fix end of line on files
* CKS: Mark nodes for manual upgrade and filter nodes to add to a CKS cluster from the same network
* Add support to deploy CKS cluster nodes on hosts dedicated to a domain
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* Support unstacked ETCD
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix CKS cluster scaling and minor UI improvement
* Reuse k8s cluster public IP for etcd nodes and rename etcd nodes
* Fix DNS resolver issue
* Update UDP active monitor to ICMP
* Add hypervisor type to CKS cluster creation to fix CKS cluster creation when External hosts added
* Fix build
* Fix logger
* Modify hypervisor param description in the create CKS cluster API
* CKS delete fails when external nodes are present
* CKS delete fails when external nodes are present
* address comment
* Improve network rules cleanup on failure adding external nodes to CKS cluster
* UI: Fix etcd template was not honoured
* UI: Fix etcd template was not honoured
* Refactor
* CKS: Exclude etcd nodes when calculating port numbers
* Fix network cleanup in case of CKS cluster failure
* Externalize retries and interval for NSX segment deletion
* Fix CKS scaling when external node(s) present in the cluster
* CKS: Fix port numbers displayed against ETCD nodes
* Add node version details to every node of k8s cluster - as we now support manual upgrade
* Add node version details to every node of k8s cluster - as we now support manual upgrade
* update column name
* CKS: Exclude etcd nodes when calculating port numbers
* update param name
* update param
* UI: Fix CKS cluster creation templates listing for non admins
* CKS: Prevent etcd node start port number to coincide with k8s cluster start port numbers
* CKS: Set default kubernetes cluster node version to the kubernetes cluster version on upgrade
* CKS: Set default kubernetes cluster node version to the kubernetes cluster version on upgrade
* consolidate query
* Fix upgrade logic
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix CKS cluster version upgrade
* CKS: Fix etcd port numbers being skipped
* Fix CKS cluster with etcd nodes on VPC
* Move schema and upgrade for 4.20
* Fix logger
* Fix after rebasing
* Add support for using different CNI plugins with CKS
* Add support for using different CNI plugins with CKS
* remove unused import
* Add UI support and list cni config API
* necessary UI changes
* add license
* changes to support external cni
* UI changes
* Fix NPE on restarting VPC with additional public IPs
* fix merge conflict
* add asnumber to create k8s svc layer
* support cni framework to use as-numbers
* update code
* condition to ignore undefined jinja template variables
* CKS: Do not pass AS number when network ID is passed
* Fix deletion of Userdata / CNI Configuration in projects
* CKS: Add CNI configuration details to the response and UI
* Explicit events for registering cni configuration
* Add Delete cni configuration API
* Fix CKS deployment when using VPC tiers with custom ACLs
* Fix DNS list on VR
* CKS: Use Network offering of the network passed during CKS cluster creation to get the AS number
* CKS cluster with guest IP
* Fix: Use control node guest IP as join IP for external nodes addition
* Fix DNS resolver issue
* Improve etcd indexing - start from 1
* CKS: Add external node to a CKS cluster deployed with etcd node(s) successfully
* CKS: Add external node to a CKS cluster deployed with etcd node(s) successfully
* simplify logic
* Tweak setup-kube-system script for baremetal external nodes
* Consider cordoned nodes while getting ready nodes
* Fix CKS cluster scale calculations
* Set token TTL to 0 (no expire) for external etcd
* Fix missing quotes
* Fix build
* Revert PR 9133
* Add calico commands for ens35 interface
* Address review comments: plan CKS cluster deployment based on the node type
* Add qemu-guest-agent dependency for KVM-based templates
* Add marvin test for CKS clusters with different offerings per node type
* Remove test tag
* Add marvin test and fix update template for cks and since annotations
* Fix marvin test for adding and removing external nodes
* Fix since version on API params
* Address review comments
* Fix unit test
* Address review comments
* UI: Make CKS public templates visible to non-admins on CKS cluster creation
* Fix linter
* Fix merge error
* Fix positional parameters on the create kubernetes ISO script and make the ETCD version optional
* fix etcd port displayed
* Further improvements to CKS (#118)
* Multiple nics support on Ubuntu template
* Multiple nics support on Ubuntu template
* Support allocating an IP to the NIC when the VM is added to another network - no delay
* Add option to select DNS or VR IP as resolver on VPC creation
* Add API param and UI to select option
* Add a column on vpc and pass the value in the databags for CsDhcp.py to act accordingly
* Externalize the CKS Configuration, so that end users can tweak the configuration before deploying the cluster
* Add new directory to c8 packaging for CKS config
* Remove k8s configuration from resources and make it configurable
* Revert "Remove k8s configuration from resources and make it configurable"
This reverts commit d5997033ebe4ba559e6478a64578b894f8e7d3db.
* copy conf to mgmt server and consume them from there
* Remove node from cluster
* Add missing /opt/bin directory required by external nodes
* Login to a specific Project view
* add indents
* Fix CKS HA clusters
* Fix build
---------
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
* Add missing headers
* Fix linter
* Address more review comments
* Fix unit test
* Fix scaling case for the same offering
* Revert "Login to a specific Project view"
This reverts commit 95e37563f48573780b07a038a7f48c0bc04e9b64.
* Revert "Fix CKS HA clusters" (#120)
This reverts commit 8dac16aa359faa6500ea1e1ce548169cfd08331a.
* Apply suggestions from code review about user data
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Update api/src/main/java/org/apache/cloudstack/api/command/user/userdata/BaseRegisterUserDataCmd.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Refactor column names and schema path
* Fix scaling for non existing previous offering per node type
* Update node offering entry if there was an existing offering but a global service offering has been provided on scale
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Daan Hoogland <daan@onecht.net>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
* Introducing Storage Access Groups to define the host and storage pool connections
In CloudStack, when a primary storage is added at the Zone or Cluster scope, it is by default connected to all hosts within that scope. This default behavior can be refined using storage access groups, which allow operators to control and limit which hosts can access specific storage pools.
Storage access groups can be assigned to hosts, clusters, pods, zones, and primary storage pools. When a storage access group is set on a cluster/pod/zone, all hosts within that scope inherit the group. Connectivity between a host and a storage pool is then governed by whether they share the same storage access group.
A storage pool with a storage access group will connect only to hosts that have the same storage access group. A storage pool without a storage access group will connect to all hosts, including those with or without a storage access group.
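A minimal sketch of that connectivity rule, with illustrative names only (not the actual CloudStack implementation):

```java
import java.util.Collections;
import java.util.Set;

// Illustrative check: a pool with no storage access group connects to every host;
// otherwise the host and pool must share at least one storage access group.
class StorageAccessGroupSketch {
    static boolean canHostAccessPool(Set<String> hostGroups, Set<String> poolGroups) {
        if (poolGroups == null || poolGroups.isEmpty()) {
            return true; // ungrouped pool connects to all hosts, grouped or not
        }
        return hostGroups != null && !Collections.disjoint(hostGroups, poolGroups);
    }
}
```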
* Reset the pool id when create volume fails on the allocated pool
- The pool id is persisted while the volume is being created, and it is not reverted when creation fails. On the next create-volume attempt, CloudStack could not find any suitable primary storage, even when pools with enough capacity were available, because the pool is already assigned to the volume in Allocated state (and the storage pool compatibility check fails). Ensure the volume is not assigned to any pool if volume creation fails, so the next creation job can pick a suitable pool.
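A hedged sketch of that cleanup, assuming CloudStack's VolumeVO and VolumeDao types (the actual fix's field and method names may differ):

```java
import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.VolumeDao;

// Illustrative only: on create-volume failure, clear the persisted pool assignment so
// the next allocation attempt is free to pick any suitable primary storage pool.
class VolumePoolResetSketch {
    private final VolumeDao volumeDao;

    VolumePoolResetSketch(VolumeDao volumeDao) {
        this.volumeDao = volumeDao;
    }

    void onCreateVolumeFailure(long volumeId) {
        VolumeVO volume = volumeDao.findById(volumeId);
        if (volume != null && volume.getPoolId() != null) {
            volume.setPoolId(null);                    // forget the failed pool
            volumeDao.update(volume.getId(), volume);  // persist the reset
        }
    }
}
```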
* endpoint check for resize
* update the resize error through callback result instead of exception
* Add & Remove PowerFlex/ScaleIO MDMs while preparing & unpreparing the storage SDC connections (instead of start & stop scini)
* Add/Remove MDM IP addresses during Host connection/disconnection to/from storage pool when powerflex.connect.on.demand is false
* unit test fixes
* Don't remove MDM IPs from the SDC when any volumes are mapped to the SDC
* Don't remove MDM IPs when other pools of the same ScaleIO/PowerFlex cluster are connected
* rebase fixes
* update changes to not remove/disconnect MDMs during maintenance
* import fixes after rebase
* Consider the clusters with allocation state 'Enabled' for EndPoint selection (in addition to Host state)
* Reset the pool id when create volume fails on the allocated pool
- The pool id is persisted while the volume is being created, and it is not reverted when creation fails. On the next create-volume attempt, CloudStack could not find any suitable primary storage, even when pools with enough capacity were available, because the pool is already assigned to the volume in Allocated state (and the storage pool compatibility check fails). Ensure the volume is not assigned to any pool if volume creation fails, so the next creation job can pick a suitable pool.
* endpoint check for resize
* update the resize error through callback result instead of exception
* logger fix
* KVM incremental snapshot feature
* fix log
* fix merge issues
* fix creation of folder
* fix snapshot update
* Check for hypervisor type during parent search
* fix some small bugs
* fix tests
* Address reviews
* do not remove StorPool snapshots
* add support for downloading diff snaps
* Add multiple zones support
* make copied snapshots have normal names
* address reviews
* Fix in progress
* continue fix
* Fix bulk delete
* change log to trace
* Start fix on multiple secondary storages for a single zone
* Fix multiple secondary storages for a single zone
* Fix tests
* fix log
* remove bitmaps when deleting snapshots
* minor fixes
* update sql to new file
* Fix merge issues
* Create new snap chain when changing configuration
* add verification
* Fix snapshot operation selector
* fix bitmap removal
* fix chain on different storages
* address reviews
* fix small issue
* fix test
---------
Co-authored-by: João Jandre <joao@scclouds.com.br>