17 Commits

Author SHA1 Message Date
Suresh Kumar Anaparti
b4ad04badf
Allow config drive deletion of migrated VM, on host maintenance (#10045) 2024-12-18 09:12:28 +01:00
kishankavala
ab20b1220f
KVM Ingestion - Import Instance (#7976)
This PR adds new functionality to import KVM instances from an external host or from disk images in local or shared storage.
Doc PR: https://github.com/apache/cloudstack-documentation/pull/356
2023-12-14 13:08:56 +05:30
Nicolas Vazquez
371ad9f55b
New Feature: Import VMware VMs into KVM (#7881)
This PR adds the capability in CloudStack to convert VMware instance disk(s) to KVM using virt-v2v and import them as CloudStack instances. It enables CloudStack operators to import VMware instances from vSphere into a KVM cluster managed by CloudStack. The vSphere/VMware setup may be managed by CloudStack or be a standalone setup.

    CloudStack lets the administrator select a VM from an existing VMware vCenter in the CloudStack environment, or from an external vCenter by providing the vCenter IP, datacenter name and credentials.
    The migrated VM will be imported as a KVM instance.
    The migration is done through virt-v2v: https://access.redhat.com/articles/1351473, https://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration.html
    The migration process timeout can be set by the setting convert.instance.process.timeout
    Before attempting the virt-v2v migration, CloudStack will create a clone of the source VM on VMware. The clone VM will be removed after the registration process finishes.
    CloudStack will delegate the migration action to a KVM host, and the host will attempt to migrate the VM by invoking virt-v2v. If the guest OS is not supported, CloudStack will handle the operation as a failure.
    The migration process using virt-v2v may not be fast.
    CloudStack will not perform any checks on guest OS compatibility for the virt-v2v library, as indicated at: https://access.redhat.com/articles/1351473.
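As a rough illustration of the conversion step, here is a minimal sketch of how a KVM host could invoke virt-v2v for such a conversion, assuming a hypothetical vCenter URL, clone name and output path (this is not the actual CloudStack agent code); the timeout mirrors the intent of convert.instance.process.timeout:

```java
import java.util.concurrent.TimeUnit;

public class VirtV2vConvertSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: vCenter URL, datacenter/cluster/host path, clone name and output directory
        ProcessBuilder pb = new ProcessBuilder(
                "virt-v2v",
                "-ic", "vpx://administrator@vcenter.example.com/DC1/Cluster1/esxi1?no_verify=1",
                "source-vm-clone",                  // clone created by CloudStack before conversion
                "-o", "local",                      // write converted disks locally on the KVM host
                "-os", "/var/lib/libvirt/images",   // output storage path
                "-of", "qcow2");                    // output disk format
        pb.inheritIO();
        Process process = pb.start();
        // Enforce a conversion timeout, analogous to the convert.instance.process.timeout setting
        if (!process.waitFor(3, TimeUnit.HOURS)) {
            process.destroyForcibly();
            throw new IllegalStateException("virt-v2v conversion timed out");
        }
    }
}
```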
2023-12-07 12:59:56 +05:30
Vishesh
ea90848429
Feature: Add support for DRS in a Cluster (#7723)
This pull request (PR) implements a Distributed Resource Scheduler (DRS) for a CloudStack cluster. The primary objective of this feature is to enable automatic resource optimization and workload balancing within the cluster by live migrating the VMs as per configuration.
Administrators can also execute DRS manually for a cluster, using the UI or the API.
Adds support for two algorithms - condensed & balanced. Algorithms are pluggable, allowing ACS administrators customized control over scheduling.

Implementation
There are three top level components:

    Scheduler
    A timer task which:

    Generates DRS plans for clusters
    Processes DRS plans
    Removes old DRS plan records

    DRS Execution
    We go through each VM in the cluster and use the specified algorithm to check if DRS is required and to calculate the cost, benefit & improvement of migrating that VM to another host in the cluster. On the basis of cost, benefit & improvement, the best migration is selected for the current iteration and the VM is migrated. The maximum number of iterations (live migrations) possible on the cluster is set by drs.iterations, expressed as a percentage (a value between 0 and 1) of the total number of workloads.

    Algorithm
    Every algorithm implements two methods:
        needsDrs - to check whether DRS is required for the cluster
        getMetrics - to calculate the cost, benefit & improvement of migrating a VM to another host.
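A minimal sketch of that pluggable contract, assuming simplified method signatures (the actual CloudStack interface may differ):

```java
// Minimal sketch of a pluggable DRS algorithm, with assumed signatures.
public interface ClusterDrsAlgorithmSketch {

    /**
     * Check whether the cluster's imbalance (per the configured metric and
     * imbalance threshold) warrants generating a DRS plan.
     */
    boolean needsDrs(long clusterId);

    /**
     * Calculate cost, benefit and improvement of migrating the given VM to the
     * given host. The executor picks the best candidate per iteration, up to
     * the configured maximum number of live migrations.
     */
    double[] getMetrics(long clusterId, long vmId, long destinationHostId);
}
```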

Algorithms

    Condensed - Packs all the VMs onto the minimum number of hosts in the cluster.
    Balanced - Distributes the VMs evenly across hosts in the cluster.
    Algorithms use the imbalance threshold setting (drs.imbalance) to decide the amount of imbalance to allow in the cluster.

APIs Added

listClusterDrsPlan

    id - ID of the DRS plan to list
    clusterid - to list plans for a cluster id

generateClusterDrsPlan

    id - cluster id
    iterations - The maximum number of iterations in a DRS job, defined as a percentage (a value between 0 and 1) of the total number of workloads. Defaults to the value of the cluster's drs.iterations setting.

executeClusterDrsPlan

    id - ID of the cluster for which DRS plan is to be executed.
    migrateto - This parameter specifies the mapping between a vm and a host to migrate that VM. Format of this parameter: migrateto[vm-index].vm=<uuid>&migrateto[vm-index].host=<uuid>.
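As an illustration of the migrateto format, a small sketch assembling the executeClusterDrsPlan request parameters (UUIDs are placeholders; API signing and session handling are omitted):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ExecuteDrsPlanParamsSketch {
    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("command", "executeClusterDrsPlan");
        params.put("id", "CLUSTER-UUID");                  // cluster to execute the plan on
        // One vm/host pair per index; the index ties a VM to its destination host
        params.put("migrateto[0].vm", "FIRST-VM-UUID");
        params.put("migrateto[0].host", "DEST-HOST-UUID-1");
        params.put("migrateto[1].vm", "SECOND-VM-UUID");
        params.put("migrateto[1].host", "DEST-HOST-UUID-2");
        params.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```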

Config Keys Added

| Name | Key | Scope | Default Value | Description |
| --- | --- | --- | --- | --- |
| ClusterDrsPlanExpireInterval | drs.plan.expire.interval | Global | 30 days | The interval in days after which old DRS records will be cleaned up. |
| ClusterDrsEnabled | drs.automatic.enable | Cluster | false | Enable/disable automatic DRS on a cluster. |
| ClusterDrsInterval | drs.automatic.interval | Cluster | 60 minutes | The interval in minutes after which a periodic background thread schedules DRS for a cluster. |
| ClusterDrsIterations | drs.max.migrations | Cluster | 50 | Maximum number of live migrations in a DRS execution. |
| ClusterDrsAlgorithm | drs.algorithm | Cluster | condensed | DRS algorithm to execute on the cluster. This PR implements two algorithms - balanced & condensed. |
| ClusterDrsLevel | drs.imbalance | Cluster | 0.5 | Percentage (a value between 0.0 and 1.0) of imbalance allowed in the cluster. 1.0 means no imbalance is allowed and 0.0 means full imbalance is allowed. |
| ClusterDrsMetric | drs.imbalance.metric | Cluster | memory | The cluster imbalance metric to use when checking the imbalance threshold (drs.imbalance). Possible values are memory and cpu. |
2023-10-26 11:48:18 +05:30
Wei Zhou
9d46df57f2
kvm: add vm setting for nic multiqueue number and packed virtqueues (#7333)
This PR adds two VM settings for user VMs on KVM

- nic multiqueue number
- packed virtqueues enabled: possible values are true and false (false by default). It requires qemu >= 4.2.0 and libvirt >= 6.3.0

Tested OK on Ubuntu 22 and Rocky 8.4
2023-05-09 15:19:26 +05:30
slavkap
d288bb0c78
KVM support of iothreads and IO driver policy (#6909) 2023-01-25 12:34:05 +01:00
David Jumani
85c59979f7
Multiple SSH Keys support (#5965)
* keypairs added in api-constants

* names parameter added

* findbynames method added in dao

* change in impl to find and reset multiple keys

* findbynames method implemented

* log the public keys, check whether the given ssh keys exist

* new ArrayList<>

* SQL IN toArray

* keypair

* null pointer exception solved with + concatenation

* null pointer exception solved with + concatenation

* error resolved

* keypair name to names in uservmresponse

* keypair name is set in the uservmresponse, from the details

* null checks are removed, keypairnames are stored in a string, sent to the resetvmsshinternal, and added in details

* commit first eval

* deploy vm takes multiple ssh-keys

* Deploy VM UI changed to accept multiple ssh keys

* Reset SSH UI API changed

* ResetSSH.vue

* ssh keys joined, ssh added in infocard

* changes made

* schema error resolved

* potential null pointer exception removed

* Update UserVmManagerImpl.java

unnecessary check removed.

* Update DeployVMCmd.java

* Update DeployVMCmd.java

* Update ResetVMSSHKeyCmd.java

* Update UserVmJoinDaoImpl.java

* .

* arraylist

* Update DeployVMCmd.java

* Update UserVmManagerImpl.java

* Update ResetVMSSHKeyCmd.java

* Update db

* Fix list vm by keypair

* ui fixes

* Fix typos

* ui fixes

* Cleanup

* Adding deprecated and since in api params

* Adding upgrade for existing vms with ssh keys

* Handle no key for cks

* Show existing keypairs in reset ssh key form

* get keys from the right account

Co-authored-by: bicrxm <bickrombishsass@gmail.com>
2022-03-01 21:30:55 -03:00
Pearl Dsilva
e0a5df50ce
CKS Enhancements and SystemVM template upgrade improvements (#5863)
* This PR/commit comprises the following:
- Support to fall back to the older systemVM template in case of no change in template across ACS versions
- Update core user to cloud in CKS
- Display details of accessing CKS nodes in the UI - K8s Access tab
- Update systemvm template from debian 11 to debian 11.2
- Update letsencrypt cert
- Remove docker dependency, as from ACS 4.16 onward k8s has deprecated docker support - use containerd as the container runtime

* support for private registry - containerd

* Enable updating template type (only) for system owned templates via UI

* edit indents

* Address comments and move cmd from patch file to cloud-init runcmd

* temporary change

* update k8s test to use k8s version 1.21.5 (instead of 1.21.3 - due to https://github.com/kubernetes/kubernetes/pull/104530)

* support for private registry - containerd

* Enable updating template type (only) for system owned templates via UI

* smooth upgrade of cks clusters

* update pom file with temp download.cloudstack.org testing links

* fix pom

* add cgroup config for containerd

* add systemd config for kubelet

* add additional info during image registry config

* update to official links
2022-02-15 18:27:14 +05:30
Leo (Hsueh Yu-Min)
72a1c0e7f1
[KVM] Add VM Settings for virtual GPU hardware type and memory (#5513)
* KVM: Add VM Settings for virtual GPU hardware type and memory

* fix method createVideoDef argument in test package

* add available options for KVM virtual GPU hardware VM setting

* fix videoRam default value

* fix: if _videoRam is 0, use the default provided by libvirt
2021-10-04 09:55:32 +05:30
DK101010
9163013683
Feat/ram reservation (#4662)
* remove hot enable of cpu and memory in case of reservation

ram and cpu reservation have no relation to ram and cpu hot add

* add custom ram_reservation and add it to vm details

* system vms don't have this property; for this reason add an additional check

* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

* replace 0.0 with NumberUtils

* remove default value and remove return of MinRam (seems to be unnecessary)

* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/guru/VmwareVmImplementer.java

Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>

* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java

Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>

Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
2021-08-24 14:15:52 -03:00
Abhishek Kumar
d763169b1c
Restore VMware VM naming convention option (#4581)
* initial changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* make check explicit for instance name flag

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* allow hiding vm details (in ui)

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* condition based on name instead of displayname

Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
2021-03-29 16:13:14 +05:30
sureshanaparti
eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist and cannot be deleted from the datastore

- Check whether the router filesystem is writable before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
	=> Added new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable

- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)

* Addressed some issues for few operations on PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
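A hedged sketch of that renewal behaviour, with made-up class and method names (illustrative only, not the actual plugin code):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ScaleIOClientPoolSketch {
    static class GatewayClient {
        Instant issuedAt = Instant.now();
        Instant lastUsed = Instant.now();

        // Session is stale after 8 hours, or after 10 minutes of inactivity (per the gateway docs)
        boolean expired() {
            return Duration.between(issuedAt, Instant.now()).toHours() >= 8
                || Duration.between(lastUsed, Instant.now()).toMinutes() >= 10;
        }
    }

    private final Map<Long, GatewayClient> clientsByPoolId = new ConcurrentHashMap<>();

    GatewayClient getClient(long poolId) {
        // One gateway client per PowerFlex/ScaleIO storage pool
        GatewayClient client = clientsByPoolId.computeIfAbsent(poolId, id -> authenticate());
        if (client.expired()) {               // renew rather than reuse a stale session
            client = authenticate();
            clientsByPoolId.put(poolId, client);
        }
        client.lastUsed = Instant.now();
        return client;
    }

    private GatewayClient authenticate() {    // placeholder for the real gateway login call
        return new GatewayClient();
    }
}
```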

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL is generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and Minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter “supportsStorageSnapshot” (true/false) to volume response, and Updated UI to hide the async backup option while taking snapshot for volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
nvazquez
d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
pavanaravapalli
d4b537efa7
UEFI Implementation: Enabled UEFI support for guest VMs on the KVM and VMware hypervisors; enabled boot modes [Legacy, Secure] for UEFI boot, with known caveats. (#3638)
Co-authored-by: Pavan Kumar Aravapalli <pavan_aravapalli@accelerite.com>
Co-authored-by: dahn <daan.hoogland@shapeblue.com>
2020-03-13 20:56:26 +01:00
Abhishek Kumar
0f5b0e67f8
VM ingestion (#3606)
The VM ingestion feature allows CloudStack to discover, on-board and import existing VMs in an infrastructure. The feature currently works only for VMware, with a hypervisor-agnostic framework which may be extended to KVM and XenServer in the future.
2020-02-03 15:43:52 +01:00
Rohit Yadav
9f4f2c5348
api: instance and template details are free text (#3240)
Problem: Users don't know what keys/values to enter for template and VM details.
Root Cause: No feature exists that can list the possible details and options.
Solution: Based on the possible VM and template details handled by the
codebase, those details were refactored and a list API was introduced
that returns those details along with possible values. When
users add details now, they are presented with a list of detail keys
and their possible options, if any.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-27 09:14:47 +05:30
Marc-Aurèle Brothier
893a88d225 CLOUDSTACK-10105: Use maven standard project structure in all projects (#2283)
Remove maven standard module (which only a few were using) and get rid of maven customization for the project structure.

- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml

Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
2018-01-20 03:19:27 +05:30