542 Commits

Abhishek Kumar
af58284560
server,config: respect storage.max.volume.size and make it dynamic (#5857)
* server,config: respect storage.max.volume.size and make it dynamic

Fixes #5830

* fix test

* size change

* fix check

* server: do not include ISO size while checking volume sizes

* revert size check

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
2022-02-08 13:29:35 +05:30
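Because this change makes storage.max.volume.size dynamic, it can now be updated at runtime through the updateConfiguration API instead of requiring a management-server restart. A minimal sketch using CloudStack's standard HMAC-SHA1 request signing; the endpoint, keys, and the example value are placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

import requests

API_URL = "http://mgmt.example.com:8080/client/api"  # hypothetical endpoint
API_KEY = "<api-key>"
SECRET_KEY = "<secret-key>"

def cs_api(command, **params):
    """Send a signed CloudStack API request (standard HMAC-SHA1 signing)."""
    params.update({"command": command, "apiKey": API_KEY, "response": "json"})
    # Sort parameters case-insensitively, URL-encode the values,
    # lowercase the whole query string, then sign it with the secret key.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items(), key=lambda kv: kv[0].lower())
    )
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
    params["signature"] = base64.b64encode(digest).decode()
    return requests.get(API_URL, params=params).json()

# Raise the cap (value is in GiB) without restarting the management server.
print(cs_api("updateConfiguration", name="storage.max.volume.size", value="4000"))
```

Later sketches in this log assume this helper is saved locally as cs_helper.py.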
Harikrishna
f15cab16da
server: Decouple service (compute) offering and disk offering (#5008)
Currently, our compute offerings and disk offerings are tightly coupled in many respects. For example, when a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Creating a compute offering also takes a few disk-related parameters, which in any case go into the corresponding disk offering only. I think this design was initially made to address the compute offering for the root volume created from a template. Changing the offering of a volume is likewise tightly coupled with storage tags and has to be done through different APIs, either migrateVolume or resizeVolume. Changing a disk offering should be seamless: it should consider the new storage tags and new size, and place the volume in the appropriate state as defined by the disk offering.

More details are mentioned here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring

* Schema changes and disk offering column change from "type" to "compute_only"

* Few more changes

* Decoupled service offering and disk offering

* Remove diskofferingid from vminstance VO

* Decouple service offering and disk offering states

* diskoffering getsize() is only for strict disk offerings

* Fix deployVM flow

* Added new API params to compute offering creation

* Add diskofferingstrictness to serviceoffering vo under quota

* Added overrideDiskOfferingId parameter in the deployVirtualMachine API, which overrides the disk offering for the root disk in both the template and ISO cases

Added diskSizeStrictness parameter in the createDiskOffering API, which decides whether to restrict resize or disk offering change of a volume

* Fix User vm response to show proper service offering and disk offerings

* Added disk size strictness in disk offering response

* Added disk offering strictness to the service offering response

* Remove comments

* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form

* Added diskoffering details to the service offering response

* Added UI changes in deployvm wizard to accept override disk offering id

* Fix delete compute offering

* Fix VM deployment from custom service offering

* Move uselocalstorage column access from service offering to disk offering

* UI: Separated compute and disk related parameters in the add compute offering wizard; also added association to disk offering

* Fixed diskoffering automatic selection on add compute offering wizard

* UI: move compute only toggle button outside the box in add compute offering wizard

* Added volumeId parameter to the listDiskOfferings API; the disksizestrictness flag of the current disk offering is honored while listing disk offerings

* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration

* Added disk offering change checks during resize volume operation

* Added new API changeofferingforVolume API and corresponding changes

* Add UI form for changeOfferingForVolume API

* Fix UI conflicts

* Fix service offering usage as disk offering

* Fix unit test failures

* fix user_vm_view

* Addressed review comments

* Fixed service_offering_view

* Fix service offering edit flow

* Fix service offering constructor to address custom offering

* Fix domain_router_view to get proper service offering id

* Removed unused import

* Addressed review comments and fixed update service offering flow with storage tags

* Added marvin test cases for checking disk offering strictness

* review comments addressed

* Remove system_use column from disk offering join

* update volume_view to update system_use column from service offering and not disk offering

* Fix changeOfferingForVolume API for custom disk offering

* Fix global setting implementation

* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view

* Changes for override root disk offering in deployvm wizard in case of custom offering

* Fix a unit test case

* Fixed recent unit test cases with new serviceofferingvo constructor

* Fix unit test in VolumeApiServiceImpl

* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow

* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering

* Fix smoke test failures

* Added tool tip for migrate volume UI form

* Address review comments and fix UI form of deploy VM in case of ISO.

* Fixed resize volume UI form for data disk

* UI changes to disable override root disk size when override root disk offering is enabled

* UI fix in deploy vm wizard

* Fix listdiskoffering after rebasing with main

* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list

* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form

* Fix false response on updateDiskOffering API

* Added search field for changeofferingforvolume UI form

* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Removed DB changes from 4.16 upgrade file

* Resolving merge conflicts with main 4.17

* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.

* UI: Added automigrate checkbox in scale VM form

* Added since attributes to new API params

* Added shrinkOK parameter to changeofferingforvolume API

* Added shrinkOk param to UI in changeOfferingforVolume form

* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form

* Removed old foreign key constraint on IDs of service offering and disk offering

* Allow resize and automigrate of root volume if required in all cases of service offering change

* Allow only resize to higher disk size from UI

* Fixing vue syntax error

* Make UI changes to provide a root disk size box when the linked disk offering is custom

* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms

* Fix resize volume operation to update the VM settings

* Fix migratevolume form to pick selected storage pool id in list diskofferings API
2022-01-27 15:08:42 +05:30
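A sketch of the two API surfaces introduced above, reusing the cs_api helper from the first sketch in this log; the lowercase parameter spellings and UUID values are assumptions based on the commit messages:

```python
from cs_helper import cs_api  # signing helper sketched earlier in this log

# Change a volume's disk offering in one call (the new API added above).
cs_api("changeOfferingForVolume",
       id="<volume-uuid>",
       diskofferingid="<new-disk-offering-uuid>",
       shrinkok="false",    # disallow shrinking, per the shrinkOk parameter
       automigrate="true")  # let CloudStack move the volume if tags/size demand it

# Deploy a VM whose ROOT disk uses an offering other than the one linked to
# the compute offering, via the new overrideDiskOfferingId parameter.
cs_api("deployVirtualMachine",
       serviceofferingid="<compute-offering-uuid>",
       templateid="<template-uuid>",
       zoneid="<zone-uuid>",
       overridediskofferingid="<disk-offering-uuid>")
```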
Suresh Kumar Anaparti
5c02f6d507
Merge branch '4.16' into main 2022-01-06 17:47:37 +05:30
dahn
2774bc156f
use physical size instead of virtual size for migration. (#5750)
* Use Physical size to evaluate if migration is possible

* Improve logging and consider skipped files as failures in complete migration

* skipped can't be negative

* remove useless method

* group multidisk templates for secstor migration

* use enum

* Update engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/DataMigrationUtility.java

Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl d'Silva <pearl.dsilva@shapeblue.com>
2022-01-06 17:18:50 +05:30
nicolas
3f79436840
Updating pom.xml version numbers for release 4.17.0.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:55:52 -03:00
nicolas
93c3c3b9ac
Updating pom.xml version numbers for release 4.16.1.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:50:22 -03:00
nicolas
44c08b5acc
Updating pom.xml version numbers for release 4.16.0.0
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-04 14:14:57 -03:00
davidjumani
6ac834a358
Adding AutoScaling for cks + CKS CoreOS EOL update + systemvmtemplate improvements (#4329)
Adding AutoScaling support for CKS
Kubernetes PR: kubernetes/autoscaler#3629
Also replaces CoreOS with Debian
Fixes #4198

Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Co-authored-by: Wei Zhou <w.zhou@global.leaseweb.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-10-06 21:17:41 +05:30
Abhishek Kumar
6e216dd0d1
vmware, network: add maclearning option (#5471)
* vmware, network: add maclearning option

Adds an option for specifying the MAC learning property for a network offering (useful for VMware distributed virtual portgroups). Added global config network.mac.learning for the default value.
MAC learning is supported for DV portgroups on VMware Distributed vSwitch v6.6.0+ and vSphere 6.7+.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix warning msg

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-10-04 20:00:45 -03:00
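A sketch of setting the new global default, assuming the cs_api helper from the first sketch; per the commit, the value only takes effect on VMware DV portgroups:

```python
from cs_helper import cs_api  # signing helper sketched earlier in this log

# Default MAC-learning behaviour for new network offerings; only effective on
# VMware DV portgroups (Distributed vSwitch v6.6.0+, vSphere 6.7+).
cs_api("updateConfiguration", name="network.mac.learning", value="true")
```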
Abhishek Kumar
4a42e7ef9e
vmware, ui: update portgroup on network update (#5470)
Enhanced update network form in the UI.
On network offering change for an isolated network,

- VMware portgroup should be updated accordingly.
- VMs on the network should be placed on the correct VMware portgroup based on the network rate, https://docs.cloudstack.apache.org/en/latest/adminguide/service_offerings.html#network-throttling.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-10-03 14:41:42 +05:30
Pearl Dsilva
74bb80687d
resource limit: Fix resource limit check on VM start (#5428)
* resource limit: Fix resource limit check on VM start

* add check to validate if cpu/memory are within limits for custom offering + exception handling

* unit tests

Co-authored-by: utchoang <hoangnm@unitech.vn>
2021-09-24 09:51:16 +05:30
Daniel Augusto Veronezi Salvador
8ffba83214
Keep volume policies after migrating it to another primary storage (#5067)
* Add commons-lang3 to Utils

* Create a util to provide methods that ReflectionToStringBuilder does not have yet

* Create method to retrieve map of tags from resource

* Enable tests on volume components and remove useless tests

* Refactor VolumeObject and add unit tests

* Extract createPolicy in several methods

* Create method to copy policies between volumes and add unit tests

* Copy policies to new volume before removing old volume on volume migration

* Extract "destroySourceVolumeAfterMigration" to a method and test it

* Remove javadoc @param with no sensible information

* Rename method name to a generic name

Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-09-08 09:13:41 -03:00
Wei Zhou
a755ecfce8
Migrate vm across clusters (#4534)
* server: Optional destination host when migrating a VM

* #4378: migrate systemvms/routers with optional host

* Migrate vms across clusters

After enabling maintenance mode on a host, if no suitable hosts
are found in the same cluster, then search for hosts in
different clusters having the same hypervisor type.

Set the global setting migrate.vm.across.clusters to true to enable this.

* search all clusters in the zone when migrating a VM across clusters, if applicable

* Honor migrate.vm.across.clusters when migrating a VM without a destination

* Check MIGRATE_VM_ACROSS_CLUSTERS in zone setting

* #4534 Fix VMs being migrated to the same clusters in CloudStack due to dedicated resources.

* #4534 extract some codes to methods

* fix #4534: an error in 'git merge'

* fix #4534: remove useless methods in FirstFitPlanner.java

* fix #4534: vms are stopped in host maintenance

* fix #4534: across-cluster migration of vms with cluster-scoped pools is supported by vmware vmotion

* fix #4534: migrating system VMs is only possible across clusters in the same pod, to avoid potential network errors.

* fix #4534: code optimization

Co-authored-by: Rakesh Venkatesh <r.venkatesh@global.leaseweb.com>
Co-authored-by: Sina Kashipazha <s.kashipazha@global.leaseweb.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Sina Kashipazha <soreana@users.noreply.github.com>
2021-09-07 21:50:29 -03:00
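A sketch of enabling the new setting, reusing the cs_api helper sketched earlier; the commits above also check the setting at zone scope, so a zone-level override is shown too:

```python
from cs_helper import cs_api  # signing helper sketched earlier in this log

# Allow maintenance-triggered migrations to leave the cluster, globally...
cs_api("updateConfiguration", name="migrate.vm.across.clusters", value="true")

# ...or for a single zone, since the commits above also honor the zone scope.
cs_api("updateConfiguration", name="migrate.vm.across.clusters",
       value="true", zoneid="<zone-uuid>")
```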
Daniel Augusto Veronezi Salvador
8a16729fcf
Support vm dynamic scaling with kvm (#4878)
* Create utility to centralize byte conversions

* Add/change toString definitions

* Create Libvirt handler to ScaleVmCommand

* Enable dynamic scaling of VMs with KVM

* Move config from interface to class and rename it

As every variable declared in an interface is already final,
this move is needed to mock tests in the next commits

* Configure VM max memory and cpu cores

The values are according to service offering or global configs

* Extract dpdk configuration to a method and test it

* Extract OS desc config to a method and test it

* Extract guest resource def to a method and test it

Improve libvirt def

* Refactor LibvirtVMDef.GuestResourceDef

* Refactor ScaleVmCommand

* Improve VMInstanceVO toString()

* Refactor upgradeRunningVirtualMachine method

* Turn int variables into long on utility

* Verify if VM is scalable on KVMGuru

* Rename some KVMGuruTest's methods

* Change vm's xml to work with max memory

* Verify if service offering is dynamic before scale

* Create methods to retrieve data from domain

* Create def to hotplug memory

* Adjust the way command was scaling the VM

* Fix database persistence before executing command

* Send more info to host to improve log

* Fix var name

* Fix missing "}"

* Undo unnecessary changes

* Address review

* Fix scale validation

* Add VM prepared for dynamic scaling validation

* Refactor LibvirtScaleVmCommandWrapper and improve unit tests

* Remove duplicated method

* Add RuntimeException check

* Remove copyright from header

* Remove copyright from header

* Remove copyright from header

* Remove copyright from header

* Remove copyright from header

* Update ByteScaleUtilsTest.java

Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-08-21 09:29:02 +02:00
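With the Libvirt ScaleVmCommand handler in place, a running KVM instance can be scaled like on other hypervisors, e.g. via scaleVirtualMachine. A sketch reusing the earlier cs_api helper; UUIDs are placeholders:

```python
from cs_helper import cs_api  # signing helper sketched earlier in this log

# Scale a running KVM instance to a bigger offering; the new Libvirt handler
# adjusts CPU/memory up to the max limits written into the domain XML.
cs_api("scaleVirtualMachine",
       id="<vm-uuid>",
       serviceofferingid="<bigger-offering-uuid>")
```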
Spaceman1984
96c9c5a5e2
Added disk provisioning type support for VMWare (#4640)
* Added disk provisioning type support for VMWare

* Review changes

* Fixed unit test

* Review changes

* Added missing licenses

* Review changes

* Update StoragePoolInfo.java

Removed white space

* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id

* Delete __init__.py

* Merge fix

* Fixed failing test

* Added comment about parameters

* Added error log when update fails

* Added exception when using API

* Ordering storage pool selection to prefer thick disk capable pools if available

* Removed unused parameter

* Reordering changes

* Returning storage pool details after update

* Removed multiple pool update, updated marvin test, removed duplicate enum

* Removed comment

* Removed unused import

* Removed for loop

* Added missing return statements for failed checks

* Class name change

* Null pointer

* Added more info when a deployment fails

* Null pointer

* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

* Small bug fix on API response and added missing bracket

* Removed datastore cluster code

* Removed unused imports, added missing signature

* Removed duplicate config key

* Revert "Added more info when a deployment fails"

This reverts commit 2486db78dca8e034d8ad2386174dfb11004ce654.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2021-07-16 22:37:42 -03:00
Rohit Yadav
1abd10199c Merge remote-tracking branch 'origin/4.15' 2021-05-04 19:37:45 +05:30
Gabriel Beims Bräscher
ab790c11d5
server: Allow to upgrade service offerings from local <> shared storage pools (#4915)
This PR addresses the issue raised at #4545 (Fail to change Service offering from local <> shared storage).

When upgrading a VM's service offering, it is validated that the new offering has the same storage scope (local or shared) as the current offering. I think the validation makes sense as a way of preventing ROOT disks from running with an offering that does not match the current storage pool. However, the validation only compares the two offerings and does not consider that it is possible to migrate volumes between local <> shared storage pools.

The idea behind this implementation is that CloudStack should check the scope of the storage pool in which the ROOT volume is currently allocated; with this, it is possible to migrate the volume between storage pools and list/upgrade according to the offerings that are supported for such a pool.

This PR also fixes an issue where the API command that lists offerings for a VM should follow the same idea and list based on the storage pool in which the volume is allocated, not the previous offering.

Fixes: #4545
2021-04-30 11:59:50 +05:30
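A sketch of the corrected listing behavior described above, assuming the cs_api helper from earlier and that the listing command is listServiceOfferings with its virtualmachineid filter:

```python
from cs_helper import cs_api  # signing helper sketched earlier in this log

# Offerings for an existing VM are now filtered by the scope (local vs.
# shared) of the pool holding its ROOT volume, not by the current offering.
offerings = cs_api("listServiceOfferings", virtualmachineid="<vm-uuid>")
```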
Abhishek Kumar
42c83b08f5 Merge remote-tracking branch 'apache/4.15' 2021-04-26 14:33:58 +05:30
Abhishek Kumar
a30d518e8a
vmware: fix stopped VM volume migration (#4758)
* prevent other vm disks getting deleted

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* vmware: fix inter-cluster stopped vm migration

Fixes #4838

For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix detached volume inter-cluster migration

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* cleanup unused method

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* review changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* vmware: allow attached volume migration using VmwareStorageMotionStrategy

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* find vm clusterid with multiple ROOT volumes

A VM can have multiple ROOT volumes, and some can be on a zone-wide store; therefore, iterate over all of them until a cluster ID is found.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix successive storage migration

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix intercluster check

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor vm cluster, host method

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* remove inter-pod check

Added by mistake, VMware won't have pods

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* address review comment

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-04-24 18:55:25 +05:30
Rohit Yadav
77290df0d5 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-26 12:09:11 +05:30
Abhishek Kumar
88337bdea4
server: fix finding pools for volume migration (#4693)
While finding pools for volume migration, list the following compatible storages:
- all zone-wide storages of the same hypervisor;
- when the volume is attached to a VM, all storages from the same cluster as that of the VM;
- for a detached volume, all storages that belong to clusters of the same hypervisor.

Fixes #4692 
Fixes #4400
2021-02-25 22:13:50 +05:30
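A sketch of querying the corrected candidate list, assuming the cs_api helper sketched earlier; findStoragePoolsForMigration takes the volume's UUID as id:

```python
from cs_helper import cs_api  # signing helper sketched earlier in this log

# Candidate pools now include zone-wide pools of the same hypervisor, plus
# cluster pools as described above; 'id' is the volume's UUID.
pools = cs_api("findStoragePoolsForMigration", id="<volume-uuid>")
```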
sureshanaparti
eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when the VM is synced to Stopped state on PowerReportMissing (after the grace period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist and cannot be deleted from the datastore

- Check the router filesystem is writable or not, before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check the filesystem is writable or not
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
	=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not

- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)

* Addressed some issues for few operations on PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and Minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot for volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
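A sketch of the new zone-scoped config-drive settings, reusing the cs_api helper sketched earlier; the zone UUID and values are placeholders, and the agent-side path is quoted from the commit message:

```python
from cs_helper import cs_api  # signing helper sketched earlier in this log

# The config-drive settings introduced above are zone-scoped:
cs_api("updateConfiguration", zoneid="<zone-uuid>",
       name="vm.configdrive.primarypool.enabled", value="true")
cs_api("updateConfiguration", zoneid="<zone-uuid>",
       name="vm.configdrive.force.host.cache.use", value="false")

# On each KVM host the cache path comes from agent.properties, e.g.:
#   host.cache.location=/var/cache/cloud
```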
Pearl Dsilva
aa01580381
network: Specify IP for VR in shared networks (#4503)
This PR enables admins to specify an IP for a VR in a shared network.
2021-02-18 13:54:09 +05:30
Abhishek Kumar
d6e8b53736
vmware: vm migration improvements (#4385)
- Fixes inter-cluster migration of VMs
- Allows migration of stopped VM with disks attached to different and suitable pools
- Improves inter-cluster detached volume migration
- Allows inter-cluster migration (clusters of same Pod) for system VMs, VRs on VMware
- Allows storage migration for stopped system VMs and VRs on VMware within the same pod, if the storage pool is cluster-scoped

Linked Primate PR: https://github.com/apache/cloudstack-primate/pull/789 [Changes merged in this PR after new UI merge]
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/170

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-02-12 12:41:41 +05:30
Daan Hoogland
ff376d8187 Merge release branch 4.15 to master
* 4.15:
  server: select root disk based on user input during vm import (#4591)
  kvm: Use Q35 chipset for UEFI x86_64 (#4576)
  server: fix wrong error message when create isolated network without SourceNat (#4624)
  server: add possibility to scale vm to current customer offerings (#4622)
  server: keep networks order and ips while move a vm with multiple networks (#4602)
  server: throw exception when update vm nic on L2 network (#4625)
  doc: fix typo in install notes (#4633)
2021-02-01 09:58:52 +00:00
Daan Hoogland
b6b778f003 Merge release branch 4.14 to 4.15
* 4.14:
  server: select root disk based on user input during vm import (#4591)
  kvm: Use Q35 chipset for UEFI x86_64 (#4576)
  server: fix wrong error message when create isolated network without SourceNat (#4624)
  server: add possibility to scale vm to current customer offerings (#4622)
  server: keep networks order and ips while move a vm with multiple networks (#4602)
  server: throw exception when update vm nic on L2 network (#4625)
  doc: fix typo in install notes (#4633)
2021-02-01 09:57:35 +00:00
Wei Zhou
1913c6854e
server: keep networks order and ips while move a vm with multiple networks (#4602)
This PR fixes an issue when moving a VM from one account to another.

Steps to reproduce the issue
(1) create a VM with multiple shared networks (in an advanced zone, or an advanced zone with security groups)
(2) create another account (in the same domain, which can also access the shared networks)
(3) move the VM to the new account, with a list of network IDs

Expected result: the VM has NICs on the networks in the same order as specified in the API request, and the NICs have the same IPs as before.
Actual result: the network order is not the same as specified, and the IPs are changed.
2021-02-01 14:14:20 +05:30
Rohit Yadav
b482da8c91 Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-01-11 13:58:30 +05:30
Daan Hoogland
280c13a4bb Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-05 15:51:02 +00:00
Daan Hoogland
81e9e6809b Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:34:46 +00:00
Daan Hoogland
e26202f23e Updating pom.xml version numbers for release 4.16.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:32:10 +00:00
Daan Hoogland
01b3e361c7 Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2020-12-23 16:32:25 +00:00
Harikrishna Patnala
46b5322d9b Adding vSphere storage policy to disk on start command and attach volume command 2020-10-19 15:05:58 +05:30
Harikrishna Patnala
9543fd6e6a Fix startcommand on Datastore cluster when the volume datastore in CloudStack mismatches with vCenter datastore. Volume could have migrated with in datastore cluster which caused the mismatch
Fix dettach volume when volume is not on CloudStack intended datastore
2020-10-19 15:05:57 +05:30
nvazquez
d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Harikrishna Patnala
61dd85876b Fix migrate vm and volume APIs in case if datastore cluster 2020-10-19 14:57:16 +05:30
Wei Zhou
2acd87c41e
server: Add global configuration vm.serviceoffering.cpu.cores.max and vm.serviceoffering.ram.size.max (#4379)
vm.serviceoffering.cpu.cores.max and vm.serviceoffering.ram.size.max
2020-10-14 15:48:35 +05:30
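A sketch of setting the two new limits, assuming the cs_api helper sketched earlier; the example values and units (CPU cores and MiB of RAM) are assumptions, so check the setting descriptions before applying:

```python
from cs_helper import cs_api  # signing helper sketched earlier in this log

# Cap what any service offering may request (example values; units assumed
# to be CPU cores and MiB of RAM respectively).
cs_api("updateConfiguration", name="vm.serviceoffering.cpu.cores.max", value="16")
cs_api("updateConfiguration", name="vm.serviceoffering.ram.size.max", value="65536")
```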
Pearl Dsilva
b464fe41c6
server: Secondary Storage Usage Improvements (#4053)
This feature enables the following:
- Balanced migration of data objects from a source image store to destination image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
2020-09-17 10:12:10 +05:30
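A sketch of freezing a store before draining it; the command and parameter names are assumptions based on the feature description above (read-only toggling via updateImageStore):

```python
from cs_helper import cs_api  # signing helper sketched earlier in this log

# Freeze a store so no new data lands on it while its contents are migrated
# to other image stores (API/parameter names assumed from the feature text).
cs_api("updateImageStore", id="<image-store-uuid>", readonly="true")
```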
Wei Zhou
4746c8c726
server: move UpdateDefaultNic to vm work job queue (#4020)
While removing a secondary NIC from a Running VM, if the default NIC is updated to the secondary NIC before that NIC is removed, the VM will have no default NIC (and cannot be started) once both operations complete.

This is because the UpdateDefaultNic API is not handled as a VM work job (AddNicToVMCmd and RemoveNicFromVMCmd are), so it is processed before the NIC is removed. The result is that the secondary NIC becomes the default NIC and then gets removed.
2020-09-01 13:54:48 +05:30
Rohit Yadav
562a7db8df Merge remote-tracking branch 'origin/4.14'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-08-05 23:59:16 +05:30
Wei Zhou
cd8e28b279
server: Move restoreVM to vm work job queue (#4019) 2020-08-05 09:46:55 +00:00
Pearl Dsilva
a73712ec4e
server: Enable sending hypervisor host name via metadata - VR and Config Drive (#3976)
Enable sending hypervisor host details via metadata for VR and Config Drive providers

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-07-01 08:44:11 +05:30
Nicolas Vazquez
8c1d749360
[VMware] Enable unmanaging guest VMs (#4103)
* Enable unmanaging guest VMs

* Minor fixes

* Fix stop usage event only if VM is not stopped when unmanaging

* Rename unmanaged VMs manager

* Generate netofferingremove usage event if VM is not stopped

* Generate usage event VM snapshot primary off when unmanaging
2020-06-26 08:31:43 -03:00
harikrishna-patnala
0d4f67ad8c
server: NPE occurred when dynamic scaling is tried on a VM and, as part of this, the VM tries to migrate because the current host does not have capacity (#3998)
Repro Steps:
1. Create a VM on host1
2. Make host1 capacity full by deploying multiple VMs
3. Try Dynamic scaling on VM on host1
4. NPE occurs when MS tries to find host to migrate the VM and then scale.

Root cause: the VM profile is not initialized properly with the service offering before planning for deployment.

Solution: initialize the VM profile with the service offering and also make sure custom compute parameters are handled
2020-06-18 09:19:19 +05:30
Rohit Yadav
612100c84a Merge remote-tracking branch 'origin/4.14' 2020-06-16 12:23:23 +05:30
Rohit Yadav
77947f23fd Merge remote-tracking branch 'origin/4.13' into 4.14
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-06-16 12:21:48 +05:30
harikrishna-patnala
5054766d9f
server: Submitting multiple dynamic VM Scaling API commands for the same instance can result in two usage events in the same second causing a compound key violation in usage service (#3991)
Root cause:
Even though the dynamic scaling job is handled in the VM work job queue, which ensures serialization of multiple jobs, the database updates and usage-event generation happen outside the job queue.

Solution:
Moved all updates into the job queue

First, I tested all the scenarios to check that nothing is broken:
- Scaling a running VM with a normal compute offering
- Scaling a stopped VM with a normal compute offering
- Scaling a running VM with a custom compute offering
- Scaling a stopped VM with a custom compute offering
- Scaling a stopped/running VM between a custom compute offering and a normal compute offering, and combinations among these; checked that the custom parameters are populated or deleted accordingly, based on the offering to which the VM is scaled

Since this is a corner scenario, I could not test the exact point where two usage events are recorded at the same time for two different API calls on the same VM.
2020-06-16 11:41:14 +05:30
andrijapanicsb
5f926c3353 Updating pom.xml version numbers for release 4.15.0.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 10:18:39 +01:00
andrijapanicsb
05e9b11694 Updating pom.xml version numbers for release 4.14.1.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 09:59:32 +01:00
andrijapanicsb
6f96b3b2b3 Updating pom.xml version numbers for release 4.14.0.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-11 15:03:14 +01:00