331 Commits

Author SHA1 Message Date
nvazquez
9b51a706db Set deploy-as-is to default on VMware 2020-10-19 15:05:57 +05:30
Harikrishna Patnala
33ae2afc89 Removed a few duplicate imports during rebase with master 2020-10-19 15:05:57 +05:30
nvazquez
d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Harikrishna Patnala
201ebe8868 Fix simulator failures 2020-10-19 14:57:16 +05:30
Harikrishna Patnala
61dd85876b Fix migrate VM and volume APIs in case of datastore cluster 2020-10-19 14:57:16 +05:30
Harikrishna Patnala
873f9dd9ac Datastore cluster operations: putting into maintenance mode, updating storage pool tags, cancelling maintenance mode, and deleting the storage pool 2020-10-19 14:57:16 +05:30
Harikrishna Patnala
75fb1d91ee Fix adding and listing datastore clusters 2020-10-19 14:57:15 +05:30
Harikrishna Patnala
41b3fc19d6 Add datastore cluster and its child entities (the datastores in the cluster) into CloudStack.
Setting the scope is still pending.
2020-10-19 14:57:15 +05:30
Harikrishna Patnala
48786b2d31 DataStore Clusters addition as a storage pool 2020-10-19 14:57:15 +05:30
Harikrishna Patnala
6df819028e UI changes and accept any type of datastore as PreSetup in VMware 2020-10-19 14:57:15 +05:30
Pearl Dsilva
b464fe41c6
server: Secondary Storage Usage Improvements (#4053)
This feature enables the following:
- Balanced migration of data objects from a source image store to destination image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores

Related Primate PR: apache/cloudstack-primate#326
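
A rough CloudMonkey sketch of driving these operations (API and parameter names are assumed from the feature description above; IDs are placeholders, so verify against your version):

    # Make the source image store read-only before migrating its data
    cmk update imagestore id=<src-store-uuid> readonly=true
    # Migrate data to the destination store(s); migrationtype assumed to accept balance|complete
    cmk migrate secondarystoragedata srcpool=<src-store-uuid> destpools=<dest-store-uuid> migrationtype=complete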
2020-09-17 10:12:10 +05:30
Nicolas Vazquez
8c1d749360
[VMware] Enable unmanaging guest VMs (#4103)
* Enable unmanaging guest VMs

* Minor fixes

* Fix stop usage event only if VM is not stopped when unmanaging

* Rename unmanaged VMs manager

* Generate netofferingremove usage event if VM is not stopped

* Generate usage event VM snapshot primary off when unmanaging
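
A minimal sketch of the new operation via CloudMonkey (API name taken from the PR title; the VM keeps running on the hypervisor while CloudStack stops tracking it):

    # Unmanage a guest VM so it is removed from CloudStack's database
    # but left untouched on vCenter (id is a placeholder)
    cmk unmanage virtualmachine id=<vm-uuid>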
2020-06-26 08:31:43 -03:00
andrijapanicsb
5f926c3353 Updating pom.xml version numbers for release 4.15.0.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 10:18:39 +01:00
andrijapanicsb
05e9b11694 Updating pom.xml version numbers for release 4.14.1.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 09:59:32 +01:00
andrijapanicsb
6f96b3b2b3 Updating pom.xml version numbers for release 4.14.0.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-11 15:03:14 +01:00
Nicolas Vazquez
73122fd0a9
[KVM] Direct download agnostic of the storage provider (#3828)
* Remove constraint for NFS storage

* Add new property on agent.properties

* Add free disk space on the host prior template download

* Add unit tests for the free space check

* Fix free space check - retrieve available size in bytes

* Update default location for direct download

* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope

* Verify location for temporary download exists before checking free space

* In progress - refactor and extension

* Refactor and fix

* Last fixes and marvin tests

* Remove unused test file

* Improve logging

* Change default path for direct download

* Fix upload certificate

* Fix ISO failure after retry

* Fix metalink filename mismatch error

* Fix iso direct download

* Fix for direct download ISOs on local storage and shared mount point

* Last fix iso

* Fix VM migration with ISO

* Refactor volume migration to remove secondary storage intermediate

* Fix simulator issue
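
A sketch of the agent-side setting this PR touches (property name assumed; verify against the agent.properties shipped with your version):

    # /etc/cloudstack/agent/agent.properties
    # Temporary location on the KVM host where directly-downloaded
    # templates are staged before being moved to the target pool
    direct.download.temporary.download.location=/var/lib/libvirt/images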
2020-03-06 19:56:54 +01:00
Wei Zhou
fd5bea838b
New feature: Add support to destroy/recover volumes (#3688)
* server: fix resource count of primary storage if some volumes are Expunged but not removed

Steps to reproduce the issue:
(1) Create a VM and stop it. Check the resource count of primary storage.
(2) Download the volume. The resource count of primary storage is not changed.
(3) Expunge the VM; the volume will be in the Expunged state, as there is a volume snapshot on secondary storage. The resource count of primary storage decreases.
(4) Update the resource count of the account (or domain); the resource count of primary storage is reset to the value from step (2).

* New feature: Add support to destroy/recover volumes

* Add integration test for volume destroy/recover

* marvin: check resource count of more types

* Translate messages to JP

* Update messages for CN

* Translate messages for NL

* fix two issues per Daan's comments
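
A hedged CloudMonkey sketch of the new volume lifecycle (API names taken from the feature title; the expunge flag is an assumption to verify):

    cmk destroy volume id=<volume-uuid>                 # move the volume to Destroy state
    cmk recover volume id=<volume-uuid>                 # recover it before it is expunged
    cmk destroy volume id=<volume-uuid> expunge=true    # destroy and expunge in one step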

Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
2020-02-07 11:25:10 +01:00
Paul Angus
50fc045f36 Updating pom.xml version numbers for release 4.14.0.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-07 09:57:46 +01:00
Nicolas Vazquez
0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:
- Source and destination hosts are within the same cluster
- From NFS primary storage to NFS cluster-wide primary storage
- Source and destination NFS storage are mounted on the hosts
In order to enable this functionality, the database must be updated to enable the live storage migration capability for KVM when the above conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

- To use this feature, set storage_motion_supported=1 in the hypervisor_capabilities table for KVM (see the SQL sketch below). This is not done by default, as the feature may not work in some environments; read below.
- This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
- On CentOS 7 the error seen is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it appears the qemu migrate feature may be available in paid RHEV but not in CentOS 7; it does work with CentOS 6.
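
For example, the capability flag can be flipped directly in the database (a sketch assuming the usual cloud schema; take a database backup first):

    mysql -u cloud -p cloud -e \
      "UPDATE hypervisor_capabilities SET storage_motion_supported=1 WHERE hypervisor_type='KVM';"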
Fix for CentOS 7:

Create repo file on /etc/yum.repos.d/:

    [qemu-kvm-rhev]
    name=oVirt rebuilds of qemu-kvm-rhev
    baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
    mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
    enabled=1
    skip_if_unavailable=1
    gpgcheck=0

Install the qemu-kvm-ev packages:

    yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64

Reboot the host.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Rohit Yadav
0700d91a68 Merge branch '4.12'
- Fixes PR #3146 db cleanup to the correct 4.12->4.13 upgrade path
- Fixes failing unit test due to jdk specific changes after forward
  merging

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 15:15:17 +05:30
Rohit Yadav
00ff536f81 Merge remote-tracking branch 'origin/4.11' into 4.12
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 14:26:11 +05:30
Anurag Awasthi
f9b61bc737 orchestration: Allow VM that has never started to have volumes attached (#3276)
With patch b766bf7
we started tracking disks in the Attaching state so that other attach requests can fail gracefully. However, this missed the case where a disk was in the Allocated state and an attach was requested.

For the use case where users want to attach a disk that is in the Allocated state (not Ready), we need an Allocated->Attaching transition as well. We must take care to return to the original state - Allocated or Ready - when the attach request has completed.

For the use case of unstarted VMs, the disk must proceed as follows: Allocated -> Attaching -> Allocated. When the VM is started, the disk is "created" and a pool is assigned. For started VMs it is simpler, and the disk proceeds as follows: Ready -> Attaching -> Ready.

Test this by creating a VM with "startvm=false", creating a disk, and trying to attach it in the Allocated state. This throws an exception on the latest 4.11 but is fixed by this patch.
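
A repro sketch with CloudMonkey (all parameters are placeholders):

    # VM is created but never started, so its disks stay Allocated
    cmk deploy virtualmachine zoneid=<z> templateid=<t> serviceofferingid=<s> startvm=false
    cmk create volume name=data zoneid=<z> diskofferingid=<d>
    # Attaching an Allocated volume used to throw; with this patch it succeeds
    cmk attach volume id=<volume-uuid> virtualmachineid=<vm-uuid>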

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-10 23:40:38 +05:30
GabrielBrascher
8d3feb100a Updating pom.xml version numbers for release 4.13.0.0-SNAPSHOT
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-20 18:47:35 -03:00
GabrielBrascher
a137398bf1 Updating pom.xml version numbers for release 4.12.0.0
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-14 10:11:46 -03:00
Nathan Johnson
637cc6ec4e feature: add libvirt / qemu io bursting (#3133)
* feature: add libvirt / qemu io bursting

Adds the ability to set bursting features from libvirt / qemu

This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6 (see the virsh sketch below).

https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/

* updates per rafael et al
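
Outside CloudStack, the same burst knobs can be exercised with virsh on a recent libvirt (values are illustrative, not defaults):

    # 50 MB/s sustained, bursting to 100 MB/s for up to 60 seconds
    virsh blkdeviotune <domain> vda --live \
      --total-bytes-sec 52428800 \
      --total-bytes-sec-max 104857600 \
      --total-bytes-sec-max-length 60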
2019-02-04 19:47:44 -02:00
dahn
b363fd49f7 VMware offline migration (#2848)
* - Offline VM and volume migration on VMware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations

* Fix indentation of marvin test file and reformat against PEP8

* * Fix few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.

* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one

* Fix unhandled NPE during VM migration

* Revert back to distinct event descriptions for VM to host or storage pool migration

* Reformat test_primary_storage file against PEP-8 and Remove unused imports

* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
2019-01-25 10:05:13 -02:00
Paul Angus
fb80e51307 Updating pom.xml version numbers for release 4.11.3.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2018-11-20 13:11:52 +00:00
Khosrow Moossavi
7c6630bca7 Cleanup POMs (#2613)
* Cleanup and code-formatting of POM files

* Remove obsolete mycila license-maven-plugin

* Remove obsolete console-proxy/plugin project

* Move console-proxy-rdbconsole under console-proxy parent

* Use correct parent path for rdpconsole

* Order items alphabetically in setnextversion.sh

* Unify license header in POMs

* Alphabetical order of module definitions

* Extract all defined versions into parent pom

* Remove obsolete files: version-info.in, configure-info.in

* Remove redundant defaultGoal

* Remove useless checkstyle plugin from checkstyle project

* Order items alphabetically in pom.xml

* Add additional SPACEs to fix the Debian build

* Don't execute checkstyle on parent projects

* Use UTF-8 encoding in building checkstyle project

* Extract plugin versions into properties

* Execute PMD plugin on all the projects with -Penablefindbugs

* Upgrade maven plugins to latest version

* Make sure to always look for apache parent pom from repository

* Fix incorrect version grep in debian packaging

* Fix rebase conflicts

* Fix rebase conflicts

* Remove PMD for now to be fixed on another PR
2018-07-25 14:39:37 -03:00
Khosrow Moossavi
67860d9f46 maven: Updating pom.xml version numbers for release 4.11.2.0-SNAPSHOT (#2728)
Fixes the versions in the POMs etc. to be consistent with the X.Y.Z.0-SNAPSHOT versioning pattern after a minor release.

Signed-off-by: Khosrow Moossavi <khos2ow@gmail.com>
2018-07-06 17:27:12 +05:30
Paul Angus
8ba318da19 Updating pom.xml version numbers for release 4.11.2-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2018-06-26 17:53:54 +01:00
Paul Angus
2cb2dacbe7 Updating pom.xml version numbers for release 4.11.1.0
Signed-off-by: Paul Angus <paulangus@PA-Ansible-GUI.sblab.local>
2018-06-21 15:52:43 +01:00
Rafael Weingärtner
15eddf3dd6 Merge forward branch '4.11' PR #2629
Fix primary storage count when deleting volumes (#2629)
2018-05-16 16:59:17 -03:00
Rafael Weingärtner
b9ed42bd29
Fix primary storage count when deleting volumes (#2629)
* Primary Storage count for an account does not decrease when a Data Disk is deleted

When a data disk is created and not attached to a running VM, "deleteVolume" does not decrement the used primary storage count in the account's accounting information. The property that is not being decremented is called "primarystoragetotal"; this information can be retrieved via the "listAccounts" API method.

Steps to reproduce this issue:
1 - Create an account and deploy a VM in it
2 - Check the primary storage count for the account with the listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with the listAccounts API
5 - Delete the data disk
6 - Check the primary storage count for the account with the listAccounts API - it is the same as before deleting the data disk (it should have returned to the value from step 2!)
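
Between the steps above, the counter can be checked with a quick CloudMonkey call (a sketch; primarystoragetotal is the response field named above):

    cmk list accounts name=<account-name> listall=true | grep -i primarystoragetotal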

* formatting and cleanups

* fix imports that were wrongly changed during rebase
2018-05-16 15:28:28 -03:00
Rohit Yadav
0ece15f86e Updating pom.xml version numbers for release 4.11.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-02-26 16:57:48 +01:00
Rohit Yadav
6ffbce6159 Updating pom.xml version numbers for release 4.11.0.1-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-02-05 11:13:50 +01:00
Rohit Yadav
5dada1f7ed Updating pom.xml version numbers for release 4.11.0.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-26 13:13:37 +01:00
Marc-Aurèle Brothier
893a88d225 CLOUDSTACK-10105: Use maven standard project structure in all projects (#2283)
Remove the Maven standard module (which only a few projects were using) and get rid of Maven customizations for the project structure.

- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml
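
A sketch of the per-module move this describes, assuming a plain src/com layout (paths are illustrative):

    # move main sources and tests into the Maven standard layout
    mkdir -p src/main/java src/test/java
    git mv src/com src/main/java/com
    git mv test/com src/test/java/com
    # then scan for leftovers, as described above
    grep -rn "src/com\|src/org" --include=pom.xml .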

Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
2018-01-20 03:19:27 +05:30
Rohit Yadav
072dbc0720 Updating pom.xml version numbers for master to 4.12.0.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-15 17:43:45 +05:30
Mike Tutkowski
a30a31c9b7 CLOUDSTACK-9620: Enhancements for managed storage (#2298)
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)

Added support for root disks on managed storage with KVM

Added support for volume snapshots with managed storage on KVM

Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage

Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.

Included support for Reinstall VM on KVM with managed storage

Enabled offline migration on KVM from non-managed storage to managed storage and vice versa

Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)

Added support to download (extract) a managed-storage volume to a QCOW2 file

When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.

Included support for the KVM auto-convergence feature

The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)

On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.

Added a new Global Setting: kvm.storage.live.migration.wait

For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)

Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.

Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot

Added an SIOC API plug-in to support VMware SIOC

When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.

Added the ability to revert a volume to a snapshot

Enabled cluster-scoped managed storage

Added support for VMware dynamic discovery
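
For instance, the new Global Setting mentioned above can be adjusted through the API (a sketch; the value is illustrative):

    cmk update configuration name=kvm.storage.live.migration.wait value=7200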
2018-01-15 00:05:52 +05:30
Abhinandan Prateek
64832fd70a CLOUDSTACK-4757: Support OVA files with multiple disks for templates (#2146)
CloudStack volumes and templates are a single virtual disk in the case of XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM, including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly with uploaded volumes: attaching an uploaded volume that contains multiple disks to a VM results in only one VMDK being attached to the VM.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks

This behavior needs to be improved in VMware to support OVA files with multiple disks for both uploaded volumes and templates, i.e. if a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk, and volumes should be created from the other VMDK disks in the OVA file and attached to the instance.

Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-10 22:10:41 +05:30
subhash yedugundla
8eca04e1f6 CLOUDSTACK-9572: Snapshot on primary storage not cleaned up after Storage migration (#1740)
A snapshot on primary storage is not cleaned up after storage migration. This happens in the following scenario:

Steps to reproduce:
- Create an instance on local storage on any host
- Create a scheduled snapshot of the volume
- Wait until ACS has created the snapshot. ACS creates the snapshot on local storage and transfers it to secondary storage, but the latest snapshot stays on local storage. This is as expected.
- Migrate the instance to another XenServer host with the ACS UI and storage live migration
- The snapshot on the old host's local storage is not cleaned up and stays there, so local storage fills up with unneeded snapshots.
2018-01-05 11:19:56 +05:30
Rafael Weingärtner
7f6ae15972 Remove annotation not needed at cloud-engine-storage-image (#2324)
remove bogus depends-on declaration at spring-engine-storage-snapshot-core-context
2017-11-16 23:11:15 +05:30
Rajani Karuturi
4bc7c270fa Updating pom.xml version numbers for release 4.11.0.0-SNAPSHOT
Signed-off-by: Rajani Karuturi <rajanikaruturi@gmail.com>
2017-07-12 12:09:38 +05:30
Rajani Karuturi
9d2893d44a Updating pom.xml version numbers for release 4.10.0.0
Signed-off-by: Rajani Karuturi <rajanikaruturi@gmail.com>
2017-07-03 10:06:43 +05:30
Mike Tutkowski
ccb68e13a7
Fix for an issue introduced in this commit: 336df84f1787de962a67d0a34551f9027303040e
(cherry picked from commit c57f556966ab68b5bd91bb0d009a5ae170402e26)

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-06-15 16:28:52 +05:30
Rajani Karuturi
1bd66cb03e Merge pull request #2072 from Accelerite/CLOUDSTACK-9895_ParallelVolumes
CLOUDSTACK-9895 : Added support for parallel volume(s) creation from a volume snapshot
2017-05-31 14:05:05 +05:30
Pavan Kumar Aravapalli
502f813370 CLOUDSTACK-9895 : Added support for parallel volume(s) creation from a volume snapshot 2017-05-31 11:27:30 +05:30
Rajani Karuturi
86eb5683dd Merge pull request #1840 from anshul1886/CLOUDSTACK-9685
CLOUDSTACK-9685: delete snapshot on primary associated with a volume …
2017-05-16 12:42:48 +05:30
Rajani Karuturi
503c803ba0 Merge pull request #803 from anshul1886/CLOUDSTACK-8833
CLOUDSTACK-8833: Fixed an issue where generating a URL and migrating the volume to another storage resulted in two entries in the UI, with listVolumes not working for that volume
2017-05-08 10:14:02 +05:30
Daan Hoogland
4fd72917cd CE-113 removed TODOs 2017-04-18 18:11:00 +02:00