298 Commits

Author SHA1 Message Date
Paul Angus
2cb2dacbe7 Updating pom.xml version numbers for release 4.11.1.0
Signed-off-by: Paul Angus <paulangus@PA-Ansible-GUI.sblab.local>
2018-06-21 15:52:43 +01:00
Rafael Weingärtner
b9ed42bd29
Fix primary storage count when deleting volumes (#2629)
* Primary Storage count for an account does not decrease when a Data Disk is deleted

When a data disk is created but not attached to a running VM, "deleteVolume" does not decrement the used primary storage count in the account's accounting information. The property that is not being decremented is called "primarystoragetotal"; it can be retrieved via the "listAccounts" API method.

Steps to reproduce this issue:
1 - Create an account, deploy a VM in it
2 - Check the primary storage count for the account with listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with listAccounts API
5 - Delete the Data disk
6 - Check the primary storage count for the account with the listAccounts API - it is the same as before deleting the data disk, whereas it should have returned to the value from step 2 (a sketch of the fix follows these steps)
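A minimal sketch of the accounting fix described above, assuming simplified stand-in types (AccountResourceCounter, Volume) rather than CloudStack's actual classes:

```java
// Illustrative sketch only; AccountResourceCounter and Volume are simplified
// stand-ins, not CloudStack's actual types.
public class VolumeDeleteAccountingSketch {

    interface AccountResourceCounter {
        // Subtract the given number of bytes from the account's "primarystoragetotal".
        void decrementPrimaryStorage(long accountId, long bytes);
    }

    static class Volume {
        long accountId;
        long sizeInBytes;
        boolean attachedToVm;
    }

    private final AccountResourceCounter counter;

    VolumeDeleteAccountingSketch(AccountResourceCounter counter) {
        this.counter = counter;
    }

    // A detached data disk still counts against the account's primary storage usage,
    // so deleteVolume must subtract its size; previously this step was skipped.
    void deleteVolume(Volume volume) {
        if (!volume.attachedToVm) {
            counter.decrementPrimaryStorage(volume.accountId, volume.sizeInBytes);
        }
        // ... expunge the volume from primary storage here ...
    }
}
```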

* formatting and cleanups

* fix imports that were wrongly changed during rebase
2018-05-16 15:28:28 -03:00
Rohit Yadav
0ece15f86e Updating pom.xml version numbers for release 4.11.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-02-26 16:57:48 +01:00
Rohit Yadav
6ffbce6159 Updating pom.xml version numbers for release 4.11.0.1-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-02-05 11:13:50 +01:00
Rohit Yadav
5dada1f7ed Updating pom.xml version numbers for release 4.11.0.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-26 13:13:37 +01:00
Mike Tutkowski
a30a31c9b7 CLOUDSTACK-9620: Enhancements for managed storage (#2298)
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)

Added support for root disks on managed storage with KVM

Added support for volume snapshots with managed storage on KVM

Enabled creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage

Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.

Included support for Reinstall VM on KVM with managed storage

Enabled offline migration on KVM from non-managed storage to managed storage and vice versa

Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)

Added support to download (extract) a managed-storage volume to a QCOW2 file

When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.

Included support for the KVM auto-convergence feature

The compression flag was actually added in version 1.0.3 (1000003), not in version 1.3.0 (1003000); changed this to reflect the correct version.
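For context, a small sketch of how such a version gate works, assuming the flag in question is libvirt's migration compression flag and using libvirt's integer version encoding (major * 1,000,000 + minor * 1,000 + micro); the class and constant names are illustrative:

```java
// Illustrative check only; the constant name is an assumption, not the exact
// identifier in the KVM agent. Libvirt encodes a version x.y.z as
// x * 1,000,000 + y * 1,000 + z, so 1.0.3 -> 1000003 and 1.3.0 -> 1003000.
public class MigrationCompressionCheck {

    // Minimum libvirt version that understands the migration compression flag.
    static final long MIN_LIBVIRT_VERSION_FOR_COMPRESSION = 1000003L; // 1.0.3

    static boolean supportsCompression(long libvirtVersion) {
        return libvirtVersion >= MIN_LIBVIRT_VERSION_FOR_COMPRESSION;
    }

    public static void main(String[] args) {
        System.out.println(supportsCompression(1000002L)); // 1.0.2 -> false
        System.out.println(supportsCompression(1002000L)); // 1.2.0 -> true
    }
}
```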

On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
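A hedged sketch of how such a disconnect list might be assembled; the types are illustrative stand-ins, not the actual agent command classes:

```java
// Sketch only; AttachedVolume is an illustrative type, not a CloudStack class.
import java.util.ArrayList;
import java.util.List;

public class GuestShutdownDisconnectSketch {

    static class AttachedVolume {
        String iscsiPath; // iSCSI device path for managed-storage volumes, null otherwise
    }

    // On a guest-initiated shutdown the management server tells the KVM agent which
    // iSCSI volumes the VM was using, so the agent can log out of targets that were
    // never released through the normal CloudStack stop path.
    static List<String> iscsiPathsToDisconnect(List<AttachedVolume> volumes) {
        List<String> paths = new ArrayList<>();
        for (AttachedVolume v : volumes) {
            if (v.iscsiPath != null) {
                paths.add(v.iscsiPath);
            }
        }
        return paths;
    }
}
```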

Added a new Global Setting: kvm.storage.live.migration.wait

For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)

Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.

Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot

Added an SIOC API plug-in to support VMware SIOC

When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.

Added the ability to revert a volume to a snapshot

Enabled cluster-scoped managed storage

Added support for VMware dynamic discovery
2018-01-15 00:05:52 +05:30
Abhinandan Prateek
64832fd70a CLOUDSTACK-4757: Support OVA files with multiple disks for templates (#2146)
CloudStack volumes and templates are a single virtual disk in the case of the XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format: archives that can contain a complete VM, including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly, attaching an uploaded volume that contains multiple disks to a VM results in only one VMDK being attached to the VM.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks

This behavior needs to be improved on VMware to support OVA files with multiple disks for both uploaded volumes and templates, i.e. if a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk, and volumes should be created from the other VMDK disks in the OVA file and attached to the instance.
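A sketch of that intended behaviour, using made-up types (OvaDisk, ProvisionedVolume) rather than the actual VMware orchestration classes:

```java
// Sketch of the intended behaviour; OvaDisk and ProvisionedVolume are made-up types.
import java.util.ArrayList;
import java.util.List;

public class MultiDiskOvaSketch {

    static class OvaDisk {
        String vmdkName;
        OvaDisk(String vmdkName) { this.vmdkName = vmdkName; }
    }

    static class ProvisionedVolume {
        String name;
        String type; // "ROOT" or "DATADISK"
        ProvisionedVolume(String name, String type) { this.name = name; this.type = type; }
    }

    // The first VMDK in the OVA becomes the ROOT disk of the new instance; every
    // additional VMDK is created as a data volume and attached to the same instance.
    static List<ProvisionedVolume> volumesFromOva(List<OvaDisk> disks) {
        List<ProvisionedVolume> volumes = new ArrayList<>();
        for (int i = 0; i < disks.size(); i++) {
            String type = (i == 0) ? "ROOT" : "DATADISK";
            volumes.add(new ProvisionedVolume(disks.get(i).vmdkName, type));
        }
        return volumes;
    }
}
```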

Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-10 22:10:41 +05:30
subhash yedugundla
8eca04e1f6 CLOUDSTACK-9572: Snapshot on primary storage not cleaned up after Storage migration (#1740)
A snapshot on primary storage is not cleaned up after storage migration. This happens in the following scenario:

Steps to reproduce:
Create an instance on local storage on any host
Create a scheduled snapshot of the volume
Wait until ACS has created the snapshot. ACS creates the snapshot on local storage and transfers it to secondary storage, but the latest snapshot stays on local storage. This is as expected.
Migrate the instance to another XenServer host with the ACS UI and storage live migration
The snapshot on the old host's local storage is not cleaned up and remains there, so local storage fills up with unneeded snapshots (see the cleanup sketch below).
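A hedged sketch of the missing cleanup step; the DAO-style interface is an illustrative stand-in for the actual snapshot management code:

```java
// Hedged sketch only; PrimarySnapshotStore is an illustrative stand-in.
import java.util.List;

public class PostMigrationSnapshotCleanup {

    interface PrimarySnapshotStore {
        List<Long> listSnapshotsOnPool(long volumeId, long poolId);
        void deleteFromPool(long snapshotId, long poolId);
    }

    // After a volume has been live-migrated off a local storage pool, the snapshot
    // copies left on the source pool are never reused, so they should be removed to
    // keep the pool from filling up.
    static void cleanUpSourcePool(PrimarySnapshotStore store, long volumeId, long sourcePoolId) {
        for (long snapshotId : store.listSnapshotsOnPool(volumeId, sourcePoolId)) {
            store.deleteFromPool(snapshotId, sourcePoolId);
        }
    }
}
```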
2018-01-05 11:19:56 +05:30
Rafael Weingärtner
7f6ae15972 Remove annotation not needed at cloud-engine-storage-image (#2324)
remove bogus depends-on declaration at spring-engine-storage-snapshot-core-context
2017-11-16 23:11:15 +05:30
Rajani Karuturi
4bc7c270fa Updating pom.xml version numbers for release 4.11.0.0-SNAPSHOT
Signed-off-by: Rajani Karuturi <rajanikaruturi@gmail.com>
2017-07-12 12:09:38 +05:30
Rajani Karuturi
9d2893d44a Updating pom.xml version numbers for release 4.10.0.0
Signed-off-by: Rajani Karuturi <rajanikaruturi@gmail.com>
2017-07-03 10:06:43 +05:30
Mike Tutkowski
ccb68e13a7
Fix for an issue introduced in this commit: 336df84f1787de962a67d0a34551f9027303040e
(cherry picked from commit c57f556966ab68b5bd91bb0d009a5ae170402e26)

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-06-15 16:28:52 +05:30
Rajani Karuturi
1bd66cb03e Merge pull request #2072 from Accelerite/CLOUDSTACK-9895_ParallelVolumes
CLOUDSTACK-9895 : Added support for parallel volume(s) creation from a volume snapshot
2017-05-31 14:05:05 +05:30
Pavan Kumar Aravapalli
502f813370 CLOUDSTACK-9895 : Added support for parallel volume(s) creation from a volume snapshot 2017-05-31 11:27:30 +05:30
Rajani Karuturi
86eb5683dd Merge pull request #1840 from anshul1886/CLOUDSTACK-9685
CLOUDSTACK-9685: delete snapshot on primary associated with a volume …
2017-05-16 12:42:48 +05:30
Rajani Karuturi
503c803ba0 Merge pull request #803 from anshul1886/CLOUDSTACK-8833
CLOUDSTACK-8833: Fixed an issue where generating a URL for a volume and migrating it to another storage resulted in two entries in the UI and listVolumes not working for that volume
2017-05-08 10:14:02 +05:30
Daan Hoogland
4fd72917cd CE-113 removed TODOs 2017-04-18 18:11:00 +02:00
Daan Hoogland
c689d4a696 CE-113 trace logging and rethrow instead of nesting CloudRuntimeException 2017-04-18 18:11:00 +02:00
nvazquez
c66df6e11f Fix for test failure 2017-03-02 16:14:45 -03:00
Anshul Gangwar
42b89278e9 CLOUDSTACK-8833: Fixed an issue where generating a URL for a volume and migrating it to another storage resulted in two entries in the UI and listVolumes not working for that volume
Update the volume id in the volume_store_ref table to the newly created volume for the migration
2017-02-28 17:55:42 +05:30
Rajani Karuturi
6a18cdd6ef Merge pull request #1825 from Accelerite/CLOUDSTACK-9660
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests

The NPE is seen because the VM destroy and storage cleanup threads try to remove the same root volume. The fix is to handle
only non-root volumes in the storage cleanup thread; root volumes are handled as part of VM destroy.
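A minimal sketch of that fix, with simplified stand-in types rather than the storage manager internals:

```java
// Illustrative only; Volume and VolumeCleaner are simplified stand-ins.
import java.util.List;

public class StorageCleanupSketch {

    enum VolumeType { ROOT, DATADISK }

    static class Volume {
        long id;
        VolumeType type;
    }

    interface VolumeCleaner {
        void expunge(Volume volume);
    }

    // The storage cleanup thread expunges only non-ROOT volumes; ROOT volumes are
    // expunged by the VM destroy path, so the two code paths no longer race on the
    // same volume (the cause of the NPE).
    static void cleanupDestroyedVolumes(List<Volume> destroyedVolumes, VolumeCleaner cleaner) {
        for (Volume volume : destroyedVolumes) {
            if (volume.type == VolumeType.ROOT) {
                continue;
            }
            cleaner.expunge(volume);
        }
    }
}
```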

* pr/1825:
  CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-02-28 06:00:02 +05:30
Rajani Karuturi
4721c53ea0 Merge pull request #1749 from mike-tutkowski/archived_snapshots
CLOUDSTACK-9619: Updates for SAN-assisted snapshots. This PR addresses a few issues in #1600 (which was recently merged to master for 4.10).

In StorageSystemDataMotionStrategy.performCopyOfVdi we call getSnapshotDetails. In one scenario, the source snapshot in question comes from secondary storage (when we are creating a new volume on managed storage from a snapshot of ours that's on secondary storage).

This usually worked in the regression tests due to a bit of "luck": We retrieve the ID of the snapshot (which is on secondary storage) and then try to pull out its StorageVO object (which is for primary storage). If you happen to have a primary storage that matches the ID (which is the ID of a secondary storage), then getSnapshotDetails populates its Map<String, String> with inapplicable data (that is later ignored) and you don't easily see a problem. However, if you don't have a primary storage that matches that ID (which I didn't today because I had removed that primary storage), then a NullPointerException is thrown.

I have fixed that issue by skipping getSnapshotDetails if the source is coming from secondary storage.
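A sketch of that guard, with simplified stand-ins rather than the exact StorageSystemDataMotionStrategy code:

```java
// Sketch of the guard described above; the enum and types are simplified stand-ins.
import java.util.Collections;
import java.util.Map;

public class SnapshotDetailsGuard {

    enum StoreRole { PRIMARY, SECONDARY }

    static class SnapshotRef {
        long id;
        StoreRole storeRole;
    }

    interface DetailsLookup {
        Map<String, String> getSnapshotDetails(long snapshotId);
    }

    // Only look up primary-storage details when the source snapshot actually lives on
    // primary storage; for a snapshot on secondary storage the ID would be matched
    // against the wrong table, returning junk data or triggering a NullPointerException.
    static Map<String, String> detailsIfOnPrimary(SnapshotRef snapshot, DetailsLookup lookup) {
        if (snapshot.storeRole != StoreRole.PRIMARY) {
            return Collections.emptyMap();
        }
        return lookup.getSnapshotDetails(snapshot.id);
    }
}
```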

While fixing that, I noticed a couple more problems:

1) We can invoke grantAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
2) We can invoke revokeAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).

I have corrected those issues, as well.

I then came across one more problem:
         When using a SAN snapshot and copying it to secondary storage or creating a new managed-storage volume from a snapshot of ours on secondary storage, we attach to the SR in the XenServer code, but detach from it in the StorageSystemDataMotionStrategy code (by sending a message to the XenServer code to perform an SR detach). Since we know to detach from the SR after the copy is done, we should detach from the SR in the XenServer code (without that code having to be explicitly called from outside of the XenServer logic).

I went ahead and changed that, as well.

JIRA Ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-9619

* pr/1749:
  CLOUDSTACK-9619: Updates for SAN-assisted snapshots

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-01-27 05:35:06 +05:30
Anshul Gangwar
336df84f17 CLOUDSTACK-9685: delete snapshot on primary associated with a volume when that volume is deleted
as that snapshot will never be used again and would otherwise fill up primary storage
2016-12-20 15:21:04 +05:30
Rohit Yadav
0dce1c50c1 CLOUDSTACK-9456: Update Spring version in maven poms
- Bump spring-framework version to 4.x and Jetty to a version that runs with JDK8
- Bump servlet dependency version
- Migrate Spring XMLs to version 4; fixes schema locations in various XMLs that are 3.0-dependent.
- Fix failing tests due to spring upgrade
  (Thanks @marcaurele Marc-Aurèle Brothier for fixing them)
    * Fix test DeploymentPlanningManagerImplTest
    * Fix GloboDNS test

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-16 21:21:20 +05:30
Koushik Das
d6b41d9ac2 CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests
The NPE is seen because the VM destroy and storage cleanup threads try to remove the same root volume. The fix is to handle
only non-root volumes in the storage cleanup thread; root volumes are handled as part of VM destroy.
2016-12-09 15:49:39 +05:30
Mike Tutkowski
06806a6523 CLOUDSTACK-9619: Updates for SAN-assisted snapshots 2016-12-06 17:32:56 -07:00
Rohit Yadav
9555492b4d Merge branch '4.9' 2016-08-23 14:16:53 +05:30
Rohit Yadav
f13c224da1 Updating pom.xml version numbers for release 4.9.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-08-19 13:53:39 +05:30
Will Stevens
62aa3b2bfa Updating pom.xml version numbers for release 4.10.0-SNAPSHOT
Signed-off-by: Will Stevens <williamstevens@gmail.com>
2016-07-29 10:11:34 -04:00
Will Stevens
227ff3884d Updating pom.xml version numbers for release 4.9.0
Signed-off-by: Will Stevens <williamstevens@gmail.com>
2016-07-25 16:56:04 -04:00
Mike Tutkowski
9d215562eb Faster logic to see if a cluster supports resigning 2016-05-16 07:18:39 -06:00
Mike Tutkowski
2bd035d199 Support for backend snapshots with XenServer 2016-05-13 01:02:04 -06:00
Mike Tutkowski
dad9e5d868 CLOUDSTACK-8813: Notify listeners when a host has been added to a cluster, is about to be removed from a cluster, or has been removed from a cluster 2016-05-11 08:02:46 -06:00
Remi Bergsma
43ab98d823 Updating pom.xml version numbers for release 4.9.0-SNAPSHOT
Signed-off-by: Remi Bergsma <github@remi.nl>
2016-01-26 15:12:20 +01:00
Remi Bergsma
32fcc47117 Updating pom.xml version numbers for release 4.8.1-SNAPSHOT
Signed-off-by: Remi Bergsma <github@remi.nl>
2016-01-26 09:39:00 +01:00
Remi Bergsma
62f218b7bd Updating pom.xml version numbers for release 4.8.0
Signed-off-by: Remi Bergsma <github@remi.nl>
2016-01-20 23:43:35 +01:00
Remi Bergsma
8f5a2920e8 Updating pom.xml version numbers for release 4.8.0-SNAPSHOT
Signed-off-by: Remi Bergsma <github@remi.nl>
2015-12-21 22:09:31 +01:00
Remi Bergsma
1f53f2a93e Updating pom.xml version numbers for release 4.7.0-SNAPSHOT
Signed-off-by: Remi Bergsma <github@remi.nl>
2015-11-15 18:54:13 +01:00
Wei Zhou
bef92052ee CLOUDSTACK-8964: Can't create volume from snapshot of a removed volume
This issue happens on KVM as well.
This is because the volume info is missing in the CopyCommand once the volume has been removed.
When the KVM agent tries to process the command, it throws an NPE.
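One way to avoid the NPE is a defensive guard on the agent side, sketched below with illustrative types; the actual fix may instead ensure the volume info is populated in the CopyCommand:

```java
// Simplified sketch; CopyRequest and Answer here are illustrative, not the KVM
// agent's real command and answer classes.
public class CopyCommandGuard {

    static class VolumeInfo {
        String path;
    }

    static class CopyRequest {
        VolumeInfo srcVolume; // null once the source volume row has been removed
    }

    static class Answer {
        final boolean success;
        final String details;
        Answer(boolean success, String details) {
            this.success = success;
            this.details = details;
        }
    }

    // Fail the copy with a clear message instead of dereferencing a null source
    // volume and throwing an NPE.
    static Answer process(CopyRequest cmd) {
        if (cmd.srcVolume == null || cmd.srcVolume.path == null) {
            return new Answer(false, "Source volume info is missing; the volume may have been removed");
        }
        // ... perform the actual copy from cmd.srcVolume.path here ...
        return new Answer(true, null);
    }
}
```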
2015-10-23 22:02:59 +02:00
Mike Tutkowski
8b0266d12e Merge branch 'pr/547'
* pr/547:
  CLOUDSTACK-8601. VMFS storage added as local storage can be re-added as shared storage. Fail addition of a VMFS shared storage pool in case it has already been added as local storage in CS.

Signed-off-by: Mike Tutkowski <mike.tutkowski@solidfire.com>
2015-08-10 19:00:53 -06:00
Koushik Das
ab7c9e4098 CLOUDSTACK-8655: [Browser Based Upload Volume] Partially uploaded volumes are not getting destroyed as part of storage GC
As part of volume sync, which runs during SSVM start-up, the volume_store_ref entry was getting deleted. Volume GC relies on this entry to move the volume to the Destroyed state.
Since the entry was deleted, the GC thread never moved the volume from UploadError/UploadAbandoned to Destroyed. The fix is to not remove the volume_store_ref entry as part of volume sync and to let the GC thread handle the cleanup.
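A hedged sketch of the volume-sync change; the upload states match those mentioned above, but the surrounding types are simplified stand-ins:

```java
// Hedged sketch only; the types are simplified stand-ins, not CloudStack's DAOs.
public class VolumeSyncSketch {

    enum UploadState { UPLOADED, UPLOAD_IN_PROGRESS, UPLOAD_ERROR, UPLOAD_ABANDONED }

    static class VolumeStoreRef {
        long volumeId;
        UploadState state;
    }

    interface VolumeStoreRefDao {
        void remove(long volumeId);
    }

    // During SSVM start-up volume sync, keep the volume_store_ref row for volumes in
    // UploadError/UploadAbandoned so the storage GC thread can still move them to
    // Destroyed; removing the row here would leave them stranded forever.
    static void reconcile(VolumeStoreRef ref, boolean foundOnStore, VolumeStoreRefDao dao) {
        boolean awaitingGc = ref.state == UploadState.UPLOAD_ERROR
                || ref.state == UploadState.UPLOAD_ABANDONED;
        if (!foundOnStore && !awaitingGc) {
            dao.remove(ref.volumeId);
        }
    }
}
```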

This closes #611
2015-07-22 19:05:47 +05:30
Likitha Shetty
13a98dd196 CLOUDSTACK-8601. VMFS storage added as local storage can be re-added as shared storage.
Fail addition of a VMFS shared storage pool in case it has already been added as local storage in CS.
2015-07-01 10:47:36 +05:30
Rajani Karuturi
b31b8425df CLOUDSTACK-8525: NPE while updating the state of the volume after deletion
The volume is already deleted (possibly by the cleanup thread), hence the NPE. Added a not-null check for the VolumeVO and return false from the state transition.
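A minimal sketch of that guard; VolumeRecord is a stand-in with only the fields needed:

```java
// Minimal sketch only; VolumeRecord is an illustrative stand-in type.
public class VolumeStateTransitionGuard {

    static class VolumeRecord {
        long id;
        String state;
    }

    // The cleanup thread may already have expunged the volume, so the lookup can return
    // null; in that case the state transition reports failure instead of throwing an NPE.
    static boolean transitToDestroyed(VolumeRecord volume) {
        if (volume == null) {
            return false;
        }
        volume.state = "Destroyed";
        return true;
    }
}
```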

This closes #321
2015-06-03 11:45:02 +05:30
Mike Tutkowski
ac2bccd2a2 Removing unused imports 2015-05-07 13:52:25 -06:00
Daan Hoogland
1c408dec37 Merge branch '4.5' after 4.5.1 vote passes 2015-05-07 16:03:26 +02:00
Rohit Yadav
4ba72a877c Updating pom.xml version numbers for release 4.5.2-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-05-07 15:33:01 +02:00
Rohit Yadav
0eb4eb2370 Updating pom.xml version numbers for release 4.5.1
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-05-04 12:17:03 +02:00
Likitha Shetty
6c649ce3ae CLOUDSTACK-8411. Unable to delete an uploaded volume after CCP fails to attach the volume to a VM.
Correctly update the status of an uploaded volume upon failure to attach it to a VM.

(cherry picked from commit 10a106f5d86a7f6786b94a7298a5c63c32eab66b)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-04-29 16:50:40 +02:00
Rajani Karuturi
0b8355920e Merge branch 'volume-upload' into master
This closes #206
2015-04-29 11:12:53 +05:30
Likitha Shetty
10a106f5d8 CLOUDSTACK-8411. Unable to delete an uploaded volume after CCP fails to attach the volume to a VM.
Correctly update the status of an uploaded volume upon failure to attach it to a VM.
2015-04-28 09:52:06 +05:30