A root volume can be replaced by a different root volume without the VM it belongs to being expunged.
From dev@:
For example: let's say we have a system VM running on NFS primary storage. We then put this primary storage into maintenance mode, which creates the system VM (with the same name) on a different primary storage (we do not create a new row in the cloud.vm_instance table for this VM). While this VM works, the original root disk of the system VM remains on the original primary storage and is not destroyed by the code in StorageManagerImpl.cleanupStorage(boolean) in 4.10, because 4.10 (as shown above) only considers non-root volumes for deletion. In the 4.9 version of the code, the original root disk is cleaned up in StorageManagerImpl.cleanupStorage(boolean). The problem with 4.10 relying on a root disk always being deleted when its VM is deleted is that, in a situation like this, the system VM doesn't get deleted at this point: it gets a new root disk hosted by a different primary storage, so its original root disk is stranded.
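As a hedged illustration of the behaviour described above (types and names are simplified stand-ins, not the actual StorageManagerImpl.cleanupStorage(boolean) code): the 4.10-style cleanup pass only selects non-root volumes, which is why a root disk stranded on the old primary storage is never reclaimed there.

```java
import java.util.List;
import java.util.stream.Collectors;

// Minimal sketch, not the actual CloudStack code: the cleanup thread asks only
// for non-ROOT volumes, so a ROOT volume left behind on the old primary storage
// is never selected for deletion here.
class CleanupSketch {
    enum VolumeType { ROOT, DATADISK }

    record VolumeRow(long id, VolumeType type, boolean destroyed) {}

    /** 4.10-style selection: ROOT volumes are skipped and left to VM destroy. */
    static List<VolumeRow> volumesToCleanUp(List<VolumeRow> candidates) {
        return candidates.stream()
                .filter(v -> v.type() != VolumeType.ROOT)   // only non-root volumes
                .filter(VolumeRow::destroyed)               // and only destroyed ones
                .collect(Collectors.toList());
    }
}
```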
CLOUDSTACK-5806: add presetup to storage types that support over provisioning
Ideally this should be configurable via global settings
* pr/1958:
CLOUDSTACK-5806: add presetup to storage types that support over provisioning Ideally this should be configurable via global settings
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
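A rough sketch of the idea behind the CLOUDSTACK-5806 change above (the enum values and set membership are illustrative assumptions, not the definitive list used by CloudStack): the over-provisioning factor is applied only to pool types known to support it, and PreSetup is added to that set.

```java
import java.util.EnumSet;
import java.util.Set;

// Simplified sketch: which pool types get the over-provisioning factor applied.
// As the commit notes, ideally this would come from a global setting rather
// than being hard-coded.
class OverProvisioningSketch {
    enum StoragePoolType { NetworkFilesystem, VMFS, PreSetup, Iscsi, LVM }

    static final Set<StoragePoolType> SUPPORTS_OVER_PROVISIONING =
            EnumSet.of(StoragePoolType.NetworkFilesystem,
                       StoragePoolType.VMFS,
                       StoragePoolType.PreSetup);   // added by this change

    static boolean canApplyOverProvisioning(StoragePoolType type) {
        return SUPPORTS_OVER_PROVISIONING.contains(type);
    }
}
```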
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests
An NPE is seen as the VM destroy and storage cleanup threads try to remove the same root volume. The fix is to handle
only non-root volumes in the storage cleanup thread; root volumes will be handled as part of VM destroy.
* pr/1825:
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9574: Redesign storage views
## Part 1: Redesign storage tags
### Actual behavior
Primary storage tags are being saved as an entry on `storage_pool_details` with:
* name = TAG_NAME
* value = "true"
When a boolean property is defined in `storage_pool_details` and has value = "true", it is displayed as a tag.


### Goal
Redesign `Storage Tags` for the Primary Storage view, to list only tags, as is done for Host Tags (Hosts view).
## Part 2: Remove details from listImageStores API call response and UI
### Description
In the Secondary Storage view we propose removing the `Details` field, as the `Setting` tab lists details for a given image store. We also remove details from the response of the `listImageStores` API method.
* pr/1747:
CLOUDSTACK-9574: Redesign storage tags and remove details from listImageStores response and UI
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
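For context on Part 1 above, a hedged sketch of the current representation being redesigned (class and method names are illustrative, not the actual DAO API): any `storage_pool_details` row whose value parses as true is surfaced as a tag, which is what the change replaces with a dedicated tags listing.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a storage_pool_details row (name/value pair).
class PoolDetailRow {
    final String name;
    final String value;
    PoolDetailRow(String name, String value) { this.name = name; this.value = value; }
}

class StorageTagSketch {
    // Current behaviour: a detail with value "true" is displayed as a tag.
    static List<String> tagsFromDetails(List<PoolDetailRow> details) {
        List<String> tags = new ArrayList<>();
        for (PoolDetailRow d : details) {
            if (Boolean.parseBoolean(d.value)) {
                tags.add(d.name);
            }
        }
        return tags;
    }
}
```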
This issue occurs when a volume in the Ready state is moved across storage
pools.
While checking whether the storage pool has enough space, the code only
considers the size of volumes that are not in the Ready state. This is fine if
the volume to be attached to a VM is already in the same storage pool. But if
the volume is in another storage pool and has to be moved to the VM's storage
pool, its size must be included in the space check. The fix is to include the
volume when computing the asking size if it is not in the Ready state or if it
is on a different storage pool.
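A minimal sketch of the corrected space check, using simplified stand-in types rather than the actual storagePoolHasEnoughSpace code: a volume contributes to the asking size when it is not Ready, or when it is Ready but currently lives on a different pool.

```java
import java.util.List;

class AskingSizeSketch {
    enum VolumeState { Ready, Allocated, Uploaded }

    // Simplified stand-in for a volume: its state, current pool id (null if none), and size.
    record Vol(VolumeState state, Long poolId, long sizeBytes) {}

    /** Bytes that must still fit on the destination pool with the given id. */
    static long askingSize(List<Vol> volumesToAttach, long destinationPoolId) {
        long asking = 0;
        for (Vol v : volumesToAttach) {
            boolean alreadyOnDestination = v.state() == VolumeState.Ready
                    && v.poolId() != null
                    && v.poolId() == destinationPoolId;
            if (!alreadyOnDestination) {
                asking += v.sizeBytes();   // not Ready, or Ready on another pool: count it
            }
        }
        return asking;
    }
}
```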
* 4.6:
CLOUDSTACK-9022: move storage.cleanup related global configurations to StorageManager
CLOUDSTACK-9022: keep Destroyed volumes for sometime
Conflicts:
server/src/com/cloud/storage/StorageManagerImpl.java
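A hedged sketch of the "keep Destroyed volumes for sometime" behaviour from the CLOUDSTACK-9022 commits above (the field and setting names are illustrative, not copied from StorageManager): a destroyed volume becomes eligible for expunging by the cleanup thread only after it has been in that state longer than the configured cleanup delay.

```java
import java.util.Date;

class CleanupDelaySketch {
    // Illustrative stand-in for a storage cleanup delay setting (seconds).
    static long cleanupDelaySeconds = 86400;

    /** A destroyed volume is expunged only after the delay has passed. */
    static boolean eligibleForCleanup(Date destroyedAt, Date now) {
        long ageSeconds = (now.getTime() - destroyedAt.getTime()) / 1000L;
        return ageSeconds >= cleanupDelaySeconds;
    }
}
```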
The S3 implementation is far from finished; this commit focuses on the basics.
- Upgrade AWS SDK to latest version.
- Rewrite S3 Template downloader.
- Rewrite S3Utils utility class.
- Improve addImageStoreS3 API command.
- Split various classes for convenience.
- Various minor improvements and code optimizations.
A side effect of the new AWS SDK is that it uses the V4 signature by default. Therefore I added an option to specify the signer, so it stays compatible with previous versions.
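As a hedged example of the signer option mentioned above (the method wiring and parameter handling are illustrative, not the actual addImageStoreS3 code): with the AWS SDK for Java, the signature version can be forced through ClientConfiguration.setSignerOverride, letting the client fall back to the older S3 (V2) signer for object stores that do not accept V4 signatures.

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;

class S3SignerSketch {
    static AmazonS3Client buildClient(String accessKey, String secretKey,
                                      String endpoint, boolean useV2Signer) {
        ClientConfiguration cfg = new ClientConfiguration();
        if (useV2Signer) {
            // Older S3-compatible endpoints may not understand V4 signatures.
            cfg.setSignerOverride("S3SignerType");
        }
        AmazonS3Client client =
                new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey), cfg);
        client.setEndpoint(endpoint);
        return client;
    }
}
```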
This reverts commit cd7218e241a8ac93df7a73f938320487aa526de6, reversing
changes made to f5a7395cc2ec37364a2e210eac60720e9b327451.
Reason for Revert:
noredist build failed with the below error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project cloud-plugin-hypervisor-vmware: Compilation failure
[ERROR] /home/jenkins/acs/workspace/build-master-noredist/plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java:[484,12] error: non-static variable logger cannot be referenced from a static context
[ERROR] -> [Help 1]
Even the normal build is broken, as reported by @koushik-das on the dev list:
http://markmail.org/message/nngimssuzkj5gpbz
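For reference, a minimal sketch of the kind of compile error behind this revert (class and method names are illustrative): an instance `logger` field referenced from a static method does not compile; the static code has to use a static logger or an instance reference.

```java
import org.apache.log4j.Logger;

class LoggerScopeSketch {
    private Logger logger = Logger.getLogger(LoggerScopeSketch.class);            // instance field
    private static final Logger LOG = Logger.getLogger(LoggerScopeSketch.class);  // static field

    static void staticHelper() {
        // logger.info("...");   // does not compile: non-static variable logger
        //                       // cannot be referenced from a static context
        LOG.info("use a static logger (or an instance reference) from static code");
    }
}
```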
... of new volumes. The following changes are implemented:
1. Disable or enable a pool with the updateStoragePool API; a new 'enabled' parameter is added for this.
2. When a pool is disabled, its state is updated to 'Disabled' in the db; on enabling, it is updated back to 'Up'. An alert is raised when a pool is disabled or enabled.
3. Other storage providers are updated to also honour the disabled state.
4. A disabled pool is skipped by allocators for provisioning of new volumes (see the sketch below).
5. Since the allocators skip a disabled pool when provisioning volumes, such a pool is also not listed as a destination for volume migration.
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disabling+Storage+Pool+for+Provisioning
This closes #257
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
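A minimal sketch of point 4 above, using simplified stand-in types rather than the real allocator classes: pools whose status is not Up (e.g. Disabled or in maintenance) are filtered out before any capacity checks.

```java
import java.util.List;
import java.util.stream.Collectors;

class DisabledPoolSketch {
    enum PoolStatus { Up, Disabled, Maintenance }

    record Pool(long id, PoolStatus status) {}

    /** Allocators consider only pools that are Up; Disabled pools are skipped. */
    static List<Pool> allocatablePools(List<Pool> candidates) {
        return candidates.stream()
                .filter(p -> p.status() == PoolStatus.Up)
                .collect(Collectors.toList());
    }
}
```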
- Backend should update if the state was disabled and has now changed
- UI's fetch latest does not actually fetch the latest
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 985a61652eb5dc97503c002e9fc3c3a7ca39b70c)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Steps to reproduce if you have this issue:
- Create a VM's volume snapshot
- Remove the VM's template and mark it as removed (with a timestamp) in the DB
- Restart the mgmt server and create a volume out of the snapshot; you should get an NPE
Fix: In `storagePoolHasEnoughSpace`, we were only searching for the template of
a VM volume's snapshot by id, without including removed templates. This is a
corner case: the NPE hits when that template has been marked removed, so we
should search including removed templates.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit f189c105d8dde5491697b171b969e757f8f15858)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
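A hedged sketch of the fix described above (the DAO call shape is assumed from CloudStack's generic DAO conventions, not copied from the patch): the template behind the snapshot is looked up including removed rows, and its size is used only when the lookup succeeds, so a template deleted after the snapshot was taken no longer causes an NPE.

```java
class RemovedTemplateSketch {
    // Simplified stand-in for a template row; 'removed' mirrors the DB removed timestamp.
    record TemplateRow(long id, Long sizeBytes, boolean removed) {}

    interface TemplateLookup {
        TemplateRow findByIdIncludingRemoved(long id);   // assumed DAO-style method
    }

    /** Size the snapshot's source template contributes to the space check, or 0 if unknown. */
    static long templateSizeForSpaceCheck(TemplateLookup dao, long templateId) {
        TemplateRow tmpl = dao.findByIdIncludingRemoved(templateId);  // include removed rows
        if (tmpl == null || tmpl.sizeBytes() == null) {
            return 0L;   // previously a removed template came back as null and caused the NPE
        }
        return tmpl.sizeBytes();
    }
}
```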