Support for local data disk. Currently the enable/disable config is at the zone level; in subsequent check-ins it can be made more granular.
The following changes are made:
- The create disk offering API now takes an extra parameter to denote the storage type (local or shared). This is similar to the storage type in a service offering.
- Create/delete of data volume on local storage
- Attach/detach for local data volumes. Re-attach is allowed as long as the VM host and the data volume's storage pool host are the same.
- Migration of a VM instance is not supported if it uses local root or data volumes.
- Volume migration is not supported for local volumes.
- Zone level config to enable/disable local storage usage for service and disk offerings.
- Local storage gets discovered when a host is added/reconnected if the zone-level config is enabled. When disabled, existing local storage pools are not removed, but new local storage is not added.
- The deploy VM command validates service and disk offerings against the local storage config (see the sketch after this list).
- Upgrade uses the global config 'use.local.storage' to set the zone level config for local storage.
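A minimal, self-contained sketch of that deploy-time validation, assuming plain boolean flags for the zone config and the offerings' storage type; the class and method names are illustrative, not the actual CloudStack code:

    // Illustrative sketch of the zone-level local storage check at deploy time.
    // Names are assumptions, not the actual CloudStack classes.
    public class LocalStorageOfferingValidator {

        /** Thrown when an offering conflicts with the zone's local storage config. */
        public static class InvalidOfferingException extends RuntimeException {
            public InvalidOfferingException(String msg) { super(msg); }
        }

        /**
         * @param zoneLocalStorageEnabled  zone-level enable/disable flag
         * @param serviceOfferingUsesLocal storage type of the service (root disk) offering
         * @param diskOfferingUsesLocal    storage type of the data disk offering, if any
         */
        public void validate(boolean zoneLocalStorageEnabled,
                             boolean serviceOfferingUsesLocal,
                             boolean diskOfferingUsesLocal) {
            if (zoneLocalStorageEnabled) {
                return; // local storage allowed, nothing to check
            }
            if (serviceOfferingUsesLocal) {
                throw new InvalidOfferingException(
                    "Zone is not configured to use local storage for the service offering");
            }
            if (diskOfferingUsesLocal) {
                throw new InvalidOfferingException(
                    "Zone is not configured to use local storage for the disk offering");
            }
        }
    }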
(cherry picked from commit 62710aed37606168012a0ed255a876c8e7954010)
Verified on XS 6.0.2
Test scenario (an API call sequence sketch follows the list):
- Created 2 shared primary storage pools
- Created data volume using shared disk offering
- Attached it to a running VM (created in one storage pool)
- Detached it (now it is in READY state)
- Created a new VM in stopped state (using deployVirtualMachine API with startVm=false)
- Attached the data volume to this new VM
- Started the new VM (the migrate-volume scenario got hit when the planner assigned the other shared pool)
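A rough sketch of that call sequence against the unauthenticated integration port; the host, port and all IDs are placeholders, and response parsing, error handling and API signing are omitted:

    // Rough sketch of the test's API sequence. Host and IDs are placeholders.
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MigrateVolumeScenario {

        private static final String API = "http://mgmt-server:8096/client/api?command=";

        private static void call(String commandAndParams) throws Exception {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(API + commandAndParams).openConnection();
            System.out.println(commandAndParams + " -> HTTP " + conn.getResponseCode());
            conn.disconnect();
        }

        public static void main(String[] args) throws Exception {
            // create a data volume from a shared disk offering
            call("createVolume&name=datavol&zoneid=ZONE_ID&diskofferingid=SHARED_OFFERING_ID");
            // attach it to a running VM, then detach it (volume goes to Ready)
            call("attachVolume&id=VOLUME_ID&virtualmachineid=RUNNING_VM_ID");
            call("detachVolume&id=VOLUME_ID");
            // deploy a second VM in the stopped state
            call("deployVirtualMachine&zoneid=ZONE_ID&templateid=TEMPLATE_ID"
                    + "&serviceofferingid=SERVICE_OFFERING_ID&startvm=false");
            // attach the detached volume to the stopped VM, then start it;
            // the planner may pick the other shared pool and trigger volume migration
            call("attachVolume&id=VOLUME_ID&virtualmachineid=NEW_VM_ID");
            call("startVirtualMachine&id=NEW_VM_ID");
        }
    }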
This patch adds RBD (RADOS Block Device) support for primary storage in combination with KVM.
To get this patch working you need:
- libvirt-java 0.4.8
- libvirt with RBD storage pool support (>0.9.13)
- Qemu with RBD support (>0.14)
The primary storage does not support all the functions of CloudStack yet; for example, snapshotting is disabled
because backing up an RBD snapshot is not possible in the way CloudStack wants to do it.
Creating templates from RBD volumes works well; creating a VM from a template, however, is still hit-and-miss.
NFS primary storage is also still required; you are not able to run your system VMs from RBD, so they will need
to run on NFS.
Other than these points, you can run instances with RBD-backed disks.
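For illustration, a hedged libvirt-java (0.4.8) sketch of defining and starting an RBD-backed storage pool, roughly the kind of pool this patch relies on libvirt to manage; the pool name, monitor address and secret UUID are placeholders, and this is not the actual CloudStack KVM agent code:

    // Sketch: define and start an RBD storage pool via libvirt-java.
    // Ceph pool name, monitor host and secret UUID are placeholders.
    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class RbdPoolExample {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");

            String poolXml =
                  "<pool type='rbd'>"
                + "  <name>cloudstack-rbd</name>"
                + "  <source>"
                + "    <name>rbd</name>"
                + "    <host name='ceph-mon.example.com' port='6789'/>"
                + "    <auth username='admin' type='ceph'>"
                + "      <secret uuid='00000000-0000-0000-0000-000000000000'/>"
                + "    </auth>"
                + "  </source>"
                + "</pool>";

            // Define the pool persistently, then start it.
            StoragePool pool = conn.storagePoolDefineXML(poolXml, 0);
            pool.create(0);
            System.out.println("RBD pool active: " + pool.isActive());
        }
    }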
Unallocated space is insufficient on primary storage
Check the availability of unallocated primary storage space during the
planning stage for the multiple-volume VM creation scenario.
modification in StorageManagerImpl.java and StorageManager.java:
add a new method storagePoolHasEnoughSpace(List<Volume>, StoragePool) that
checks whether a storage pool has enough space for all requested volumes (sketched below)
modification in FirstFitPlanner.findPotentialDeploymentResources:
handle the multiple-volume case, keep track of volumes allocated to each pool,
and call storagePoolHasEnoughSpace to check space availability
modification in AbstractStoragePoolAllocator.java:
extract the capacity computation logic into a new method in
StorageManagerImpl
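A simplified, self-contained sketch of that check; the stand-in Volume and StoragePool types and their capacity fields are assumptions, and the real method in StorageManagerImpl is more involved:

    // Simplified sketch of the unallocated-space check for a set of requested volumes.
    import java.util.List;

    public class PoolSpaceCheck {

        /** Minimal stand-in for a requested volume. */
        public static class Volume {
            final long sizeInBytes;
            public Volume(long sizeInBytes) { this.sizeInBytes = sizeInBytes; }
        }

        /** Minimal stand-in for a primary storage pool. */
        public static class StoragePool {
            final long capacityBytes;
            final long allocatedBytes; // space already allocated to existing volumes
            public StoragePool(long capacityBytes, long allocatedBytes) {
                this.capacityBytes = capacityBytes;
                this.allocatedBytes = allocatedBytes;
            }
        }

        /**
         * Returns true if the pool's unallocated space can hold all requested
         * volumes at once (the multiple-volume VM creation case).
         */
        public static boolean storagePoolHasEnoughSpace(List<Volume> volumes, StoragePool pool) {
            long requested = 0;
            for (Volume v : volumes) {
                requested += v.sizeInBytes;
            }
            return requested <= pool.capacityBytes - pool.allocatedBytes;
        }
    }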
RB: https://reviews.apache.org/r/6028/
Send-by: mice_xia@tcloudcomputing.com
The issue happens when the ROOT volume gets created and there is a subsequent failure in starting the VM. During retry, if the allocator assigns a different storage pool, the scenario was not handled. Now, in the case of local storage, the volume gets recreated on the newly assigned pool and the old one gets cleaned up; in the case of shared storage, the existing volume is migrated to the new storage pool.
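A small sketch of that retry decision; the types and method names are illustrative assumptions, not the actual CloudStack implementation:

    // Sketch of the retry handling described above: when the planner picks a
    // different pool on retry, recreate local volumes and migrate shared ones.
    public class RootVolumeRetryHandler {

        enum StorageType { LOCAL, SHARED }

        /** Stand-in for the volume operations the orchestrator would perform. */
        interface VolumeOps {
            void recreateVolume(long volumeId, long newPoolId); // create on new pool, clean up old
            void migrateVolume(long volumeId, long newPoolId);  // move existing volume to new pool
        }

        /**
         * Called when a VM start is retried and the allocator assigned a pool
         * different from the one the ROOT volume was created on.
         */
        public void handlePoolChange(long volumeId, StorageType type,
                                     long currentPoolId, long assignedPoolId,
                                     VolumeOps ops) {
            if (currentPoolId == assignedPoolId) {
                return; // same pool, nothing to do
            }
            if (type == StorageType.LOCAL) {
                ops.recreateVolume(volumeId, assignedPoolId);
            } else {
                ops.migrateVolume(volumeId, assignedPoolId);
            }
        }
    }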
The uploadVolume API is now async, following the guidance that all new APIs added in 3.0.x need to be async. However, the eventual success/failure won't be available through queryAsyncJobResult, which reports only the initial validation success or failure. The final success or failure and the upload progress are all available through the listVolumes API.
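A rough client-side sketch of that flow against the unauthenticated integration port; the host, port, IDs and upload URL are placeholders, and API signing and response parsing are omitted:

    // Kick off the async upload, then poll listVolumes for the real state,
    // since the async job result only covers the initial validation.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class UploadVolumePoller {

        private static final String API = "http://mgmt-server:8096/client/api?command=";

        private static String get(String commandAndParams) throws Exception {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(API + commandAndParams).openConnection();
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            return body.toString();
        }

        public static void main(String[] args) throws Exception {
            // start the upload; the returned async job only reflects parameter validation
            get("uploadVolume&zoneid=ZONE_ID&name=myvol&format=QCOW2"
                    + "&url=http://example.com/myvol.qcow2");

            // poll listVolumes for the volume's state and upload progress
            for (int i = 0; i < 10; i++) {
                System.out.println(get("listVolumes&name=myvol"));
                Thread.sleep(5000);
            }
        }
    }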