Use PremiumSecondaryStorageResource for the SSVM
Issue:
For deploying VMware VMs, the cloud service on the SSVM needs to be started with
PremiumSecondaryStorageResource; a number of VMware-related commands rely on it.
Changes:
1) Include cloud-vmware.jar in systemvm.zip
2) Start the cloud service in the SSVM with PremiumSecondaryStorageResource (see the sketch below)
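As a rough illustration of change 2), the agent can instantiate the resource class named in its startup parameters via reflection. This is a hedged sketch, not the actual SSVM bootstrap code; how the class name reaches the agent is an assumption here.

// Hypothetical sketch only: load the secondary storage resource class by name.
public class ResourceLoaderSketch {
    static Object loadResource(String className) throws Exception {
        // e.g. "com.cloud.storage.resource.PremiumSecondaryStorageResource"
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        // In the SSVM the class name would come from the agent's startup properties.
        System.out.println(loadResource(args[0]).getClass().getName());
    }
}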
RB: https://reviews.apache.org/r/6320/
Send-by: mice_xia@tcloudcomputing.com
1) When account/domainId or projectId are passed in, we list:
* all account-specific networks of the account/project
* all domain-level networks of the domainId + subdomains, if the targeted network has allowSubdomainAccess = true
In other words, we list all the networks that can be used for VM deployment by the account/domainId.
If listAll is not specified in the request, account/domainId default to the account/domainId of the caller.
listAll is ignored if the call is made by a regular user.
2) When listAll is passed in by the Root admin, we list:
* all account-specific networks in the system
* all domain-specific networks in the system
3) When listAll is passed in by the Domain admin, we list:
* all account-specific networks belonging to the domain/subdomains of the domain admin
* all domain-specific networks belonging to the domain/subdomains of the domain admin
* all domain-specific networks of the parent domain that allow subdomain access
4) domainId can be passed either with or without listAll; a sketch of the combined branching follows this list. We list:
* all account-specific networks belonging to the domain
* all domain-specific networks of the domain
* all domain-specific networks of the subdomains, if isRecursive = true is passed in
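The four cases combine roughly as follows. This is an illustrative sketch with made-up types and scope strings; the real logic lives in NetworkManagerImpl and is considerably more involved.

// Illustrative sketch of the four listNetworks cases above.
import java.util.ArrayList;
import java.util.List;

public class ListNetworksSketch {
    enum Caller { REGULAR_USER, DOMAIN_ADMIN, ROOT_ADMIN }

    static List<String> networkScopes(Caller caller, Long accountId, Long domainId,
                                      boolean listAll, boolean isRecursive) {
        List<String> scopes = new ArrayList<>();
        if (caller == Caller.REGULAR_USER) {
            listAll = false; // listAll is ignored for regular users
        }
        if (listAll && caller == Caller.ROOT_ADMIN) {          // case 2
            scopes.add("all account-specific networks in the system");
            scopes.add("all domain-specific networks in the system");
        } else if (listAll && caller == Caller.DOMAIN_ADMIN) { // case 3
            scopes.add("account networks of the admin's domain + subdomains");
            scopes.add("domain networks of the admin's domain + subdomains");
            scopes.add("parent-domain networks with allowSubdomainAccess");
        } else if (domainId != null) {                         // cases 1 and 4
            scopes.add("account networks of domain " + domainId);
            scopes.add("domain networks of domain " + domainId);
            if (isRecursive) {
                scopes.add("domain networks of subdomains");
            }
        } else {
            // case 1 default: fall back to the caller's own account/domain
            scopes.add("account networks of account " + accountId);
        }
        return scopes;
    }
}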
Conflicts:
server/src/com/cloud/network/NetworkManagerImpl.java
Support for up to 16 VDIs per VM on XS 6.0 and above (16 VDIs => root + CD + 14 data volumes). Currently in CS the number of data disks that can be attached to a VM is hard-coded to 6. This setting is now configurable, having been moved to hypervisor capabilities. Although XS 6.0 and above supports up to 16 VDIs, testing on XS 6.0.2 showed that only 13 data volumes can be attached to a VM, so for XS 6.0 and 6.0.2 max_data_volumes_limit is currently set to 13 (see the sketch below).
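As a rough illustration (not the actual CloudStack code; the class and method names are assumptions), the attach path can consult the per-hypervisor limit like this:

// Hypothetical sketch: validate a data-volume attach against the
// max_data_volumes_limit stored in hypervisor capabilities.
public class DataVolumeLimitSketch {
    // Stand-in for a lookup against the hypervisor_capabilities table.
    static int maxDataVolumesLimit(String hypervisor, String version) {
        if ("XenServer".equals(hypervisor)
                && ("6.0".equals(version) || "6.0.2".equals(version))) {
            return 13; // observed limit on XS 6.0/6.0.2 despite the 16-VDI spec
        }
        return 6; // the previous hard-coded default
    }

    static void checkAttach(String hypervisor, String version, int attachedDataVolumes) {
        if (attachedDataVolumes >= maxDataVolumesLimit(hypervisor, version)) {
            throw new IllegalStateException("Maximum number of data volumes reached");
        }
    }
}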
Signed-off-by: Koushik Das <Koushik.Das@citrix.com>
73be77a4c1877ae7e3613c7562d562ad96cde7ee
I've renamed the discover package to discoverer to fix the issue. Before the
rename, my ant debug failed with:
[java] ERROR [utils.component.ComponentLocator] (main:) Unable to
load configuration for management-server from components.xml
[java] com.cloud.utils.exception.CloudRuntimeException: Unable to
find class: com.cloud.hypervisor.kvm.discoverer.KvmServerDiscoverer
RB: https://reviews.apache.org/r/6239/
Send-by: rohit.yadav@citrix.com
This patch adds RBD (RADOS Block Device) support for primary storage in combination with KVM.
To get this patch working you need:
- libvirt-java 0.4.8
- libvirt with RBD storage pool support (>= 0.9.13; see the sketch after this list)
- Qemu with RBD support (>= 0.14)
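For reference, defining an RBD storage pool through libvirt-java looks roughly like this. The monitor address, pool names, and the absence of cephx auth in the XML are illustrative assumptions, not values this patch prescribes.

// Illustrative only: define and start an RBD storage pool via libvirt-java.
import org.libvirt.Connect;
import org.libvirt.LibvirtException;
import org.libvirt.StoragePool;

public class RbdPoolSketch {
    public static void main(String[] args) throws LibvirtException {
        Connect conn = new Connect("qemu:///system");
        String poolXml =
            "<pool type='rbd'>" +
            "  <name>cloudstack-rbd</name>" +
            "  <source>" +
            "    <host name='ceph-mon.example.com' port='6789'/>" +
            "    <name>rbd</name>" + // the Ceph pool to use
            "  </source>" +
            "</pool>";
        StoragePool pool = conn.storagePoolDefineXML(poolXml, 0);
        pool.create(0); // start the pool
        System.out.println("Defined RBD pool: " + pool.getName());
    }
}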
This primary storage type does not support all CloudStack functions yet; for example, snapshotting is disabled
because backing up an RBD snapshot is not possible in the way CloudStack wants to do it.
Creating templates from RBD volumes works well; creating a VM from a template, however, is still hit-and-miss.
NFS primary storage is also still required: you are not able to run your system VMs from RBD, so they will need
to run on NFS.
Other than these points, you can run instances with RBD-backed disks.
Unallocated space is insufficient on primary storage
Check the availability of unallocated primary storage space during the
planning stage, for the multiple-volume VM creation scenario.
Modification in StorageManagerImpl.java and StorageManager.java:
add a new method storagePoolHasEnoughSpace(List<Volume>, StoragePool) that
checks whether the storage pool has enough space for all requested volumes
(a simplified sketch follows)
Modification in FirstFitPlanner.findPotentialDeploymentResources:
handle the multiple-volume case, keep track of volumes allocated to pools,
and call storagePoolHasEnoughSpace to check space availability
Modification in AbstractStoragePoolAllocator.java:
extract the capacity computation logic into a new method in
StorageManagerImpl
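A minimal sketch of the new check, with stand-in types. The real method works against CloudStack's Volume and StoragePool objects and also accounts for over-provisioning and already-reserved capacity via the logic extracted from AbstractStoragePoolAllocator.

// Hypothetical, simplified sketch of storagePoolHasEnoughSpace.
import java.util.List;

public class StorageSpaceCheckSketch {
    record Volume(long sizeBytes) {}                               // stand-in for CloudStack's Volume
    record StoragePool(long capacityBytes, long allocatedBytes) {} // stand-in for StoragePool

    static boolean storagePoolHasEnoughSpace(List<Volume> volumes, StoragePool pool) {
        long requested = 0;
        for (Volume v : volumes) {
            requested += v.sizeBytes(); // sum across all volumes requested for the VM
        }
        long free = pool.capacityBytes() - pool.allocatedBytes();
        return requested <= free;
    }
}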
RB: https://reviews.apache.org/r/6028/
Send-by: mice_xia@tcloudcomputing.com
The issue happens when a ROOT volume gets created and there is a subsequent failure in starting the VM. If, during the retry, the allocator assigns a different storage pool, that scenario was not handled. Now, in the case of local storage, the volume gets recreated on the newly assigned pool and the old one gets cleaned up; in the case of shared storage, the existing volume is migrated to the new storage pool. A rough sketch of the branching follows.
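Roughly (illustrative names only, not the actual CloudStack code):

// Hypothetical sketch of the retry handling described above.
public class RootVolumeRetrySketch {
    interface Pool { boolean isLocal(); }

    // Called when the allocator picked a pool different from the volume's current pool.
    static void handlePoolChange(Pool oldPool, Pool newPool) {
        if (oldPool.isLocal()) {
            recreateVolumeOn(newPool);  // local storage: recreate on the new pool...
            cleanUpVolumeOn(oldPool);   // ...and clean up the stale copy
        } else {
            migrateVolume(oldPool, newPool); // shared storage: migrate the existing volume
        }
    }

    static void recreateVolumeOn(Pool p) { /* illustrative stub */ }
    static void cleanUpVolumeOn(Pool p) { /* illustrative stub */ }
    static void migrateVolume(Pool from, Pool to) { /* illustrative stub */ }
}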
When a host is not considered for deployment because HVM is disabled on it, call that out in the logs for debugging (an illustrative example follows).
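For instance (field, logger, and wording are hypothetical stand-ins):

// Illustrative only: call out HVM-disabled hosts while scanning deployment candidates.
import java.util.List;
import java.util.logging.Logger;

public class HvmCheckSketch {
    record Host(long id, boolean hvmEnabled) {} // stand-in for a CloudStack host

    private static final Logger LOG = Logger.getLogger(HvmCheckSketch.class.getName());

    static void scan(List<Host> candidates) {
        for (Host host : candidates) {
            if (!host.hvmEnabled()) {
                // The new debug line the commit adds (wording is illustrative).
                LOG.fine("Host " + host.id() + " not considered for deployment: HVM is disabled");
                continue;
            }
            // ... consider host for deployment ...
        }
    }
}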
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>