1. Modified create-schema.sql to set the version to 4.1.0 instead of 4.0.0
2. Removed schema-40to41.sql and moved its content to schema-40to410.sql
3. Added schema-40to410.sql to Upgrade40to41.java (see the sketch below)
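For context, a minimal sketch of how the upgrade class can point at the renamed script. The method set follows the DbUpgrade-style pattern used by the other Upgrade* classes, so treat the exact signatures as illustrative rather than verbatim:

    import java.io.File;
    import java.sql.Connection;

    import com.cloud.utils.exception.CloudRuntimeException;
    import com.cloud.utils.script.Script;

    public class Upgrade40to41 implements DbUpgrade {

        @Override
        public String[] getUpgradableVersionRange() {
            return new String[] {"4.0.0", "4.1.0"};
        }

        @Override
        public String getUpgradedVersion() {
            return "4.1.0";
        }

        @Override
        public boolean supportsRollingUpgrade() {
            return false;
        }

        @Override
        public File[] getPrepareScripts() {
            // load the renamed schema-40to410.sql instead of the removed schema-40to41.sql
            String script = Script.findScript("", "db/schema-40to410.sql");
            if (script == null) {
                throw new CloudRuntimeException("Unable to find db/schema-40to410.sql");
            }
            return new File[] {new File(script)};
        }

        @Override
        public void performDataMigration(Connection conn) {
            // no extra data migration in this sketch; the SQL script does the work
        }

        @Override
        public File[] getCleanupScripts() {
            return new File[0];
        }
    }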
Conflicts:
server/src/com/cloud/upgrade/dao/Upgrade40to41.java
setup/db/create-schema.sql
setup/db/db/schema-40to41.sql
setup/db/db/schema-40to410.sql
Signed-off-by: Min Chen <min.chen@citrix.com>
IdentityProxy is now only referenced in CreateCmdResponse, which involves
some async job logic changes. Since it does not impact list performance,
it is left there for now.
Signed-off-by: Min Chen <min.chen@citrix.com>
- introduces a Capability in the network offering which decides whether,
when the EIP service is enabled, a public IP should be assigned to the VM
by default or not (see the sketch after this list)
- the default network offering with EIP/ELB service will still work with the old EIP
semantics, i.e. assign a public IP to each VM on start
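A hypothetical illustration of how the capability could be consulted at deploy time; the capability key "associatePublicIP" and the helper below are assumptions for illustration, not the exact CloudStack code:

    import java.util.Map;

    public class EipDefaultIpCheck {
        // Returns whether a public IP should be assigned to the VM on start.
        // A missing capability falls back to the old EIP semantics (assign by default).
        static boolean assignPublicIpByDefault(Map<String, String> staticNatCapabilities) {
            String cap = staticNatCapabilities.get("associatePublicIP"); // assumed key
            return cap == null || Boolean.parseBoolean(cap);
        }

        public static void main(String[] args) {
            System.out.println(assignPublicIpByDefault(Map.of()));                             // true  -> old behaviour
            System.out.println(assignPublicIpByDefault(Map.of("associatePublicIP", "false"))); // false -> no public IP on start
        }
    }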
This is part 1 of the list API refactoring. Commands covered:
listVmsCmd, listRoutersCmd. Responses covered:
UserVmResponse, DomainRouterResponse. DB views created:
user_vm_view, domain_router_view.
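To show the idea behind the view-backed list path, here is a small JDBC sketch that reads flat rows from user_vm_view instead of joining the underlying tables per VM; the selected columns, credentials and account id are example values:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ListVmsViaView {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/cloud", "cloud", "cloud");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, name, state FROM user_vm_view WHERE account_id = ?")) {
                ps.setLong(1, 2L); // example account id
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // each row already carries what UserVmResponse needs, no extra joins per VM
                        System.out.printf("%d %s %s%n",
                                rs.getLong("id"), rs.getString("name"), rs.getString("state"));
                    }
                }
            }
        }
    }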
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
This commit merges the nicira-l3support branch with master. This
effectively adds Nicira NVP L3 support to master. The NiciraNVP Provider
can support the following services with this modification: Connectivity,
SourceNat, StaticNat and PortForwarding
Testing done:
Create, Delete network offerings with Nicira Element
Use the GUI to add, modify and remove the Nicira Element and Provider
Provision, deprovision SourceNat networks
Provision, deprovision Portforwarding and StaticNat rules
Tested with Nicira NVP release 2.1.0, 2.2.0 and 2.2.1 (2.2.x recommended)
Support for local data disks. Currently the enable/disable config is at the zone level; in subsequent check-ins it can be made more granular.
Following changes are made:
- Create disk offering API now takes an extra parameter to denote storage type (local or shared). This is similar to storage type in service offering.
- Create/delete of data volume on local storage
- Attach/detach for local data volumes. Re-attach is allowed as long as the VM host and the data volume's storage pool host are the same.
- Migration of a VM instance is not supported if it uses local root or data volumes.
- Volume migration is not supported for local volumes.
- Zone-level config to enable/disable local storage usage for service and disk offerings.
- Local storage is discovered when a host is added/reconnected if the zone-level config is enabled. When disabled, existing local storage pools are not removed, but no new local storage is added.
- Deploy VM command validates the service and disk offerings against the local storage config (a small sketch of this check follows the list).
- Upgrade uses the global config 'use.local.storage' to set the zone-level config for local storage.
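A minimal sketch of the deploy-time check mentioned above; the boolean flags are stand-ins for the zone-level config and the offerings' storage type, not the actual CloudStack fields:

    public class LocalStorageOfferingCheck {
        // Rejects local-storage offerings when the zone has local storage disabled.
        static void validate(boolean zoneLocalStorageEnabled,
                             boolean serviceOfferingUsesLocal,
                             boolean diskOfferingUsesLocal) {
            if (!zoneLocalStorageEnabled && (serviceOfferingUsesLocal || diskOfferingUsesLocal)) {
                throw new IllegalArgumentException(
                        "Zone is not configured to use local storage for service/disk offerings");
            }
        }

        public static void main(String[] args) {
            validate(true, true, false);          // ok: zone allows local storage
            try {
                validate(false, true, false);     // rejected: local service offering, zone config disabled
            } catch (IllegalArgumentException e) {
                System.out.println("Rejected: " + e.getMessage());
            }
        }
    }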
(cherry picked from commit 62710aed37606168012a0ed255a876c8e7954010)
Support for up to 16 VDIs per VM on XS 6.0 and above (16 VDIs => root + cd + 14 data volumes).
Currently the number of data disks that can be attached to a VM is hard-coded to 6 in CS. This setting is now configurable by moving it to hypervisor capabilities.
Although XS 6.0 and above supports up to 16 VDIs, testing on XS 6.0.2 found that only 13 data volumes can be attached to a VM, so for XS 6.0 and 6.0.2 max_data_volumes_limit is currently set to 13.
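A small sketch of the resulting attach-time check; the limit is assumed to come from the hypervisor_capabilities entry (max_data_volumes_limit) rather than the old hard-coded 6:

    public class MaxDataVolumesCheck {
        // Throws if attaching one more data volume would exceed the hypervisor's limit.
        static void checkAttach(int attachedDataVolumes, int maxDataVolumesLimit) {
            if (attachedDataVolumes + 1 > maxDataVolumesLimit) {
                throw new IllegalStateException("VM already has the maximum number of data volumes ("
                        + maxDataVolumesLimit + ") attached");
            }
        }

        public static void main(String[] args) {
            checkAttach(5, 6);    // would pass under the old hard-coded limit as well
            checkAttach(12, 13);  // passes with the XS 6.0/6.0.2 limit of 13
        }
    }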
Signed-off-by: Koushik Das <Koushik.Das@citrix.com>
This patch adds RBD (RADOS Block Device) support for primary storage in combination with KVM.
To get this patch working you need:
- libvirt-java 0.4.8
- libvirt with RBD storage pool support (>0.9.13)
- Qemu with RBD support (>0.14)
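For reference, a minimal libvirt-java sketch that creates an RBD storage pool; the monitor host and pool names are example values, and a cephx-enabled cluster would additionally need an <auth>/<secret> element in the pool XML:

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class RbdPoolExample {
        public static void main(String[] args) throws Exception {
            // Example RBD pool definition; 'rbd' is the Ceph pool to use.
            String poolXml =
                "<pool type='rbd'>" +
                "  <name>cloudstack-primary</name>" +
                "  <source>" +
                "    <name>rbd</name>" +
                "    <host name='ceph-mon.example.com' port='6789'/>" +
                "  </source>" +
                "</pool>";

            Connect conn = new Connect("qemu:///system");
            StoragePool pool = conn.storagePoolCreateXML(poolXml, 0); // transient pool, active immediately
            System.out.println("Created RBD pool: " + pool.getName());
            conn.close();
        }
    }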
The primary storage does not support all CloudStack functions yet; for example, snapshotting is disabled
because backing up an RBD snapshot is not possible in the way CloudStack wants to do it.
Creating templates from RBD volumes works well; creating a VM from a template, however, is still hit-and-miss.
NFS primary storage is also still required: you are not able to run your System VMs from RBD; they will need
to run on NFS.
Other than these points, you can run instances with RBD-backed disks.