Since we will introduce a way to specify each service provider in the network
offering, it's better to model the redundant virtual router as a separate
service provider.
The isRedundant() flag in the network offering will also be removed. The
redundant virtual router temporarily won't work from now on, until we're able
to add different network elements/service providers to network_offering.
In the past, NetworkElement covered almost all the functionality that e.g. the
virtual router can provide: firewall, source NAT, static NAT, password, VPN...
So anyone wanting to implement NetworkElement had to implement the methods
specific to each of these services, even if the element didn't support them.
Also, to find e.g. a FirewallServiceProvider, we had to go through all the
current network service providers and call a method on each to find out whether
it supports that service. That is neither an elegant nor a scalable way to do it.
As the first step, this patch separates each ServiceProvider interface from
NetworkElement (some interfaces are already outside NetworkElement, so this
patch slightly modifies them too); only a class that implements the
corresponding interface has the ability to provide those services. A sketch of
the intended shape follows.
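A minimal sketch of that shape, assuming CloudStack's existing Network,
FirewallRule and ResourceUnavailableException types; the interface and method
names here are illustrative and may differ from the actual patch:

    // Illustrative: a per-service provider interface split out of NetworkElement.
    // Only elements that really support firewall rules implement it.
    public interface FirewallServiceProvider extends NetworkElement {
        boolean applyFirewallRules(Network network, List<? extends FirewallRule> rules)
                throws ResourceUnavailableException;
    }

    // Finding a provider becomes a type check instead of asking every element
    // whether it supports the service:
    for (NetworkElement element : networkElements) {
        if (element instanceof FirewallServiceProvider) {
            ((FirewallServiceProvider) element).applyFirewallRules(network, rules);
        }
    }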
Changes:
- VirtualMachineMgr adds the constraint that if the root volume is already in the READY state, we provide the clusterId in the plan passed to the deployment planner. The planner then searches for resources only under that cluster.
- If no deployment destination can be found, deploying the VM fails.
- Fixed this so that, in case the root volume is recreatable, we call the planner again with the cluster constraint removed. The planner will then search for resources in other clusters (see the sketch after this list).
- Works for system VMs (SSVM, console proxy, virtual routers).
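A minimal sketch of the retry described above, assuming the DeploymentPlanner.plan(profile, plan, avoids) contract; the DataCenterDeployment constructor arguments and the surrounding variable names are simplified:

    // First pass: pin the plan to the cluster that holds the READY root volume.
    DataCenterDeployment plan = new DataCenterDeployment(dcId, podId, clusterId, null, null);
    DeployDestination dest = planner.plan(vmProfile, plan, avoids);

    // If nothing was found and the root volume can be recreated, retry without
    // the cluster constraint so other clusters are considered.
    if (dest == null && rootVolumeIsRecreatable) {
        plan = new DataCenterDeployment(dcId, null, null, null, null);
        dest = planner.plan(vmProfile, plan, avoids);
    }

    if (dest == null) {
        throw new InsufficientServerCapacityException("Unable to find a deployment for " + vmProfile,
                DataCenter.class, dcId);
    }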
This reverts commit 42ab3c94c210d5a29289a5dfd0e44ae99c427f8b.
The commit may not fit our new network-as-a-service framework, because we will
model the single router and the redundant router as two different service
providers, so a change of network offering should clean up the old network and
then set up a new one. Making a single router work as a redundant router later
makes no sense under that model.
Then every router would have one guest IP: the gateway IP is used if the router
is not redundant; otherwise the guest IP is used for the guest network (a
minimal sketch follows).
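A minimal sketch of that choice, assuming a Network accessor for the gateway
and a hypothetical helper for allocating a guest IP (the real code path may
differ):

    // Non-redundant router: reuse the network gateway as the router's guest IP.
    // Redundant routers: each gets its own guest IP, since the gateway address
    // is shared between the pair.
    String routerGuestIp;
    if (isRedundantRouterProvider) {                                      // assumed flag
        routerGuestIp = networkMgr.acquireGuestIpAddress(network, null);  // assumed helper
    } else {
        routerGuestIp = network.getGateway();
    }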
Changes:
- We were ordering clusters based on the capacity of the first-fit host found in each cluster. Due to this, there were cases where we deployed VMs to one cluster instead of balancing across clusters.
- Now we order the list of clusters by aggregate capacity and choose the ones that have enough capacity for the required VM, in that order.
- This should balance the load between clusters instead of overloading a single one (a minimal sketch follows the conflict notes below).
Conflicts:
server/src/com/cloud/capacity/dao/CapacityDao.java
server/src/com/cloud/capacity/dao/CapacityDaoImpl.java
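A minimal sketch of the new ordering, with the DAO call and capacity fields
assumed (the real implementation lives in the CapacityDao/CapacityDaoImpl files
touched above):

    // Sum free capacity per cluster, then visit clusters with the most free
    // aggregate capacity first instead of ordering by the first-fit host.
    final Map<Long, Long> freeByCluster = new HashMap<Long, Long>();
    for (CapacityVO c : capacityDao.listByZone(dcId)) {            // assumed DAO call
        long free = c.getTotalCapacity() - c.getUsedCapacity();
        Long prev = freeByCluster.get(c.getClusterId());
        freeByCluster.put(c.getClusterId(), prev == null ? free : prev + free);
    }

    List<Long> orderedClusterIds = new ArrayList<Long>(freeByCluster.keySet());
    Collections.sort(orderedClusterIds, new Comparator<Long>() {
        @Override
        public int compare(Long a, Long b) {
            // descending by free aggregate capacity
            return freeByCluster.get(b).compareTo(freeByCluster.get(a));
        }
    });
    // Walk orderedClusterIds, skipping clusters whose free capacity cannot fit
    // the requested CPU/RAM.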
e.g. command=configuresimulator&name=SecurityIngressRulesCmd&zoneid=1&value=enabled:true|timeout=30 means: enable the command SecurityIngressRulesCmd for zone 1, and wait for 30 seconds.
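A small sketch of how such a value string might be interpreted (the real
simulator parsing code may use different keys or separators):

    // Parse "enabled:true|timeout=30" into key/value options.
    Map<String, String> options = new HashMap<String, String>();
    for (String token : "enabled:true|timeout=30".split("\\|")) {
        String[] kv = token.split("[:=]", 2);
        if (kv.length == 2) {
            options.put(kv[0], kv[1]);
        }
    }
    boolean enabled = Boolean.parseBoolean(options.get("enabled"));  // true
    int timeoutSeconds = Integer.parseInt(options.get("timeout"));   // 30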
Changes:
- Added a new API 'migrateSystemVm', backed by MigrateSystemVMCmd.java, to migrate system VMs (SSVM, console proxy, domain routers (router, LB, DHCP)).
- This is an admin-only action.
- The existing API 'migratevirtualmachine' is only for user VMs.
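For example, a call in the same style as the simulator example above might look
like this (the parameter names follow the usual CloudStack conventions and are
assumed here):

    command=migrateSystemVm&virtualmachineid=<system-vm-id>&hostid=<destination-host-id>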
Otherwise they can't be deserialized.
Fix the following exception on a KVM host:
ERROR [agent.transport.Request] (Agent-Handler-2:null)
Caught problem with
[{"ClusterSyncCommand":{"_interval":60,"_skipSteps":20,"_steps":0,"_clusterId":2,"contextMap":{},"wait":0}}]
com.google.gson.JsonParseException: The JsonDeserializer
com.cloud.agent.transport.ArrayTypeAdaptor@44ed904 failed to deserialized json
object
[{"ClusterSyncCommand":{"_interval":60,"_skipSteps":20,"_steps":0,"_clusterId":2,"contextMap":{},"wait":0}}]
given the type class [Lcom.cloud.agent.api.Command;
...
Caused by: java.lang.RuntimeException: No-args constructor for class
com.cloud.agent.api.ClusterSyncCommand does not exist. Register an
InstanceCreator with Gson for this type to fix this problem.
at
com.google.gson.MappedObjectConstructor.constructWithNoArgConstructor(MappedObjectConstructor.java:64)
...
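A minimal sketch of the fix implied by the message above, with the class body
simplified (only some fields shown): Gson needs either a no-args constructor or
a registered InstanceCreator to build the object during deserialization.

    public class ClusterSyncCommand extends Command {
        int _interval;
        long _clusterId;

        protected ClusterSyncCommand() {
            // required so Gson can instantiate the command when deserializing
        }

        public ClusterSyncCommand(int interval, long clusterId) {
            _interval = interval;
            _clusterId = clusterId;
        }

        @Override
        public boolean executeInSequence() {
            return false;
        }
    }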
This bug also happens in XenServer 5.6 FP1, but this issue is hidden by bug 11564.
There are two issues here:
1. In XenServer 5.6 FP1/SP2, when a slave host is down (its NIC is unplugged) and before the master detects that the slave is down (which takes about 10 minutes), template.createClone sometimes fails, complaining that it cannot contact the slave host; it looks like XenServer tries to start the VM on this slave host. We use VM.create instead, since we can specify the host in the VM.create API, so it will not try to contact the slave.
2. In XenServer 5.6 FP1/SP2, a host tag is introduced on the VDI. However, in the HA case, VM.reset_powerstate and VM.destroy do not remove this host tag, so when CloudStack tries to start the VM on another host, XenServer complains that the VDI is already in use (or mounted RW). CloudStack now manually removes the host tag from the VDI when handling FenceCommand (a hedged sketch follows).
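A hedged sketch of the two workarounds using the XenAPI Java bindings; conn,
template, targetHost, vmName and vdi are assumed context variables, and the
"host_" sm_config key prefix is an assumption:

    // (1) Instead of template.createClone, build a VM record with its affinity
    //     pinned to a reachable host and call VM.create, so xapi does not try
    //     to involve the unreachable slave.
    VM.Record rec = template.getRecord(conn);
    rec.isATemplate = false;
    rec.nameLabel = vmName;
    rec.affinity = targetHost;
    VM vm = VM.create(conn, rec);

    // (2) In the FenceCommand path, strip the stale host key from the VDI's
    //     sm_config so the disk can be attached read-write on another host.
    for (String key : vdi.getSmConfig(conn).keySet()) {
        if (key.startsWith("host_")) {
            vdi.removeFromSmConfig(conn, key);
        }
    }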
status 11552: resolved fixed