This reverts commit cd7218e241a8ac93df7a73f938320487aa526de6, reversing
changes made to f5a7395cc2ec37364a2e210eac60720e9b327451.
Reason for Revert:
The noredist build failed with the error below:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project cloud-plugin-hypervisor-vmware: Compilation failure
[ERROR] /home/jenkins/acs/workspace/build-master-noredist/plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java:[484,12] error: non-static variable logger cannot be referenced from a static context
[ERROR] -> [Help 1]
Even the normal build is broken, as reported by @koushik-das on the dev list:
http://markmail.org/message/nngimssuzkj5gpbz
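For context, the compilation error above is the standard javac complaint when an instance field is referenced from a static method. A minimal sketch of the pattern and its usual resolution, using a plain java.util.logging logger rather than the project's actual logging setup (all names below are illustrative):

    import java.util.logging.Logger;

    public class LoggerScopeExample {
        // Instance field: only reachable through an object reference.
        private final Logger logger = Logger.getLogger(LoggerScopeExample.class.getName());

        // A static field is visible from both static and instance contexts,
        // which is the usual way to resolve the error reported by the build.
        private static final Logger STATIC_LOGGER =
                Logger.getLogger(LoggerScopeExample.class.getName());

        public static void staticHelper() {
            // logger.info("...");   // would not compile: non-static variable logger
            //                       // cannot be referenced from a static context
            STATIC_LOGGER.info("called from a static context");
        }

        public void instanceMethod() {
            logger.info("called from an instance context");
        }

        public static void main(String[] args) {
            staticHelper();
            new LoggerScopeExample().instanceMethod();
        }
    }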
This is caused by a serialization failure for the VmWorkMigrateForScale object. Replaced the DeployDestination member
in VmWorkMigrateForScale with serializable types.
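A hedged sketch of the approach: keep only serializable identifiers in the work-job payload instead of the full DeployDestination, and resolve the real objects via DAOs when the job executes. The class and field names below are illustrative, not the actual VmWorkMigrateForScale code:

    import java.io.Serializable;
    import java.util.HashMap;
    import java.util.Map;

    public class VmWorkMigrateForScaleSketch implements Serializable {
        private static final long serialVersionUID = 1L;

        // Serializable replacements for a DeployDestination-style member.
        private final Long zoneId;
        private final Long podId;
        private final Long clusterId;
        private final Long hostId;
        // Volume id -> storage pool id, all boxed primitives.
        private final Map<Long, Long> volumeToPool = new HashMap<Long, Long>();

        public VmWorkMigrateForScaleSketch(Long zoneId, Long podId, Long clusterId, Long hostId) {
            this.zoneId = zoneId;
            this.podId = podId;
            this.clusterId = clusterId;
            this.hostId = hostId;
        }

        public void addVolumeToPool(long volumeId, long poolId) {
            volumeToPool.put(volumeId, poolId);
        }

        // The job handler rebuilds the deployment destination from these ids
        // when the work item is executed.
        public Long getHostId() {
            return hostId;
        }

        public Map<Long, Long> getVolumeToPool() {
            return volumeToPool;
        }
    }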
During the ping task, while scanning and updating the status of all VMs on the host that are stuck in a transitional state
and are missing from the power report, do so only for VMs that have not been removed.
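A minimal sketch of the added guard, assuming (as CloudStack VOs typically do) that a VM record carries a removed timestamp that is null while the row is still active; all types below are placeholders:

    import java.util.Date;
    import java.util.List;
    import java.util.stream.Collectors;

    class VmRecord {
        Long id;
        String state;   // e.g. "Starting" or "Stopping": a transitional state
        Date removed;   // null while the VM has not been removed
    }

    class PingScanSketch {
        // Keep only VMs that are still present before forcing a state update.
        static List<VmRecord> filterNotRemoved(List<VmRecord> stuckVms) {
            return stuckVms.stream()
                    .filter(vm -> vm.removed == null)
                    .collect(Collectors.toList());
        }
    }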
* removed the "is redundant" flag form the addVpcRouterToGuestNetwork() method
* removed the "is redundant" flag from the removeVpcRouterFromGuestNetwork() method
* changed the path of the master.py file in the keepalived.conf.temp file
* the call to routerDao.addRouterToGuestNetwork() in the VpcRouterDeploymentDefinition is not needed. That step will be performed once the VM is created
- In addition, when restarting a VPC, the routers will have the guest network configured, if any exists.
* Pushing the POM.xml as well, to use the old Jetty for now. Could not fix the logging problem. Will replace the POM with the master version after VPC is done.
During vmsync, if the StopCommand (issued as part of a PowerOff/PowerMissing report) fails to stop the VM (since the VM is running on the hypervisor),
don't transition the VM state to "Stopped" in the CS DB. Also added a check to throw ConcurrentOperationException if the VM state is not
"Running" after the start operation.
Changes:
- When there is HA, we try to redeploy the affected VM using the regular planners, and if that fails we retry using the special planner for HA (which skips checking the disable threshold).
Because of the job framework, the InsufficientCapacityException gets masked and the special planners are not called. The job framework needs to be fixed to rethrow the correct exception.
- Also, the VM work job framework was not setting the DeploymentPlanner on the VmWorkJob, so the HA planner passed in by the HA manager was not getting used.
- Now the job framework sets the planner passed in by any caller of the VM start operation on the job (see the sketch below).
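An illustrative sketch of those two fixes under simplified placeholder types: the dispatcher unwraps and rethrows the original InsufficientCapacityException instead of masking it, and the planner chosen by the caller is recorded on the work job. None of these classes are the real framework code:

    class InsufficientCapacityException extends Exception {
    }

    class VmWorkJobSketch {
        // Planner name carried with the job so the executor uses what the caller asked for.
        private String deploymentPlanner;

        void setDeploymentPlanner(String planner) {
            deploymentPlanner = planner;
        }

        String getDeploymentPlanner() {
            return deploymentPlanner;
        }
    }

    class JobDispatchSketch {
        static void runStartJob(VmWorkJobSketch job, Runnable startOp)
                throws InsufficientCapacityException {
            try {
                startOp.run();
            } catch (RuntimeException e) {
                // Rethrow the original cause so callers (e.g. the HA manager)
                // can fall back to the special HA planner.
                if (e.getCause() instanceof InsufficientCapacityException) {
                    throw (InsufficientCapacityException) e.getCause();
                }
                throw e;
            }
        }
    }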
root cause:
when vmsync reports that a system VM is down, CCP doesn't release the VM's resources before starting it.
fix:
make sure cleanup is called for a VM when it is reported as Stopped
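A minimal sketch of the fix, assuming a hypothetical cleanupVmResources() hook; the real cleanup path is more involved:

    class SystemVmSyncSketch {
        interface ResourceCleaner {
            void cleanupVmResources(long vmId);
        }

        // Called when a power report marks the VM as Stopped: release the
        // resources held by the VM before any subsequent start.
        static void onPowerReportStopped(long vmId, ResourceCleaner cleaner) {
            cleaner.cleanupVmResources(vmId);
        }
    }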
Unnecessary exception in MS logs while removing the default NIC from a VM. The following changes were made:
1. Changed the exception from CloudRuntimeException to InvalidParameterValueException.
2. Moved the validation logic out of VirtualMachineManagerImpl into UserVMManagerImpl.
3. Handled InvalidParameterValueException from async API calls so that it is not logged as ERROR in MS logs (see the sketch below).
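A hedged sketch of changes 1 and 3, with placeholder class names around the two real exception names; the INFO-level logging simply stands in for not reporting user errors as ERROR:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    class InvalidParameterValueException extends RuntimeException {
        InvalidParameterValueException(String msg) {
            super(msg);
        }
    }

    class NicValidationSketch {
        private static final Logger LOG = Logger.getLogger(NicValidationSketch.class.getName());

        // Validation now signals a user error rather than an internal failure.
        static void validateNicRemoval(boolean isDefaultNic) {
            if (isDefaultNic) {
                throw new InvalidParameterValueException("Cannot remove the default NIC from a VM");
            }
        }

        // Async API handling: bad input from the caller is logged quietly.
        static void handleAsyncApiCall(Runnable apiCall) {
            try {
                apiCall.run();
            } catch (InvalidParameterValueException e) {
                LOG.log(Level.INFO, "Rejected API call: " + e.getMessage());
            }
        }
    }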
requires storage migration, resulting in failure of VM migration. This also improves
the hostsformigration API. Previously we were listing all hosts, finding suitable
storage pools for all volumes, and then checking whether VM migration requires
storage migration to that host. Now the process is updated: we check only those
volumes which are not on zone-wide primary storage. We verify by comparing the
volume's pool's cluster ID to the host's cluster ID. If the volume uses local
storage, or the cluster IDs are different, we then verify whether the host has
suitable storage pools for that volume of the VM being migrated as well.
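An illustrative sketch of the updated per-volume check, with simplified placeholder types: volumes on zone-wide primary storage are skipped, and a volume is flagged only when its pool is local storage or belongs to a different cluster than the target host; the caller would then look for suitable pools on that host just for the flagged volumes:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    class MigrationCheckSketch {
        static class Volume { long id; long poolId; }
        static class Pool { boolean zoneWide; boolean local; Long clusterId; }
        static class Host { long id; long clusterId; }

        static List<Volume> volumesNeedingStorageMigration(List<Volume> volumes, Host host,
                Map<Long, Pool> poolsById) {
            List<Volume> flagged = new ArrayList<Volume>();
            for (Volume vol : volumes) {
                Pool pool = poolsById.get(vol.poolId);
                if (pool.zoneWide) {
                    continue;   // zone-wide primary storage is reachable from any host in the zone
                }
                if (pool.local || pool.clusterId == null || pool.clusterId != host.clusterId) {
                    flagged.add(vol);   // the host must provide a suitable pool for this volume
                }
            }
            return flagged;
        }
    }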
Changes:
The podId in which the router should be started was not being saved to the DB because the VO's setter method did not follow the setXXX format. So when the planner loaded the router from the DB, it always got podId as null, which allowed the planner to deploy the router in any pod. If the router happened to start in a different pod than the user VM, the VM failed to start since the DHCP service check failed.
Fixed the VO's setPodId method, which was causing the DB save operation to fail.
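A minimal sketch of the naming issue, assuming a persistence layer that discovers column setters by the JavaBean setXXX convention; the VO class below is illustrative, not the actual router VO:

    class RouterVOSketch {
        private Long podId;

        // A setter named anything other than setPodId (e.g. "podId" or
        // "updatePodId") would not be matched to the podId column, so the
        // value would never be persisted and would load back as null.
        public void setPodId(Long podId) {
            this.podId = podId;
        }

        public Long getPodId() {
            return podId;
        }
    }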