During VM creation, if vm.instancename.flag is set to true and the hypervisor type is VMware, check whether a VM with the same hostname already exists in the zone.
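A minimal sketch of such a check, assuming hypothetical VmRecord and VmDao stand-ins rather than the actual CloudStack entities and DAOs:

```java
// Illustrative sketch only; VmRecord and VmDao are hypothetical stand-ins
// for the real CloudStack entities and DAOs.
import java.util.List;

enum HypervisorType { VMWARE, KVM, XENSERVER }

record VmRecord(long id, String hostName, long zoneId) {}

interface VmDao {
    List<VmRecord> listByZone(long zoneId);
}

class HostNameValidator {
    private final VmDao vmDao;
    private final boolean enforceUniqueName; // corresponds to vm.instancename.flag

    HostNameValidator(VmDao vmDao, boolean enforceUniqueName) {
        this.vmDao = vmDao;
        this.enforceUniqueName = enforceUniqueName;
    }

    void checkHostNameUnique(String hostName, long zoneId, HypervisorType type) {
        // Only enforce when the flag is set and the hypervisor is VMware.
        if (!enforceUniqueName || type != HypervisorType.VMWARE) {
            return;
        }
        boolean duplicate = vmDao.listByZone(zoneId).stream()
                .anyMatch(vm -> vm.hostName().equalsIgnoreCase(hostName));
        if (duplicate) {
            throw new IllegalArgumentException(
                "A VM with hostname '" + hostName + "' already exists in zone " + zoneId);
        }
    }
}
```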
This reverts commit 709bf074de9f8f22e6a71362551c4867be884e4b.
The original commit broke network GC with out-of-band VM movements, so reverting.
While taking a snapshot of a volume, CS chooses the endpoint for the backup snapshot operation by selecting any host that has the storage containing the volume mounted on it.
Instead, if the volume is attached to a VM, the endpoint chosen by CS should be the host that runs the VM.
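A rough sketch of that selection rule, with Host, Vm and Volume as illustrative placeholders (not the real CloudStack classes):

```java
// Hypothetical sketch of the endpoint-selection rule described above;
// Host, Vm and Volume are illustrative stand-ins, not CloudStack classes.
import java.util.List;
import java.util.Optional;

record Host(long id, boolean hasStorageMounted) {}
record Vm(long id, Host runningOn) {}
record Volume(long id, Vm attachedTo) {} // attachedTo may be null for detached volumes

class SnapshotEndpointSelector {

    Host selectEndpoint(Volume volume, List<Host> hostsWithStorage) {
        // Prefer the host that runs the VM the volume is attached to, so the
        // backup-snapshot command lands on the node that owns the disk.
        if (volume.attachedTo() != null && volume.attachedTo().runningOn() != null) {
            return volume.attachedTo().runningOn();
        }
        // Fall back to any host that has the volume's storage mounted.
        Optional<Host> any = hostsWithStorage.stream()
                .filter(Host::hasStorageMounted)
                .findFirst();
        return any.orElseThrow(() ->
                new IllegalStateException("No host available to back up volume " + volume.id()));
    }
}
```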
During vmsync, if the StopCommand (issued as part of a PowerOff/PowerMissing report) fails to stop the VM (because the VM is still running on the hypervisor),
don't transition the VM state to "Stopped" in the CS db. Also added a check to throw ConcurrentOperationException if the VM state is not
"Running" after a start operation.
Upgraded Jetty from 6.1 to 9.2. The 6.1 branch was announced EOL a few weeks ago.
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Signed-off-by: Rajani Karuturi <rajanikaruturi@gmail.com>
This closes #59
Changes:
- This is a race condition between the deleteDomain thread and the AccountChecker thread. The deleteDomain thread marks the domain as inactive and proceeds with cleanup; the AccountChecker thread, running at the same time, cleans up any domains marked as inactive.
- When the deleteDomain thread finds that the domain is already removed, it need not error out, since the domain deletion has already happened.
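A hedged sketch of how the already-removed case could be tolerated, with DomainRecord and DomainDao as hypothetical stand-ins for the real DAOs:

```java
// Hypothetical sketch of the "already removed" handling; DomainRecord and
// DomainDao are illustrative stand-ins, not the real CloudStack DAOs.
import java.util.Optional;
import java.util.logging.Logger;

record DomainRecord(long id, boolean inactive, boolean removed) {}

interface DomainDao {
    Optional<DomainRecord> find(long domainId);
    void markInactive(long domainId);
    void remove(long domainId);
}

class DomainCleaner {
    private static final Logger LOG = Logger.getLogger(DomainCleaner.class.getName());
    private final DomainDao dao;

    DomainCleaner(DomainDao dao) { this.dao = dao; }

    void deleteDomain(long domainId) {
        dao.markInactive(domainId);
        // ... cleanup of accounts, networks, etc. happens here ...
        Optional<DomainRecord> domain = dao.find(domainId);
        if (domain.isEmpty() || domain.get().removed()) {
            // The AccountChecker thread already cleaned up this inactive domain;
            // treat it as success instead of raising an error.
            LOG.info("Domain " + domainId + " was already removed by the cleanup thread");
            return;
        }
        dao.remove(domainId);
    }
}
```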
Changes:
- The event of deleting an affinity group is published on the MessageBus so that the IAM service can listen for and process the event. However, the publish operation should not be handled within a DB transaction, since it may take a long time and hold the DB transaction open unnecessarily.
- Publish any events to the MessageBus outside of the transaction (see the sketch below)
Conflicts:
server/src/org/apache/cloudstack/affinity/AffinityGroupServiceImpl.java
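A minimal sketch of the publish-after-commit pattern, assuming a simple MessageBus interface and a placeholder transaction wrapper (not the actual CloudStack framework classes):

```java
// Minimal sketch of publishing after the transaction completes; MessageBus and
// the transaction wrapper here are illustrative, not the actual CloudStack
// framework classes.
interface MessageBus {
    void publish(String subject, Object payload);
}

class AffinityGroupService {
    private final MessageBus messageBus;

    AffinityGroupService(MessageBus messageBus) {
        this.messageBus = messageBus;
    }

    void deleteAffinityGroup(long groupId) {
        runInTransaction(() -> {
            // DB work only: remove the group row and its mappings.
            // No bus publish here, so the transaction stays short.
        });
        // Publish after the transaction has committed; a slow subscriber can
        // no longer hold the DB transaction open.
        messageBus.publish("affinity.group.delete", groupId);
    }

    private void runInTransaction(Runnable work) {
        // Placeholder for the real transaction wrapper.
        work.run();
    }
}
```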
Changes:
- When there is HA, we try to redeploy the affected VM using the regular planners; if that fails, we retry using the special planner for HA (which skips checking the disable threshold).
Now, because of the job framework, the InsufficientCapacityException gets masked and the special planners are not called. The job framework needs to be fixed to rethrow the correct exception.
- Also, the VM work job framework was not setting the DeploymentPlanner on the VmWorkJob, so the HA planner passed in by HAMgr was not getting used.
- Now the job framework sets the planner passed in by any caller of the VM start operation on the job.
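A rough sketch of passing the caller's planner through to the work job, with VmWorkJob, DeploymentPlanner and the dispatcher as hypothetical stand-ins for the real job-framework classes:

```java
// Illustrative sketch of carrying the caller's DeploymentPlanner on the work
// job; VmWorkJob and DeploymentPlanner are hypothetical stand-ins, not the
// real job-framework classes.
interface DeploymentPlanner {
    String name();
}

class VmWorkJob {
    private String plannerName; // persisted with the job so the worker can reuse it

    void setPlannerName(String plannerName) { this.plannerName = plannerName; }
    String getPlannerName() { return plannerName; }
}

class VmWorkJobDispatcher {

    VmWorkJob submitStartJob(long vmId, DeploymentPlanner callerPlanner) {
        VmWorkJob job = new VmWorkJob();
        // Record the planner chosen by the caller (e.g. the HA manager's
        // special planner) so the async worker uses it instead of the default.
        if (callerPlanner != null) {
            job.setPlannerName(callerPlanner.name());
        }
        // ... enqueue the job for asynchronous execution ...
        return job;
    }
}
```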