This limits the likelihood of timing attacks against the API.
See http://codahale.com/a-lesson-in-timing-attacks/ for the
full rationale.
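A minimal sketch of the idea, not the exact ApiServer change (class and method names here are illustrative): compare the computed and supplied request signatures without short-circuiting, so the comparison time does not reveal how many leading characters matched.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ConstantTimeCompare {
    // Returns true only if both signatures are present and identical;
    // MessageDigest.isEqual performs a time-constant comparison on modern JDKs.
    public static boolean signaturesMatch(String expected, String supplied) {
        if (expected == null || supplied == null) {
            return false;
        }
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                supplied.getBytes(StandardCharsets.UTF_8));
    }
}
```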
Conflicts:
server/src/com/cloud/api/ApiServer.java
server/src/com/cloud/user/AccountManagerImpl.java
This removes extra whitespace from the JSON-serialized response.
After the fix, tested to work with:
- Present UI
- CloudMonkey
- Old, buggy JSON parsers
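A minimal illustration of the whitespace difference, assuming a Gson-based serializer (the response class here is made up):

```java
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class CompactJsonDemo {
    static class VolumeResponse {
        String id = "vol-1";
        String name = "data";
    }

    public static void main(String[] args) {
        // Pretty printing inserts newlines and indentation (the "extra whitespace")
        Gson pretty = new GsonBuilder().setPrettyPrinting().create();
        // The default builder emits the compact form that strict or older parsers expect
        Gson compact = new GsonBuilder().create();

        System.out.println(pretty.toJson(new VolumeResponse()));
        System.out.println(compact.toJson(new VolumeResponse()));
    }
}
```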
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 921ad057def3015cda9d9f5861c9be29a88b148e)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
If the user hasn't supplied a display name for a VM, default it to the VM name in the listVolume response.
This matches the behaviour of the listVirtualMachine response.
For the AttachVolume/DetachVolume API commands, improve the user-facing error message in case of a RuntimeException by rethrowing the exception instead of reporting 'Unexpected Exception'.
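A small sketch of the display-name defaulting rule (hypothetical helper, not the actual response-building code):

```java
public class VolumeResponseDefaults {
    // Fall back to the VM name when no display name was ever supplied,
    // so the listVolume response matches listVirtualMachine behaviour.
    static String vmDisplayNameOrDefault(String displayName, String vmName) {
        return (displayName == null || displayName.trim().isEmpty()) ? vmName : displayName;
    }
}
```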
During VM creation, if vm.instancename.flag is set to true and the hypervisor type is VMware, check whether a VM with the same hostname already exists in the zone.
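An illustrative sketch of the guard (helper and parameter names are assumptions, not the actual CloudStack code):

```java
import java.util.Set;

public class VmHostNameGuard {
    // Reject the new VM if instance names are derived from hostnames
    // (vm.instancename.flag=true), the hypervisor is VMware, and the
    // hostname is already taken in the zone.
    static void checkHostNameUniqueInZone(boolean instanceNameFlagEnabled,
                                          String hypervisorType,
                                          String hostName,
                                          Set<String> hostNamesInZone) {
        if (instanceNameFlagEnabled
                && "VMware".equalsIgnoreCase(hypervisorType)
                && hostNamesInZone.contains(hostName)) {
            throw new IllegalArgumentException(
                    "A VM with hostname '" + hostName + "' already exists in this zone");
        }
    }
}
```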
This reverts commit 709bf074de9f8f22e6a71362551c4867be884e4b.
The original commit breaks network GC for out-of-band VM movements, so it is being reverted.
During vmsync, if the StopCommand (issued as part of a PowerOff/PowerMissing report) fails to stop the VM (because the VM is still running on the hypervisor),
don't transition the VM state to "Stopped" in the CloudStack DB. Also added a check to throw a ConcurrentOperationException if the VM state is not
"Running" after a start operation.
Changes:
- When HA kicks in, we try to redeploy the affected VM using the regular planners and, if that fails, retry using the special HA planner (which skips the disable-threshold check).
Because of the job framework, the InsufficientCapacityException gets masked and the special planners are never called; the job framework needs to rethrow the correct exception.
- Also, the VM work job framework was not setting the DeploymentPlanner on the VmWorkJob, so the HA planner passed in by the HA manager was not being used.
- Now the job framework sets the planner passed in by any caller of the VM start operation on the job (see the sketch below).
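An illustrative sketch of both fixes (class and field names are assumptions, not the real job framework API):

```java
public class VmWorkJobSketch {
    // Stand-in for CloudStack's checked capacity exception.
    static class InsufficientCapacityException extends Exception {
        InsufficientCapacityException(String msg) { super(msg); }
    }

    // Simplified work job: it now carries the planner chosen by the caller.
    static class VmStartJob {
        final String deploymentPlanner;

        VmStartJob(String plannerName) {
            this.deploymentPlanner = plannerName;   // fix: propagate the caller's planner to the job
        }

        void execute(Runnable deploy) throws InsufficientCapacityException {
            try {
                deploy.run();
            } catch (RuntimeException wrapped) {
                if (wrapped.getCause() instanceof InsufficientCapacityException) {
                    // fix: surface the real cause so HA can retry with its special planner
                    throw (InsufficientCapacityException) wrapped.getCause();
                }
                throw wrapped;
            }
        }
    }
}
```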
Changes:
- The deletion of an affinity group is published on the MessageBus so that the IAM service can listen for and process the event. However, the publish should not happen within a DB transaction, since it may take a while and hold the transaction open unnecessarily.
- Publish any events to the MessageBus outside of the transaction (see the sketch below).
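A minimal sketch of the ordering (interfaces are stand-ins, not the actual CloudStack transaction or MessageBus API):

```java
public class DeleteAffinityGroupSketch {
    interface Db { void deleteAffinityGroup(long groupId); }
    interface MessageBus { void publish(String topic, Object payload); }

    static void deleteAffinityGroup(long groupId, Db db, MessageBus bus) {
        // --- work done inside the DB transaction ---
        db.deleteAffinityGroup(groupId);
        // --- transaction commits here ---

        // publish after commit, so a slow subscriber cannot hold the transaction open
        bus.publish("affinity.group.delete", groupId);
    }
}
```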
Steps to reproduce if you have this issue:
- Create a snapshot of a VM's volume
- Remove the VM's template and mark it as removed (with a timestamp) in the DB
- Restart the management server and create a volume out of the snapshot; you should get an NPE
Fix: In `storagePoolHasEnoughSpace`, we search for the template of a VM volume's
snapshot by id without including removed templates. This is a corner case: the
NPE hits when that template has been marked removed, so we should search
including removed templates.
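A simplified sketch of the lookup change (the DAO interface is reduced to the two calls involved; names are illustrative):

```java
public class TemplateLookupSketch {
    static class TemplateRecord { long size; }

    interface TemplateDao {
        TemplateRecord findById(long id);                  // skips rows marked removed
        TemplateRecord findByIdIncludingRemoved(long id);  // also returns removed rows
    }

    // The snapshot may reference a template that has since been marked removed,
    // so a plain findById(...) returns null and the later size check NPEs.
    // Looking the template up including removed rows keeps its metadata available.
    static TemplateRecord templateForSnapshot(TemplateDao dao, long templateId) {
        return dao.findByIdIncludingRemoved(templateId);   // fix: include removed templates
    }
}
```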
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit f189c105d8dde5491697b171b969e757f8f15858)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>