1. Are there multiple hosts connected to the same local storage pool (caused by a 2.1.x bug)?
2. Is there a missed premium upgrade?
If the answer to either question is yes, the management server stops and asks the user to contact Cloud.com support.
Use a new target, "system-integrity-checker", in components.xml/components-premium.xml.
All checkers must be explicitly specified in the XML file; they execute before any components load (see the sketch below).
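As a rough illustration, the target might look like the following in components.xml. This is a hedged sketch only: the class names and checker entries are assumptions for illustration, not the actual shipped configuration.

    <system-integrity-checker class="com.cloud.upgrade.DatabaseIntegrityChecker">
        <!-- Each checker must be listed explicitly; all of them run before
             any other component is loaded. -->
        <checker name="LocalStoragePoolChecker" class="com.cloud.upgrade.LocalStoragePoolChecker"/>
        <checker name="PremiumUpgradeChecker" class="com.cloud.upgrade.PremiumUpgradeChecker"/>
    </system-integrity-checker>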
status 10860: resolved fixed
Changes:
- Added a new Investigator, 'ManagementIPSystemVMInvestigator', that checks whether a VM is alive, but only for system VMs that have a management IP address (see the sketch after this list).
- If no management IP is found, the ping test cannot be done, so this investigator returns null in that case.
- The current implementation, InvestigatorImpl, is renamed to 'UserVmDomRInvestigator' and performs the ping test for user VMs only.
- Corrected the ping test code, which was checking a hard-coded string. Now, if the ping answer is negative, we simply return null.
- Added the new investigator to components.xml
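A minimal, self-contained sketch of the behavior described above; the class, interface, and method names here are illustrative assumptions, not the actual CloudStack signatures:

    public class ManagementIpInvestigatorSketch {

        // Stand-in for the VM handle the real investigator receives.
        interface Vm {
            boolean isSystemVm();
            String getManagementIp(); // may be null
        }

        // Sends a ping and reports whether a reply came back.
        private boolean ping(String ip) {
            try {
                return java.net.InetAddress.getByName(ip).isReachable(3000);
            } catch (Exception e) {
                return false;
            }
        }

        // Returns TRUE if the VM is demonstrably alive, or null when this
        // investigator cannot decide. It never answers FALSE on a failed
        // ping, so other investigators still get a chance to run.
        public Boolean isVmAlive(Vm vm) {
            if (!vm.isSystemVm()) {
                return null; // user VMs are handled by UserVmDomRInvestigator
            }
            String mgmtIp = vm.getManagementIp();
            if (mgmtIp == null) {
                return null; // no management IP: the ping test cannot be done
            }
            return ping(mgmtIp) ? Boolean.TRUE : null; // negative ping => undetermined, not dead
        }
    }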
- This NPE happened when starting the DomR while its Volume's podId was null.
- A Volume's podId should never be null; this case is not expected to occur.
- It looks like this is a side effect of another bug, most likely a separate DomR and basic-networking issue (9578).
- While reviewing this bug, we found that the RecreateHostAllocator is no longer needed. Using it is actually harmful, since this allocator ignores the plan passed to it by the DeploymentPlanner and lists pods all over again.
- Changed components.xml to use FirstFitRoutingAllocator instead (a sketch of the entry follows).
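A hedged sketch of what the updated components.xml entry might look like; the adapter key, name, and package path are assumptions based on typical entries, not the exact shipped configuration:

    <adapters key="com.cloud.agent.manager.allocator.HostAllocator">
        <adapter name="FirstFitRouting"
                 class="com.cloud.agent.manager.allocator.impl.FirstFitRoutingAllocator"/>
    </adapters>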
1) Added new APIs: createInstanceGroup, updateInstanceGroup, deleteInstanceGroup, listInstanceGroups (see the usage sketch after this list).
2) A group can be created by:
* the createInstanceGroup API
* the deployVirtualMachine/updateVirtualMachine commands (a group named after the "group" parameter value is created if it doesn't already exist)
3) A group can be removed by:
* the deleteInstanceGroup API
* removing the corresponding account
4) A VM can be assigned to only one group. To move a VM from one group to another, use the updateVirtualMachine command with the "group" parameter.
5) Changed the listVirtualMachines command to use a "groupId" parameter instead of "group".
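A minimal usage sketch driving the new APIs over the HTTP query interface. The endpoint URL (the unauthenticated integration port), the "name" parameter for createInstanceGroup, and the VM id value are assumptions for illustration, not confirmed by this change:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLEncoder;

    public class InstanceGroupExample {
        // Assumed endpoint; adjust host/port to your management server.
        private static final String API = "http://localhost:8096/client/api";

        // Issues a GET request against the API and returns the raw response.
        static String call(String query) throws Exception {
            URL url = new URL(API + "?" + query);
            try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) sb.append(line);
                return sb.toString();
            }
        }

        public static void main(String[] args) throws Exception {
            // Create a group explicitly (the "name" parameter is an assumption):
            System.out.println(call("command=createInstanceGroup&name=" + URLEncoder.encode("web-tier", "UTF-8")));
            // Move a VM between groups by reusing updateVirtualMachine:
            System.out.println(call("command=updateVirtualMachine&id=42&group=db-tier"));
            // Listing now filters by groupId rather than group name:
            System.out.println(call("command=listVirtualMachines&groupId=1"));
        }
    }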
status 6177: resolved fixed