Changes:
- When the ROOT volume of a VM is found to be READY, changed the planner to reuse the existing pool for every volume (root or data) that is READY and whose pool is neither in maintenance nor in the avoid set
- If the ROOT volume is not ready, the DATA disk is not considered either; both get re-allocated
- When a pool is reused for a ready volume, the planner does not call the storage pool allocators, and such volumes are not assigned a pool in the deployment destination it returns. Accordingly, the StorageManager::prepare method won't recreate these volumes, since they are not mentioned in the destination.
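A minimal sketch of the reuse decision described above, assuming hypothetical CloudStack-style names (VolumeVO, StoragePoolVO, StoragePoolDao, ExcludeList) rather than the actual planner code:

    // Returns the volumes that still need the storage pool allocators;
    // READY volumes with a usable pool are reused and left out of the
    // deployment destination, so StorageManager::prepare skips them.
    List<VolumeVO> chooseVolumesToAllocate(VolumeVO root, List<VolumeVO> volumes,
                                           StoragePoolDao poolDao, ExcludeList avoid) {
        List<VolumeVO> toAllocate = new ArrayList<VolumeVO>();
        boolean rootReady = (root.getState() == Volume.State.Ready);
        for (VolumeVO vol : volumes) {
            StoragePoolVO pool = (vol.getPoolId() == null) ? null : poolDao.findById(vol.getPoolId());
            boolean poolUsable = pool != null && !pool.isInMaintenance() && !avoid.shouldAvoid(pool);
            if (rootReady && vol.getState() == Volume.State.Ready && poolUsable) {
                continue; // reuse the existing pool for this volume
            }
            toAllocate.add(vol); // ROOT not ready or pool unusable: re-allocate
        }
        return toAllocate;
    }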
- This NPE happened when starting the DomR, because its Volume's podId was null.
- A Volume's podId should never be null.
- This looks like a side-effect of another bug, most likely another DomR/basic-networking issue (9578).
- While reviewing this bug, found that we no longer need the RecreateHostAllocator. Using it is actually harmful, since this allocator ignores the plan passed to it by the DeploymentPlanner and lists pods all over again.
- Changed components.xml to use FirstFitRoutingAllocator instead.
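For illustration, the components.xml change would look roughly like the following; the adapter key and class path are assumptions based on the allocator names, not the exact file contents:

    <!-- Illustrative only: swap the RecreateHostAllocator adapter for
         FirstFitRoutingAllocator, which honors the plan from the planner. -->
    <adapters key="com.cloud.agent.manager.allocator.HostAllocator">
        <adapter name="FirstFitRouting"
                 class="com.cloud.agent.manager.allocator.impl.FirstFitRoutingAllocator"/>
    </adapters>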
Changes:
- Reason: the old volume's templateId was being updated before volume creation was attempted. So on retry, there was no difference between the volume's templateId and the VM's templateId, and we did not enter the recreation logic.
- The fix is to set the new volume's templateId to the VM's templateId while creating the new volume. The old volume's templateId stays the same, and the old volume is marked 'Destroy' when the new volume is created.
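A sketch of the fixed recreation path, with illustrative names (VolumeVO, volumeDao) standing in for the real code:

    // Recreation is triggered by a mismatch between the volume's and the
    // VM's templateId.
    if (!vm.getTemplateId().equals(vol.getTemplateId())) {
        VolumeVO newVol = new VolumeVO(vol);       // clone the old record
        newVol.setTemplateId(vm.getTemplateId());  // the fix: only the NEW volume
                                                   // gets the VM's templateId
        volumeDao.persist(newVol);
        vol.setState(Volume.State.Destroy);        // old volume keeps its original
        volumeDao.update(vol.getId(), vol);        // templateId and is marked Destroy
    }
    // If creation fails and is retried, the old volume's templateId still
    // differs from the VM's, so the recreation logic is entered again.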
- Local fix to stop logging the contents of ModifySshKeyCommand.
- Added a facility for commands to indicate that their parameters should not be logged (see the sketch below).
- For such commands, the parameters are stripped from the log output.
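A sketch of how such an opt-out could look; the method name on the Command base class is an assumption, not the actual CloudStack API:

    public abstract class Command {
        // Commands carrying sensitive content (e.g. ModifySshKeyCommand)
        // override this to keep their parameters out of the logs.
        public boolean isParameterLoggingEnabled() {
            return true;
        }
    }

    // At the logging site, the parameters are dropped for opted-out commands:
    String entry = cmd.isParameterLoggingEnabled()
            ? gson.toJson(cmd)
            : cmd.getClass().getSimpleName() + " (parameters omitted)";
    s_logger.debug("Executing " + entry);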
Changes:
- Changed host allocators/planner to use cpu.overprovisioning.factor
- Removed the following: while adding a new host, we were setting total_cpu in op_host_capacity to actual_cpu * cpu.overprovisioning.factor. Now we set it to actual_cpu.
- The ListCapacities response now calculates total CPU as actual_cpu * cpu.overprovisioning.factor. (This does not change the result: listCapacities previously pulled total CPU from the op_host_capacity table, which already had the factor applied; now the factor must be applied on top of the DB entry. See the worked example after this list.)
- HostResponse has a new field, 'cpuWithOverprovisioning', that returns the CPU after applying cpu.overprovisioning.factor.
- DB upgrade 222 to 224 now updates total_cpu in op_host_capacity to actual_cpu for each Routing host.
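A worked example of the new accounting, using assumed field and DAO names; the point is that op_host_capacity now stores the raw value and the factor is applied at read time:

    // Writing capacity for a new host: store the actual CPU, unscaled.
    long actualCpu = host.getCpus() * host.getSpeed();    // e.g. 8 * 2000 = 16000 MHz
    capacityEntry.setTotalCapacity(actualCpu);            // total_cpu = actual_cpu

    // Reading capacity (listCapacities / HostResponse): apply the factor here.
    float factor = Float.parseFloat(configDao.getValue("cpu.overprovisioning.factor")); // e.g. 2.0
    long cpuWithOverprovisioning = (long) (actualCpu * factor);  // 32000 MHz reported
    hostResponse.setCpuWithOverprovisioning(String.valueOf(cpuWithOverprovisioning));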
status 9336: resolved fixed
Following changes were made:
* deleteSecurityGroup/authorizeSecurityGroupIngress: removed the account/domainId parameters, since a SG is now uniquely identified by its id
* Removed the account_name field from the securityGroup DB table; removed allowed_security_group/allowed_sec_grp_acct from security_ingress_rule. These values were used only for API response generation, for performance; added caching at the API level to preserve performance (see the sketch after this list)
* Added missing security checks for securityGroups/ingressRules
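A sketch of the API-level caching that replaces the removed denormalized columns; the cache and accessor names are assumptions, not the actual code:

    // Cache account lookups while building securityGroup responses, so the
    // removed account_name column is not missed on large rule lists.
    private final Map<Long, Account> accountCache = new HashMap<Long, Account>();

    private Account getAccount(long accountId) {
        Account account = accountCache.get(accountId);
        if (account == null) {
            account = accountDao.findById(accountId); // one DB hit per account
            accountCache.put(accountId, account);     // reused across rules
        }
        return account;
    }

    // Response generation now reads the name via the cache:
    response.setAccountName(getAccount(group.getAccountId()).getAccountName());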