Created a table to store the list of source CIDRs.
Created the necessary DAOs and VOs.
Updated PortForwardingRulesDao to persist/update a non-null list of CIDRs.
Deletion relies on ON DELETE CASCADE.
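A minimal sketch of how this persistence might look, assuming a MySQL-style schema; the table and column names (firewall_rules_cidrs, firewall_rule_id, source_cidr) are illustrative, not necessarily the committed schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class SourceCidrsSketch {
    // Hypothetical child table; ON DELETE CASCADE means deleting the
    // parent rule row also deletes its CIDR rows, so the DAO needs no
    // explicit delete path when a rule is removed.
    static final String DDL =
        "CREATE TABLE firewall_rules_cidrs ("
      + "  id BIGINT AUTO_INCREMENT PRIMARY KEY,"
      + "  firewall_rule_id BIGINT NOT NULL,"
      + "  source_cidr VARCHAR(18) NOT NULL,"
      + "  CONSTRAINT fk_rule FOREIGN KEY (firewall_rule_id)"
      + "    REFERENCES firewall_rules(id) ON DELETE CASCADE)";

    /** Persist a non-null list of CIDRs for a rule (update = delete + re-insert). */
    static void saveSourceCidrs(Connection conn, long ruleId, List<String> cidrs)
            throws SQLException {
        if (cidrs == null) {
            return; // only non-null lists are persisted, per the change above
        }
        try (PreparedStatement del = conn.prepareStatement(
                "DELETE FROM firewall_rules_cidrs WHERE firewall_rule_id = ?")) {
            del.setLong(1, ruleId);
            del.executeUpdate();
        }
        try (PreparedStatement ins = conn.prepareStatement(
                "INSERT INTO firewall_rules_cidrs (firewall_rule_id, source_cidr) VALUES (?, ?)")) {
            for (String cidr : cidrs) {
                ins.setLong(1, ruleId);
                ins.setString(2, cidr);
                ins.addBatch();
            }
            ins.executeBatch();
        }
    }
}
```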
status 9688: resolved fixed
To verify that the rule was removed:
* make sure that there is no record with the LB id in the load_balancer table
* verify that an lb.delete event was generated for this rule
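These two checks can be scripted directly against the DB; a rough sketch, assuming an event table with a type column and that the delete event type string is 'LB.DELETE' (both assumptions for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LbRemovalCheck {
    /** Returns true if no load_balancer row exists for the given rule id. */
    static boolean ruleRowGone(Connection conn, long lbId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT 1 FROM load_balancer WHERE id = ?")) {
            ps.setLong(1, lbId);
            try (ResultSet rs = ps.executeQuery()) {
                return !rs.next();
            }
        }
    }

    /** Returns true if a delete event mentioning the rule was logged;
     *  the event type and description matching are assumptions. */
    static boolean deleteEventLogged(Connection conn, long lbId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT 1 FROM event WHERE type = 'LB.DELETE' AND description LIKE ?")) {
            ps.setString(1, "%" + lbId + "%");
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```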
Changes:
- Added a new column `source_template_id` to the vm_template table to carry the ID of the parent/source template from which the template was created
- Added the column in the DB upgrade from 224 to 225
- Changed code to save the source_template_id if there is one associated with the volume, or with the volume from which the snapshot was taken
- API response returns the sourcetemplateid field, if set, in all template use cases.
- CreateZone API creates a zoneToken, inserts it in the DB, and returns it in the response
- UpdateZone API takes in a 'details' map that is loaded into data_center_details
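The schema change itself is small; a sketch of what the 224-to-225 column addition might look like, assuming plain JDBC (the actual upgrade code and DDL may differ):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class Upgrade224to225Sketch {
    /** Adds the new column; nullable, since only derived templates have a source. */
    static void addSourceTemplateId(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                "ALTER TABLE vm_template ADD COLUMN source_template_id BIGINT UNSIGNED NULL");
        }
    }
}
```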
status 9336: resolved fixed
The following changes were made:
* deleteSecurityGroup/authorizeSecurityGroupIngress - removed the account/domainId parameters, as an SG is now uniquely identified by its id
* removed the account_name field from the securityGroup DB table; removed allowed_security_group/allowed_sec_grp_acct from security_ingress_rule.
These values were used only for API response generation, for performance purposes; added caching at the API level to improve performance
* Added missing security checks for securityGroups/ingressRules
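As a rough illustration of the API-level caching mentioned above, the sketch below memoizes account-name lookups during response generation; the class and its loader are hypothetical, not CloudStack's actual code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongFunction;

public class AccountNameCache {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();
    private final LongFunction<String> loader; // e.g. a DAO lookup by account id

    AccountNameCache(LongFunction<String> loader) {
        this.loader = loader;
    }

    /** Resolve an account's name once per id, instead of denormalizing it in the table. */
    String accountNameFor(long accountId) {
        return cache.computeIfAbsent(accountId, id -> loader.apply(id));
    }
}
```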
Changes:
- The cluster entry is not removed from the table when a cluster is deleted, because some foreign key constraints fail if the row delete is attempted. Instead the cluster is marked as 'removed'
- While deleting a pod, changed the check for whether the pod has any clusters - we now check that there are no clusters whose 'removed' column is null.
- The pod entry likewise cannot be deleted from the DB due to foreign key constraints, so a 'removed' column was added to the pod table host_pod_ref
- On deleting a pod, the pod is now marked as removed and the pod name is set to null.
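A sketch of the resulting soft-delete flow, with table and column names taken from the text (the real DAO code will differ):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PodSoftDeleteSketch {
    /** A pod is deletable only if it has no live clusters (removed IS NULL). */
    static boolean hasLiveClusters(Connection conn, long podId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT 1 FROM cluster WHERE pod_id = ? AND removed IS NULL")) {
            ps.setLong(1, podId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }

    /** Mark the pod removed and null the name instead of deleting the row. */
    static void markPodRemoved(Connection conn, long podId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE host_pod_ref SET removed = NOW(), name = NULL WHERE id = ?")) {
            ps.setLong(1, podId);
            ps.executeUpdate();
        }
    }
}
```

Nulling the name frees it for reuse by a future pod while the soft-deleted row stays behind to satisfy the foreign keys.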
Changes:
- While listing hosts for migration, capacity is now calculated as total_capacity - used_capacity instead of total_capacity - (used_capacity + reserved_capacity)
- Also, the capacity columns in op_host_capacity are now 'signed' so that the subtractions in queries do not overflow.
- Added this to the DB upgrade from 222 to 224 as well.
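To make the arithmetic concrete, here is a sketch of the kind of query the change implies, with assumed op_host_capacity column names; signed columns let the subtraction go negative instead of wrapping around:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class MigrationCapacitySketch {
    /**
     * List host ids with enough free capacity to be a migration target.
     * reserved_capacity is intentionally left out of the subtraction now;
     * signed columns keep (total - used) from overflowing if it goes negative.
     */
    static List<Long> hostsWithFreeCapacity(Connection conn, short capacityType, long needed)
            throws SQLException {
        List<Long> hosts = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT host_id FROM op_host_capacity"
              + " WHERE capacity_type = ? AND (total_capacity - used_capacity) >= ?")) {
            ps.setShort(1, capacityType);
            ps.setLong(2, needed);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    hosts.add(rs.getLong(1));
                }
            }
        }
        return hosts;
    }
}
```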
1) No longer do multiple searches involving the "domain" table; only one join with domain is done.
2) Join with the domain table only when the command is executed by a domain admin
3) Added an index on the "path" field of the "domain" table
4) No longer join with the account table, as account_id is already present in the vm_instance table.
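A sketch of the conditional join, assuming vm_instance carries domain_id alongside account_id (which is what makes dropping the account join possible); the class and the index DDL are illustrative:

```java
public class ListVmsQuerySketch {
    // An index on domain.path speeds up the LIKE 'path%' scoping filter.
    static final String INDEX_DDL = "CREATE INDEX i_domain__path ON domain(path)";

    /** Join with domain only for domain admins; others need no join at all. */
    static String buildListVmsSql(boolean isDomainAdmin) {
        StringBuilder sql = new StringBuilder("SELECT vm.id FROM vm_instance vm");
        if (isDomainAdmin) {
            // One join with domain, used only to scope results to the
            // admin's domain subtree via the indexed path column.
            sql.append(" JOIN domain d ON d.id = vm.domain_id")
               .append(" WHERE d.path LIKE ?");
        }
        return sql.toString();
    }
}
```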
- Added a new flag 'allocation_state' to zone, pod, cluster, and host
- The possible values for this flag are 'Enabled' or 'Disabled'
- When a new zone, pod, cluster, or host is added, allocation_state is 'Disabled' by default.
- For existing zones, pods, clusters, and hosts, the state is 'Enabled'.
- All Add/Update/List commands for zone, pod, cluster, and host can now take a new parameter 'allocationstate'
- If 'allocation_state' is 'Disabled', allocators skip that zone, pod, cluster, or host.
- For a root admin, ListZones lists all zones including the 'Disabled' zones. But for any other user, the 'Disabled' zones are not included in the response.
- For any use case that creates/deploys/adds/registers a resource and takes a zone as a parameter, we now check whether the zone is 'Disabled'. If it is, the operation can be performed only by the root admin. Adding volumes, snapshots, and templates are examples of this use case.
- To enable the root admin to test a particular pod/cluster/host, the deployVM command takes a 'host_id' parameter that can be passed in only by the root admin.
If this parameter is passed in by the admin, allocators do not search for hosts and use only that host. StoragePools are searched in the cluster of that host.
If the VM cannot be deployed to that host, allocation fails and deployVM fails without retrying.
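A sketch of the allocator-side behavior described above, using hypothetical types; CloudStack's actual allocator interfaces are more involved:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class AllocationStateSketch {
    enum AllocationState { Enabled, Disabled }

    // Hypothetical flattened view of a host and its enclosing scopes.
    static class Host {
        long id;
        AllocationState zoneState, podState, clusterState, hostState;
    }

    /**
     * If the admin pinned a host_id, use that host alone and do not retry
     * elsewhere; otherwise skip any host whose zone/pod/cluster/host is Disabled.
     */
    static List<Host> candidateHosts(List<Host> all, Long pinnedHostId) {
        List<Host> result = new ArrayList<>();
        for (Host h : all) {
            if (pinnedHostId != null) {
                if (h.id == pinnedHostId) {
                    return Collections.singletonList(h); // no search, no retry
                }
                continue;
            }
            if (h.zoneState == AllocationState.Disabled
                    || h.podState == AllocationState.Disabled
                    || h.clusterState == AllocationState.Disabled
                    || h.hostState == AllocationState.Disabled) {
                continue; // allocators skip disabled scopes
            }
            result.add(h);
        }
        return result;
    }
}
```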