Adding the missing file
During HA and maintenance, call different planners (if the original planners are not able to find capacity) which skip some heuristics
Removing resource leaks from UsageSanityChecker and
refactoring it (encapsulation, removal of copy-and-paste code, constants...)
Modularize the static method for closing Statements in TransactionLegacy
and reuse this new method from other classes (Upgrade2214to30)
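A minimal sketch of such a shared close helper, assuming a standalone utility class; the StatementCloser name and its placement are illustrative, not the actual TransactionLegacy code:

```java
import java.sql.SQLException;
import java.sql.Statement;

public final class StatementCloser {
    private StatementCloser() {
    }

    // Close any number of statements, swallowing per-statement failures so
    // that one broken statement does not prevent the others from closing.
    public static void closeStatements(Statement... statements) {
        for (Statement stmt : statements) {
            if (stmt == null) {
                continue;
            }
            try {
                stmt.close();
            } catch (SQLException e) {
                // Log and continue; a close failure must not mask the original error.
                System.err.println("Failed to close statement: " + e.getMessage());
            }
        }
    }
}
```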
Create unit and integration tests for UsageSanityChecker
Add DBUnit cases and an integration profile for integration tests as
a base for future DB tests
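A hedged sketch of what a DBUnit-backed integration test could look like; the dataset path, JDBC URL, credentials, and test class name are placeholders, not the actual integration profile settings:

```java
import java.io.File;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class UsageSanityCheckerIT {

    private IDatabaseTester tester;

    @Before
    public void setUp() throws Exception {
        // Placeholder connection settings; real values would come from the
        // integration profile.
        tester = new JdbcDatabaseTester("com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost:3306/cloud_usage", "cloud", "cloud");
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(new File("target/test-classes/usage-dataset.xml"));
        tester.setDataSet(dataSet);
        tester.onSetup(); // CLEAN_INSERT of the dataset before each test
    }

    @After
    public void tearDown() throws Exception {
        tester.onTearDown();
    }

    @Test
    public void checkerReportsNoErrorsOnCleanData() throws Exception {
        // Exercise the checker against the seeded database here.
    }
}
```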
Changes:
- The vmprofile owner passed to the planner should be the VM's account, not the caller
- Do not perform the access check for Root Admin
Conflicts:
	server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java
Link local IP addresses do not get released, resulting in all the link local
IP addresses eventually being consumed.
The fix ensures NICs with reservation strategy 'Start' go through the
release phase in the NIC lifecycle, so that release is performed before
the NIC is removed, avoiding resource leaks.
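A rough sketch of the intended lifecycle check; the types and helper methods below are simplified stand-ins, not the actual network orchestration code:

```java
public class NicLifecycleSketch {

    enum ReservationStrategy { Create, Start, Managed }
    enum NicState { Allocated, Reserved, Deallocating }

    static class Nic {
        ReservationStrategy strategy;
        NicState state;
        String linkLocalIp;
    }

    // Before a NIC is removed, NICs reserved at start time ('Start' strategy)
    // must pass through the release phase so their link local IP is returned.
    void removeNic(Nic nic) {
        if (nic.strategy == ReservationStrategy.Start && nic.state == NicState.Reserved) {
            release(nic);
        }
        deallocate(nic);
    }

    void release(Nic nic) {
        // Return the link local IP to the pool.
        nic.linkLocalIp = null;
        nic.state = NicState.Deallocating;
    }

    void deallocate(Nic nic) {
        // Remove the NIC record.
    }
}
```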
When the management server is brought back up after being down for a few hours,
snapshot jobs do not get triggered, with reason "there are other active
snapshot tasks on the instance to which the volume is attached".
When DB encryption is enabled, the server expects all secure/hidden
fields in encrypted form. Moved the insert statements that have
default values to Java, and populated encrypted values if encryption is
enabled.
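A minimal sketch of the approach, assuming a simple configuration table and a stand-in encrypt helper (the real code would use the server's DB encryption utility):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ConfigDefaultsInserter {

    // Insert a configuration default from Java instead of a static SQL script,
    // so the stored value can be encrypted when DB encryption is enabled.
    public void insertDefault(Connection conn, String name, String defaultValue,
            boolean encryptionEnabled) throws SQLException {
        String value = encryptionEnabled ? encrypt(defaultValue) : defaultValue;
        try (PreparedStatement pstmt = conn.prepareStatement(
                "INSERT INTO configuration (name, value) VALUES (?, ?)")) {
            pstmt.setString(1, name);
            pstmt.setString(2, value);
            pstmt.executeUpdate();
        }
    }

    // Stand-in for the real DB encryption helper.
    private String encrypt(String plain) {
        return "ENC(" + plain + ")";
    }
}
```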
1. The egress default policy rules are sent to the firewall provider. It is up to the
provider to configure the rules (see the sketch after this list).
2. The default policy rules are sent for both allow and deny default policies.
3. On network shutdown, rules for deletion are sent.
4. For VR and SRX, traffic is denied by default, so no default rule to deny traffic is required.
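A rough sketch of the dispatch described above; the provider interface and rule shape are simplified assumptions:

```java
public class EgressDefaultPolicySketch {

    enum Action { ALLOW, DENY }

    interface FirewallProvider {
        void applyDefaultEgressRule(long networkId, Action action, boolean revoke);
    }

    // Send the default egress policy to the provider on both network setup
    // (revoke=false) and network shutdown (revoke=true); the provider decides
    // how to configure it. VR and SRX deny traffic by default, so they need
    // no explicit deny rule.
    void pushDefaultPolicy(FirewallProvider provider, long networkId,
            Action defaultPolicy, boolean shutdown) {
        provider.applyDefaultEgressRule(networkId, defaultPolicy, shutdown);
    }
}
```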
Allow a NetScaler to be dedicated to GSLB service and not used for LB.
The fix adds a boolean flag to the addNetscalerLoadBalancer API, which
marks the added NetScaler for exclusive GSLB service. A NetScaler marked
as an exclusive GSLB service provider is not picked as any guest network's
LB provider.
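A sketch of how the exclusivity guard could look when picking an LB device for a guest network; the device interface and its isExclusiveGslbProvider accessor are assumptions, not the actual resource schema:

```java
import java.util.List;

public class LbDevicePicker {

    interface NetScalerDevice {
        boolean isExclusiveGslbProvider();
        boolean isInUse();
    }

    // Skip devices dedicated to GSLB when allocating an LB provider
    // for a guest network.
    NetScalerDevice pickForGuestNetwork(List<NetScalerDevice> devices) {
        for (NetScalerDevice device : devices) {
            if (device.isExclusiveGslbProvider()) {
                continue; // reserved for GSLB service only
            }
            if (!device.isInUse()) {
                return device;
            }
        }
        return null;
    }
}
```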
Scaling up VMs was not considering the parameter cluster.(memory/cpu).allocated.capacity.disablethreshold. Fixed it.
Also added over-provisioning factor retrieval at the cluster level for the host capacity check.
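The capacity check reduces to a comparison along these lines (a sketch; the method and variable names are illustrative):

```java
public class CapacityThresholdSketch {

    // Returns true if the cluster can absorb the additional allocation
    // without crossing the allocated-capacity disable threshold.
    // Example: total=100, over-provisioning factor=2.0, threshold=0.85
    // => usable budget is 100 * 2.0 * 0.85 = 170 units of allocated capacity.
    static boolean withinThreshold(long usedCapacity, long requestedCapacity,
            long totalCapacity, float overProvisioningFactor, float disableThreshold) {
        double budget = totalCapacity * overProvisioningFactor * disableThreshold;
        return usedCapacity + requestedCapacity <= budget;
    }

    public static void main(String[] args) {
        System.out.println(withinThreshold(150, 10, 100, 2.0f, 0.85f)); // true
        System.out.println(withinThreshold(165, 10, 100, 2.0f, 0.85f)); // false
    }
}
```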
Changes:
- During VM migration, while finding a new host within the cluster, we need to set the storage pool ID in the deployment plan too.
- This indicates to the planner that the volumes are ready and there is no need to find a new pool (see the sketch after this list).
- This in turn skips the threshold check done during pool allocation. The step is not needed since no pools are newly allocated.
- Thus the migration won't fail because the threshold check fails.
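A sketch of the idea with a simplified stand-in for the deployment plan; the real plan object carries more context, but the pool ID pinning is the relevant part:

```java
public class MigrationPlanSketch {

    // Simplified stand-in for the deployment plan handed to the planner.
    static class DeploymentPlan {
        final long clusterId;
        final Long poolId; // non-null => volumes are ready, do not allocate pools

        DeploymentPlan(long clusterId, Long poolId) {
            this.clusterId = clusterId;
            this.poolId = poolId;
        }

        boolean volumesReady() {
            return poolId != null;
        }
    }

    // When migrating within the cluster, pin the existing pool in the plan so
    // the planner skips pool allocation (and hence the threshold check).
    DeploymentPlan planForMigration(long clusterId, long existingPoolId) {
        return new DeploymentPlan(clusterId, existingPoolId);
    }
}
```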
Changes:
- Added a 'virtualmachineid' parameter to the createVolume API to specify a VM for the volume. The VM should be in the 'Running' or 'Stopped' state.
- This parameter is used only when the createVolume API is called with the snapshotid parameter.
- When this parameter is set, the volume is created from the snapshot in the pod/cluster of the VM, and the volume is then attached to the VM in the same request.
- If attaching the volume fails but creation has succeeded, the API errors out but the created volume remains available; the user may attach the same volume later (see the flow sketch after this list).
- When a VM is provided but no storage pool is available in the VM's pod/cluster, the volume is not created and the API fails.
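A sketch of the server-side flow implied by the list above; all names are illustrative, not the actual service API:

```java
public class CreateVolumeFromSnapshotSketch {

    static class Volume { long id; }

    // Create the volume from the snapshot in the VM's pod/cluster, then try to
    // attach it. Attach failure leaves the created volume available for a
    // later attachVolume call.
    Volume createAndAttach(long snapshotId, long vmId) {
        requireVmStoppedOrRunning(vmId);
        if (!hasStoragePoolInVmScope(vmId)) {
            throw new IllegalStateException("No storage pool in the VM's pod/cluster");
        }
        Volume volume = createFromSnapshotNearVm(snapshotId, vmId);
        try {
            attach(volume, vmId);
        } catch (RuntimeException e) {
            // The API errors out, but the volume stays available for reuse.
            throw new IllegalStateException("Volume " + volume.id
                    + " created but attach failed", e);
        }
        return volume;
    }

    void requireVmStoppedOrRunning(long vmId) { /* state check */ }
    boolean hasStoragePoolInVmScope(long vmId) { return true; }
    Volume createFromSnapshotNearVm(long snapshotId, long vmId) { return new Volume(); }
    void attach(Volume volume, long vmId) { /* attach logic */ }
}
```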
This patch adds support for trust chains in the NetScaler.
I initially planned on using the 10.1 API's "bundle" feature, but during
my testing I found it was not working, so I am doing the chain linking
myself. Also, NS can hold only one entity per certificate: if
two different users try to add the same certificate on the NetScaler,
only one of them will go through; the other gets "resource already
exists", even though they uploaded different files.
This can be a problem for trust chains, where the chain can be shared
between multiple accounts/certificates. So I am using the fingerprint as
the identifier of a certificate and making sure that we delete it only
when nothing references it anymore.
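A sketch of fingerprint-based reference counting; SHA-1 over the DER encoding is the standard certificate fingerprint, while the bookkeeping around it is an assumption about the fix's shape:

```java
import java.security.MessageDigest;
import java.security.cert.X509Certificate;
import java.util.HashMap;
import java.util.Map;

public class CertRefCounter {

    private final Map<String, Integer> refCounts = new HashMap<>();

    // SHA-1 over the DER encoding: the usual certificate fingerprint.
    static String fingerprint(X509Certificate cert) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(cert.getEncoded());
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }

    // Register a user of the certificate; upload to the device only on first use.
    public synchronized boolean addReference(String fp) {
        int count = refCounts.merge(fp, 1, Integer::sum);
        return count == 1; // true => caller should upload the cert to the NetScaler
    }

    // Drop a reference; delete from the device only when nobody uses it anymore.
    public synchronized boolean removeReference(String fp) {
        Integer count = refCounts.get(fp);
        if (count == null) {
            return false;
        }
        if (count <= 1) {
            refCounts.remove(fp);
            return true; // true => caller should delete the cert from the NetScaler
        }
        refCounts.put(fp, count - 1);
        return false;
    }
}
```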
Volume create usage event and resource count weren't getting registered. Check the VM's type rather than whether it is a UserVm, since the code is coming from VirtualMachineManager.
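A sketch of the distinction, with simplified types standing in for the real VirtualMachine interface:

```java
public class VmTypeCheckSketch {

    enum VmType { User, DomainRouter, ConsoleProxy, SecondaryStorageVm }

    interface VirtualMachine {
        VmType getType();
    }

    // Coming through VirtualMachineManager, the object may be a generic VM
    // implementation, so an instanceof-UserVm check misses user VMs.
    // Checking the declared type works regardless of the concrete class.
    static boolean shouldEmitUsageEvent(VirtualMachine vm) {
        return vm.getType() == VmType.User;
    }
}
```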