Adding the missing file
During HA and host maintenance, call different planners (if the original planners are not able to find capacity) that skip some heuristics
Changes:
- The vmprofile owner passed to the planner should be the VM's account and not the caller's
- Do not do the access check for Root Admin (see the sketch below)
Conflicts:
server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java
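A minimal, self-contained sketch of the first change, using stand-in types rather than the real DeploymentPlanningManagerImpl/AccountManager classes (Account, VmProfile and AccessChecker below are illustrative names, not CloudStack APIs): the owner handed to the planner is resolved from the VM profile instead of the caller, and root admins skip the access check.

    // Illustrative stand-ins only; the real logic lives around the deployment
    // planning and account-manager code paths.
    class Account {
        final long id;
        final boolean rootAdmin;
        Account(long id, boolean rootAdmin) { this.id = id; this.rootAdmin = rootAdmin; }
    }

    class VmProfile {
        final Account owner; // the VM's account, resolved from the VM itself
        VmProfile(Account owner) { this.owner = owner; }
    }

    class AccessChecker {
        void checkAccess(Account caller, Account owner) {
            // stand-in for the real access check; throws on failure in CloudStack
        }

        Account resolvePlannerOwner(Account caller, VmProfile profile) {
            if (!caller.rootAdmin) {
                // non-admin callers still go through the normal access check
                checkAccess(caller, profile.owner);
            }
            // the planner is handed the VM's account, not the API caller
            return profile.owner;
        }
    }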
Link local IP addresses do not get released, resulting in all the link
local IP addresses eventually being consumed.
Fix ensures NICs with reservation strategy 'Start' go through the
release phase of the NIC life cycle, so that release is performed before
the NIC is removed, avoiding resource leaks.
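A self-contained sketch of that ordering, with stand-in types (ReservationStrategy, Nic and NicLifecycle are illustrative, not the real NetworkOrchestrator classes): release runs before removal for NICs reserved at 'Start', so resources such as link local IPs are handed back.

    // Illustrative stand-ins for the NIC life cycle.
    enum ReservationStrategy { Create, Start, Managed }

    class Nic {
        ReservationStrategy strategy = ReservationStrategy.Start;
        String linkLocalIp = "169.254.1.10"; // example of a resource that must be freed
    }

    class NicLifecycle {
        void removeNic(Nic nic) {
            // NICs reserved at 'Start' must go through release first so that
            // resources like the link local IP are freed before the NIC is removed.
            if (nic.strategy == ReservationStrategy.Start) {
                release(nic);
            }
            delete(nic);
        }

        void release(Nic nic) {
            nic.linkLocalIp = null; // stand-in for the guru releasing the link local IP
        }

        void delete(Nic nic) {
            // stand-in for removing the NIC record
        }
    }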
1. Egress default policy rules are sent to the firewall provider. It is up to the
provider to configure the rules.
2. The default policy rules are sent for both the allow and the deny default policy.
3. On network shutdown, the rules are sent again, flagged for deletion.
4. For VR and SRX, traffic is denied by default, so no default rule to deny traffic is required.
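A self-contained sketch of points 1-4 above, using simplified stand-ins for the firewall provider interface (EgressDefault, FirewallProvider and the method names are illustrative, not the real network element API): the orchestrator forwards the default egress policy in all cases and the provider decides what to program; VR and SRX deny by default, so only the allow case needs an explicit default rule.

    import java.util.List;

    // Illustrative model of forwarding the default egress policy to the provider.
    enum EgressDefault { ALLOW, DENY }

    interface FirewallProvider {
        // receives the default policy together with the user rules; 'revoke' marks deletion
        void applyEgressRules(EgressDefault defaultPolicy, List<String> rules, boolean revoke);
    }

    class VirtualRouterProvider implements FirewallProvider {
        @Override
        public void applyEgressRules(EgressDefault defaultPolicy, List<String> rules, boolean revoke) {
            // VR (like SRX) denies egress traffic by default, so no explicit
            // default-deny rule is programmed; only the allow default needs a rule.
            if (defaultPolicy == EgressDefault.ALLOW && !revoke) {
                // program the default-allow rule here, then the user rules
            }
        }
    }

    class EgressDispatcher {
        void onNetworkShutdown(FirewallProvider provider, EgressDefault defaultPolicy, List<String> rules) {
            // on network shutdown the same rules are sent again, flagged for deletion
            provider.applyEgressRules(defaultPolicy, rules, true);
        }
    }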
Changes:
- During VM migration, while finding a new host within the cluster, we need to set the storage pool ID on the deployment plan too.
- This indicates to the planner that the volumes are ready and there is no need to find a new pool.
- This in turn prevents the threshold check done during pool allocation. This step is not needed since there is no need to allocate new pools.
- Thus the migration won't fail because the threshold check fails (see the sketch below).
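A self-contained sketch of the change, with stand-in types (Plan, Volume and MigrationPlanner are illustrative; in the real code this corresponds to putting the root volume's existing pool ID into the deployment plan used for migration):

    // Illustrative stand-ins for the deployment plan used during migration.
    class Plan {
        final long clusterId;
        final Long poolId; // non-null tells planners the volumes are ready
        Plan(long clusterId, Long poolId) { this.clusterId = clusterId; this.poolId = poolId; }
    }

    class Volume {
        final long poolId;
        Volume(long poolId) { this.poolId = poolId; }
    }

    class MigrationPlanner {
        Plan buildPlanForMigration(long clusterId, Volume rootVolume) {
            // Reuse the volume's current pool in the plan. Planners then skip pool
            // allocation and its capacity-threshold check, so migration within the
            // cluster does not fail just because the pool is above the threshold.
            return new Plan(clusterId, rootVolume.poolId);
        }
    }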
Changes:
- Added a 'virtualmachineid' parameter to the createVolume API to specify a VM for the volume. The VM must be in the 'Running' or 'Stopped' state.
- This parameter is used only when the createVolume API is called with the snapshotid parameter.
- When this parameter is set, the volume is created from the snapshot in the pod/cluster of the VM, and the volume is then attached to the VM in the same request.
- If the attach fails but the create has succeeded, the API errors out but the created volume remains available. The user may attach the same volume later.
- When a VM is provided but no storage pool is available in the VM's pod/cluster, the volume is not created and the API fails (see the sketch below).
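A self-contained sketch of the control flow described in the list above; the parameter names come from the list, while VolumeRequest, VolumeService and the helper methods are illustrative stand-ins, not the real API classes:

    // Illustrative model of createVolume with snapshotid + virtualmachineid.
    class VolumeRequest {
        Long snapshotId;
        Long virtualMachineId; // optional; only honored together with snapshotId
    }

    class VolumeService {
        Object createVolume(VolumeRequest req) {
            if (req.snapshotId != null && req.virtualMachineId != null) {
                // VM must be Running or Stopped (validated in the real code).
                // Create the volume from the snapshot in the VM's pod/cluster;
                // if no storage pool is available there, this fails and nothing is created.
                Object volume = createFromSnapshotNearVm(req.snapshotId, req.virtualMachineId);
                try {
                    attach(volume, req.virtualMachineId);
                } catch (RuntimeException attachFailure) {
                    // create succeeded but attach failed: the volume is kept so the
                    // user can attach it later; the error is still surfaced
                    throw attachFailure;
                }
                return volume;
            }
            return createFromSnapshot(req.snapshotId); // original behavior
        }

        Object createFromSnapshotNearVm(long snapshotId, long vmId) { return new Object(); }
        Object createFromSnapshot(Long snapshotId) { return new Object(); }
        void attach(Object volume, long vmId) { }
    }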
Volume create usage event and resource count weren't getting registered. Check the VM's type rather than whether it is a UserVm, since the code is coming from VirtualMachineManager.
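A short stand-in model of that type check (Vm, VmType and VolumeUsageEmitter are illustrative names, not the real classes): in VirtualMachineManager the object in hand is a generic VM instance rather than a UserVm, so an instanceof check misses user VMs, while checking the declared type does not.

    // Illustrative stand-ins for the type check.
    enum VmType { User, DomainRouter, ConsoleProxy, SecondaryStorageVm }

    interface Vm { VmType getType(); }

    class VolumeUsageEmitter {
        void onVolumeCreate(Vm vm) {
            if (vm.getType() == VmType.User) {
                // publish the volume-create usage event and increment the resource count
            }
        }
    }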
The parameter is optional and true by default to preserve the original behavior
Conflicts:
api/src/org/apache/cloudstack/api/command/user/vpc/CreateVPCCmd.java
engine/orchestration/src/org/apache/cloudstack/engine/orchestration/NetworkOrchestrator.java
In VMware, during VM start the existing disk information is used to configure the VM. So even if a new disk is created from the new template, the VM continues to use the old disk.
Once the old root disk is marked for destroy, force expunge it and sync the new disk into the VM folder before the VM starts.
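A self-contained sketch of that ordering; the method names are illustrative stand-ins for the storage/VM orchestration calls, not the actual CloudStack methods:

    // Illustrative model: the old root disk is really expunged and the new disk is
    // synced into the VM folder before start, so VMware cannot reuse the old disk file.
    class VmwareRestoreFlow {
        void restoreWithNewTemplate(long vmId, long oldRootVolumeId, long newRootVolumeId) {
            markForDestroy(oldRootVolumeId);
            expunge(oldRootVolumeId);              // force expunge instead of waiting for cleanup
            syncVolumeToVmFolder(newRootVolumeId); // make the new disk visible in the VM folder
            startVm(vmId);                         // the VM now boots from the new root disk
        }

        void markForDestroy(long volumeId) { }
        void expunge(long volumeId) { }
        void syncVolumeToVmFolder(long volumeId) { }
        void startVm(long vmId) { }
    }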
Implemented the commands required for the VR to boot up and for VM deployment to work
Modified the hyperv agent code to deploy the VR with boot args; boot args are passed to the VR using the KVP Exchange component
Fix for the VR to boot up and get configured with boot args; fixed an issue in VolumeOrchestrator
Implemented SetFirewallRulesCommand in the HyperV resource
Implemented VR network commands to provide the necessary services from the VR
Fixed the hyperv local storage path URL-encode issue: encode converts a space to '+'
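A small self-contained illustration of the encoding pitfall (this demonstrates the general URLEncoder behavior, not the exact hyperv-agent fix): URLEncoder targets form encoding, so a space becomes '+', and one common workaround for path-style values is mapping '+' back to '%20' after encoding.

    import java.io.UnsupportedEncodingException;
    import java.net.URLEncoder;

    public class PathEncodeDemo {
        public static void main(String[] args) throws UnsupportedEncodingException {
            String localStoragePath = "C:\\Local Storage\\primary";

            // URLEncoder implements application/x-www-form-urlencoded, so ' ' -> '+'
            String formEncoded = URLEncoder.encode(localStoragePath, "UTF-8");
            System.out.println(formEncoded);  // C%3A%5CLocal+Storage%5Cprimary

            // workaround for path-style values: map '+' back to %20
            String pathEncoded = formEncoded.replace("+", "%20");
            System.out.println(pathEncoded);  // C%3A%5CLocal%20Storage%5Cprimary
        }
    }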
CloudStack sends requests to directly managed HV hosts (direct agents) using the direct agent thread pool. The size of the pool is determined by the global config direct.agent.pool.size, which defaults to 500.
Currently there is no restriction on the number of threads a direct agent can use from this shared thread pool to send requests to its host. This is fine as long as the host responds to requests
in a reasonable amount of time. But if there is a considerable delay in getting a response, the thread remains blocked for that long. As more commands are sent to the slow host, more threads get
blocked. This can eventually lead to a situation where requests to healthy hosts cannot be processed because there are not enough free threads.
The problem being addressed here is to localize the impact of a few bad hosts, so that the entire management server is not affected.
One way is to throttle based on the number of outstanding requests on a per-host basis. The number of outstanding requests allowed to a host is a percentage of the direct agent pool size, configurable via
direct.agent.thread.cap. The default value is 0.1 (10%); a value of 1 means the old behavior with no upper cap. This ensures that an impacted host is bound by an upper cap on the number of threads it can use to process requests, rather than consuming the entire pool (see the sketch below).
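A self-contained sketch of per-host throttling on top of a shared executor; the config names direct.agent.pool.size and direct.agent.thread.cap come from the text above, while DirectAgentThrottle and its methods are illustrative stand-ins for the direct agent manager, not the actual implementation:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    // Illustrative model: one shared pool for all direct agents, with a per-host
    // cap on outstanding requests so a slow host cannot exhaust the whole pool.
    public class DirectAgentThrottle {
        private final ExecutorService pool;
        private final int perHostCap;
        private final Map<Long, Semaphore> outstandingPerHost = new ConcurrentHashMap<>();

        public DirectAgentThrottle(int poolSize /* direct.agent.pool.size */,
                                   double threadCap /* direct.agent.thread.cap */) {
            this.pool = Executors.newFixedThreadPool(poolSize);
            // e.g. 500 * 0.1 = at most 50 in-flight requests per host;
            // a cap of 1.0 reproduces the old uncapped behavior
            this.perHostCap = Math.max(1, (int) (poolSize * threadCap));
        }

        public boolean submit(long hostId, Runnable sendCommandToHost) {
            Semaphore slots = outstandingPerHost.computeIfAbsent(hostId, id -> new Semaphore(perHostCap));
            if (!slots.tryAcquire()) {
                return false; // host already at its cap; the caller can retry or fail fast
            }
            pool.execute(() -> {
                try {
                    sendCommandToHost.run();
                } finally {
                    slots.release();
                }
            });
            return true;
        }
    }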