If cleanup=true, removes all VRs and creates a fresh VR while implementing the network.
If cleanup=false, skips running VRs and implements the network only for stopped/deleted VRs (sketched below).
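A minimal sketch of the two code paths, assuming simplified stand-in types rather than the real CloudStack interfaces:

```java
import java.util.List;

// Sketch only: NetworkManager and VirtualRouter are illustrative stand-ins,
// not the actual CloudStack classes.
class RestartNetworkSketch {
    interface VirtualRouter {}
    interface NetworkManager {
        List<VirtualRouter> routersOf(long networkId);
        void destroy(VirtualRouter vr);
        void implement(long networkId); // recreates VRs that are stopped/deleted
    }

    static void restartNetwork(NetworkManager mgr, long networkId, boolean cleanup) {
        if (cleanup) {
            // cleanup=true: tear down every VR first, so implement() below
            // creates a fresh one.
            for (VirtualRouter vr : mgr.routersOf(networkId)) {
                mgr.destroy(vr);
            }
        }
        // cleanup=false: running VRs are skipped; implement() only acts on
        // VRs that are stopped or deleted.
        mgr.implement(networkId);
    }
}
```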
Signed-off-by: Rohit Yadav <rohit.yadav@citrix.com>
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- The planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to one of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod') - see the selection sketch after this list.
- By default the value is 'random'; in that case FirstFitPlanner is invoked as before, shuffling the resource lists.
- The admin can now choose whether the deployment heuristic is applied starting at the cluster or the pod level. This is done via the
global config variable 'apply.allocation.algorithm.to.pods', which is false by default; thus, as before, the planner by default starts directly at clusters.
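A minimal sketch of the dispatch this implies, assuming simple string matching on the config value; the planner class names come from this commit, but the selection method itself is illustrative:

```java
// Illustrative dispatch from the 'vm.allocation.algorithm' config value
// to a planner; the select() helper is a hypothetical stand-in.
class PlannerSelectionSketch {
    interface DeploymentPlanner {}
    static class FirstFitPlanner implements DeploymentPlanner {}
    static class UserDispersingPlanner implements DeploymentPlanner {}
    static class UserConcentratedPodPlanner implements DeploymentPlanner {}

    static DeploymentPlanner select(String algorithm) {
        switch (algorithm) {
            case "userdispersing":      return new UserDispersingPlanner();
            case "userconcentratedpod": return new UserConcentratedPodPlanner();
            case "random":              // 'random' and 'firstfit' both go to
            case "firstfit":            // FirstFitPlanner; with 'random' it
            default:                    // shuffles the resource lists.
                return new FirstFitPlanner();
        }
    }
}
```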
'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner reordered the clusters when this heuristic was chosen.
- This is now done by a separate planner, applied only when 'vm.allocation.algorithm' is set to 'userconcentratedpod'.
- It reorders the capacity-ordered clusters/pods so that pods with more running VMs for the given account are tried first (see the sketch below).
- Note that this user concentration applies only to pods and clusters, not to hosts or storage pools within a cluster.
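A minimal sketch of the concentrated reordering, assuming the per-cluster running-VM counts for the account have already been fetched into a map (the real planner queries a DAO for this):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Sketch: reorder a capacity-ordered cluster list so clusters with more
// running VMs for the account come first. The map is a stand-in for the
// real VM-count query.
class ConcentratedReorderSketch {
    static void reorderByVmCountDesc(List<Long> clusterIds,
                                     Map<Long, Long> runningVmsPerCluster) {
        // Stable sort: clusters with equal counts keep their capacity order.
        clusterIds.sort(Comparator.comparingLong(
                (Long id) -> runningVmsPerCluster.getOrDefault(id, 0L)).reversed());
    }
}
```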
'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by the number of 'Running' VMs for the given account, in ascending order. The aim is to try first those pods/clusters that have fewer running VMs for the given account.
- The admin can assign weights to capacity and user dispersion so that both parameters are considered when reordering the pods/clusters. This is done by setting
the global config parameter 'vm.user.dispersion.weight'. The default value is 1; thus, if this planner is chosen, ordering is by default done only by the number of running VMs, unless the weight is changed (see the weighted-score sketch below).
- HostAllocators and StoragePoolAllocators likewise reorder hosts and pools in ascending order of the number of running VMs / Ready volumes, respectively, for the given account, so the host or pool within a cluster with fewer VMs for the account is tried first.
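A sketch of the weighted ordering under an assumed scoring formula; the commit does not spell out the exact normalization CloudStack uses, so the score below is illustrative:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Sketch: combine the dispersion ordering (running-VM count) and the
// capacity ordering into one score. With the default weight of 1, only
// the running-VM count matters, matching the commit's description.
class DispersionScoreSketch {
    static void reorder(List<Long> clusterIds,
                        Map<Long, Long> runningVmsForAccount,
                        Map<Long, Integer> capacityRank, // 0 = most free capacity
                        float dispersionWeight) {        // 'vm.user.dispersion.weight'
        Comparator<Long> byScore = Comparator.comparingDouble(id ->
                dispersionWeight * runningVmsForAccount.getOrDefault(id, 0L)
                + (1 - dispersionWeight) * capacityRank.getOrDefault(id, 0));
        clusterIds.sort(byScore); // ascending: fewer running VMs tried first
    }
}
```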
Changes:
- Added a new table 'hypervisor_capabilities' that records the capabilities of each hypervisor version, along with the corresponding db schema changes.
- A few capabilities have been added so far, namely 'max_guests_limit' and 'security_group_enabled'.
- Added a new column 'hypervisor_version' to the host table. The StartupRouting command now takes this parameter; it should be set when a host connects.
- If a host's hypervisor version is not present, we look up all capabilities rows for that hypervisor type and use the first record.
- 'max_guests_limit' is the maximum number of running guest VMs that a host can have for the given hypervisor.
- Host allocators use this limit and skip a host if the number of running VMs on it exceeds the limit (see the sketch below).
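A sketch of the allocator guard; the DAO and method names are hypothetical stand-ins, and whether the real comparison is strict is an assumption:

```java
// Sketch: skip a host once its running guest count reaches the
// hypervisor's max_guests_limit.
class MaxGuestsCheckSketch {
    interface HypervisorCapabilitiesDao {
        // Per the commit notes, when the host's hypervisor_version is
        // unknown, the lookup falls back to the first capabilities row
        // for the hypervisor type.
        long maxGuestsLimit(String hypervisorType, String hypervisorVersion);
    }

    static boolean hostHasRoom(HypervisorCapabilitiesDao dao, String type,
                               String version, long runningGuestsOnHost) {
        // Assumed strict comparison: room for at least one more guest.
        return runningGuestsOnHost < dao.maxGuestsLimit(type, version);
    }
}
```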
Implement pool-wise VM sync.
For XenServer, VM full sync is now pool-wise; VM delta sync is still per host (contrasted in the sketch below).
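A rough sketch contrasting the two paths, with illustrative stand-in types rather than the real XenServer resource classes:

```java
import java.util.Map;

// Sketch: full sync reconciles VM state once per pool, while delta sync
// still reports incremental changes per host.
class VmSyncSketch {
    interface XenPool { Map<String, String> listAllVmStates(); }      // one call per pool
    interface XenHost { Map<String, String> listChangedVmStates(); }  // per host

    static void fullSync(XenPool pool) {
        // Reconcile the manager's view against the whole pool at once.
        Map<String, String> states = pool.listAllVmStates();
        // ... compare with DB records and fix mismatches ...
    }

    static void deltaSync(XenHost host) {
        // Only the VMs that changed since the last report, still per host.
        Map<String, String> changed = host.listChangedVmStates();
        // ... apply incremental updates ...
    }
}
```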
Conflicts:
server/src/com/cloud/vm/VirtualMachineManagerImpl.java
status 7863: resolved fixed
The router cleanup thread is fixed; here is a description of its functionality:
* Runs every "router.cleanup.interval" (1 day by default)
* Stops only domRs running in an Advanced zone
* Thread flow (sketched after the criteria below):
- gets all running domRs/dhcps, gets their networks, and selects the networks
that have to be checked (see criteria below)
- checks that there is only one nic in the op_networks table for the
network, and that this nic belongs to the domR/dhcp
- stops the domR/dhcp
* Criteria to choose the network:
- Network has to be non-system.
- Network should be one of the following: Guest Virtual (TrafficType=Guest; GuestType=Virtual) or Direct Tagged (TrafficType=Public; GuestType=Direct)
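A minimal sketch of the cleanup pass described above; the DAO methods and router types are hypothetical stand-ins for the real internals:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class RouterCleanupSketch {
    interface Router { long networkId(); long nicId(); void stop(); }
    interface Dao {
        List<Router> runningDomRsAndDhcps();
        boolean meetsNetworkCriteria(long networkId); // non-system, Guest Virtual or Direct Tagged
        List<Long> nicsInOpNetworks(long networkId);
    }

    static void schedule(Dao dao, long intervalSeconds) { // 'router.cleanup.interval'
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        exec.scheduleAtFixedRate(() -> {
            for (Router r : dao.runningDomRsAndDhcps()) {
                if (!dao.meetsNetworkCriteria(r.networkId())) {
                    continue; // network fails the criteria above
                }
                List<Long> nics = dao.nicsInOpNetworks(r.networkId());
                // Stop the router only when its nic is the sole entry left
                // in op_networks for this network.
                if (nics.size() == 1 && nics.get(0) == r.nicId()) {
                    r.stop();
                }
            }
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }
}
```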
A couple of other fixes:
* Added isShared parameter to listNetworks command
* Moved guestType from NetworkOffering to Network
status 7348: resolved fixed
More fixes:
* Update user_statistics on each domR stop/reboot
* Reset dhcpData/userData as part of domR stop/reboot
* More logging for domR commands