Description:
1. Modify addCiscoNexusVSMCmd to enable a VSM
by default when it is added to a cluster.
2. Add two new APIs exposed to the user:
   a. EnableCiscoNexusVSMCmd
   b. DisableCiscoNexusVSMCmd
   Disabling a VSM does not delete it; it only
   prevents the Management Server from using that
   VSM. This is useful when the VSM is in
   maintenance mode (a client-side sketch of the
   new commands follows this description).
With these changes, the management server comes up and loads the
Nexus-related modules without crashing.
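As a rough illustration of how a client might drive the new commands, here is a minimal Java sketch that hits the CloudStack query API. It assumes the unauthenticated integration port (integration.api.port, commonly 8096) is open, and that the commands are exposed as enableCiscoNexusVSM / disableCiscoNexusVSM taking an "id" parameter for the VSM; the endpoint, command names, and parameter are assumptions for illustration, not taken from the final command registration.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class VsmToggleExample {
        public static void main(String[] args) throws Exception {
            String api = "http://localhost:8096/client/api"; // hypothetical management server endpoint
            String vsmId = "200";                            // hypothetical VSM id

            // Disable the VSM before putting it into maintenance mode ...
            call(api + "?command=disableCiscoNexusVSM&id=" + vsmId);
            // ... and re-enable it once maintenance is done.
            call(api + "?command=enableCiscoNexusVSM&id=" + vsmId);
        }

        // Issues a GET request and prints the raw API response.
        private static void call(String url) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("GET");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }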
Description:
1) Added a new properties file for Cisco N1kv VSM commands:
cisconexusvsm_commands.properties.in
2) Added the CiscoNexusVSMElement to the components.xml file.
3) Modified CiscoNexusVSMElement to implement NetworkElement.
The NetworkElement interface functions are not
relevant to the N1KV VSM, so we override them
with noops.
4) Added an addDao() of CiscoNexusVSMDeviceDaoImpl in populateDaos();
   otherwise the management server fails to look up the VSM's DAO at
   startup (see the sketch after this list):
   com.cloud.utils.exception.CloudRuntimeException: Unable to find DAO com.cloud.network.dao.CiscoNexusVSMDeviceDao
5) Also added the CiscoNexusVSMElementService in populateServices(),
and modified CiscoNexusVSMElement to implement Manager as well.
6) populateServices() was throwing an exception indicating that it could
   not find a commands.properties file for the Cisco N1KV VSM service.
   Fixed this by changing getProperties() in CiscoNexusVSMElement to
   return the correct string "cisconexusvsm_commands.properties" and
   adding an @Override annotation to it. Also added @Override to the
   other CiscoNexusVSMElement methods that needed it, updated
   build/developers.xml with the new file location, and did other small
   cleanup.
7) More cleanup in CiscoNexusVSMDeviceManagerImpl.
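To make the DAO lookup failure in item 4 concrete, here is a small self-contained model of the registration pattern: components are resolved by name at startup, and an unregistered DAO surfaces as the "Unable to find DAO ..." error quoted above. The ComponentRegistry/addDao/getDao names below are simplified stand-ins for the real DefaultComponentLibrary plumbing, not its actual API.

    import java.util.HashMap;
    import java.util.Map;

    public class DaoRegistrationExample {
        // Simplified stand-in for the component library's DAO registry.
        static class ComponentRegistry {
            private final Map<String, Class<?>> daos = new HashMap<>();

            void addDao(String name, Class<?> impl) {
                daos.put(name, impl);
            }

            Class<?> getDao(String name) {
                Class<?> impl = daos.get(name);
                if (impl == null) {
                    // Mirrors the startup failure quoted in item 4.
                    throw new RuntimeException("Unable to find DAO " + name);
                }
                return impl;
            }
        }

        // Stand-in for com.cloud.network.dao.CiscoNexusVSMDeviceDaoImpl.
        static class CiscoNexusVSMDeviceDaoImpl {
        }

        public static void main(String[] args) {
            ComponentRegistry registry = new ComponentRegistry();
            // The fix from item 4: register the DAO during populateDaos().
            registry.addDao("com.cloud.network.dao.CiscoNexusVSMDeviceDao",
                    CiscoNexusVSMDeviceDaoImpl.class);
            System.out.println("Resolved: "
                    + registry.getDao("com.cloud.network.dao.CiscoNexusVSMDeviceDao"));
        }
    }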
Conflicts:
server/src/com/cloud/configuration/DefaultComponentLibrary.java
2) Added a new API, changeServiceForSystemVm, to support service offering upgrades for system VMs (see the call sketch below)
3) Removed global config parameters that are no longer in use: consoleproxy.ram.size, consoleproxy.cpu.mhz, secstorage.vm.ram.size, secstorage.vm.cpu.mhz
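A minimal sketch of how the new API might be called, assuming it takes the system VM id and the target service offering id; the parameter names (id, serviceofferingid) and the integration-port endpoint are assumptions for illustration only.

    public class ChangeSystemVmOfferingExample {
        public static void main(String[] args) {
            String api = "http://localhost:8096/client/api"; // hypothetical endpoint
            String systemVmId = "3";                         // hypothetical console proxy / SSVM id
            String serviceOfferingId = "12";                 // hypothetical system service offering id

            // Build the query-API request; any HTTP client could issue it.
            String request = api
                    + "?command=changeServiceForSystemVm"
                    + "&id=" + systemVmId
                    + "&serviceofferingid=" + serviceOfferingId;
            System.out.println(request);
        }
    }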
When the elb capability is enabled on the network offering, we:
1) on each createLB command:
   * associate an IP address to the LB rule owner
   * create the LB rule
2) on each deleteLB command:
   * delete the rule
   * disassociate the IP address
The rule belongs to the owner, so the proper usage events are generated
(a sketch of this flow follows the list below).
2) Added elasticIp and elasticLb network capabilities, and support for creating network offerings with these capabilities.
3) Added one more default network offering with the elasticIp and elasticLb capabilities.
4) Added public network support to Basic zone; you can now associate/disassociate IP addresses.
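The sketch below models the createLB/deleteLB flow from item 1 under the elastic LB capability. The IpAddressService and LoadBalancingService interfaces and their methods are illustrative stand-ins rather than the actual CloudStack interfaces; the point is the ordering of the steps and that the IP address and rule are acquired on behalf of the rule owner, so the usual usage events fire for that account.

    public class ElasticLbFlowSketch {

        // Stand-in for the service that associates/disassociates public IPs.
        interface IpAddressService {
            String associateIpTo(long ownerAccountId, long networkId); // returns the public IP
            void disassociateIp(String publicIp);
        }

        // Stand-in for the load balancing rule service.
        interface LoadBalancingService {
            long createLbRule(long ownerAccountId, String publicIp, int publicPort, int privatePort);
            void deleteLbRule(long ruleId);
        }

        private final IpAddressService ipService;
        private final LoadBalancingService lbService;

        ElasticLbFlowSketch(IpAddressService ipService, LoadBalancingService lbService) {
            this.ipService = ipService;
            this.lbService = lbService;
        }

        // createLB on an elastic-LB-enabled offering: associate an IP to the rule
        // owner first, then create the LB rule on that IP.
        long handleCreateLb(long ownerAccountId, long networkId, int publicPort, int privatePort) {
            String publicIp = ipService.associateIpTo(ownerAccountId, networkId);
            return lbService.createLbRule(ownerAccountId, publicIp, publicPort, privatePort);
        }

        // deleteLB: delete the rule, then disassociate the IP that was acquired for it.
        void handleDeleteLb(long ruleId, String publicIp) {
            lbService.deleteLbRule(ruleId);
            ipService.disassociateIp(publicIp);
        }
    }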
1. Removed the GarbageCollecting primary storage allocator. When the other storage allocators fail, it may be because there is no primary storage with the matching tag, not because primary storage is out of capacity, so garbage collection would not help.
2. The delete template API now also tries to delete the template from secondary storage.
status 11497: resolved fixed
Conflicts:
server/src/com/cloud/template/HypervisorTemplateAdapter.java
Revert "bug 10837: rename api related to netapp"
This reverts commit 5db6b500dd1bbb96bfddbd7eda6cf1f616e2e0f9.
Conflicts:
api/src/com/cloud/api/commands/MigrateVolumeCmd.java
client/tomcatconf/commands-ext.properties.in
Changes:
- Enabled updating storage tags
- All existing tags are wiped out and the newly provided ones are stored.
- Note that updating tags on a storage pool does not change the deployment of already running VMs that were deployed before the tag change (see the sketch after this list).
- Also added some validation to the host tags update API.
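A tiny self-contained illustration of the wipe-and-replace semantics noted above: an update stores exactly the tags supplied rather than merging them with the existing set, so callers must always send the complete desired tag list. StoragePoolTags here is a stand-in for the real tag persistence, not CloudStack code.

    import java.util.ArrayList;
    import java.util.List;

    public class StorageTagUpdateSemantics {
        static class StoragePoolTags {
            private final List<String> tags = new ArrayList<>();

            // Mirrors the behavior described above: old tags are wiped first.
            void update(List<String> newTags) {
                tags.clear();
                tags.addAll(newTags);
            }

            List<String> current() {
                return new ArrayList<>(tags);
            }
        }

        public static void main(String[] args) {
            StoragePoolTags pool = new StoragePoolTags();
            pool.update(List.of("ssd"));
            pool.update(List.of("zone1-fast")); // "ssd" is gone after this call
            System.out.println(pool.current()); // prints [zone1-fast]
        }
    }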
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- The planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to one of the following values:
  ('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random', FirstFitPlanner is invoked as before, shuffling the resource lists.
- Admins can now choose whether the deployment heuristic should be applied starting at the cluster or the pod level, via the
  global config variable 'apply.allocation.algorithm.to.pods', which is false by default. Thus, as before, the planner by default starts directly at clusters.
'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner reordered the clusters when this heuristic was chosen.
- Now this is done by a separate planner and is applied only when 'vm.allocation.algorithm' is set to this planner.
- It reorders the capacity-ordered clusters/pods so that pods with more running VMs for the given account are tried first.
- Note that this user concentration is applied only to pods and clusters, not to hosts or storage pools within a cluster.
'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by the number of 'Running' VMs for the given account, in ascending order. The aim is to try those pods/clusters first that have fewer running VMs for the given account.
- Admins can give weights to capacity and user dispersion so that both parameters are considered when reordering the pods/clusters. This is done by setting
  the global config parameter 'vm.user.dispersion.weight'. The default value is 1, so if this planner is chosen, ordering is by default done only by the number of running VMs, unless the weight is changed (see the sketch below).
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools in ascending order of the number of running VMs / Ready volumes, respectively, for the given account, thus preferring the host or pool within a cluster that has fewer VMs for the account.
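The sketch below illustrates the kind of weighted reordering described for the dispersion planner: each cluster gets a combined score from its rank in the capacity ordering and its rank in the ascending "running VMs for this account" ordering, mixed by a dispersion weight. It does not reproduce the exact formula used by UserDispersingPlanner; it only shows that a weight of 1 orders purely by running-VM count, while smaller weights let the capacity ordering dominate.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class DispersionReorderSketch {
        static class ClusterInfo {
            final long id;
            final int capacityRank;          // 0 = most free capacity (from the capacity ordering)
            final long runningVmsForAccount; // running VMs of the deploying account in this cluster

            ClusterInfo(long id, int capacityRank, long runningVmsForAccount) {
                this.id = id;
                this.capacityRank = capacityRank;
                this.runningVmsForAccount = runningVmsForAccount;
            }
        }

        // Reorders clusters by a weighted combination of capacity rank and running-VM
        // rank; dispersionWeight plays the role of 'vm.user.dispersion.weight'.
        static List<ClusterInfo> reorder(List<ClusterInfo> clusters, double dispersionWeight) {
            List<ClusterInfo> byVmCount = new ArrayList<>(clusters);
            byVmCount.sort(Comparator.comparingLong(c -> c.runningVmsForAccount));

            List<ClusterInfo> result = new ArrayList<>(clusters);
            result.sort(Comparator.comparingDouble(c ->
                    (1.0 - dispersionWeight) * c.capacityRank
                            + dispersionWeight * byVmCount.indexOf(c)));
            return result;
        }

        public static void main(String[] args) {
            List<ClusterInfo> clusters = List.of(
                    new ClusterInfo(1, 0, 8),   // most free capacity, most VMs of this account
                    new ClusterInfo(2, 1, 2),
                    new ClusterInfo(3, 2, 0));  // least free capacity, no VMs of this account

            // Default vm.user.dispersion.weight = 1: order purely by running-VM count.
            reorder(clusters, 1.0).forEach(c -> System.out.println("cluster " + c.id));
        }
    }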