At this point, the mgmt server comes up and loads the
Nexus-related modules without crashing.
Description:
1) Added a new properties file for Cisco N1kv VSM commands:
cisconexusvsm_commands.properties.in
2) Added the CiscoNexusVSMElement to the components.xml file.
3) Modified CiscoNexusVSMElement to implement NetworkElement.
The NetworkElement interface functions are not
relevant to the N1KV VSM, so we override them
with noops.
4) Added an addDao() of CiscoNexusVSMDeviceDaoImpl in populateDaos();
otherwise the mgmt server fails at startup when looking up the VSM's DAO:
com.cloud.utils.exception.CloudRuntimeException: Unable to find DAO com.cloud.network.dao.CiscoNexusVSMDeviceDao
5) Also added the CiscoNexusVSMElementService in populateServices(),
and modified CiscoNexusVSMElement to implement Manager as well.
6) populateServices() was throwing an exception indicating it could not
find a commands.properties file for the Cisco N1KV VSM service. Fixed
it by changing getProperties() in CiscoNexusVSMElement to return the
correct string "cisconexusvsm_commands.properties" and adding an
@Override for getProperties() in CiscoNexusVSMElement. Also added
@Override to the other functions in CiscoNexusVSMElement that needed
it, updated build/developers.xml with this file location, and did
other small cleanup. (A simplified sketch of the element follows
this list.)
7) More clean up in CiscoNexusVSMDeviceManagerImpl.
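As a rough, self-contained sketch of the no-op overrides and the getProperties() fix (the interfaces below are simplified stand-ins, not the real CloudStack NetworkElement/Manager signatures):

    // Simplified stand-in interfaces; the real CloudStack interfaces have
    // more methods and different signatures.
    interface NetworkElement {
        boolean prepare(long networkId);
        boolean release(long networkId);
    }

    interface Manager {
        String getName();
    }

    class CiscoNexusVSMElementSketch implements NetworkElement, Manager {

        // Points the API layer at the VSM command definitions so that
        // populateServices() can find the commands file.
        public String getProperties() {
            return "cisconexusvsm_commands.properties";
        }

        @Override
        public boolean prepare(long networkId) {
            return true; // not relevant to the N1KV VSM: no-op
        }

        @Override
        public boolean release(long networkId) {
            return true; // no-op
        }

        @Override
        public String getName() {
            return "CiscoNexusVSMElement";
        }
    }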
Conflicts:
server/src/com/cloud/configuration/DefaultComponentLibrary.java
1) When elb capability is enabled on the network offering, we:
1) on each createLB command:
* associate ip address to the LB rule owner
* create LB rule
2) on each deleteLb command:
* delete the rule
* disassociate ip address
The rule belongs to the owner, so proper usage events are generated (see the flow sketch after this list)
2) Added elasticIp and elasticLb network capabilities. Provided support to create network offerings with these capabilities.
3) Added one more default network offering having elasticip and elasticlb.
4) Public network support for Basic zone. You can associate/disassociate IP addresses now.
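The createLB/deleteLb flow roughly looks like the sketch below; the service interfaces and method names are made up for illustration and are not the actual CloudStack APIs:

    // Control-flow sketch only; IpService/LbService are hypothetical.
    class ElasticLbFlowSketch {
        interface IpService {
            long associate(long ownerAccountId);              // returns ip id
            void disassociate(long ipId);
        }
        interface LbService {
            long createRule(long ipId, long ownerAccountId);  // returns rule id
            void deleteRule(long ruleId);
        }

        private final IpService ipService;
        private final LbService lbService;

        ElasticLbFlowSketch(IpService ipService, LbService lbService) {
            this.ipService = ipService;
            this.lbService = lbService;
        }

        long createLb(long ownerAccountId) {
            // 1) associate an ip address to the LB rule owner
            long ipId = ipService.associate(ownerAccountId);
            // 2) create the LB rule; the rule belongs to the owner, so
            //    usage events are generated against that account
            return lbService.createRule(ipId, ownerAccountId);
        }

        void deleteLb(long ruleId, long ipId) {
            // 1) delete the rule, 2) disassociate the ip address
            lbService.deleteRule(ruleId);
            ipService.disassociate(ipId);
        }
    }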
1. Remove the garbage-collection primary storage allocator. When the other storage allocators fail, the failure may be due to the primary storage tag rather than a lack of primary storage capacity, so garbage collection would not help.
2. The delete template API will also try to delete the template from secondary storage.
status 11497: resolved fixed
Conflicts:
server/src/com/cloud/template/HyervisorTemplateAdapter.java
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- Planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to either of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random', FirstFitPlanner is invoked as before, which shuffles the resource lists.
- The Admin can now choose whether the deployment heuristic should be applied starting at the cluster or the pod level. This is controlled by the
global config variable 'apply.allocation.algorithm.to.pods', which is false by default. Thus, as before, the planner starts at the cluster level by default.
'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner reordered the clusters when this heuristic was chosen.
- Now this is done by a separate planner and is applied only when 'vm.allocation.algorithm' is set to this planner.
- It reorders the capacity-ordered clusters/pods so that pods with more Running VMs for the given account are tried first.
- Note that this user concentration is applied only to pods and clusters, not to hosts or storage pools within a cluster.
'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by the number of 'Running' VMs for the given account, in ascending order. The aim is to first choose those pods/clusters that have fewer Running VMs for the given account.
- The Admin can assign weights to capacity and user dispersion so that both parameters are considered when reordering the pods/clusters. This is done by setting
the global config parameter 'vm.user.dispersion.weight'. The default value is 1, so if this planner is chosen, ordering is by default done only by the number of Running VMs, unless the weight is changed.
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools in ascending order of the number of Running VMs / Ready volumes, respectively, for the given account, thus preferring the host or pool within a cluster that has fewer VMs for the account. (A rough sketch of the reordering idea follows this list.)
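A rough sketch of the reordering idea, assuming hypothetical names (this is not the actual planner code):

    import java.util.*;

    class PlannerReorderSketch {
        // capacityOrderedClusterIds: clusters already ordered by capacity.
        // runningVmsOfAccount: Running VM count of the deploying account per cluster.
        // concentrate = true  -> UserConcentratedPod-style: most Running VMs first
        // concentrate = false -> UserDispersing-style: fewest Running VMs first
        static List<Long> reorder(List<Long> capacityOrderedClusterIds,
                                  Map<Long, Integer> runningVmsOfAccount,
                                  boolean concentrate) {
            List<Long> result = new ArrayList<>(capacityOrderedClusterIds);
            Comparator<Long> byVmCount =
                    Comparator.comparingInt(id -> runningVmsOfAccount.getOrDefault(id, 0));
            result.sort(concentrate ? byVmCount.reversed() : byVmCount);
            return result;
        }
    }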
Add a configure command for these virtual router based elements. The commands
should differ per element.
The context of the configuration will be added later.
* Moved all services to a separate table, mapping them to network_offering+provider (see the sketch after this list).
* Added state/securityGroupEnabled properties to the networkOffering.
* Added the ability to list by state/securityGroupEnabled in the listNetworkOfferings API command.
2) New service: SourceNat
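Conceptually, the new mapping can be pictured as in the sketch below; the names are illustrative, not the actual schema or code:

    import java.util.*;

    class OfferingServiceMapSketch {
        // One row per (network offering, service): which provider implements it.
        static class Row {
            final long networkOfferingId;
            final String service;   // e.g. "SourceNat", "Lb", "Dhcp"
            final String provider;  // e.g. "VirtualRouter"
            Row(long networkOfferingId, String service, String provider) {
                this.networkOfferingId = networkOfferingId;
                this.service = service;
                this.provider = provider;
            }
        }

        // All services (and their providers) enabled on a given offering.
        static List<Row> servicesOf(List<Row> map, long offeringId) {
            List<Row> out = new ArrayList<>();
            for (Row r : map) {
                if (r.networkOfferingId == offeringId) {
                    out.add(r);
                }
            }
            return out;
        }
    }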
Changes:
- Added a new interface 'PluggableService'
- Any component that can be packaged separately from CloudStack can implement this interface and provide its own property file listing the API commands the component supports
- As an example, VirtualNetworkApplianceService has been made pluggable and a new configureRouter command is added
- ComponentLocator reads all the pluggable services from the componentLibrary or from components.xml and instantiates the services.
- As an example, DefaultComponentLibrary adds the pluggable service 'VirtualNetworkApplianceService'
- Also components.xml.in has an entry to show how a pluggable service can be added, but it is commented out.
- APIServer now reads the commands for each pluggable service, and when a command for such a service is called, APIServer sets the required instance of the pluggable service in the command.
- To do this, a new annotation '@PlugService' is added that is processed by APIServer. This eliminates the dependency on BaseCmd to instantiate the service instances. (A minimal sketch follows this list.)
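A minimal sketch of the mechanism, with hypothetical member and file names (the real PluggableService and @PlugService definitions may differ):

    import java.lang.annotation.*;

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface PlugService { }

    interface PluggableService {
        // Name of the property file that lists the API commands this
        // separately-packaged component supports.
        String getPropertiesFile();
    }

    class VirtualNetworkApplianceServiceSketch implements PluggableService {
        @Override
        public String getPropertiesFile() {
            return "virtualrouter_commands.properties"; // illustrative file name
        }
    }

    // A command class declares the service it needs; the APIServer injects
    // the instance when dispatching the command.
    class ConfigureRouterCmdSketch {
        @PlugService
        PluggableService routerService;
    }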
status 11036: resolved fixed
1) Use row locks instead of a global lock when updating the resource_count table. When updating resource_count for an account, make sure we lock the account plus all related domains (see the sketch below).
2) Insert resource_count records for an account/domain at the moment the account/domain is created.
3) As part of the DB upgrade, insert missing resource_count records for all non-removed accounts/domains.
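A sketch of the row-locking idea using plain JDBC, assuming illustrative column names (the actual code goes through CloudStack's DAO/transaction layer rather than raw SQL):

    import java.sql.*;

    class ResourceCountLockSketch {
        void increment(Connection conn, long accountId, long domainId, String type, long delta)
                throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement lock = conn.prepareStatement(
                     // lock only the account's rows and its domain's rows (in the real
                     // code, the whole parent-domain chain), instead of a global lock
                     "SELECT id FROM resource_count WHERE account_id = ? OR domain_id = ? FOR UPDATE");
                 PreparedStatement update = conn.prepareStatement(
                     "UPDATE resource_count SET count = count + ? WHERE account_id = ? AND type = ?")) {
                lock.setLong(1, accountId);
                lock.setLong(2, domainId);
                lock.executeQuery();
                update.setLong(1, delta);
                update.setLong(2, accountId);
                update.setString(3, type);
                update.executeUpdate();
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }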
Conflicts:
core/src/com/cloud/alert/AlertManager.java
server/test/com/cloud/agent/MockAgentManagerImpl.java
1. Are there multiple hosts connected to the same local storage pool due to a 2.1.x bug?
2. Is there any missed premium upgrade?
If the answer to either of the above is true, the management server stops and asks the user to contact Cloud.com support.
Use a new target "system-integrity-checker" in components.xml/components-premium.xml.
All checkers must be explicitly specified in the XML file; they will execute before any components load (sketched below).
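A minimal sketch of the checker contract (names are illustrative; the real interface may differ):

    // Runs before any component loads; throwing stops the management server.
    interface SystemIntegrityChecker {
        void check();
    }

    class LocalStoragePoolCheckerSketch implements SystemIntegrityChecker {
        @Override
        public void check() {
            boolean multipleHostsOnSameLocalPool = false; // would be read from the DB
            if (multipleHostsOnSameLocalPool) {
                throw new RuntimeException(
                    "Multiple hosts attached to the same local storage pool; contact support");
            }
        }
    }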
status 10860: resolved fixed
Changes:
- Added a new Investigator, 'ManagementIPSystemVMInvestigator', that checks whether a VM is alive, only for system VMs that have a management IP address.
- If no management IP is found, the ping test cannot be done, so this investigator returns null in that case.
- The current implementation, InvestigatorImpl, is renamed to 'UserVmDomRInvestigator' and does the ping test for user VMs only.
- Corrected the ping test code that was checking a hard-coded string; now if the ping answer is negative, we just return null (a rough sketch follows this list).
- Added the new investigator to components.xml
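Roughly, the decision logic is as below (simplified; not the actual Investigator interface, which works on host/VM objects):

    class ManagementIpInvestigatorSketch {
        // Returns Boolean.TRUE if the VM is known to be alive, or null when
        // no decision can be made (no management IP, or a negative ping answer).
        Boolean isVmAlive(String managementIp) {
            if (managementIp == null) {
                return null; // cannot investigate without a management IP
            }
            boolean pingAnswer = ping(managementIp);
            return pingAnswer ? Boolean.TRUE : null; // negative answer: stay undecided
        }

        private boolean ping(String ip) {
            // placeholder for the real ping-through-agent test
            return false;
        }
    }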
- This NPE happened when starting the DomR and the podId of its Volume was null.
- This case should never happen: the podId of a Volume should not be null.
- It looks like this is a side effect of some other bug, most likely another DomR and basic networking issue (9578).
- While reviewing this bug, found out that we no longer need to use the RecreateHostAllocator. Using it is actually not good, since this allocator ignores the plan passed to it by the DeploymentPlanner and lists pods once again.
- Changed components.xml to use FirstFitRoutingAllocator instead.