Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- The planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to one of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random', FirstFitPlanner is invoked as before and shuffles the resource lists.
- Admins can now choose whether the deployment heuristic is applied starting at the cluster or the pod level. This is done via the
global config variable 'apply.allocation.algorithm.to.pods', which is false by default. Thus, by default, the planner starts at clusters directly, as before (see the example below).
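For illustration, both variables can be changed through the updateConfiguration API. This is a hedged example only: the host, port, and the open unauthenticated integration port are assumptions, and a management server restart may be needed for the change to take effect.

    http://localhost:8096/client/api?command=updateConfiguration&name=vm.allocation.algorithm&value=userdispersing
    http://localhost:8096/client/api?command=updateConfiguration&name=apply.allocation.algorithm.to.pods&value=true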
'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner reordered the clusters when this heuristic was chosen.
- Now this is done by a separate planner, applied only when 'vm.allocation.algorithm' is set to 'userconcentratedpod'.
- It reorders the capacity-ordered clusters/pods such that pods having more Running VMs for the given account are tried first.
- Note that this user concentration is applied only to pods and clusters, not to hosts or storage pools within a cluster; a sketch of the reordering idea follows.
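A minimal sketch of that reordering (not the shipped planner code; the per-pod Running-VM counts are assumed to be pre-fetched, standing in for the planner's DB lookup). List.sort is stable, so pods with equal VM counts keep the incoming capacity order:

    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;

    class ConcentrationSketch {
        // Reorder capacity-ordered pod ids so that pods with more Running VMs
        // for the account come first; ties retain the capacity ordering.
        List<Long> reorder(List<Long> capacityOrderedPodIds, Map<Long, Long> runningVmsPerPod) {
            capacityOrderedPodIds.sort(
                Comparator.comparingLong((Long podId) -> runningVmsPerPod.getOrDefault(podId, 0L))
                          .reversed());
            return capacityOrderedPodIds;
        }
    }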
'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by the number of 'Running' VMs for the given account, in ascending order. The aim is to first choose those pods/clusters which have fewer Running VMs for the given account.
- Admins can assign weights to capacity and user dispersion so that both parameters are considered when reordering the pods/clusters. This is done by setting
the global config parameter 'vm.user.dispersion.weight'. The default value is 1. Thus, if this planner is chosen, ordering is by the number of Running VMs only, unless the weight is changed (see the sketch after this list).
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools in ascending order of the number of Running VMs / Ready volumes, respectively, for the given account. Thus they try to choose the host or pool within a cluster with the fewest VMs for the account.
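A minimal sketch of the weighted ordering, under the assumption that the per-cluster VM count and used-capacity ratio are pre-computed and normalized to [0, 1] (the shipped planner's normalization may differ). With the default weight of 1, the capacity term vanishes and ordering reduces to the Running-VM count alone:

    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;

    class DispersionSketch {
        // score = weight * vmCountShare + (1 - weight) * usedCapacityShare;
        // ascending sort puts the least-used cluster for this account first.
        List<Long> reorder(List<Long> clusterIds, Map<Long, Double> vmCountShare,
                           Map<Long, Double> usedCapacityShare, double userDispersionWeight) {
            clusterIds.sort(Comparator.comparingDouble((Long id) ->
                userDispersionWeight * vmCountShare.get(id)
                    + (1.0 - userDispersionWeight) * usedCapacityShare.get(id)));
            return clusterIds;
        }
    }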
If the virtual router provides DHCP but not DNS service, the DHCP response now
contains the external DNS server's address rather than the domR's own address, so the user
VM uses the specified DNS server directly.
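Assuming the router's dnsmasq-based DHCP, this roughly corresponds to an option along the following lines (the address is illustrative, and the exact config the router writes may differ):

    # hand out the external DNS server (DHCP option 6) instead of the domR's own IP
    dhcp-option=6,10.1.1.100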
listSupportedNetworkServiceProviders returns the list of services with the providers and capabilities of each service.
It supports 2 parameters:
- service: list the providers and capabilities of this service
- provider: list the services of this provider
- if neither is specified, all supported services are listed
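Hypothetical invocations against the unauthenticated integration API port (the host, port, and the 'Dhcp'/'VirtualRouter' values are illustrative):

    http://localhost:8096/client/api?command=listSupportedNetworkServiceProviders&service=Dhcp
    http://localhost:8096/client/api?command=listSupportedNetworkServiceProviders&provider=VirtualRouter
    http://localhost:8096/client/api?command=listSupportedNetworkServiceProviders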
It's due to a security fix for OpenJDK 1.6.0 added by Red Hat. Here is an excerpt
from [RHSA-2011:1380-01] Critical: java-1.6.0-openjdk security update
(https://www.redhat.com/archives/rhsa-announce/2011-October/msg00011.html):
A flaw was found in the way the SSL 3 and TLS 1.0 protocols used block
ciphers in cipher-block chaining (CBC) mode. An attacker able to perform a
chosen plain text attack against a connection mixing trusted and untrusted
data could use this flaw to recover portions of the trusted data sent over
the connection. (CVE-2011-3389)
Note: This update mitigates the CVE-2011-3389 issue by splitting the first
application data record byte to a separate SSL/TLS protocol record. This
mitigation may cause compatibility issues with some SSL/TLS implementations
and can be disabled using the jsse.enableCBCProtection boolean property.
This can be done on the command line by appending the flag
"-Djsse.enableCBCProtection=false" to the java command.
To our knowledge, two conditions need to be met to trigger this bug:
1. Using an old keystore generated by mgmt server 2.2.8, which is signed with
SHA1withDSA. Any version later than 2.2.8 generates a keystore signed with
SHA1withRSA; the RSA one seems fine for us so far.
2. Using OpenJDK >= 1.6.0.
The reason is that, due to the security fix above, the assumption that one packet
contains only one SSL record is broken. The decrypted data may contain
only the first byte of the original application data, which then results in a buffer
underflow when the mgmt server wants to read more from it.
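A minimal sketch of the record-reassembly idea behind such a fix, assuming an SSLEngine-based read path (an illustration of the technique, not the actual patch): keep reading from the socket until unwrap() yields a complete record, instead of assuming one read() returns one whole record.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLEngineResult;

    class RecordReassemblySketch {
        // Read until a complete TLS record is available; the 1-byte records
        // produced by the CBC mitigation then reassemble instead of causing
        // a buffer underflow.
        ByteBuffer readRecord(SocketChannel channel, SSLEngine engine) throws IOException {
            ByteBuffer netIn = ByteBuffer.allocate(engine.getSession().getPacketBufferSize());
            ByteBuffer appIn = ByteBuffer.allocate(engine.getSession().getApplicationBufferSize());
            if (channel.read(netIn) < 0) throw new IOException("connection closed");
            netIn.flip();
            SSLEngineResult result = engine.unwrap(netIn, appIn);
            while (result.getStatus() == SSLEngineResult.Status.BUFFER_UNDERFLOW) {
                netIn.compact();                      // keep the partial record bytes
                if (channel.read(netIn) < 0) throw new IOException("connection closed");
                netIn.flip();
                result = engine.unwrap(netIn, appIn); // retry with more data
            }
            appIn.flip();
            return appIn;
        }
    }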
To work around it, per the message above, adding
"-Djsse.enableCBCProtection=false" to the JAVA_OPTS line in tomcat6.conf works.
Note that the parameter is only recognized by the latest versions of OpenJDK, so simply
adding it to every setup would not work.
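For example, the relevant line in /etc/tomcat6/tomcat6.conf would end up looking roughly like this (the path and the surrounding options are illustrative):

    JAVA_OPTS="${JAVA_OPTS} -Djsse.enableCBCProtection=false"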
This patch provides a fix for it.
status 11904: resolved fixed