Description:
Code changes to manage Cisco Nexus 1000v in CloudStack.
VmwareResource has been modified to leverage Nexus vSwitch.
Provides the following global configuration parameters:
vmware.use.nexus.vswitch -
This decides whether the Nexus vSwitch in the VMware
cluster environment is used/managed by CloudStack
for its network infrastructure needs.
vmware.guest.network.vswitch.type -
This setting enables CloudStack to use the Nexus vSwitch
in the VMware cluster environment for guest traffic.
vmware.private.network.vswitch.type -
This setting enables CloudStack to use the Nexus vSwitch
in the VMware cluster environment for private traffic.
vmware.public.network.vswitch.type -
This setting enables CloudStack to use the Nexus vSwitch
in the VMware cluster environment for public traffic.
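A minimal sketch (not the actual VmwareResource code) of how these settings could drive vSwitch selection per traffic type; the class, the helper method, and the "nexusdvs"/"vmwaresvs" value strings are assumptions for illustration only:

    import java.util.Map;

    // Hypothetical illustration of how the global settings above could drive
    // vSwitch selection per traffic type; not the actual VmwareResource logic.
    public class VswitchSelectionSketch {

        // Assumed value constants; the real CloudStack type strings may differ.
        static final String NEXUS_DVS = "nexusdvs";
        static final String VMWARE_STANDARD_VSWITCH = "vmwaresvs";

        /**
         * Returns the vSwitch type to use for the given traffic type
         * ("guest", "private" or "public"), based on the global settings.
         */
        static String selectVswitchType(Map<String, String> globalConfig, String trafficType) {
            boolean useNexus = Boolean.parseBoolean(
                    globalConfig.getOrDefault("vmware.use.nexus.vswitch", "false"));
            if (!useNexus) {
                // Nexus vSwitch management disabled: fall back to the standard vSwitch.
                return VMWARE_STANDARD_VSWITCH;
            }
            String key = "vmware." + trafficType + ".network.vswitch.type";
            return globalConfig.getOrDefault(key, NEXUS_DVS);
        }

        public static void main(String[] args) {
            Map<String, String> config = Map.of(
                    "vmware.use.nexus.vswitch", "true",
                    "vmware.guest.network.vswitch.type", NEXUS_DVS,
                    "vmware.public.network.vswitch.type", NEXUS_DVS);
            System.out.println(selectVswitchType(config, "guest"));   // nexusdvs
            System.out.println(selectVswitchType(config, "private")); // nexusdvs (default when enabled)
        }
    }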
Functional Specification -
http://wiki.cloudstack.org/display/RelOps/Cisco+Nexus+1000v+Support+in+CloudStack+-+Functional+Specification
Documentation / README for usage instructions -
http://wiki.cloudstack.org/display/RelOps/Configuration+instructions+for+CloudStack+Deployment+with+Nexus+vSwitch
Conflicts:
core/src/com/cloud/hypervisor/vmware/manager/VmwareManager.java
core/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
server/src/com/cloud/hypervisor/vmware/VmwareServerDiscoverer.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/HostMO.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/HypervisorHostHelper.java
vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContext.java
This fix enables support for multiple NetScaler devices providing EIP service in the same zone.
- Introduced global setting "eip.use.multiple.netscalers" to turn on multiple NetScaler support
- Enhanced the configureNetscalerLoadBalancer API to take the PBR setup between the pod's subnet
and the NetScaler device
- Added logic to pick a NetScaler device (based on the guest IP and the corresponding pod) while configuring the INAT rule
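A rough sketch of the device selection described above; the PodMapping record, the device ids, and the CIDR matching are hypothetical illustrations, not the actual NetScaler element code:

    import java.util.List;

    // Hypothetical sketch of picking the NetScaler device whose pod subnet
    // contains the guest IP; class and field names are illustrative only.
    public class NetscalerSelectionSketch {

        record PodMapping(String podCidr, long netscalerDeviceId) {}

        /** Converts a dotted IPv4 address to a 32-bit value. */
        static long ipToLong(String ip) {
            long value = 0;
            for (String octet : ip.split("\\.")) {
                value = (value << 8) | Integer.parseInt(octet);
            }
            return value;
        }

        /** True if the given IPv4 address falls inside the CIDR. */
        static boolean cidrContains(String cidr, String ip) {
            String[] parts = cidr.split("/");
            int prefix = Integer.parseInt(parts[1]);
            long mask = prefix == 0 ? 0 : (~0L << (32 - prefix)) & 0xFFFFFFFFL;
            return (ipToLong(parts[0]) & mask) == (ipToLong(ip) & mask);
        }

        /** Picks the NetScaler device serving the pod subnet that contains the guest IP. */
        static Long pickNetscalerForGuestIp(List<PodMapping> mappings, String guestIp) {
            for (PodMapping m : mappings) {
                if (cidrContains(m.podCidr(), guestIp)) {
                    return m.netscalerDeviceId();
                }
            }
            return null; // no pod subnet matches; the INAT rule cannot be placed
        }

        public static void main(String[] args) {
            List<PodMapping> mappings = List.of(
                    new PodMapping("10.1.1.0/24", 101L),
                    new PodMapping("10.1.2.0/24", 102L));
            System.out.println(pickNetscalerForGuestIp(mappings, "10.1.2.15")); // 102
        }
    }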
2) Added new API - changeServiceForSystemVm - to support service offering upgrades for system VMs (usage sketch follows item 3 below)
3) Removed global config parameters that are not in use anymore: consoleproxy.ram.size, consoleproxy.cpu.mhz, secstorage.vm.ram.size, secstorage.vm.cpu.mhz
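A rough usage sketch for the new changeServiceForSystemVm API, assuming the usual /client/api endpoint; the id/serviceofferingid parameter names follow common CloudStack conventions rather than verified documentation, and request signing is omitted, so treat it as illustrative only:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Rough usage sketch of the new API; endpoint, parameter names and the
    // omitted request signing are assumptions, not the documented interface.
    public class ChangeServiceForSystemVmSketch {
        public static void main(String[] args) throws Exception {
            String url = "http://mgmt-server:8080/client/api"
                    + "?command=changeServiceForSystemVm"
                    + "&id=SYSTEM_VM_ID"                     // placeholder system VM id
                    + "&serviceofferingid=NEW_OFFERING_ID"   // placeholder offering id
                    + "&response=json";
            // A real call must also carry apiKey plus a signature (or a session key).
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }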
Changes:
To migrate systems using 'use.user.concentrated.pod.allocation' set to true together with 'vm.allocation.algorithm', the following
changes are needed:
- There will be 5 possible values for 'vm.allocation.algorithm': 'random', 'firstfit', 'userdispersing', 'userconcentratedpod_random', 'userconcentratedpod_firstfit'
- 'userconcentratedpod_random' means we apply user concentration to pods and clusters. To hosts and pools we use random ordering.
- 'userconcentratedpod_firstfit' means we apply user concentration to pods and clusters. To hosts and pools we use firstfit ordering.
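A hypothetical sketch of how the five values could map to the ordering applied at each level; the enum and record names are illustrative, and the pod/cluster ordering assumed for 'random', 'firstfit' and 'userdispersing' is a guess rather than something taken from the planner code:

    // Hypothetical sketch of mapping the 'vm.allocation.algorithm' value to the
    // ordering used at each level; names are illustrative, not the planner code.
    public class AllocationAlgorithmSketch {

        enum Ordering { RANDOM, FIRST_FIT, USER_DISPERSING, USER_CONCENTRATED }

        record AllocationPlan(Ordering podAndClusterOrdering, Ordering hostAndPoolOrdering) {}

        static AllocationPlan fromConfig(String value) {
            switch (value) {
                case "random":
                    return new AllocationPlan(Ordering.RANDOM, Ordering.RANDOM);
                case "firstfit":
                    return new AllocationPlan(Ordering.FIRST_FIT, Ordering.FIRST_FIT);
                case "userdispersing":
                    return new AllocationPlan(Ordering.USER_DISPERSING, Ordering.USER_DISPERSING);
                case "userconcentratedpod_random":
                    // user concentration for pods/clusters, random ordering for hosts/pools
                    return new AllocationPlan(Ordering.USER_CONCENTRATED, Ordering.RANDOM);
                case "userconcentratedpod_firstfit":
                    // user concentration for pods/clusters, firstfit ordering for hosts/pools
                    return new AllocationPlan(Ordering.USER_CONCENTRATED, Ordering.FIRST_FIT);
                default:
                    throw new IllegalArgumentException("Unknown vm.allocation.algorithm: " + value);
            }
        }

        public static void main(String[] args) {
            System.out.println(fromConfig("userconcentratedpod_firstfit"));
        }
    }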
Changes:
- These global settings are no longer needed and will be hidden starting with 3.0
- The default traffic label will be picked from the global setting, which is null by default. When the traffic label is null, the resource uses the tag on the default gateway
- Changes to invoke the discoverer to reload the resource object on host connection
- Since a zone can have many physical networks, there can be multiple guest and public networks. Only the zone-wide storage and management traffic labels will be stored in host_details henceforth.
- If traffic labels are updated, the discoverer should update host_details
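A small sketch of the fallback described above, assuming a helper that prefers the physical-network label, then the global default, and finally (when both are null) the tag on the default gateway; the names are illustrative only:

    // Hypothetical sketch of the traffic-label fallback described above: a null
    // label means the resource falls back to the tag on the default gateway.
    public class TrafficLabelSketch {

        /**
         * Resolves the label to use for a traffic type. The physical-network label
         * wins; when it is null or empty, the global default applies; when that is
         * also null, the caller should use the default gateway's tag.
         */
        static String resolveTrafficLabel(String physicalNetworkLabel, String globalDefaultLabel) {
            if (physicalNetworkLabel != null && !physicalNetworkLabel.isEmpty()) {
                return physicalNetworkLabel;
            }
            return globalDefaultLabel; // may be null -> use the tag on the default gateway
        }

        public static void main(String[] args) {
            System.out.println(resolveTrafficLabel("cloud-guest", null)); // cloud-guest
            System.out.println(resolveTrafficLabel(null, null));          // null -> default gateway tag
        }
    }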
Change the default mount path on the OS where the management server is running to /var/lib/cloud/management/mnt.
Since /var/lib/cloud/management is the home directory of user 'cloud', the management server has full permission
to manipulate it.
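A minimal sketch, assuming the management server prepares the new mount root itself at startup; the constant and the standalone main method are illustrative only:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Minimal sketch of preparing the new mount root; the constant and the
    // ownership assumption (the process runs as user 'cloud') are illustrative.
    public class MountPathSketch {
        static final Path MOUNT_ROOT = Path.of("/var/lib/cloud/management/mnt");

        public static void main(String[] args) throws IOException {
            // Since /var/lib/cloud/management is the 'cloud' user's home directory,
            // the management server can create and manage this subtree itself.
            Files.createDirectories(MOUNT_ROOT);
            System.out.println("Mount root ready at " + MOUNT_ROOT);
        }
    }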
status 12956: resolved fixed
Admins can either configure system.vm.default.hypervisor, which is a global configuration for all zones, or call updateZone with the defaultSystemVMHypervisorType parameter.
status 12139: resolved fixed
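A rough sketch of the per-zone option, assuming the usual /client/api endpoint; the defaultSystemVMHypervisorType parameter name is taken from the note above rather than verified documentation, and request signing is omitted:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Rough sketch of overriding the system VM hypervisor type for one zone;
    // the parameter name comes from the commit note and signing is omitted.
    public class UpdateZoneHypervisorSketch {
        public static void main(String[] args) throws Exception {
            String url = "http://mgmt-server:8080/client/api"
                    + "?command=updateZone"
                    + "&id=ZONE_ID"                                  // placeholder zone id
                    + "&defaultSystemVMHypervisorType=XenServer"     // per-zone override
                    + "&response=json";
            // The global alternative is to set system.vm.default.hypervisor instead.
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }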