diff --git a/docs/en-US/host-add-xenserver-kvm-ovm.xml b/docs/en-US/host-add-xenserver-kvm-ovm.xml
index 4bbeefcbed4..1f13e72d4c3 100644
--- a/docs/en-US/host-add-xenserver-kvm-ovm.xml
+++ b/docs/en-US/host-add-xenserver-kvm-ovm.xml
@@ -83,6 +83,11 @@
     <para>Make sure the new host has the same network configuration (guest, private, and public network) as other hosts in the cluster.</para>
+    <note>
+      <para>If you are using OpenVswitch bridges, edit the file agent.properties on the KVM
+        host and set the parameter network.bridge.type to openvswitch before adding the
+        host to &PRODUCT;.</para>
+    </note>
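+    <para>A minimal sketch of the resulting setting; the path
+      /etc/cloudstack/agent/agent.properties is an assumption and may differ per
+      distribution and &PRODUCT; version:</para>
+    <programlisting>
+# /etc/cloudstack/agent/agent.properties (assumed path)
+# Tell the agent to create its bridges with OpenVswitch instead of
+# the native Linux bridge module.
+network.bridge.type=openvswitch
+</programlisting>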
diff --git a/docs/en-US/hypervisor-host-install-network-openvswitch.xml b/docs/en-US/hypervisor-host-install-network-openvswitch.xml
new file mode 100644
--- /dev/null
+++ b/docs/en-US/hypervisor-host-install-network-openvswitch.xml
+<?xml version='1.0' encoding='utf-8' ?>
+<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+<!ENTITY % BOOK_ENTITIES SYSTEM "cloudstack.ent">
+%BOOK_ENTITIES;
+]>
+<section id="hypervisor-host-install-network-openvswitch">
+  <title>Configure the network using OpenVswitch</title>
+  <para>This is a very important section; please make sure you read it thoroughly.</para>
+  <para>In order to forward traffic to your instances you will need at least two bridges: public and private.</para>
+  <para>By default these bridges are called cloudbr0 and cloudbr1, but you do have to make sure they are available on each hypervisor.</para>
+  <para>The most important factor is that you keep the configuration consistent on all your hypervisors.</para>
+  <section id="hypervisor-host-install-network-openvswitch-preparing">
+    <title>Preparing</title>
+    <para>To make sure that the native bridge module will not interfere with openvswitch,
+      the bridge module should be added to the blacklist. See the modprobe documentation
+      for your distribution on where to find the blacklist. Make sure the module is not
+      loaded, either by rebooting or by executing rmmod bridge, before executing the next
+      steps.</para>
+    <para>The network configurations below depend on the ifup-ovs and ifdown-ovs scripts
+      which are part of the openvswitch installation. They should be installed in
+      /etc/sysconfig/network-scripts/</para>
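+    <para>On RHEL/CentOS this typically boils down to the following commands; the
+      blacklist file name is an assumption, check the modprobe documentation for your
+      distribution:</para>
+    <programlisting>
+# Keep the native bridge module from loading at boot (assumed file name)
+echo "blacklist bridge" >> /etc/modprobe.d/blacklist.conf
+# Unload the module now if it is already loaded
+rmmod bridge
+</programlisting>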
+  </section>
+  <section id="hypervisor-host-install-network-openvswitch-example">
+    <title>Network example</title>
+    <para>There are many ways to configure your network. In the Basic networking mode you
+      should have two (V)LANs, one for your private network and one for the public
+      network.</para>
+    <para>We assume that the hypervisor has one NIC (eth0) with three tagged VLANs:</para>
+    <itemizedlist>
+      <listitem><para>VLAN 100 for management of the hypervisor</para></listitem>
+      <listitem><para>VLAN 200 for public network of the instances (cloudbr0)</para></listitem>
+      <listitem><para>VLAN 300 for private network of the instances (cloudbr1)</para></listitem>
+    </itemizedlist>
+    <para>On VLAN 100 we give the hypervisor the IP address 192.168.42.11/24 with the
+      gateway 192.168.42.1</para>
+    <note><para>The hypervisor and management server don't have to be in the same
+      subnet!</para></note>
+  </section>
+  <section id="hypervisor-host-install-network-openvswitch-bridges">
+    <title>Configuring the network bridges</title>
+    <para>How to configure these bridges depends on the distribution you are using; below
+      you'll find examples for RHEL/CentOS.</para>
+    <note><para>The goal is to have three bridges called 'mgmt0', 'cloudbr0' and
+      'cloudbr1' after this section. This should be used as a guideline only. The exact
+      configuration will depend on your network layout.</para></note>
+    <section id="hypervisor-host-install-network-openvswitch-bridges-ovs">
+      <title>Configure OpenVswitch</title>
+      <para>The network interfaces using OpenVswitch are created using the ovs-vsctl
+        command. This command configures the interfaces and persists them to the
+        OpenVswitch database.</para>
+      <para>First we create a main bridge connected to the eth0 interface. Next we create
+        three fake bridges, each connected to a specific VLAN tag.</para>
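+      <para>A sketch of the commands, assuming the VLAN layout from the network example
+        above (100 for management, 200 for public, 300 for private); adjust the bridge
+        names, interface and tags to your own layout:</para>
+      <programlisting>
+# Main bridge trunking all three VLANs over eth0
+ovs-vsctl add-br cloudbr
+ovs-vsctl add-port cloudbr eth0
+ovs-vsctl set port cloudbr trunks=100,200,300
+# Fake bridges, one per VLAN tag
+ovs-vsctl add-br mgmt0 cloudbr 100
+ovs-vsctl add-br cloudbr0 cloudbr 200
+ovs-vsctl add-br cloudbr1 cloudbr 300
+</programlisting>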
+    </section>
+    <section id="hypervisor-host-install-network-openvswitch-bridges-rhel">
+      <title>Configure in RHEL or CentOS</title>
+      <para>The required packages were installed when openvswitch and libvirt were
+        installed, so we can proceed to configuring the network.</para>
+      <para>First we configure eth0</para>
+      <programlisting>vi /etc/sysconfig/network-scripts/ifcfg-eth0</programlisting>
+      <para>Make sure it looks similar to the first example file shown below.</para>
+      <para>We have to configure the base bridge with the trunk.</para>
+      <programlisting>vi /etc/sysconfig/network-scripts/ifcfg-cloudbr</programlisting>
+      <para>We now have to configure the three VLAN bridges:</para>
+      <programlisting>vi /etc/sysconfig/network-scripts/ifcfg-mgmt0</programlisting>
+      <programlisting>vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0</programlisting>
+      <programlisting>vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1</programlisting>
+      <para>With this configuration you should be able to restart the network, although a
+        reboot is recommended to see if everything works properly.</para>
+      <warning><para>Make sure you have an alternative way like IPMI or ILO to reach the
+        machine in case you made a configuration error and the network stops
+        functioning!</para></warning>
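+      <para>The following example files are a sketch matching the VLAN layout above; the
+        HWADDR value is a placeholder and the addresses are the example values, so adapt
+        them to your environment. First ifcfg-eth0:</para>
+      <programlisting>
+DEVICE=eth0
+HWADDR=00:04:xx:xx:xx:xx
+ONBOOT=yes
+HOTPLUG=no
+BOOTPROTO=none
+TYPE=Ethernet
+</programlisting>
+      <para>ifcfg-cloudbr, the base bridge carrying the trunk:</para>
+      <programlisting>
+DEVICE=cloudbr
+ONBOOT=yes
+HOTPLUG=no
+BOOTPROTO=none
+DEVICETYPE=ovs
+TYPE=OVSBridge
+</programlisting>
+      <para>ifcfg-mgmt0, the management bridge carrying the hypervisor's IP address:</para>
+      <programlisting>
+DEVICE=mgmt0
+ONBOOT=yes
+HOTPLUG=no
+BOOTPROTO=static
+DEVICETYPE=ovs
+TYPE=OVSBridge
+IPADDR=192.168.42.11
+GATEWAY=192.168.42.1
+NETMASK=255.255.255.0
+</programlisting>
+      <para>ifcfg-cloudbr0 and ifcfg-cloudbr1 look the same apart from the DEVICE
+        line:</para>
+      <programlisting>
+DEVICE=cloudbr0
+ONBOOT=yes
+HOTPLUG=no
+BOOTPROTO=none
+DEVICETYPE=ovs
+TYPE=OVSBridge
+</programlisting>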
+    </section>
+  </section>
+</section>
diff --git a/docs/en-US/hypervisor-host-install-network.xml b/docs/en-US/hypervisor-host-install-network.xml
index 8f6a10cdd69..3a6dfac48bd 100644
--- a/docs/en-US/hypervisor-host-install-network.xml
+++ b/docs/en-US/hypervisor-host-install-network.xml
@@ -25,6 +25,7 @@
   <title>Configure the network bridges</title>
   <para>This is a very important section; please make sure you read it thoroughly.</para>
+  <para>This section details how to configure bridges using the native implementation in
+    Linux. Please refer to the next section if you intend to use OpenVswitch.</para>
   <para>In order to forward traffic to your instances you will need at least two bridges: public and private.</para>
   <para>By default these bridges are called cloudbr0 and cloudbr1, but you do have to make sure they are available on each hypervisor.</para>
   <para>The most important factor is that you keep the configuration consistent on all your hypervisors.</para>
@@ -146,4 +147,4 @@ iface cloudbr1 inet manual
   <para>Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!</para>
-</section>
\ No newline at end of file
+</section>
diff --git a/docs/en-US/hypervisor-kvm-install-flow.xml b/docs/en-US/hypervisor-kvm-install-flow.xml
index 76e03ef7919..6cc73e4fdfa 100644
--- a/docs/en-US/hypervisor-kvm-install-flow.xml
+++ b/docs/en-US/hypervisor-kvm-install-flow.xml
@@ -1,5 +1,5 @@
 <!ENTITY % BOOK_ENTITIES SYSTEM "cloudstack.ent">
 %BOOK_ENTITIES;
 ]>
@@ -31,6 +31,7 @@
     <xi:include href="hypervisor-host-install-network.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
+    <xi:include href="hypervisor-host-install-network-openvswitch.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
diff --git a/docs/en-US/hypervisor-kvm-requirements.xml b/docs/en-US/hypervisor-kvm-requirements.xml
index c42db86a2b8..cdfc808e490 100644
--- a/docs/en-US/hypervisor-kvm-requirements.xml
+++ b/docs/en-US/hypervisor-kvm-requirements.xml
@@ -1,5 +1,5 @@
 <!ENTITY % BOOK_ENTITIES SYSTEM "cloudstack.ent">
 %BOOK_ENTITIES;
 ]>
@@ -35,6 +35,11 @@
     <listitem><para>libvirt: 0.9.4 or higher</para></listitem>
     <listitem><para>Qemu/KVM: 1.0 or higher</para></listitem>
   </itemizedlist>
+  <para>The default bridge in &PRODUCT; is the Linux native bridge implementation (bridge module). &PRODUCT; includes an option to work with OpenVswitch; the requirements are listed below:</para>
+  <itemizedlist>
+    <listitem><para>libvirt: 0.9.11 or higher</para></listitem>
+    <listitem><para>openvswitch: 1.7.1 or higher</para></listitem>
+  </itemizedlist>
   <para>In addition, the following hardware requirements apply:</para>
   <itemizedlist>
     <listitem><para>Within a single cluster, the hosts must be of the same distribution version.</para></listitem>
diff --git a/docs/en-US/plugin-niciranvp-features.xml b/docs/en-US/plugin-niciranvp-features.xml
index b67323d56d2..b71e67f4199 100644
--- a/docs/en-US/plugin-niciranvp-features.xml
+++ b/docs/en-US/plugin-niciranvp-features.xml
@@ -24,6 +24,10 @@
   <title>Features of the Nicira NVP Plugin</title>
   <para>In CloudStack release 4.0.0-incubating this plugin supports the Connectivity service. This service is responsible for creating Layer 2 networks supporting the networks created by Guests. In other words, when a tenant creates a new network, instead of the traditional VLAN a logical network will be created by sending the appropriate calls to the Nicira NVP Controller.</para>
   <para>The plugin has been tested with Nicira NVP versions 2.1.0, 2.2.0 and 2.2.1</para>
-  <para>In CloudStack 4.0.0-incubating only the XenServer hypervisor is supported for use in combination with Nicira NVP</para>
-  <para>In CloudStack 4.0.0-incubating the UI components for this plugin are not complete, configuration is done by sending commands to the API</para>
+  <para>In CloudStack 4.0.0-incubating only the XenServer hypervisor is supported for use in
+    combination with Nicira NVP.</para>
+  <para>In CloudStack 4.1.0-incubating both KVM and XenServer hypervisors are
+    supported.</para>
+  <para>In CloudStack 4.0.0-incubating the UI components for this plugin are not complete;
+    configuration is done by sending commands to the API.</para>
diff --git a/docs/en-US/plugin-niciranvp-preparations.xml b/docs/en-US/plugin-niciranvp-preparations.xml
index 95a25bdca26..86b795ccd0b 100644
--- a/docs/en-US/plugin-niciranvp-preparations.xml
+++ b/docs/en-US/plugin-niciranvp-preparations.xml
@@ -24,7 +24,9 @@
   <title>Prerequisites</title>
   <para>Before enabling the Nicira NVP plugin the NVP Controller needs to be configured. Please review the NVP User Guide on how to do that.</para>
   <para>CloudStack needs to have at least one physical network with the isolation method set to "STT". This network should be enabled for the Guest traffic type.</para>
-  <para>The Guest traffic type should be configured with the traffic label that matches the name of the Integration Bridge on XenServer. See the Nicira NVP User Guide for more details on how to set this up in XenServer.</para>
+  <para>The Guest traffic type should be configured with the traffic label that matches the name of
+    the Integration Bridge on the hypervisor. See the Nicira NVP User Guide for more details
+    on how to set this up in XenServer or KVM.</para>
   <para>Make sure you have the following information ready:</para>
   <itemizedlist>
     <listitem><para>The IP address of the NVP Controller</para></listitem>
diff --git a/docs/en-US/plugin-niciranvp-ui.xml b/docs/en-US/plugin-niciranvp-ui.xml
new file mode 100644
index 00000000000..8b1bbad8395
--- /dev/null
+++ b/docs/en-US/plugin-niciranvp-ui.xml
@@ -0,0 +1,26 @@
+<?xml version='1.0' encoding='utf-8' ?>
+<!-- Licensed to the Apache Software Foundation (ASF) under the Apache License,
+     Version 2.0; see the NOTICE file distributed with this work. -->
+<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+<!ENTITY % BOOK_ENTITIES SYSTEM "cloudstack.ent">
+%BOOK_ENTITIES;
+<!ENTITY % xinclude SYSTEM "xinclude.mod">
+%xinclude;
+]>
+<section id="plugin-niciranvp-ui">
+  <title>Configuring the Nicira NVP plugin from the UI</title>
+  <para>In CloudStack 4.1.0-incubating the Nicira NVP plugin and its resources can be
+    configured in the infrastructure tab of the UI. Navigate to the physical network with
+    STT isolation and configure the network elements. The NiciraNvp element is listed
+    there.</para>
+</section>
diff --git a/docs/en-US/plugin-niciranvp-usage.xml b/docs/en-US/plugin-niciranvp-usage.xml
index 17413387ea4..76f9a0b5b05 100644
--- a/docs/en-US/plugin-niciranvp-usage.xml
+++ b/docs/en-US/plugin-niciranvp-usage.xml
@@ -24,6 +24,7 @@
   <title>Using the Nicira NVP Plugin</title>
+  <xi:include href="plugin-niciranvp-ui.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />