diff --git a/docs/en-US/add-load-balancer-rule.xml b/docs/en-US/add-load-balancer-rule.xml index fca54f94734..8cd0da4b7da 100644 --- a/docs/en-US/add-load-balancer-rule.xml +++ b/docs/en-US/add-load-balancer-rule.xml @@ -70,10 +70,6 @@ the algorithm for the stickiness policy. See Sticky Session Policies for Load Balancer Rules. - - AutoScale: Click Configure and complete the - AutoScale configuration as explained in . - diff --git a/docs/en-US/add-more-clusters.xml b/docs/en-US/add-more-clusters.xml new file mode 100644 index 00000000000..a2e41e38f84 --- /dev/null +++ b/docs/en-US/add-more-clusters.xml @@ -0,0 +1,29 @@ + + +%BOOK_ENTITIES; +]> + +
Add More Clusters (Optional)

You need to tell &PRODUCT; about the hosts that it will manage. Hosts exist inside clusters,
so before you begin adding hosts to the cloud, you must add at least one cluster.
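Adding a cluster can also be scripted against the &PRODUCT; API. The following is an
illustrative sketch of the addCluster command, assuming the unauthenticated integration API
port (global setting integration.api.port, commonly 8096) is enabled; otherwise the request
must be signed with your API and secret keys. The host name and UUIDs are placeholders:

    # Illustrative sketch only: add a XenServer cluster to an existing pod.
    # <management-server>, <zone-uuid>, and <pod-uuid> are placeholders.
    curl "http://<management-server>:8096/client/api?command=addCluster\
    &zoneid=<zone-uuid>&podid=<pod-uuid>\
    &clustername=cluster-01&hypervisor=XenServer&clustertype=CloudManaged"

Check the exact parameter set against the API reference for your &PRODUCT; version.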
diff --git a/docs/en-US/add-primary-storage.xml b/docs/en-US/add-primary-storage.xml new file mode 100644 index 00000000000..9c7ad3dc9cf --- /dev/null +++ b/docs/en-US/add-primary-storage.xml @@ -0,0 +1,108 @@ + + +%BOOK_ENTITIES; +]> + +
Adding Primary Storage

Ensure that nothing is stored on the server. Adding the server to &PRODUCT; will destroy
any existing data.

When you create a new zone, the first primary storage is added as part of that procedure.
You can add primary storage servers at any time, such as when adding a new cluster or adding
more servers to an existing cluster.

Log in to the &PRODUCT; UI.

In the left navigation, choose Infrastructure. In Zones, click View More, then click the
zone in which you want to add the primary storage.

Click the Compute tab.

In the Primary Storage node of the diagram, click View All.

Click Add Primary Storage.

Provide the following information in the dialog. The information required varies depending
on your choice in Protocol.

Pod. The pod for the storage device.

Cluster. The cluster for the storage device.

Name. The name of the storage device.

Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS or
SharedMountPoint. For vSphere, choose either VMFS (iSCSI or Fibre Channel) or NFS.

Server (for NFS, iSCSI, or PreSetup). The IP address or DNS name of the storage device.

Server (for VMFS). The IP address or DNS name of the vCenter server.

Path (for NFS). In NFS this is the exported path from the server.

Path (for VMFS). In vSphere this is a combination of the datacenter name and the datastore
name. The format is "/" datacenter name "/" datastore name. For example,
"/cloud.dc.VM/cluster1datastore".

Path (for SharedMountPoint). With KVM this is the path on each host where this primary
storage is mounted. For example, "/mnt/primary".

SR Name-Label (for PreSetup). Enter the name-label of the SR that has been set up outside
&PRODUCT;.

Target IQN (for iSCSI). In iSCSI this is the IQN of the target. For example,
iqn.1986-03.com.sun:02:01ec9bb549-1271378984

Lun # (for iSCSI). In iSCSI this is the LUN number. For example, 3.

Tags (optional). The comma-separated list of tags for this storage device. It should be an
equivalent set or superset of the tags on your disk offerings.

The tag sets on primary storage across clusters in a Zone must be identical. For example,
if cluster A provides primary storage that has tags T1 and T2, all other clusters in the
Zone must also provide primary storage that has tags T1 and T2.

Click OK.
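The same step maps to the createStoragePool API command. Below is a rough sketch for an NFS
primary storage pool, again assuming the unauthenticated integration port is enabled; the
UUIDs, NFS server, export path, and tags are placeholders:

    # Illustrative sketch only: register an NFS share as primary storage.
    # UUIDs, NFS server address, and export path are placeholders
    # (parameter values shown unencoded for readability).
    curl "http://<management-server>:8096/client/api?command=createStoragePool\
    &zoneid=<zone-uuid>&podid=<pod-uuid>&clusterid=<cluster-uuid>\
    &name=primary-01&url=nfs://10.10.10.1/export/primary&tags=T1,T2"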
diff --git a/docs/en-US/add-secondary-storage.xml b/docs/en-US/add-secondary-storage.xml new file mode 100644 index 00000000000..318a6ea79b6 --- /dev/null +++ b/docs/en-US/add-secondary-storage.xml @@ -0,0 +1,48 @@ + + +%BOOK_ENTITIES; +]> + +
Adding Secondary Storage

Be sure there is nothing stored on the server. Adding the server to &PRODUCT; will destroy
any existing data.

When you create a new zone, the first secondary storage is added as part of that procedure.
You can add secondary storage servers at any time to add more servers to an existing zone.

If you are going to use Swift for cloud-wide secondary storage, you must add the Swift
storage to &PRODUCT; before you add the local zone secondary storage servers.

To prepare for local zone secondary storage, you should have created and mounted an NFS
share during Management Server installation.

Make sure you prepared the system VM template during Management Server installation.

Now that the secondary storage server for per-zone storage is prepared, add it to
&PRODUCT;. Secondary storage is added as part of the procedure for adding a new zone.
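Secondary storage can likewise be registered programmatically. A minimal sketch using the
addSecondaryStorage API command of this generation of the API (verify the command name
against your version's API reference; the zone UUID and NFS URL are placeholders):

    # Illustrative sketch only: register an NFS share as secondary storage.
    # The zone UUID and NFS URL are placeholders (values shown unencoded).
    curl "http://<management-server>:8096/client/api?command=addSecondaryStorage\
    &zoneid=<zone-uuid>&url=nfs://10.10.10.2/export/secondary"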
diff --git a/docs/en-US/choosing_a_deployment_architecture.xml b/docs/en-US/choosing_a_deployment_architecture.xml new file mode 100644 index 00000000000..ec59d8b2bd3 --- /dev/null +++ b/docs/en-US/choosing_a_deployment_architecture.xml @@ -0,0 +1,29 @@ %BOOK_ENTITIES; ]>

Choosing a Deployment Architecture

The architecture used in a deployment will vary depending on the size and purpose of the
deployment. This section contains example architectures, including a small-scale deployment
suitable for test and trial use and a fully redundant large-scale setup for production.

diff --git a/docs/en-US/create-vpn-connection-vpc.xml b/docs/en-US/create-vpn-connection-vpc.xml new file mode 100644 index 00000000000..1fba09e18fb --- /dev/null +++ b/docs/en-US/create-vpn-connection-vpc.xml @@ -0,0 +1,103 @@ %BOOK_ENTITIES; ]>
Creating a VPN Connection

Log in to the &PRODUCT; UI as an administrator or end user.

In the left navigation, choose Network.

In the Select view, select VPC.
All the VPCs that you have created for the account are listed in the page.

Click the Configure button of the VPC to which you want to deploy the VMs.
The VPC page is displayed, where all the tiers you created are listed in a diagram.

Click the Settings icon.
The following options are displayed.

IP Addresses

Gateways

Site-to-Site VPN

Network ACLs

Select Site-to-Site VPN.
The Site-to-Site VPN page is displayed.

From the Select View drop-down, ensure that VPN Connection is selected.

Click Create VPN Connection.
The Create VPN Connection dialog is displayed:

createvpnconnection.png: creating a VPN connection to the customer gateway.

Select the desired customer gateway, then click OK to confirm.
Within a few moments, the VPN Connection is displayed.
The following information on the VPN connection is displayed:

IP Address

Gateway

State

IPsec Preshared Key

IKE Policy

ESP Policy
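For automation, the dialog above maps to the createVpnConnection API command, which joins
an existing VPC VPN gateway to a customer gateway. A minimal sketch, assuming the
unauthenticated integration port is enabled; both UUIDs are placeholders, typically taken
from the listVpnGateways and listVpnCustomerGateways responses:

    # Illustrative sketch only: connect a VPC VPN gateway to a customer
    # gateway. Both UUIDs are placeholders.
    curl "http://<management-server>:8096/client/api?command=createVpnConnection\
    &s2svpngatewayid=<vpn-gateway-uuid>\
    &s2scustomergatewayid=<customer-gateway-uuid>"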
\ No newline at end of file diff --git a/docs/en-US/create-vpn-customer-gateway.xml b/docs/en-US/create-vpn-customer-gateway.xml new file mode 100644 index 00000000000..bf56e36e8b9 --- /dev/null +++ b/docs/en-US/create-vpn-customer-gateway.xml @@ -0,0 +1,191 @@ + + +%BOOK_ENTITIES; +]> + +
Creating and Updating a VPN Customer Gateway

A VPN customer gateway can be connected to only one VPN gateway at a time.

To add a VPN Customer Gateway:

Log in to the &PRODUCT; UI as an administrator or end user.

In the left navigation, choose Network.

In the Select view, select VPN Customer Gateway.

Click Add site-to-site VPN.

addvpncustomergateway.png: adding a customer gateway.

Provide the following information:

Name: A unique name for the VPN customer gateway you create.

Gateway: The IP address for the remote gateway.

CIDR list: The guest CIDR list of the remote subnets. Enter a CIDR or a comma-separated
list of CIDRs. Ensure that a guest CIDR list does not overlap with the VPC's CIDR or with
another guest CIDR list. The CIDR must be RFC1918-compliant.

IPsec Preshared Key: Preshared keying is a method where the endpoints of the VPN share a
secret key. This key value is used to authenticate the customer gateway and the VPC VPN
gateway to each other.

The IKE peers (VPN endpoints) authenticate each other by computing and sending a keyed
hash of data that includes the Preshared key. If the receiving peer is able to create the
same hash independently by using its Preshared key, it knows that both peers must share
the same secret, thus authenticating the customer gateway.

IKE Encryption: The Internet Key Exchange (IKE) policy for phase-1. The supported
encryption algorithms are AES128, AES192, AES256, and 3DES. Authentication is accomplished
through the Preshared Keys.

Phase-1 is the first phase in the IKE process. In this initial negotiation phase, the two
VPN endpoints agree on the methods to be used to provide security for the underlying IP
traffic. Phase-1 authenticates the two VPN gateways to each other by confirming that the
remote gateway has a matching Preshared Key.

IKE Hash: The IKE hash for phase-1. The supported hash algorithms are SHA1 and MD5.

IKE DH: A public-key cryptography protocol that allows two parties to establish a shared
secret over an insecure communications channel. The 1536-bit Diffie-Hellman group is used
within IKE to establish session keys. The supported options are None, Group-5 (1536-bit),
and Group-2 (1024-bit).

ESP Encryption: The Encapsulating Security Payload (ESP) algorithm for phase-2. The
supported encryption algorithms are AES128, AES192, AES256, and 3DES.

Phase-2 is the second phase in the IKE process. The purpose of IKE phase-2 is to negotiate
IPsec security associations (SAs) to set up the IPsec tunnel. In phase-2, new keying
material is extracted from the Diffie-Hellman key exchange in phase-1 to provide session
keys to use in protecting the VPN data flow.

ESP Hash: The Encapsulating Security Payload (ESP) hash for phase-2. Supported hash
algorithms are SHA1 and MD5.

Perfect Forward Secrecy: Perfect Forward Secrecy (or PFS) is the property that ensures that
a session key derived from a set of long-term public and private keys will not be
compromised. This property enforces a new Diffie-Hellman key exchange. It provides keying
material that has a greater lifetime and thereby greater resistance to cryptographic
attacks. The available options are None, Group-5 (1536-bit), and Group-2 (1024-bit). The
security of the key exchanges increases as the DH groups grow larger, as does the time of
the exchanges.
When PFS is turned on, for every negotiation of a new phase-2 SA the two gateways must
generate a new set of phase-1 keys. This adds an extra layer of protection, ensuring that
if a phase-2 SA has expired, the keys used for new phase-2 SAs have not been generated from
the current phase-1 keying material.

IKE Lifetime (seconds): The phase-1 lifetime of the security association in seconds. The
default is 86400 seconds (1 day). Whenever the time expires, a new phase-1 exchange is
performed.

ESP Lifetime (seconds): The phase-2 lifetime of the security association in seconds. The
default is 3600 seconds (1 hour). Whenever the value is exceeded, a re-key is initiated to
provide new IPsec encryption and authentication session keys.

Dead Peer Detection: A method to detect an unavailable Internet Key Exchange (IKE) peer.
Select this option if you want the virtual router to query the liveness of its IKE peer at
regular intervals. It is recommended to have the same DPD configuration on both sides of
the VPN connection.

Click OK.

Updating and Removing a VPN Customer Gateway

You can update a customer gateway either when it has no VPN connection, or when its
associated VPN connection is in an error state.

Log in to the &PRODUCT; UI as an administrator or end user.

In the left navigation, choose Network.

In the Select view, select VPN Customer Gateway.

Select the VPN customer gateway you want to work with.

To modify the required parameters, click the Edit VPN Customer Gateway button.

edit.png: button to edit a VPN customer gateway

To remove the VPN customer gateway, click the Delete VPN Customer Gateway button.

delete.png: button to remove a VPN customer gateway

Click OK.
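The add procedure above maps to the createVpnCustomerGateway API command. A rough sketch
follows, assuming the unauthenticated integration port is enabled; the gateway address,
CIDR list, and preshared key are placeholders, and the ikepolicy/esppolicy values use the
algorithm-hash;dhgroup notation from the API reference:

    # Illustrative sketch only: define a remote (customer) VPN gateway.
    # Addresses, CIDRs, and the preshared key are placeholders
    # (values shown unencoded for readability).
    curl "http://<management-server>:8096/client/api?command=createVpnCustomerGateway\
    &name=remote-office&gateway=203.0.113.10&cidrlist=10.2.0.0/16\
    &ipsecpsk=<preshared-key>&ikepolicy=aes128-sha1;modp1536\
    &esppolicy=aes128-sha1&ikelifetime=86400&esplifetime=3600&dpd=true"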
\ No newline at end of file diff --git a/docs/en-US/create-vpn-gateway-for-vpc.xml b/docs/en-US/create-vpn-gateway-for-vpc.xml new file mode 100644 index 00000000000..396a7d9d174 --- /dev/null +++ b/docs/en-US/create-vpn-gateway-for-vpc.xml @@ -0,0 +1,80 @@ + + +%BOOK_ENTITIES; +]> + +
Creating a VPN gateway for the VPC

Log in to the &PRODUCT; UI as an administrator or end user.

In the left navigation, choose Network.

In the Select view, select VPC.
All the VPCs that you have created for the account are listed in the page.

Click the Configure button of the VPC to which you want to deploy the VMs.
The VPC page is displayed, where all the tiers you created are listed in a diagram.

Click the Settings icon.
The following options are displayed.

IP Addresses

Gateways

Site-to-Site VPN

Network ACLs

Select Site-to-Site VPN.
If you are creating the VPN gateway for the first time, selecting Site-to-Site VPN prompts
you to create a VPN gateway.

In the confirmation dialog, click Yes to confirm.
Within a few moments, the VPN gateway is created. You will be prompted to view the details
of the VPN gateway you have created. Click Yes to confirm.
The following details are displayed in the VPN Gateway page:

IP Address

Account

Domain
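This step corresponds to the createVpnGateway API command, which takes only the VPC ID. A
minimal sketch, assuming the unauthenticated integration port is enabled and using a
placeholder UUID:

    # Illustrative sketch only: create the site-to-site VPN gateway
    # for a VPC. The VPC UUID is a placeholder.
    curl "http://<management-server>:8096/client/api?command=createVpnGateway\
    &vpcid=<vpc-uuid>"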
\ No newline at end of file diff --git a/docs/en-US/large_scale_redundant_setup.xml b/docs/en-US/large_scale_redundant_setup.xml new file mode 100644 index 00000000000..9eb3190cb62 --- /dev/null +++ b/docs/en-US/large_scale_redundant_setup.xml @@ -0,0 +1,42 @@ + +%BOOK_ENTITIES; +]> + + +
Large-Scale Redundant Setup

Large-Scale Redundant Setup

This diagram illustrates the network architecture of a large-scale &PRODUCT; deployment.

A layer-3 switching layer is at the core of the data center. A router redundancy protocol
such as VRRP should be deployed. Typically, high-end core switches also include firewall
modules. Separate firewall appliances may also be used if the layer-3 switch does not have
integrated firewall capabilities. The firewalls are configured in NAT mode. The firewalls
provide the following functions:

They forward HTTP requests and API calls from the Internet to the Management Server. The
Management Server resides on the management network.

When the cloud spans multiple zones, the firewalls should enable site-to-site VPN such
that servers in different zones can directly reach each other.

A layer-2 access switch layer is established for each pod. Multiple switches can be stacked
to increase port count. In either case, redundant pairs of layer-2 switches should be
deployed.

The Management Server cluster (including front-end load balancers, Management Server nodes,
and the MySQL database) is connected to the management network through a pair of load
balancers.

Secondary storage servers are connected to the management network.

Each pod contains storage and computing servers. Each storage and computing server should
have redundant NICs connected to separate layer-2 access switches.
\ No newline at end of file diff --git a/docs/en-US/multi_node_management_server.xml b/docs/en-US/multi_node_management_server.xml new file mode 100644 index 00000000000..9dea9499e8d --- /dev/null +++ b/docs/en-US/multi_node_management_server.xml @@ -0,0 +1,36 @@ + +%BOOK_ENTITIES; +]> + + +
Multi-Node Management Server

The &PRODUCT; Management Server is deployed on one or more front-end servers connected to a
single MySQL database. Optionally, a pair of hardware load balancers distributes requests
from the web. A backup management server set may be deployed at a remote site, using MySQL
replication, to add disaster recovery capabilities (a configuration sketch follows this
list).

Multi-Node Management Server

The administrator must decide the following:

Whether or not load balancers will be used.

How many Management Servers will be deployed.

Whether MySQL replication will be deployed to enable disaster recovery.
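As a concrete illustration of the replication decision, here is a minimal sketch of a
classic MySQL master/replica setup of this era; it is not the only supported approach, and
the host names, credentials, and binlog coordinates are placeholders:

    # Illustrative sketch only. On the primary database server, enable
    # binary logging and a unique server ID (my.cnf fragment):
    #   [mysqld]
    #   server-id=1
    #   log-bin=mysql-bin
    # On the replica (server-id=2), point it at the primary. The log file
    # and position come from SHOW MASTER STATUS on the primary; all other
    # values are placeholders.
    mysql -u root -p -e "
      CHANGE MASTER TO MASTER_HOST='primary.example.com',
        MASTER_USER='repl', MASTER_PASSWORD='<password>',
        MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
      START SLAVE;"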
\ No newline at end of file diff --git a/docs/en-US/multi_site_deployment.xml b/docs/en-US/multi_site_deployment.xml new file mode 100644 index 00000000000..2dce575589a --- /dev/null +++ b/docs/en-US/multi_site_deployment.xml @@ -0,0 +1,50 @@ + +%BOOK_ENTITIES; +]> + + +
Multi-Site Deployment

The &PRODUCT; platform scales well into multiple sites through the use of zones. The
following diagram shows an example of a multi-site deployment.

Example Of A Multi-Site Deployment

Data Center 1 houses the primary Management Server as well as zone 1. The MySQL database is
replicated in real time to the secondary Management Server installation in Data Center 2.

Separate Storage Network

This diagram illustrates a setup with a separate storage network. Each server has four NICs,
two connected to pod-level network switches and two connected to storage network switches.

There are two ways to configure the storage network:

Bonded NICs and redundant switches can be deployed for NFS. In NFS deployments, redundant
switches and bonded NICs still result in one network (one CIDR block + default gateway
address). See the sketch at the end of this section.

iSCSI can take advantage of two separate storage networks (two CIDR blocks, each with its
own default gateway). A multipath iSCSI client can fail over and load-balance between
separate storage networks.

NIC Bonding And Multipath I/O

This diagram illustrates the differences between NIC bonding and Multipath I/O (MPIO). NIC
bonding configuration involves only one network. MPIO involves two separate networks.
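To make the NIC bonding option concrete, the following is a minimal RHEL-style sketch of an
active-backup bond carrying NFS storage traffic; the device names, IP address, and bonding
mode are placeholders to adapt to your environment:

    # Illustrative sketch only: active-backup bond for NFS storage traffic
    # (RHEL-style ifcfg files; addresses and device names are placeholders).
    cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.10.11
    NETMASK=255.255.255.0
    BONDING_OPTS="mode=active-backup miimon=100"
    EOF

    # Each physical NIC is enslaved to the bond:
    cat > /etc/sysconfig/network-scripts/ifcfg-eth2 <<'EOF'
    DEVICE=eth2
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none
    EOF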
\ No newline at end of file diff --git a/docs/en-US/separate_storage_network.xml b/docs/en-US/separate_storage_network.xml new file mode 100644 index 00000000000..c3f6330cb14 --- /dev/null +++ b/docs/en-US/separate_storage_network.xml @@ -0,0 +1,24 @@ + +%BOOK_ENTITIES; +]> + + +
Separate Storage Network

In the large-scale redundant setup described in the previous section, storage traffic can
overload the management network. A separate storage network is optional. Storage protocols
such as iSCSI are sensitive to network delays, and a separate storage network ensures that
guest network traffic contention does not impact storage performance.
\ No newline at end of file diff --git a/docs/en-US/small_scale_deployment.xml b/docs/en-US/small_scale_deployment.xml new file mode 100644 index 00000000000..eb509a78d41 --- /dev/null +++ b/docs/en-US/small_scale_deployment.xml @@ -0,0 +1,37 @@ + +%BOOK_ENTITIES; +]> + + + +
Small-Scale Deployment

Small-Scale Deployment

This diagram illustrates the network architecture of a small-scale &PRODUCT; deployment.

A firewall provides a connection to the Internet. The firewall is configured in NAT mode.
The firewall forwards HTTP requests and API calls from the Internet to the Management
Server. The Management Server resides on the management network.

A layer-2 switch connects all physical servers and storage.

A single NFS server functions as both the primary and secondary storage.

The Management Server is connected to the management network.
\ No newline at end of file