diff --git a/docs/en-US/Installation_Guide.xml b/docs/en-US/Installation_Guide.xml
index 99d77f1ab5f..773fffb5815 100644
--- a/docs/en-US/Installation_Guide.xml
+++ b/docs/en-US/Installation_Guide.xml
@@ -45,9 +45,11 @@
+
-
+
-
+
+
diff --git a/docs/en-US/Revision_History_Install_Guide.xml b/docs/en-US/Revision_History_Install_Guide.xml
new file mode 100644
index 00000000000..ee8dd31325a
--- /dev/null
+++ b/docs/en-US/Revision_History_Install_Guide.xml
@@ -0,0 +1,55 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+
+
+
+ Revision History
+
+
+
+ 1-0
+ October 5 2012
+
+ Jessica
+ Tomechak
+
+
+
+ Radhika
+ PC
+
+
+
+ Wido
+ den Hollander
+
+
+
+
+ Initial publication
+
+
+
+
+
+
diff --git a/docs/en-US/add-clusters-ovm.xml b/docs/en-US/add-clusters-ovm.xml
new file mode 100644
index 00000000000..d375a743de7
--- /dev/null
+++ b/docs/en-US/add-clusters-ovm.xml
@@ -0,0 +1,24 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+ Add Cluster: OVM
+ To add a Cluster of hosts that run Oracle VM (OVM):
+
+ Add a companion non-OVM cluster to the Pod. This cluster provides an environment where the &PRODUCT; System VMs can run. You should have already installed a non-OVM hypervisor on at least one host to prepare for this step. Depending on which hypervisor you used:
+
+ For VMware, follow the steps in Add Cluster: vSphere. When finished, return here and continue with the next step.
+ For KVM or XenServer, follow the steps in . When finished, return here and continue with the next step.
+
+
+ In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the cluster.
+ Click the Compute tab. In the Pods node, click View All. Select the same pod you used in step 1.
+ Click View Clusters, then click Add Cluster.
+ The Add Cluster dialog is displayed.
+ In Hypervisor, choose OVM.
+ In Cluster, enter a name for the cluster.
+ Click Add.
+
+
diff --git a/docs/en-US/choosing-a-deployment-architecture.xml b/docs/en-US/choosing-a-deployment-architecture.xml
new file mode 100644
index 00000000000..0503d8c7597
--- /dev/null
+++ b/docs/en-US/choosing-a-deployment-architecture.xml
@@ -0,0 +1,29 @@
+
+%BOOK_ENTITIES;
+]>
+
+
+
+ Choosing a Deployment Architecture
+ The architecture used in a deployment will vary depending on the size and purpose of the deployment. This section contains examples of deployment architecture, including a small-scale deployment useful for test and trial deployments and a fully-redundant large-scale setup for production deployments.
+
+
+
+
+
+
diff --git a/docs/en-US/citrix-xenserver-installation.xml b/docs/en-US/citrix-xenserver-installation.xml
index 75ba73d2664..b8ab5923982 100644
--- a/docs/en-US/citrix-xenserver-installation.xml
+++ b/docs/en-US/citrix-xenserver-installation.xml
@@ -100,11 +100,11 @@
Install NTP.
- # yum install ntp
+ # yum install ntp
Edit the NTP configuration file to point to your NTP server.
- # vi /etc/ntp.conf
+ # vi /etc/ntp.conf
Add one or more server lines in this file with the names of the NTP servers you want to use. For example:
server 0.xenserver.pool.ntp.org
@@ -115,11 +115,11 @@ server 3.xenserver.pool.ntp.org
Restart the NTP client.
- # service ntpd restart
+ # service ntpd restart
Make sure NTP will start again upon reboot.
- # chkconfig ntpd on
+ # chkconfig ntpd on
@@ -153,15 +153,15 @@ server 3.xenserver.pool.ntp.org
Extract the file:
- # tar xf xenserver-cloud-supp.tgz
+ # tar xf xenserver-cloud-supp.tgz
Run the following script:
- # xe-install-supplemental-pack xenserver-cloud-supp.iso
+ # xe-install-supplemental-pack xenserver-cloud-supp.iso
If the XenServer host is part of a zone that uses basic networking, disable Open vSwitch (OVS):
- # xe-switch-network-backend bridge
+ # xe-switch-network-backend bridge
Restart the host machine when prompted.
@@ -175,12 +175,12 @@ server 3.xenserver.pool.ntp.org
Connect FiberChannel cable to all hosts in the cluster and to the FiberChannel storage host.
Rescan the SCSI bus. Either use the following command or use XenCenter to perform an HBA rescan.
- # scsi-rescan
+ # scsi-rescan
Repeat step 2 on every host.
Check to be sure you see the new SCSI disk.
- # ls /dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -l
+ # ls /dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -l
The output should look like this, although the specific file name will be different (scsi-<scsiID>):
lrwxrwxrwx 1 root root 9 Mar 16 13:47
@@ -190,9 +190,9 @@ lrwxrwxrwx 1 root root 9 Mar 16 13:47
Repeat step 4 on every host.
On the storage server, run this command to get a unique ID for the new SR.
- # uuidgen
+ # uuidgen
The output should look like this, although the specific ID will be different:
- e6849e96-86c3-4f2c-8fcc-350cc711be3d
+ e6849e96-86c3-4f2c-8fcc-350cc711be3d
Create the FiberChannel SR. In name-label, use the unique ID you just generated.
@@ -202,11 +202,11 @@ device-config:SCSIid=360a98000503365344e6f6177615a516b
name-label="e6849e96-86c3-4f2c-8fcc-350cc711be3d"
This command returns a unique ID for the SR, like the following example (your ID will be different):
- 7a143820-e893-6c6a-236e-472da6ee66bf
+ 7a143820-e893-6c6a-236e-472da6ee66bf
To create a human-readable description for the SR, use the following command. In uuid, use the SR ID returned by the previous command. In name-description, set whatever friendly text you prefer.
- # xe sr-param-set uuid=7a143820-e893-6c6a-236e-472da6ee66bf name-description="Fiber Channel storage repository"
+ # xe sr-param-set uuid=7a143820-e893-6c6a-236e-472da6ee66bf name-description="Fiber Channel storage repository"
Make note of the values you will need when you add this storage to &PRODUCT; later (see ). In the Add Primary Storage dialog, in Protocol, you will choose PreSetup. In SR Name-Label, you will enter the name-label you set earlier (in this example, e6849e96-86c3-4f2c-8fcc-350cc711be3d).
(Optional) If you want to enable multipath I/O on a FiberChannel SAN, refer to the documentation provided by the SAN vendor.
@@ -238,7 +238,7 @@ name-label="e6849e96-86c3-4f2c-8fcc-350cc711be3d"
Run xe network-list and find the public network. This is usually attached to the NIC that is public. Once you find the network, make note of its UUID. Call this <UUID-Public> (see the illustrative lookup below).
Run the following command.
- # xe network-param-set name-label=cloud-public uuid=<UUID-Public>
+ # xe network-param-set name-label=cloud-public uuid=<UUID-Public>
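The xe network-list lookup mentioned above might look like the following; the UUID, name-label, and bridge shown here are illustrative only, not values from your installation:
# xe network-list
uuid ( RO)                : f2c8a7d1-1e2b-4c3d-9a4e-5b6c7d8e9f01
          name-label ( RW): Pool-wide network associated with eth0
              bridge ( RO): xenbr0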
@@ -250,7 +250,7 @@ name-label="e6849e96-86c3-4f2c-8fcc-350cc711be3d"
Run xe network-list and find one of the guest networks. Once you find the network, make note of its UUID. Call this <UUID-Guest>.
Run the following command, substituting your own name-label and uuid values.
- # xe network-param-set name-label=<cloud-guestN> uuid=<UUID-Guest>
+ # xe network-param-set name-label=<cloud-guestN> uuid=<UUID-Guest>
Repeat these steps for each additional guest network, using a different name-label and uuid each time.
@@ -350,7 +350,7 @@ master-password=[your password]
Copy the script from the Management Server in /usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the master host and ensure it is executable.
Run the script:
- # ./cloud-setup-bonding.sh
+ # ./cloud-setup-bonding.sh
Now the bonds are set up and configured properly across the cluster.
@@ -398,11 +398,11 @@ master-password=[your password]
Log in to one of the hosts in the cluster, and run this command to clean up the VLAN:
- # . /opt/xensource/bin/cloud-clean-vlan.sh
+ # . /opt/xensource/bin/cloud-clean-vlan.sh
Still logged in to the host, run the upgrade preparation script:
- # /opt/xensource/bin/cloud-prepare-upgrade.sh
+ # /opt/xensource/bin/cloud-prepare-upgrade.sh
Troubleshooting: If you see the error "can't eject CD," log in to the VM and umount the CD, then run the script again.
@@ -416,7 +416,7 @@ master-password=[your password]
You attempted an operation on a VM which requires PV drivers to be installed but the drivers were not detected.
vm: b6cf79c8-02ee-050b-922f-49583d9f1a14 (i-2-8-VM)
To solve this issue, run the following:
- # /opt/xensource/bin/make_migratable.sh b6cf79c8-02ee-050b-922f-49583d9f1a14
+ # /opt/xensource/bin/make_migratable.sh b6cf79c8-02ee-050b-922f-49583d9f1a14
Reboot the host.
Upgrade to the newer version of XenServer. Use the steps in XenServer documentation.
@@ -455,13 +455,13 @@ vm: b6cf79c8-02ee-050b-922f-49583d9f1a14 (i-2-8-VM)
Run the following script:
- # /opt/xensource/bin/setupxenserver.sh
+ # /opt/xensource/bin/setupxenserver.sh
Troubleshooting: If you see the following error message, you can safely ignore it.
- mv: cannot stat `/etc/cron.daily/logrotate': No such file or directory
+ mv: cannot stat `/etc/cron.daily/logrotate': No such file or directory
Plug in the storage repositories (physical block devices) to the XenServer host:
- # for pbd in `xe pbd-list currently-attached=false| grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
+ # for pbd in `xe pbd-list currently-attached=false| grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
Note: If you add a host to this XenServer pool, you need to migrate all VMs on this host to other hosts, and eject this host from the XenServer pool.
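For example, the migration and ejection might look like the following sketch; the UUID placeholders are values you would look up with xe vm-list and xe host-list:
# xe vm-migrate vm=<vm-uuid> host=<destination-host-uuid> live=true
# xe pool-eject host-uuid=<host-uuid>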
@@ -469,7 +469,7 @@ vm: b6cf79c8-02ee-050b-922f-49583d9f1a14 (i-2-8-VM)
Repeat these steps to upgrade every host in the cluster to the same version of XenServer.
Run the following command on one host in the XenServer cluster to clean up the host tags:
- # for host in $(xe host-list | grep ^uuid | awk '{print $NF}') ; do xe host-param-clear uuid=$host param-name=tags; done;
+ # for host in $(xe host-list | grep ^uuid | awk '{print $NF}') ; do xe host-param-clear uuid=$host param-name=tags; done;
When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
@@ -482,7 +482,7 @@ vm: b6cf79c8-02ee-050b-922f-49583d9f1a14 (i-2-8-VM)
After all hosts are up, run the following on one host in the cluster:
- # /opt/xensource/bin/cloud-clean-vlan.sh
+ # /opt/xensource/bin/cloud-clean-vlan.sh
diff --git a/docs/en-US/cluster-add.xml b/docs/en-US/cluster-add.xml
index 5210bd8b84c..89f9bd2dc9d 100644
--- a/docs/en-US/cluster-add.xml
+++ b/docs/en-US/cluster-add.xml
@@ -1,28 +1,31 @@
-
%BOOK_ENTITIES;
]>
- Adding a Cluster
- TODO
+ Adding a Cluster
+ You need to tell &PRODUCT; about the hosts that it will manage. Hosts exist inside clusters, so before you begin adding hosts to the cloud, you must add at least one cluster.
+
+
+
diff --git a/docs/en-US/host-add-vsphere.xml b/docs/en-US/host-add-vsphere.xml
new file mode 100644
index 00000000000..038d8a8c64a
--- /dev/null
+++ b/docs/en-US/host-add-vsphere.xml
@@ -0,0 +1,28 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+
+
+
+ Adding a Host (vSphere)
+ For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to &PRODUCT;. See Add Cluster: vSphere.
+
diff --git a/docs/en-US/host-add-xenserver-kvm-ovm.xml b/docs/en-US/host-add-xenserver-kvm-ovm.xml
new file mode 100644
index 00000000000..710133211cb
--- /dev/null
+++ b/docs/en-US/host-add-xenserver-kvm-ovm.xml
@@ -0,0 +1,88 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+
+
+
+ Adding a Host (XenServer, KVM, or OVM)
+ XenServer, KVM, and Oracle VM (OVM) hosts can be added to a cluster at any time.
+
+ Requirements for XenServer, KVM, and OVM Hosts
+ Make sure the hypervisor host does not have any VMs already running before you add it to &PRODUCT;.
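+ A quick way to verify this is to list the VMs on the host before adding it. These commands are a sketch; which one applies depends on your hypervisor:
+ # xe vm-list is-control-domain=false      (XenServer)
+ # virsh list --all                        (KVM)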
+ Configuration requirements:
+
+ Each cluster must contain only hosts with the identical hypervisor.
+ For XenServer, do not put more than 8 hosts in a cluster.
+ For KVM, do not put more than 16 hosts in a cluster.
+
+ For hardware requirements, see the installation section for your hypervisor in the &PRODUCT; Installation Guide.
+
+ XenServer Host Additional Requirements
+ If network bonding is in use, the administrator must cable the new host identically to other hosts in the cluster.
+ For all additional hosts to be added to the cluster, run the following command. This will cause the host to join the master in a XenServer pool.
+ # xe pool-join master-address=[master IP] master-username=root master-password=[your password]
+ When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
+ With all hosts added to the XenServer pool, run the cloud-setup-bonding script. This script will complete the configuration and setup of the bonds on the new hosts in the cluster.
+
+ Copy the script from the Management Server in /usr/lib64/cloud/agent/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the master host and ensure it is executable.
+ Run the script:
+ # ./cloud-setup-bonding.sh
+
+
+
+
+ KVM Host Additional Requirements
+
+ If shared mountpoint storage is in use, the administrator should ensure that the new host has all the same mountpoints (with storage mounted) as the other hosts in the cluster (see the sketch after this list).
+ Make sure the new host has the same network configuration (guest, private, and public network) as other hosts in the cluster.
+
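+ A minimal sketch of verifying a shared mountpoint on the new host; the NFS server name and mountpoint path here are assumptions for illustration only:
+ # grep /mnt/primary /etc/fstab
+ nfs.example.com:/export/primary /mnt/primary nfs rw,hard,intr 0 0
+ # mount | grep /mnt/primary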
+
+
+ OVM Host Additional Requirements
+ Before adding a used host to &PRODUCT;, as part of the cleanup procedure on the host, be sure to remove
+ /etc/ovs-agent/db/.
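+ For example, run the following on the host before adding it:
+ # rm -rf /etc/ovs-agent/db/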
+
+
+
+
+ Adding a XenServer, KVM, or OVM Host
+
+ If you have not already done so, install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by &PRODUCT; and what additional configuration is required to ensure the host will work with &PRODUCT;. To find these installation details, see
+ the appropriate section for your hypervisor in the &PRODUCT; Installation Guide.
+ Log in to the &PRODUCT; UI as administrator.
+ In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the host.
+ Click the Compute tab. In the Clusters node, click View All.
+ Click the cluster where you want to add the host.
+ Click View Hosts.
+ Click Add Host.
+ Provide the following information.
+
+ Host Name. The DNS name or IP address of the host.
+ Username. Usually root.
+ Password. This is the password for the user named above (from your XenServer, KVM, or OVM install).
+ Host Tags (Optional). Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, see HA-Enabled Virtual Machines as well as HA for Hosts.
+
+ There may be a slight delay while the host is provisioned. It should automatically display in the UI.
+ Repeat for additional hosts.
+
+
+
diff --git a/docs/en-US/hypervisor-installation.xml b/docs/en-US/hypervisor-installation.xml
new file mode 100644
index 00000000000..8bfc0c09c9b
--- /dev/null
+++ b/docs/en-US/hypervisor-installation.xml
@@ -0,0 +1,29 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+
+
+
+ Hypervisor Installation
+
+
+
\ No newline at end of file
diff --git a/docs/en-US/hypervisor-kvm-install-flow.xml b/docs/en-US/hypervisor-kvm-install-flow.xml
new file mode 100644
index 00000000000..e2544c19251
--- /dev/null
+++ b/docs/en-US/hypervisor-kvm-install-flow.xml
@@ -0,0 +1,34 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+
+
+
+ KVM Hypervisor Host Installation
+
+
+
+
+
+
+
+
diff --git a/docs/en-US/images/NIC_bonding_and_multipath_IO.png b/docs/en-US/images/NIC_bonding_and_multipath_IO.png
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/docs/en-US/images/example_of_a_multi_site_deployment.png b/docs/en-US/images/example_of_a_multi_site_deployment.png
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/docs/en-US/images/large-scale-redundant-setup.png b/docs/en-US/images/large-scale-redundant-setup.png
new file mode 100644
index 00000000000..5d2581afb43
Binary files /dev/null and b/docs/en-US/images/large-scale-redundant-setup.png differ
diff --git a/docs/en-US/images/large_scale_redundant_setup.png b/docs/en-US/images/large_scale_redundant_setup.png
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/docs/en-US/images/multi-node-management-server.png b/docs/en-US/images/multi-node-management-server.png
new file mode 100644
index 00000000000..5cf5ed5456f
Binary files /dev/null and b/docs/en-US/images/multi-node-management-server.png differ
diff --git a/docs/en-US/images/multi-site-deployment.png b/docs/en-US/images/multi-site-deployment.png
new file mode 100644
index 00000000000..f3ae5bb6b5c
Binary files /dev/null and b/docs/en-US/images/multi-site-deployment.png differ
diff --git a/docs/en-US/images/multi_node_management_server.png b/docs/en-US/images/multi_node_management_server.png
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/docs/en-US/images/nic-bonding-and-multipath-io.png b/docs/en-US/images/nic-bonding-and-multipath-io.png
new file mode 100644
index 00000000000..0fe60b66ed6
Binary files /dev/null and b/docs/en-US/images/nic-bonding-and-multipath-io.png differ
diff --git a/docs/en-US/images/separate-storage-network.png b/docs/en-US/images/separate-storage-network.png
new file mode 100644
index 00000000000..24dbbefc5b4
Binary files /dev/null and b/docs/en-US/images/separate-storage-network.png differ
diff --git a/docs/en-US/images/separate_storage_network.png b/docs/en-US/images/separate_storage_network.png
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/docs/en-US/images/small-scale-deployment.png b/docs/en-US/images/small-scale-deployment.png
new file mode 100644
index 00000000000..1c88520e7b4
Binary files /dev/null and b/docs/en-US/images/small-scale-deployment.png differ
diff --git a/docs/en-US/images/small_scale_deployment.png b/docs/en-US/images/small_scale_deployment.png
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/docs/en-US/installation.xml b/docs/en-US/installation.xml
index 5fc550edad6..35f1c681dc8 100644
--- a/docs/en-US/installation.xml
+++ b/docs/en-US/installation.xml
@@ -29,4 +29,5 @@
+
diff --git a/docs/en-US/large_scale_redundant_setup.xml b/docs/en-US/large_scale_redundant_setup.xml
index 9eb3190cb62..427a42d9182 100644
--- a/docs/en-US/large_scale_redundant_setup.xml
+++ b/docs/en-US/large_scale_redundant_setup.xml
@@ -22,7 +22,7 @@
Large-Scale Redundant Setup
-
+
Large-Scale Redundant Setup
@@ -39,4 +39,4 @@
Secondary storage servers are connected to the management network.
Each pod contains storage and computing servers. Each storage and computing server should have redundant NICs connected to separate layer-2 access switches.
-
\ No newline at end of file
+
diff --git a/docs/en-US/log-in.xml b/docs/en-US/log-in.xml
index e72d27bf61b..84328ce4d45 100644
--- a/docs/en-US/log-in.xml
+++ b/docs/en-US/log-in.xml
@@ -5,27 +5,26 @@
]>
-
- Log In to the UI
- &PRODUCT; provides a web-based UI that can be used by both administrators and end users. The appropriate version of the UI is displayed depending on the credentials used to log in. The UI is available in popular browsers including IE7, IE8, IE9, Firefox 3.5+, Firefox 4, Safari 4, and Safari 5. The URL is: (substitute your own management server IP address)
- http://<management-server-ip-address>:8080/client
+ Log In to the UI
+ &PRODUCT; provides a web-based UI that can be used by both administrators and end users. The appropriate version of the UI is displayed depending on the credentials used to log in. The UI is available in popular browsers including IE7, IE8, IE9, Firefox 3.5+, Firefox 4, Safari 4, and Safari 5. The URL is: (substitute your own management server IP address)
+ http://<management-server-ip-address>:8080/client
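+ For example, if the Management Server's IP address were 192.168.10.5 (an illustrative address, not a requirement), the URL would be:
+ http://192.168.10.5:8080/client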
On a fresh Management Server installation, a guided tour splash screen appears. On later visits, you’ll see a login screen where you specify the following to proceed to your Dashboard:
Username
@@ -42,7 +41,8 @@
If you are a user in the sub-domains, enter the full path to the domain, excluding the root domain.
For example, suppose multiple levels are created under the root domain, such as Comp1/hr and Comp1/sales. The users in the Comp1 domain should enter Comp1 in the Domain field, whereas the users in the Comp1/sales domain should enter Comp1/sales.
For more guidance about the choices that appear when you log in to this UI, see Logging In as the Root Administrator.
-
-
-
+
+
+
+
diff --git a/docs/en-US/multi_node_management_server.xml b/docs/en-US/multi_node_management_server.xml
index 9dea9499e8d..1ff713dbd16 100644
--- a/docs/en-US/multi_node_management_server.xml
+++ b/docs/en-US/multi_node_management_server.xml
@@ -23,7 +23,7 @@
The &PRODUCT; Management Server is deployed on one or more front-end servers connected to a single MySQL database. Optionally a pair of hardware load balancers distributes requests from the web. A backup management server set may be deployed using MySQL replication at a remote site to add DR capabilities.
-
+
Multi-Node Management Server
@@ -33,4 +33,4 @@
How many Management Servers will be deployed.
Whether MySQL replication will be deployed to enable disaster recovery.
-
\ No newline at end of file
+
diff --git a/docs/en-US/multi_site_deployment.xml b/docs/en-US/multi_site_deployment.xml
index 2dce575589a..8ad94aa2a70 100644
--- a/docs/en-US/multi_site_deployment.xml
+++ b/docs/en-US/multi_site_deployment.xml
@@ -23,14 +23,14 @@
The &PRODUCT; platform scales well into multiple sites through the use of zones. The following diagram shows an example of a multi-site deployment.
-
+
Example Of A Multi-Site Deployment
Data Center 1 houses the primary Management Server as well as zone 1. The MySQL database is replicated in real time to the secondary Management Server installation in Data Center 2.
-
+
Separate Storage Network
@@ -42,9 +42,9 @@
-
+
NIC Bonding And Multipath I/O
This diagram illustrates the differences between NIC bonding and Multipath I/O (MPIO). NIC bonding configuration involves only one network. MPIO involves two separate networks.
-
\ No newline at end of file
+
diff --git a/docs/en-US/pod-add.xml b/docs/en-US/pod-add.xml
index 419e333272e..2a2b08753a9 100644
--- a/docs/en-US/pod-add.xml
+++ b/docs/en-US/pod-add.xml
@@ -1,28 +1,43 @@
-
%BOOK_ENTITIES;
]>
- Adding a Pod
- TODO
+ Adding a Pod
+ When you create a new zone, &PRODUCT; adds the first pod for you. You can add more pods at any time using the procedure in this section.
+
+ Log in to the &PRODUCT; UI. See .
+ In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone to which you want to add a pod.
+ Click the Compute and Storage tab. In the Pods node of the diagram, click View All.
+ Click Add Pod.
+ Enter the following details in the dialog. (Illustrative sample values appear after this procedure.)
+
+ Name. The name of the pod.
+ Gateway. The gateway for the hosts in that pod.
+ Netmask. The network prefix that defines the pod's subnet. Use CIDR notation.
+ Start/End Reserved System IP. The IP range in the management network that &PRODUCT; uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses.
+
+
+ Click OK.
+
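+ As an illustration only, the values for a pod on a 192.168.10.0/24 management subnet might look like the following; the addresses are assumptions, not recommendations:
+ Name: Pod1
+ Gateway: 192.168.10.1
+ Netmask: 192.168.10.0/24
+ Start/End Reserved System IP: 192.168.10.10 - 192.168.10.30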
diff --git a/docs/en-US/provisioning-steps.xml.orig b/docs/en-US/provisioning-steps.xml.orig
new file mode 100644
index 00000000000..b532783b2d2
--- /dev/null
+++ b/docs/en-US/provisioning-steps.xml.orig
@@ -0,0 +1,42 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+
+
+
+ Steps to Provisioning Your Cloud Infrastructure
+ This section tells how to add zones, pods, clusters, hosts, storage, and networks to your cloud. If you are unfamiliar with these entities, please begin by looking through .
+
+
+
+
+
+
+
+
+
+<<<<<<< HEAD
+
+
+=======
+
+>>>>>>> Promote sections to chapters: Cloud Infrastructure Concepts and Provisioning Steps.
diff --git a/docs/en-US/provisioning.xml b/docs/en-US/provisioning.xml
deleted file mode 100644
index ec28451519c..00000000000
--- a/docs/en-US/provisioning.xml
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Provisioning Your Cloud Infrastructure
-
-
-
diff --git a/docs/en-US/small_scale_deployment.xml b/docs/en-US/small_scale_deployment.xml
index eb509a78d41..bba2b9a7573 100644
--- a/docs/en-US/small_scale_deployment.xml
+++ b/docs/en-US/small_scale_deployment.xml
@@ -23,7 +23,7 @@
Small-Scale Deployment
-
+
Small-Scale Deployment
@@ -34,4 +34,4 @@
A single NFS server functions as both the primary and secondary storage.
The Management Server is connected to the management network.
-
\ No newline at end of file
+
diff --git a/docs/en-US/using-sshkeys.xml b/docs/en-US/using-sshkeys.xml
index b51569d1134..1e98eb699ad 100644
--- a/docs/en-US/using-sshkeys.xml
+++ b/docs/en-US/using-sshkeys.xml
@@ -23,27 +23,35 @@
-->
- Using the SSH Keys for Authentication on Cloud
- In addition to the username and password authentication, CloudStack supports using SSH keys to log in to the cloud infrastructure for additional security for your cloud infrastructure. You can use the createSSHKeyPair API to generate the SSH keys.
- Because each cloud user has their own ssh key, one cloud user cannot log in to another cloud user's instances unless they share their ssh key files. Using a single SSH key pair, you can manage multiple instances.
- Creating an Instance Template that Supports SSH Keys
+ Using SSH Keys for Authentication
+ In addition to the username and password authentication, &PRODUCT; supports using SSH keys to log in to the cloud infrastructure for additional security. You can use the createSSHKeyPair API to generate the SSH keys.
+ Because each cloud user has their own SSH key, one cloud user cannot log in to another cloud user's instances unless they share their SSH key files. Using a single SSH key pair, you can manage multiple instances.
+
+ Creating an Instance Template that Supports SSH Keys
+ Create an instance template that supports SSH keys.
- Create a instance template that supports SSH Keys.
- Create a new instance by using the template provided by cloudstack.
- For more information on creating a new instance, see
+ Create a new instance by using the template provided by &PRODUCT;.
+ For more information on creating a new instance, see
+
Download the cloudstack script from The SSH Key Gen Script to the instance you have created.
- wget http://downloads.sourceforge.net/project/cloudstack/SSH%20Key%20Gen%20Script/cloud-set-guest-sshkey.in?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fcloudstack%2Ffiles%2FSSH%2520Key%2520Gen%2520Script%2F&ts=1331225219&use_mirror=iweb
+ wget http://downloads.sourceforge.net/project/cloudstack/SSH%20Key%20Gen%20Script/cloud-set-guest-sshkey.in?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fcloudstack%2Ffiles%2FSSH%2520Key%2520Gen%2520Script%2F&ts=1331225219&use_mirror=iweb
+
Copy the file to /etc/init.d.
- cp cloud-set-guest-sshkey.in /etc/init.d/
+ cp cloud-set-guest-sshkey.in /etc/init.d/
+
Give the necessary permissions on the script:
- chmod +x /etc/init.d/cloud-set-guest-sshkey.in
+ chmod +x /etc/init.d/cloud-set-guest-sshkey.in
+
Run the script while starting up the operating system:
- chkconfig --add cloud-set-guest-sshkey.in
- Stop the instance.
-
-
- Creating the SSH Keypair
- You must make a call to the createSSHKeyPair api method. You can either use the cloudstack python api library or the curl commands to make the call to the cloudstack api.
+ chkconfig --add cloud-set-guest-sshkey.in
+
+ Stop the instance.
+
+
+
+
+ Creating the SSH Keypair
+ You must make a call to the createSSHKeyPair API method. You can either use the &PRODUCT; Python API library or curl commands to make the call to the &PRODUCT; API.
For example, make a call from the cloudstack server to create an SSH keypair called "keypair-doc" for the admin account in the root domain:
Ensure that you adjust these values to meet your needs. If you are making the API call from a different server, your URL/PORT will be different, and you will need to use the API keys.
@@ -78,15 +86,20 @@ KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
-----END RSA PRIVATE KEY-----
Save the file.
- Creating an Instance
- After you save the SSH keypair file, you must create an instance by using the template that you created at . Ensure that you use the same SSH key name that you created at .
+
+
+ Creating an Instance
+ After you save the SSH keypair file, you must create an instance by using the template that you created at . Ensure that you use the same SSH key name that you created at .
At this time, you cannot use the GUI to create the instance and associate it with the newly created SSH keypair.
A sample curl command to create a new instance is:
curl --globoff http://localhost:<port number>/?command=deployVirtualMachine\&zoneId=1\&serviceOfferingId=18727021-7556-4110-9322-d625b52e0813\&templateId=e899c18a-ce13-4bbf-98a9-625c5026e0b5\&securitygroupids=ff03f02f-9e3b-48f8-834d-91b822da40c5\&account=admin\&domainid=1\&keypair=keypair-doc
Substitute the template, service offering and security group IDs (if you are using the security group feature) that are in your cloud environment.
- Logging In Using the SSH Keypair
+
+
+ Logging In Using the SSH Keypair
To test whether your SSH key was generated successfully, check whether you can log in to the cloud setup.
For example, from a Linux OS, run:
ssh -i ~/.ssh/keypair-doc <ip address>
The -i parameter tells the ssh client to use an SSH key found at ~/.ssh/keypair-doc.
+
diff --git a/docs/publican-trial-install.cfg b/docs/publican-trial-install.cfg
deleted file mode 100644
index 3e657b8fb8a..00000000000
--- a/docs/publican-trial-install.cfg
+++ /dev/null
@@ -1,29 +0,0 @@
-# Publican configuration file for CloudStack Trial Installation Guide
-# Config::Simple 4.58
-# Tue May 29 00:57:27 2012
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information#
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-xml_lang: en-US
-type: Book
-docname: cloudstack_trial_installation
-brand: cloudstack
-chunk_first: 1
-chunk_section_depth: 1
-
-
-