diff --git a/docs/en-US/Installation_Guide.xml b/docs/en-US/Installation_Guide.xml
index 2f60acac984..f2f27ad9621 100644
--- a/docs/en-US/Installation_Guide.xml
+++ b/docs/en-US/Installation_Guide.xml
@@ -57,5 +57,6 @@
+
diff --git a/docs/en-US/best-practices.xml b/docs/en-US/best-practices.xml
new file mode 100644
index 00000000000..41d7cde9036
--- /dev/null
+++ b/docs/en-US/best-practices.xml
@@ -0,0 +1,82 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+
+
+
+ Best Practices
+ Deploying a cloud is challenging. There are many different technology choices to make, and &PRODUCT; is flexible enough in its configuration that there are many possible ways to combine and configure the chosen technology. This section contains suggestions and requirements about cloud deployments.
+ These should be treated as suggestions and not absolutes. However, we do encourage anyone planning to build a cloud outside of these guidelines to seek guidance and advice on the project mailing lists.
+
+ Process Best Practices
+
+
+ A staging system that models the production environment is strongly advised. It is critical if customizations have been applied to &PRODUCT;.
+
+
+ Allow adequate time for installation, a beta, and learning the system. Installs with basic networking can be done in hours. Installs with advanced networking usually take several days for the first attempt, with complicated installations taking longer. For a full production system, allow at least 4-8 weeks for a beta to work through all of the integration issues. You can get help from fellow users on the cloudstack-users mailing list.
+
+
+
+
+ Setup Best Practices
+
+
+ Each host should be configured to accept connections only from well-known entities such as the &PRODUCT; Management Server or your network monitoring software.
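+ As a minimal sketch (the addresses are placeholders for your Management Server and monitoring host, and the rules assume an iptables-based host firewall), the restriction might look like this:
+
+    iptables -A INPUT -i lo -j ACCEPT
+    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
+    iptables -A INPUT -s 192.0.2.10 -j ACCEPT   # &PRODUCT; Management Server (example address)
+    iptables -A INPUT -s 192.0.2.20 -j ACCEPT   # network monitoring system (example address)
+    iptables -A INPUT -j DROP
+
+ Adjust the addresses, and narrow the accepted ports, to match your own deployment.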
+
+
+ Use multiple clusters per pod if you need to achieve a certain switch density.
+
+
+ Primary storage mountpoints or LUNs should not exceed 6 TB in size. It is better to have multiple smaller primary storage elements per cluster than one large one.
+
+
+ When exporting shares on primary storage, avoid data loss by restricting the range of IP addresses that can access the storage. See "Linux NFS on Local Disks and DAS" or "Linux NFS on iSCSI".
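+ For example, an /etc/exports entry that limits an NFS primary storage export to the hypervisor hosts' subnet might look like the following (the export path and subnet are placeholders):
+
+    /export/primary 192.0.2.0/24(rw,async,no_root_squash,no_subtree_check)
+
+ Apply the change with exportfs -a or by restarting the NFS service.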
+
+
+ NIC bonding is straightforward to implement and provides increased reliability.
+
+
+ 10 GbE networks are generally recommended for storage access when using larger servers that can host relatively more VMs.
+
+
+ Host capacity should generally be modeled in terms of RAM for the guests. Storage and CPU may be overprovisioned. RAM may not. RAM is usually the limiting factor in capacity designs.
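+ As an illustration (the numbers are examples only): a host with 256 GB of RAM, of which roughly 8 GB is reserved for the hypervisor and system overhead, leaves about 248 GB for guests; at an average of 4 GB per guest, that host tops out at roughly 60 VMs even if plenty of CPU and storage capacity remains.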
+
+
+ (XenServer) Configure the XenServer dom0 settings to allocate more memory to dom0. This can enable XenServer to handle larger numbers of virtual machines. We recommend 2940 MB of RAM for XenServer dom0. For instructions on how to do this, see http://support.citrix.com/article/CTX126531. The article refers to XenServer 5.6, but the same information applies to XenServer 6.0.
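+ As a sketch of one way to do this on XenServer 6.0 (verify the exact procedure for your version against the article above; a host reboot is required afterwards):
+
+    /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=2940M,max:2940M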
+
+
+
+
+ Maintenance Best Practices
+
+
+ Monitor host disk space. Many host failures occur because the host's root disk fills up from logs that were not rotated adequately.
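+ A minimal logrotate entry along these lines can help (the log path is an example; point it at wherever your hosts actually write their logs):
+
+    # /etc/logrotate.d/cloudstack-agent -- illustrative only
+    /var/log/cloudstack/agent/agent.log {
+        daily
+        rotate 7
+        compress
+        missingok
+        notifempty
+        copytruncate
+    }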
+
+
+ Monitor the total number of VM instances in each cluster, and disable allocation to the cluster if the total is approaching the maximum that the hypervisor can handle. Be sure to leave a safety margin to allow for the possibility of one or more hosts failing, which would increase the VM load on the other hosts as the VMs are redeployed. Consult the documentation for your chosen hypervisor to find the maximum permitted number of VMs per host, then use &PRODUCT; global configuration settings to set this as the default limit. Monitor the VM activity in each cluster and keep the total number of VMs below a safe level that allows for the occasional host failure. For example, if there are N hosts in the cluster, and you want to allow for one host in the cluster to be down at any given time, the total number of VM instances you can permit in the cluster is at most (N-1) * (per-host-limit). Once a cluster reaches this number of VMs, use the &PRODUCT; UI to disable allocation to the cluster.
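+ For example, a cluster of 8 hosts with a per-host limit of 50 VMs should be capped at (8-1) * 50 = 350 instances. As a sketch, assuming the CloudMonkey CLI is installed and you have looked up the cluster's ID, allocation can also be disabled from the command line instead of the UI:
+
+    update cluster id=<cluster-uuid> allocationstate=Disabled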
+
+
+ The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
+ Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor’s support channel, and apply patches as soon as possible after they are released. &PRODUCT; will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.
+
+