diff --git a/docs/en-US/hypervisor-host-install-primary-storage.xml b/docs/en-US/hypervisor-host-install-primary-storage.xml
deleted file mode 100644
index 3e76bfbf3ba..00000000000
--- a/docs/en-US/hypervisor-host-install-primary-storage.xml
+++ /dev/null
@@ -1,62 +0,0 @@
%BOOK_ENTITIES;
]>
Adding Primary Storage to a KVM hypervisor
For most Primary Storage types there are no special requirements; adding primary storage is handled entirely through the management server. The following subsections, however, describe a couple of prerequisites for adding primary storage to a KVM cluster/hypervisor.
Requirements for NFS primary storage
&PRODUCT; handles the mounting of the NFS storage; no manual intervention is required. Before adding the Primary Storage, make sure the NFS client packages are installed. It is always useful to do a manual mount on one or more hypervisors before adding the primary storage, to confirm that the mountpoint works.
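A throwaway test mount is a quick way to follow that advice. The sketch below is a minimal example, assuming a hypothetical export nfs.example.com:/export/primary (substitute your own); it only checks that the export can be mounted from the hypervisor and is not part of the &PRODUCT; procedure itself.

    #!/usr/bin/env python3
    # Minimal test-mount sketch. The server and export below are placeholders
    # (assumptions), not values taken from this documentation -- substitute your own.
    # Run as root on the hypervisor; the NFS client packages (nfs-utils / nfs-common)
    # must already be installed for mount -t nfs to work.
    import os
    import subprocess
    import tempfile

    NFS_SERVER = "nfs.example.com"   # hypothetical NFS server
    NFS_EXPORT = "/export/primary"   # hypothetical export path

    mountpoint = tempfile.mkdtemp(prefix="nfs-test-")
    try:
        subprocess.check_call(
            ["mount", "-t", "nfs", "{}:{}".format(NFS_SERVER, NFS_EXPORT), mountpoint])
        print("Mount succeeded; this hypervisor can reach the export.")
        subprocess.check_call(["umount", mountpoint])
    finally:
        os.rmdir(mountpoint)

If the test mount fails, sort out the export, firewalling, or NFS client packages before adding the Primary Storage through the management server.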
Requirements for iSCSI primary storage
When adding iSCSI Primary Storage, the &PRODUCT; management server configures the iSCSI initiator. The only requirement is that Open-iSCSI is installed on all hypervisors.
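As a rough illustration of that check, the sketch below probes for the iscsiadm binary that the Open-iSCSI packages provide; the package names in the comments are the usual distribution defaults, not something this guide prescribes.

    #!/usr/bin/env python3
    # Sketch: check that the Open-iSCSI initiator tools are present on this hypervisor.
    # iscsiadm ships with the open-iscsi package (Debian/Ubuntu) or
    # iscsi-initiator-utils (RHEL/CentOS).
    import subprocess

    try:
        version = subprocess.check_output(["iscsiadm", "--version"]).decode().strip()
        print("Open-iSCSI found:", version)
    except (OSError, subprocess.CalledProcessError):
        print("iscsiadm not found; install Open-iSCSI before adding iSCSI Primary Storage.")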
Requirements for Ceph primary storage
Support for RBD Primary Storage was added in &PRODUCT; 4.0 and requires a special version of libvirt. With the KVM hypervisor, &PRODUCT; relies on libvirt for handling its storage pools. Most versions of libvirt do not yet include RBD storage pool support, so a manual compile of libvirt may be required.
To use RBD primary storage, make sure your hypervisors meet the following requirements:
* librbd is installed on your system.
* An RBD-enabled Qemu version is installed.
* Libvirt (>= 0.9.13) with RBD storage pool support enabled is installed.
* No /etc/ceph/ceph.conf configuration file is present on your hypervisors.
After meeting these requirements you can add the RBD storage pool via the WebUI.
Hint: Ubuntu 13.04 meets all of these requirements by default.
&PRODUCT; does not support multiple hostnames when adding a Primary Storage pool. If you have multiple Ceph monitor daemons, it is best to create a round-robin DNS record and use that as the hostname for the storage pool.
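The requirements above can be checked by hand; the sketch below is one hedged way to do so, assuming virsh and qemu-img are already on the hypervisor's PATH. It reports the libvirt version, whether qemu lists rbd among its supported formats, and whether /etc/ceph/ceph.conf is absent.

    #!/usr/bin/env python3
    # Sketch: report the RBD prerequisites listed above on a KVM hypervisor.
    # Assumes virsh and qemu-img are on the PATH.
    import os
    import subprocess

    # libvirt >= 0.9.13 is required for RBD storage pool support.
    libvirt_version = subprocess.check_output(["virsh", "--version"]).decode().strip()
    print("libvirt version:", libvirt_version)

    # A qemu built with RBD support lists 'rbd' among its supported formats in --help.
    # (Some builds exit non-zero after printing help, so the exit status is ignored.)
    qemu_help = subprocess.run(["qemu-img", "--help"],
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT).stdout.decode()
    print("qemu-img reports rbd support:", "rbd" in qemu_help)

    # CloudStack expects no ceph.conf to be present on the hypervisor itself.
    print("/etc/ceph/ceph.conf absent:", not os.path.exists("/etc/ceph/ceph.conf"))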
\ No newline at end of file
diff --git a/docs/en-US/hypervisor-kvm-install-flow.xml b/docs/en-US/hypervisor-kvm-install-flow.xml
index 7dfd47d2e52..6cc73e4fdfa 100644
--- a/docs/en-US/hypervisor-kvm-install-flow.xml
+++ b/docs/en-US/hypervisor-kvm-install-flow.xml
@@ -34,5 +34,4 @@
diff --git a/docs/en-US/primary-storage-add.xml b/docs/en-US/primary-storage-add.xml
index ddae418d925..067cf7114dc 100644
--- a/docs/en-US/primary-storage-add.xml
+++ b/docs/en-US/primary-storage-add.xml
@@ -37,11 +37,6 @@
Primary storage cannot be added until a host has been added to the cluster.
If you do not provision shared primary storage, you must set the global configuration parameter system.vm.local.storage.required to true, or else you will not be able to start VMs.
-There are some differences between hypervisors regarding Primary Storage. See the list below for more information per hypervisor.
Adding Primary Storage
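For the system.vm.local.storage.required parameter mentioned above, the value can be changed under Global Settings in the UI or through the API's updateConfiguration command. The sketch below builds such a signed API request; the endpoint and keys are placeholders, and the request follows CloudStack's HMAC-SHA1 signing scheme.

    #!/usr/bin/env python3
    # Hedged sketch: build a signed CloudStack API request that sets
    # system.vm.local.storage.required to true via the updateConfiguration command.
    # The endpoint and keys below are placeholders, not values from this document.
    import base64
    import hashlib
    import hmac
    from urllib.parse import quote_plus

    API_URL = "http://management.example.com:8080/client/api"   # placeholder
    API_KEY = "your-api-key"                                     # placeholder
    SECRET_KEY = "your-secret-key"                               # placeholder

    params = {
        "command": "updateConfiguration",
        "name": "system.vm.local.storage.required",
        "value": "true",
        "response": "json",
        "apiKey": API_KEY,
    }

    # CloudStack request signing: sort the parameters, URL-encode the values,
    # lowercase the whole query string, HMAC-SHA1 it with the secret key,
    # then base64- and URL-encode the result as the signature parameter.
    query = "&".join("{}={}".format(k, quote_plus(params[k])) for k in sorted(params))
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
    signature = quote_plus(base64.b64encode(digest).decode())

    print("{}?{}&signature={}".format(API_URL, query, signature))

Note that most global settings only take effect after the management server has been restarted.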