1. The buildsystemvm.sh script builds a 32-bit system vm disk based on the Debian Squeeze distro. This system vm can boot on any hypervisor thanks to the pvops support in the kernel. The build is fully automated.
2. The files under config/ are the specific tweaks to the default Debian configuration that are required for CloudStack operation.
3. The variables at the top of the buildsystemvm.sh script can be customized (see the example that follows this list):
IMAGENAME=systemvm                 # don't touch this
LOCATION=/var/lib/images/systemvm  # directory where the image file is created
MOUNTPOINT=/mnt/$IMAGENAME/        # this is where the image is mounted on your host while the vm image is built
IMAGELOC=$LOCATION/$IMAGENAME.img
PASSWORD=password                  # password for the vm
APT_PROXY=                         # you can put in an APT cacher such as apt-cacher-ng
HOSTNAME=systemvm                  # don't touch this
SIZE=2000                          # don't touch this for now
DEBIAN_MIRROR=ftp.us.debian.org/debian
MINIMIZE=true                      # if this is true, a lot of docs, fonts, locales and the apt cache are wiped out
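For example, a minimal customization before a build might look like this (a sketch with illustrative values only; the cacher address, password and mirror below are assumptions, not defaults, and how APT_PROXY is consumed is up to buildsystemvm.sh itself):

APT_PROXY=http://127.0.0.1:3142/        # local apt-cacher-ng instance, speeds up repeated builds
PASSWORD=MyStrongPassword               # password baked into the image
DEBIAN_MIRROR=ftp.de.debian.org/debian  # pick a mirror close to you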
4. The systemvm includes the (non-free) Sun JRE. You can put in the standard Debian jre-headless package instead, but it pulls in X and bloats the image.
5. You need to be 'root' to run the buildsystemvm.sh script.
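A typical build run then looks roughly like this (a sketch assuming the script is executed from its own directory and needs no arguments):

sudo ./buildsystemvm.sh
# with the default variables the finished raw image is at $IMAGELOC, i.e.
#   /var/lib/images/systemvm/systemvm.img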
6. The image is a raw image. You can run the convert.sh tool to produce images suitable for Citrix XenServer, VMware and KVM.
* Conversion to Citrix XenServer VHD format requires the vhd-util tool. You can either
-- use the checked-in config/bin/vhd-util, OR
-- build the vhd-util tool yourself as follows:
a. The Xen repository has a tool called vhd-util that compiles and runs on any Linux system (http://xenbits.xensource.com/xen-4.0-testing.hg?file/8e8dd38374e9/tools/blktap2/vhd/ or the full Xen source at http://www.xen.org/products/xen_source.html).
b. Apply this patch: http://lists.xensource.com/archives/cgi-bin/mesg.cgi?a=xen-devel&i=006101cb22f6%242004dd40%24600e97c0%24%40zhuo%40cloudex.cn.
c. Build the vhd-util tool
cd tools/blktap2
make
sudo make install
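Putting steps a-c together, the build is roughly as follows (a sketch: the clone URL is the Mercurial repository behind the hgweb link above, and the patch file name, location and -p level are assumptions; adjust them to wherever you saved the patch):

hg clone http://xenbits.xensource.com/xen-4.0-testing.hg
cd xen-4.0-testing.hg
patch -p1 < /path/to/vhd-util.patch   # the patch from the xen-devel post above
cd tools/blktap2
make
sudo make install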
* Conversion to ova (VMware) requires the OVF Tool (ovftool), available from
http://communities.vmware.com/community/vmtn/server/vsphere/automationtools/ovf
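ovftool takes a source and a target; for example (the .vmx and .ova file names here are placeholders for whatever convert.sh produces, not names the tools guarantee):

ovftool systemvm.vmx systemvm.ova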
* Conversion to QCOW2 requires qemu-img
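For example, to convert the raw image to QCOW2 by hand (the input path assumes the default IMAGELOC from buildsystemvm.sh):

qemu-img convert -f raw -O qcow2 /var/lib/images/systemvm/systemvm.img systemvm.qcow2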