mirror of
https://github.com/apache/cloudstack.git
synced 2025-10-26 08:42:29 +01:00
haproxy tuning:

0. Test setup: httpd running in 5 user VMs, all created on one XenServer host (16 cores, 42G memory, 10G network); domR running on another host with the same hardware configuration; the test tool, ab, running on yet another host behind a separate switch.

1. haproxy is not a memory-intensive application: I can get 4625.96 connections/s with 1G of memory. It is, however, very CPU intensive; domR consistently uses around 100% CPU on the host.

2. By default you cannot get a better connection rate, because ip_conntrack_max and the TIME-WAIT bucket limit are too small; you will see errors in domR like "TCP: time wait bucket table overflow" or "nf_conntrack: table full, dropping packet". After increasing both from 65536 to 1000000, I can steadily get around 4600 connections/s when memory is >= 1G. Connections per second, tested with "ab -n 1000000 -c 100 http://192.168.170.152:880/test.html":

   domR memory   conn/s
   128M:         3545.55
   256M:         4081.38
   512M:         4318.18
   1G:           4625.96
   7G:           4745.53

3. If I enable NOTRACK for both the connections between domR and the user VMs and the public network, which tells iptables in domR not to track connections during the test, I get a better number, around 5800 connections/s. But we cannot enable NOTRACK, because iptables is used to track throughput in domR.

4. In short, with this commit the haproxy connection rate can be increased from 1000-2000/s to 4700/s when domR's memory is larger than 1G.

5. How many CPUs need to be assigned to domR to get this number? Not determined yet: CPU is shared by all the VMs on the host, and if other VMs are busy, they impact the performance of haproxy.
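The limits raised in step 2 can be sketched as a sysctl config fragment. The exact values below match the commit text; the conntrack key name varies by kernel version (older kernels use net.ipv4.netfilter.ip_conntrack_max instead of net.netfilter.nf_conntrack_max), so treat this as an illustration, not the literal patch:

```shell
# /etc/sysctl.conf fragment -- a sketch of the tuning described above.

# Enlarge the connection-tracking table so iptables stops dropping
# packets ("nf_conntrack: table full, dropping packet"):
net.netfilter.nf_conntrack_max = 1000000

# Enlarge the TIME-WAIT bucket limit so the kernel stops logging
# "TCP: time wait bucket table overflow":
net.ipv4.tcp_max_tw_buckets = 1000000
```

Settings placed in /etc/sysctl.conf are applied at boot, or immediately with `sysctl -p`.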