haproxy tuning:
0. Test case:
httpd running in 5 user VMs, all of them created on a XenServer host (16 cores, 42G memory, 10G network).
domR running on another host with the same hardware configuration.
The test application, ab, running on another host behind a separate switch.
1. haproxy is not a memory-intensive app; I can get 4625.96 connections/s with 1G of memory. It is, however, a really CPU-intensive app: domR always uses around 100% CPU on the host.
2. By default you can't get a better connection/s rate, because ip_conntrack_max and tcp_max_tw_buckets are too small; you will see errors in domR like:
"TCP: time wait bucket table overflow" or "nf_conntrack: table full, dropping packet".
So I increased these numbers from 65536 to 1000000, after which I can steadily get around 4600 connections/s when memory is >= 1G. A sketch of the corresponding sysctl settings follows.
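A minimal sketch of that tuning, assuming a Linux domR; the conntrack key name depends on the kernel version (net.ipv4.ip_conntrack_max on older kernels, net.netfilter.nf_conntrack_max on newer ones):

    # raise the conntrack table limit that triggers "nf_conntrack: table full, dropping packet"
    sysctl -w net.netfilter.nf_conntrack_max=1000000
    # raise the TIME_WAIT limit that triggers "TCP: time wait bucket table overflow"
    sysctl -w net.ipv4.tcp_max_tw_buckets=1000000

To persist across domR reboots, the same keys would go into /etc/sysctl.conf.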
Here are the connections per second, tested with "ab -n 1000000 -c 100 http://192.168.170.152:880/test.html":
domR memory    conn/s
128M           3545.55
256M           4081.38
512M           4318.18
1G             4625.96
7G             4745.53
3. If I enable notrack for both the connections between domR and the user VMs and those on the public network, which tells iptables in domR not to track connections during the test, I get a better number, around 5800 connections/s. But we can't enable notrack, because iptables is used to track throughput in domR; the rules used for the experiment are sketched below for reference.
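A hedged sketch of the notrack rules used only for this experiment; the interface names are assumptions about which NICs carry guest and public traffic in domR:

    # raw-table rules bypass conntrack entirely; NOT for production, since the
    # iptables counters are what domR uses to account network throughput
    iptables -t raw -A PREROUTING -i eth0 -j NOTRACK   # guest-side interface (assumed)
    iptables -t raw -A PREROUTING -i eth2 -j NOTRACK   # public-side interface (assumed)
    iptables -t raw -A OUTPUT -j NOTRACK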
4. In short, with this commit the connection rate of haproxy can be increased from 1000-2000/s to around 4700/s when domR's memory is 1G or larger.
5. How many CPUs need to be assigned to domR to reach this number? Not determined yet: CPU is shared by all the VMs on the host, so if other VMs are busy they will impact haproxy's performance. One way to isolate this is sketched below.
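One way to finish this measurement on XenServer would be to pin domR's vCPUs to dedicated physical cores so other VMs cannot steal them; the uuid and core mask below are placeholders:

    # pin domR's vCPUs to physical cores 0 and 1 (takes effect on next boot)
    xe vm-param-set uuid=<domR-uuid> VCPUs-params:mask=0,1
    xe vm-shutdown uuid=<domR-uuid>
    xe vm-start uuid=<domR-uuid>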
Changes specific to the Xen hypervisor, plus a DB upgrade. The changes for VMware were already checked in with commits 1c310a0d2ae81108386f0dd5c2e899ff00fee9e9 and e71112e2f587f5d6c9c6d5337cfeb1f239f29633. KVM will not support this feature.
Changes:
- Added a new column `source_template_id` to the vm_template table to carry the parent/source template ID from which the template was created (see the SQL sketch after this list)
- Added the column in the DB upgrade from 224 to 225
- Changed the code to save the source_template_id if there is one associated with the volume, or with the volume from which the snapshot was taken
- The API response returns the sourcetemplateid field, if set, in all template use cases.
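A minimal sketch of the schema part of the 224-to-225 upgrade, assuming the usual MySQL "cloud" database; the exact column type and constraints in the shipped upgrade script may differ:

    mysql -u cloud cloud <<'SQL'
    -- hedged sketch: column type and constraints assumed
    ALTER TABLE vm_template ADD COLUMN source_template_id bigint unsigned
        COMMENT 'ID of the parent/source template this template was created from';
    SQL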
status 9623: resolved fixed
Also set ram_size to 1024 for the console proxy offering during the upgrade; a sketch follows.
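A similarly hedged sketch of that upgrade step; the table name and the predicate identifying the console proxy system offering are assumptions:

    mysql -u cloud cloud <<'SQL'
    -- hypothetical: the exact row selecting the console proxy offering may differ
    UPDATE service_offering SET ram_size = 1024 WHERE name = 'Console Proxy';
    SQL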
Conflicts:
core/src/com/cloud/vm/SecondaryStorageVmVO.java
server/src/com/cloud/agent/manager/allocator/impl/UserConcentratedAllocator.java
server/src/com/cloud/consoleproxy/ConsoleProxyManagerImpl.java
server/src/com/cloud/storage/allocator/LocalStoragePoolAllocator.java
server/src/com/cloud/storage/secondary/SecondaryStorageManagerImpl.java