The routing table of a user VM with two NICs could get messed up because we sent the same router (gateway) information from both DHCP servers when specifying the default gateway. For example:

Network A: 192.168.1.0/24, gateway 192.168.1.1
Network B: 192.168.2.0/24, gateway 192.168.2.1
User VM: NIC 1 connects to network A and gets IP 192.168.1.10; NIC 2 connects to network B and gets IP 192.168.2.10. Network A is set as the default network of the user VM.

Previously we sent this information to the user VM through the DHCP offer:

In network A: dhcp-option:router 192.168.1.1
In network B: dhcp-option:router 192.168.1.1

So both NICs in the guest VM received 192.168.1.1 as the router (gateway). However, on CentOS 5.6 the dhclient scripts check whether the gateway is reachable from the current subnet. So when NIC 2 (eth1) of the user VM is enabled, dhclient receives:

IP: 192.168.2.10
Mask: 255.255.255.0
Router: 192.168.1.1

It then finds that the specified gateway (router) is not within its own subnet (192.168.2.0/24). But since we sent this IP (192.168.1.1) as the gateway, dhclient assumes there must be some way to reach the network through it, so it executes:

ip route add 192.168.1.1 dev eth1
ip route replace default via 192.168.1.1 dev eth1

The VM can never reach 192.168.1.1 (which is in eth0's subnet and is eth0's gateway) through the eth1 interface, so routing is broken. We have tested Windows 2008 R2, CentOS 5.3, CentOS 5.6 and Ubuntu 10.04; Windows and Ubuntu are fine with the old policy. To solve this, we now send a different dhcp:router option depending on the guest OS type. We may need to expand this list later, but for now we only know that CentOS and RHEL behave in this way.

status 14042: resolved fixed
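As a minimal sketch of how such per-network router options can be expressed, assuming the DHCP server is dnsmasq (the tag names, address ranges and lease time below are hypothetical, for illustration only):

# Hand out the real gateway only on the default network (A). On network B,
# an empty router option tells dnsmasq not to send option 3 (router) at all,
# so a CentOS/RHEL dhclient cannot install a bogus default route via eth1.
dhcp-range=set:netA,192.168.1.10,192.168.1.200,24h
dhcp-range=set:netB,192.168.2.10,192.168.2.200,24h
dhcp-option=tag:netA,option:router,192.168.1.1
dhcp-option=tag:netB,option:router
# In practice the tag would be chosen per guest (e.g. via a dhcp-host entry
# keyed on the VM's MAC address) so that only gateway-sensitive guest OS
# types such as CentOS/RHEL get this treatment.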
1. The buildsystemvm.sh script builds a 32-bit system VM disk based on the Debian Squeeze distro. This system VM can boot on any hypervisor thanks to the pvops support in the kernel. The build is fully automated.
2. The files under config/ are the specific tweaks to the default Debian configuration that are required for CloudStack operation.
3. The variables at the top of the buildsystemvm.sh script can be customized (see the example after this list):
IMAGENAME=systemvm # don't touch this
LOCATION=/var/lib/images/systemvm # directory where the image file is written
MOUNTPOINT=/mnt/$IMAGENAME/ # this is where the image is mounted on your host while the VM image is built
IMAGELOC=$LOCATION/$IMAGENAME.img
PASSWORD=password # password for the VM
APT_PROXY= # you can point this at an APT cache such as apt-cacher-ng
HOSTNAME=systemvm # don't touch this
SIZE=2000 # don't touch this for now
DEBIAN_MIRROR=ftp.us.debian.org/debian
MINIMIZE=true # if this is true, a lot of docs, fonts, locales and the apt cache are wiped out
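For example, to build through a local apt-cacher-ng instance and a closer Debian mirror, you could adjust the variables in place before running the script (the proxy URL and mirror below are just examples):

sed -i 's|^APT_PROXY=.*|APT_PROXY=http://127.0.0.1:3142/|' buildsystemvm.sh
sed -i 's|^DEBIAN_MIRROR=.*|DEBIAN_MIRROR=ftp.de.debian.org/debian|' buildsystemvm.sh
sudo ./buildsystemvm.sh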
4. The systemvm includes the (non-free) Sun JRE. You can put in the standard Debian jre-headless package instead, but it pulls in X and bloats the image.
5. You need to be 'root' to run the buildsystemvm.sh script.
6. The image is a raw image. You can run the convert.sh tool to produce images suitable for Citrix XenServer, VMware and KVM (see the examples after this list).
* Conversion to Citrix XenServer VHD format requires the vhd-util tool. You can either:
-- use the checked-in config/bin/vhd-util, OR
-- build the vhd-util tool yourself as follows:
a. The Xen repository has a tool called vhd-util that compiles and runs on any Linux system (http://xenbits.xensource.com/xen-4.0-testing.hg?file/8e8dd38374e9/tools/blktap2/vhd/ or full Xen source at http://www.xen.org/products/xen_source.html).
b. Apply this patch: http://lists.xensource.com/archives/cgi-bin/mesg.cgi?a=xen-devel&i=006101cb22f6%242004dd40%24600e97c0%24%40zhuo%40cloudex.cn.
c. Build the vhd-util tool:
cd tools/blktap2
make
sudo make install
* Conversion to OVA (VMware) requires the OVF Tool, available from
http://communities.vmware.com/community/vmtn/server/vsphere/automationtools/ovf
* Conversion to QCOW2 (for KVM) requires qemu-img.
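A few examples of converting and sanity-checking the image (the input path assumes the default IMAGELOC above; the output filenames are just illustrations):

# produce a QCOW2 image for KVM from the raw disk
qemu-img convert -f raw -O qcow2 /var/lib/images/systemvm/systemvm.img systemvm.qcow2

# verify a VHD produced for XenServer, using the vhd-util built above
vhd-util check -n systemvm.vhd
vhd-util query -n systemvm.vhd -v   # prints the virtual size (in MB)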