Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
===========================================================
# Introduction
This is used to build appliances for use with CloudStack. Currently, two build profiles are available: `systemvmtemplate` (Debian-based) and a CentOS-based built-in user VM template.
# Setting up Tools and Environment

- Install packer and the latest KVM/qemu on a Linux machine
- Install the tools needed for exporting appliances: qemu-img, ovftool, faketime, sharutils
- Build and install `vhd-util` as described in build.sh, or use the pre-built
  binaries at:
  http://packages.shapeblue.com/systemvmtemplate/vhd-util
  http://packages.shapeblue.com/systemvmtemplate/libvhd.so.1.0
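On a Debian/Ubuntu build host, the prerequisites above can typically be set up
along the following lines. This is only a sketch: the package names, the note
about packer (installed from HashiCorp rather than the stock archive) and the
install paths for the pre-built vhd-util binaries are assumptions, and ovftool
has to be downloaded separately from VMware:

    # Hypothetical prerequisite setup for a Debian/Ubuntu build host;
    # adjust package names for your distribution.
    sudo apt-get update
    sudo apt-get install -y qemu-kvm qemu-utils faketime sharutils wget

    # packer is usually installed from HashiCorp's releases/repository,
    # and ovftool must be obtained from VMware separately.

    # Optionally fetch the pre-built vhd-util binaries mentioned above
    # (the install paths below are assumptions, not mandated by build.sh).
    wget http://packages.shapeblue.com/systemvmtemplate/vhd-util
    wget http://packages.shapeblue.com/systemvmtemplate/libvhd.so.1.0
    chmod +x vhd-util
    sudo cp vhd-util /usr/local/bin/
    sudo cp libvhd.so.1.0 /usr/local/lib/ && sudo ldconfig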
# How to build appliances

Just run build.sh; it will export the archived appliances for KVM, XenServer,
VMware and Hyper-V in the `dist` directory:

    bash build.sh systemvmtemplate
    bash build.sh builtin
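Once a build finishes, the exported archives land in `dist` and can be
inspected before registering them as templates. The listing and checksum step
below is only an illustrative follow-up (no particular archive names are
assumed), not something build.sh requires:

    # Inspect the exported appliance archives (assumes they were placed in ./dist)
    ls -lh dist/
    # Optionally record checksums before uploading/registering the templates
    sha512sum dist/*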