Source code committed

This commit is contained in:
Manuel Amador (Rudd-O) 2010-08-11 09:13:29 -07:00
parent 2b78b78272
commit 05c020e1f6
2098 changed files with 297952 additions and 0 deletions

652
HACKING Normal file

@ -0,0 +1,652 @@
---------------------------------------------------------------------
THE QUICK GUIDE TO CLOUDSTACK DEVELOPMENT
---------------------------------------------------------------------
=== Overview of the development lifecycle ===
To hack on a CloudStack component, you will generally:
1. Configure the source code:
./waf configure --prefix=/home/youruser/cloudstack
(see below, "./waf configure")
2. Build and install the CloudStack
./waf install
(see below, "./waf install")
3. Set the CloudStack component up
(see below, "Running the CloudStack components from source")
4. Run the CloudStack component
(see below, "Running the CloudStack components from source")
5. Modify the source code
6. Build and install the CloudStack again
./waf install --preserve-config
(see below, "./waf install")
7. GOTO 4
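Put together, a typical edit-build-run session for the Management Server
looks like this (the prefix is just an example; run_agent and
run_console_proxy are covered later in this document):
  ./waf configure --prefix=/home/youruser/cloudstack
  ./waf install
  (edit the source)
  ./waf install --preserve-config
  ./waf run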
=== What is this waf thing in my development lifecycle? ===
waf is a self-contained, advanced build system written by Thomas Nagy,
in the spirit of SCons or the GNU autotools suite.
* To run waf on Linux / Mac: ./waf [...commands...]
* To run waf on Windows: waf.bat [...commands...]
./waf --help should be your first discovery point to find out both the
configure-time options and the different processes that you can run
using waf.
=== What do the different waf commands above do? ===
1. ./waf configure --prefix=/some/path
You run this command *once*, in preparation for building, or every
time you need to change a configure-time variable.
This runs configure() in wscript, which takes care of setting the
variables and options that waf will use for compilation and
installation, including the installation directory (PREFIX).
For convenience, if you forget to run configure, waf
will proceed with some default configuration options. By
default, PREFIX is /usr/local, but you can set it e.g. to
/home/youruser/cloudstack if you plan to do a non-root
install. Be aware that you can later install the stack as a
regular user, but most components need to *run* as root.
./waf showconfig displays the values of the configure-time options
2. ./waf
You run this command to trigger compilation of the modified files.
This runs the contents of wscript_build, which takes care of
discovering and describing what needs to be built, which
build products / sources need to be installed, and where.
3. ./waf install
You run this command when you want to install the CloudStack.
If you are going to install for production, you should run this
process as root. If, conversely, you only want to install the
stack as your own user and in a directory to which you have write
permission, it's fine to run waf install as your own user.
This runs the contents of wscript_build, with an option variable
Options.is_install = True. When this variable is set, waf will
install the files described in wscript_build. For convenience,
when you run install, any files that need to be recompiled
will also be recompiled prior to installation.
--------------------
WARNING: each time you do ./waf install, the configuration files
in the installation directory are *overwritten*.
There are, however, two ways to get around this:
a) ./waf install has an option --preserve-config. If you pass
this option when installing, configuration files are never
overwritten.
This option is useful when you have modified source files and
you need to deploy them on a system that already has the
CloudStack installed and configured, but you do *not* want to
overwrite the existing configuration of the CloudStack.
If, however, you have reconfigured and rebuilt the source
since the last time you did ./waf install, then you are
advised to replace the configuration files and set the
components up again, because some configuration files
in the source use identifiers that may have changed during
the last ./waf configure. So, if this is your case, check
out the next way:
b) Every configuration file can be overridden in the source
without touching the original.
- Look for said config file X (or X.in) in the source, then
- create an override/ folder in the folder that contains X, then
- place a file named X (or X.in) inside override/, then
- put the desired contents inside X (or X.in)
Now, every time you run ./waf install, the file that will be
installed is path/to/override/X.in, instead of path/to/X.in.
This option is useful if you are developing the CloudStack
and constantly reinstalling it. It guarantees that every
time you install the CloudStack, the installation will have
the correct configuration and will be ready to run.
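As a concrete sketch (the file name and path here are the same placeholders
used above; substitute the actual config file you want to override):
  mkdir path/to/override
  cp path/to/X.in path/to/override/X.in
  (edit path/to/override/X.in as desired)
  ./waf install
From then on, every ./waf install picks up path/to/override/X.in instead of
path/to/X.in.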
=== Running the CloudStack components from source (for debugging / coding) ===
It is not technically possible to run the CloudStack components from
the source. That, however, is fine -- each component can be run
independently from the install directory:
- Management Server
1) Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
WARNING: if any CloudStack configuration files have been
already configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set up the management server database:
- either run ./waf deploydb_kvm, or
- run $BINDIR/cloud-setup-databases
3) Execute ./waf run as your current user (or as root if the
installation path is only writable by root). Alternatively,
you can use ./waf debug and this will run with debugging enabled.
- Agent (Linux-only):
1) Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
WARNING: if any CloudStack configuration files have been
already configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set the Agent up:
- run $BINDIR/cloud-setup-agent
3) Execute ./waf run_agent as root
This will launch sudo and require your root password unless you have
set sudo up not to ask for it.
- Console Proxy (Linux-only):
1) Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
WARNING: if any CloudStack configuration files have been
already configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set the Console Proxy up:
- run $BINDIR/cloud-setup-console-proxy
3) Execute ./waf run_console_proxy
This will launch sudo and require your root password unless you have
set sudo up not to ask for it.
---------------------------------------------------------------------
BUILD SYSTEM TIPS
---------------------------------------------------------------------
=== Integrating compilation and execution of each component into Eclipse ===
To run the Management Server from Eclipse, set up an External Tool of the
Program variety. Put the path to the waf binary in the Location field of
the dialog, and the source directory as the Working Directory. Then specify
"install --preserve-config run" as arguments (without the quotes). You can
now use the Run button in Eclipse to execute the Management Server directly
from Eclipse. You can replace run with debug if you want to run the
Management Server with the Debugging Proxy turned on.
To run the Agent or Console Proxy from Eclipse, set up an External Tool of
the Program variety just like in the Management Server case. In there,
however, specify "install --preserve-config run_agent" or
"install --preserve-config run_console_proxy" as arguments instead.
Remember that you need to set sudo up to not ask you for a password and not
require a TTY, otherwise sudo -- implicitly called by waf run_agent or
waf run_console_proxy -- will refuse to work.
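For example, on a development box a sudoers snippet like the following
(added with visudo; "youruser" is a placeholder for your login) achieves
both, though you should weigh the security implications yourself:
  youruser ALL=(ALL) NOPASSWD: ALL
  Defaults:youruser !requiretty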
=== Building targets selectively ===
You can find out the targets of the build system:
./waf list_targets
If you want to run a specific task generator,
./waf build --targets=patchsubst
should run just that one (and whatever targets are required to build that
one, of course).
=== Common targets ===
* ./waf configure: you must always run configure once, and provide it with
the target installation paths for when you run install later
o --help: will show you all the configure options
o --no-dep-check: will skip dependency checks for java packages
needed to compile (saves 20 seconds when redoing the configure)
o --with-db-user, --with-db-pw, --with-db-host: informs the build
system of the MySQL configuration needed to set up the management
server upon install, and to do deploydb
* ./waf build: will compile any source files (and, on some projects, will
also perform any variable substitutions on any .in files such as the
MANIFEST files). Build outputs will be in <projectdir>/artifacts/default.
* ./waf install: will compile if not compiled yet, then execute an install
of the built targets. I had to write a significant amount of code
(a couple dozen lines) to make install work.
* ./waf run: will run the management server in the foreground
* ./waf debug: will run the management server in the foreground, and open
port 8787 to connect with the debugger (see the Run / debug options of
waf --help to change that port)
* ./waf deploydb: deploys the database using the MySQL configuration supplied
with the configuration options when you did ./waf configure. RUN WAF BUILD
FIRST AT LEAST ONCE.
* ./waf dist: create a source tarball. These tarballs will be distributed
independently on our Web site, and will form the source release of the
CloudStack. It is a self-contained release that can be ./waf built and
./waf installed everywhere.
* ./waf clean: remove known build products
* ./waf distclean: remove the artifacts/ directory altogether
* ./waf uninstall: uninstall all installed files
* ./waf rpm: build RPM packages
o if the build fails because the system lacks dependencies from our
other modules, waf will attempt to install RPMs from the repos,
then try the build
o it will place the built packages in artifacts/rpmbuild/
* ./waf deb: build Debian packages
o if the build fails because the system lacks dependencies from our
other modules, waf will attempt to install DEBs from the repos,
then try the build
o it will place the built packages in artifacts/debbuild/
* ./waf uninstallrpms: removes all Cloud.com RPMs from a system (but not
logfiles or modified config files)
* ./waf viewrpmdeps: displays RPM dependencies declared in the RPM specfile
* ./waf installrpmdeps: runs Yum to install the packages required to build
the CloudStack
* ./waf uninstalldebs: removes all Cloud.com DEBs from a system (AND logfiles
AND modified config files)
* ./waf viewdebdeps: displays DEB dependencies declared in the project
debian/control file
* ./waf installdebdeps: runs aptitude to install the packages required to
build our software
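For illustration, a typical sequence chaining several of these targets
(paths and MySQL credentials below are placeholders):
  ./waf configure --prefix=/usr/local \
      --with-db-user=cloud --with-db-pw=cloud --with-db-host=127.0.0.1
  ./waf build
  ./waf install        (as root if the prefix is only writable by root)
  ./waf deploydb       (run ./waf build at least once before this)
  ./waf run            (starts the management server in the foreground)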
=== Overriding certain source files ===
Earlier in this document we explored overriding configuration files.
Overrides are not limited to configuration files.
If you want to provide your own server-setup.xml or SQL files in client/setup:
* create a directory override inside the client/setup folder
* place your file that should override a file in client/setup there
There's also override support in client/tomcatconf and agent/conf.
=== Environment substitutions ===
Any file named "something.in" has its tokens (@SOMETOKEN@) automatically
substituted for the corresponding build environment variable. The build
environment variables are generally constructed at configure time and
controllable through the command-line parameters to waf configure, and should
be available as a list of variables inside the file
artifacts/c4che/build.default.py.
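For instance, one of the .in files in this commit contains the line
paths.pid=@PIDDIR@; after ./waf build, the generated copy has the token
replaced by the value chosen at configure time, roughly:
  before (the .in file):  paths.pid=@PIDDIR@
  after substitution:     paths.pid=/var/run
(The substituted value depends on your configure options; /var/run is just
an assumed typical value for a package build.)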
=== The prerelease mechanism ===
The prerelease mechanism (--prerelease=BRANCHNAME) allows developers and
builders to build packages with pre-release Release tags. The Release tags
are constructed in such a way that both the build number and the branch name
are included, so developers can push these packages to repositories and upgrade
them using yum or aptitude without having to delete packages manually and
install packages manually every time a new build is done. Any package built
with the prerelease mechanism gets a standard X.Y.Z version number -- and,
due to the way that the prerelease Release tags are concocted, always upgrades
any older prerelease package already present on any system. The prerelease
mechanism must never be used to create packages that are intended to be
released as stable software to the general public.
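A hedged sketch of a prerelease package build (whether the option is passed
to the packaging command, as shown here, or at configure time is best
confirmed with ./waf --help):
  ./waf rpm --prerelease=mybranch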
Relevant documentation:
http://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Version
http://fedoraproject.org/wiki/PackageNamingGuidelines#Pre-Release_packages
Everything comes together on the build server in the following way:
=== SCCS info ===
When building a source distribution (waf dist), or RPM/DEB distributions
(waf deb / waf rpm), waf will automatically detect the relevant source code
control information if the git command is present on the machine where waf
is run, and it will write the information to a file called sccs-info inside
the source tarball / install it into /usr/share/doc/cloud*/sccs-info when
installing the packages.
If this source code control information cannot be calculated, then the old
sccs-info file is preserved across dist runs if it exists, and if it did
not exist before, the fact that the source could not be properly tracked
down to a repository is noted in the file.
=== Debugging the build system ===
Almost all targets have names. waf build -vvvvv --zones=task will give you
the task names that you can use in --targets.
---------------------------------------------------------------------
UNDERSTANDING THE BUILD SYSTEM
---------------------------------------------------------------------
=== Documentation for the build system ===
The first and foremost reference material:
- http://freehackers.org/~tnagy/wafbook/index.html
Examples
- http://code.google.com/p/waf/wiki/CodeSnippets
- http://code.google.com/p/waf/w/list
FAQ
- http://code.google.com/p/waf/wiki/FAQ
=== Why waf ===
The CloudStack uses waf to build itself. waf is a relative newcomer
to the build system world; it borrows concepts from SCons and
other later-generation build systems:
- waf is very flexible and rich; unlike other build systems, it covers
the entire life cycle, from compilation to installation to
uninstallation. It also supports dist (create source tarball),
distcheck (check that the source tarball compiles and installs),
autoconf-like checks for dependencies at compilation time,
and more.
- waf is self-contained. A single file, distributed with the project,
enables everything to be built, with only a dependency on Python,
which is freely available and ships with all Linux distributions.
- waf also supports building projects written in multiple languages
(in the case of the CloudStack, we build from C, Java and Python).
- since waf is written in Python, the entire library of the Python
language is available to use in the build process.
=== Hacking on the build system: what are these wscript files? ===
1. wscript: contains most commands you can run from within waf
2. wscript_configure: contains the process that discovers the software
on the system and configures the build to fit that
3. wscript_build: contains a manifest of *what* is built and installed
Refer to the waf book for general information on waf:
http://freehackers.org/~tnagy/wafbook/index.html
=== What happens when waf runs ===
When you run waf, this happens behind the scenes:
- When you run waf for the first time, it unpacks itself to a hidden
directory .waf-1.X.Y.MD5SUM, including the main program and all
the Python libraries it provides and needs.
- Immediately after unpacking itself, waf reads the wscript file
at the root of the source directory. After parsing this file and
loading the functions defined here, it reads wscript_build and
generates a function build() based on it.
- After loading the build scripts as explained above, waf calls
the functions you specified in the command line.
So, for example, ./waf configure build install will:
* call configure() from wscript,
* call build() loaded from the contents of wscript_build,
* call build() once more but with Options.is_install = True.
As part of build(), waf invokes ant to build the Java portion of our
stack.
=== How and why we use ant within waf ===
By now, you have probably noticed that we do, indeed, ship ant
build files in the CloudStack. During the build process, waf calls
ant directly to build the Java portions of our stack, and it uses
the resulting JAR files to perform the installation.
The reason we do this rather than use the native waf capabilities
for building Java projects is simple: by using ant, we can leverage
the built-in support for ant in Eclipse and many other IDEs. Another
reason is that Java developers are familiar with ant, so adding a new
JAR file or modifying what gets built into the existing JAR files is
easier for them.
If you add to the ant build files a new ant target that uses the
compile-java macro, waf will automatically pick it up, along with its
depends= and JAR name attributes. In general, all you need to do is
add the produced JAR name to the packaging manifests (cloud.spec and
debian/{name-of-package}.install).
---------------------------------------------------------------------
FOR ANT USERS
---------------------------------------------------------------------
If you are using Ant directly instead of waf, these instructions apply to you.
In this section, the example instructions assume a local source repository rooted at c:\root. You are free to place it anywhere you like.
3.1 Setup developer build type
1) Go to c:\cloud\java\build directory
2) Copy build-cloud.properties.template to build-cloud.properties, then modify the parameters to match your local setup. The template properties file has content like the following:
debug=true
debuglevel=lines,vars,source
tomcat.home=$TOMCAT_HOME --> change to your local Tomcat root directory such as c:/apache-tomcat-6.0.18
debug.jvmarg=-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n
deprecation=off
build.type=developer
target.compat.version=1.5
source.compat.version=1.5
branding.name=default
3) Make sure the following environment variables and PATH entries are set:
set environment variables:
CATALINA_HOME:
JAVA_HOME:
CLOUD_HOME:
MYSQL_HOME:
update the path to include
MYSQL_HOME\bin
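For example, in a cmd.exe session (every path below is a placeholder for
your local installation):
  set CATALINA_HOME=c:\apache-tomcat-6.0.18
  set JAVA_HOME=c:\jdk1.6.0_21
  set CLOUD_HOME=c:\cloud
  set MYSQL_HOME=c:\mysql
  set PATH=%PATH%;%MYSQL_HOME%\bin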
4) Clone a full directory tree of C:\cloud\java\build\deploy\production to C:\cloud\java\build\deploy\developer
You can use Windows Explorer to copy the directory tree over. Please note that during your daily development process, whenever you see updates in C:\cloud\java\build\deploy\production, you should sync them into C:\cloud\java\build\deploy\developer.
3.2 Common build instructions
After you have set up the build type, you are ready to build and run the Management Server alone locally.
cd java
python waf configure build install
More details are in the build system sections above.
This will install the management server and its prerequisites to the appropriate place (your Tomcat instance on Windows, /usr/local on Linux). It will also install the agent to /usr/local/cloud/agent (this will change in the future).
4. Database and Server deployment
After a successful management server build (the database deployment scripts use some of the artifacts from the build process), you can use the database deployment script to deploy and initialize the database. You can find the deployment scripts in C:/cloud/java/build/deploy/db. deploy-db.sh is used to create and populate your DB instance; take a look at the contents of deploy-db.sh for more details.
Before you run the scripts, you should edit C:/cloud/java/build/deploy/developer/db/server-setup-dev.xml to allocate public and private IP ranges for your development setup. Ensure that the ranges you pick are not allocated to others.
Customized VM templates to be populated are in C:/cloud/java/build/deploy/developer/db/templates-dev.sql. Edit this file to customize the templates to your needs.
Deploy the DB by running
./deploy-db.sh ../developer/db/server-setup-dev.xml ../developer/db/templates-dev.xml
4.1. Management Server Deployment
ant build-server
Build Management Server
ant deploy-server
Deploy Management Server software to Tomcat environment
ant debug
Start Management Server in debug mode. The JVM debug options can be found in cloud-build.properties
ant run
Start Management Server in normal mode.
5. Agent deployment
After a successful build, you can find the build artifacts in the distribution directory; in this example, for the developer build type, the artifacts are located at c:\cloud\java\dist\developer. In particular, if you have run the
ant package-agent build command, you should see the agent software packaged in a single file named agent.zip under c:\cloud\java\dist\developer, together with the agent deployment script deploy-agent.sh.
5.1 Agent Type
The agent software can be deployed and configured to serve different roles at run time. In the current implementation, there are three types of agent configuration, called Computing Server, Routing Server, and Storage Server.
* When the agent is configured to run as a Computing Server, it is responsible for hosting user VMs. The agent should run in the Xen Dom0 system on the computing server machine.
* When the agent is configured to run as a Routing Server, it is responsible for hosting routing VMs for user virtual networks and console proxy system VMs. The routing server serves as the bridge to the outside network, so the machine running the agent should have at least two network interfaces: one towards the outside network and one participating in the internal VMOps management network. Like the computing server, the agent on a routing server should run in the Xen Dom0 system.
* When the agent is configured to run as a Storage Server, it is responsible for providing storage service for all VMs. The storage service is based on ZFS running on a Solaris system, so the agent on a storage server runs under Solaris (actually a Solaris VM). Dom0 systems on the computing and routing servers access the storage service through an iSCSI initiator. The storage volume is eventually mounted on the Dom0 system and made available to DomU VMs through our agent software.
5.2 Resource sharing
All developers can share the same set of agent server machines for development. To make this possible, the concept of an instance appears in various places:
* VM names. VM names are structured names; they contain an instance section that identifies VMs from different VMOps cloud instances. The VMOps cloud instance name is configured in the server configuration parameter AgentManager/instance.name.
* iSCSI initiator mount point. For Computing and Routing servers, the mount point distinguishes the mounted DomU VM images of different agent deployments. The mount location can be specified in the agent.properties file with a name-value pair named mount.parent.
* iSCSI target allocation point. For Storage servers, this allocation point distinguishes the storage allocations of different storage agent deployments. The allocation point can be specified in the agent.properties file with a name-value pair named parent.
5.4 Deploy agent software
Before running the deployment scripts, first copy the build artifacts agent.zip and deploy-agent.sh to your personal development directory on the agent server machines. By our current convention, your personal development directory is usually located at /root/<your name>. In the following example, the agent package and deployment script are copied to test0.lab.vmops.com and the deployment script is marked executable.
On build machine,
scp agent.zip root@test0:/root/<your name>
scp deploy-agent.sh root@test0:/root/<your name>
On agent server machine
chmod +x deploy-agent.sh
5.4.1 Deploy agent on computing server
deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t computing -m expert
5.4.2 Deploy agent on routing server
deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t routing -m expert
5.4.3 Deploy agent on storage server
deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t storage -m expert
5.5 Configure agent
After you have deployed the agent software, configure the agent by editing the agent.properties file under the /root/<your name>/agent/conf directory on each of the Routing, Computing, and Storage servers. Add or edit the following properties; the rest are defaults that get populated by the agent at runtime.
workers=3
host=<replace with your management server IP>
port=8250
pod=<replace with your pod id>
zone=<replace with your zone id>
instance=<your unique instance name>
developer=true
The following is a sample agent.properties file for a Routing server:
workers=3
id=1
port=8250
pod=RC
storage=comstar
zone=RC
type=routing
private.network.nic=xenbr0
instance=RC
public.network.nic=xenbr1
developer=true
host=192.168.1.138
5.6 Running the agent
Edit /root/<your name>/agent/conf/log4j-cloud.xml to update the location of the logs to somewhere under /root/<your name>.
Once you have deployed and configured the agent software, you are ready to launch it. Under the agent root directory (in our example, /root/<your name>/agent) there is a script file named run.sh; use it to launch the agent.
Launch the agent as a detached background process:
nohup ./run.sh &
Launch the agent in interactive mode:
./run.sh
Launch the agent in debug mode; for example, the following command makes the JVM listen on TCP port 8787:
./run.sh -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n
If the agent is launched in debug mode, you may use the Eclipse IDE to debug it remotely. Please note that when you are sharing an agent server machine with others, you should choose a TCP port that is not already in use by someone else.
Please also note that run.sh also searches the /etc/cloud directory for agent.properties; make sure it uses the correct agent.properties file!
5.7 Stopping the agent
The PID of the agent process is in /var/run/agent.<Instance>.pid.
To stop the agent:
kill <pid of agent>
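For example, for the sample Routing server configuration above (instance=RC),
a one-liner that reads the PID file directly is:
kill $(cat /var/run/agent.RC.pid)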

155
INSTALL Normal file

@ -0,0 +1,155 @@
---------------------------------------------------------------------
TABLE OF CONTENTS
---------------------------------------------------------------------
1. Really quick start: building and installing a production stack
2. Post-install: setting the CloudStack components up
3. Installation paths: where the stack is installed on your system
4. Uninstalling the CloudStack from your system
---------------------------------------------------------------------
REALLY QUICK START: BUILDING AND INSTALLING A PRODUCTION STACK
---------------------------------------------------------------------
You have two options. Choose one:
a) Building distribution packages from the source and installing them
b) Building from the source and installing directly from there
=== I want to build and install distribution packages ===
This is the recommended way to run your CloudStack cloud. The
advantages are that dependencies are taken care of automatically
for you, and you can verify the integrity of the installed files
using your system's package manager.
1. As root, install the build dependencies.
a) Fedora / CentOS: ./waf installrpmdeps
b) Ubuntu: ./waf installdebdeps
2. As a non-root user, build the CloudStack packages.
a) Fedora / CentOS: ./waf rpm
b) Ubuntu: ./waf deb
3. As root, install the CloudStack packages.
You can choose which components to install on your system.
a) Fedora / CentOS: the installable RPMs are in artifacts/rpmbuild
install as root: rpm -ivh artifacts/rpmbuild/RPMS/{x86_64,noarch,i386}/*.rpm
b) Ubuntu: the installable DEBs are in artifacts/debbuild
install as root: dpkg -i artifacts/debbuild/*.deb
4. Configure and start the components you intend to run.
Consult the Installation Guide to find out how to
configure each component, and "Installation paths" for information
on where programs, initscripts and config files are installed.
=== I want to build and install directly from the source ===
This is the recommended way to run your CloudStack cloud if you
intend to modify the source, if you intend to port the CloudStack to
another distribution, or if you intend to run the CloudStack on a
distribution for which packages are not built.
1. As root, install the build dependencies.
See below for a list.
2. As non-root, configure the build.
See below to discover configuration options.
./waf configure
3. As non-root, build the CloudStack.
To learn more, see "Quick guide to developing, building and
installing from source" below.
./waf build
4. As root, install the runtime dependencies.
See below for a list.
5. As root, install the CloudStack
./waf install
6. Configure and start the components you intend to run.
Consult the Installation Guide to find out how to
configure each component, and "Installation paths" for information
on where to find programs, initscripts and config files mentioned
in the Installation Guide (paths may vary).
=== Dependencies of the CloudStack ===
- Build dependencies:
1. FIXME DEPENDENCIES LIST THEM HERE
- Runtime dependencies:
2. FIXME DEPENDENCIES LIST THEM HERE
---------------------------------------------------------------------
INSTALLATION PATHS: WHERE THE STACK IS INSTALLED ON YOUR SYSTEM
---------------------------------------------------------------------
The CloudStack build system installs files on a variety of paths, each
one of which is selectable when building from source.
- $PREFIX:
the default prefix where the entire stack is installed
defaults to /usr/local on source builds
defaults to /usr on package builds
- $SYSCONFDIR/cloud:
the prefix for CloudStack configuration files
defaults to $PREFIX/etc/cloud on source builds
defaults to /etc/cloud on package builds
- $SYSCONFDIR/init.d:
the prefix for CloudStack initscripts
defaults to $PREFIX/etc/init.d on source builds
defaults to /etc/init.d on package builds
- $BINDIR:
the CloudStack installs programs there
defaults to $PREFIX/bin on source builds
defaults to /usr/bin on package builds
- $LIBEXECDIR:
the CloudStack installs service runners there
defaults to $PREFIX/libexec on source builds
defaults to /usr/libexec on package builds (/usr/bin on Ubuntu)
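All of these derive from the options you pass at configure time; for
example, a source build that mimics the package layout starts from
  ./waf configure --prefix=/usr
(./waf configure --help lists whatever additional path options the build
exposes.)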
---------------------------------------------------------------------
UNINSTALLING THE CLOUDSTACK FROM YOUR SYSTEM
---------------------------------------------------------------------
- If you installed the CloudStack using packages, use your operating
system package manager to remove the CloudStack packages.
a) Fedora / CentOS: the installable RPMs are in artifacts/rpmbuild
as root: rpm -qa | grep ^cloud- | xargs rpm -e
b) Ubuntu: the installable DEBs are in artifacts/debbuild
aptitude purge '~ncloud'
- If you installed from a source tree:
./waf uninstall

52
README Normal file

@ -0,0 +1,52 @@
Hello, and thanks for downloading the Cloud.com CloudStack™! The
Cloud.com CloudStack™ is Open Source Software that allows
organizations to build Infrastructure as a Service (IaaS) clouds.
Working with server, storage, and networking equipment of your
choice, the CloudStack provides a turn-key software stack that
dramatically simplifies the process of deploying and managing a
cloud.
---------------------------------------------------------------------
HOW TO INSTALL THE CLOUDSTACK
---------------------------------------------------------------------
Please refer to the document INSTALL distributed with the source.
---------------------------------------------------------------------
HOW TO HACK ON THE CLOUDSTACK
---------------------------------------------------------------------
Please refer to the document HACKING distributed with the source.
---------------------------------------------------------------------
BE PART OF THE CLOUD.COM COMMUNITY!
---------------------------------------------------------------------
We are more than happy to have you ask us questions, hack our source
code, and send us your contributions.
* Our forums are available at http://cloud.com/community .
* If you would like to modify / extend / hack on the CloudStack source,
refer to the file HACKING for more information.
* If you find bugs, please log on to http://bugs.cloud.com/ and file
a report.
* If you have patches to send us, get in touch with us at info@cloud.com
or file them as attachments in our bug tracker above.
---------------------------------------------------------------------
Cloud.com's contact information is:
20400 Stevens Creek Blvd
Suite 390
Cupertino, CA 95014
Tel: +1 (888) 384-0962
This software is OSI certified Open Source Software. OSI Certified is a
certification mark of the Open Source Initiative.

10555
README.html Normal file

File diff suppressed because it is too large.

16
agent/.classpath Normal file

@ -0,0 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" path="src"/>
<classpathentry kind="src" path="test"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
<classpathentry combineaccessrules="false" kind="src" path="/utils"/>
<classpathentry kind="lib" path="/thirdparty/log4j-1.2.15.jar"/>
<classpathentry combineaccessrules="false" kind="src" path="/core"/>
<classpathentry kind="lib" path="/thirdparty/commons-httpclient-3.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/junit-4.8.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/xmlrpc-client-3.1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/xmlrpc-common-3.1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/libvirt-0.4.5.jar"/>
<classpathentry combineaccessrules="false" kind="src" path="/api"/>
<classpathentry kind="output" path="bin"/>
</classpath>

17
agent/.project Normal file

@ -0,0 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>agent</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.eclipse.jdt.core.javabuilder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.eclipse.jdt.core.javanature</nature>
</natures>
</projectDescription>

109
agent/bindir/cloud-setup-agent.in Executable file

@ -0,0 +1,109 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys, os, subprocess, errno, re, traceback
# ---- This snippet of code adds the sources path and the waf configured PYTHONDIR to the Python path ----
# ---- We do this so cloud_utils can be looked up in the following order:
# ---- 1) Sources directory
# ---- 2) waf configured PYTHONDIR
# ---- 3) System Python path
for pythonpath in (
"@PYTHONDIR@",
os.path.join(os.path.dirname(__file__),os.path.pardir,os.path.pardir,"python","lib"),
):
if os.path.isdir(pythonpath): sys.path.insert(0,pythonpath)
# ---- End snippet of code ----
import cloud_utils
from cloud_utils import stderr,CheckFailed,TaskFailed,backup_etc,restore_etc
from cloud_utils import setup_agent_config,stop_service,enable_service
from cloud_utils import exit as bail
from cloud_utils import all, any
#--------------- procedure starts here ------------
# FIXME for backup and restore: collect service state for all services so we can restore the system's runtime state back to what it was before
# possible exit states:
# a. system configuration needs administrator attention
# b. automatic reconfiguration failed
# c. process interrupted
# d. everything was configured properly (exit status 0)
brname = "@PACKAGE@br0"
servicename = "@PACKAGE@-agent"
configfile = "@AGENTSYSCONFDIR@/agent.properties"
backupdir = "@SHAREDSTATEDIR@/@AGENTPATH@/etcbackup"
#=================== the magic happens here ====================
stderr("Welcome to the Cloud Agent setup")
stderr("")
try:
# pre-flight checks for things that the administrator must fix
do_check_kvm = not ( "--no-kvm" in sys.argv[1:] )
try:
for f,n in cloud_utils.preflight_checks(
do_check_kvm=do_check_kvm
):
stderr(n)
f()
except CheckFailed,e:
stderr(str(e))
bail(cloud_utils.E_NEEDSMANUALINTERVENTION,
"Cloud Agent setup cannot continue until these issues have been addressed")
# system configuration tasks that our Cloud Agent setup performs
try:
tasks = cloud_utils.config_tasks(brname)
if all( [ t.done() for t in tasks ] ):
stderr("All configuration tasks have been performed already")
else:
backup_etc(backupdir)
try:
# run all tasks that have not been done
for t in [ n for n in tasks if not n.done() ]:
t.run()
except:
# oops, something wrong, restore system to earlier state and re-raise
stderr("A fatal issue has been detected -- restoring system configuration.\nPlease be patient; *do not* interrupt this process.")
restore_etc(backupdir)
for t in [ n for n in tasks if hasattr(n,"restore_state") ]:
t.restore_state()
raise
except (TaskFailed,CheckFailed),e:
# some configuration task or post-flight check failed, we exit right away
stderr(str(e))
bail(cloud_utils.E_SETUPFAILED,"Cloud Agent setup failed")
setup_agent_config(configfile)
stderr("Enabling and starting the Cloud Agent")
stop_service(servicename)
enable_service(servicename)
stderr("Cloud Agent restarted")
except KeyboardInterrupt,e:
# user interrupted, we exit right away
bail(cloud_utils.E_INTERRUPTED,"Cloud Agent setup interrupted")
except SystemExit,e:
# process above handled a failure then called bail(), which raises a SystemExit on CentOS
sys.exit(e.code)
except Exception,e:
# at this point, any exception has been dealt with cleanly by restoring system config from a backup
# we just inform the user that there was a problem
# and bail prematurely
stderr("Cloud Agent setup has experienced an unrecoverable error. Please report the following technical details to Cloud.com.")
traceback.print_exc()
bail(cloud_utils.E_UNHANDLEDEXCEPTION,"Cloud Agent setup ended prematurely")
stderr("")
stderr("Cloud Agent setup completed successfully")
# ========================= end program ========================


@ -0,0 +1,30 @@
# Sample configuration file for VMOPS agent
#resource= the Java class the agent loads to execute
resource=com.cloud.agent.resource.computing.LibvirtComputingResource
#workers= number of worker threads running in the agent
workers=5
#host= the IP address of the management server
host=localhost
#port= the port the management server listens on; the default is 8250
port=8250
#pod= the pod the agent belongs to
pod=default
#zone= the zone the agent belongs to
zone=default
#private.network.device= the private nic device
# if this is commented, it is autodetected on service startup
# private.network.device=cloudbr0
#public.network.device= the public nic device
# if this is commented, it is autodetected on service startup
# public.network.device=cloudbr0
#guid= a GUID to identify the agent


@ -0,0 +1,38 @@
#instance=AH
#private.macaddr.start=00:16:3e:77:01:01
#private.ipaddr.start=192.168.166.128
#instance=KM
#private.macaddr.start=00:16:3e:77:02:01
#private.ipaddr.start=192.168.167.128
#instance=KY
#private.macaddr.start=00:16:3e:77:03:01
#private.ipaddr.start=192.168.168.128
#instance=WC
#private.macaddr.start=00:16:3e:77:04:01
#private.ipaddr.start=192.168.169.128
#instance=CV
#private.macaddr.start=00:16:3e:77:05:01
#private.ipaddr.start=192.168.170.128
#instance=KS
#private.macaddr.start=00:16:3e:77:06:01
#private.ipaddr.start=192.168.171.128
#instance=ES
#private.macaddr.start=00:16:3e:77:07:01
#private.ipaddr.start=192.168.172.128
#instance=RC
#private.macaddr.start=00:16:3e:77:08:01
#private.ipaddr.start=192.168.173.128
#instance=AX
#private.macaddr.start=00:16:3e:77:09:01
#private.ipaddr.start=192.168.174.128
private.macaddr.start=@private.macaddr.start@
private.ipaddr.start=@private.ipaddr.start@


@ -0,0 +1,3 @@
# management server compile-time environment parameters
paths.pid=@PIDDIR@


@ -0,0 +1,75 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="false">
<!-- ================================= -->
<!-- Preserve messages in a local file -->
<!-- ================================= -->
<!-- A time/date based rolling appender -->
<appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
<param name="Append" value="true"/>
<param name="Threshold" value="DEBUG"/>
<rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
<param name="FileNamePattern" value="@AGENTLOG@.%d{yyyy-MM-dd}.gz"/>
<param name="ActiveFileName" value="@AGENTLOG@"/>
</rollingPolicy>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ISO8601} %-5p [%c{3}] (%t:%x) %m%n"/>
</layout>
</appender>
<!-- ============================== -->
<!-- Append messages to the console -->
<!-- ============================== -->
<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
<param name="Target" value="System.out"/>
<param name="Threshold" value="INFO"/>
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="%d{ISO8601}{GMT} %-5p [%c{3}] (%t:%x) %m%n"/>
</layout>
</appender>
<!-- ================ -->
<!-- Limit categories -->
<!-- ================ -->
<category name="com.cloud">
<priority value="DEBUG"/>
</category>
<category name="com.cloud.agent.metrics">
<priority value="INFO"/>
</category>
<category name="com.cloud.agent.resource.computing.ComputingResource$StorageMonitorTask">
<priority value="INFO"/>
</category>
<!-- Limit the org.apache category to INFO as its DEBUG is verbose -->
<category name="org.apache">
<priority value="INFO"/>
</category>
<category name="org">
<priority value="INFO"/>
</category>
<category name="net">
<priority value="INFO"/>
</category>
<!-- ======================= -->
<!-- Setup the Root category -->
<!-- ======================= -->
<root>
<level value="INFO"/>
<appender-ref ref="CONSOLE"/>
<appender-ref ref="FILE"/>
</root>
</log4j:configuration>


@ -0,0 +1,81 @@
#!/bin/bash
# chkconfig: 35 99 10
# description: Cloud Agent
# WARNING: if this script is changed, then all other initscripts MUST BE changed to match it as well
. /etc/rc.d/init.d/functions
whatami=cloud-agent
# set environment variables
SHORTNAME="$whatami"
PIDFILE=@PIDDIR@/"$whatami".pid
LOCKFILE=@LOCKDIR@/"$SHORTNAME"
LOGFILE=@AGENTLOG@
PROGNAME="Cloud Agent"
unset OPTIONS
[ -r @SYSCONFDIR@/sysconfig/"$SHORTNAME" ] && source @SYSCONFDIR@/sysconfig/"$SHORTNAME"
DAEMONIZE=@BINDIR@/@PACKAGE@-daemonize
PROG=@LIBEXECDIR@/agent-runner
start() {
echo -n $"Starting $PROGNAME: "
if hostname --fqdn >/dev/null 2>&1 ; then
daemon --check=$SHORTNAME --pidfile=${PIDFILE} "$DAEMONIZE" \
-n "$SHORTNAME" -p "$PIDFILE" -l "$LOGFILE" "$PROG" $OPTIONS
RETVAL=$?
echo
else
failure
echo
echo The host name does not resolve properly to an IP address. Cannot start "$PROGNAME". > /dev/stderr
RETVAL=9
fi
[ $RETVAL = 0 ] && touch ${LOCKFILE}
return $RETVAL
}
stop() {
echo -n $"Stopping $PROGNAME: "
killproc -p ${PIDFILE} $SHORTNAME # -d 10 $SHORTNAME
RETVAL=$?
echo
[ $RETVAL = 0 ] && rm -f ${LOCKFILE} ${PIDFILE}
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status -p ${PIDFILE} $SHORTNAME
RETVAL=$?
;;
restart)
stop
sleep 3
start
;;
condrestart)
if status -p ${PIDFILE} $SHORTNAME >&/dev/null; then
stop
sleep 3
start
fi
;;
*)
echo $"Usage: $whatami {start|stop|restart|condrestart|status|help}"
RETVAL=3
esac
exit $RETVAL


@ -0,0 +1,81 @@
#!/bin/bash
# chkconfig: 35 99 10
# description: Cloud Agent
# WARNING: if this script is changed, then all other initscripts MUST BE changed to match it as well
. /etc/rc.d/init.d/functions
whatami=cloud-agent
# set environment variables
SHORTNAME="$whatami"
PIDFILE=@PIDDIR@/"$whatami".pid
LOCKFILE=@LOCKDIR@/"$SHORTNAME"
LOGFILE=@AGENTLOG@
PROGNAME="Cloud Agent"
unset OPTIONS
[ -r @SYSCONFDIR@/sysconfig/"$SHORTNAME" ] && source @SYSCONFDIR@/sysconfig/"$SHORTNAME"
DAEMONIZE=@BINDIR@/@PACKAGE@-daemonize
PROG=@LIBEXECDIR@/agent-runner
start() {
echo -n $"Starting $PROGNAME: "
if hostname --fqdn >/dev/null 2>&1 ; then
daemon --check=$SHORTNAME --pidfile=${PIDFILE} "$DAEMONIZE" \
-n "$SHORTNAME" -p "$PIDFILE" -l "$LOGFILE" "$PROG" $OPTIONS
RETVAL=$?
echo
else
failure
echo
echo The host name does not resolve properly to an IP address. Cannot start "$PROGNAME". > /dev/stderr
RETVAL=9
fi
[ $RETVAL = 0 ] && touch ${LOCKFILE}
return $RETVAL
}
stop() {
echo -n $"Stopping $PROGNAME: "
killproc -p ${PIDFILE} $SHORTNAME # -d 10 $SHORTNAME
RETVAL=$?
echo
[ $RETVAL = 0 ] && rm -f ${LOCKFILE} ${PIDFILE}
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status -p ${PIDFILE} $SHORTNAME
RETVAL=$?
;;
restart)
stop
sleep 3
start
;;
condrestart)
if status -p ${PIDFILE} $SHORTNAME >&/dev/null; then
stop
sleep 3
start
fi
;;
*)
echo $"Usage: $whatami {start|stop|restart|condrestart|status|help}"
RETVAL=3
esac
exit $RETVAL


@ -0,0 +1,103 @@
#!/bin/bash
# chkconfig: 35 99 10
# description: Cloud Agent
# WARNING: if this script is changed, then all other initscripts MUST BE changed to match it as well
. /lib/lsb/init-functions
. /etc/default/rcS
whatami=cloud-agent
# set environment variables
SHORTNAME="$whatami"
PIDFILE=@PIDDIR@/"$whatami".pid
LOCKFILE=@LOCKDIR@/"$SHORTNAME"
LOGFILE=@AGENTLOG@
PROGNAME="Cloud Agent"
unset OPTIONS
[ -r @SYSCONFDIR@/default/"$SHORTNAME" ] && source @SYSCONFDIR@/default/"$SHORTNAME"
DAEMONIZE=@BINDIR@/@PACKAGE@-daemonize
PROG=@LIBEXECDIR@/agent-runner
start() {
log_daemon_msg $"Starting $PROGNAME" "$SHORTNAME"
if [ -s "$PIDFILE" ] && kill -0 $(cat "$PIDFILE") >/dev/null 2>&1; then
log_progress_msg "apparently already running"
log_end_msg 0
exit 0
fi
if hostname --fqdn >/dev/null 2>&1 ; then
true
else
log_failure_msg "The host name does not resolve properly to an IP address. Cannot start $PROGNAME"
log_end_msg 1
exit 1
fi
# Workaround for ubuntu bug:577264, make sure cgconfig/cgred start before libvirt
service cgred stop
service cgconfig stop
service cgconfig start
service cgred start
service libvirt-bin restart
start-stop-daemon --start --quiet \
--pidfile "$PIDFILE" \
--exec "$DAEMONIZE" -- -n "$SHORTNAME" -p "$PIDFILE" -l "$LOGFILE" "$PROG" $OPTIONS
RETVAL=$?
# proceed only if start-stop-daemon itself reported success
if [ $RETVAL -eq 0 ]
then
rc=0
sleep 1
if ! kill -0 $(cat "$PIDFILE") >/dev/null 2>&1; then
log_failure_msg "$PROG failed to start"
rc=1
fi
else
rc=1
fi
if [ $rc -eq 0 ]; then
log_end_msg 0
else
log_end_msg 1
rm -f "$PIDFILE"
fi
}
stop() {
echo -n $"Stopping $PROGNAME" "$SHORTNAME"
start-stop-daemon --stop --quiet --oknodo --pidfile "$PIDFILE"
log_end_msg $?
rm -f "$PIDFILE"
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status_of_proc -p "$PIDFILE" "$PROG" "$SHORTNAME"
RETVAL=$?
;;
restart)
stop
sleep 3
start
;;
*)
echo $"Usage: $whatami {start|stop|restart|status|help}"
RETVAL=3
esac
exit $RETVAL


@ -0,0 +1,15 @@
description "Stop CloudStack VMs on shutdown"
author "Manuel Amador (Rudd-O) <manuel@vmops.com>"
start on stopping libvirt-bin
task
script
curr_runlevel=`runlevel | tail -c 2`
if [ "$curr_runlevel" = "6" -o "$curr_runlevel" = "0" ] ; then
for a in `virsh list | awk ' /^ +[0-9]+ [vri]-([0-9]+?)-/ { print $2 } '` ; do
echo Destroying CloudStack VM $a
virsh destroy $a
done
fi
end script

230
agent/doc/README-iscsi.txt Normal file

@ -0,0 +1,230 @@
0. Contents
===========
../sbin/vnetd: userspace daemon that runs the vnet
../module/2.6.18/vnet_module.ko: kernel module (alternative to vnetd)
../vnetd.sh: init script for vnet
../vn: helper script to create vnets
../id_rsa: the private key used to ssh to the routing domain
createvm.sh: clones a vm image from a given template
mountvm.sh: script to mount a remote (nfs) image directory
runvm.sh: script to run a vm
rundomr.sh: script to run a routing domain (domR) for a given vnet
listvmdisk.sh: lists disks belonging to a vm
createtmplt.sh: installs a template
listvmdisksize.sh: lists actual and total usage per disk
../ipassoc.sh: associate / de-associate a public ip with an instance
../firewall.sh: add or remove firewall rules
stopvm.sh: stop the vm and remove the entry from xend
../delvm.sh: delete the vm image from zfs
../listclones.sh: list all filesystems that are clones under a parent fs
1. Install
==========
On the hosts that run the customer vms as well as the domR
a) Copy vn to /usr/sbin on dom0
b) Copy module/2.6.18/vnet_module.ko to /lib/modules/`uname -r`/kernel
c) Run repos/vmdev/xen/xen-3.3.0/tools/vnet/examples/vnet-insert
Ensure that all iptables rules are flushed from dom0 before starting any domains
(use iptables -F)
d) Ensure that the ISCSI initiator is installed (yum install iscsi*)
2. Creating /deleting a vm image on Solaris ZFS
================
The template image consists of a filesystem to hold kernel and ramdisk (linux)
or the pygrub file (linux) or nothing (windows). Contained within the template
filesystem (but not visible using 'ls') is the root volume.
Use the createvm script to clone a template snapshot. For example:
./createvm.sh -t tank/volumes/demo/template/public/os/centos52-x86_64 -d /tank/volumes/demo/template/public/datadisk/ext3-8g -i /tank/demo/vm/chiradeep/i0007 -u /tank/demo/vm/chiradeep
-t: the template fs snapshot
-i: the target clone fs
-u: the user's fs under which the clone will be created. If the user fs does not exist, it will be created.
-d: the disk fs to be cloned under the image dir specified by -i
Once this is created, use the listvmdisk.sh to list the disks:
listvmdisk.sh -i tank/demo/vm/chiradeep/i0007 -r (for the root disk)
listvmdisk.sh -i tank/demo/vm/chiradeep/i0007 -w (for the data disk)
listvmdisk.sh -i tank/demo/vm/chiradeep/i0007 -d <n> (for the data disks)
This outputs the local target name (zfs name) and the ISCSI target name
separated by a comma:
tank/demo/vm/chiradeep/i0007/datadisk1-ext3-8g,iqn.1986-03.com.sun:02:0b6c18c9-7a13-e7c9-ce78-91af20023bb3
The local target name can be used to list total (-t) and actual (-a) disk usage:
./listvmdisksize.sh -d tank/demo/vm/chiradeep/i0007/datadisk1-ext3-8g -t
8589934592
Use the delvm.sh script to delete an instance. For example:
./delvm.sh -u tank/demo/vm/chiradeep -i tank/demo/vm/chiradeep/i0007
-i: the instance fs to delete
-u: the user fs to delete
Either -i or -u or both can be supplied.
Use the listclones.sh script to list all clones under a parent fs:
./listclones.sh -p tank/demo/vm
3. Mounting an image
==================
The image directory resides on the NFS server; you can mount it with the
mountvm.sh script. For example:
./mountvm.sh -m -h 192.168.1.248 -t iqn.1986-03.com.sun:02:bf65dcfd-42b5-6e0e-e08e-99ae311b39ba -l /images/chiradeep/i0005 -n centos52 -r tank/demo/vm/chiradeep/i0005 -1 iqn.1986-03.com.sun:02:6d505eee-bf64-6729-e362-bab6c148bbc8
-h : the nfs/iscsi server host
-l : the local directory
-r : the remote directory
-n : the vm name (the same name used in runvm or rundomr)
-r : the iscsi target name for the root volume (see listvmdisk above)
-w : the iscsi target name for the swap volume (see listvmdisk above)
-1 : the iscsi target name for the datadisk volume (see listvmdisk above)
[-m | -u] : mount or unmount
4. Routing Domain (domR)
=======================
The routing domain for a customer needs to be started before any other VM in that vnet can start. To start a routing domain, for example:
./rundomr.sh -v 0008 -m 128 -i 192.168.1.33 -g 65.37.141.1 -a aa:00:00:05:00:33 -l "domR-vnet0008" -A 06:01:02:03:04:05 -p 02:01:02:03:04:05 -n 255.255.255.0 -I 65.37.141.33 -N 255.255.255.128 -b eth1 -d "dns1=192.168.1.254 dns2=207.69.188.186 domain=vmops.org" /images/chiradeep/router
-v : the 16-bit vnet-id specified as 4 hex characters
-m : the ram size for the domain in megabytes (128 is usually sufficient)
-a : the mac address of the eth0 of the domR
-A : the mac address of the eth1 of the domR
-p : the mac address of the eth2 of the domR
-i : the eth1 ip address in the datacenter LAN (e.g., 192.168.1.33)
-n : the netmask of eth1
-I : the eth2 ip address in the public LAN (e.g., 65.37.141.33)
-N : the netmask of eth2 (e.g., 255.255.255.128)
-b : the Xen bridge (typically eth1) that eth2 has to be enslaved to (public LAN)
-g : the default gateway in the public subnet (e.g., 65.37.141.1)
-l : the vm name for the domR
-d : nameserver information in the format shown in the example
Note: -d option requires template tank/demo/template/public/t100001@12_16_2008
or later
5. Starting a vm
================
The VM files are assumed to exist in a single image directory with the following conventions:
a) The kernel file begins with vmlinuz (e.g. vmlinuz-2.6.18.8-xen) (Linux)
b) The root volume begins with vmi-root (e.g.,vmi-root-centos52-x86_64-pv)
c) The data partition begins with datadisk1 (e.g., datadisk1-ext3-8g)
d) The swap partition contains "swap" (e.g., fedora-swap) (Linux only)
If booting Linux using pygrub, only the root and data files are needed. An
empty file called 'pygrub' must be placed in the image directory
To run the vm, see the following example
/runvm.sh -v 0005 -i 10.1.1.56 -m 256 -g 192.168.1.33 -a 02:00:00:05:00:56 -l "centos5-2" -c 11 -n 2 -u 66 /images/chiradeep/i0007
-v : the 16-bit vnet-id specified as 4 hex characters
-i : this is the host ip address in the 10.x.y.z subnet (cannot be 10.1.1.1)
-m : the ram size for the domain in megabytes
-g : the eth1 ip address of the routing domain
-a : the mac address of the eth0 of the vm
-l : the vm name. This is also the hostname; ensure it is a legal hostname
-c : the VNC console id
-w : the VNC password. If not specified, defaults to 'password'
-n : the number of VCPUs (equal to the number of cores) to allocate (default: all)
-u : the percentage of one VCPU to allocate (integer) (default no cap)
<image dir>: the absolute path of the directory holding the VM files/volumes
The vncviewer can connect to the eth0 ip of dom0 and the specified vnc console number (e.g., 192.168.1.125:11).
The 'n' and 'u' parameters depend on the physical CPU of the host and the
number of compute units requested. For example, let's say 1 compute unit = 1 GHz
and the physical CPU is a quad-core CPU running at 3.0 GHz. To request 2 cores
running 1 compute unit each, n = 2 and u = 2 x (1/3) x 100 = 66 (which matches
the -u 66 in the runvm.sh example above).
6. Associate a public Ip with a domR (source NAT)
===========================================
The example below shows how to associate the public ip 65.37.141.33 with the
routing domain. This has to be run on the dom0 of the host hosting the
routing domain.
ipassoc.sh -A -r domR-vnet0007 -i 192.168.1.32 -l 65.37.141.33 -a 06:01:02:03:06:05
-A|-D: create or delete an association
-r: the name (label) of the routing domain
-i: the eth1 ip of the routing domain
-a: the mac address of eth2 in the routing domain (not required for -D)
-l: the public ip to be used for source NAT
7. Firewall rules
=================
Each instance can have firewall rules associated with it to allow
some ports through. By default, when created, an instance has all ports and
protocols blocked. In the following example, the 10.1.1.155 instance gets ssh
traffic and icmp pings opened up:
firewall.sh -A -i 192.168.1.133 -P tcp -p 22 -r 10.1.1.155 -l 65.37.141.33 -d 22
firewall.sh -A -i 192.168.1.133 -P icmp -t echo-request -r 10.1.1.155 -l 65.37.141.33
-A|-D: add or delete a rule
-i: the eth1 ip of the routing domain
-r: the local eth0 ip of the target instance
-l: the public ip
-P: the protocol (tcp, udp, icmp)
-t: (for icmp) the icmp type
-p: (for tcp and udp) the port (port range in the form of a:b)
-d: (for tcp and udp) the target port (port range in the form of a:b)
8. Stopping and restarting a VM
===============================
You can use 'xm reboot vmname' to reboot the VM.
To stop it (and delete it from Xend's internal database), use
stopvm.sh -l <vmname>
This will not remove the vnet, however.
The stopvm script will NOT attempt to unmount the root and data disks either.
To explicitly unmount the root disk data disks from the NFS server, run
this on dom0:
mountvm.sh -u -l /images/u00000002/i0003
-u: (no arguments)
-l: the local directory on the compute server
9. Vnet cleanup
===============
When you kill the vnet task, all vnif* interfaces will disappear but the
bridges will linger.
You can use vnetcleanup.sh to clean up the vnet
vnetcleanup.sh -a will clean up all vnets
vnetcleanup.sh -v 0005 will only clean up vnet0005.
10. VM Image Cleanup
===================
On ZFS, run delvm.sh, for example:
./delvm.sh -u tank/demo/vm/u00000003 -i tank/demo/vm/u00000003/i0001
-u: the user fs (optional)
-i: the instance fs (optional)
11. Template installation
=========================
Template installation involves copying the image file of the root disk to an
iSCSI volume. For example:
createtmplt.sh -t rpool/volumes/demo/template/public/os/ubuntu8 -f /rpool/volumes/demo/template/public/download/ubuntu8/ubuntu8.0.img -n ubuntu8 -s 12G
-t: the filesystem (created if non-existent) where the volume will be mounted
-f: the absolute path to the file containing the root disk image
-n: the name of the template. The created volume will be named vmi-root-$name
-s: the size in gigabytes for the volume
-h: specify if this is an HVM image
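For an HVM image, the same command is used with -h added, for example
(hypothetical template name and paths):
createtmplt.sh -t rpool/volumes/demo/template/public/os/hvmguest -f /rpool/volumes/demo/template/public/download/hvmguest/hvmguest.img -n hvmguest -s 12G -h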
12. Mapping iSCSI target names to VM names
==========================================
The mapiscsi.sh script maps the iSCSI target names that the routing host or
compute host has logged in to, to VM names:
[root@r-1-1-1 iscsi]# ./mapiscsi.sh
iqn.1986-03.com.sun:02:ef4942ec-9f7e-4d71-e994-bb670867053e r-870-TEST-0186-root
iqn.1986-03.com.sun:02:599f5cc5-2f90-c1c3-9c5e-fef252345e64 r-870-TEST-0186-swap
iqn.1986-03.com.sun:02:0e893b01-fa32-682e-976d-d15781cf1a44 r-872-TEST-0187-root
iqn.1986-03.com.sun:02:21225d22-479c-4a35-dca0-ad56e60aa6f4 r-872-TEST-0187-swap
iqn.1986-03.com.sun:02:55b1a6d4-d202-e565-ffe1-ee63e4a48210 r-875-TEST-0188-root
iqn.1986-03.com.sun:02:4fac467c-7b63-6ffb-c207-aa35ccecfcd5 r-875-TEST-0188-swap
If no VM name can be found, the second field is blank
13. OpenVZ patch workarounds
============================
The OpenVZ patch eliminates a kernel oops related to bridge reconfiguration.
However, this requires an extra tickle to the bridge to make it actually send
packets to the member port. The member port needs to be taken down (ifconfig
down) and brought back up (ifconfig up).
This is done in
a) rundomr.sh -- on creation of vnet bridge, the vnif is taken down and up
b) runvm.sh -- ditto
c) /etc/xen/qemu-ifup -- the interface (tapX.0) is taken down and then up
after the interface is added to the bridge.
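The tickle itself amounts to something like the following (sketch only; the
actual interface name depends on the vnet/vif in question, e.g. a tapX.0
device handled by qemu-ifup):
ifconfig tap5.0 down
ifconfig tap5.0 up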

194
agent/doc/README.txt Normal file
View File

@ -0,0 +1,194 @@
0. Contents
===========
sbin/vnetd: userspace daemon that runs the vnet
module/2.6.18/vnet_module.ko: kernel module (alternative to vnetd)
vnetd.sh: init script for vnet
vn: helper script to create vnets
id_rsa: the private key used to ssh to the routing domain
createvm.sh: clones a vm image from a given template
mountvm.sh: script to mount a remote (nfs) image directory
runvm.sh: script to run a vm
rundomr.sh: script to run a routing domain (domR) for a given vnet
ipassoc.sh: associate / de-associate a public ip with an instance
firewall.sh: add or remove firewall rules
stopvm.sh: stop the vm and remove the entry from xend
delvm.sh: delete the vm image from zfs
loadbalancer.sh: configure the loadbalancer
listclones.sh: list all filesystems that are clones under a parent fs
1. Install
==========
On the hosts that run the customer VMs as well as the domR:
a) Copy vn to /usr/sbin on dom0
Either (vnetd):
1) Copy sbin/vnetd to /usr/sbin on dom0
2) Copy vnetd.sh to /etc/init.d/vnetd on dom0
3) run chkconfig vnetd on
OR
1) Copy module/2.6.18/vnet_module.ko to /lib/modules/`uname -r`/kernel
2) Run repos/vmdev/xen/xen-3.3.0/tools/vnet/examples/vnet-insert
Ensure that all iptables rules are flushed from dom0 before starting any domains
(use iptables -F)
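Put together, the vnetd variant of the install looks roughly like this when
run on dom0 (sketch; source paths as listed above):
cp vn /usr/sbin/
cp sbin/vnetd /usr/sbin/
cp vnetd.sh /etc/init.d/vnetd
chkconfig vnetd on
iptables -F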
2. Creating/deleting a vm image on Solaris ZFS
==============================================
Use the createvm script to clone a template snapshot. For example:
createvm.sh -t tank/template/public/t100001@12_3_2008 -i tank/demo/vm/u00000002/i0001 -u tank/demo/vm/u00000002 -d /tank/demo/template/public/datadisk/ext3-8g
-t: the template fs snapshot
-i: the target clone fs
-u: the user's fs under which the clone will be created. If the user fs does not exist, it will be created.
-d: the disk fs to be cloned under the image dir specified by -i
Once this is created, use the listvmdisk.sh to list the disks:
listvmdisk.sh -i tank/demo/vm/u00000002/i0001 -r (for the root disk)
listvmdisk.sh -i tank/demo/vm/u00000002/i0001 -d <n> (for the data disks)
Use the delvm.sh script to delete an instance. For example:
./delvm.sh -u tank/demo/vm/u00000003 -i tank/demo/vm/u00000003/i0001
-i: the instance fs to delete
-u: the user fs to delete
Either -i or -u or both can be supplied.
Use the listclones.sh script to list all clones under a parent fs:
./listclones.sh -p tank/demo/vm
3. Mounting an image
==================
If the image directory resides on the NFS server, you can mount it with the
mountvm.sh script. For example:
./mountvm.sh -h sol10-1.lab.vmops.com -l /images/u00000002/i0001 -r /tank/vm/demo/u00000002/i0001 -m
-h : the nfs server host
-l : the local directory
-r : the remote directory
[-m | -u] : mount or unmount
4. Routing Domain (domR)
=======================
The routing domain for a customer needs to be started before any other VM in that vnet can start. To start a routing domain, for example:
./rundomr.sh -v 0008 -m 128 -i 192.168.1.33 -g 65.37.141.1 -a aa:00:00:05:00:33 -l "domR-vnet0008" -A 06:01:02:03:04:05 -p 02:01:02:03:04:05 -n 255.255.255.0 -I 65.37.141.33 -N 255.255.255.128 -b eth1 -d "dns1=192.168.1.254 dns2=207.69.188.186 domain=vmops.org" /images/templates/t100001
-v : this is the 16-bit vnet-id specified in 4 hex characters
-m : the ram size for the domain in megabytes (128 is usually sufficient)
-a : the mac address of the eth0 of the domR
-A : the mac address of the eth1 of the domR
-p : the mac address of the eth2 of the domR
-i : the eth1 ip address in the datacenter LAN (e.g., 192.168.1.33)
-n : the netmask of eth1
-I : the eth2 ip address in the public LAN (e.g., 65.37.141.33)
-N : the netmask of eth2 (e.g., 65.37.141.128)
-b : the Xen bridge (typically eth1) that eth2 has to be enslaved to (public LAN)
-g : the default gateway in the public subnet (e.g., 65.37.141.1)
-l : the vm name for the domR
-d : nameserver information in the format shown in the example
Note: -d option requires template tank/demo/template/public/t100001@12_16_2008
or later
5. Starting a vm
================
The VM files are assumed to exist in a single image directory with the following conventions:
a) The kernel file begins with vmlinuz (e.g. vmlinuz-2.6.18.8-xen) (Linux)
b) The root filesystem begins with vmi-root (e.g., vmi-root-centos.5-2.64.img)
c) The data partition begins with vmi-data1 (e.g., vmi-data1.img)
d) The swap partition ends with ".swap" (e.g., centos64.swap) (Linux only)
If booting Linux using pygrub, only the root and data files are needed. An
empty file called 'pygrub' must be placed in the image directory.
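An image directory for a pygrub-booted Linux VM might therefore look like
this (file names are illustrative, following the conventions above):
ls /images/u00000002/i0003
pygrub  vmi-root-centos.5-2.64.img  vmi-data1.img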
To run the vm, see the following example
/runvm.sh -v 0005 -i 10.1.1.122 -m 256 -g 192.168.1.108 -a 02:00:00:05:00:22 -l "centos.5-2.64" -c 11 -n 2 -u 66 /images/u00000002/i0003
-v : this is the 16-bit vnet-id specified in 4 hex characters
-i : this is the host ip address in the 10.x.y.z subnet (cannot be 10.1.1.1)
-m : the ram size for the domain in megabytes
-g : the eth1 ip address of the routing domain
-a : the mac address of the eth0 of the vm
-l : the vm name. This is also the hostname; ensure it is a legal hostname
-c : the VNC console id
-w : the VNC password. If not specified, defaults to 'password'
-n : the number of VCPUs (equal to the number of cores) to allocate (default all)
-u : the percentage of one VCPU to allocate (integer) (default no cap)
<image dir>: the absolute path of the directory holding the VM files
The vncviewer can connect to the eth0 ip of dom0 and the specified vnc console number (e.g., 192.168.1.125:11).
The 'n' and 'u' parameters depend on the physical CPU of the host and the
number of compute units requested. For example, let's say 1 compute unit = 1 GHz
and the physical CPU is a quad-core CPU running at 3.0 GHz. To request 2 cores
running 1 compute unit each, n = 2 and u = 2 x (1/3) x 100.
6. Associate a public IP with a domR (source NAT)
=================================================
The example below shows how to associate the public IP 65.37.141.33 with the
routing domain. This has to be run on the dom0 of the host hosting the
routing domain.
ipassoc.sh -A -r domR-vnet0007 -i 192.168.1.32 -l 65.37.141.33 -a 06:01:02:03:06:05
-A|-D: create or delete an association
-r: the name (label) of the routing domain
-i: the eth1 ip of the routing domain
-a: the mac address of eth2 in the routing domain (not required for -D)
-l: the public ip to be used for source NAT
7. Firewall rules
=================
Each instance can have firewall rules associated with it to allow
some ports through. By default, when created, an instance has all ports and
protocols blocked. In the following example, the 10.1.1.155 instance gets ssh
traffic and icmp pings opened up:
firewall.sh -A -i 192.168.1.133 -P tcp -p 22 -r 10.1.1.155 -l 65.37.141.33 -d 22
firewall.sh -A -i 192.168.1.133 -P icmp -t echo-request -r 10.1.1.155 -l 65.37.141.33
-A|-D: add or delete a rule
-i: the eth1 ip of the routing domain
-r: the local eth0 ip of the target instance
-l: the public ip
-P: the protocol (tcp, udp, icmp)
-t: (for icmp) the icmp type
-p: (for tcp and udp) the port (port range in the form of a:b)
-d: (for tcp and udp) the target port (port range in the form of a:b)
7.5 Loadbalancer rules
=====================
Loadbalancing is provided by HAProxy running within the routing domain. Because the rules are large and consist of many components, it is expected that the entire HAProxy configuration file is provided to the script. This is copied over to the routing domain and the haproxy process is restarted.
loadbalancer.sh -A -i 192.168.1.35 -l 65.37.141.30 -d 80 -f /tmp/haproxy.cfg
New haproxy instance successfully loaded, stopping previous one.
-A|-D: add or delete a rule
-i: the eth1 ip of the routing domain
-l: the public ip
-d: the target port
-f: the haproxy configuration file
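A minimal haproxy.cfg for the example above might look like the following
(purely illustrative; the server names and the second instance address are
made up):
listen http-lb 65.37.141.30:80
    mode http
    balance roundrobin
    server vm1 10.1.1.155:80 check
    server vm2 10.1.1.156:80 check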
8. Stopping and restarting a VM
===============================
You can use 'xm reboot vmname' to reboot the VM.
To stop it (and delete it from Xend's internal database), use
stopvm.sh -l <vmname>
This will not remove the vnet however.
The stopvm script will attempt to unmount the root and data disks as well.
To explicitly unmount the root and data disks from the NFS server, run
this on dom0:
mountvm.sh -u -l /images/u00000002/i0003
-u: (no arguments)
-l: the local directory on the compute server
9. Vnet cleanup
===============
When you kill the vnet task, all vnif* interfaces will disappear but the
bridges will linger.
You can use vnetcleanup.sh to clean up vnets:
vnetcleanup.sh -a will clean up all vnets
vnetcleanup.sh -v 0005 will only clean up vnet0005.
10. VM Image Cleanup
===================
On ZFS, run delvm.sh, for example:
./delvm.sh -u tank/demo/vm/u00000003 -i tank/demo/vm/u00000003/i0001
-u: the user fs (optional)
-i: the instance fs (optional)
11. TODO
========
5. Automatic install instead of manual steps of (1)

71
agent/libexec/agent-runner.in Executable file
View File

@ -0,0 +1,71 @@
#!/usr/bin/env bash
#run.sh runs the agent client.
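# Note: this file is a template; tokens of the form @NAME@ (e.g. @SYSTEMJARS@,
# @AGENTLIBDIR@, @AGENTSYSCONFDIR@) are presumably substituted with concrete
# values when the file is installed by the build system.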
cd `dirname "$0"`
SYSTEMJARS="@SYSTEMJARS@"
SCP=$(build-classpath $SYSTEMJARS) ; if [ $? != 0 ] ; then SCP="@SYSTEMCLASSPATH@" ; fi
DCP="@DEPSCLASSPATH@"
ACP="@AGENTCLASSPATH@"
export CLASSPATH=$SCP:$DCP:$ACP:@AGENTSYSCONFDIR@
for jarfile in "@PREMIUMJAVADIR@"/* ; do
if [ ! -e "$jarfile" ] ; then continue ; fi
CLASSPATH=$jarfile:$CLASSPATH
done
for plugin in "@PLUGINJAVADIR@"/* ; do
if [ ! -e "$plugin" ] ; then continue ; fi
CLASSPATH=$plugin:$CLASSPATH
done
export CLASSPATH
set -e
cd "@AGENTLIBDIR@"
echo Current directory is "$PWD"
echo CLASSPATH to run the agent: "$CLASSPATH"
export PATH=/sbin:/usr/sbin:"$PATH"
SERVICEARGS=
for x in private public ; do
# use the device configured in agent.properties if present, otherwise auto-discover below
configuration=`grep "^$x.network.device" "@AGENTSYSCONFDIR@"/agent.properties | cut -d '=' -f 2- || true`
if [ -n "$configuration" ] ; then
echo "Using manually-configured network device $configuration"
else
defaultroute=`ip route | grep ^default | cut -d ' ' -f 5`
test -n "$defaultroute"
echo "Using auto-discovered network device $defaultroute which is the default route"
SERVICEARGS="$SERVICEARGS $x.network.device="$defaultroute
fi
done
function termagent() {
if [ "$agentpid" != "" ] ; then
echo Killing VMOps Agent "(PID $agentpid)" with SIGTERM >&2
kill -TERM $agentpid
echo Waiting for agent to exit >&2
wait $agentpid
ex=$?
echo Agent exited with return code $ex >&2
else
echo Agent PID is unknown >&2
fi
}
trap termagent TERM
while true ; do
java -Xms128M -Xmx384M -cp "$CLASSPATH" "$@" com.cloud.agent.AgentShell $SERVICEARGS &
agentpid=$!
echo "Agent started. PID: $!" >&2
wait $agentpid
ex=$?
if [ $ex -gt 128 ]; then
echo "wait on agent process interrupted by SIGTERM" >&2
exit $ex
fi
echo "Agent exited with return code $ex" >&2
if [ $ex -eq 0 ] || [ $ex -eq 1 ] || [ $ex -eq 66 ] || [ $ex -gt 128 ]; then
echo "Exiting..." > /dev/stderr
exit $ex
fi
echo "Restarting agent..." > /dev/stderr
sleep 1
done

BIN
agent/patch/patch.tgz Normal file

Binary file not shown.

6
agent/patch/redopatch.sh Executable file
View File

@ -0,0 +1,6 @@
#!/bin/bash -x
d=`dirname "$0"`
cd "$d"
tar c --owner root --group root -vzf patch.tgz root etc

13
agent/scripts/agent.sh Executable file
View File

@ -0,0 +1,13 @@
#!/bin/bash
#run.sh runs the agent client.
# set -x
while true
do
./run.sh "$@"
ex=$?
if [ $ex -eq 0 ] || [ $ex -eq 1 ] || [ $ex -eq 66 ] || [ $ex -gt 128 ]; then
exit $ex
fi
done

3
agent/scripts/run.sh Executable file
View File

@ -0,0 +1,3 @@
#!/usr/bin/env bash
#run.sh runs the agent client.
java $1 -Xms128M -Xmx384M -cp cglib-nodep-2.2.jar:xenserver-5.5.0-1.jar:trilead-ssh2-build213.jar:cloud-api.jar:cloud-core-extras.jar:cloud-utils.jar:cloud-agent.jar:cloud-console-proxy.jar:cloud-console-common.jar:freemarker.jar:log4j-1.2.15.jar:ws-commons-util-1.0.2.jar:xmlrpc-client-3.1.3.jar:cloud-core.jar:xmlrpc-common-3.1.3.jar:javaee-api-5.0-1.jar:gson-1.3.jar:commons-httpclient-3.1.jar:commons-logging-1.1.1.jar:commons-codec-1.4.jar:commons-collections-3.2.1.jar:commons-pool-1.4.jar:apache-log4j-extras-1.0.jar:libvirt-0.4.5.jar:jna.jar:.:/etc/cloud:./conf com.cloud.agent.AgentShell

View File

@ -0,0 +1,801 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;
import javax.naming.ConfigurationException;
import org.apache.log4j.Logger;
import com.cloud.agent.api.AgentControlAnswer;
import com.cloud.agent.api.AgentControlCommand;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.Command;
import com.cloud.agent.api.CronCommand;
import com.cloud.agent.api.ModifySshKeysCommand;
import com.cloud.agent.api.PingCommand;
import com.cloud.agent.api.ShutdownCommand;
import com.cloud.agent.api.StartupAnswer;
import com.cloud.agent.api.StartupCommand;
import com.cloud.agent.api.UpgradeAnswer;
import com.cloud.agent.api.UpgradeCommand;
import com.cloud.agent.transport.Request;
import com.cloud.agent.transport.Response;
import com.cloud.agent.transport.UpgradeResponse;
import com.cloud.exception.AgentControlChannelException;
import com.cloud.resource.ServerResource;
import com.cloud.utils.PropertiesUtil;
import com.cloud.utils.backoff.BackoffAlgorithm;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.nio.HandlerFactory;
import com.cloud.utils.nio.Link;
import com.cloud.utils.nio.NioClient;
import com.cloud.utils.nio.NioConnection;
import com.cloud.utils.nio.Task;
import com.cloud.utils.script.OutputInterpreter;
import com.cloud.utils.script.Script;
/**
* @config
* {@table
* || Param Name | Description | Values | Default ||
* || type | Type of server | Storage / Computing / Routing | No Default ||
* || workers | # of workers to process the requests | int | 1 ||
* || host | host to connect to | ip address | localhost ||
* || port | port to connect to | port number | 8250 ||
* || instance | Used to allow multiple agents running on the same host | String | none ||
* }
*
* For more configuration options, see the individual types.
*
**/
public class Agent implements HandlerFactory, IAgentControl {
private static final Logger s_logger = Logger.getLogger(Agent.class.getName());
public enum ExitStatus {
Normal(0), // Normal status = 0.
Upgrade(65), // Exiting for upgrade.
Configuration(66), // Exiting due to configuration problems.
Error(67); // Exiting because of error.
int value;
ExitStatus(final int value) {
this.value = value;
}
public int value() {
return value;
}
}
List<IAgentControlListener> _controlListeners = new ArrayList<IAgentControlListener>();
IAgentShell _shell;
NioConnection _connection;
ServerResource _resource;
Link _link;
Long _id;
Timer _timer = new Timer("Agent Timer");
List<WatchTask> _watchList = new ArrayList<WatchTask>();
long _sequence = 0;
long _lastPingResponseTime = 0;
long _pingInterval = 0;
AtomicInteger _inProgress = new AtomicInteger();
StartupTask _startup = null;
// for simulator use only
public Agent(IAgentShell shell) {
_shell = shell;
_link = null;
_connection = new NioClient(
"Agent",
_shell.getHost(),
_shell.getPort(),
_shell.getWorkers(),
this);
Runtime.getRuntime().addShutdownHook(new ShutdownThread(this));
}
public Agent(IAgentShell shell, int localAgentId, ServerResource resource) throws ConfigurationException {
_shell = shell;
_resource = resource;
_link = null;
resource.setAgentControl(this);
String value = _shell.getPersistentProperty(getResourceName(), "id");
_id = value != null ? Long.parseLong(value) : null;
s_logger.info("id is " + ((_id != null) ? _id : ""));
final Map<String, Object> params = PropertiesUtil.toMap(_shell.getProperties());
// merge with properties from command line to let resource access command line parameters
for(Map.Entry<String, Object> cmdLineProp : _shell.getCmdLineProperties().entrySet()) {
params.put(cmdLineProp.getKey(), cmdLineProp.getValue());
}
if (!_resource.configure(getResourceName(), params)) {
throw new ConfigurationException("Unable to configure " + _resource.getName());
}
_connection = new NioClient(
"Agent",
_shell.getHost(),
_shell.getPort(),
_shell.getWorkers(),
this);
// ((NioClient)_connection).setBindAddress(_shell.getPrivateIp());
s_logger.debug("Adding shutdown hook");
Runtime.getRuntime().addShutdownHook(new ShutdownThread(this));
s_logger.info("Agent [id = " + (_id != null ? _id : "new") + " : type = " + getResourceName()
+ " : zone = " + _shell.getZone() + " : pod = " + _shell.getPod()
+ " : workers = " + _shell.getWorkers() + " : host = " + _shell.getHost()
+ " : port = " + _shell.getPort());
}
public String getVersion() {
return _shell.getVersion();
}
public String getResourceGuid() {
String guid = _shell.getGuid();
return guid + "-" + getResourceName();
}
public String getZone() {
return _shell.getZone();
}
public String getPod() {
return _shell.getPod();
}
protected void setLink(final Link link) {
_link = link;
}
public ServerResource getResource() {
return _resource;
}
public BackoffAlgorithm getBackoffAlgorithm() {
return _shell.getBackoffAlgorithm();
}
public String getResourceName() {
return _resource.getClass().getSimpleName();
}
public void upgradeAgent(final String url, boolean protocol) {
// shell needs to take care of synchronization when multiple-instances demand upgrade
// at the same time
_shell.upgradeAgent(url);
// To stop agent after it has been upgraded, as shell executor may prematurely time out
// tasks if agent is in shutting down process
if (protocol) {
if (_connection != null) {
_connection.stop();
_connection = null;
}
if (_resource != null) {
_resource.stop();
_resource = null;
}
} else {
stop(ShutdownCommand.Update, null);
}
}
public void start() {
if (!_resource.start()) {
s_logger.error("Unable to start the resource: " + _resource.getName());
throw new CloudRuntimeException("Unable to start the resource: " + _resource.getName());
}
_connection.start();
}
public void stop(final String reason, final String detail) {
s_logger.info("Stopping the agent: Reason = " + reason + (detail != null ? ": Detail = " + detail : ""));
if (_connection != null) {
final ShutdownCommand cmd = new ShutdownCommand(reason, detail);
try {
if (_link != null) {
Request req = new Request(0, (_id != null? _id : -1), -1, cmd, false);
_link.send(req.toBytes());
}
} catch (final ClosedChannelException e) {
s_logger.warn("Unable to send: " + cmd.toString());
} catch(Exception e) {
s_logger.warn("Unable to send: " + cmd.toString() + " due to exception: ", e);
}
s_logger.debug("Sending shutdown to management server");
try {
Thread.sleep(1000);
} catch (final InterruptedException e) {
s_logger.debug("Who the heck interrupted me here?");
}
_connection.stop();
_connection = null;
}
if (_resource != null) {
_resource.stop();
_resource = null;
}
}
public Long getId() {
return _id;
}
public void setId(final Long id) {
s_logger.info("Set agent id " + id);
_id = id;
_shell.setPersistentProperty(getResourceName(), "id", Long.toString(id));
}
public void scheduleWatch(final Link link, final Request request, final long delay, final long period) {
synchronized (_watchList) {
if (s_logger.isDebugEnabled()) {
s_logger.debug("Adding a watch list");
}
final WatchTask task = new WatchTask(link, request, this);
_timer.schedule(task, delay, period);
_watchList.add(task);
}
}
protected void cancelTasks() {
synchronized(_watchList) {
for (final WatchTask task : _watchList) {
task.cancel();
}
if (s_logger.isDebugEnabled()) {
s_logger.debug("Clearing watch list: " + _watchList.size());
}
_watchList.clear();
}
}
public void sendStartup(Link link) {
final StartupCommand[] startup = _resource.initialize();
final Command[] commands = new Command[startup.length];
for (int i=0; i< startup.length; i++){
setupStartupCommand(startup[i]);
commands[i] = startup[i];
}
final Request request = new Request(getNextSequence(), _id != null ? _id : -1, -1, commands, false);
if (s_logger.isDebugEnabled()) {
s_logger.debug("Sending Startup: " + request.toString());
}
synchronized(this) {
_startup = new StartupTask(link);
_timer.schedule(_startup, 180000);
}
try {
link.send(request.toBytes());
} catch (final ClosedChannelException e) {
s_logger.warn("Unable to send reques: " + request.toString());
}
}
protected void setupStartupCommand(StartupCommand startup) {
InetAddress addr;
try {
addr = InetAddress.getLocalHost();
} catch (final UnknownHostException e) {
s_logger.warn("unknow host? ", e);
//ignore
return;
}
final Script command = new Script("hostname", 500, s_logger);
final OutputInterpreter.OneLineParser parser = new OutputInterpreter.OneLineParser();
final String result = command.execute(parser);
final String hostname = result == null ? parser.getLine() : addr.toString();
startup.setId(getId());
if (startup.getName() == null)
startup.setName(hostname);
startup.setDataCenter(getZone());
startup.setPod(getPod());
startup.setGuid(getResourceGuid());
startup.setResourceName(getResourceName());
startup.setVersion(getVersion());
}
@Override
public Task create(Task.Type type, Link link, byte[] data) {
return new ServerHandler(type, link, data);
}
protected void reconnect(final Link link) {
synchronized(this) {
if (_startup != null) {
_startup.cancel();
_startup= null;
}
}
link.close();
link.terminated();
setLink(null);
cancelTasks();
_resource.disconnected();
while (true) {
_shell.getBackoffAlgorithm().waitBeforeRetry();
s_logger.info("Lost connection to the server. Reconnecting....");
int inProgress = 0;
if ((inProgress = _inProgress.get()) > 0) {
s_logger.info("Cannot connect because we still have " + inProgress + " commands in progress.");
continue;
}
try {
final SocketChannel sch = SocketChannel.open();
sch.configureBlocking(false);
sch.connect(link.getSocketAddress());
link.connect(sch);
return;
} catch(final IOException e) {
s_logger.error("Unable to establish connection with the server", e);
}
}
}
public void processStartupAnswer(Answer answer, Response response, Link link) {
boolean cancelled = false;
synchronized(this) {
if (_startup != null) {
_startup.cancel();
_startup = null;
} else {
cancelled = true;
}
}
final StartupAnswer startup = (StartupAnswer)answer;
if (!startup.getResult()) {
s_logger.error("Not allowed to connect to the server: " + answer.getDetails());
System.exit(1);
}
if (cancelled) {
s_logger.warn("Threw away a startup answer because we're reconnecting.");
return;
}
setId(startup.getHostId());
_pingInterval = startup.getPingInterval() * 1000; // change to ms.
setLastPingResponseTime();
scheduleWatch(link, response, _pingInterval, _pingInterval);
s_logger.info("Startup Response Received: agent id = " + getId());
}
protected void processRequest(final Request request, final Link link) {
boolean requestLogged = false;
Response response = null;
try {
final Command[] cmds = request.getCommands();
final Answer[] answers = new Answer[cmds.length];
for (int i = 0; i < cmds.length; i++)
{
final Command cmd = cmds[i];
Answer answer;
try
{
if (s_logger.isDebugEnabled())
{
//this is a hack to make sure we do NOT log the ssh keys
if((cmd instanceof ModifySshKeysCommand))
{
s_logger.debug("Received the request for command: ModifySshKeysCommand");
}
else
{
if(!requestLogged) //ensures request is logged only once per method call
{
s_logger.debug("Request:" + request.toString());
requestLogged = true;
}
}
s_logger.debug("Processing command: " + cmd.toString());
}
if (cmd instanceof CronCommand) {
final CronCommand watch = (CronCommand)cmd;
scheduleWatch(link, request, watch.getInterval() * 1000, watch.getInterval() * 1000);
answer = new Answer(cmd, true, null);
} else if (cmd instanceof UpgradeCommand) {
final UpgradeCommand upgrade = (UpgradeCommand)cmd;
answer = upgradeAgent(upgrade.getUpgradeUrl(), upgrade);
} else if(cmd instanceof AgentControlCommand) {
answer = null;
synchronized(_controlListeners) {
for(IAgentControlListener listener: _controlListeners) {
answer = listener.processControlRequest(request, (AgentControlCommand)cmd);
if(answer != null)
break;
}
}
if(answer == null) {
s_logger.warn("No handler found to process cmd: " + cmd.toString());
answer = new AgentControlAnswer(cmd);
}
} else {
_inProgress.incrementAndGet();
try {
answer = _resource.executeRequest(cmd);
} finally {
_inProgress.decrementAndGet();
}
if (answer == null) {
s_logger.debug("Response: unsupported command" + cmd.toString());
answer = Answer.createUnsupportedCommandAnswer(cmd);
}
}
} catch (final Throwable th) {
s_logger.warn("Caught: ", th);
final StringWriter writer = new StringWriter();
th.printStackTrace(new PrintWriter(writer));
answer = new Answer(cmd, false, writer.toString());
}
answers[i] = answer;
if (!answer.getResult() && request.stopOnError()) {
for (i++; i < cmds.length; i++) {
answers[i] = new Answer(cmds[i], false, "Stopped by previous failure");
}
break;
}
}
response = new Response(request, answers);
} finally {
if (s_logger.isDebugEnabled()) {
s_logger.debug(response != null ? response.toString() : "response is null");
}
if (response != null) {
try {
link.send(response.toBytes());
} catch (final ClosedChannelException e) {
s_logger.warn("Unable to send response: " + response.toString());
}
}
}
}
public void processResponse(final Response response, final Link link) {
final Answer answer = response.getAnswer();
if (s_logger.isDebugEnabled()) {
s_logger.debug("Received response: " + response.toString());
}
if (answer instanceof StartupAnswer) {
processStartupAnswer(answer, response, link);
} else if(answer instanceof AgentControlAnswer) {
// Notice, we are doing callback while holding a lock!
synchronized(_controlListeners) {
for(IAgentControlListener listener : _controlListeners) {
listener.processControlResponse(response, (AgentControlAnswer)answer);
}
}
} else {
setLastPingResponseTime();
}
}
public void processOtherTask(Task task) {
final Object obj = task.get();
if (obj instanceof Response) {
if ((System.currentTimeMillis() - _lastPingResponseTime) > _pingInterval * _shell.getPingRetries()) {
s_logger.error("Ping Interval has gone past " + _pingInterval * _shell.getPingRetries() + ". Attempting to reconnect.");
final Link link = task.getLink();
reconnect(link);
return;
}
final PingCommand ping = _resource.getCurrentStatus(getId());
final Request request = new Request(getNextSequence(), _id, -1, ping, false);
if (s_logger.isDebugEnabled()) {
s_logger.debug("Sending ping: " + request.toString());
}
try {
task.getLink().send(request.toBytes());
} catch (final ClosedChannelException e) {
s_logger.warn("Unable to send request: " + request.toString());
}
} else if (obj instanceof Request){
final Request req = (Request)obj;
final Command command = req.getCommand();
Answer answer = null;
_inProgress.incrementAndGet();
try {
answer = _resource.executeRequest(command);
} finally {
_inProgress.decrementAndGet();
}
if (answer != null) {
final Response response = new Response(req, answer);
if (s_logger.isDebugEnabled()) {
s_logger.debug("Watch Sent: " + response.toString());
}
try {
task.getLink().send(response.toBytes());
} catch (final ClosedChannelException e) {
s_logger.warn("Unable to send response: " + response.toString());
}
}
} else {
s_logger.warn("Ignoring an unknown task");
}
}
protected UpgradeAnswer upgradeAgent(final String url, final UpgradeCommand cmd) {
try {
upgradeAgent(url, cmd == null);
return null;
} catch(final Exception e) {
s_logger.error("Unable to run this agent because we couldn't complete the upgrade process.", e);
if (cmd != null) {
final StringWriter writer = new StringWriter();
writer.append(e.getMessage());
writer.append("===>Stack<===");
e.printStackTrace(new PrintWriter(writer));
return new UpgradeAnswer(cmd, writer.toString());
}
System.exit(3);
return null;
}
}
public synchronized void setLastPingResponseTime() {
_lastPingResponseTime = System.currentTimeMillis();
}
protected synchronized long getNextSequence() {
return _sequence++;
}
@Override
public void registerControlListener(IAgentControlListener listener) {
synchronized(_controlListeners) {
_controlListeners.add(listener);
}
}
@Override
public void unregisterControlListener(IAgentControlListener listener) {
synchronized(_controlListeners) {
_controlListeners.remove(listener);
}
}
@Override
public AgentControlAnswer sendRequest(AgentControlCommand cmd, int timeoutInMilliseconds) throws AgentControlChannelException {
Request request = new Request(this.getNextSequence(), this.getId(),
-1, new Command[] {cmd}, true, false);
AgentControlListener listener = new AgentControlListener(request);
registerControlListener(listener);
try {
postRequest(request);
synchronized(listener) {
try {
listener.wait(timeoutInMilliseconds);
} catch (InterruptedException e) {
s_logger.warn("sendRequest is interrupted, exit waiting");
}
}
return listener.getAnswer();
} finally {
unregisterControlListener(listener);
}
}
@Override
public void postRequest(AgentControlCommand cmd) throws AgentControlChannelException {
Request request = new Request(this.getNextSequence(), this.getId(),
-1, new Command[] {cmd}, true, false);
postRequest(request);
}
private void postRequest(Request request) throws AgentControlChannelException {
if(_link != null) {
try {
_link.send(request.toBytes());
} catch (final ClosedChannelException e) {
s_logger.warn("Unable to post agent control reques: " + request.toString());
throw new AgentControlChannelException("Unable to post agent control request due to " + e.getMessage());
}
} else {
throw new AgentControlChannelException("Unable to post agent control request as link is not available");
}
}
public class AgentControlListener implements IAgentControlListener {
private AgentControlAnswer _answer;
private final Request _request;
public AgentControlListener(Request request) {
_request = request;
}
public AgentControlAnswer getAnswer() {
return _answer;
}
public Answer processControlRequest(Request request, AgentControlCommand cmd) {
return null;
}
public void processControlResponse(Response response, AgentControlAnswer answer) {
if(_request.getSequence() == response.getSequence()) {
_answer = answer;
synchronized(this) {
notifyAll();
}
}
}
}
protected class ShutdownThread extends Thread {
Agent _agent;
public ShutdownThread(final Agent agent) {
super("AgentShutdownThread");
_agent = agent;
}
@Override
public void run() {
_agent.stop(ShutdownCommand.Requested, null);
}
}
public class WatchTask extends TimerTask {
protected Request _request;
protected Agent _agent;
protected Link _link;
public WatchTask(final Link link, final Request request, final Agent agent) {
super();
_request = request;
_link = link;
_agent = agent;
}
@Override
public void run() {
if (s_logger.isTraceEnabled()) {
s_logger.trace("Scheduling " + (_request instanceof Response ? "Ping" : "Watch Task"));
}
try {
_link.schedule(new ServerHandler(Task.Type.OTHER, _link, _request));
} catch (final ClosedChannelException e) {
s_logger.warn("Unable to schedule task because channel is closed");
}
}
}
public class StartupTask extends TimerTask {
protected Link _link;
protected volatile boolean cancelled = false;
public StartupTask(final Link link) {
s_logger.debug("Startup task created");
_link = link;
}
@Override
public synchronized boolean cancel() {
// TimerTask.cancel may fail depends on the calling context
if (!cancelled) {
cancelled = true;
s_logger.debug("Startup task cancelled");
return super.cancel();
}
return true;
}
@Override
public synchronized void run() {
if(!cancelled) {
if(s_logger.isInfoEnabled()) {
s_logger.info("The startup command is now cancelled");
}
cancelled = true;
_startup = null;
reconnect(_link);
}
}
}
public class ServerHandler extends Task {
public ServerHandler(Task.Type type, Link link, byte[] data) {
super(type, link, data);
}
public ServerHandler(Task.Type type, Link link, Request req) {
super(type, link, req);
}
@Override
public void doTask(final Task task) {
if (task.getType() == Task.Type.CONNECT) {
_shell.getBackoffAlgorithm().reset();
setLink(task.getLink());
sendStartup(task.getLink());
} else if (task.getType() == Task.Type.DATA) {
Request request;
try {
request = Request.parse(task.getData());
if (request instanceof UpgradeResponse) {
upgradeAgent(((UpgradeResponse)request).getUpgradeUrl(), null);
} else if (request instanceof Response) {
processResponse((Response)request, task.getLink());
} else {
processRequest(request, task.getLink());
}
} catch (final ClassNotFoundException e) {
s_logger.error("Unable to find this request ");
} catch (final Exception e) {
s_logger.error("Error parsing task", e);
}
} else if (task.getType() == Task.Type.DISCONNECT) {
reconnect(task.getLink());
return;
} else if (task.getType() == Task.Type.OTHER) {
processOtherTask(task);
}
}
}
}

View File

@ -0,0 +1,590 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.net.HttpURLConnection;
import java.util.ArrayList;
import java.util.Date;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import javax.naming.ConfigurationException;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.log4j.Logger;
import com.cloud.agent.Agent.ExitStatus;
import com.cloud.agent.dao.StorageComponent;
import com.cloud.agent.dao.impl.PropertiesStorage;
import com.cloud.host.Host;
import com.cloud.resource.ServerResource;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.ProcessUtil;
import com.cloud.utils.PropertiesUtil;
import com.cloud.utils.backoff.BackoffAlgorithm;
import com.cloud.utils.backoff.impl.ConstantTimeBackoff;
import com.cloud.utils.component.Adapters;
import com.cloud.utils.component.ComponentLocator;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.net.MacAddress;
import com.cloud.utils.script.Script;
public class AgentShell implements IAgentShell {
private static final Logger s_logger = Logger.getLogger(AgentShell.class.getName());
private final Properties _properties = new Properties();
private final Map<String, Object> _cmdLineProperties = new HashMap<String, Object>();
private StorageComponent _storage;
private BackoffAlgorithm _backoff;
private String _version;
private String _zone;
private String _pod;
private String _host;
private String _privateIp;
private int _port;
private int _proxyPort;
private int _workers;
private String _guid;
private int _nextAgentId = 1;
private volatile boolean _exit = false;
private int _pingRetries;
private Thread _consoleProxyMain = null;
private final List<Agent> _agents = new ArrayList<Agent>();
public AgentShell() {
}
@Override
public Properties getProperties() {
return _properties;
}
@Override
public BackoffAlgorithm getBackoffAlgorithm() {
return _backoff;
}
@Override
public int getPingRetries() {
return _pingRetries;
}
@Override
public String getVersion() {
return _version;
}
@Override
public String getZone() {
return _zone;
}
@Override
public String getPod() {
return _pod;
}
@Override
public String getHost() {
return _host;
}
@Override
public String getPrivateIp() {
return _privateIp;
}
@Override
public int getPort() {
return _port;
}
@Override
public int getProxyPort() {
return _proxyPort;
}
@Override
public int getWorkers() {
return _workers;
}
@Override
public String getGuid() {
return _guid;
}
public Map<String, Object> getCmdLineProperties() {
return _cmdLineProperties;
}
public String getProperty(String prefix, String name) {
if(prefix != null)
return _properties.getProperty(prefix + "." + name);
return _properties.getProperty(name);
}
@Override
public String getPersistentProperty(String prefix, String name) {
if(prefix != null)
return _storage.get(prefix + "." + name);
return _storage.get(name);
}
@Override
public void setPersistentProperty(String prefix, String name, String value) {
if(prefix != null)
_storage.persist(prefix + "." + name, value);
else
_storage.persist(name, value);
}
@Override
public void upgradeAgent(final String url) {
s_logger.info("Updating agent with binary from " + url);
synchronized(this) {
final Class<?> c = this.getClass();
String path = c.getResource(c.getSimpleName() + ".class").toExternalForm();
final int begin = path.indexOf(File.separator);
int end = path.lastIndexOf("!");
end = path.lastIndexOf(File.separator, end);
path = path.substring(begin, end);
s_logger.debug("Current binaries reside at " + path);
File file = null;
try {
file = File.createTempFile("agent-", "-" + Long.toString(new Date().getTime()));
wget(url, file);
} catch (final IOException e) {
s_logger.warn("Exception while downloading agent update package, ", e);
throw new CloudRuntimeException("Unable to update from " + url + ", exception:" + e.getMessage(), e);
}
if (s_logger.isDebugEnabled()) {
s_logger.debug("Unzipping " + file.getAbsolutePath() + " to " + path);
}
final Script unzip = new Script("unzip", 120000, s_logger);
unzip.add("-o", "-q"); // overwrite and quiet
unzip.add(file.getAbsolutePath());
unzip.add("-d", path);
final String result = unzip.execute();
if (result != null) {
throw new CloudRuntimeException("Unable to unzip the retrieved file: " + result);
}
if (s_logger.isDebugEnabled()) {
s_logger.debug("Closing the connection to the management server");
}
}
if (s_logger.isDebugEnabled()) {
s_logger.debug("Exiting to start the new agent.");
}
System.exit(ExitStatus.Upgrade.value());
}
public static void wget(String url, File file) throws IOException {
final HttpClient client = new HttpClient();
final GetMethod method = new GetMethod(url);
int response;
response = client.executeMethod(method);
if (response != HttpURLConnection.HTTP_OK) {
s_logger.warn("Retrieving from " + url + " gives response code: " + response);
throw new CloudRuntimeException("Unable to download from " + url + ". Response code is " + response);
}
final InputStream is = method.getResponseBodyAsStream();
s_logger.debug("Downloading content into " + file.getAbsolutePath());
final FileOutputStream fos = new FileOutputStream(file);
byte[] buffer = new byte[4096];
int len = 0;
while( (len = is.read(buffer)) > 0)
fos.write(buffer, 0, len);
fos.close();
try {
is.close();
} catch(IOException e) {
s_logger.warn("Exception while closing download stream from " + url + ", ", e);
}
}
private void loadProperties() throws ConfigurationException {
final File file = PropertiesUtil.findConfigFile("agent.properties");
if (file == null) {
throw new ConfigurationException("Unable to find agent.properties.");
}
s_logger.info("agent.properties found at " + file.getAbsolutePath());
try {
_properties.load(new FileInputStream(file));
} catch (final FileNotFoundException ex) {
throw new CloudRuntimeException("Cannot find the file: " + file.getAbsolutePath(), ex);
} catch (final IOException ex) {
throw new CloudRuntimeException("IOException in reading " + file.getAbsolutePath(), ex);
}
}
protected boolean parseCommand(final String[] args) throws ConfigurationException {
String host = null;
String workers = null;
String port = null;
String zone = null;
String pod = null;
String guid = null;
for (int i = 0; i < args.length; i++) {
final String[] tokens = args[i].split("=");
if (tokens.length != 2) {
System.out.println("Invalid Parameter: " + args[i]);
continue;
}
// save command line properties
_cmdLineProperties.put(tokens[0], tokens[1]);
if (tokens[0].equalsIgnoreCase("port")) {
port = tokens[1];
} else if (tokens[0].equalsIgnoreCase("threads")) {
workers = tokens[1];
} else if (tokens[0].equalsIgnoreCase("host")) {
host = tokens[1];
} else if(tokens[0].equalsIgnoreCase("zone")) {
zone = tokens[1];
} else if(tokens[0].equalsIgnoreCase("pod")) {
pod = tokens[1];
} else if(tokens[0].equalsIgnoreCase("guid")) {
guid = tokens[1];
} else if(tokens[0].equalsIgnoreCase("eth1ip")) {
_privateIp = tokens[1];
}
}
if (port == null) {
port = getProperty(null, "port");
}
_port = NumbersUtil.parseInt(port, 8250);
_proxyPort = NumbersUtil.parseInt(getProperty(null, "consoleproxy.httpListenPort"), 443);
if (workers == null) {
workers = getProperty(null, "workers");
}
_workers = NumbersUtil.parseInt(workers, 5);
if (host == null) {
host = getProperty(null, "host");
}
if (host == null) {
host = "localhost";
}
_host = host;
if(zone != null)
_zone = zone;
else
_zone = getProperty(null, "zone");
if (_zone == null || (_zone.startsWith("@") && _zone.endsWith("@"))) {
_zone = "default";
}
if(pod != null)
_pod = pod;
else
_pod = getProperty(null, "pod");
if (_pod == null || (_pod.startsWith("@") && _pod.endsWith("@"))) {
_pod = "default";
}
if (_host == null || (_host.startsWith("@") && _host.endsWith("@"))) {
throw new ConfigurationException("Host is not configured correctly: " + _host);
}
final String retries = getProperty(null, "ping.retries");
_pingRetries = NumbersUtil.parseInt(retries, 5);
String value = getProperty(null, "developer");
boolean developer = Boolean.parseBoolean(value);
if(guid != null)
_guid = guid;
else
_guid = getProperty(null, "guid");
if (_guid == null) {
if (!developer) {
throw new ConfigurationException("Unable to find the guid");
}
_guid = MacAddress.getMacAddress().toString(":");
}
return true;
}
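// Illustrative note (not part of the original source): parseCommand() accepts each
// command-line argument as a key=value token, e.g.
//   host=192.168.1.100 port=8250 zone=default pod=default guid=<uuid>
// and every well-formed token is also retained in _cmdLineProperties.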
private void init(String[] args) throws ConfigurationException{
final ComponentLocator locator = ComponentLocator.getLocator("agent");
final Class<?> c = this.getClass();
_version = c.getPackage().getImplementationVersion();
if (_version == null) {
throw new CloudRuntimeException("Unable to find the implementation version of this agent");
}
s_logger.info("Implementation Version is " + _version);
parseCommand(args);
_storage = locator.getManager(StorageComponent.class);
if (_storage == null) {
s_logger.info("Defaulting to using properties file for storage");
_storage = new PropertiesStorage();
_storage.configure("Storage", new HashMap<String, Object>());
}
// merge with properties from command line to let resource access command line parameters
for(Map.Entry<String, Object> cmdLineProp : getCmdLineProperties().entrySet()) {
_properties.put(cmdLineProp.getKey(), cmdLineProp.getValue());
}
final Adapters adapters = locator.getAdapters(BackoffAlgorithm.class);
final Enumeration en = adapters.enumeration();
while (en.hasMoreElements()) {
_backoff = (BackoffAlgorithm)en.nextElement();
break;
}
if (en.hasMoreElements()) {
s_logger.info("More than one backoff algorithm specified. Using the first one ");
}
if (_backoff == null) {
s_logger.info("Defaulting to the constant time backoff algorithm");
_backoff = new ConstantTimeBackoff();
_backoff.configure("ConstantTimeBackoff", new HashMap<String, Object>());
}
}
private void launchAgent() throws ConfigurationException {
String resourceClassNames = getProperty(null, "resource");
s_logger.trace("resource=" + resourceClassNames);
if(resourceClassNames != null) {
launchAgentFromClassInfo(resourceClassNames);
return;
}
launchAgentFromTypeInfo();
}
private boolean needConsoleProxy() {
for(Agent agent: _agents) {
if( agent.getResource().getType().equals(Host.Type.ConsoleProxy)||
agent.getResource().getType().equals(Host.Type.Routing))
return true;
}
return false;
}
private int getConsoleProxyPort() {
int port = NumbersUtil.parseInt(getProperty(null, "consoleproxy.httpListenPort"), 443);
return port;
}
private void openPortWithIptables(int port) {
// TODO
}
private void launchConsoleProxy() throws ConfigurationException {
if(!needConsoleProxy()) {
if(s_logger.isInfoEnabled())
s_logger.info("Storage only agent, no need to start console proxy on it");
return;
}
int port = getConsoleProxyPort();
openPortWithIptables(port);
_consoleProxyMain = new Thread(new Runnable() {
public void run() {
try {
Class<?> consoleProxyClazz = Class.forName("com.cloud.consoleproxy.ConsoleProxy");
try {
Method method = consoleProxyClazz.getMethod("start", Properties.class);
method.invoke(null, _properties);
} catch (SecurityException e) {
s_logger.error("Unable to launch console proxy due to SecurityException");
System.exit(ExitStatus.Error.value());
} catch (NoSuchMethodException e) {
s_logger.error("Unable to launch console proxy due to NoSuchMethodException");
System.exit(ExitStatus.Error.value());
} catch (IllegalArgumentException e) {
s_logger.error("Unable to launch console proxy due to IllegalArgumentException");
System.exit(ExitStatus.Error.value());
} catch (IllegalAccessException e) {
s_logger.error("Unable to launch console proxy due to IllegalAccessException");
System.exit(ExitStatus.Error.value());
} catch (InvocationTargetException e) {
s_logger.error("Unable to launch console proxy due to InvocationTargetException");
System.exit(ExitStatus.Error.value());
}
} catch (final ClassNotFoundException e) {
s_logger.error("Unable to launch console proxy due to ClassNotFoundException");
System.exit(ExitStatus.Error.value());
}
}
}, "Console-Proxy-Main");
_consoleProxyMain.setDaemon(true);
_consoleProxyMain.start();
}
private void launchAgentFromClassInfo(String resourceClassNames) throws ConfigurationException {
String[] names = resourceClassNames.split("\\|");
for(String name: names) {
Class<?> impl;
try {
impl = Class.forName(name);
final Constructor<?> constructor = impl.getDeclaredConstructor();
constructor.setAccessible(true);
ServerResource resource = (ServerResource)constructor.newInstance();
launchAgent(getNextAgentId(), resource);
} catch (final ClassNotFoundException e) {
throw new ConfigurationException("Resource class not found: " + name);
} catch (final SecurityException e) {
throw new ConfigurationException("Security excetion when loading resource: " + name);
} catch (final NoSuchMethodException e) {
throw new ConfigurationException("Method not found excetion when loading resource: " + name);
} catch (final IllegalArgumentException e) {
throw new ConfigurationException("Illegal argument excetion when loading resource: " + name);
} catch (final InstantiationException e) {
throw new ConfigurationException("Instantiation excetion when loading resource: " + name);
} catch (final IllegalAccessException e) {
throw new ConfigurationException("Illegal access exception when loading resource: " + name);
} catch (final InvocationTargetException e) {
throw new ConfigurationException("Invocation target exception when loading resource: " + name);
}
}
}
private void launchAgentFromTypeInfo() throws ConfigurationException {
String typeInfo = getProperty(null, "type");
if (typeInfo == null) {
s_logger.error("Unable to retrieve the type");
throw new ConfigurationException("Unable to retrieve the type of this agent.");
}
s_logger.trace("Launching agent based on type=" + typeInfo);
}
private void launchAgent(int localAgentId, ServerResource resource) throws ConfigurationException {
// we don't track agent after it is launched for now
Agent agent = new Agent(this, localAgentId, resource);
_agents.add(agent);
agent.start();
}
public synchronized int getNextAgentId() {
return _nextAgentId++;
}
private void run(String[] args) {
try {
System.setProperty("java.net.preferIPv4Stack","true");
loadProperties();
init(args);
String instance = getProperty(null, "instance");
if (instance == null) {
instance = "";
} else {
instance += ".";
}
final String run = "agent." + instance + "pid";
s_logger.debug("Checking to see if " + run + "exists.");
ProcessUtil.pidCheck(run);
launchAgent();
//
// For both KVM & Xen-Server hypervisor, we have switched to VM-based console proxy solution, disable launching
// of console proxy here
//
// launchConsoleProxy();
//
try {
while(!_exit) Thread.sleep(1000);
} catch(InterruptedException e) {
}
} catch(final ConfigurationException e) {
s_logger.error("Unable to start agent: " + e.getMessage());
System.out.println("Unable to start agent: " + e.getMessage());
System.exit(ExitStatus.Configuration.value());
} catch (final Exception e) {
s_logger.error("Unable to start agent: ", e);
System.out.println("Unable to start agent: " + e.getMessage());
System.exit(ExitStatus.Error.value());
}
}
public void stop() {
_exit = true;
if(_consoleProxyMain != null) {
_consoleProxyMain.interrupt();
}
}
public static void main(String[] args) {
AgentShell shell = new AgentShell();
Runtime.getRuntime().addShutdownHook(new ShutdownThread(shell));
shell.run(args);
}
private static class ShutdownThread extends Thread {
AgentShell _shell;
public ShutdownThread(AgentShell shell) {
this._shell = shell;
}
@Override
public void run() {
_shell.stop();
}
}
}

View File

@ -0,0 +1,46 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent;
import java.util.Map;
import java.util.Properties;
import com.cloud.utils.backoff.BackoffAlgorithm;
public interface IAgentShell {
public Map<String, Object> getCmdLineProperties();
public Properties getProperties();
public String getPersistentProperty(String prefix, String name);
public void setPersistentProperty(String prefix, String name, String value);
public String getHost();
public String getPrivateIp();
public int getPort();
public int getWorkers();
public int getProxyPort();
public String getGuid();
public String getZone();
public String getPod();
public BackoffAlgorithm getBackoffAlgorithm();
public int getPingRetries();
public void upgradeAgent(final String url);
public String getVersion();
}

View File

@ -0,0 +1,30 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.dao;
import com.cloud.utils.component.Manager;
/**
* StorageDao is an abstraction layer for what the agent will use for storage.
*
*/
public interface StorageComponent extends Manager {
String get(String key);
void persist(String key, String value);
}

View File

@ -0,0 +1,131 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.dao.impl;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Map;
import java.util.Properties;
import javax.ejb.Local;
import org.apache.log4j.Logger;
import com.cloud.agent.dao.StorageComponent;
import com.cloud.utils.PropertiesUtil;
/**
* Uses Properties to implement storage.
*
* @config
* {@table
* || Param Name | Description | Values | Default ||
* || path | path to the properties _file | String | db/db.properties ||
* }
**/
@Local(value={StorageComponent.class})
public class PropertiesStorage implements StorageComponent {
private static final Logger s_logger = Logger.getLogger(PropertiesStorage.class);
Properties _properties = new Properties();
File _file;
String _name;
@Override
public synchronized String get(String key) {
return _properties.getProperty(key);
}
@Override
public synchronized void persist(String key, String value) {
_properties.setProperty(key, value);
FileOutputStream output = null;
try {
output = new FileOutputStream(_file);
_properties.store(output, _name);
output.flush();
output.close();
} catch (FileNotFoundException e) {
s_logger.error("Who deleted the file? ", e);
} catch (IOException e) {
s_logger.error("Uh-oh: ", e);
} finally {
if (output != null) {
try {
output.close();
} catch (IOException e) {
//ignore.
}
}
}
}
@Override
public boolean configure(String name, Map<String, Object> params) {
_name = name;
String path = (String)params.get("path");
if (path == null) {
path = "agent.properties";
}
File file = PropertiesUtil.findConfigFile(path);
if (file == null) {
file = new File(path);
try {
if (!file.createNewFile()) {
s_logger.error("Unable to create _file: " + file.getAbsolutePath());
return false;
}
} catch (IOException e) {
s_logger.error("Unable to create _file: " + file.getAbsolutePath(), e);
return false;
}
}
try {
_properties.load(new FileInputStream(file));
_file = file;
} catch (FileNotFoundException e) {
s_logger.error("How did we get here? ", e);
return false;
} catch (IOException e) {
s_logger.error("IOException: ", e);
return false;
}
return true;
}
@Override
public String getName() {
return _name;
}
@Override
public boolean start() {
return true;
}
@Override
public boolean stop() {
return true;
}
}
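As the @config note says, the only parameter PropertiesStorage reads is "path"; when it is missing, the class falls back to agent.properties. Below is a minimal sketch of configuring and exercising it by hand; the file path and key are assumptions made up for the example.

    import java.util.HashMap;
    import java.util.Map;
    import com.cloud.agent.dao.StorageComponent;
    import com.cloud.agent.dao.impl.PropertiesStorage;

    public class PropertiesStorageSketch {
        public static void main(String[] args) {
            Map<String, Object> params = new HashMap<String, Object>();
            params.put("path", "/tmp/agent-test.properties"); // hypothetical location
            StorageComponent storage = new PropertiesStorage();
            // configure() resolves the path, creating the file on first use, and loads it.
            if (storage.configure("storage", params)) {
                storage.persist("last.seen", String.valueOf(System.currentTimeMillis()));
                System.out.println(storage.get("last.seen"));
            }
        }
    }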

View File

@ -0,0 +1,104 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.resource;
import java.util.Map;
import javax.ejb.Local;
import com.cloud.agent.IAgentControl;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.Command;
import com.cloud.agent.api.PingCommand;
import com.cloud.agent.api.StartupCommand;
import com.cloud.host.Host;
import com.cloud.host.Host.Type;
import com.cloud.resource.ServerResource;
@Local(value={ServerResource.class})
public class DummyResource implements ServerResource {
String _name;
Host.Type _type;
boolean _negative;
IAgentControl _agentControl;
@Override
public void disconnected() {
}
@Override
public Answer executeRequest(Command cmd) {
System.out.println("Received Command: " + cmd.toString());
Answer answer = new Answer(cmd, !_negative, "response");
System.out.println("Replying with: " + answer.toString());
return answer;
}
@Override
public PingCommand getCurrentStatus(long id) {
return new PingCommand(_type, id);
}
@Override
public Type getType() {
return _type;
}
@Override
public StartupCommand[] initialize() {
return new StartupCommand[] {new StartupCommand()};
}
@Override
public boolean configure(String name, Map<String, Object> params) {
_name = name;
String value = (String)params.get("type");
_type = Host.Type.valueOf(value);
value = (String)params.get("negative.reply");
_negative = Boolean.parseBoolean(value);
return true;
}
@Override
public String getName() {
return _name;
}
@Override
public boolean start() {
return true;
}
@Override
public boolean stop() {
return true;
}
@Override
public IAgentControl getAgentControl() {
return _agentControl;
}
@Override
public void setAgentControl(IAgentControl agentControl) {
_agentControl = agentControl;
}
}
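DummyResource simply echoes every command and replies with a canned Answer (negative when the negative.reply parameter is "true"), which makes it handy for exercising the agent plumbing without a real hypervisor. A sketch of configuring it directly, using only constructors and parameters that appear in this commit; "ConsoleProxy" is just one valid Host.Type value.

    import java.util.HashMap;
    import java.util.Map;
    import com.cloud.agent.api.PingCommand;
    import com.cloud.agent.resource.DummyResource;
    import com.cloud.host.Host;

    public class DummyResourceSketch {
        public static void main(String[] args) {
            Map<String, Object> params = new HashMap<String, Object>();
            params.put("type", "ConsoleProxy");    // any Host.Type constant name
            params.put("negative.reply", "false"); // "true" makes every Answer negative
            DummyResource resource = new DummyResource();
            resource.configure("dummy", params);
            // executeRequest() prints the received command and the canned Answer it returns.
            resource.executeRequest(new PingCommand(Host.Type.ConsoleProxy, 1));
        }
    }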

View File

@ -0,0 +1,224 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.resource.computing;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import org.apache.log4j.Logger;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
/**
* @author chiradeep
*
*/
public class LibvirtCapXMLParser extends LibvirtXMLParser {
private boolean _host = false;
private boolean _guest = false;
private boolean _osType = false;
private boolean _domainTypeKVM = false;
private boolean _emulatorFlag = false;
private String _emulator ;
private final StringBuffer _capXML = new StringBuffer();
private static final Logger s_logger = Logger.getLogger(LibvirtCapXMLParser.class);
private final ArrayList<String> guestOsTypes = new ArrayList<String>();
@Override
public void endElement(String uri, String localName, String qName)
throws SAXException {
if(qName.equalsIgnoreCase("host")) {
_host = false;
} else if (qName.equalsIgnoreCase("os_type")) {
_osType = false;
} else if (qName.equalsIgnoreCase("guest")) {
_guest = false;
} else if (qName.equalsIgnoreCase("domain")) {
_domainTypeKVM = false;
} else if (qName.equalsIgnoreCase("emulator")) {
_emulatorFlag = false;
} else if (_host) {
_capXML.append("<").append("/").append(qName).append(">");
}
}
@Override
public void characters(char[] ch, int start, int length) throws SAXException {
if (_host) {
_capXML.append(ch, start, length);
} else if (_osType) {
guestOsTypes.add(new String(ch, start, length));
} else if (_emulatorFlag) {
_emulator = new String(ch, start, length);
}
}
@Override
public void startElement(String uri, String localName, String qName,
Attributes attributes) throws SAXException {
if(qName.equalsIgnoreCase("host")) {
_host = true;
} else if (qName.equalsIgnoreCase("guest")) {
_guest = true;
} else if (qName.equalsIgnoreCase("os_type")) {
if (_guest) {
_osType = true;
}
} else if (qName.equalsIgnoreCase("domain")) {
for (int i = 0; i < attributes.getLength(); i++) {
if (attributes.getQName(i).equalsIgnoreCase("type")
&& attributes.getValue(i).equalsIgnoreCase("kvm")) {
_domainTypeKVM = true;
}
}
} else if (qName.equalsIgnoreCase("emulator") && _domainTypeKVM) {
_emulatorFlag = true;
} else if (_host) {
_capXML.append("<").append(qName);
for (int i=0; i < attributes.getLength(); i++) {
_capXML.append(" ").append(attributes.getQName(i)).append("=").append(attributes.getValue(i));
}
_capXML.append(">");
}
}
public String parseCapabilitiesXML(String capXML) {
if (!_initialized){
return null;
}
try {
_sp.parse(new InputSource(new StringReader(capXML)), this);
return _capXML.toString();
}catch(SAXException se) {
s_logger.warn(se.getMessage());
}catch (IOException ie) {
s_logger.error(ie.getMessage());
}
return null;
}
public ArrayList<String> getGuestOsType() {
return guestOsTypes;
}
public String getEmulator() {
return _emulator;
}
public static void main(String [] args) {
String capXML = "<capabilities>"+
" <host>"+
" <cpu>"+
" <arch>x86_64</arch>"+
" <model>core2duo</model>"+
" <topology sockets='1' cores='2' threads='1'/>"+
" <feature name='lahf_lm'/>"+
" <feature name='xtpr'/>"+
" <feature name='cx16'/>"+
" <feature name='tm2'/>"+
" <feature name='est'/>"+
" <feature name='vmx'/>"+
" <feature name='ds_cpl'/>"+
" <feature name='pbe'/>"+
" <feature name='tm'/>"+
" <feature name='ht'/>"+
" <feature name='ss'/>"+
" <feature name='acpi'/>"+
" <feature name='ds'/>"+
" </cpu>"+
" <migration_features>"+
" <live/>"+
" <uri_transports>"+
" <uri_transport>tcp</uri_transport>"+
" </uri_transports>"+
" </migration_features>"+
" <topology>"+
" <cells num='1'>"+
" <cell id='0'>"+
" <cpus num='2'>"+
" <cpu id='0'/>"+
" <cpu id='1'/>"+
" </cpus>"+
" </cell>"+
" </cells>"+
" </topology>"+
" </host>"+
""+
" <guest>"+
" <os_type>hvm</os_type>"+
" <arch name='i686'>"+
" <wordsize>32</wordsize>"+
" <emulator>/usr/bin/qemu</emulator>"+
" <machine>pc-0.11</machine>"+
" <machine canonical='pc-0.11'>pc</machine>"+
" <machine>pc-0.10</machine>"+
" <machine>isapc</machine>"+
" <domain type='qemu'>"+
" </domain>"+
" <domain type='kvm'>"+
" <emulator>/usr/bin/qemu-kvm</emulator>"+
" <machine>pc-0.11</machine>"+
" <machine canonical='pc-0.11'>pc</machine>"+
" <machine>pc-0.10</machine>"+
" <machine>isapc</machine>"+
" </domain>"+
" </arch>"+
" <features>"+
" <cpuselection/>"+
" <pae/>"+
" <nonpae/>"+
" <acpi default='on' toggle='yes'/>"+
" <apic default='on' toggle='no'/>"+
" </features>"+
" </guest>"+
" <guest>"+
" <os_type>hvm</os_type>"+
" <arch name='x86_64'>"+
" <wordsize>64</wordsize>"+
" <emulator>/usr/bin/qemu-system-x86_64</emulator>"+
" <machine>pc-0.11</machine>"+
" <machine canonical='pc-0.11'>pc</machine>"+
" <machine>pc-0.10</machine>"+
" <machine>isapc</machine>"+
" <domain type='qemu'>"+
" </domain>"+
" <domain type='kvm'>"+
" <emulator>/usr/bin/qemu-kvm</emulator>"+
" <machine>pc-0.11</machine>"+
" <machine canonical='pc-0.11'>pc</machine>"+
" <machine>pc-0.10</machine>"+
" <machine>isapc</machine>"+
" </domain>"+
" </arch>"+
" <features>"+
" <cpuselection/>"+
" <acpi default='on' toggle='yes'/>"+
" <apic default='on' toggle='no'/>"+
" </features>"+
" </guest>"+
"</capabilities>";
LibvirtCapXMLParser parser = new LibvirtCapXMLParser();
String cap = parser.parseCapabilitiesXML(capXML);
System.out.println(parser.getGuestOsType());
System.out.println(parser.getEmulator());
}
}

File diff suppressed because it is too large.

View File

@ -0,0 +1,178 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.resource.computing;
import java.util.ArrayList;
import java.util.SortedMap;
import java.util.TreeMap;
import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
/**
* @author chiradeep
*
*/
public class LibvirtDomainXMLParser extends LibvirtXMLParser {
private final ArrayList<String> interfaces = new ArrayList<String>();
private final SortedMap<String, String> diskMaps = new TreeMap<String, String>();
private boolean _interface;
private boolean _disk;
private boolean _desc;
private Integer vncPort;
private String diskDev;
private String diskFile;
private String desc;
public Integer getVncPort() {
return vncPort;
}
public void characters(char[] ch, int start, int length) throws SAXException {
if (_desc) {
desc = new String(ch, start, length);
}
}
@Override
public void startElement(String uri, String localName, String qName,
Attributes attributes) throws SAXException {
if(qName.equalsIgnoreCase("interface")) {
_interface = true;
} else if (qName.equalsIgnoreCase("target")){
if (_interface)
interfaces.add(attributes.getValue("dev"));
else if (_disk)
diskDev = attributes.getValue("dev");
} else if (qName.equalsIgnoreCase("source")){
if (_disk)
diskFile = attributes.getValue("file");
} else if (qName.equalsIgnoreCase("disk")) {
_disk = true;
} else if (qName.equalsIgnoreCase("graphics")) {
String port = attributes.getValue("port");
if (port != null) {
try {
vncPort = Integer.parseInt(port);
if (vncPort != -1) {
vncPort = vncPort - 5900;
} else {
vncPort = null;
}
}catch (NumberFormatException nfe){
vncPort = null;
}
}
} else if (qName.equalsIgnoreCase("description")) {
_desc = true;
}
}
@Override
public void endElement(String uri, String localName, String qName)
throws SAXException {
if(qName.equalsIgnoreCase("interface")) {
_interface = false;
} else if (qName.equalsIgnoreCase("disk")) {
diskMaps.put(diskDev, diskFile);
_disk = false;
} else if (qName.equalsIgnoreCase("description")) {
_desc = false;
}
}
public ArrayList<String> getInterfaces() {
return interfaces;
}
public SortedMap<String, String> getDiskMaps() {
return diskMaps;
}
public String getDescription() {
return desc;
}
public static void main(String [] args){
LibvirtDomainXMLParser parser = new LibvirtDomainXMLParser();
parser.parseDomainXML("<domain type='kvm' id='12'>"+
"<name>r-6-CV-5002-1</name>"+
"<uuid>581b5a4b-b496-8d4d-e44e-a7dcbe9df0b5</uuid>"+
"<description>testVM</description>"+
"<memory>131072</memory>"+
"<currentMemory>131072</currentMemory>"+
"<vcpu>1</vcpu>"+
"<os>"+
"<type arch='i686' machine='pc-0.11'>hvm</type>"+
"<kernel>/var/lib/libvirt/qemu/vmlinuz-2.6.31.6-166.fc12.i686</kernel>"+
"<cmdline>ro root=/dev/sda1 acpi=force selinux=0 eth0ip=10.1.1.1 eth0mask=255.255.255.0 eth2ip=192.168.10.152 eth2mask=255.255.255.0 gateway=192.168.10.1 dns1=72.52.126.11 dns2=72.52.126.12 domain=v4.myvm.com</cmdline>"+
"<boot dev='hd'/>"+
"</os>"+
"<features>"+
"<acpi/>"+
"<pae/>"+
"</features>"+
"<clock offset='utc'/>"+
"<on_poweroff>destroy</on_poweroff>"+
"<on_reboot>restart</on_reboot>"+
"<on_crash>destroy</on_crash>"+
"<devices>"+
"<emulator>/usr/bin/qemu-kvm</emulator>"+
"<disk type='file' device='disk'>"+
"<source file='/mnt/tank//vmops/CV/vm/u000004/r000006/rootdisk'/>"+
"<target dev='hda' bus='ide'/>"+
"</disk>"+
"<interface type='bridge'>"+
"<mac address='02:00:50:02:00:01'/>"+
"<source bridge='vnbr5002'/>"+
"<target dev='vtap5002'/>"+
"<model type='e1000'/>"+
"</interface>"+
"<interface type='network'>"+
"<mac address='00:16:3e:77:e2:a1'/>"+
"<source network='vmops-private'/>"+
"<target dev='vnet3'/>"+
"<model type='e1000'/>"+
"</interface>"+
"<interface type='bridge'>"+
"<mac address='06:85:00:00:00:04'/>"+
"<source bridge='br0'/>"+
"<target dev='tap5002'/>"+
"<model type='e1000'/>"+
"</interface>"+
"<input type='mouse' bus='ps2'/>"+
"<graphics type='vnc' port='6031' autoport='no' listen=''/>"+
"<video>"+
"<model type='cirrus' vram='9216' heads='1'/>"+
"</video>"+
"</devices>"+
"</domain>"
);
for (String intf: parser.getInterfaces()){
System.out.println(intf);
}
System.out.println(parser.getVncPort());
System.out.println(parser.getDescription());
}
}

View File

@ -0,0 +1,173 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.resource.computing;
import java.util.ArrayList;
import java.util.List;
public class LibvirtNetworkDef {
enum netType {
BRIDGE,
NAT,
LOCAL
}
private final String _networkName;
private final String _uuid;
private netType _networkType;
private String _brName;
private boolean _stp;
private int _delay;
private String _fwDev;
private final String _domainName;
private String _brIPAddr;
private String _brNetMask;
private final List<IPRange> ipranges = new ArrayList<IPRange>();
private final List<dhcpMapping> dhcpMaps = new ArrayList<dhcpMapping>();
public static class dhcpMapping {
String _mac;
String _name;
String _ip;
public dhcpMapping(String mac, String name, String ip) {
_mac = mac;
_name = name;
_ip = ip;
}
}
public static class IPRange {
String _start;
String _end;
public IPRange(String start, String end) {
_start = start;
_end = end;
}
}
public LibvirtNetworkDef(String netName, String uuid, String domName) {
_networkName = netName;
_uuid = uuid;
_domainName = domName;
}
public void defNATNetwork(String brName, boolean stp, int delay, String fwNic, String ipAddr, String netMask) {
_networkType = netType.NAT;
_brName = brName;
_stp = stp;
_delay = delay;
_fwDev = fwNic;
_brIPAddr = ipAddr;
_brNetMask = netMask;
}
public void defBrNetwork(String brName, boolean stp, int delay, String fwNic, String ipAddr, String netMask) {
_networkType = netType.BRIDGE;
_brName = brName;
_stp = stp;
_delay = delay;
_fwDev = fwNic;
_brIPAddr = ipAddr;
_brNetMask = netMask;
}
public void defLocalNetwork(String brName, boolean stp, int delay, String ipAddr, String netMask) {
_networkType = netType.LOCAL;
_brName = brName;
_stp = stp;
_delay = delay;
_brIPAddr = ipAddr;
_brNetMask = netMask;
}
public void adddhcpIPRange(String start, String end) {
IPRange ipr = new IPRange(start, end);
ipranges.add(ipr);
}
public void adddhcpMapping(String mac, String host, String ip) {
dhcpMapping map = new dhcpMapping(mac, host, ip);
dhcpMaps.add(map);
}
@Override
public String toString() {
StringBuilder netBuilder = new StringBuilder();
netBuilder.append("<network>\n");
netBuilder.append("<name>" + _networkName + "</name>\n");
if (_uuid != null)
netBuilder.append("<uuid>" + _uuid + "</uuid>\n");
if (_brName != null) {
netBuilder.append("<bridge name='" + _brName + "'");
if (_stp) {
netBuilder.append(" stp='on'");
} else {
netBuilder.append(" stp='off'");
}
if (_delay != -1) {
netBuilder.append(" delay='" + _delay +"'");
}
netBuilder.append("/>\n");
}
if (_domainName != null) {
netBuilder.append("<domain name='" + _domainName + "'/>\n");
}
if (_networkType == netType.BRIDGE) {
netBuilder.append("<forward mode='route'");
if (_fwDev != null) {
netBuilder.append(" dev='" + _fwDev + "'");
}
netBuilder.append("/>\n");
} else if (_networkType == netType.NAT) {
netBuilder.append("<forward mode='nat'");
if (_fwDev != null) {
netBuilder.append(" dev='" + _fwDev + "'");
}
netBuilder.append("/>\n");
}
if (_brIPAddr != null || _brNetMask != null || !ipranges.isEmpty() || !dhcpMaps.isEmpty()) {
netBuilder.append("<ip");
if (_brIPAddr != null)
netBuilder.append(" address='" + _brIPAddr + "'");
if (_brNetMask != null) {
netBuilder.append(" netmask='" + _brNetMask + "'");
}
netBuilder.append(">\n");
if (!ipranges.isEmpty() || !dhcpMaps.isEmpty()) {
netBuilder.append("<dhcp>\n");
for (IPRange ip : ipranges ) {
netBuilder.append("<range start='" + ip._start + "'" + " end='" + ip._end + "'/>\n");
}
for (dhcpMapping map : dhcpMaps) {
netBuilder.append("<host mac='" + map._mac + "' name='" + map._name + "' ip='" + map._ip + "'/>\n");
}
netBuilder.append("</dhcp>\n");
}
netBuilder.append("</ip>\n");
}
netBuilder.append("</network>\n");
return netBuilder.toString();
}
/**
* @param args
*/
public static void main(String[] args) {
LibvirtNetworkDef net = new LibvirtNetworkDef("cloudPrivate", null, "cloud.com");
net.defNATNetwork("cloudbr0", false, 0, null, "192.168.168.1", "255.255.255.0");
net.adddhcpIPRange("192.168.168.100", "192.168.168.220");
net.adddhcpIPRange("192.168.168.10", "192.168.168.50");
net.adddhcpMapping("branch0.cloud.com", "00:16:3e:77:e2:ed", "192.168.168.100");
net.adddhcpMapping("branch1.cloud.com", "00:16:3e:77:e2:ef", "192.168.168.101");
net.adddhcpMapping("branch2.cloud.com", "00:16:3e:77:e2:f0", "192.168.168.102");
System.out.println(net.toString());
}
}

View File

@ -0,0 +1,70 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.resource.computing;
public class LibvirtStoragePoolDef {
public enum poolType {
ISCSI("iscsi"),
NFS("netfs"),
DIR("dir");
String _poolType;
poolType(String poolType) {
_poolType = poolType;
}
@Override
public String toString() {
return _poolType;
}
}
private poolType _poolType;
private String _poolName;
private String _uuid;
private String _sourceHost;
private String _sourceDir;
private String _targetPath;
public LibvirtStoragePoolDef(poolType type, String poolName, String uuid, String host, String dir, String targetPath) {
_poolType = type;
_poolName = poolName;
_uuid = uuid;
_sourceHost = host;
_sourceDir = dir;
_targetPath = targetPath;
}
@Override
public String toString() {
StringBuilder storagePoolBuilder = new StringBuilder();
storagePoolBuilder.append("<pool type='" + _poolType + "'>\n");
storagePoolBuilder.append("<name>" + _poolName + "</name>\n");
if (_uuid != null)
storagePoolBuilder.append("<uuid>" + _uuid + "</uuid>\n");
if (_poolType == poolType.NFS) {
storagePoolBuilder.append("<source>\n");
storagePoolBuilder.append("<host name='" + _sourceHost + "'/>\n");
storagePoolBuilder.append("<dir path='" + _sourceDir + "'/>\n");
storagePoolBuilder.append("</source>\n");
}
storagePoolBuilder.append("<target>\n");
storagePoolBuilder.append("<path>" + _targetPath + "</path>\n");
storagePoolBuilder.append("</target>\n");
storagePoolBuilder.append("</pool>\n");
return storagePoolBuilder.toString();
}
}
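Unlike most of the classes in this commit, LibvirtStoragePoolDef has no main() showing its output, so here is a hedged sketch that mirrors the constructor above; the host, export path, mount target and pool name are invented for illustration. The NFS pool type serializes as <pool type='netfs'>, which is what libvirt expects for a pool definition.

    import java.util.UUID;
    import com.cloud.agent.resource.computing.LibvirtStoragePoolDef;

    public class StoragePoolDefSketch {
        public static void main(String[] args) {
            LibvirtStoragePoolDef nfsPool = new LibvirtStoragePoolDef(
                    LibvirtStoragePoolDef.poolType.NFS, // emitted as type='netfs'
                    "primary1",                         // pool name (illustrative)
                    UUID.randomUUID().toString(),
                    "192.168.10.5",                     // NFS server (illustrative)
                    "/export/primary",                  // exported directory (illustrative)
                    "/mnt/pool/primary1");              // local mount target (illustrative)
            System.out.println(nfsPool.toString());
        }
    }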

View File

@ -0,0 +1,70 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.resource.computing;
public class LibvirtStorageVolumeDef {
public enum volFormat {
RAW("raw"),
QCOW2("qcow2"),
DIR("dir");
private String _format;
volFormat(String format) {
_format = format;
}
@Override
public String toString() {
return _format;
}
}
private String _volName;
private Long _volSize;
private volFormat _volFormat;
private String _backingPath;
private volFormat _backingFormat;
public LibvirtStorageVolumeDef(String volName, Long size, volFormat format, String tmplPath, volFormat tmplFormat) {
_volName = volName;
_volSize = size;
_volFormat = format;
_backingPath = tmplPath;
_backingFormat = tmplFormat;
}
@Override
public String toString() {
StringBuilder storageVolBuilder = new StringBuilder();
storageVolBuilder.append("<volume>\n");
storageVolBuilder.append("<name>" + _volName + "</name>\n");
if (_volSize != null) {
storageVolBuilder.append("<capacity >" + _volSize + "</capacity>\n");
}
storageVolBuilder.append("<target>\n");
storageVolBuilder.append("<format type='" + _volFormat + "'/>\n");
storageVolBuilder.append("</target>\n");
if (_backingPath != null) {
storageVolBuilder.append("<backingStore>\n");
storageVolBuilder.append("<path>" + _backingPath + "</path>\n");
storageVolBuilder.append("<format type='" + _backingFormat + "'/>\n");
storageVolBuilder.append("</backingStore>\n");
}
storageVolBuilder.append("</volume>\n");
return storageVolBuilder.toString();
}
}
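LibvirtStorageVolumeDef likewise only builds the <volume> XML, so a short sketch of defining a qcow2 volume backed by a template image follows. The names and the size are illustrative, and treating <capacity> as bytes is an assumption — the class itself does not state a unit.

    import com.cloud.agent.resource.computing.LibvirtStorageVolumeDef;

    public class StorageVolumeDefSketch {
        public static void main(String[] args) {
            LibvirtStorageVolumeDef vol = new LibvirtStorageVolumeDef(
                    "rootdisk-0001",                   // volume name (illustrative)
                    2L * 1024 * 1024 * 1024,           // assumed to be bytes
                    LibvirtStorageVolumeDef.volFormat.QCOW2,
                    "/mnt/pool/template/centos.qcow2", // backing file (illustrative)
                    LibvirtStorageVolumeDef.volFormat.QCOW2);
            System.out.println(vol.toString());
        }
    }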

View File

@ -0,0 +1,653 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.resource.computing;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
public class LibvirtVMDef {
private String _hvsType;
private String _domName;
private String _domUUID;
private String _desc;
private final List<Object> components = new ArrayList<Object>();
public static class guestDef {
enum guestType {
KVM,
XEN,
EXE
}
enum bootOrder {
HARDISK("hd"),
CDROM("cdrom"),
FLOOPY("fd"),
NETWORK("network");
String _order;
bootOrder(String order) {
_order = order;
}
@Override
public String toString() {
return _order;
}
}
private guestType _type;
private String _arch;
private String _loader;
private String _kernel;
private String _initrd;
private String _root;
private String _cmdline;
private List<bootOrder> _bootdevs = new ArrayList<bootOrder>();
private String _machine;
public void setGuestType (guestType type) {
_type = type;
}
public void setGuestArch (String arch) {
_arch = arch;
}
public void setMachineType (String machine) {
_machine = machine;
}
public void setLoader (String loader) {
_loader = loader;
}
public void setBootKernel(String kernel, String initrd, String rootdev, String cmdline) {
_kernel = kernel;
_initrd = initrd;
_root = rootdev;
_cmdline = cmdline;
}
public void setBootOrder(bootOrder order) {
_bootdevs.add(order);
}
@Override
public String toString () {
if (_type == guestType.KVM) {
StringBuilder guestDef = new StringBuilder();
guestDef.append("<os>\n");
guestDef.append("<type ");
if (_arch != null) {
guestDef.append(" arch='" + _arch + "'");
}
if (_machine != null) {
guestDef.append(" machine='" + _machine + "'");
}
guestDef.append(">hvm</type>\n");
if (!_bootdevs.isEmpty()) {
for (bootOrder bo : _bootdevs) {
guestDef.append("<boot dev='" + bo + "'/>\n");
}
}
guestDef.append("</os>\n");
return guestDef.toString();
} else
return null;
}
}
public static class guestResourceDef {
private int _mem;
private int _currentMem = -1;
private String _memBacking;
private int _vcpu = -1;
public void setMemorySize(int mem) {
_mem = mem;
}
public void setCurrentMem(int currMem) {
_currentMem = currMem;
}
public void setMemBacking(String memBacking) {
_memBacking = memBacking;
}
public void setVcpuNum(int vcpu) {
_vcpu = vcpu;
}
@Override
public String toString(){
StringBuilder resBuilder = new StringBuilder();
resBuilder.append("<memory>" + _mem + "</memory>\n");
if (_currentMem != -1) {
resBuilder.append("<currentMemory>" + _currentMem + "</currentMemory>\n");
}
if (_memBacking != null) {
resBuilder.append("<memoryBacking>" + "<" + _memBacking + "/>" + "</memoryBacking>\n");
}
if (_vcpu != -1) {
resBuilder.append("<vcpu>" + _vcpu + "</vcpu>\n");
}
return resBuilder.toString();
}
}
public static class featuresDef {
private final List<String> _features = new ArrayList<String>();
public void addFeatures(String feature) {
_features.add(feature);
}
@Override
public String toString() {
StringBuilder feaBuilder = new StringBuilder();
feaBuilder.append("<features>\n");
for (String feature : _features) {
feaBuilder.append("<" + feature + "/>\n");
}
feaBuilder.append("</features>\n");
return feaBuilder.toString();
}
}
public static class termPolicy {
private String _reboot;
private String _powerOff;
private String _crash;
public termPolicy() {
_reboot = _powerOff = _crash = "destroy";
}
public void setRebootPolicy(String rbPolicy) {
_reboot = rbPolicy;
}
public void setPowerOffPolicy(String poPolicy) {
_powerOff = poPolicy;
}
public void setCrashPolicy(String crashPolicy) {
_crash = crashPolicy;
}
@Override
public String toString() {
StringBuilder term = new StringBuilder();
term.append("<on_reboot>" + _reboot + "</on_reboot>\n");
term.append("<on_poweroff>" + _powerOff + "</on_poweroff>\n");
term.append("<on_crash>" + _powerOff + "</on_crash>\n");
return term.toString();
}
}
public static class devicesDef {
private String _emulator;
private final List<Object> devices = new ArrayList<Object>();
public boolean addDevice(Object device) {
return devices.add(device);
}
public void setEmulatorPath(String emulator) {
_emulator = emulator;
}
@Override
public String toString() {
StringBuilder devicesBuilder = new StringBuilder();
devicesBuilder.append("<devices>\n");
if (_emulator != null) {
devicesBuilder.append("<emulator>" + _emulator + "</emulator>\n");
}
for (Object o : devices) {
devicesBuilder.append(o.toString());
}
devicesBuilder.append("</devices>\n");
return devicesBuilder.toString();
}
}
public static class diskDef {
enum deviceType {
FLOOPY("floopy"),
DISK("disk"),
CDROM("cdrom");
String _type;
deviceType(String type) {
_type = type;
}
@Override
public String toString() {
return _type;
}
}
enum diskType {
FILE("file"),
BLOCK("block"),
DIRECTROY("dir");
String _diskType;
diskType(String type) {
_diskType = type;
}
@Override
public String toString() {
return _diskType;
}
}
enum diskBus {
IDE("ide"),
SCSI("scsi"),
VIRTIO("virtio"),
XEN("xen"),
USB("usb"),
UML("uml"),
FDC("fdc");
String _bus;
diskBus(String bus) {
_bus = bus;
}
@Override
public String toString() {
return _bus;
}
}
enum diskFmtType {
RAW("raw"),
QCOW2("qcow2");
String _fmtType;
diskFmtType(String fmt) {
_fmtType = fmt;
}
@Override
public String toString() {
return _fmtType;
}
}
private deviceType _deviceType; /*floppy, disk, cdrom*/
private diskType _diskType;
private String _sourcePath;
private String _diskLabel;
private diskBus _bus;
private diskFmtType _diskFmtType; /*qcow2, raw etc.*/
private boolean _readonly = false;
private boolean _shareable = false;
private boolean _deferAttach = false;
public void setDeviceType(deviceType deviceType) {
_deviceType = deviceType;
}
public void defFileBasedDisk(String filePath, String diskLabel, diskBus bus, diskFmtType diskFmtType) {
_diskType = diskType.FILE;
_deviceType = deviceType.DISK;
_sourcePath = filePath;
_diskLabel = diskLabel;
_diskFmtType = diskFmtType;
_bus = bus;
}
public void defBlockBasedDisk(String diskName, String diskLabel, diskBus bus) {
_diskType = diskType.BLOCK;
_deviceType = deviceType.DISK;
_sourcePath = diskName;
_diskLabel = diskLabel;
_bus = bus;
}
public void setReadonly() {
_readonly = true;
}
public void setSharable() {
_shareable = true;
}
public void setAttachDeferred(boolean deferAttach) {
_deferAttach = deferAttach;
}
public boolean isAttachDeferred() {
return _deferAttach;
}
public String getDiskPath() {
return _sourcePath;
}
public String getDiskLabel() {
return _diskLabel;
}
@Override
public String toString() {
StringBuilder diskBuilder = new StringBuilder();
diskBuilder.append("<disk ");
if (_deviceType != null) {
diskBuilder.append(" device='" + _deviceType + "'");
}
diskBuilder.append(" type='" + _diskType + "'");
diskBuilder.append(">\n");
diskBuilder.append("<driver name='qemu'" + " type='" + _diskFmtType + "'/>\n");
if (_diskType == diskType.FILE) {
diskBuilder.append("<source ");
if (_sourcePath != null) {
diskBuilder.append("file='" + _sourcePath + "'");
} else if (_deviceType == deviceType.CDROM) {
diskBuilder.append("file=''");
}
diskBuilder.append("/>\n");
} else if (_diskType == diskType.BLOCK) {
diskBuilder.append("<source");
if (_sourcePath != null) {
diskBuilder.append(" dev='" + _sourcePath + "'");
}
diskBuilder.append("/>\n");
}
diskBuilder.append("<target dev='" + _diskLabel + "'");
if (_bus != null) {
diskBuilder.append(" bus='" + _bus + "'");
}
diskBuilder.append("/>\n");
diskBuilder.append("</disk>\n");
return diskBuilder.toString();
}
}
public static class interfaceDef {
enum guestNetType {
BRIDGE("bridge"),
NETWORK("network"),
USER("user"),
ETHERNET("ethernet"),
INTERNAL("internal");
String _type;
guestNetType(String type) {
_type = type;
}
@Override
public String toString() {
return _type;
}
}
enum nicModel {
E1000("e1000"),
VIRTIO("virtio"),
RTL8139("rtl8139"),
NE2KPCI("ne2k_pci");
String _model;
nicModel(String model) {
_model = model;
}
@Override
public String toString() {
return _model;
}
}
enum hostNicType {
DIRECT_ATTACHED_WITHOUT_DHCP,
DIRECT_ATTACHED_WITH_DHCP,
VNET,
VLAN;
}
private guestNetType _netType; /*bridge, ethernet, network, user, internal*/
private hostNicType _hostNetType; /*Only used by agent java code*/
private String _sourceName;
private String _networkName;
private String _macAddr;
private String _ipAddr;
private String _scriptPath;
private nicModel _model;
public void defBridgeNet(String brName, String targetBrName, String macAddr, nicModel model) {
_netType = guestNetType.BRIDGE;
_sourceName = brName;
_networkName = targetBrName;
_macAddr = macAddr;
_model = model;
}
public void defPrivateNet(String networkName, String targetName, String macAddr, nicModel model) {
_netType = guestNetType.NETWORK;
_sourceName = networkName;
_networkName = targetName;
_macAddr = macAddr;
_model = model;
}
public void setHostNetType(hostNicType hostNetType) {
_hostNetType = hostNetType;
}
public hostNicType getHostNetType() {
return _hostNetType;
}
public String getBrName() {
return _sourceName;
}
public guestNetType getNetType() {
return _netType;
}
@Override
public String toString() {
StringBuilder netBuilder = new StringBuilder();
netBuilder.append("<interface type='" + _netType +"'>\n");
if (_netType == guestNetType.BRIDGE) {
netBuilder.append("<source bridge='" + _sourceName +"'/>\n");
} else if (_netType == guestNetType.NETWORK) {
netBuilder.append("<source network='" + _sourceName +"'/>\n");
}
if (_networkName !=null) {
netBuilder.append("<target dev='" + _networkName + "'/>\n");
}
if (_macAddr !=null) {
netBuilder.append("<mac address='" + _macAddr + "'/>\n");
}
if (_model !=null) {
netBuilder.append("<model type='" + _model + "'/>\n");
}
netBuilder.append("</interface>\n");
return netBuilder.toString();
}
}
public static class consoleDef {
private final String _ttyPath;
private final String _type;
private final String _source;
private short _port = -1;
public consoleDef(String type, String path, String source, short port) {
_type = type;
_ttyPath = path;
_source = source;
_port = port;
}
@Override
public String toString() {
StringBuilder consoleBuilder = new StringBuilder();
consoleBuilder.append("<console ");
consoleBuilder.append("type='" + _type + "'");
if (_ttyPath != null) {
consoleBuilder.append("tty='" + _ttyPath + "'");
}
consoleBuilder.append(">\n");
if (_source != null) {
consoleBuilder.append("<source path='" + _source + "'/>\n");
}
if (_port != -1) {
consoleBuilder.append("<target port='" + _port + "'/>\n");
}
consoleBuilder.append("</console>\n");
return consoleBuilder.toString();
}
}
public static class serialDef {
private final String _type;
private final String _source;
private short _port = -1;
public serialDef(String type, String source, short port) {
_type = type;
_source = source;
_port = port;
}
@Override
public String toString() {
StringBuilder serialBuilder = new StringBuilder();
serialBuilder.append("<serial type='" + _type + "'>\n");
if (_source != null) {
serialBuilder.append("<source path='" + _source + "'/>\n");
}
if (_port != -1) {
serialBuilder.append("<target port='" + _port + "'/>\n");
}
serialBuilder.append("</serial>\n");
return serialBuilder.toString();
}
}
public static class graphicDef {
private final String _type;
private short _port = -2;
private boolean _autoPort = false;
private final String _listenAddr;
private final String _passwd;
private final String _keyMap;
public graphicDef(String type, short port, boolean autoPort, String listenAddr, String passwd, String keyMap) {
_type = type;
_port = port;
_autoPort = autoPort;
_listenAddr = listenAddr;
_passwd = passwd;
_keyMap = keyMap;
}
@Override
public String toString() {
StringBuilder graphicBuilder = new StringBuilder();
graphicBuilder.append("<graphics type='" + _type + "'");
if (_autoPort) {
graphicBuilder.append(" autoport='yes'");
} else if (_port != -2){
graphicBuilder.append(" port='" + _port + "'");
}
if (_listenAddr != null) {
graphicBuilder.append(" listen='" + _listenAddr + "'");
} else {
graphicBuilder.append(" listen='' ");
}
if (_passwd != null) {
graphicBuilder.append(" passwd='" + _passwd + "'");
} else if (_keyMap != null) {
graphicBuilder.append(" _keymap='" + _keyMap + "'");
}
graphicBuilder.append("/>\n");
return graphicBuilder.toString();
}
}
public static class inputDef {
private final String _type; /*tablet, mouse*/
private final String _bus; /*ps2, usb, xen*/
public inputDef(String type, String bus) {
_type = type;
_bus = bus;
}
@Override
public String toString() {
StringBuilder inputBuilder = new StringBuilder();
inputBuilder.append("<input type='" + _type + "'");
if (_bus != null) {
inputBuilder.append(" bus='" + _bus + "'");
}
inputBuilder.append("/>\n");
return inputBuilder.toString();
}
}
public void setHvsType(String hvs) {
_hvsType = hvs;
}
public void setDomainName(String domainName) {
_domName = domainName;
}
public void setDomUUID(String uuid) {
_domUUID = uuid;
}
public void setDomDescription(String desc) {
_desc = desc;
}
public boolean addComp(Object comp) {
return components.add(comp);
}
@Override
public String toString() {
StringBuilder vmBuilder = new StringBuilder();
vmBuilder.append("<domain type='" + _hvsType + "'>\n");
vmBuilder.append("<name>" + _domName + "</name>\n");
if (_domUUID != null) {
vmBuilder.append("<uuid>" + _domUUID + "</uuid>\n");
}
if (_desc != null ) {
vmBuilder.append("<description>" + _desc + "</description>\n");
}
for (Object o : components) {
vmBuilder.append(o.toString());
}
vmBuilder.append("</domain>\n");
return vmBuilder.toString();
}
public static void main(String [] args){
System.out.println("testing");
LibvirtVMDef vm = new LibvirtVMDef();
vm.setHvsType("kvm");
vm.setDomainName("testing");
vm.setDomUUID(UUID.randomUUID().toString());
guestDef guest = new guestDef();
guest.setGuestType(guestDef.guestType.KVM);
guest.setGuestArch("x86_64");
guest.setMachineType("pc-0.11");
guest.setBootOrder(guestDef.bootOrder.HARDISK);
vm.addComp(guest);
guestResourceDef grd = new guestResourceDef();
grd.setMemorySize(512*1024);
grd.setVcpuNum(1);
vm.addComp(grd);
featuresDef features = new featuresDef();
features.addFeatures("pae");
features.addFeatures("apic");
features.addFeatures("acpi");
vm.addComp(features);
termPolicy term = new termPolicy();
term.setCrashPolicy("destroy");
term.setPowerOffPolicy("destroy");
term.setRebootPolicy("destroy");
vm.addComp(term);
devicesDef devices = new devicesDef();
devices.setEmulatorPath("/usr/bin/cloud-qemu-system-x86_64");
diskDef hda = new diskDef();
hda.defFileBasedDisk("/path/to/hda1", "hda", diskDef.diskBus.IDE, diskDef.diskFmtType.QCOW2);
devices.addDevice(hda);
diskDef hdb = new diskDef();
hdb.defFileBasedDisk("/path/to/hda2", "hdb", diskDef.diskBus.IDE, diskDef.diskFmtType.QCOW2);
devices.addDevice(hdb);
interfaceDef pubNic = new interfaceDef();
pubNic.defBridgeNet("cloudbr0", "vnet1", "00:16:3e:77:e2:a1", interfaceDef.nicModel.VIRTIO);
devices.addDevice(pubNic);
interfaceDef privNic = new interfaceDef();
privNic.defPrivateNet("cloud-private", null, "00:16:3e:77:e2:a2", interfaceDef.nicModel.VIRTIO);
devices.addDevice(privNic);
interfaceDef vlanNic = new interfaceDef();
vlanNic.defBridgeNet("vnbr1000", "tap1", "00:16:3e:77:e2:a2", interfaceDef.nicModel.VIRTIO);
devices.addDevice(vlanNic);
serialDef serial = new serialDef("pty", null, (short)0);
devices.addDevice(serial);
consoleDef console = new consoleDef("pty", null, null, (short)0);
devices.addDevice(console);
graphicDef grap = new graphicDef("vnc", (short)0, true, null, null, null);
devices.addDevice(grap);
inputDef input = new inputDef("tablet", "usb");
devices.addDevice(input);
vm.addComp(devices);
System.out.println(vm.toString());
}
}

View File

@ -0,0 +1,85 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.resource.computing;
import java.io.IOException;
import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.apache.log4j.Logger;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;
public class LibvirtXMLParser extends DefaultHandler{
private static final Logger s_logger = Logger.getLogger(LibvirtXMLParser.class);
protected static SAXParserFactory s_spf;
static {
s_spf = SAXParserFactory.newInstance();
}
protected SAXParser _sp;
protected boolean _initialized = false;
public LibvirtXMLParser(){
try {
_sp = s_spf.newSAXParser();
_initialized = true;
} catch (Exception ex) {
s_logger.error("Failed to initialize the SAX parser for libvirt XML parsing", ex);
}
}
public boolean parseDomainXML(String domXML) {
if (!_initialized){
return false;
}
try {
_sp.parse(new InputSource(new StringReader(domXML)), this);
return true;
}catch(SAXException se) {
s_logger.warn(se.getMessage());
}catch (IOException ie) {
s_logger.error(ie.getMessage());
}
return false;
}
@Override
public void characters(char[] ch, int start, int length) throws SAXException {
}
}

View File

@ -0,0 +1,353 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.resource.consoleproxy;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLConnection;
import java.util.Map;
import java.util.Properties;
import javax.naming.ConfigurationException;
import org.apache.log4j.Logger;
import com.cloud.agent.Agent.ExitStatus;
import com.cloud.agent.api.AgentControlAnswer;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.CheckHealthAnswer;
import com.cloud.agent.api.CheckHealthCommand;
import com.cloud.agent.api.Command;
import com.cloud.agent.api.ConsoleAccessAuthenticationAnswer;
import com.cloud.agent.api.ConsoleAccessAuthenticationCommand;
import com.cloud.agent.api.ConsoleProxyLoadReportCommand;
import com.cloud.agent.api.PingCommand;
import com.cloud.agent.api.ReadyAnswer;
import com.cloud.agent.api.ReadyCommand;
import com.cloud.agent.api.StartupCommand;
import com.cloud.agent.api.StartupProxyCommand;
import com.cloud.agent.api.proxy.CheckConsoleProxyLoadCommand;
import com.cloud.agent.api.proxy.ConsoleProxyLoadAnswer;
import com.cloud.agent.api.proxy.WatchConsoleProxyLoadCommand;
import com.cloud.exception.AgentControlChannelException;
import com.cloud.host.Host;
import com.cloud.host.Host.Type;
import com.cloud.resource.ServerResource;
import com.cloud.resource.ServerResourceBase;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.net.NetUtils;
import com.cloud.utils.script.Script;
/**
*
* I don't want to introduce extra cross-cutting concerns into the console proxy process, as it involves configuration like
* zone/pod and agent auto self-upgrade. I also don't want to introduce more module dependency issues into our build system,
* so cross-communication between this resource and the console proxy is done through reflection. The following solution
* therefore builds a communication channel between the console proxy and the management server.
*
* We will deploy an agent shell inside the console proxy VM; that agent shell launches the console proxy from within
* this special server resource, and through it the console proxy can build a communication channel with the management server.
*
*/
public class ConsoleProxyResource extends ServerResourceBase implements ServerResource {
static final Logger s_logger = Logger.getLogger(ConsoleProxyResource.class);
private final Properties _properties = new Properties();
private Thread _consoleProxyMain = null;
long _proxyVmId;
int _proxyPort;
String _localgw;
String _eth1ip;
String _eth1mask;
@Override
public Answer executeRequest(final Command cmd) {
if (cmd instanceof CheckConsoleProxyLoadCommand) {
return execute((CheckConsoleProxyLoadCommand)cmd);
} else if(cmd instanceof WatchConsoleProxyLoadCommand) {
return execute((WatchConsoleProxyLoadCommand)cmd);
} else if (cmd instanceof ReadyCommand) {
return new ReadyAnswer((ReadyCommand)cmd);
} else if(cmd instanceof CheckHealthCommand) {
return new CheckHealthAnswer((CheckHealthCommand)cmd, true);
} else {
return Answer.createUnsupportedCommandAnswer(cmd);
}
}
protected Answer execute(final CheckConsoleProxyLoadCommand cmd) {
return executeProxyLoadScan(cmd, cmd.getProxyVmId(), cmd.getProxyVmName(), cmd.getProxyManagementIp(), cmd.getProxyCmdPort());
}
protected Answer execute(final WatchConsoleProxyLoadCommand cmd) {
return executeProxyLoadScan(cmd, cmd.getProxyVmId(), cmd.getProxyVmName(), cmd.getProxyManagementIp(), cmd.getProxyCmdPort());
}
private Answer executeProxyLoadScan(final Command cmd, final long proxyVmId, final String proxyVmName, final String proxyManagementIp, final int cmdPort) {
String result = null;
final StringBuffer sb = new StringBuffer();
sb.append("http://").append(proxyManagementIp).append(":" + cmdPort).append("/cmd/getstatus");
boolean success = true;
try {
final URL url = new URL(sb.toString());
final URLConnection conn = url.openConnection();
final InputStream is = conn.getInputStream();
final BufferedReader reader = new BufferedReader(new InputStreamReader(is));
final StringBuilder sb2 = new StringBuilder();
String line = null;
try {
while ((line = reader.readLine()) != null)
sb2.append(line + "\n");
result = sb2.toString();
} catch (final IOException e) {
success = false;
} finally {
try {
is.close();
} catch (final IOException e) {
s_logger.warn("Exception when closing , console proxy address : " + proxyManagementIp);
success = false;
}
}
} catch(final IOException e) {
s_logger.warn("Unable to open console proxy command port url, console proxy address : " + proxyManagementIp);
success = false;
}
return new ConsoleProxyLoadAnswer(cmd, proxyVmId, proxyVmName, success, result);
}
@Override
protected String getDefaultScriptsDir() {
return null;
}
public Type getType() {
return Host.Type.ConsoleProxy;
}
@Override
public synchronized StartupCommand [] initialize() {
final StartupProxyCommand cmd = new StartupProxyCommand();
fillNetworkInformation(cmd);
cmd.setProxyPort(_proxyPort);
cmd.setProxyVmId(_proxyVmId);
return new StartupCommand[] {cmd};
}
@Override
public void disconnected() {
}
@Override
public PingCommand getCurrentStatus(long id) {
return new PingCommand(Type.ConsoleProxy, id);
}
@Override
public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
_localgw = (String)params.get("localgw");
_eth1mask = (String)params.get("eth1mask");
_eth1ip = (String)params.get("eth1ip");
if (_eth1ip != null) {
params.put("private.network.device", "eth1");
} else {
s_logger.warn("WARNING: eth1ip parameter is not found!");
}
String eth2ip = (String) params.get("eth2ip");
if (eth2ip != null) {
params.put("public.network.device", "eth2");
} else {
s_logger.warn("WARNING: eth2ip parameter is not found!");
}
super.configure(name, params);
for(Map.Entry<String, Object> entry : params.entrySet()) {
_properties.put(entry.getKey(), entry.getValue());
}
String value = (String)params.get("premium");
if(value != null && value.equals("premium"))
_proxyPort = 443;
else {
value = (String)params.get("consoleproxy.httpListenPort");
_proxyPort = NumbersUtil.parseInt(value, 80);
}
value = (String)params.get("proxy_vm");
_proxyVmId = NumbersUtil.parseLong(value, 0);
if (_localgw != null) {
String internalDns1 = (String)params.get("dns1");
String internalDns2 = (String)params.get("dns2");
if (internalDns1 == null) {
s_logger.warn("No DNS entry found during configuration of NfsSecondaryStorage");
} else {
addRouteToInternalIpOrCidr(_localgw, _eth1ip, _eth1mask, internalDns1);
}
String mgmtHost = (String)params.get("host");
addRouteToInternalIpOrCidr(_localgw, _eth1ip, _eth1mask, mgmtHost);
if (internalDns2 != null) {
addRouteToInternalIpOrCidr(_localgw, _eth1ip, _eth1mask, internalDns2);
}
}
if(s_logger.isInfoEnabled())
s_logger.info("Receive proxyVmId in ConsoleProxyResource configuration as " + _proxyVmId);
launchConsoleProxy();
return true;
}
private void addRouteToInternalIpOrCidr(String localgw, String eth1ip, String eth1mask, String destIpOrCidr) {
s_logger.debug("addRouteToInternalIp: localgw=" + localgw + ", eth1ip=" + eth1ip + ", eth1mask=" + eth1mask + ",destIp=" + destIpOrCidr);
if (destIpOrCidr == null) {
s_logger.debug("addRouteToInternalIp: destIp is null");
return;
}
if (!NetUtils.isValidIp(destIpOrCidr) && !NetUtils.isValidCIDR(destIpOrCidr)){
s_logger.warn(" destIp is not a valid ip address or cidr destIp=" + destIpOrCidr);
return;
}
boolean inSameSubnet = false;
if (NetUtils.isValidIp(destIpOrCidr)) {
if (eth1ip != null && eth1mask != null) {
inSameSubnet = NetUtils.sameSubnet(eth1ip, destIpOrCidr, eth1mask);
} else {
s_logger.warn("addRouteToInternalIp: unable to determine same subnet: _eth1ip=" + eth1ip + ", dest ip=" + destIpOrCidr + ", _eth1mask=" + eth1mask);
}
} else {
inSameSubnet = NetUtils.isNetworkAWithinNetworkB(destIpOrCidr, NetUtils.ipAndNetMaskToCidr(eth1ip, eth1mask));
}
if (inSameSubnet) {
s_logger.debug("addRouteToInternalIp: dest ip " + destIpOrCidr + " is in the same subnet as eth1 ip " + eth1ip);
return;
}
Script command = new Script("/bin/bash", s_logger);
command.add("-c");
command.add("ip route delete " + destIpOrCidr);
command.execute();
command = new Script("/bin/bash", s_logger);
command.add("-c");
command.add("ip route add " + destIpOrCidr + " via " + localgw);
String result = command.execute();
if (result != null) {
s_logger.warn("Error in configuring route to internal ip err=" + result );
} else {
s_logger.debug("addRouteToInternalIp: added route to internal ip=" + destIpOrCidr + " via " + localgw);
}
}
@Override
public String getName() {
return _name;
}
private void launchConsoleProxy() {
final Object resource = this;
_consoleProxyMain = new Thread(new Runnable() {
public void run() {
try {
Class<?> consoleProxyClazz = Class.forName("com.cloud.consoleproxy.ConsoleProxy");
try {
Method method = consoleProxyClazz.getMethod("startWithContext", Properties.class, Object.class);
method.invoke(null, _properties, resource);
} catch (SecurityException e) {
s_logger.error("Unable to launch console proxy due to SecurityException");
System.exit(ExitStatus.Error.value());
} catch (NoSuchMethodException e) {
s_logger.error("Unable to launch console proxy due to NoSuchMethodException");
System.exit(ExitStatus.Error.value());
} catch (IllegalArgumentException e) {
s_logger.error("Unable to launch console proxy due to IllegalArgumentException");
System.exit(ExitStatus.Error.value());
} catch (IllegalAccessException e) {
s_logger.error("Unable to launch console proxy due to IllegalAccessException");
System.exit(ExitStatus.Error.value());
} catch (InvocationTargetException e) {
s_logger.error("Unable to launch console proxy due to InvocationTargetException");
System.exit(ExitStatus.Error.value());
}
} catch (final ClassNotFoundException e) {
s_logger.error("Unable to launch console proxy due to ClassNotFoundException");
System.exit(ExitStatus.Error.value());
}
}
}, "Console-Proxy-Main");
_consoleProxyMain.setDaemon(true);
_consoleProxyMain.start();
}
public boolean authenticateConsoleAccess(String vmId, String sid) {
ConsoleAccessAuthenticationCommand cmd = new ConsoleAccessAuthenticationCommand(vmId, sid);
try {
AgentControlAnswer answer = getAgentControl().sendRequest(cmd, 10000);
if(answer != null) {
return ((ConsoleAccessAuthenticationAnswer)answer).succeeded();
} else {
s_logger.error("Authentication failed for vm: " + vmId + " with sid: " + sid);
}
} catch (AgentControlChannelException e) {
s_logger.error("Unable to send out console access authentication request due to " + e.getMessage(), e);
}
return false;
}
public void reportLoadInfo(String gsonLoadInfo) {
ConsoleProxyLoadReportCommand cmd = new ConsoleProxyLoadReportCommand(_proxyVmId, gsonLoadInfo);
try {
getAgentControl().postRequest(cmd);
if(s_logger.isDebugEnabled())
s_logger.debug("Report proxy load info, proxy : " + _proxyVmId + ", load: " + gsonLoadInfo);
} catch (AgentControlChannelException e) {
s_logger.error("Unable to send out load info due to " + e.getMessage(), e);
}
}
public void ensureRoute(String address) {
if(_localgw != null) {
if(s_logger.isDebugEnabled())
s_logger.debug("Ensure route for " + address + " via " + _localgw);
// this method won't be called at high frequency, so serialize access to script execution
synchronized(this) {
addRouteToInternalIpOrCidr(_localgw, _eth1ip, _eth1mask, address);
}
}
}
}
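The class comment and launchConsoleProxy() above pin down the reflective contract this resource expects from the proxy side: a static startWithContext(Properties, Object) method on com.cloud.consoleproxy.ConsoleProxy. The sketch below shows only what that counterpart has to look like; the body is illustrative, and the real ConsoleProxy class lives in the console proxy component, not in this file.

    package com.cloud.consoleproxy;

    import java.util.Properties;

    // Sketch of the entry point that ConsoleProxyResource.launchConsoleProxy() resolves via reflection.
    // Only the signature is dictated by the resource; everything in the body is illustrative.
    public class ConsoleProxy {
        public static void startWithContext(Properties properties, Object resource) {
            // `resource` is the ConsoleProxyResource instance; its public callbacks
            // (authenticateConsoleAccess, reportLoadInfo, ensureRoute) are what the
            // proxy is expected to call back into, presumably also via reflection.
            System.out.println("Console proxy starting with " + properties.size() + " properties");
        }
    }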

View File

@ -0,0 +1,224 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.resource.storage;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import javax.naming.ConfigurationException;
import org.apache.log4j.Logger;
import com.cloud.resource.DiskPreparer;
import com.cloud.storage.Volume;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.VirtualMachineTemplate.BootloaderType;
import com.cloud.storage.Volume.VolumeType;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.script.Script;
public class IscsiMountPreparer implements DiskPreparer {
private static final Logger s_logger = Logger.getLogger(IscsiMountPreparer.class);
private String _name;
private String _mountvmPath;
private String _mountRootdiskPath;
private String _mountDatadiskPath;
protected String _mountParent;
protected int _mountTimeout;
@Override
public String mount(String vmName, VolumeVO vol, BootloaderType type) {
return null;
}
@Override
public boolean unmount(String path) {
return false;
}
protected static VolumeVO findVolume(final List<VolumeVO> volumes, final Volume.VolumeType vType) {
if (volumes == null) return null;
for (final VolumeVO v: volumes) {
if (v.getVolumeType() == vType)
return v;
}
return null;
}
protected static List<VolumeVO> findVolumes(final List<VolumeVO> volumes, final Volume.VolumeType vType) {
if (volumes == null) return null;
final List<VolumeVO> result = new ArrayList<VolumeVO>();
for (final VolumeVO v: volumes) {
if (v.getVolumeType() == vType)
result.add(v);
}
return result;
}
protected static VolumeVO findVolume(final List<VolumeVO> volumes, final Volume.VolumeType vType, final String storageHost) {
if (volumes == null) return null;
for (final VolumeVO v: volumes) {
if ((v.getVolumeType() == vType) && (v.getHostIp().equalsIgnoreCase(storageHost)))
return v;
}
return null;
}
protected static boolean mirroredVolumes(final List<VolumeVO> vols, final Volume.VolumeType vType) {
final List<VolumeVO> volumes = findVolumes(vols, vType);
return volumes != null && volumes.size() > 1;
}
public synchronized String mountImage(final String host, final String dest, final String vmName, final List<VolumeVO> volumes, final BootloaderType bootloader) {
final Script command = new Script(_mountvmPath, _mountTimeout, s_logger);
command.add("-h", host);
command.add("-l", dest);
command.add("-n", vmName);
command.add("-b", bootloader.toString());
command.add("-t");
final VolumeVO root = findVolume(volumes, Volume.VolumeType.ROOT);
if (root == null) {
return null;
}
command.add(root.getIscsiName());
command.add("-r", root.getFolder());
final VolumeVO swap = findVolume(volumes, Volume.VolumeType.SWAP);
if (swap != null && swap.getIscsiName() != null) {
command.add("-w", swap.getIscsiName());
}
final VolumeVO datadsk = findVolume(volumes, Volume.VolumeType.DATADISK);
if (datadsk != null && datadsk.getIscsiName() != null) {
command.add("-1", datadsk.getIscsiName());
}
return command.execute();
}
public synchronized String mountImage(final String storageHosts[], final String dest, final String vmName, final List<VolumeVO> volumes, final boolean mirroredVols, final BootloaderType booter) {
if (!mirroredVols) {
return mountImage(storageHosts[0], dest, vmName, volumes, booter);
} else {
return mountMirroredImage(storageHosts, dest, vmName, volumes, booter);
}
}
protected String mountMirroredImage(final String hosts[], final String dest, final String vmName, final List<VolumeVO> volumes, final BootloaderType booter) {
final List<VolumeVO> rootDisks = findVolumes(volumes, VolumeType.ROOT);
final String storIp0 = hosts[0];
final String storIp1 = hosts[1];
//mountrootdisk.sh -m -h $STORAGE0 -t $iqn0 -l $src -n $vmname -r $dest -M -H $STORAGE1 -T $iqn1
final Script command = new Script(_mountRootdiskPath, _mountTimeout, s_logger);
command.add("-m");
command.add("-M");
command.add("-h", storIp0);
command.add("-H", storIp1);
command.add("-l", dest);
command.add("-r", rootDisks.get(0).getFolder());
command.add("-n", vmName);
command.add("-t", rootDisks.get(0).getIscsiName());
command.add("-T", rootDisks.get(1).getIscsiName());
command.add("-b", booter.toString());
final List<VolumeVO> swapDisks = findVolumes(volumes, VolumeType.SWAP);
if (swapDisks.size() == 2) {
command.add("-w", swapDisks.get(0).getIscsiName());
command.add("-W", swapDisks.get(1).getIscsiName());
}
final String result = command.execute();
if (result == null){
final List<VolumeVO> dataDisks = findVolumes(volumes, VolumeType.DATADISK);
if (dataDisks.size() == 2) {
final Script mountdata = new Script(_mountDatadiskPath, _mountTimeout, s_logger);
mountdata.add("-m");
mountdata.add("-M");
mountdata.add("-h", storIp0);
mountdata.add("-H", storIp1);
mountdata.add("-n", vmName);
mountdata.add("-c", "1");
mountdata.add("-d", dataDisks.get(0).getIscsiName());
mountdata.add("-D", dataDisks.get(1).getIscsiName());
return mountdata.execute();
} else if (dataDisks.size() == 0){
return result;
}
}
return result;
}
@Override
public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
_name = name;
String scriptsDir = (String)params.get("mount.scripts.dir");
if (scriptsDir == null) {
scriptsDir = "scripts/vm/storage/iscsi/comstar/filebacked";
}
_mountDatadiskPath = Script.findScript(scriptsDir, "mountdatadisk.sh");
if (_mountDatadiskPath == null) {
throw new ConfigurationException("Unable to find mountdatadisk.sh");
}
s_logger.info("mountdatadisk.sh found in " + _mountDatadiskPath);
String value = (String)params.get("mount.script.timeout");
_mountTimeout = NumbersUtil.parseInt(value, 240) * 1000;
_mountvmPath = Script.findScript(scriptsDir, "mountvm.sh");
if (_mountvmPath == null) {
throw new ConfigurationException("Unable to find mountvm.sh");
}
s_logger.info("mountvm.sh found in " + _mountvmPath);
_mountRootdiskPath = Script.findScript(scriptsDir, "mountrootdisk.sh");
if (_mountRootdiskPath == null) {
throw new ConfigurationException("Unable to find mountrootdisk.sh");
}
s_logger.info("mountrootdisk.sh found in " + _mountRootdiskPath);
return true;
}
@Override
public String getName() {
return _name;
}
@Override
public boolean start() {
return true;
}
@Override
public boolean stop() {
return true;
}
}
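The preparer above is wired up through configure() before any mount call, using the "mount.scripts.dir" and "mount.script.timeout" parameters it reads itself. A minimal sketch of how a caller might exercise it follows; the class name and the values are placeholders of mine, and it assumes the CloudStack agent classes and the iSCSI mount scripts are present on the machine where it runs.

    import java.util.HashMap;
    import java.util.Map;

    import javax.naming.ConfigurationException;

    import com.cloud.agent.resource.storage.IscsiMountPreparer;

    public class IscsiMountPreparerExample {
        public static void main(String[] args) throws ConfigurationException {
            IscsiMountPreparer preparer = new IscsiMountPreparer();

            // Parameters normally come from the agent's configuration; the
            // script directory below is the same default configure() falls back to.
            Map<String, Object> params = new HashMap<String, Object>();
            params.put("mount.scripts.dir", "scripts/vm/storage/iscsi/comstar/filebacked");
            params.put("mount.script.timeout", "240"); // seconds; converted to ms internally

            // configure() locates mountvm.sh, mountrootdisk.sh and mountdatadisk.sh,
            // throwing ConfigurationException if any of them cannot be found.
            preparer.configure("IscsiMountPreparer", params);
            preparer.start();

            // A real caller would then invoke
            //   preparer.mountImage(host, dest, vmName, volumes, bootloader)
            // with the storage host, destination path and the VM's VolumeVO list.
        }
    }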


@ -0,0 +1,46 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent;
import java.io.File;
import java.io.IOException;
import org.apache.log4j.Logger;
import com.cloud.agent.AgentShell;
import com.cloud.utils.testcase.Log4jEnabledTestCase;
public class TestAgentShell extends Log4jEnabledTestCase {
protected final static Logger s_logger = Logger.getLogger(TestAgentShell.class);
public void testWget() {
File file = null;
try {
file = File.createTempFile("wget", ".html");
AgentShell.wget("http://www.google.com/", file);
if (s_logger.isDebugEnabled()) {
s_logger.debug("file saved to " + file.getAbsolutePath());
}
} catch (final IOException e) {
s_logger.warn("Exception while downloading agent update package, ", e);
}
}
}

api/.classpath Normal file

@ -0,0 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" path="src"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
<classpathentry combineaccessrules="false" kind="src" path="/utils"/>
<classpathentry kind="output" path="bin"/>
</classpath>

api/.project Normal file

@ -0,0 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>api</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.eclipse.jdt.core.javabuilder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.eclipse.jdt.core.javanature</nature>
</natures>
</projectDescription>


@ -0,0 +1,29 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.deploy;
public class DataCenterDeployment implements DeploymentStrategy {
long _dcId;
public DataCenterDeployment(long dataCenterId) {
_dcId = dataCenterId;
}
public long getDataCenterId() {
return _dcId;
}
}


@ -0,0 +1,26 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.deploy;
/**
* Describes how a VM should be deployed.
*
*/
public interface DeploymentStrategy {
}


@ -0,0 +1,45 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
/**
* This exception is thrown when the agent is unavailable to accept a
* command.
*
*/
public class AgentUnavailableException extends Exception {
private static final long serialVersionUID = SerialVersionUID.AgentUnavailableException;
long _agentId;
public AgentUnavailableException(String msg, long agentId) {
super("Host " + agentId + ": " + msg);
_agentId = agentId;
}
public AgentUnavailableException(long agentId) {
this("Unable to reach host.", agentId);
}
public long getAgentId() {
return _agentId;
}
}


@ -0,0 +1,29 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
public class ConcurrentOperationException extends Exception {
private static final long serialVersionUID = SerialVersionUID.ConcurrentOperationException;
public ConcurrentOperationException(String msg) {
super(msg);
}
}


@ -0,0 +1,33 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
public class DiscoveryException extends Exception {
private static final long serialVersionUID = SerialVersionUID.DiscoveryException;
public DiscoveryException(String msg) {
this(msg, null);
}
public DiscoveryException(String msg, Throwable cause) {
super(msg, cause);
}
}


@ -0,0 +1,35 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
/**
* This exception is thrown when a machine is in HA state and an operation,
* such as start or stop, is attempted on it. Machines that are in HA
* state need to be properly cleaned up before anything else can be
* done with them. Hence this special state.
*/
public class HAStateException extends ManagementServerException {
private static final long serialVersionUID = SerialVersionUID.HAStateException;
public HAStateException(String msg) {
super(msg);
}
}


@ -0,0 +1,38 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
/**
* Exception thrown when there are not enough IP addresses available in the system.
*/
public class InsufficientAddressCapacityException extends InsufficientCapacityException {
private static final long serialVersionUID = SerialVersionUID.InsufficientAddressCapacityException;
public InsufficientAddressCapacityException(String msg) {
super(msg);
}
protected InsufficientAddressCapacityException() {
super();
}
}


@ -0,0 +1,35 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
/**
* Generic parent exception class for capacity being reached.
*
*/
public abstract class InsufficientCapacityException extends Exception {
private static final long serialVersionUID = SerialVersionUID.InsufficientCapacityException;
protected InsufficientCapacityException() {
}
public InsufficientCapacityException(String msg) {
super(msg);
}
}


@ -0,0 +1,33 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
/**
* InsufficientStorageCapacityException is thrown when there is not enough
* storage space to create the VM.
*/
public class InsufficientStorageCapacityException extends InsufficientCapacityException {
private static final long serialVersionUID = SerialVersionUID.InsufficientStorageCapacityException;
public InsufficientStorageCapacityException(String msg) {
super(msg);
}
}


@ -0,0 +1,28 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
public class InsufficientVirtualNetworkCapcityException extends InsufficientCapacityException {
private static final long serialVersionUID = SerialVersionUID.InsufficientCapacityException;
public InsufficientVirtualNetworkCapcityException(String msg) {
super(msg);
}
}


@ -0,0 +1,32 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
public class InternalErrorException extends ManagementServerException {
private static final long serialVersionUID = -3070582946175427902L;
public InternalErrorException(String message) {
super(message);
}
}


@ -0,0 +1,34 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
/**
* @author chiradeep
*
*/
public class InvalidParameterValueException extends ManagementServerException {
private static final long serialVersionUID = -2232066904895010203L;
public InvalidParameterValueException(String message) {
super(message);
}
}


@ -0,0 +1,44 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
/**
* @author chiradeep
*
*/
public class ManagementServerException extends Exception {
private static final long serialVersionUID = SerialVersionUID.ManagementServerException;
public ManagementServerException() {
}
public ManagementServerException(String message) {
super(message);
}
}


@ -0,0 +1,30 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
public class NetworkRuleConflictException extends ManagementServerException {
private static final long serialVersionUID = -294905017911859479L;
public NetworkRuleConflictException(String message) {
super(message);
}
}


@ -0,0 +1,33 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
/**
* @author chiradeep
*
*/
public class PermissionDeniedException extends ManagementServerException {
private static final long serialVersionUID = -4631412831814398074L;
public PermissionDeniedException(String message) {
super(message);
}
}


@ -0,0 +1,38 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
public class ResourceAllocationException extends ManagementServerException {
private static final long serialVersionUID = -2232066904895010203L;
private String resourceType;
public ResourceAllocationException(String message) {
super(message);
}
public void setResourceType(String resourceType) {
this.resourceType = resourceType;
}
public String getResourceType() {
return this.resourceType;
}
}


@ -0,0 +1,56 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
/**
* @author chiradeep
*
*/
public class ResourceInUseException extends ManagementServerException {
private static final long serialVersionUID = 1383416910411639324L;
private String resourceType;
private String resourceName;
public ResourceInUseException(String message) {
super(message);
}
public ResourceInUseException(String message, String resourceType,
String resourceName) {
super(message);
this.resourceType = resourceType;
this.resourceName = resourceName;
}
public void setResourceType(String resourceType) {
this.resourceType = resourceType;
}
public String getResourceType() {
return this.resourceType;
}
public void setResourceName(String resourceName) {
this.resourceName = resourceName;
}
public String getResourceName() {
return resourceName;
}
}


@ -0,0 +1,37 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
/**
* This exception is thrown when the storage device can not be reached.
*
*/
public class StorageUnavailableException extends AgentUnavailableException {
private static final long serialVersionUID = SerialVersionUID.StorageUnavailableException;
public StorageUnavailableException(long hostId) {
super(hostId);
}
public StorageUnavailableException(String msg) {
super(msg, -1);
}
}
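Because StorageUnavailableException reuses AgentUnavailableException's host id, callers can handle both through the parent type. A small illustrative sketch follows; the example class and the attachVolume() method are hypothetical, not part of the committed sources.

    import com.cloud.exception.AgentUnavailableException;
    import com.cloud.exception.StorageUnavailableException;

    public class AgentExceptionExample {
        // Hypothetical operation that fails because a storage host cannot be reached.
        static void attachVolume(long hostId) throws AgentUnavailableException {
            throw new StorageUnavailableException(hostId);
        }

        public static void main(String[] args) {
            try {
                attachVolume(42L);
            } catch (AgentUnavailableException e) {
                // getAgentId() comes from the parent class; -1 is used when no host applies.
                System.out.println("Host " + e.getAgentId() + " unreachable: " + e.getMessage());
            }
        }
    }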


@ -0,0 +1,29 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.hypervisor;
public class Hypervisor {
public static enum Type {
None, //for storage hosts
Xen,
XenServer,
KVM;
}
}


@ -0,0 +1,61 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.network;
/**
* Network includes all of the enums used within networking.
*
*/
public class Network {
/**
* Different ways to assign ip address to this network.
*/
public enum Mode {
None,
Static,
Dhcp,
ExternalDhcp;
};
public enum AddressFormat {
Ip4,
Ip6
}
/**
* Different types of broadcast domains.
*/
public enum BroadcastDomainType {
Native,
Vlan,
Vswitch,
Vnet;
};
/**
* Different types of network traffic in the data center.
*/
public enum TrafficType {
Public,
Guest,
Storage,
LinkLocal,
Vpn,
Management
};
}
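These enums are used as simple tags describing a network; a standalone sketch of how they combine (the class name is mine and the combination shown is only illustrative):

    import com.cloud.network.Network;

    public class NetworkEnumExample {
        public static void main(String[] args) {
            Network.TrafficType traffic = Network.TrafficType.Guest;
            Network.BroadcastDomainType domain = Network.BroadcastDomainType.Vlan;
            Network.Mode mode = Network.Mode.Dhcp;

            // Guest traffic on a VLAN-isolated broadcast domain with DHCP-assigned addresses.
            System.out.println(traffic + " / " + domain + " / " + mode);
        }
    }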


@ -0,0 +1,44 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.offering;
/**
* Represents a disk offering, which describes what the end user
* requires of a disk.
*
*/
public interface DiskOffering {
long getId();
String getUniqueName();
boolean getUseLocalStorage();
Long getDomainId();
String getName();
String getDisplayText();
long getDiskSizeInBytes();
public String getTags();
public String[] getTagsArray();
}


@ -0,0 +1,63 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.offering;
/**
* Describes network offering
*
*/
public interface NetworkOffering {
public enum GuestIpType {
Virtualized,
DirectSingle,
DirectDual
}
long getId();
/**
* @return name for the network offering.
*/
String getName();
/**
* @return text to display to the end user.
*/
String getDisplayText();
/**
* @return the rate in megabits per second at which a VM's network interface is throttled
*/
Integer getRateMbps();
/**
* @return the rate in megabits per second at which a VM's multicast and broadcast traffic is throttled
*/
Integer getMulticastRateMbps();
/**
* @return the type of IP address to allocate as the primary ip address to a guest
*/
GuestIpType getGuestIpType();
/**
* @return concurrent connections to be supported.
*/
Integer getConcurrentConnections();
}


@ -0,0 +1,46 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.offering;
/**
*
* OfferingManager manages the different type of service offerings
* available to the administrators of the system.
*
*/
public interface OfferingManager {
/**
* Creates a service offering.
* @return ServiceOffering
*/
ServiceOffering createServiceOffering();
/**
* Creates a disk offering.
* @return DiskOffering
*/
DiskOffering createDiskOffering();
/**
* Creates a network offering.
* @return NetworkOffering
*/
NetworkOffering createNetworkOffering();
}


@ -0,0 +1,76 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.offering;
/**
* ServiceOffering models the different types of service contracts to be
* offered.
*/
public interface ServiceOffering {
public enum GuestIpType {
Virtualized,
DirectSingle,
DirectDual
}
/**
* @return user readable description
*/
String getName();
/**
* @return # of cpu.
*/
int getCpu();
/**
* @return speed in mhz
*/
int getSpeed();
/**
* @return ram size in megabytes
*/
int getRamSize();
/**
* @return Does this service plan offer HA?
*/
boolean getOfferHA();
/**
* @return the rate in megabits per second at which a VM's network interface is throttled
*/
int getRateMbps();
/**
* @return the rate in megabits per second at which a VM's multicast and broadcast traffic is throttled
*/
int getMulticastRateMbps();
/**
* @return the type of IP address to allocate as the primary ip address to a guest
*/
GuestIpType getGuestIpType();
/**
* @return whether or not the service offering requires local storage
*/
boolean getUseLocalStorage();
}


@ -0,0 +1,87 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.storage;
public class Storage {
public enum ImageFormat {
QCOW2(true, true, false),
RAW(false, false, false),
VHD(true, true, true),
ISO(false, false, false);
private final boolean thinProvisioned;
private final boolean supportSparse;
private final boolean supportSnapshot;
private ImageFormat(boolean thinProvisioned, boolean supportSparse, boolean supportSnapshot) {
this.thinProvisioned = thinProvisioned;
this.supportSparse = supportSparse;
this.supportSnapshot = supportSnapshot;
}
public boolean isThinProvisioned() {
return thinProvisioned;
}
public boolean supportsSparse() {
return supportSparse;
}
public boolean supportSnapshot() {
return supportSnapshot;
}
public String getFileExtension() {
return toString().toLowerCase();
}
}
public enum FileSystem {
Unknown,
ext3,
ntfs,
fat,
fat32,
ext2,
ext4,
cdfs,
hpfs,
ufs,
hfs,
hfsp
}
public enum StoragePoolType {
Filesystem(false), //local directory
NetworkFilesystem(true), //NFS or CIFS
IscsiLUN(true), //shared LUN, with a clusterfs overlay
Iscsi(true), //for e.g., ZFS Comstar
ISO(false), // for iso image
LVM(false); // XenServer local LVM SR
boolean shared;
StoragePoolType(boolean shared) {
this.shared = shared;
}
public boolean isShared() {
return shared;
}
}
}
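A short illustration of how these enums are typically consumed; this is a standalone sketch of mine, not code from the commit, and assumes only the Storage class above.

    import com.cloud.storage.Storage.ImageFormat;
    import com.cloud.storage.Storage.StoragePoolType;

    public class StorageEnumExample {
        public static void main(String[] args) {
            ImageFormat format = ImageFormat.QCOW2;
            System.out.println(format + " extension: " + format.getFileExtension());     // qcow2
            System.out.println(format + " thin provisioned: " + format.isThinProvisioned());

            StoragePoolType pool = StoragePoolType.NetworkFilesystem;
            System.out.println(pool + " shared: " + pool.isShared());                    // true
        }
    }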


@ -0,0 +1,71 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.uservm;
import com.cloud.vm.VirtualMachine;
/**
* This represents one running virtual machine instance.
*/
public interface UserVm extends VirtualMachine {
/**
* @return service offering id
*/
long getServiceOfferingId();
/**
* @return the domain router associated with this vm.
*/
Long getDomainRouterId();
/**
* @return the vnet associated with this vm.
*/
String getVnet();
/**
* @return the account this vm instance belongs to.
*/
long getAccountId();
/**
* @return the domain this vm instance belongs to.
*/
long getDomainId();
/**
* @return ip address within the guest network.
*/
String getGuestIpAddress();
/**
* @return mac address of the guest network.
*/
String getGuestMacAddress();
Long getIsoId();
String getDisplayName();
String getGroup();
String getUserData();
void setUserData(String userData);
}


@ -0,0 +1,50 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.vm;
/**
* Nic represents one nic on the VM.
*/
public interface Nic {
enum State {
AcquireIp,
IpAcquired,
}
State getState();
String getIp4Address();
String getMacAddress();
/**
* @return the id of the network profile that this nic is part of.
*/
long getNetworkProfileId();
/**
* @return the unique id to reference this nic.
*/
long getId();
/**
* @return the vm instance id that this nic belongs to.
*/
long getInstanceId();
}


@ -0,0 +1,99 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.vm;
import java.util.List;
import com.cloud.utils.fsm.StateMachine;
public enum State {
Creating(true),
Starting(true),
Running(false),
Stopping(true),
Stopped(false),
Destroyed(false),
Expunging(true),
Migrating(true),
Error(false),
Unknown(false);
private final boolean _transitional;
private State(boolean transitional) {
_transitional = transitional;
}
public boolean isTransitional() {
return _transitional;
}
public static String[] toStrings(State... states) {
String[] strs = new String[states.length];
for (int i = 0; i < states.length; i++) {
strs[i] = states[i].toString();
}
return strs;
}
public State getNextState(VirtualMachine.Event e) {
return s_fsm.getNextState(this, e);
}
public State[] getFromStates(VirtualMachine.Event e) {
List<State> from = s_fsm.getFromStates(this, e);
return from.toArray(new State[from.size()]);
}
protected static final StateMachine<State, VirtualMachine.Event> s_fsm = new StateMachine<State, VirtualMachine.Event>();
static {
s_fsm.addTransition(null, VirtualMachine.Event.CreateRequested, State.Creating);
s_fsm.addTransition(State.Creating, VirtualMachine.Event.OperationSucceeded, State.Stopped);
s_fsm.addTransition(State.Creating, VirtualMachine.Event.OperationFailed, State.Destroyed);
s_fsm.addTransition(State.Stopped, VirtualMachine.Event.StartRequested, State.Starting);
s_fsm.addTransition(State.Stopped, VirtualMachine.Event.DestroyRequested, State.Destroyed);
s_fsm.addTransition(State.Stopped, VirtualMachine.Event.StopRequested, State.Stopped);
s_fsm.addTransition(State.Stopped, VirtualMachine.Event.AgentReportStopped, State.Stopped);
s_fsm.addTransition(State.Starting, VirtualMachine.Event.OperationRetry, State.Starting);
s_fsm.addTransition(State.Starting, VirtualMachine.Event.OperationSucceeded, State.Running);
s_fsm.addTransition(State.Starting, VirtualMachine.Event.OperationFailed, State.Stopped);
s_fsm.addTransition(State.Starting, VirtualMachine.Event.AgentReportRunning, State.Running);
s_fsm.addTransition(State.Starting, VirtualMachine.Event.AgentReportStopped, State.Stopped);
s_fsm.addTransition(State.Destroyed, VirtualMachine.Event.RecoveryRequested, State.Stopped);
s_fsm.addTransition(State.Destroyed, VirtualMachine.Event.ExpungeOperation, State.Expunging);
s_fsm.addTransition(State.Creating, VirtualMachine.Event.MigrationRequested, State.Destroyed);
s_fsm.addTransition(State.Running, VirtualMachine.Event.MigrationRequested, State.Migrating);
s_fsm.addTransition(State.Running, VirtualMachine.Event.AgentReportRunning, State.Running);
s_fsm.addTransition(State.Running, VirtualMachine.Event.AgentReportStopped, State.Stopped);
s_fsm.addTransition(State.Running, VirtualMachine.Event.StopRequested, State.Stopping);
s_fsm.addTransition(State.Migrating, VirtualMachine.Event.MigrationRequested, State.Migrating);
s_fsm.addTransition(State.Migrating, VirtualMachine.Event.OperationSucceeded, State.Running);
s_fsm.addTransition(State.Migrating, VirtualMachine.Event.OperationFailed, State.Running);
s_fsm.addTransition(State.Migrating, VirtualMachine.Event.AgentReportRunning, State.Running);
s_fsm.addTransition(State.Migrating, VirtualMachine.Event.AgentReportStopped, State.Stopped);
s_fsm.addTransition(State.Stopping, VirtualMachine.Event.OperationSucceeded, State.Stopped);
s_fsm.addTransition(State.Stopping, VirtualMachine.Event.OperationFailed, State.Running);
s_fsm.addTransition(State.Stopping, VirtualMachine.Event.AgentReportRunning, State.Running);
s_fsm.addTransition(State.Stopping, VirtualMachine.Event.AgentReportStopped, State.Stopped);
s_fsm.addTransition(State.Stopping, VirtualMachine.Event.StopRequested, State.Stopping);
s_fsm.addTransition(State.Expunging, VirtualMachine.Event.OperationFailed, State.Expunging);
s_fsm.addTransition(State.Expunging, VirtualMachine.Event.ExpungeOperation, State.Expunging);
}
}
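The static transition table above drives every VM state change; callers ask the enum for the next state given an event. A minimal usage sketch follows; the class name is mine, and it assumes the com.cloud.vm and com.cloud.utils classes shown in this commit compile together.

    import com.cloud.vm.State;
    import com.cloud.vm.VirtualMachine;

    public class StateMachineExample {
        public static void main(String[] args) {
            // A stopped VM that receives a start request moves to Starting...
            State next = State.Stopped.getNextState(VirtualMachine.Event.StartRequested);
            System.out.println(next + " transitional? " + next.isTransitional()); // Starting transitional? true

            // ...and Starting resolves to Running once the operation succeeds.
            System.out.println(next.getNextState(VirtualMachine.Event.OperationSucceeded)); // Running
        }
    }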


@ -0,0 +1,124 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.vm;
import java.util.Date;
/**
* VirtualMachine describes the properties held by a virtual machine
*
*/
public interface VirtualMachine {
public enum Event {
CreateRequested,
StartRequested,
StopRequested,
DestroyRequested,
RecoveryRequested,
AgentReportStopped,
AgentReportRunning,
MigrationRequested,
ExpungeOperation,
OperationSucceeded,
OperationFailed,
OperationRetry,
OperationCancelled
};
public enum Type {
User,
DomainRouter,
ConsoleProxy,
SecondaryStorageVm
}
public String getInstanceName();
/**
* @return the id of this virtual machine.
*/
public long getId();
/**
* @return the name of the virtual machine.
*/
public String getName();
/**
* @return the ip address of the virtual machine.
*/
public String getPrivateIpAddress();
/**
* @return mac address.
*/
public String getPrivateMacAddress();
/**
* @return password of the host for vnc purposes.
*/
public String getVncPassword();
/**
* @return the state of the virtual machine
*/
public State getState();
/**
* @return template id.
*/
public long getTemplateId();
/**
* returns the guest OS ID
* @return guestOSId
*/
public long getGuestOSId();
/**
* @return pod id.
*/
public long getPodId();
/**
* @return data center id.
*/
public long getDataCenterId();
/**
* @return id of the host it is running on. If not running, returns null.
*/
public Long getHostId();
/**
* @return id of the host it was assigned last time.
*/
public Long getLastHostId();
/**
* @return should HA be enabled for this machine?
*/
public boolean isHaEnabled();
/**
* @return date when machine was created
*/
public Date getCreated();
Type getType();
}


@ -0,0 +1,68 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.vm;
import java.util.Map;
import com.cloud.hypervisor.Hypervisor;
public class VmCharacteristics {
int core;
int speed; // in mhz
long ram; // in bytes
Hypervisor.Type hypervisorType;
VirtualMachine.Type type;
Map<String, String> params;
public VmCharacteristics(VirtualMachine.Type type) {
this.type = type;
}
public VirtualMachine.Type getType() {
return type;
}
public VmCharacteristics() {
}
public int getCores() {
return core;
}
public int getSpeed() {
return speed;
}
public long getRam() {
return ram;
}
public Hypervisor.Type getHypervisorType() {
return hypervisorType;
}
public VmCharacteristics(int core, int speed, long ram, Hypervisor.Type type, Map<String, String> params) {
this.core = core;
this.speed = speed;
this.ram = ram;
this.hypervisorType = type;
this.params = params;
}
}
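VmCharacteristics is a plain value object handed to the deployment code; a minimal construction sketch with illustrative numbers (the class name and values are mine, not taken from a real offering):

    import java.util.HashMap;
    import java.util.Map;

    import com.cloud.hypervisor.Hypervisor;
    import com.cloud.vm.VmCharacteristics;

    public class VmCharacteristicsExample {
        public static void main(String[] args) {
            Map<String, String> params = new HashMap<String, String>();

            // 1 core at 1000 MHz with 512 MB of RAM (ram is expressed in bytes).
            VmCharacteristics spec = new VmCharacteristics(
                    1, 1000, 512L * 1024 * 1024, Hypervisor.Type.XenServer, params);

            System.out.println(spec.getCores() + " core(s) @ " + spec.getSpeed() + " MHz, "
                    + spec.getRam() + " bytes RAM on " + spec.getHypervisorType());
        }
    }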

build.xml Executable file

@ -0,0 +1,78 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2008 Cloud.Com Inc. All Rights Reserved -->
<project name="Cloud.com Cloud Stack Build Dispatch" default="help" basedir=".">
<description>
This is the overall dispatch file. It includes other build
files but does not provide targets of its own. Do not modify
this file. If you need to create your own targets, modify the
developer.xml.
</description>
<dirname property="base.dir" file="${ant.file.Cloud.com Cloud Stack Build Dispatch}"/>
<property name="build.dir" location="${base.dir}/build"/>
<condition property="build-cloud.properties.file" value="${build.dir}/override/build-cloud.properties" else="${build.dir}/build-cloud.properties">
<available file="${build.dir}/override/build-cloud.properties" />
</condition>
<property file="${build-cloud.properties.file}"/>
<property name="dist.dir" location="${base.dir}/dist"/>
<property name="target.dir" location="${base.dir}/target"/>
<condition property="build.file" value="premium/build-cloud-premium.xml" else="build-cloud.xml">
<and>
<available file="build/premium/build-cloud-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<condition property="package.file" value="premium/package-premium.xml" else="package.xml">
<and>
<available file="build/premium/package-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<condition property="developer.file" value="premium/developer-premium.xml" else="developer.xml">
<and>
<available file="build/premium/developer-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<condition property="docs.file" value="premium/build-docs-premium.xml" else="build-docs.xml">
<and>
<available file="build/premium/build-docs-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<condition property="test.file" value="premium/build-tests-premium.xml" else="build-tests.xml">
<and>
<available file="build/premium/build-tests-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<import file="${base.dir}/plugins/zynga/build.xml" optional='true'/>
<import file="${build.dir}/${build.file}" optional="false"/>
<import file="${build.dir}/${docs.file}" optional="true"/>
<import file="${build.dir}/${test.file}" optional="true"/>
<import file="${build.dir}/${package.file}" optional="true"/>
<import file="${build.dir}/${developer.file}" optional="true"/>
</project>

build/build-cloud.properties Executable file

@ -0,0 +1,40 @@
debug=true
debuglevel=lines,source,vars
debug.jvmarg=-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n
deprecation=off
build.type=developer
target.compat.version=1.6
source.compat.version=1.6
branding.name=default
#
# Set your own default instance for server-server-dev.xml
#
#default.zone=KY
#default.instance=KY
#
# Set your own log directory
# for production build set logdir=/var/log/vmops
#
#logdir=logs
#
# Set your own KVM developer.properties values
#
#private.macaddr.start=00:16:3e:77:03:01
#private.ipaddr.start=192.168.168.128
#
# Set your own agent.properties values
#
#WORKERS=3
#HOST=192.168.1.190
#PORT=8250
#POD=KY
#ZONE=KY

build/build-cloud.xml Executable file

@ -0,0 +1,566 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2008 Cloud, Inc. All Rights Reserved -->
<project name="Cloud Stack" default="help" basedir=".">
<description>
Cloud Stack ant build file
</description>
<!--
Always use this variable to refer to the base directory because this
variable is changeable
-->
<dirname property="base.dir" file="${ant.file.Cloud Stack}/.." />
<property name="build.dir" location="${base.dir}/build" />
<!-- Import anything that the user wants to set-->
<!-- Import properties files and environment variables here -->
<property environment="env" />
<condition property="build-cloud.properties.file" value="${build.dir}/override/build-cloud.properties" else="${build.dir}/build-cloud.properties">
<available file="${build.dir}/override/build-cloud.properties" />
</condition>
<condition property="cloud.properties.file" value="${build.dir}/override/cloud.properties" else="${build.dir}/cloud.properties">
<available file="${build.dir}/override/cloud.properties" />
</condition>
<condition property="override.file" value="${build.dir}/override/replace.properties" else="${build.dir}/replace.properties">
<available file="${build.dir}/override/replace.properties" />
</condition>
<echo message="Using build parameters from ${build-cloud.properties.file}" />
<property file="${build-cloud.properties.file}" />
<echo message="Using company info from ${cloud.properties.file}" />
<property file="${cloud.properties.file}" />
<echo message="Using override file from ${override.file}" />
<property file="${override.file}" />
<property file="${base.dir}/build/build.number" />
<import file="${build.dir}/build-common.xml" />
<!-- In case these didn't get defined in the build-cloud.properties -->
<property name="branding.name" value="default" />
<property name="tomcat.home" value="${env.CATALINA_HOME}" />
<property name="deprecation" value="off" />
<property name="target.compat.version" value="1.6" />
<property name="source.compat.version" value="1.6" />
<property name="debug" value="true" />
<property name="debuglevel" value="lines,source"/>
<!-- directories for build and distribution -->
<property name="dist.dir" location="${base.dir}/dist/" />
<property name="target.dir" location="${base.dir}/target" />
<property name="classes.dir" location="${target.dir}/classes" />
<property name="jar.dir" location="${target.dir}/jar" />
<property name="dep.cache.dir" location="${target.dir}/dep-cache" />
<property name="build.log" location="${target.dir}/ant_verbose.txt" />
<property name="thirdparty.dir" location="${base.dir}/thirdparty" />
<property name="deps.dir" location="${base.dir}/deps" />
<!-- directories for client compilation-->
<property name="client.dir" location="${base.dir}/client" />
<property name="client.test.dir" location="${client.dir}/test" />
<property name="client.target.dir" location="${target.dir}/ui" />
<property name="ui.user.dir" location="${base.dir}/ui" />
<property name="setup.db.dir" location="${base.dir}/setup/db" />
<!-- directories for server compilation-->
<property name="server.dir" location="${base.dir}/server" />
<property name="server.test.dir" location="${server.dir}/test" />
<property name="server.dist.dir" location="${dist.dir}/client" />
<!-- directories for core code compilation-->
<property name="core.dir" location="${base.dir}/core" />
<property name="core.test.dir" location="${core.dir}/test/" />
<!-- directories for agent code compilation-->
<property name="agent.dir" location="${base.dir}/agent" />
<property name="agent.test.dir" location="${utils.dir}/test/" />
<property name="agent.dist.dir" location="${dist.dir}/agent" />
<property name="scripts.dir" location="${base.dir}/scripts" />
<property name="scripts.target.dir" location="${target.dir}/scripts"/>
<!-- directories for console proxy & applet code compilation-->
<property name="console-common.dir" location="${base.dir}/console" />
<property name="console-common.dist.dir" location="${dist.dir}/console-common" />
<property name="console-proxy.dir" location="${base.dir}/console-proxy" />
<property name="console-proxy.dist.dir" location="${dist.dir}/console-proxy" />
<property name="console-viewer.dir" location="${base.dir}/console-viewer" />
<property name="console-viewer.dist.dir" location="${dist.dir}/console-viewer" />
<property name="tools.dir" location="${base.dir}/tools" />
<!-- <property name="antcontrib.dir" location="${tools.dir}/tools/ant/apache-ant-1.8.0/lib" />-->
<property name="deploy.dir" location="${build.dir}/deploy" />
<property name="production.dir" location="${deploy.dir}/production" />
<property name="meld.home" location="/usr/local/bin" />
<property name="assertion" value="-da" />
<!-- directories for testing -->
<property name="test.target.dir" location="${target.dir}/test" />
<property name="test.classes.dir" location="${test.target.dir}/classes" />
<!-- directories for branding -->
<property name="branding.dir" location="${build.dir}/deploy/branding/${branding.name}" />
<property name="core.jar" value="cloud-core.jar" />
<property name="utils.jar" value="cloud-utils.jar" />
<property name="server.jar" value="cloud-server.jar" />
<property name="agent.jar" value="cloud-agent.jar" />
<property name="console-common.jar" value="cloud-console-common.jar" />
<property name="console-proxy.jar" value="cloud-console-proxy.jar" />
<property name="api.jar" value="cloud-api.jar"/>
<!--
Import information about the build version and company information
-->
<property name="version" value="${company.major.version}.${company.minor.version}.${company.patch.version}" />
<!-- Class paths -->
<path id="prod.src.path">
<pathelement location="${server.dir}/src" />
<pathelement location="${utils.dir}/src" />
<pathelement location="${core.dir}/src" />
<pathelement location="${agent.dir}/src" />
</path>
<path id="src.classpath">
</path>
<path id="thirdparty.classpath">
<filelist files="${thirdparty.classpath}" />
<fileset dir="${thirdparty.dir}" erroronmissingdir="false">
<include name="*.jar" />
</fileset>
</path>
<path id="dist.classpath">
<fileset dir="${target.dir}">
<include name="**/*.jar" />
</fileset>
</path>
<path id="test.classpath">
<fileset dir="${dist.dir}">
<include name="**/*.jar" />
</fileset>
</path>
<!-- directories for util code compilation-->
<property name="utils.dir" location="${base.dir}/utils" />
<property name="utils.test.dir" location="${utils.dir}/test/" />
<path id="utils.classpath">
<path refid="thirdparty.classpath" />
</path>
<target name="compile-utils" depends="-init" description="Compile the utilities jar that is shared.">
<compile-java jar.name="${utils.jar}" top.dir="${utils.dir}" classpath="utils.classpath" />
</target>
<property name="api.dir" location="${base.dir}/api" />
<property name="api.test.dir" location="${api.dir}/test/" />
<path id="api.classpath">
<path refid="thirdparty.classpath" />
<path refid="dist.classpath"/>
</path>
<target name="compile-api" depends="-init, compile-utils" description="Compile the utilities jar that is shared.">
<compile-java jar.name="${api.jar}" top.dir="${api.dir}" classpath="api.classpath" />
</target>
<path id="core.classpath">
<path refid="thirdparty.classpath" />
<path refid="dist.classpath" />
</path>
<target name="compile-core" depends="-init, compile-utils, compile-api" description="Compile the core business logic.">
<compile-java jar.name="${core.jar}" top.dir="${core.dir}" classpath="core.classpath" />
</target>
<path id="server.classpath">
<path refid="thirdparty.classpath" />
<path refid="dist.classpath" />
</path>
<target name="compile-server" depends="-init, compile-utils, compile-core" description="Compile the management server.">
<compile-java jar.name="${server.jar}" top.dir="${server.dir}" classpath="server.classpath" />
</target>
<path id="client.classpath">
<path refid="thirdparty.classpath" />
<path refid="dist.classpath" />
</path>
<target name="build-scripts" depends="-init">
<copy todir="${scripts.target.dir}">
<fileset dir="${scripts.dir}">
<include name="**/*"/>
<exclude name="**/.*" />
<exclude name="**/network/domr/mth/" />
<exclude name="**/network/domr/kvm/" />
<exclude name="**/network/domr/xenserver/" />
<exclude name="**/storage/zfs/" />
<exclude name="**/storage/iscsi/" />
<exclude name="**/hypervisor/xen/" />
</fileset>
<filterset>
<filter token="VERSION" value="${impl.version}"/>
</filterset>
</copy>
</target>
<target name="build-ui" depends="-init" description="Builds the UI">
<mkdir dir="${client.target.dir}" />
<copy todir="${client.target.dir}">
<fileset dir="${ui.user.dir}">
<include name="**/*.html" />
<include name="**/*.js"/>
<include name="**/*.jsp"/>
<include name="**/*.properties"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
</fileset>
<filterset>
<filter token="VERSION" value="${impl.version}"/>
</filterset>
</copy>
<copy todir="${client.target.dir}">
<fileset dir="${ui.user.dir}">
<include name="**/*"/>
<exclude name="**/*.html" />
<exclude name="**/*.js"/>
<exclude name="**/*.jsp"/>
<exclude name="**/*.properties"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
</fileset>
</copy>
</target>
<target name="build-server" depends="compile-server">
<mkdir dir="${server.dist.dir}" />
<mkdir dir="${server.dist.dir}/lib" />
<mkdir dir="${server.dist.dir}/conf" />
<copy todir="${server.dist.dir}/lib">
<fileset dir="${thirdparty.dir}">
<include name="mysql-connector-java-5.1.7-bin.jar" />
<include name="cglib-nodep-2.2.jar" />
<include name="gson-1.3.jar" />
<include name="log4j-1.2.15.jar" />
<include name="apache-log4j-extras-1.0.jar" />
<include name="ehcache-1.5.0.jar" />
<include name="commons-logging-1.1.1.jar" />
<include name="commons-dbcp-1.2.2.jar" />
<include name="commons-pool-1.4.jar" />
<include name="backport-util-concurrent-3.0.jar" />
<include name="httpcore-4.0.jar" />
<include name="commons-httpclient-3.1.jar" />
<include name="commons-codec-1.4.jar" />
<include name="email.jar" />
<include name="xmlrpc-client-3.1.3.jar" />
<include name="xmlrpc-common-3.1.3.jar" />
<include name="xenserver-5.5.0-1.jar" />
<include name="ws-commons-util-1.0.2.jar" />
<include name="trilead-ssh2-build213.jar" />
</fileset>
</copy>
<copy overwrite="true" todir="${server.dist.dir}/conf">
<fileset dir="${base.dir}/client/tomcatconf">
<include name="*.in" />
</fileset>
<globmapper from="*.in" to="*" />
<filterchain>
<filterreader classname="org.apache.tools.ant.filters.ReplaceTokens">
<param type="propertiesfile" value="${override.file}" />
</filterreader>
</filterchain>
</copy>
<copy overwrite="true" todir="${server.dist.dir}/conf">
<fileset dir="${server.dir}/src/com/cloud/migration">
<include name="*.xml" />
</fileset>
</copy>
</target>
<path id="console-common.classpath">
<path refid="thirdparty.classpath" />
<path refid="dist.classpath" />
</path>
<target name="compile-console-common" depends="-init" description="Compile the console-common jar that is shared.">
<compile-java jar.name="${console-common.jar}" top.dir="${console-common.dir}" classpath="console-common.classpath" />
</target>
<path id="console-proxy.classpath">
<path refid="thirdparty.classpath" />
<path refid="dist.classpath" />
</path>
<target name="compile-console-proxy" depends="-init, compile-console-common" description="Compile the console proxy.">
<compile-java jar.name="${console-proxy.jar}" top.dir="${console-proxy.dir}" classpath="console-proxy.classpath" />
</target>
<target name="copy-console-proxy" depends="-init">
<property name="copyto.dir" value="${console-proxy.dist.dir}" />
<mkdir dir="${copyto.dir}" />
<mkdir dir="${copyto.dir}/conf" />
<mkdir dir="${copyto.dir}/logs" />
<mkdir dir="${copyto.dir}/applet" />
<mkdir dir="${copyto.dir}/images" />
<mkdir dir="${copyto.dir}/js" />
<mkdir dir="${copyto.dir}/ui" />
<mkdir dir="${copyto.dir}/css" />
<copy todir="${copyto.dir}">
<fileset dir="${thirdparty.dir}">
<include name="log4j-1.2.15.jar" />
<include name="apache-log4j-extras-1.0.jar" />
<include name="gson-1.3.jar" />
</fileset>
</copy>
<copy todir="${copyto.dir}">
<fileset dir="${jar.dir}">
<include name="cloud-console-proxy.jar" />
<include name="cloud-console-common.jar" />
</fileset>
</copy>
<copy todir="${copyto.dir}/conf">
<fileset dir="${production.dir}/consoleproxy/conf">
<include name="log4j-cloud.xml" />
<include name="consoleproxy.properties" />
</fileset>
</copy>
<copy todir="${copyto.dir}/images">
<fileset dir="${console-proxy.dir}/images">
<include name="*.jpg" />
<include name="*.gif" />
<include name="*.png" />
<include name="*.cur" />
</fileset>
</copy>
<copy todir="${copyto.dir}/applet">
<fileset dir="${jar.dir}">
<include name="VMOpsConsoleApplet.jar" />
</fileset>
</copy>
<copy todir="${copyto.dir}/js">
<fileset dir="${console-proxy.dir}/js">
<include name="*.js" />
</fileset>
</copy>
<copy todir="${copyto.dir}/ui">
<fileset dir="${console-proxy.dir}/ui">
<include name="*.ftl" />
</fileset>
</copy>
<copy todir="${copyto.dir}/css">
<fileset dir="${console-proxy.dir}/css">
<include name="*.css" />
</fileset>
</copy>
</target>
<target name="build-console-proxy" depends="-init, build-console-viewer, compile-console-proxy, copy-console-proxy">
<copy todir="${console-proxy.dist.dir}">
<fileset dir="${console-proxy.dir}/scripts">
</fileset>
</copy>
<copy todir="${console-proxy.dist.dir}">
<fileset dir="${console-proxy.dir}/scripts">
</fileset>
</copy>
<copy todir="${console-proxy.dist.dir}/conf">
<fileset dir="${console-proxy.dir}/conf">
</fileset>
</copy>
</target>
<path id="console-viewer.classpath">
<path refid="thirdparty.classpath" />
<path refid="dist.classpath" />
</path>
<target name="build-console-viewer" depends="-init" description="Compile console viewer applet">
<mkdir dir="${classes.dir}/console-viewer" />
<depend srcdir="${console-viewer.dir}/src" destdir="${classes.dir}/console-viewer" cache="${dep.cache.dir}" />
<javac srcdir="${console-common.dir}/src" debug="${debug}" debuglevel="${debuglevel}" deprecation="${deprecation}" destdir="${classes.dir}/console-viewer" source="${source.compat.version}" target="${target.compat.version}" includeantruntime="false">
<classpath refid="console-viewer.classpath" />
<exclude name="${compile.java.exclude.files}" />
<compilerarg value="-Xlint:all" />
</javac>
<javac srcdir="${console-viewer.dir}/src" debug="${debug}" debuglevel="${debuglevel}" deprecation="${deprecation}" destdir="${classes.dir}/console-viewer" source="${source.compat.version}" target="${target.compat.version}" includeantruntime="false">
<classpath refid="console-viewer.classpath" />
<exclude name="${compile.java.exclude.files}" />
<compilerarg value="-Xlint:all" />
</javac>
<jar jarfile="${jar.dir}/VMOpsConsoleApplet.jar" basedir="${classes.dir}/console-viewer">
<manifest>
<attribute name="Class-Path" value="" />
<attribute name="Built-By" value="${built.by}" />
<attribute name="Manifest-Version" value="1.0" />
<attribute name="Main-Class" value="ConsoleViewer" />
</manifest>
</jar>
</target>
<path id="agent.classpath">
<path refid="thirdparty.classpath" />
<fileset dir="${target.dir}">
<include name="**/${core.jar}" />
<include name="**/${utils.jar}" />
<include name="**/${api.jar}"/>
</fileset>
</path>
<target name="compile-agent" depends="-init, compile-utils, compile-core, compile-api" description="Compile the management agent.">
<compile-java jar.name="${agent.jar}" top.dir="${agent.dir}" classpath="agent.classpath" />
</target>
<target name="-init-test" depends="-init">
<mkdir dir="${test.target.dir}" />
<mkdir dir="${test.classes.dir}" />
</target>
<target name="build-agent" depends="-init, build-console-proxy, compile-agent">
<mkdir dir="${agent.dist.dir}" />
<mkdir dir="${agent.dist.dir}/scripts" />
<mkdir dir="${agent.dist.dir}/conf" />
<mkdir dir="${agent.dist.dir}/logs" />
<mkdir dir="${agent.dist.dir}/db" />
<mkdir dir="${agent.dist.dir}/storagehdpatch" />
<condition property="agent.properties" value="override/agent.properties" else="agent.properties">
<available file="${agent.dir}/conf/override/agent.properties" />
</condition>
<condition property="developer.properties" value="override/developer.properties" else="developer.properties">
<available file="${agent.dir}/conf/override/developer.properties" />
</condition>
<copy overwrite="true" todir="${agent.dist.dir}/conf" flatten="true">
<fileset dir="${agent.dir}/conf">
<include name="${agent.properties}" />
<include name="${developer.properties}" />
</fileset>
<filterchain>
<filterreader classname="org.apache.tools.ant.filters.ReplaceTokens">
<param type="propertiesfile" value="${override.file}" />
</filterreader>
</filterchain>
</copy>
<copy overwrite="true" todir="${agent.dist.dir}/conf" flatten="true">
<fileset dir="${agent.dir}/conf">
<include name="log4j-cloud.xml.in" />
</fileset>
<globmapper from="*.in" to="*" />
<filterchain>
<filterreader classname="org.apache.tools.ant.filters.ReplaceTokens">
<param type="propertiesfile" value="${override.file}" />
</filterreader>
</filterchain>
</copy>
<delete file="${agent.dist.dir}/conf/log4j-cloud.xml.in"/>
<copy todir="${agent.dist.dir}">
<fileset dir="${agent.dir}/scripts">
<include name="agent.sh" />
<include name="run.sh" />
</fileset>
</copy>
</target>
<target name="build-servers" depends="-init, build-server" />
<target name="build-opensource" depends="-init, build-server, build-agent, build-console-proxy, build-scripts, build-ui">
<copy overwrite="true" todir="${dist.dir}">
<fileset dir="${base.dir}/build/deploy/">
<include name="deploy-agent.sh" />
<include name="deploy-server.sh" />
<include name="deploy-console-proxy.sh" />
<include name="install.sh" />
</fileset>
<fileset dir="${base.dir}/client">
<include name="setup/**/*" />
</fileset>
</copy>
<chmod file="${dist.dir}/deploy-agent.sh" perm="uog+xr" />
<chmod file="${dist.dir}/deploy-server.sh" perm="uog+xr" />
</target>
<target name="build-kvm-domr-patch" depends="-init">
<tar destfile="${dist.dir}/patch.tar">
<tarfileset dir="${base.dir}/patches/kvm" filemode="755">
<include name="**/*"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
</tarfileset>
<tarfileset dir="${base.dir}/patches/shared" filemode="755">
<include name="**/*"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
</tarfileset>
</tar>
<gzip destfile="${dist.dir}/patch.tgz" src="${dist.dir}/patch.tar"/>
<delete file="${dist.dir}/patch.tar"/>
</target>
<target name="help">
<echo level="info" message="Ant Build File for Cloud.com Cloud Stack" />
<echo level="info" message="Type 'ant -projecthelp' to get a list of targets and their descriptions." />
</target>
<target name="usage" depends="help" />
<target name="-init">
<mkdir dir="${dist.dir}" />
<mkdir dir="${target.dir}" />
<record name="${build.log}" loglevel="verbose" action="start" />
<!-- create a UTC build timestamp using ISO 8601 formatting -->
<tstamp>
<format property="utc.build.timestamp" pattern="yyyy-MM-dd'T'HH:mm:ss.SSS'Z'" timezone="GMT" />
</tstamp>
<!-- remember who/where did the build -->
<exec executable="hostname" outputproperty="host.name" />
<property name="builder.at" value="${user.name} at ${host.name}" />
<property name="builder.id" value="${builder.at}, on ${utc.build.timestamp}" />
<property name="built.by" value="${builder.at}, ${utc.build.timestamp}" />
<echo level="info" message="builder: ${builder.id}" />
<!-- set build.number property, stored in eponymous file -->
<buildnumber file="${build.dir}/build.number" />
<condition property="impl.version" value="${version}.${manual.build.number}" else="${version}.${build.number}">
<isset property="manual.build.number"/>
</condition>
<echo message="Build number is ${impl.version}" />
<!-- Create the build directory structure used by compile -->
<mkdir dir="${jar.dir}" />
<mkdir dir="${docs.dir}" />
<mkdir dir="${dep.cache.dir}" />
<record name="${build.log}" action="stop" />
</target>
<target name="clean" description="clean up files generated by the build">
<delete file="${build.log}" />
<delete dir="${classes.dir}" />
<delete dir="${jar.dir}" />
<delete dir="${dist.dir}" />
</target>
<target name="clean-all" depends="clean" description="Clean all of the generated files, including dependency cache and javadoc">
<delete dir="${target.dir}" />
</target>
</project>

82
build/build-common.xml Executable file
View File

@ -0,0 +1,82 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2008 VMOps, Inc. All Rights Reserved -->
<project name="VMOps-Common" default="help" basedir=".">
<!--
compile-java requires the following parameters
- top.dir = the root directory of the source.
- jar.name = name of the jar file.
- classpath = classpath to use for this compile
The directory structure under ${top.dir} needs to be:
- src
The class files are generated to ${classes.dir}/${jar.name}.
The jar file is generated to ${jar.dir}/${jar.name}.
-->
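<!--
A minimal usage sketch (hypothetical names: a "cloud-example.jar" built from an
"example" module directory; the real callers are the compile-* targets in
build-cloud.xml):

<path id="example.classpath">
<path refid="thirdparty.classpath" />
</path>
<target name="compile-example" depends="-init">
<compile-java jar.name="cloud-example.jar" top.dir="${base.dir}/example" classpath="example.classpath" />
</target>

Sources would be read from ${base.dir}/example/src, classes written to
${classes.dir}/cloud-example.jar, and the jar to ${jar.dir}/cloud-example.jar.
-->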
<target name="help">
<echo message="This file is meant to be imported by other build.xml to provide common
functionality. Don not edit this file unless you are sure about adding
common functionality."/>
</target>
<dirname property="base.dir" file="${ant.file.VMOps-Common}/.."/>
<property name="build.dir" location="${base.dir}/build"/>
<property name="target.dir" location="${base.dir}/target"/>
<property name="classes.dir" location="${target.dir}/classes"/>
<property name="jar.dir" location="${target.dir}/jar"/>
<property name="dep.cache.dir" location="${target.dir}/dep-cache"/>
<property name="debug" value="true"/>
<property name="debuglevel" value="lines,source"/>
<macrodef name="compile-java">
<attribute name="top.dir" description="Top Directory of the source. We will add src to this to get the source code."/>
<attribute name="jar.name" description="Name of the jar file"/>
<attribute name="classpath" description="class path to use"/>
<element name="include-files" optional="true"/>
<element name="exclude-files" optional="true"/>
<sequential>
<mkdir dir="${classes.dir}/@{jar.name}"/>
<depend srcdir="@{top.dir}/src" destdir="${classes.dir}/@{jar.name}" cache="${dep.cache.dir}" />
<javac srcdir="@{top.dir}/src" debug="${debug}" debuglevel="${debuglevel}" deprecation="${deprecation}" destdir="${classes.dir}/@{jar.name}" source="${source.compat.version}" target="${target.compat.version}" includeantruntime="false" compiler="javac1.6">
<!-- compilerarg line="-processor com.cloud.annotation.LocalProcessor -processorpath ${base.dir}/tools/src -Xlint:all"/ -->
<!-- compilerarg line="-processor com.cloud.utils.LocalProcessor -processorpath ${base.dir}/utils/src -Xlint:all"/ -->
<compilerarg line="-Xlint:all"/>
<classpath refid="@{classpath}" />
<exclude-files/>
</javac>
<jar jarfile="${jar.dir}/@{jar.name}" basedir="${classes.dir}/@{jar.name}">
<manifest>
<attribute name="Class-Path" value="" />
<attribute name="Built-By" value="${built.by}" />
<attribute name="Specification-Title" value="VMOps Cloud Stack" />
<attribute name="Specification-Version" value="${impl.version}" />
<attribute name="Specification-Vendor" value="${company.name}" />
<attribute name="Implementation-Title" value="@{jar.name}" />
<attribute name="Implementation-Version" value="${impl.version}" />
<attribute name="Implementation-Vendor" value="${company.name}" />
</manifest>
<include-files/>
</jar>
</sequential>
</macrodef>
<macrodef name="clean-java">
<attribute name="top.dir" description="Top Directory of the source. We will add src to this to get the source code."/>
<attribute name="jar.name" description="Name of the jar file"/>
<sequential>
<local name="compile.java.bin.dir"/>
<property name="compile.java.bin.dir" location="${classes.dir}/@{jar.name}" />
<rmdir dir="${compile.java.bin.dir}"/>
<rm file="${jar.dir}/@{jar.name}"/>
</sequential>
</macrodef>
</project>

66
build/build-docs.xml Executable file
View File

@ -0,0 +1,66 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2008 Cloud, Inc. All Rights Reserved -->
<project name="Cloud Stack Document Tasks" default="help" basedir=".">
<description>
Cloud Stack ant build file
</description>
<!--
Always use this variable to refer to the base directory because this
variable is changeable
-->
<dirname property="base.dir" file="${ant.file.Cloud Stack Document Tasks}/.." />
<import file="./build-cloud.xml" optional="false"/>
<!-- directories for java doc -->
<property name="docs.dir" location="${target.dir}/docs" />
<property name="docs.dist.dir" location="${dist.dir}/docs" />
<target name="doc" depends="-init, javadoc, readme" description="create all javadoc" />
<target name="readme" depends="-init">
<mkdir dir="${docs.dir}/readme" />
<copy file="${agent.dir}/scripts/README.txt" todir="${docs.dir}/readme" />
</target>
<target name="pdf" depends="-init">
<javadoc doclet="com.tarsec.javadoc.pdfdoclet.PDFDoclet" docletpath="${tools.dir}/pdfdoclet/pdfdoclet-1.0.2-all.jar" overview="${build.dir}/overview.html" additionalparam="-pdf javadoc.pdf -debug" private="no" access="public" classpathref="thirdparty.classpath" linksource="true" sourcepathref="prod.src.path">
<!--
<taglet name="net.sourceforge.taglets.Taglets" path="${tools.dir}/taglets/taglets.jar"/>
<tag name="config" description="Configurable Parameters in components.xml" scope="types"/>
<tag name="see" />
<tag name="author" />
<tag name="since" />
-->
<!--<packages>com.cloud.agent</packages-->
<!--package name="com.cloud.agent"/-->
<packageset dir="${server.dir}/src" />
</javadoc>
</target>
<target name="javadoc" depends="-init, build-all" description="Generate internal javadoc documentation for maintenance">
<!-- documentation properties -->
<property name="jdoc.footer" value="Copyright &amp;copy; ${company.copyright.year} ${company.name}" />
<javadoc destdir="${docs.dir}/html/api-internal" author="true" version="true" classpathref="thirdparty.classpath" sourcepathref="prod.src.path" access="protected" linksource="true" windowtitle="${company.name} ${version} Maintenance API Reference" doctitle="${company.name} ${version} Maintenance API Reference" bottom="${jdoc.footer}" overview="${build.dir}/overview.html">
<excludepackage name="com.xensource.xenapi.*" />
<taglet name="net.sourceforge.taglets.Taglets" path="${tools.dir}/taglets/taglets.jar" />
<tag name="config" description="Configurable Parameters in components.xml" scope="types" />
<tag name="see" />
<tag name="author" />
<tag name="since" />
<packageset dir="${server.dir}/src" />
</javadoc>
</target>
<target name="build-docs" depends="javadoc">
<copy todir="${docs.dist.dir}">
<fileset dir="${docs.dir}" />
</copy>
</target>
</project>

3
build/build.number Normal file
View File

@ -0,0 +1,3 @@
#Build Number for ANT. Do not edit!
#Sat Aug 07 12:54:57 PDT 2010
build.number=927

14
build/cloud.properties Executable file
View File

@ -0,0 +1,14 @@
# Copyright 2007 VMOps, Inc.
# major.minor.patch versioning scheme for vmops
company.major.version=1
company.minor.version=9
company.patch.version=1
svn.revision=2
# copyright year
company.copyright.year=2008-2010
company.url=http://www.vmops.com
company.license.name=GPL
company.name=VMOps Inc.

Binary files not shown (five image files added: 1.4 KiB, 5.2 KiB, 3.0 KiB, 2.7 KiB, 2.8 KiB).

103
build/deploy/db/deploy-db.sh Executable file
View File

@ -0,0 +1,103 @@
#!/usr/bin/env bash
# deploy-db.sh -- deploys the database configuration.
# set -x
if [ "$1" == "" ]; then
printf "Usage: %s [path to additional sql] [root password]\n" $(basename $0) >&2
exit 1;
fi
if [ ! -f $1 ]; then
echo "Error: Unable to find $1"
exit 2
fi
if [ "$2" != "" ]; then
if [ ! -f $2 ]; then
echo "Error: Unable to find $2"
exit 3
fi
fi
if [ ! -f create-database.sql ]; then
printf "Error: Unable to find create-database.sql\n"
exit 4
fi
if [ ! -f create-schema.sql ]; then
printf "Error: Unable to find create-schema.sql\n"
exit 5
fi
if [ ! -f create-index-fk.sql ]; then
printf "Error: Unable to find create-index-fk.sql\n"
exit 6;
fi
PATHSEP=':'
if [[ $OSTYPE == "cygwin" ]] ; then
export CATALINA_HOME=`cygpath -m $CATALINA_HOME`
PATHSEP=';'
else
mysql="mysql"
service mysql status > /dev/null 2>/dev/null
if [ $? -eq 1 ]; then
mysql="mysqld"
service mysqld status > /dev/null 2>/dev/null
if [ $? -ne 0 ]; then
printf "Unable to find mysql daemon\n"
exit 7
fi
fi
echo "Starting mysql"
service $mysql start > /dev/null 2>/dev/null
fi
echo "Recreating Database."
mysql --user=root --password=$3 < create-database.sql > /dev/null 2>/dev/null
mysqlout=$?
if [ $mysqlout -eq 1 ]; then
printf "Please enter root password for MySQL.\n"
mysql --user=root --password < create-database.sql
if [ $? -ne 0 ]; then
printf "Error: Cannot execute create-database.sql\n"
exit 10
fi
elif [ $mysqlout -ne 0 ]; then
printf "Error: Cannot execute create-database.sql\n"
exit 11
fi
mysql --user=cloud --password=cloud cloud < create-schema.sql
if [ $? -ne 0 ]; then
printf "Error: Cannot execute create-schema.sql\n"
exit 11
fi
if [ "$1" != "" ]; then
mysql --user=cloud --password=cloud cloud < $1
if [ $? -ne 0 ]; then
printf "Error: Cannot execute $1\n"
exit 12
fi
fi
if [ "$2" != "" ]; then
echo "Adding Templates"
mysql --user=cloud --password=cloud cloud < $2
if [ $? -ne 0 ]; then
printf "Error: Cannot execute $2\n"
exit 12
fi
fi
echo "Creating Indice and Foreign Keys"
mysql --user=cloud --password=cloud cloud < create-index-fk.sql
if [ $? -ne 0 ]; then
printf "Error: Cannot execute create-index-fk.sql\n"
exit 13
fi

View File

@ -0,0 +1,7 @@
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
log4j.appender.stdout.threshold=ERROR
log4j.rootLogger=INFO, stdout
log4j.category.org.apache=INFO, stdout

217
build/deploy/deploy-agent.sh Executable file
View File

@ -0,0 +1,217 @@
#!/usr/bin/env bash
# install.sh -- installs an agent
#
#
usage() {
printf "Usage: %s: -d [directory to deploy to] -t [routing|storage|computing] -z [zip file] -h [host] -p [pod] -c [data center] -m [expert|novice|setup]\n" $(basename $0) >&2
}
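# Example invocation (hypothetical host, pod, and zone values; the flags match
# the getopts string below):
#   ./deploy-agent.sh -d /usr/local/vmops/agent -t computing -z agent.zip -h mgmt.example.com -p pod1 -c zone1 -m expert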
mode=
host=
pod=
zone=
deploydir=
confdir=
zipfile=
typ=
#set -x
while getopts 'd:z:t:x:m:h:p:c:' OPTION
do
case "$OPTION" in
d) deploydir="$OPTARG"
;;
z) zipfile="$OPTARG"
;;
t) typ="$OPTARG"
;;
m) mode="$OPTARG"
;;
h) host="$OPTARG"
;;
p) pod="$OPTARG"
;;
c) zone="$OPTARG"
;;
?) usage
exit 2
;;
esac
done
printf "NOTE: You must have root privileges to install and run this program.\n"
if [ "$typ" == "" ]; then
if [ "$mode" != "expert" ]
then
printf "Type of agent to install [routing|computing|storage]: "
read typ
fi
fi
if [ "$typ" != "computing" ] && [ "$typ" != "routing" ] && [ "$typ" != "storage" ]
then
printf "ERROR: The choices are computing, routing, or storage.\n"
exit 4
fi
if [ "$host" == "" ]; then
if [ "$mode" != "expert" ]
then
printf "Host name or ip address of management server [Required]: "
read host
if [ "$host" == "" ]; then
printf "ERROR: Host is required\n"
exit 23;
fi
fi
fi
port=
if [ "$mode" != "expert" ]
then
printf "Port number of management server [defaults to 8250]: "
read port
fi
if [ "$port" == "" ]
then
port=8250
fi
if [ "$zone" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Availability Zone [Required]: "
read zone
if [ "$zone" == "" ]; then
printf "ERROR: Zone is required\n";
exit 21;
fi
fi
fi
if [ "$pod" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Pod [Required]: "
read pod
if [ "$pod" == "" ]; then
printf "ERROR: Pod is required\n";
exit 22;
fi
fi
fi
workers=
if [ "$mode" != "expert" ]; then
printf "# of workers to start [defaults to 3]: "
read workers
fi
if [ "$workers" == "" ]; then
workers=3
fi
if [ "$deploydir" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Directory to deploy to [defaults to /usr/local/vmops/agent]: "
read deploydir
fi
if [ "$deploydir" == "" ]; then
deploydir="/usr/local/vmops/agent"
fi
fi
if ! mkdir -p $deploydir
then
printf "ERROR: Unable to create $deploydir\n"
exit 5
fi
if [ "$zipfile" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Path of the zip file [defaults to agent.zip]: "
read zipfile
fi
if [ "$zipfile" == "" ]; then
zipfile="agent.zip"
fi
fi
if ! unzip -o $zipfile -d $deploydir
then
printf "ERROR: Unable to unzip $zipfile to $deploydir\n"
exit 6
fi
#if ! chmod -R +x $deploydir/scripts/*.sh
#then
# printf "ERROR: Unable to change scripts to executable.\n"
# exit 7
#fi
#if ! chmod -R +x $deploydir/scripts/iscsi/*.sh
#then
# printf "ERROR: Unable to change scripts to executable.\n"
# exit 8
#fi
#if ! chmod -R +x $deploydir/*.sh
#then
# printf "ERROR: Unable to change scripts to executable.\n"
# exit 9
#fi
if [ "$mode" == "setup" ]; then
mode="expert"
deploydir="/usr/local/vmops/agent"
confdir="/etc/vmops"
/bin/cp -f $deploydir/conf/agent.properties $confdir/agent.properties
if [ $? -gt 0 ]; then
printf "ERROR: Failed to copy the agent.properties file into the right place."
exit 10;
fi
else
confdir="$deploydir/conf"
fi
if [ "$typ" != "" ]; then
sed s/@TYPE@/"$typ"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Type is not set\n"
fi
if [ "$host" != "" ]; then
sed s/@HOST@/"$host"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: host is not set\n"
fi
if [ "$port" != "" ]; then
sed s/@PORT@/"$port"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Port is not set\n"
fi
if [ "$pod" != "" ]; then
sed s/@POD@/"$pod"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Pod is not set\n"
fi
if [ "$zone" != "" ]; then
sed s/@ZONE@/"$zone"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Zone is not set\n"
fi
if [ "$workers" != "" ]; then
sed s/@WORKERS@/"$workers"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Workers is not set\n"
fi
printf "SUCCESS: Installation is now complete. If you like to make changes, edit $confdir/agent.properties\n"
exit 0

View File

@ -0,0 +1,73 @@
#!/usr/bin/env bash
# Deploy console proxy package to an existing VM template
#
usage() {
printf "Usage: %s: -d [work directory to deploy to] -z [zip file]" $(basename $0) >&2
}
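# Example invocation (hypothetical work directory; per the checks below it must
# already contain consoleproxy.tar.gz):
#   ./deploy-console-proxy.sh -d /var/lib/vmops/proxy-work -z console-proxy.zip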
deploydir=
zipfile=
#set -x
while getopts 'd:z:' OPTION
do
case "$OPTION" in
d) deploydir="$OPTARG"
;;
z) zipfile="$OPTARG"
;;
?) usage
exit 2
;;
esac
done
printf "NOTE: You must have root privileges to install and run this program.\n"
if [ "$deploydir" == "" ]; then
printf "ERROR: Unable to find deployment work directory $deploydir\n"
exit 3;
fi
if [ ! -f $deploydir/consoleproxy.tar.gz ]
then
printf "ERROR: Unable to find existing console proxy template file (consoleproxy.tar.gz) to work on at $deploydir\n"
exit 5
fi
if [ "$zipfile" == "" ]; then
zipfile="console-proxy.zip"
fi
if ! mkdir -p /mnt/consoleproxy
then
printf "ERROR: Unable to create /mnt/consoleproxy for mounting template image\n"
exit 5
fi
tar xvfz $deploydir/consoleproxy.tar.gz -C $deploydir
mount -o loop $deploydir/vmi-root-fc8-x86_64-domP /mnt/consoleproxy
if ! unzip -o $zipfile -d /mnt/consoleproxy/usr/local/vmops/consoleproxy
then
printf "ERROR: Unable to unzip $zipfile to $deploydir\n"
exit 6
fi
umount /mnt/consoleproxy
pushd $deploydir
tar cvf consoleproxy.tar vmi-root-fc8-x86_64-domP
mv -f consoleproxy.tar.gz consoleproxy.tar.gz.old
gzip consoleproxy.tar
popd
if [ ! -f $deploydir/consoleproxy.tar.gz ]
then
mv $deploydir/consoleproxy.tar.gz.old $deploydir/consoleproxy.tar.gz
printf "ERROR: failed to deploy and recreate the template at $deploydir\n"
fi
printf "SUCCESS: Installation is now complete. please go to $deploydir to review it\n"
exit 0

106
build/deploy/deploy-server.sh Executable file
View File

@ -0,0 +1,106 @@
#!/usr/bin/env bash
# deploy.sh -- deploys a management server
#
#
usage() {
printf "Usage: %s: -d [tomcat directory to deploy to] -z [zip file to use]\n" $(basename $0) >&2
}
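# Example invocation (assumes CATALINA_HOME points at the Tomcat installation
# and client.zip is in the current directory):
#   ./deploy-server.sh -d "$CATALINA_HOME" -z client.zip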
dflag=
zflag=
tflag=
iflag=
deploydir=
zipfile="client.zip"
typ=
#set -x
while getopts 'd:z:x:h:' OPTION
do
case "$OPTION" in
d) dflag=1
deploydir="$OPTARG"
;;
z) zflag=1
zipfile="$OPTARG"
;;
h) iflag="$OPTARG"
;;
?) usage
exit 2
;;
esac
done
if [ "$deploydir" == "" ]
then
if [ "$CATALINA_HOME" == "" ]
then
printf "Tomcat Directory to deploy to: "
read deploydir
else
deploydir="$CATALINA_HOME"
fi
fi
if [ "$deploydir" == "" ]
then
printf "Tomcat directory was not specified\n";
exit 15;
fi
printf "Check to see if the Tomcat directory exist: $deploydir\n"
if [ ! -d $deploydir ]
then
printf "Tomcat directory does not exist\n";
exit 16;
fi
if [ "$zipfile" == "" ]
then
printf "Path of the zip file [defaults to client.zip]: "
read zipfile
if [ "$zipfile" == "" ]
then
zipfile="client.zip"
fi
fi
if ! unzip -o $zipfile client.war
then
exit 6
fi
rm -fr $deploydir/webapps/client
if ! unzip -o ./client.war -d $deploydir/webapps/client
then
exit 10;
fi
rm -f ./client.war
if ! unzip -o $zipfile lib/* -d $deploydir
then
exit 11;
fi
if ! unzip -o $zipfile conf/* -d $deploydir
then
exit 12;
fi
if ! unzip -o $zipfile bin/* -d $deploydir
then
exit 13;
fi
printf "Adding the conf directory to the class loader for tomcat\n"
sed 's/shared.loader=$/shared.loader=\$\{catalina.home\},\$\{catalina.home\}\/conf\
/' $deploydir/conf/catalina.properties > $deploydir/conf/catalina.properties.tmp
mv $deploydir/conf/catalina.properties.tmp $deploydir/conf/catalina.properties
printf "Installation is now complete\n"
exit 0

View File

@ -0,0 +1,185 @@
#!/usr/bin/env bash
# install.sh -- installs an agent
#
#
usage() {
printf "Usage: %s: -d [directory to deploy to] -z [zip file] -h [host] -p [pod] -c [data center] -m [expert|novice|setup]\n" $(basename $0) >&2
}
mode=
host=
pod=
zone=
deploydir=
confdir=
zipfile=
typ=
#set -x
while getopts 'd:z:x:m:h:p:c:' OPTION
do
case "$OPTION" in
d) deploydir="$OPTARG"
;;
z) zipfile="$OPTARG"
;;
m) mode="$OPTARG"
;;
h) host="$OPTARG"
;;
p) pod="$OPTARG"
;;
c) zone="$OPTARG"
;;
?) usage
exit 2
;;
esac
done
printf "NOTE: You must have root privileges to install and run this program.\n"
if [ "$mode" == "setup" ]; then
mode="expert"
deploydir="/usr/local/vmops/agent-simulator"
confdir="/etc/vmops"
/bin/cp -f $deploydir/conf/agent.properties $confdir/agent.properties
if [ $? -gt 0 ]; then
printf "ERROR: Failed to copy the agent.properties file into the right place."
exit 10;
fi
else
confdir="$deploydir/conf"
fi
if [ "$host" == "" ]; then
if [ "$mode" != "expert" ]
then
printf "Host name or ip address of management server [Required]: "
read host
if [ "$host" == "" ]; then
printf "ERROR: Host is required\n"
exit 23;
fi
fi
fi
port=
if [ "$mode" != "expert" ]
then
printf "Port number of management server [defaults to 8250]: "
read port
fi
if [ "$port" == "" ]
then
port=8250
fi
if [ "$zone" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Availability Zone [Required]: "
read zone
if [ "$zone" == "" ]; then
printf "ERROR: Zone is required\n";
exit 21;
fi
fi
fi
if [ "$pod" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Pod [Required]: "
read pod
if ["$pod" == ""]; then
printf "ERROR: Pod is required\n";
exit 22;
fi
fi
fi
workers=
if [ "$mode" != "expert" ]; then
printf "# of workers to start [defaults to 3]: "
read workers
fi
if [ "$workers" == "" ]; then
workers=3
fi
if [ "$deploydir" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Directory to deploy to [defaults to /usr/local/vmops/agent-simulator]: "
read deploydir
fi
if [ "$deploydir" == "" ]; then
deploydir="/usr/local/vmops/agent-simulator"
fi
fi
if ! mkdir -p $deploydir
then
printf "ERROR: Unable to create $deploydir\n"
exit 5
fi
if [ "$zipfile" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Path of the zip file [defaults to agent-simulator.zip]: "
read zipfile
fi
if [ "$zipfile" == "" ]; then
zipfile="agent-simulator.zip"
fi
fi
if ! unzip -o $zipfile -d $deploydir
then
printf "ERROR: Unable to unzip $zipfile to $deploydir\n"
exit 6
fi
if ! chmod +x $deploydir/*.sh
then
printf "ERROR: Unable to change scripts to executable.\n"
exit 9
fi
if [ "$host" != "" ]; then
sed s/@HOST@/"$host"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: host is not set\n"
fi
if [ "$port" != "" ]; then
sed s/@PORT@/"$port"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Port is not set\n"
fi
if [ "$pod" != "" ]; then
sed s/@POD@/"$pod"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Pod is not set\n"
fi
if [ "$zone" != "" ]; then
sed s/@ZONE@/"$zone"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Zone is not set\n"
fi
if [ "$workers" != "" ]; then
sed s/@WORKERS@/"$workers"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Workers is not set\n"
fi
printf "SUCCESS: Installation is now complete. If you like to make changes, edit $confdir/agent.properties\n"
exit 0

View File

@ -0,0 +1,133 @@
#!/usr/bin/env bash
# install-storage-server.sh: Installs a VMOps Storage Server
#
choose_correct_filename() {
local default_filename=$1
local user_specified_filename=$2
if [ -f "$user_specified_filename" ]
then
echo $user_specified_filename
return 0
else
if [ -f "$default_filename" ]
then
echo $default_filename
return 0
else
echo ""
return 1
fi
fi
}
install_opensolaris_package() {
pkg_name=$1
pkg info $pkg_name >> /dev/null
if [ $? -gt 0 ]
then
# The package is not installed, so install it
pkg install $pkg_name
return $?
else
# The package is already installed
return 0
fi
}
exit_if_error() {
return_code=$1
msg=$2
if [ $return_code -gt 0 ]
then
echo $msg
exit 1
fi
}
usage() {
printf "Usage: ./install-storage-server.sh <path to agent.zip> <path to templates.tar.gz>"
}
AGENT_FILE=$(choose_correct_filename "./agent.zip" $1)
exit_if_error $? "Please download agent.zip to your Storage Server."
TEMPLATES_FILE=$(choose_correct_filename "./templates.tar.gz" $2)
exit_if_error $? "Please download templates.tar.gz to your Storage Server."
VMOPS_DIR="/usr/local/vmops"
AGENT_DIR="/usr/local/vmops/agent"
CONF_DIR="/etc/vmops"
TEMPLATES_DIR="/root/template"
# Make all the necessary directories if they don't already exist
echo "Creating VMOps directories..."
for dir in $VMOPS_DIR $CONF_DIR $TEMPLATES_DIR
do
mkdir -p $dir
done
# Unzip agent.zip to $AGENT_DIR
echo "Uncompressing and installing VMOps Storage Agent..."
unzip -o $AGENT_FILE -d $AGENT_DIR >> /dev/null
# Remove agent/conf/agent.properties, since we should use the file in the real configuration directory
rm $AGENT_DIR/conf/agent.properties
# Backup any existing VMOps configuration files, if there aren't any backups already
if [ ! -d $CONF_DIR/BACKUP ]
then
echo "Backing up existing configuration files..."
mkdir -p $CONF_DIR/BACKUP
cp $CONF_DIR/*.properties $CONF_DIR/BACKUP >> /dev/null
fi
# Copy all the files in storagehdpatch to their proper places
echo "Installing system files..."
(cd $AGENT_DIR/storagehdpatch; tar cf - .) | (cd /; tar xf -)
exit_if_error $? "There was a problem with installing system files. Please contact VMOps Support."
# Make vsetup executable
chmod +x /usr/sbin/vsetup
# Make vmops executable
chmod +x /lib/svc/method/vmops
# Uncompress the templates and copy them to the templates directory
echo "Uncompressing templates..."
tar -xzf $TEMPLATES_FILE -C $TEMPLATES_DIR >> /dev/null
exit_if_error $? "There was a problem with uncompressing templates. Please contact VMOps Support."
# Install the storage-server package, if it is not already installed
echo "Installing OpenSolaris storage server package..."
install_opensolaris_package "storage-server"
exit_if_error $? "There was a problem with installing the storage server package. Please contact VMOps Support."
echo "Installing COMSTAR..."
install_opensolaris_package "SUNWiscsit"
exit_if_error $? "Unable to install COMSTAR iscsi target. Please contact VMOps Support."
# Install the SUNWinstall-test package, if it is not already installed
echo "Installing OpenSolaris test tools package..."
install_opensolaris_package "SUNWinstall-test"
exit_if_error $? "There was a problem with installing the test tools package. Please contact VMOps Support."
# Print a success message
printf "\nSuccessfully installed the VMOps Storage Server.\n"
printf "Please complete the following steps to configure your networking settings and storage pools:\n\n"
printf "1. Specify networking settings in /etc/vmops/network.properties\n"
printf "2. Run \"vsetup networking\" and then specify disk settings in /etc/vmops/disks.properties\n"
printf "3. Run \"vsetup zpool\" and reboot the machine when prompted.\n\n"

139
build/deploy/install.sh Normal file
View File

@ -0,0 +1,139 @@
#!/bin/bash
# install.sh -- installs MySQL, Java, Tomcat, and the VMOps server
#set -x
set -e
EX_NOHOSTNAME=15
EX_SELINUX=16
function usage() {
printf "Usage: %s [path to server-setup.xml]\n" $(basename $0) >&2
exit 64
}
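# Example invocation (server-setup.xml stands in for whatever database setup
# file you pass through to deploy-db.sh; the Tomcat, MySQL, and JDK packages
# checked for below must be in the current directory):
#   ./install.sh server-setup.xml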
function checkhostname() {
if hostname | grep -qF . ; then true ; else
echo "You need to have a fully-qualified host name for the setup to work." > /dev/stderr
echo "Please use your operating system's network setup tools to set one." > /dev/stderr
exit $EX_NOHOSTNAME
fi
}
function checkselinux() {
#### before checking arguments, make sure SELINUX is "permissible" in /etc/selinux/config
if /usr/sbin/getenforce | grep -qi enforcing ; then borked=1 ; fi
if grep -qi SELINUX=enforcing /etc/selinux/config ; then borked=1 ; fi
if [ "$borked" == "1" ] ; then
echo "SELINUX is set to enforcing, please set it to permissive in /etc/selinux/config" > /dev/stderr
echo "then reboot the machine, after which you can run the install script again." > /dev/stderr
exit $EX_SELINUX
fi
}
checkhostname
checkselinux
if [ "$1" == "" ]; then
usage
fi
if [ ! -f $1 ]; then
echo "Error: Unable to find $1" > /dev/stderr
exit 2
fi
#### check that all files exist
if [ ! -f apache-tomcat-6.0.18.tar.gz ]; then
printf "Error: Unable to find apache-tomcat-6.0.18.tar.gz\n" > /dev/stderr
exit 3
fi
if [ ! -f MySQL-client-5.1.30-0.glibc23.x86_64.rpm ]; then
printf "Error: Unable to find MySQL-client-5.1.30-0.glibc23.x86_64.rpm\n" > /dev/stderr
exit 4
fi
if [ ! -f MySQL-server-5.1.30-0.glibc23.x86_64.rpm ]; then
printf "Error: Unable to find MySQL-server-5.1.30-0.glibc23.x86_64.rpm\n" > /dev/stderr
exit 5
fi
if [ ! -f jdk-6u13-linux-amd64.rpm.bin ]; then
printf "Error: Unable to find jdk-6u13-linux-amd64.rpm.bin\n" > /dev/stderr
exit 6
fi
#if [ ! -f osol.tar.bz2 ]; then
# printf "Error: Unable to find osol.tar.bz2\n"
# exit 7
#fi
if [ ! -f apache-tomcat-6.0.18.tar.gz ]; then
printf "Error: Unable to find apache-tomcat-6.0.18.tar.gz\n" > /dev/stderr
exit 8
fi
if [ ! -f vmops-*.zip ]; then
printf "Error: Unable to find vmops install file\n" > /dev/stderr
exit 9
fi
if [ ! -f catalina ] ; then
printf "Error: Unable to find catalina initscript\n" > /dev/stderr
exit 10
fi
if [ ! -f usageserver ] ; then
printf "Error: Unable to find usageserver initscript\n" > /dev/stderr
exit 11
fi
###### install Apache
# if [ ! -d /usr/local/tomcat ] ; then
echo "installing Apache..."
mkdir -p /usr/local/tomcat
tar xfz apache-tomcat-6.0.18.tar.gz -C /usr/local/tomcat
ln -s /usr/local/tomcat/apache-tomcat-6.0.18 /usr/local/tomcat/current
# fi
# if [ ! -f /etc/profile.d/catalinahome.sh ] ; then
# echo "export CATALINA_HOME=/usr/local/tomcat/current" >> /etc/profile.d/catalinahome.sh
# fi
source /etc/profile.d/catalinahome.sh
# if [ ! -f /etc/init.d/catalina ] ; then
cp -f catalina /etc/init.d
/sbin/chkconfig catalina on
# fi
####### set up usage server as a service
if [ ! -f /etc/init.d/usageserver ] ; then
cp -f usageserver /etc/init.d
/sbin/chkconfig usageserver on
fi
##### set up mysql
if rpm -q MySQL-server MySQL-client > /dev/null 2>&1 ; then true ; else
echo "installing MySQL..."
yum localinstall --nogpgcheck -y MySQL-*.rpm
fi
#### install JDK
echo "installing JDK..."
sh jdk-6u13-linux-amd64.rpm.bin
rm -rf /usr/bin/java
ln -s /usr/java/default/bin/java /usr/bin/java
#### setting up OSOL image
#mkdir -p $CATALINA_HOME/webapps/images
#echo "copying Open Solaris image, this may take a few moments..."
#cp osol.tar.bz2 $CATALINA_HOME/webapps/images
#### deploying database
unzip -o vmops-*.zip
cd vmops-*
sh deploy-server.sh -d "$CATALINA_HOME"
cd db
sh deploy-db.sh "../../$1" templates.sql
exit 0

View File

@ -0,0 +1,38 @@
#
# Copyright 2005 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License, Version 1.0 only
# (the "License"). You may not use this file except in compliance
# with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#ident "%Z%%M% %I% %E% SMI"
#
# This file is /etc/default/init. /etc/TIMEZONE is a symlink to this file.
# This file looks like a shell script, but it is not. To maintain
# compatibility with old versions of /etc/TIMEZONE, some shell constructs
# (i.e., export commands) are allowed in this file, but are ignored.
#
# Lines of this file should be of the form VAR=value, where VAR is one of
# TZ, LANG, CMASK, or any of the LC_* environment variables. value may
# be enclosed in double quotes (") or single quotes (').
#
TZ=GMT
CMASK=022
LANG=en_US.UTF-8

View File

@ -0,0 +1,6 @@
driftfile /var/lib/ntp/ntp.drift
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
server 3.pool.ntp.org

View File

@ -0,0 +1,70 @@
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
# Copyright 2007 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "%Z%%M% %I% %E% SMI"
#
# /etc/nsswitch.dns:
#
# An example file that could be copied over to /etc/nsswitch.conf; it uses
# DNS for hosts lookups, otherwise it does not use any other naming service.
#
# "hosts:" and "services:" in this file are used only if the
# /etc/netconfig file has a "-" for nametoaddr_libs of "inet" transports.
# DNS service expects that an instance of svc:/network/dns/client be
# enabled and online.
passwd: files
group: files
# You must also set up the /etc/resolv.conf file for DNS name
# server lookup. See resolv.conf(4). For lookup via mdns
# svc:/network/dns/multicast:default must also be enabled. See mdnsd(1M)
hosts: files dns
# Note that IPv4 addresses are searched for in all of the ipnodes databases
# before searching the hosts databases.
ipnodes: files dns
networks: files
protocols: files
rpc: files
ethers: files
netmasks: files
bootparams: files
publickey: files
# At present there isn't a 'files' backend for netgroup; the system will
# figure it out pretty quickly, and won't use netgroups at all.
netgroup: files
automount: files
aliases: files
services: files
printers: user files
auth_attr: files
prof_attr: files
project: files
tnrhtp: files
tnrhdb: files

View File

@ -0,0 +1,154 @@
#
# Copyright 2008 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Configuration file for sshd(1m)
# Protocol versions supported
#
# The sshd shipped in this release of Solaris has support for major versions
# 1 and 2. It is recommended due to security weaknesses in the v1 protocol
# that sites run only v2 if possible. Support for v1 is provided to help sites
# with existing ssh v1 clients/servers to transition.
# Support for v1 may not be available in a future release of Solaris.
#
# To enable support for v1 an RSA1 key must be created with ssh-keygen(1).
# RSA and DSA keys for protocol v2 are created by /etc/init.d/sshd if they
# do not already exist, RSA1 keys for protocol v1 are not automatically created.
# Uncomment ONLY ONE of the following Protocol statements.
# Only v2 (recommended)
Protocol 2
# Both v1 and v2 (not recommended)
#Protocol 2,1
# Only v1 (not recommended)
#Protocol 1
# Listen port (the IANA registered port number for ssh is 22)
Port 22
# The default listen address is all interfaces, this may need to be changed
# if you wish to restrict the interfaces sshd listens on for a multi homed host.
# Multiple ListenAddress entries are allowed.
# IPv4 only
#ListenAddress 0.0.0.0
# IPv4 & IPv6
ListenAddress ::
# Port forwarding
AllowTcpForwarding no
# If port forwarding is enabled, specify if the server can bind to INADDR_ANY.
# This allows the local port forwarding to work when connections are received
# from any remote host.
GatewayPorts no
# X11 tunneling options
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
# The maximum number of concurrent unauthenticated connections to sshd.
# start:rate:full see sshd(1) for more information.
# The default is 10 unauthenticated clients.
#MaxStartups 10:30:60
# Banner to be printed before authentication starts.
#Banner /etc/issue
# Should sshd print the /etc/motd file and check for mail.
# On Solaris it is assumed that the login shell will do these (eg /etc/profile).
PrintMotd no
# KeepAlive specifies whether keep alive messages are sent to the client.
# See sshd(1) for detailed description of what this means.
# Note that the client may also be sending keep alive messages to the server.
KeepAlive yes
# Syslog facility and level
SyslogFacility auth
LogLevel info
#
# Authentication configuration
#
# Host private key files
# Must be on a local disk and readable only by the root user (root:sys 600).
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
# Length of the server key
# Default 768, Minimum 512
ServerKeyBits 768
# sshd regenerates the key every KeyRegenerationInterval seconds.
# The key is never stored anywhere except the memory of sshd.
# The default is 1 hour (3600 seconds).
KeyRegenerationInterval 3600
# Ensure secure permissions on users .ssh directory.
StrictModes yes
# Length of time in seconds before a client that hasn't completed
# authentication is disconnected.
# Default is 600 seconds. 0 means no time limit.
LoginGraceTime 600
# Maximum number of retries for authentication
# Default is 6. Default (if unset) for MaxAuthTriesLog is MaxAuthTries / 2
MaxAuthTries 6
MaxAuthTriesLog 3
# Are logins to accounts with empty passwords allowed.
# If PermitEmptyPasswords is no, pass PAM_DISALLOW_NULL_AUTHTOK
# to pam_authenticate(3PAM).
PermitEmptyPasswords no
# To disable tunneled clear text passwords, change PasswordAuthentication to no.
PasswordAuthentication yes
# Use PAM via keyboard interactive method for authentication.
# Depending on the setup of pam.conf(4) this may allow tunneled clear text
# passwords even when PasswordAuthentication is set to no. This is dependent
# on what the individual modules request and is out of the control of sshd
# or the protocol.
PAMAuthenticationViaKBDInt yes
# Are root logins permitted using sshd.
# Note that sshd uses pam_authenticate(3PAM) so the root (or any other) user
# maybe denied access by a PAM module regardless of this setting.
# Valid options are yes, without-password, no.
PermitRootLogin yes
# sftp subsystem
Subsystem sftp /usr/lib/ssh/sftp-server
# SSH protocol v1 specific options
#
# The following options only apply to the v1 protocol and provide
# some form of backwards compatibility with the very weak security
# of /usr/bin/rsh. Their use is not recommended and the functionality
# will be removed when support for v1 protocol is removed.
# Should sshd use .rhosts and .shosts for password less authentication.
IgnoreRhosts yes
RhostsAuthentication no
# Rhosts RSA Authentication
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts.
# If the user on the client side is not root then this won't work on
# Solaris since /usr/bin/ssh is not installed setuid.
RhostsRSAAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication.
#IgnoreUserKnownHosts yes
# Is pure RSA authentication allowed.
# Default is yes
RSAAuthentication yes

View File

@ -0,0 +1,101 @@
*ident "%Z%%M% %I% %E% SMI" /* SVR4 1.5 */
*
* CDDL HEADER START
*
* The contents of this file are subject to the terms of the
* Common Development and Distribution License, Version 1.0 only
* (the "License"). You may not use this file except in compliance
* with the License.
*
* You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
* or http://www.opensolaris.org/os/licensing.
* See the License for the specific language governing permissions
* and limitations under the License.
*
* When distributing Covered Code, include this CDDL HEADER in each
* file and include the License file at usr/src/OPENSOLARIS.LICENSE.
* If applicable, add the following below this CDDL HEADER, with the
* fields enclosed by brackets "[]" replaced with your own identifying
* information: Portions Copyright [yyyy] [name of copyright owner]
*
* CDDL HEADER END
*
*
* SYSTEM SPECIFICATION FILE
*
* moddir:
*
* Set the search path for modules. This has a format similar to the
* csh path variable. If the module isn't found in the first directory
* it tries the second and so on. The default is /kernel /usr/kernel
*
* Example:
* moddir: /kernel /usr/kernel /other/modules
* root device and root filesystem configuration:
*
* The following may be used to override the defaults provided by
* the boot program:
*
* rootfs: Set the filesystem type of the root.
*
* rootdev: Set the root device. This should be a fully
* expanded physical pathname. The default is the
* physical pathname of the device where the boot
* program resides. The physical pathname is
* highly platform and configuration dependent.
*
* Example:
* rootfs:ufs
* rootdev:/sbus@1,f8000000/esp@0,800000/sd@3,0:a
*
* (Swap device configuration should be specified in /etc/vfstab.)
* exclude:
*
* Modules appearing in the moddir path which are NOT to be loaded,
* even if referenced. Note that `exclude' accepts either a module name,
* or a filename which includes the directory.
*
* Examples:
* exclude: win
* exclude: sys/shmsys
* forceload:
*
* Cause these modules to be loaded at boot time, (just before mounting
* the root filesystem) rather than at first reference. Note that
* forceload expects a filename which includes the directory. Also
* note that loading a module does not necessarily imply that it will
* be installed.
*
* Example:
* forceload: drv/foo
* set:
*
* Set an integer variable in the kernel or a module to a new value.
* This facility should be used with caution. See system(4).
*
* Examples:
*
* To set variables in 'unix':
*
* set nautopush=32
* set maxusers=40
*
* To set a variable named 'debug' in the module named 'test_module'
*
* set test_module:debug = 0x13
* set zfs:zfs_arc_max=0x4002000
set zfs:zfs_vdev_cache_size=0

View File

@ -0,0 +1,7 @@
# Specify disks in this file
# D: Data
# C: Cache
# L: Intent Log
# S: Spare
# U: Unused
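# Example entries (hypothetical Solaris device names; the key=value form shown
# here is an assumption, so check the installer scripts for the exact syntax):
# c1t1d0=D
# c1t2d0=C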

View File

@ -0,0 +1,35 @@
# Host Settings
hostname=
domain=
dns1=
dns2=
# Private/Storage Network Settings (required)
storage.ip=
storage.netmask=
storage.gateway=
# Second Storage Network Settings (optional)
storage.ip.2=
storage.netmask.2=
storage.gateway.2=
# Datacenter Settings
pod=
zone=
host=
port=
# Storage Appliance Settings (optional)
# Specify these settings if you would like to use this Storage Server with an external storage appliance
iscsi.iqn=
iscsi.ip=
iscsi.port=
# VMOps IQN (optional)
# Specify if you would like to manually change the IQN of the Storage Server's iSCSI target
vmops.iqn=
# MTU (optional)
mtu=

View File

@ -0,0 +1,106 @@
#!/bin/bash
#
# vmops Script to start and stop the VMOps Agent.
#
# Author: Chiradeep Vittal <chiradeep@vmops.com>
# chkconfig: 2345 99 01
# description: Start up the VMOps agent
# Source function library.
if [ -f /etc/init.d/functions ]
then
. /etc/init.d/functions
fi
_success() {
if [ -f /etc/init.d/functions ]
then
success
else
echo "Success"
fi
}
_failure() {
if [ -f /etc/init.d/functions ]
then
failure
else
echo "Failed"
fi
}
RETVAL=$?
VMOPS_HOME="/usr/local/vmops"
mkdir -p /var/log/vmops
get_pids() {
local i
for i in $(ps -ef | grep agent.sh | grep -v grep | awk '{print $2}');
do
echo $(pwdx $i) | grep "$VMOPS_HOME" | grep agent | awk -F: '{print $1}';
done
}
start() {
local pid=$(get_pids)
echo -n "Starting VMOps agent: "
if [ -f $VMOPS_HOME/agent/agent.sh ];
then
if [ "$pid" == "" ]
then
(cd $VMOPS_HOME/agent; nohup ./agent.sh > /var/log/vmops/vmops.out 2>&1 & )
pid=$(get_pids)
echo $pid > /var/run/vmops.pid
fi
_success
else
_failure
fi
echo
}
stop() {
local pid
echo -n "Stopping VMOps agent: "
for pid in $(get_pids)
do
pgid=$(ps -o pgid -p $pid | tr '\n' ' ' | awk '{print $2}')
pgid=${pgid## }
pgid=${pgid%% }
kill -- -$pgid
done
rm /var/run/vmops.pid
_success
echo
}
status() {
local pids=$(get_pids)
if [ "$pids" == "" ]
then
echo "VMOps agent is not running"
return 1
fi
echo "VMOps agent (pid $pids) is running"
return 0
}
case "$1" in
start) start
;;
stop) stop
;;
status) status
;;
restart) stop
sleep 1.5
start
;;
*) echo $"Usage: $0 {start|stop|status|restart}"
exit 1
;;
esac
exit $RETVAL

View File

@ -0,0 +1,44 @@
#! /bin/bash
stage=$1
option=$2
export VMOPS_HOME=/usr/local/vmops
usage() {
echo "Usage: vsetup [networking|zpool]"
echo " networking: probe NICs, configure networking, and detect disks"
echo " zpool: create ZFS storage pool"
}
if [ "$stage" != "networking" ] && [ "$stage" != "zpool" ] && [ "$stage" != "detectdisks" ]
then
usage
exit 1
fi
if [ "$option" != "" ] && [ "$option" != "-listonly" ]
then
usage
exit 1
fi
$VMOPS_HOME/agent/scripts/installer/run_installer.sh storage $stage $option
if [ $? -eq 0 ]
then
if [ "$stage" == "networking" ]
then
echo "Please edit /etc/vmops/disks.properties and then run \"vsetup zpool\"."
else
if [ "$stage" == "zpool" ]
then
echo "Press enter to reboot the computer..."
read
reboot
fi
fi
fi

Some files were not shown because too many files have changed in this diff.