commit 7249f168d5 (parent 6134f7dfd2)

    more file changes

HACKING
@@ -1,652 +0,0 @@
---------------------------------------------------------------------
THE QUICK GUIDE TO CLOUDSTACK DEVELOPMENT
---------------------------------------------------------------------

=== Overview of the development lifecycle ===

To hack on a CloudStack component, you will generally:

1. Configure the source code:
       ./waf configure --prefix=/home/youruser/cloudstack
   (see below, "./waf configure")

2. Build and install the CloudStack:
       ./waf install
   (see below, "./waf install")

3. Set the CloudStack component up
   (see below, "Running the CloudStack components from source")

4. Run the CloudStack component
   (see below, "Running the CloudStack components from source")

5. Modify the source code

6. Build and install the CloudStack again:
       ./waf install --preserve-config
   (see below, "./waf install")

7. GOTO 4

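For example, a typical full cycle for the Management Server might look
like this (a sketch; the prefix is illustrative, and the database setup
command is described later in this document):

    ./waf configure --prefix=/home/youruser/cloudstack
    ./waf install
    /home/youruser/cloudstack/bin/cloud-setup-databases   # one-time setup
    ./waf run                                             # test, then stop it
    # ... modify the source ...
    ./waf install --preserve-config
    ./waf run
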
=== What is this waf thing in my development lifecycle? ===

waf is a self-contained, advanced build system written by Thomas Nagy,
in the spirit of SCons or the GNU autotools suite.

* To run waf on Linux / Mac: ./waf [...commands...]
* To run waf on Windows:     waf.bat [...commands...]

./waf --help should be your first discovery point to find out both the
configure-time options and the different processes that you can run
using waf.

=== What do the different waf commands above do? ===

1. ./waf configure --prefix=/some/path

   You run this command *once*, in preparation for building, or every
   time you need to change a configure-time variable.

   This runs configure() in wscript, which takes care of setting the
   variables and options that waf will use for compilation and
   installation, including the installation directory (PREFIX).

   For convenience, if you forget to run configure, waf will proceed
   with some default configuration options. By default, PREFIX is
   /usr/local, but you can set it e.g. to /home/youruser/cloudstack
   if you plan to do a non-root install. Be aware that you can install
   the stack as a regular user, but most components need to *run* as
   root.

   ./waf showconfig displays the values of the configure-time options.

2. ./waf

   You run this command to trigger compilation of the modified files.

   This runs the contents of wscript_build, which takes care of
   discovering and describing what needs to be built, which
   build products / sources need to be installed, and where.

3. ./waf install

   You run this command when you want to install the CloudStack.

   If you are going to install for production, you should run this
   process as root. If, conversely, you only want to install the
   stack as your own user and into a directory to which you have
   write permission, it's fine to run waf install as your own user.

   This runs the contents of wscript_build with an option variable
   Options.is_install = True. When this variable is set, waf will
   install the files described in wscript_build. For convenience,
   when you run install, any files that need to be recompiled will
   also be recompiled prior to installation.

   --------------------

   WARNING: each time you do ./waf install, the configuration files
   in the installation directory are *overwritten*.

   There are, however, two ways to get around this:

   a) ./waf install has an option --preserve-config. If you pass
      this option when installing, configuration files are never
      overwritten.

      This option is useful when you have modified source files and
      you need to deploy them on a system that already has the
      CloudStack installed and configured, but you do *not* want to
      overwrite the existing configuration of the CloudStack.

      If, however, you have reconfigured and rebuilt the source
      since the last time you did ./waf install, then you are
      advised to replace the configuration files and set the
      components up again, because some configuration files
      in the source use identifiers that may have changed during
      the last ./waf configure. If this is your case, check
      out the next way:

   b) Every configuration file can be overridden in the source
      without touching the original.

      - Look for said config file X (or X.in) in the source, then
      - create an override/ folder in the folder that contains X, then
      - place a file named X (or X.in) inside override/, then
      - put the desired contents inside X (or X.in).

      Now, every time you run ./waf install, the file that will be
      installed is path/to/override/X.in instead of path/to/X.in.

      This option is useful if you are developing the CloudStack
      and constantly reinstalling it. It guarantees that every
      time you install the CloudStack, the installation will have
      the correct configuration and will be ready to run.

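      For instance (a sketch; db.properties.in is a hypothetical file
      name -- substitute a config file that actually exists in your tree):

          mkdir client/tomcatconf/override
          cp client/tomcatconf/db.properties.in client/tomcatconf/override/
          vi client/tomcatconf/override/db.properties.in   # desired contents
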
=== Running the CloudStack components from source (for debugging / coding) ===

It is not technically possible to run the CloudStack components from
the source. That, however, is fine -- each component can be run
independently from the install directory:

- Management Server

  1) Execute ./waf install as your current user (or as root if the
     installation path is only writable by root).

     WARNING: if any CloudStack configuration files have already
     been configured / altered, they will be *overwritten* by this
     process. Append --preserve-config to ./waf install to prevent
     this from happening, or resort to the override method discussed
     above (search for "override" in this document).

  2) If you haven't done so yet, set up the management server database:

     - either run ./waf deploydb_kvm, or
     - run $BINDIR/cloud-setup-databases

  3) Execute ./waf run as your current user (or as root if the
     installation path is only writable by root). Alternatively,
     you can use ./waf debug, which runs with debugging enabled.

  A condensed session is sketched below.

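  Putting the Management Server steps together (a sketch of a typical
  iteration):

      ./waf install --preserve-config
      ./waf deploydb_kvm     # first time only
      ./waf debug            # or ./waf run
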
- Agent (Linux-only):

  1) Execute ./waf install as your current user (or as root if the
     installation path is only writable by root).

     WARNING: if any CloudStack configuration files have already
     been configured / altered, they will be *overwritten* by this
     process. Append --preserve-config to ./waf install to prevent
     this from happening, or resort to the override method discussed
     above (search for "override" in this document).

  2) If you haven't done so yet, set the Agent up:

     - run $BINDIR/cloud-setup-agent

  3) Execute ./waf run_agent as root.

     This will launch sudo and prompt for your password unless you
     have set sudo up not to ask for one.

- Console Proxy (Linux-only):

  1) Execute ./waf install as your current user (or as root if the
     installation path is only writable by root).

     WARNING: if any CloudStack configuration files have already
     been configured / altered, they will be *overwritten* by this
     process. Append --preserve-config to ./waf install to prevent
     this from happening, or resort to the override method discussed
     above (search for "override" in this document).

  2) If you haven't done so yet, set the Console Proxy up:

     - run $BINDIR/cloud-setup-console-proxy

  3) Execute ./waf run_console_proxy.

     This will launch sudo and prompt for your password unless you
     have set sudo up not to ask for one.

---------------------------------------------------------------------
BUILD SYSTEM TIPS
---------------------------------------------------------------------

=== Integrating compilation and execution of each component into Eclipse ===

To run the Management Server from Eclipse, set up an External Tool of the
Program variety. Put the path to the waf binary in the Location field,
and the source directory in the Working Directory field. Then specify
"install --preserve-config run" as arguments (without the quotes). You can
now use the Run button in Eclipse to execute the Management Server directly
from Eclipse. You can replace run with debug if you want to run the
Management Server with the Debugging Proxy turned on.

To run the Agent or Console Proxy from Eclipse, set up an External Tool of
the Program variety just like in the Management Server case. In there,
however, specify "install --preserve-config run_agent" or
"install --preserve-config run_console_proxy" as arguments instead.
Remember that you need to set sudo up to not ask you for a password and not
require a TTY, otherwise sudo -- implicitly called by waf run_agent or
waf run_console_proxy -- will refuse to work.

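Equivalently, the command line that such an External Tool ends up running
from the source directory is simply:

    ./waf install --preserve-config run

(with run_agent, run_console_proxy or debug substituted for run, as
described above).
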
=== Building targets selectively ===

You can find out the targets of the build system:

    ./waf list_targets

If you want to run a specific task generator,

    ./waf build --targets=patchsubst

should run just that one (and whatever targets are required to build that
one, of course).

=== Common targets ===

* ./waf configure: you must always run configure once, and provide it with
  the target installation paths for when you run install later (see the
  example after this list)
   o --help: will show you all the configure options
   o --no-dep-check: will skip dependency checks for java packages
     needed to compile (saves 20 seconds when redoing the configure)
   o --with-db-user, --with-db-pw, --with-db-host: informs the build
     system of the MySQL configuration needed to set up the management
     server upon install, and to do deploydb

* ./waf build: will compile any source files (and, on some projects, will
  also perform any variable substitutions on any .in files such as the
  MANIFEST files). Build outputs will be in <projectdir>/artifacts/default.

* ./waf install: will compile if not compiled yet, then execute an install
  of the built targets. (A fairly large amount of code -- a couple dozen
  lines -- had to be written to make install work.)

* ./waf run: will run the management server in the foreground

* ./waf debug: will run the management server in the foreground, and open
  port 8787 to connect with the debugger (see the Run / debug options of
  waf --help to change that port)

* ./waf deploydb: deploys the database using the MySQL configuration supplied
  with the configuration options when you did ./waf configure. RUN WAF BUILD
  FIRST AT LEAST ONCE.

* ./waf dist: create a source tarball. These tarballs will be distributed
  independently on our Web site, and will form the source release of the
  CloudStack. It is a self-contained release that can be built and
  installed with ./waf anywhere.

* ./waf clean: remove known build products

* ./waf distclean: remove the artifacts/ directory altogether

* ./waf uninstall: uninstall all installed files

* ./waf rpm: build RPM packages
   o if the build fails because the system lacks dependencies from our
     other modules, waf will attempt to install RPMs from the repos,
     then try the build
   o it will place the built packages in artifacts/rpmbuild/

* ./waf deb: build Debian packages
   o if the build fails because the system lacks dependencies from our
     other modules, waf will attempt to install DEBs from the repos,
     then try the build
   o it will place the built packages in artifacts/debbuild/

* ./waf uninstallrpms: removes all Cloud.com RPMs from a system (but not
  logfiles or modified config files)

* ./waf viewrpmdeps: displays RPM dependencies declared in the RPM specfile

* ./waf installrpmdeps: runs Yum to install the packages required to build
  the CloudStack

* ./waf uninstalldebs: removes all Cloud.com DEBs from a system (AND logfiles
  AND modified config files)

* ./waf viewdebdeps: displays DEB dependencies declared in the project
  debian/control file

* ./waf installdebdeps: runs aptitude to install the packages required to
  build our software

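Here is the configure example referenced above (a sketch; the database
values are illustrative -- adjust them to your MySQL setup):

    ./waf configure --prefix=/home/youruser/cloudstack \
        --with-db-user=cloud --with-db-pw=cloud --with-db-host=127.0.0.1
    ./waf showconfig    # verify the resulting configuration
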
=== Overriding certain source files ===

Earlier in this document we explored overriding configuration files.
Overrides are not limited to configuration files.

If you want to provide your own server-setup.xml or SQL files in client/setup:

* create a directory named override inside the client/setup folder
* place the file that should override a file in client/setup there

There's also override support in client/tomcatconf and agent/conf.

=== Environment substitutions ===

Any file named "something.in" has its tokens (@SOMETOKEN@) automatically
substituted with the corresponding build environment variable. The build
environment variables are generally constructed at configure time, are
controllable via command-line parameters to waf configure, and should be
available as a list of variables inside the file
artifacts/c4che/build.default.py.

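As a hypothetical illustration (the file and token names here are examples,
not necessarily present in your tree), a file named db.properties.in
containing

    cloud.home=@PREFIX@

would be installed as db.properties, with @PREFIX@ replaced by the prefix
chosen at ./waf configure time.
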
=== The prerelease mechanism ===

The prerelease mechanism (--prerelease=BRANCHNAME) allows developers and
builders to build packages with pre-release Release tags. The Release tags
are constructed in such a way that both the build number and the branch name
are included, so developers can push these packages to repositories and
upgrade them using yum or aptitude, without having to delete and reinstall
packages manually every time a new build is done. Any package built
with the prerelease mechanism gets a standard X.Y.Z version number -- and,
due to the way that the prerelease Release tags are concocted, always upgrades
any older prerelease package already present on any system. The prerelease
mechanism must never be used to create packages that are intended to be
released as stable software to the general public.

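For example (a sketch; this assumes the flag is passed alongside the
packaging commands described above):

    ./waf rpm --prerelease=mybranch
    ./waf deb --prerelease=mybranch
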
Relevant documentation:

http://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Version
http://fedoraproject.org/wiki/PackageNamingGuidelines#Pre-Release_packages

Everything comes together on the build server in the following way:

=== SCCS info ===

When building a source distribution (waf dist), or RPM/DEB distributions
(waf deb / waf rpm), waf will automatically detect the relevant source code
control information if the git command is present on the machine where waf
is run, and it will write the information to a file called sccs-info inside
the source tarball / install it into /usr/share/doc/cloud*/sccs-info when
installing the packages.

If this source code control information cannot be calculated, then the old
sccs-info file is preserved across dist runs if it exists; if it did
not exist before, the fact that the source could not be properly tracked
down to a repository is noted in the file.

=== Debugging the build system ===

Almost all targets have names. waf build -vvvvv --zones=task will give you
the task names that you can use in --targets.

---------------------------------------------------------------------
UNDERSTANDING THE BUILD SYSTEM
---------------------------------------------------------------------

=== Documentation for the build system ===

The first and foremost reference material:

- http://freehackers.org/~tnagy/wafbook/index.html

Examples:

- http://code.google.com/p/waf/wiki/CodeSnippets
- http://code.google.com/p/waf/w/list

FAQ:

- http://code.google.com/p/waf/wiki/FAQ

=== Why waf ===

The CloudStack uses waf to build itself. waf is a relative newcomer
to the build system world; it borrows concepts from SCons and
other later-generation build systems:

- waf is very flexible and rich; unlike other build systems, it covers
  the entire life cycle, from compilation to installation to
  uninstallation. It also supports dist (create source tarball),
  distcheck (check that the source tarball compiles and installs),
  autoconf-like checks for dependencies at compilation time,
  and more.

- waf is self-contained. A single file, distributed with the project,
  enables everything to be built, with only a dependency on Python,
  which is freely available and shipped with all Linux distributions.

- waf also supports building projects written in multiple languages
  (in the case of the CloudStack, we build from C, Java and Python).

- since waf is written in Python, the entire library of the Python
  language is available to use in the build process.

=== Hacking on the build system: what are these wscript files? ===

1. wscript: contains most commands you can run from within waf
2. wscript_configure: contains the process that discovers the software
   on the system and configures the build to fit that
3. wscript_build: contains a manifest of *what* is built and installed

Refer to the waf book for general information on waf:
http://freehackers.org/~tnagy/wafbook/index.html

=== What happens when waf runs ===

When you run waf, this happens behind the scenes:

- When you run waf for the first time, it unpacks itself to a hidden
  directory .waf-1.X.Y.MD5SUM, including the main program and all
  the Python libraries it provides and needs.

- Immediately after unpacking itself, waf reads the wscript file
  at the root of the source directory. After parsing this file and
  loading the functions defined there, it reads wscript_build and
  generates a function build() based on it.

- After loading the build scripts as explained above, waf calls
  the functions you specified in the command line.

So, for example, ./waf configure build install will:

* call configure() from wscript,
* call build() loaded from the contents of wscript_build,
* call build() once more but with Options.is_install = True.

As part of build(), waf invokes ant to build the Java portion of our
stack.

=== How and why we use ant within waf ===

By now, you have probably noticed that we do, indeed, ship ant
build files in the CloudStack. During the build process, waf calls
ant directly to build the Java portions of our stack, and it uses
the resulting JAR files to perform the installation.

The reason we do this rather than use the native waf capabilities
for building Java projects is simple: by using ant, we can leverage
the built-in support for ant in Eclipse and many other IDEs. Another
reason is that Java developers are familiar with ant, so adding a new
JAR file or modifying what gets built into the existing JAR files is
easy for Java developers.

If you add to the ant build files a new ant target that uses the
compile-java macro, waf will automatically pick it up, along with its
depends= and JAR name attributes. In general, all you need to do is
add the produced JAR name to the packaging manifests (cloud.spec and
debian/{name-of-package}.install).

---------------------------------------------------------------------
FOR ANT USERS
---------------------------------------------------------------------

If you are using Ant directly instead of using waf, these instructions
apply to you.

In this document, the example instructions assume a local source repository
rooted at c:\root; you are free to locate it anywhere you like.

3.1 Set up the developer build type

1) Go to the c:\cloud\java\build directory.

2) Copy the file build-cloud.properties.template to build-cloud.properties,
   then modify some of the parameters to match your local setup. The
   template properties file has content like this:

   debug=true
   debuglevel=lines,vars,source
   tomcat.home=$TOMCAT_HOME  --> change to your local Tomcat root
                                 directory, such as c:/apache-tomcat-6.0.18
   debug.jvmarg=-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n
   deprecation=off
   build.type=developer
   target.compat.version=1.5
   source.compat.version=1.5
   branding.name=default

3) Make sure the following environment variables and Path are set.

   Set environment variables:

   CATALINA_HOME:
   JAVA_HOME:
   CLOUD_HOME:
   MYSQL_HOME:

   Update the Path to include:

   MYSQL_HOME\bin

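   For example, on Windows (illustrative paths; adjust them to your
   machine):

   set JAVA_HOME=c:\jdk1.6.0
   set CATALINA_HOME=c:\apache-tomcat-6.0.18
   set CLOUD_HOME=c:\cloud
   set MYSQL_HOME=c:\mysql
   set PATH=%PATH%;%MYSQL_HOME%\bin
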
4) Clone a full directory tree of C:\cloud\java\build\deploy\production to
   C:\cloud\java\build\deploy\developer.

   You can use Windows Explorer to copy the directory tree over. Please
   note: during your daily development process, whenever you see updates in
   C:\cloud\java\build\deploy\production, be sure to sync them into
   C:\cloud\java\build\deploy\developer.

3.2 Common build instructions

After you have set up the build type, you are ready to build and run the
Management Server locally.

   cd java
   python waf configure build install

More at Build system.

This will install the management server and its prerequisites to the
appropriate place (your Tomcat instance on Windows, /usr/local on Linux).
It will also install the agent to /usr/local/cloud/agent (this will change
in the future).

4. Database and Server deployment

After a successful management server build (the database deployment scripts
use some of the artifacts from the build process), you can use the database
deployment script to deploy and initialize the database. You can find the
deployment scripts in C:/cloud/java/build/deploy/db. deploy-db.sh is used
to create and populate your DB instance; please take a look at the content
of deploy-db.sh for more details.

Before you run the scripts, you should edit
C:/cloud/java/build/deploy/developer/db/server-setup-dev.xml to allocate
Public and Private IP ranges for your development setup. Ensure that the
ranges you pick are not allocated to others.

Customized VM templates to be populated are in
C:/cloud/java/build/deploy/developer/db/templates-dev.sql. Edit this file
to customize the templates to your needs.

Deploy the DB by running:

   ./deploy-db.sh ../developer/db/server-setup-dev.xml ../developer/db/templates-dev.xml

4.1 Management Server Deployment

   ant build-server

Build the Management Server.

   ant deploy-server

Deploy the Management Server software to the Tomcat environment.

   ant debug

Start the Management Server in debug mode. The JVM debug options can be
found in build-cloud.properties.

   ant run

Start the Management Server in normal mode.

5. Agent deployment

After a successful build, you can find the build artifacts in the
distribution directory -- in this example, for the developer build type,
c:\cloud\java\dist\developer. In particular, if you have run the

   ant package-agent

build command, you should see the agent software packaged in a single file
named agent.zip under c:\cloud\java\dist\developer, together with the agent
deployment script deploy-agent.sh.

5.1 Agent Type

The agent software can be deployed and configured to serve different roles
at run time. In the current implementation, there are three types of agent
configuration, called Computing Server, Routing Server and Storage Server.

* When the agent software is configured to run as a Computing Server, it is
  responsible for hosting user VMs. The agent software should run in the
  Xen Dom0 system on the computing server machine.

* When the agent software is configured to run as a Routing Server, it is
  responsible for hosting routing VMs for user virtual networks and console
  proxy system VMs. The routing server serves as the bridge to the outside
  network, so the machine running the agent software should have at least
  two network interfaces: one towards the outside network, and one
  participating in the internal VMOps management network. Like the
  computing server, the agent software on a routing server should also run
  in the Xen Dom0 system.

* When the agent software is configured to run as a Storage Server, it is
  responsible for providing storage service for all VMs. The storage
  service is based on ZFS running on a Solaris system, so the agent
  software on a storage server runs under Solaris (actually a Solaris VM).
  Dom0 systems on the computing and routing servers can access the storage
  service through an iSCSI initiator. The storage volume is eventually
  mounted on the Dom0 system and made available to DomU VMs through our
  agent software.

5.2 Resource sharing

All developers can share the same set of agent server machines for
development. To make this possible, the concept of an instance appears in
various places:

* VM names. VM names are structured names; they contain an instance section
  that can identify VMs from different VMOps cloud instances. The VMOps
  cloud instance name is configured in the server configuration parameter
  AgentManager/instance.name.
* iSCSI initiator mount point. For Computing servers and Routing servers,
  the mount point can distinguish the mounted DomU VM images from different
  agent deployments. The mount location can be specified in the
  agent.properties file with a name-value pair named mount.parent.
* iSCSI target allocation point. For Storage servers, this allocation point
  can distinguish the storage allocation from different storage agent
  deployments. The allocation point can be specified in the
  agent.properties file with a name-value pair named parent.

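For example (hypothetical values for a developer named alice), the relevant
agent.properties entries might read:

   instance=alice
   mount.parent=/mnt/alice
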
5.4 Deploy agent software

Before running the deployment scripts, first copy the build artifacts
agent.zip and deploy-agent.sh to your personal development directory on the
agent server machines. By our current convention, your personal development
directory is usually located at /root/<your name>. In the following
example, the agent package and deployment scripts are copied to
test0.lab.vmops.com and the deployment script file is marked as executable.

On the build machine:

   scp agent.zip root@test0:/root/<your name>
   scp deploy-agent.sh root@test0:/root/<your name>

On the agent server machine:

   chmod +x deploy-agent.sh

5.4.1 Deploy agent on computing server

   deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t computing -m expert

5.4.2 Deploy agent on routing server

   deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t routing -m expert

5.4.3 Deploy agent on storage server

   deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t storage -m expert

5.5 Configure agent

After you have deployed the agent software, you should configure the agent
by editing the agent.properties file under the /root/<your name>/agent/conf
directory on each of the Routing, Computing and Storage servers. Add/edit
the following properties; the rest are defaults that get populated by the
agent at runtime.

   workers=3
   host=<replace with your management server IP>
   port=8250
   pod=<replace with your pod id>
   zone=<replace with your zone id>
   instance=<your unique instance name>
   developer=true

The following is a sample agent.properties file for a Routing server:

   workers=3
   id=1
   port=8250
   pod=RC
   storage=comstar
   zone=RC
   type=routing
   private.network.nic=xenbr0
   instance=RC
   public.network.nic=xenbr1
   developer=true
   host=192.168.1.138

5.6 Running the agent

Edit /root/<your name>/agent/conf/log4j-cloud.xml to update the location of
the logs to somewhere under /root/<your name>.

Once you have deployed and configured the agent software, you are ready to
launch it. Under the agent root directory (in our example,
/root/<your name>/agent) there is a script file named run.sh; you can use
it to launch the agent.

Launch the agent as a detached background process:

   nohup ./run.sh &

Launch the agent in interactive mode:

   ./run.sh

Launch the agent in debug mode; for example, the following command makes
the JVM listen on TCP port 8787:

   ./run.sh -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n

If the agent is launched in debug mode, you may use the Eclipse IDE to
debug it remotely. Please note: when you are sharing an agent server
machine with others, choose a TCP port that is not in use by someone else.

Please also note that run.sh also searches the /etc/cloud directory for
agent.properties; make sure it uses the correct agent.properties file!

5.7 Stopping the agents

The PID of the agent process is in /var/run/agent.<Instance>.pid.

To stop the agent:

   kill <pid of agent>

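For example, using the pid file directly (with the sample instance name RC
from above):

   kill $(cat /var/run/agent.RC.pid)
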
INSTALL
@@ -1,155 +0,0 @@
---------------------------------------------------------------------
TABLE OF CONTENTS
---------------------------------------------------------------------

1. Really quick start: building and installing a production stack
2. Post-install: setting the CloudStack components up
3. Installation paths: where the stack is installed on your system
4. Uninstalling the CloudStack from your system

---------------------------------------------------------------------
REALLY QUICK START: BUILDING AND INSTALLING A PRODUCTION STACK
---------------------------------------------------------------------

You have two options. Choose one:

a) Building distribution packages from the source and installing them
b) Building from the source and installing directly from there

=== I want to build and install distribution packages ===

This is the recommended way to run your CloudStack cloud. The
advantages are that dependencies are taken care of automatically
for you, and you can verify the integrity of the installed files
using your system's package manager.

1. As root, install the build dependencies.

   a) Fedora / CentOS: ./waf installrpmdeps

   b) Ubuntu: ./waf installdebdeps

2. As a non-root user, build the CloudStack packages.

   a) Fedora / CentOS: ./waf rpm

   b) Ubuntu: ./waf deb

3. As root, install the CloudStack packages.
   You can choose which components to install on your system.

   a) Fedora / CentOS: the installable RPMs are in artifacts/rpmbuild;
      install as root: rpm -ivh artifacts/rpmbuild/RPMS/{x86_64,noarch,i386}/*.rpm

   b) Ubuntu: the installable DEBs are in artifacts/debbuild;
      install as root: dpkg -i artifacts/debbuild/*.deb

4. Configure and start the components you intend to run.
   Consult the Installation Guide to find out how to
   configure each component, and "Installation paths" for information
   on where programs, initscripts and config files are installed.

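Condensed, the Fedora / CentOS route above is (a sketch; sudo stands in
for "as root"):

   sudo ./waf installrpmdeps
   ./waf rpm
   sudo rpm -ivh artifacts/rpmbuild/RPMS/{x86_64,noarch,i386}/*.rpm
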
=== I want to build and install directly from the source ===

This is the recommended way to run your CloudStack cloud if you
intend to modify the source, if you intend to port the CloudStack to
another distribution, or if you intend to run the CloudStack on a
distribution for which packages are not built.

1. As root, install the build dependencies.
   See below for a list.

2. As non-root, configure the build.
   See below to discover configuration options.

      ./waf configure

3. As non-root, build the CloudStack.
   To learn more, see "Quick guide to developing, building and
   installing from source" below.

      ./waf build

4. As root, install the runtime dependencies.
   See below for a list.

5. As root, install the CloudStack.

      ./waf install

6. Configure and start the components you intend to run.
   Consult the Installation Guide to find out how to
   configure each component, and "Installation paths" for information
   on where to find programs, initscripts and config files mentioned
   in the Installation Guide (paths may vary).

=== Dependencies of the CloudStack ===

- Build dependencies:

  1. FIXME DEPENDENCIES LIST THEM HERE

- Runtime dependencies:

  2. FIXME DEPENDENCIES LIST THEM HERE

---------------------------------------------------------------------
INSTALLATION PATHS: WHERE THE STACK IS INSTALLED ON YOUR SYSTEM
---------------------------------------------------------------------

The CloudStack build system installs files on a variety of paths, each
one of which is selectable when building from source.

- $PREFIX:
  the default prefix where the entire stack is installed
  defaults to /usr/local on source builds
  defaults to /usr on package builds

- $SYSCONFDIR/cloud:
  the prefix for CloudStack configuration files
  defaults to $PREFIX/etc/cloud on source builds
  defaults to /etc/cloud on package builds

- $SYSCONFDIR/init.d:
  the prefix for CloudStack initscripts
  defaults to $PREFIX/etc/init.d on source builds
  defaults to /etc/init.d on package builds

- $BINDIR:
  the CloudStack installs programs there
  defaults to $PREFIX/bin on source builds
  defaults to /usr/bin on package builds

- $LIBEXECDIR:
  the CloudStack installs service runners there
  defaults to $PREFIX/libexec on source builds
  defaults to /usr/libexec on package builds (/usr/bin on Ubuntu)

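For example, a source build configured with
--prefix=/home/youruser/cloudstack resolves the paths above to:

   programs:        /home/youruser/cloudstack/bin
   service runners: /home/youruser/cloudstack/libexec
   config files:    /home/youruser/cloudstack/etc/cloud
   initscripts:     /home/youruser/cloudstack/etc/init.d
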
---------------------------------------------------------------------
UNINSTALLING THE CLOUDSTACK FROM YOUR SYSTEM
---------------------------------------------------------------------

- If you installed the CloudStack using packages, use your operating
  system package manager to remove the CloudStack packages.

  a) Fedora / CentOS, as root:  rpm -qa | grep ^cloud- | xargs rpm -e

  b) Ubuntu, as root:           aptitude purge '~ncloud'

- If you installed from a source tree:

      ./waf uninstall

README
@@ -1,52 +0,0 @@
Hello, and thanks for downloading the Cloud.com CloudStack™! The
Cloud.com CloudStack™ is Open Source Software that allows
organizations to build Infrastructure as a Service (IaaS) clouds.
Working with server, storage, and networking equipment of your
choice, the CloudStack provides a turn-key software stack that
dramatically simplifies the process of deploying and managing a
cloud.

---------------------------------------------------------------------
HOW TO INSTALL THE CLOUDSTACK
---------------------------------------------------------------------

Please refer to the document INSTALL distributed with the source.

---------------------------------------------------------------------
HOW TO HACK ON THE CLOUDSTACK
---------------------------------------------------------------------

Please refer to the document HACKING distributed with the source.

---------------------------------------------------------------------
BE PART OF THE CLOUD.COM COMMUNITY!
---------------------------------------------------------------------

We are more than happy to have you ask us questions, hack our source
code, and receive your contributions.

* Our forums are available at http://cloud.com/community .
* If you would like to modify / extend / hack on the CloudStack source,
  refer to the file HACKING for more information.
* If you find bugs, please log on to http://bugs.cloud.com/ and file
  a report.
* If you have patches to send us, get in touch with us at info@cloud.com,
  or file them as attachments in our bug tracker above.

---------------------------------------------------------------------
Cloud.com's contact information is:

20400 Stevens Creek Blvd
Suite 390
Cupertino, CA 95014
Tel: +1 (888) 384-0962

This software is OSI certified Open Source Software. OSI Certified is a
certification mark of the Open Source Initiative.
README.html
@@ -512,6 +512,13 @@ Also see [[AdvancedOptions]]</pre>
|
||||
</div>
|
||||
<!--POST-SHADOWAREA-->
|
||||
<div id="storeArea">
|
||||
<div title="(default) on http://tiddlyvault.tiddlyspot.com/#%5B%5BDisableWikiLinksPlugin%20(TiddlyTools)%5D%5D" modifier="(System)" created="201009040211" tags="systemServer" changecount="1">
|
||||
<pre>|''Type:''|file|
|
||||
|''URL:''|http://tiddlyvault.tiddlyspot.com/#%5B%5BDisableWikiLinksPlugin%20(TiddlyTools)%5D%5D|
|
||||
|''Workspace:''|(default)|
|
||||
|
||||
This tiddler was automatically created to record the details of this server</pre>
|
||||
</div>
|
||||
<div title="AntInformation" creator="RuddO" modifier="RuddO" created="201008072228" changecount="1">
|
||||
<pre>---------------------------------------------------------------------
|
||||
FOR ANT USERS
|
||||
@@ -702,21 +709,18 @@ Once this command is done, the packages will be built in the directory {{{artifa
# As a non-root user, run the command {{{./waf deb}}} in the source directory.
Once this command is done, the packages will be built in the directory {{{artifacts/debbuild}}}.</pre>
</div>
<div title="Building from the source and installing directly from there" creator="RuddO" modifier="RuddO" created="201008080022" modified="201008081327" changecount="14">
|
||||
<pre>!Obtain the source for the CloudStack
|
||||
<div title="Building from the source and installing directly from there" creator="RuddO" modifier="RuddO" created="201008080022" modified="201009040235" changecount="20">
|
||||
<pre>You need to do the following steps on each machine that will run a CloudStack component.
|
||||
!Obtain the source for the CloudStack
|
||||
If you aren't reading this from a local copy of the source code, see [[Obtaining the source]].
|
||||
!Prepare your development environment
|
||||
See [[Preparing your development environment]].
|
||||
!Configure the build on the builder machine
|
||||
!Prepare your environment
|
||||
See [[Preparing your environment]].
|
||||
!Configure the build
|
||||
As non-root, run the command {{{./waf configure}}}. See [[waf configure]] to discover configuration options for that command.
|
||||
!Build the CloudStack on the builder machine
|
||||
!Build the CloudStack
|
||||
As non-root, run the command {{{./waf build}}}. See [[waf build]] for an explanation.
|
||||
!Install the CloudStack on the target systems
|
||||
On each machine where you intend to run a CloudStack component:
|
||||
# upload the entire source code tree after compilation, //ensuring that the source ends up in the same path as the machine in which you compiled it//,
|
||||
## {{{rsync}}} is [[usually very handy|Using rsync to quickly transport the source tree to another machine]] for this
|
||||
# in that newly uploaded directory of the target machine, run the command {{{./waf install}}} //as root//.
|
||||
Consult [[waf install]] for information on installation.</pre>
|
||||
!Install the CloudStack
|
||||
Run the command {{{./waf install}}} //as root//. Consult [[waf install]] for information on installation.</pre>
|
||||
</div>
|
||||
<div title="Changing the build, install and packaging processes" creator="RuddO" modifier="RuddO" created="201008081215" modified="201008081309" tags="fixme" changecount="15">
|
||||
<pre>!Changing the [[configuration|waf configure]] process
|
||||
@ -737,11 +741,91 @@ See the files in the {{{debian/}}} folder.</pre>
|
||||
<div title="CloudStack" creator="RuddO" modifier="RuddO" created="201008072205" changecount="1">
|
||||
<pre>The Cloud.com CloudStack is an open source software product that enables the deployment, management, and configuration of multi-tier and multi-tenant infrastructure cloud services by enterprises and service providers.</pre>
|
||||
</div>
|
||||
<div title="CloudStack build dependencies" creator="RuddO" modifier="RuddO" created="201008081310" tags="fixme" changecount="1">
|
||||
<pre>Not done yet!</pre>
|
||||
<div title="CloudStack build dependencies" creator="RuddO" modifier="RuddO" created="201008081310" modified="201009040226" changecount="20">
|
||||
<pre>Prior to building the CloudStack, you need to install the following software packages in your system.
|
||||
# Sun Java 1.6
|
||||
## You must install the Java Development Kit with {{{javac}}}, not just the Java Runtime Environment
|
||||
## The commands {{{java}}} and {{{javac}}} must be found in your {{{PATH}}}
|
||||
# Apache Tomcat
|
||||
## If you are using the official Apache binary distribution, set the environment variable {{{TOMCAT_HOME}}} to point to the Apache Tomcat directory
|
||||
# MySQL
|
||||
## At the very minimum, you need to have the client and libraries installed
|
||||
## If your development machine is also going to be the database server, you need to have the server installed and running as well
|
||||
# Python 2.6
|
||||
## Ensure that the {{{python}}} command is in your {{{PATH}}}
|
||||
## Do ''not'' install Cygwin Python!
|
||||
# The MySQLdb module for Python 2.6
|
||||
## If you use Windows, you can find a [[pre-built package here|http://soemin.googlecode.com/files/MySQL-python-1.2.3c1.win32-py2.6.exe]]
|
||||
# The Bourne-again shell (also known as bash)
|
||||
# GNU coreutils
|
||||
''Note for Windows users'': Some of the packages in the above list are only available on Windows through Cygwin. If that is your case, install them using Cygwin and remember to include the Cygwin {{{bin/}}} directory in your PATH. Under no circumstances install Cygwin Python! Use the Python for Windows official installer instead.
|
||||
!Additional dependencies for Linux development environments
|
||||
# GCC (only needed on Linux)
|
||||
# glibc-devel / glibc-dev
|
||||
# The Java packages (usually available in your distribution):
|
||||
## commons-collections
|
||||
## commons-dbcp
|
||||
## commons-logging
|
||||
## commons-logging-api
|
||||
## commons-pool
|
||||
## commons-httpclient
|
||||
## ws-commons-util
|
||||
# useradd
|
||||
# userdel</pre>
|
||||
</div>
|
||||
<div title="CloudStack run-time dependencies" creator="RuddO" modifier="RuddO" created="201008081310" tags="fixme" changecount="1">
|
||||
<pre>Not done yet!</pre>
|
||||
<div title="CloudStack run-time dependencies" creator="RuddO" modifier="RuddO" created="201008081310" modified="201009040225" tags="fixme" changecount="16">
|
||||
<pre>The following software / programs must be correctly installed in the machines where you will run a CloudStack component. This list is by no means complete yet, but it will be soon.
|
||||
|
||||
''Note for Windows users'': Some of the packages in the lists below are only available on Windows through Cygwin. If that is your case, install them using Cygwin and remember to include the Cygwin {{{bin/}}} directory in your PATH. Under no circumstances install Cygwin Python! Use the Python for Windows official installer instead.
|
||||
!Run-time dependencies common to all components of the CloudStack
|
||||
# bash
|
||||
# coreutils
|
||||
# Sun Java 1.6
|
||||
## You must install the Java Development Kit with {{{javac}}}, not just the Java Runtime Environment
|
||||
## The commands {{{java}}} and {{{javac}}} must be found in your {{{PATH}}}
|
||||
# Python 2.6
|
||||
## Ensure that the {{{python}}} command is in your {{{PATH}}}
|
||||
## Do ''not'' install Cygwin Python!
|
||||
# The Java packages (usually available in your distribution):
|
||||
## commons-collections
|
||||
## commons-dbcp
|
||||
## commons-logging
|
||||
## commons-logging-api
|
||||
## commons-pool
|
||||
## commons-httpclient
|
||||
## ws-commons-util
|
||||
!Management Server-specific dependencies
|
||||
# Apache Tomcat
|
||||
## If you are using the official Apache binary distribution, set the environment variable {{{TOMCAT_HOME}}} to point to the Apache Tomcat directory
|
||||
# MySQL
|
||||
## At the very minimum, you need to have the client and libraries installed
|
||||
## If you will be running the Management Server in the same machine that will run the database server, you need to have the server installed and running as well
|
||||
# The MySQLdb module for Python 2.6
|
||||
## If you use Windows, you can find a [[pre-built package here|http://soemin.googlecode.com/files/MySQL-python-1.2.3c1.win32-py2.6.exe]]
|
||||
# openssh-clients (provides the ssh-keygen command)
|
||||
# mkisofs (provides the genisoimage command)</pre>
|
||||
</div>
|
||||
<div title="Database migration infrastructure" creator="RuddO" modifier="RuddO" created="201009011837" modified="201009011852" changecount="14">
|
||||
<pre>To support incremental migration from one version to another without having to redeploy the database, the CloudStack supports an incremental schema migration mechanism for the database.
|
||||
!!!How does it work?
|
||||
When the database is deployed for the first time with [[waf deploydb]] or the command {{{cloud-setup-databases}}}, a row is written to the {{{configuration}}} table, named {{{schema.level}}} and containing the current schema level. This schema level row comes from the file {{{setup/db/schema-level.sql}}} in the source (refer to the [[Installation paths]] topic to find out where this file is installed in a running system).
|
||||
|
||||
This value is used by the database migrator {{{cloud-migrate-databases}}} (source {{{setup/bindir/cloud-migrate-databases.in}}}) to determine the starting schema level. The database migrator has a series of classes -- each class represents a step in the migration process and is usually tied to the execution of a SQL file stored in {{{setup/db}}}. To migrate the database, the database migrator:
|
||||
# walks the list of steps it knows about,
|
||||
# generates a list of steps sorted by the order they should be executed in,
|
||||
# executes each step in order
|
||||
# at the end of each step, records the new schema level to the database table {{{configuration}}}
|
||||
For more information, refer to the database migrator source -- it is documented.
|
||||
!!!What impact does this have on me as a developer?
|
||||
Whenever you need to evolve the schema of the database:
|
||||
# write a migration SQL script and store it in {{{setup/db}}},
|
||||
# include your schema changes in the appropriate SQL file {{{create-*.sql}}} too (as the database is expected to be at its latest evolved schema level right after deploying a fresh database)
|
||||
# write a class in {{{setup/bindir/cloud-migrate-databases.in}}}, describing the migration step; in detail:
|
||||
## the schema level your migration step expects the database to be in,
|
||||
## the schema level your migration step will leave your database in (presumably the latest schema level, which you will have to choose!),
|
||||
## and the name / description of the step
|
||||
# bump the schema level in {{{setup/db/schema-level.sql}}} to the latest schema level
|
||||
Otherwise, ''end-user migration will fail catastrophically''.</pre>
|
||||
</div>
|
||||
<div title="DefaultTiddlers" creator="RuddO" modifier="RuddO" created="201008072205" modified="201008072257" changecount="4">
|
||||
<pre>[[Welcome]]</pre>
|
||||
@@ -749,13 +833,115 @@ See the files in the {{{debian/}}} folder.</pre>
<div title="Development conventions" creator="RuddO" modifier="RuddO" created="201008081334" modified="201008081336" changecount="4">
<pre>#[[Source layout guide]]</pre>
</div>
<div title="DisableWikiLinksPlugin" modifier="ELSDesignStudios" created="200512092239" modified="200807230133" tags="systemConfig" server.type="file" server.host="www.tiddlytools.com" server.page.revision="200807230133">
|
||||
<pre>/***
|
||||
|Name|DisableWikiLinksPlugin|
|
||||
|Source|http://www.TiddlyTools.com/#DisableWikiLinksPlugin|
|
||||
|Version|1.6.0|
|
||||
|Author|Eric Shulman|
|
||||
|License|http://www.TiddlyTools.com/#LegalStatements|
|
||||
|~CoreVersion|2.1|
|
||||
|Type|plugin|
|
||||
|Description|selectively disable TiddlyWiki's automatic ~WikiWord linking behavior|
|
||||
This plugin allows you to disable TiddlyWiki's automatic ~WikiWord linking behavior, so that WikiWords embedded in tiddler content will be rendered as regular text, instead of being automatically converted to tiddler links. To create a tiddler link when automatic linking is disabled, you must enclose the link text within {{{[[...]]}}}.
|
||||
!!!!!Usage
|
||||
<<<
|
||||
You can block automatic WikiWord linking behavior for any specific tiddler by ''tagging it with<<tag excludeWikiWords>>'' (see configuration below) or, check a plugin option to disable automatic WikiWord links to non-existing tiddler titles, while still linking WikiWords that correspond to existing tiddlers titles or shadow tiddler titles. You can also block specific selected WikiWords from being automatically linked by listing them in [[DisableWikiLinksList]] (see configuration below), separated by whitespace. This tiddler is optional and, when present, causes the listed words to always be excluded, even if automatic linking of other WikiWords is being permitted.
|
||||
|
||||
Note: WikiWords contained in default ''shadow'' tiddlers will be automatically linked unless you select an additional checkbox option lets you disable these automatic links as well, though this is not recommended, since it can make it more difficult to access some TiddlyWiki standard default content (such as AdvancedOptions or SideBarTabs)
|
||||
<<<
|
||||
!!!!!Configuration
|
||||
<<<
|
||||
<<option chkDisableWikiLinks>> Disable ALL automatic WikiWord tiddler links
|
||||
<<option chkAllowLinksFromShadowTiddlers>> ... except for WikiWords //contained in// shadow tiddlers
|
||||
<<option chkDisableNonExistingWikiLinks>> Disable automatic WikiWord links for non-existing tiddlers
|
||||
Disable automatic WikiWord links for words listed in: <<option txtDisableWikiLinksList>>
|
||||
Disable automatic WikiWord links for tiddlers tagged with: <<option txtDisableWikiLinksTag>>
|
||||
<<<
|
||||
!!!!!Revisions
|
||||
<<<
|
||||
2008.07.22 [1.6.0] hijack tiddler changed() method to filter disabled wiki words from internal links[] array (so they won't appear in the missing tiddlers list)
|
||||
2007.06.09 [1.5.0] added configurable txtDisableWikiLinksTag (default value: "excludeWikiWords") to allows selective disabling of automatic WikiWord links for any tiddler tagged with that value.
|
||||
2006.12.31 [1.4.0] in formatter, test for chkDisableNonExistingWikiLinks
|
||||
2006.12.09 [1.3.0] in formatter, test for excluded wiki words specified in DisableWikiLinksList
|
||||
2006.12.09 [1.2.2] fix logic in autoLinkWikiWords() (was allowing links TO shadow tiddlers, even when chkDisableWikiLinks is TRUE).
|
||||
2006.12.09 [1.2.1] revised logic for handling links in shadow content
|
||||
2006.12.08 [1.2.0] added hijack of Tiddler.prototype.autoLinkWikiWords so regular (non-bracketed) WikiWords won't be added to the missing list
|
||||
2006.05.24 [1.1.0] added option to NOT bypass automatic wikiword links when displaying default shadow content (default is to auto-link shadow content)
|
||||
2006.02.05 [1.0.1] wrapped wikifier hijack in init function to eliminate globals and avoid FireFox 1.5.0.1 crash bug when referencing globals
|
||||
2005.12.09 [1.0.0] initial release
|
||||
<<<
|
||||
!!!!!Code
|
||||
***/
|
||||
//{{{
version.extensions.DisableWikiLinksPlugin= {major: 1, minor: 6, revision: 0, date: new Date(2008,7,22)};

if (config.options.chkDisableNonExistingWikiLinks==undefined) config.options.chkDisableNonExistingWikiLinks=false;
if (config.options.chkDisableWikiLinks==undefined) config.options.chkDisableWikiLinks=false;
if (config.options.txtDisableWikiLinksList==undefined) config.options.txtDisableWikiLinksList="DisableWikiLinksList";
if (config.options.chkAllowLinksFromShadowTiddlers==undefined) config.options.chkAllowLinksFromShadowTiddlers=true;
if (config.options.txtDisableWikiLinksTag==undefined) config.options.txtDisableWikiLinksTag="excludeWikiWords";

// find the formatter for wikiLink and replace handler with 'pass-thru' rendering
initDisableWikiLinksFormatter();
function initDisableWikiLinksFormatter() {
	// locate the wikiLink formatter (empty loop body: 'i' is left pointing at it)
	for (var i=0; i<config.formatters.length && config.formatters[i].name!="wikiLink"; i++);
	config.formatters[i].coreHandler=config.formatters[i].handler;
	config.formatters[i].handler=function(w) {
		// suppress any leading "~" (if present)
		var skip=(w.matchText.substr(0,1)==config.textPrimitives.unWikiLink)?1:0;
		var title=w.matchText.substr(skip);
		var exists=store.tiddlerExists(title);
		var inShadow=w.tiddler && store.isShadowTiddler(w.tiddler.title);
		// check for excluded tiddler
		if (w.tiddler && w.tiddler.isTagged(config.options.txtDisableWikiLinksTag))
			{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
		// check for specific excluded wiki words
		var t=store.getTiddlerText(config.options.txtDisableWikiLinksList);
		if (t && t.length && t.indexOf(w.matchText)!=-1)
			{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
		// if not disabling links from shadows (default setting)
		if (config.options.chkAllowLinksFromShadowTiddlers && inShadow)
			return this.coreHandler(w);
		// check for non-existing non-shadow tiddler
		if (config.options.chkDisableNonExistingWikiLinks && !exists)
			{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
		// if not enabled, just do standard WikiWord link formatting
		if (!config.options.chkDisableWikiLinks)
			return this.coreHandler(w);
		// just return text without linking
		w.outputText(w.output,w.matchStart+skip,w.nextMatch);
	}
}
Tiddler.prototype.coreAutoLinkWikiWords = Tiddler.prototype.autoLinkWikiWords;
Tiddler.prototype.autoLinkWikiWords = function()
{
	// if automatic links are not disabled, just return results from core function
	if (!config.options.chkDisableWikiLinks)
		return this.coreAutoLinkWikiWords.apply(this,arguments);
	return false;
}
Tiddler.prototype.disableWikiLinks_changed = Tiddler.prototype.changed;
Tiddler.prototype.changed = function()
{
	this.disableWikiLinks_changed.apply(this,arguments);
	// remove excluded wiki words from links array
	var t=store.getTiddlerText(config.options.txtDisableWikiLinksList,"").readBracketedList();
	if (t.length) for (var i=0; i<t.length; i++)
		if (this.links.contains(t[i]))
			this.links.splice(this.links.indexOf(t[i]),1);
};
//}}}</pre>
</div>
<div title="Git" creator="RuddO" modifier="RuddO" created="201008081330" tags="fixme" changecount="1">
|
||||
<pre>Not done yet!</pre>
|
||||
</div>
|
||||
<div title="Hacking on the CloudStack" creator="RuddO" modifier="RuddO" created="201008072228" modified="201008081354" changecount="47">
|
||||
<div title="Hacking on the CloudStack" creator="RuddO" modifier="RuddO" created="201008072228" modified="201009040156" changecount="52">
|
||||
<pre>Start here if you want to learn the essentials to extend, modify and enhance the CloudStack. This assumes that you've already familiarized yourself with CloudStack concepts, installation and configuration using the [[Getting started|Welcome]] instructions.
|
||||
* [[Obtain the source|Obtaining the source]]
|
||||
* [[Prepare your environment|Preparing your development environment]]
|
||||
* [[Prepare your environment|Preparing your environment]]
|
||||
* [[Get acquainted with the development lifecycle|Your development lifecycle]]
|
||||
* [[Familiarize yourself with our development conventions|Development conventions]]
|
||||
Extra developer information:
|
||||
@ -764,6 +950,7 @@ Extra developer information:
|
||||
* [[How to integrate with Eclipse]]
|
||||
* [[Starting over]]
|
||||
* [[Making a source release|waf dist]]
|
||||
* [[How to write database migration scripts|Database migration infrastructure]]
|
||||
</pre>
|
||||
</div>
|
||||
<div title="How to integrate with Eclipse" creator="RuddO" modifier="RuddO" created="201008081029" modified="201008081346" changecount="3">
|
||||
@ -785,13 +972,13 @@ Any ant target added to the ant project files will automatically be detected --
|
||||
|
||||
The reason we do this rather than use the native waf capabilities for building Java projects is simple: by using ant, we can leverage the support built-in for ant in [[Eclipse|How to integrate with Eclipse]] and many other """IDEs""". Another reason to do this is because Java developers are familiar with ant, so adding a new JAR file or modifying what gets built into the existing JAR files is facilitated for Java developers.</pre>
|
||||
</div>
|
||||
<div title="Installation paths" creator="RuddO" modifier="RuddO" created="201008080025" modified="201008080028" changecount="6">
|
||||
<div title="Installation paths" creator="RuddO" modifier="RuddO" created="201008080025" modified="201009012342" changecount="8">
|
||||
<pre>The CloudStack build system installs files on a variety of paths, each
|
||||
one of which is selectable when building from source.
|
||||
* {{{$PREFIX}}}:
|
||||
** the default prefix where the entire stack is installed
|
||||
** defaults to /usr/local on source builds
|
||||
** defaults to /usr on package builds
|
||||
** defaults to {{{/usr/local}}} on source builds as root, {{{$HOME/cloudstack}}} on source builds as a regular user, {{{C:\CloudStack}}} on Windows builds
|
||||
** defaults to {{{/usr}}} on package builds
|
||||
* {{{$SYSCONFDIR/cloud}}}:
|
||||
** the prefix for CloudStack configuration files
|
||||
** defaults to $PREFIX/etc/cloud on source builds
|
||||
@@ -901,16 +1088,17 @@ This will create a folder called {{{cloudstack-oss}}} in your current folder.
!Browsing the source code online
You can browse the CloudStack source code through [[our CGit Web interface|http://git.cloud.com/cloudstack-oss]].
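A typical way to fetch the tree, then, is the following sketch (the URL is taken from the CGit link above; the exact clone URL or protocol may differ):
{{{
git clone http://git.cloud.com/cloudstack-oss
cd cloudstack-oss
}}}</pre>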
</div>
<div title="Preparing your environment" creator="RuddO" modifier="RuddO" created="201008081133" modified="201009040238" changecount="17">
|
||||
<pre>!Install the build dependencies
|
||||
* If you want to compile the CloudStack on Linux:
|
||||
** Fedora / CentOS: The command [[waf installrpmdeps]] issued from the source tree gets it done.
|
||||
** Ubuntu: The command [[waf installdebdeps]] issues from the source tree gets it done.
|
||||
** Other distributions: Manually install the packages listed in [[CloudStack build dependencies]].
|
||||
* If you want to compile the CloudStack on Windows or Mac:
|
||||
** Manually install the packages listed in [[CloudStack build dependencies]].
|
||||
** Note that you won't be able to deploy this compiled CloudStack onto Linux machines -- you will be limited to running the Management Server.
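The dependency-installation step is just a waf invocation issued from the top of the source tree (a sketch; installing packages typically requires root privileges):
{{{
# Fedora / CentOS
./waf installrpmdeps
# Ubuntu
./waf installdebdeps
}}}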
!Install the run-time dependencies
In addition to the build dependencies, a number of software packages need to be installed on the machine to be able to run certain components of the CloudStack. These packages are not strictly required to //build// the stack, but they are required to run at least one part of it. See the topic [[CloudStack run-time dependencies]] for the list of packages.</pre>
</div>
<div title="Preserving the CloudStack configuration across source reinstalls" creator="RuddO" modifier="RuddO" created="201008080958" modified="201008080959" changecount="2">
|
||||
<pre>Every time you run {{{./waf install}}} to deploy changed code, waf will install configuration files once again. This can be a nuisance if you are developing the stack.
|
||||
@ -1149,9 +1337,9 @@ Cloud.com's contact information is:
|
||||
!Legal information
|
||||
//Unless otherwise specified// by Cloud.com, Inc., or in the sources themselves, [[this software is OSI certified Open Source Software distributed under the GNU General Public License, version 3|License statement]]. OSI Certified is a certification mark of the Open Source Initiative. The software powering this documentation is """BSD-licensed""" and obtained from [[TiddlyWiki.com|http://tiddlywiki.com/]].</pre>
|
||||
</div>
|
||||
<div title="Your development lifecycle" creator="RuddO" modifier="RuddO" created="201008080933" modified="201008081349" changecount="16">
|
||||
<pre>This is the typical lifecycle that you would follow when hacking on a CloudStack component, assuming that your [[development environment has been set up|Preparing your development environment]]:
|
||||
# [[Configure|waf configure]] the source code<br>{{{./waf configure --prefix=/home/youruser/cloudstack}}}
|
||||
<div title="Your development lifecycle" creator="RuddO" modifier="RuddO" created="201008080933" modified="201009040158" changecount="18">
|
||||
<pre>This is the typical lifecycle that you would follow when hacking on a CloudStack component, assuming that your [[development environment has been set up|Preparing your environment]]:
|
||||
# [[Configure|waf configure]] the source code<br>{{{./waf configure}}}
|
||||
# [[Build|waf build]] and [[install|waf install]] the CloudStack
|
||||
## {{{./waf install}}}
|
||||
## [[How to perform these tasks from Eclipse|How to integrate with Eclipse]]
|
||||
@@ -1229,7 +1417,7 @@ Makes an inventory of all build products in {{{artifacts/default}}}, and removes

Contrast to [[waf distclean]].
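In practice the distinction looks like this (a sketch; see {{{./waf --help}}} for the authoritative behavior):
{{{
./waf clean      # removes build products, keeps your configuration
./waf distclean  # also removes the build directory and the configuration
}}}</pre>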
</div>
<div title="waf configure" creator="RuddO" modifier="RuddO" created="201008080940" modified="201008081146" changecount="14">
|
||||
<div title="waf configure" creator="RuddO" modifier="RuddO" created="201008080940" modified="201009012344" changecount="15">
|
||||
<pre>{{{
|
||||
./waf configure --prefix=/directory/that/you/have/write/permission/to
|
||||
}}}
|
||||
@ -1238,7 +1426,7 @@ This runs the file {{{wscript_configure}}}, which takes care of setting the var
|
||||
!When / why should I run this?
|
||||
You run this command //once//, in preparation to building the stack, or every time you need to change a configure-time variable. Once you find an acceptable set of configure-time variables, you should not need to run {{{configure}}} again.
|
||||
!What happens if I don't run it?
|
||||
For convenience reasons, if you forget to configure the source, waf will autoconfigure itself and select some sensible default configuration options. By default, {{{PREFIX}}} is {{{/usr/local}}}, but you can set it e.g. to {{{/home/youruser/cloudstack}}} if you plan to do a non-root install. Be ware that you can later install the stack as a regular user, but most components need to //run// as root.
|
||||
For convenience reasons, if you forget to configure the source, waf will autoconfigure itself and select some sensible default configuration options. By default, {{{PREFIX}}} is {{{/usr/local}}} if you configure as root (do this if you plan to do a non-root install), or {{{/home/youruser/cloudstack}}} if you configure as your regular user name. Be ware that you can later install the stack as a regular user, but most components need to //run// as root.
|
||||
!What variables / options exist for configure?
|
||||
In general: refer to the output of {{{./waf configure --help}}}.
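For example, a typical non-root development configuration might look like this (a sketch; pick any prefix you can write to):
{{{
./waf configure --prefix=$HOME/cloudstack
./waf showconfig
}}}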
@@ -1311,7 +1311,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
try {
StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(secondaryStoragePoolURL));
String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator + dcId + File.separator + accountId + File.separator + volumeId;
Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-b", snapshotPath);
command.add("-n", snapshotName);
@@ -1367,7 +1367,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
try {
StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
String snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
String snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator + dcId + File.separator + accountId + File.separator + volumeId;

final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-d", snapshotDestPath);
@@ -1389,11 +1389,12 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
try {
StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
String snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
String snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator + dcId + File.separator + accountId + File.separator + volumeId;

final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-d", snapshotDestPath);
command.add("-n", cmd.getSnapshotName());
command.add("-f");
command.execute();
} catch (LibvirtException e) {
return new Answer(cmd, false, e.toString());
@@ -1428,10 +1429,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
secondaryPool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
/*TODO: assuming all the storage pools mounted under _mountPoint, the mount point should be got from pool.dumpxml*/
String templatePath = _mountPoint + File.separator + secondaryPool.getUUIDString() + File.separator + templateInstallFolder;
File f = new File(templatePath);
if (!f.exists()) {
f.mkdirs();
}
_storage.mkdirs(templatePath);

String tmplPath = templateInstallFolder + File.separator + tmplFileName;
Script command = new Script(_createTmplPath, _timeout, s_logger);
command.add("-t", templatePath);
@@ -1487,10 +1486,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
secondaryStorage = getNfsSPbyURI(_conn, new URI(secondaryStorageURL));
/*TODO: assuming all the storage pools mounted under _mountPoint, the mount point should be got from pool.dumpxml*/
String tmpltPath = _mountPoint + File.separator + secondaryStorage.getUUIDString() + templateInstallFolder;
File mpfile = new File(tmpltPath);
if (!mpfile.exists()) {
mpfile.mkdirs();
}
_storage.mkdirs(tmpltPath);

Script command = new Script(_createTmplPath, _timeout, s_logger);
command.add("-f", cmd.getSnapshotPath());
@@ -1589,10 +1585,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv

if (sp == null) {
try {
File tpFile = new File(targetPath);
if (!tpFile.exists()) {
tpFile.mkdir();
}
_storage.mkdir(targetPath);
LibvirtStoragePoolDef spd = new LibvirtStoragePoolDef(poolType.NFS, uuid, uuid,
sourceHost, sourcePath, targetPath);
s_logger.debug(spd.toString());
@@ -1702,10 +1695,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
String targetPath = _mountPoint + File.separator + pool.getUuid();
LibvirtStoragePoolDef spd = new LibvirtStoragePoolDef(poolType.NFS, pool.getUuid(), pool.getUuid(),
pool.getHostAddress(), pool.getPath(), targetPath);
File tpFile = new File(targetPath);
if (!tpFile.exists()) {
tpFile.mkdirs();
}
_storage.mkdir(targetPath);
StoragePool sp = null;
try {
s_logger.debug(spd.toString());

@@ -17,6 +17,8 @@
*/
package com.cloud.storage;

import java.util.Date;

import com.cloud.domain.PartOf;
import com.cloud.template.BasedOn;
import com.cloud.user.OwnedBy;
@@ -86,4 +88,8 @@ public interface Volume extends PartOf, OwnedBy, BasedOn {
void setSourceId(Long sourceId);

Long getSourceId();

Date getAttached();

void setAttached(Date attached);
}

@@ -107,9 +107,6 @@
<property name="meld.home" location="/usr/local/bin" />
<property name="assertion" value="-da" />

<!-- directories for patches -->
<property name="kvm.patch.dist.dir" location="${dist.dir}/patches/kvm" />
<property name="xenserver.patch.dist.dir" location="${dist.dir}/patches/xenserver" />

<!-- directories for testing -->
<property name="test.target.dir" location="${target.dir}/test" />
@@ -518,40 +515,19 @@
</target>

<target name="build-kvm-domr-patch" depends="-init">
<mkdir dir="${kvm.patch.dist.dir}" />
<tar destfile="${kvm.patch.dist.dir}/patch.tar">
<tarfileset dir="${base.dir}/patches/kvm" filemode="755">
<include name="**/*"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
</tarfileset>
<tarfileset dir="${base.dir}/patches/shared" filemode="755">
<target name="build-systemvm-patch" depends="-init">
<mkdir dir="${dist.dir}" />
<tar destfile="${dist.dir}/patch.tar">
<tarfileset dir="${base.dir}/patches/systemvm" filemode="755">
<include name="**/*"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
<exclude name="**/wscript_build" />
</tarfileset>
</tar>
<gzip destfile="${kvm.patch.dist.dir}/patch.tgz" src="${kvm.patch.dist.dir}/patch.tar"/>
<delete file="${kvm.patch.dist.dir}/patch.tar"/>
</target>

<target name="build-xenserver-domr-patch" depends="-init">
<mkdir dir="${xenserver.patch.dist.dir}" />
<tar destfile="${xenserver.patch.dist.dir}/patch.tar">
<tarfileset dir="${base.dir}/patches/xenserver" filemode="755">
<include name="**/*"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
</tarfileset>
<tarfileset dir="${base.dir}/patches/shared" filemode="755">
<include name="**/*"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
</tarfileset>
</tar>
<gzip destfile="${xenserver.patch.dist.dir}/patch.tgz" src="${xenserver.patch.dist.dir}/patch.tar"/>
<delete file="${xenserver.patch.dist.dir}/patch.tar"/>
<copy file="${base.dir}/patches/systemvm/root/.ssh/authorized_keys" todir="${dist.dir}/"/>
<gzip destfile="${dist.dir}/patch.tgz" src="${dist.dir}/patch.tar"/>
<delete file="${dist.dir}/patch.tar"/>
</target>

<target name="help">

@@ -23,7 +23,6 @@
<property name="docs.dist.dir" location="${dist.dir}/docs" />
<property name="db.dist.dir" location="${dist.dir}/db" />
<property name="usage.dist.dir" location="${dist.dir}/usage" />
<property name="kvm.domr.patch.dir" location="${scripts.dir}/vm/hypervisor/kvm/patch" />

<target name="-init-package">
<mkdir dir="${dist.dir}" />
@@ -92,9 +91,9 @@
</target>

<target name="package-agent" depends="-init-package, package-oss-systemvm, build-kvm-domr-patch, package-agent-common">
<target name="package-agent" depends="-init-package, package-oss-systemvm, build-systemvm-patch, package-agent-common">
<zip destfile="${dist.dir}/agent.zip" duplicate="preserve" update="true">
<zipfileset dir="${kvm.patch.dist.dir}" prefix="scripts/vm/hypervisor/kvm">
<zipfileset dir="${dist.dir}" prefix="vms">
<include name="patch.tgz" />
</zipfileset>
<zipfileset dir="${dist.dir}" prefix="vms" filemode="555">
@@ -103,14 +102,15 @@
</zip>
</target>

<target name="package-oss-systemvm-iso" depends="-init-package, package-oss-systemvm, build-xenserver-domr-patch">
<target name="package-oss-systemvm-iso" depends="-init-package, package-oss-systemvm, build-systemvm-patch">
<exec executable="mkisofs" dir="${dist.dir}">
<arg value="-quiet"/>
<arg value="-r"/>
<arg value="-o"/>
<arg value="systemvm.iso"/>
<arg value="systemvm.zip"/>
<arg value="patches/xenserver/patch.tgz"/>
<arg value="patch.tgz"/>
<arg value="authorized_keys"/>
</exec>
</target>

@@ -135,7 +135,7 @@
</zip>
</target>

<target name="build-all" depends="build-opensource, build-kvm-domr-patch, build-ui, build-war-oss, package-oss-systemvm-iso">
<target name="build-all" depends="build-opensource, build-ui, build-war-oss, package-oss-systemvm-iso">
</target>

<target name="build-war-oss" depends="-init-package" description="Compile the GWT client UI and builds WAR file.">
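Taken together, the ant changes above fold the per-hypervisor patch targets into a single build-systemvm-patch target. A sketch of driving these targets by hand (assuming ant is run from the directory containing the project file; the mkisofs line simply mirrors the arguments of package-oss-systemvm-iso above):
{{{
ant build-systemvm-patch
ant package-oss-systemvm-iso
# the ISO step is roughly equivalent to:
mkisofs -quiet -r -o systemvm.iso systemvm.zip patch.tgz authorized_keys
}}}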
1
client/tomcatconf/commands.properties.in
Normal file → Executable file
@@ -61,6 +61,7 @@ deleteTemplate=com.cloud.api.commands.DeleteTemplateCmd;15
listTemplates=com.cloud.api.commands.ListTemplatesCmd;15
updateTemplatePermissions=com.cloud.api.commands.UpdateTemplatePermissionsCmd;15
listTemplatePermissions=com.cloud.api.commands.ListTemplatePermissionsCmd;15
extractTemplate=com.cloud.api.commands.ExtractTemplateCmd;15

#### iso commands
attachIso=com.cloud.api.commands.AttachIsoCmd;15

@@ -172,6 +172,8 @@
</manager>
<manager name="download manager" class="com.cloud.storage.download.DownloadMonitorImpl">
</manager>
<manager name="upload manager" class="com.cloud.storage.upload.UploadMonitorImpl">
</manager>
<manager name="console proxy manager" class="com.cloud.consoleproxy.AgentBasedStandaloneConsoleProxyManager">
</manager>
<manager name="secondary storage vm manager" class="com.cloud.storage.secondary.SecondaryStorageManagerImpl">
94
cloud.spec
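The cloud.spec changes below adjust the build/runtime dependencies and trim the per-package %doc lists. For orientation, a spec like this is normally exercised by the packaging machinery, but it can be driven by hand with rpmbuild (a sketch, assuming the sources are already staged where rpmbuild expects them):
{{{
rpmbuild -ba cloud.spec
}}}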
@@ -35,6 +35,7 @@ BuildRequires: jpackage-utils
BuildRequires: gcc
BuildRequires: glibc-devel
BuildRequires: /usr/bin/mkisofs
BuildRequires: MySQL-python

%global _premium %(tar jtvmf %{SOURCE0} '*/cloudstack-proprietary/' --occurrence=1 2>/dev/null | wc -l)

@@ -182,12 +183,11 @@ Summary: Cloud.com setup tools
Obsoletes: vmops-setup < %{version}-%{release}
Requires: java >= 1.6.0
Requires: python
Requires: mysql
Requires: MySQL-python
Requires: %{name}-utils = %{version}-%{release}
Requires: %{name}-server = %{version}-%{release}
Requires: %{name}-deps = %{version}-%{release}
Requires: %{name}-python = %{version}-%{release}
Requires: MySQL-python
Group: System Environment/Libraries
%description setup
The Cloud.com setup tools let you set up your Management Server and Usage Server.
@@ -373,7 +373,6 @@ if [ "$1" == "1" ] ; then
/sbin/chkconfig --add %{name}-management > /dev/null 2>&1 || true
/sbin/chkconfig --level 345 %{name}-management on > /dev/null 2>&1 || true
fi
test -f %{_sharedstatedir}/%{name}/management/.ssh/id_rsa || su - %{name} -c 'yes "" 2>/dev/null | ssh-keygen -t rsa -q -N ""' < /dev/null

@@ -457,30 +456,17 @@ fi
%doc %{_docdir}/%{name}-%{version}/sccs-info
%doc %{_docdir}/%{name}-%{version}/version-info
%doc %{_docdir}/%{name}-%{version}/configure-info
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files client-ui
%defattr(0644,root,root,0755)
%{_datadir}/%{name}/management/webapps/client/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files server
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-server.jar
%{_sysconfdir}/%{name}/server/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files agent-scripts
%defattr(-,root,root,-)
@@ -498,20 +484,10 @@ fi
%endif
%{_libdir}/%{name}/agent/vms/systemvm.zip
%{_libdir}/%{name}/agent/vms/systemvm.iso
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files daemonize
%defattr(-,root,root,-)
%attr(755,root,root) %{_bindir}/%{name}-daemonize
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files deps
%defattr(0644,root,root,0755)
@@ -532,39 +508,20 @@ fi
%{_javadir}/%{name}-xenserver-5.5.0-1.jar
%{_javadir}/%{name}-xmlrpc-common-3.*.jar
%{_javadir}/%{name}-xmlrpc-client-3.*.jar
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files core
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-core.jar
%doc README
%doc INSTALL
%doc HACKING
%doc debian/copyright

%files vnet
%defattr(0644,root,root,0755)
%attr(0755,root,root) %{_sbindir}/%{name}-vnetd
%attr(0755,root,root) %{_sbindir}/%{name}-vn
%attr(0755,root,root) %{_initrddir}/%{name}-vnetd
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files python
%defattr(0644,root,root,0755)
%{_prefix}/lib*/python*/site-packages/%{name}*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files setup
%attr(0755,root,root) %{_bindir}/%{name}-setup-databases
@@ -582,11 +539,9 @@ fi
%{_datadir}/%{name}/setup/index-212to213.sql
%{_datadir}/%{name}/setup/postprocess-20to21.sql
%{_datadir}/%{name}/setup/schema-20to21.sql
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%{_datadir}/%{name}/setup/schema-level.sql
%{_datadir}/%{name}/setup/schema-21to22.sql
%{_datadir}/%{name}/setup/data-21to22.sql

%files client
%defattr(0644,root,root,0755)
@@ -626,19 +581,10 @@ fi
%dir %attr(770,root,%{name}) %{_localstatedir}/cache/%{name}/management/temp
%dir %attr(770,root,%{name}) %{_localstatedir}/log/%{name}/management
%dir %attr(770,root,%{name}) %{_localstatedir}/log/%{name}/agent
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files agent-libs
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-agent.jar
%doc README
%doc INSTALL
%doc HACKING
%doc debian/copyright

%files agent
%defattr(0644,root,root,0755)
@@ -654,11 +600,6 @@ fi
%{_libdir}/%{name}/agent/images
%attr(0755,root,root) %{_bindir}/%{name}-setup-agent
%dir %attr(770,root,root) %{_localstatedir}/log/%{name}/agent
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files console-proxy
%defattr(0644,root,root,0755)
@@ -671,11 +612,6 @@ fi
%{_libdir}/%{name}/console-proxy/*
%attr(0755,root,root) %{_bindir}/%{name}-setup-console-proxy
%dir %attr(770,root,root) %{_localstatedir}/log/%{name}/console-proxy
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%if %{_premium}

@@ -686,20 +622,10 @@ fi
%{_sharedstatedir}/%{name}/test/*
%{_libdir}/%{name}/test/*
%{_sysconfdir}/%{name}/test/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files premium-deps
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-premium/*.jar
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files premium
%defattr(0644,root,root,0755)
@@ -719,11 +645,6 @@ fi
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenheartbeat.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenserver56/patch-premium
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xs_cleanup.sh
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%files usage
%defattr(0644,root,root,0755)
@@ -734,11 +655,6 @@ fi
%{_sysconfdir}/%{name}/usage/usage-components.xml
%config(noreplace) %{_sysconfdir}/%{name}/usage/log4j-%{name}_usage.xml
%config(noreplace) %attr(640,root,%{name}) %{_sysconfdir}/%{name}/usage/db.properties
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright

%endif

@@ -2,7 +2,12 @@

BASE_DIR="/var/www/html/copy/template/"
HTACCESS="$BASE_DIR/.htaccess"

# pick the password file matching the installed web server layout
PASSWDFILE="/etc/httpd/.htpasswd"
if [ -d /etc/apache2 ]
then
  PASSWDFILE="/etc/apache2/.htpasswd"
fi

config_htaccess() {
  mkdir -p $BASE_DIR

@@ -15,6 +15,17 @@ config_httpd_conf() {
  echo "</VirtualHost>" >> /etc/httpd/conf/httpd.conf
}

# configure the apache2 (Debian/Ubuntu) default sites for the given IP
config_apache2_conf() {
  local ip=$1
  local srvr=$2
  cp -f /etc/apache2/sites-available/default.orig /etc/apache2/sites-available/default
  cp -f /etc/apache2/sites-available/default-ssl.orig /etc/apache2/sites-available/default-ssl
  sed -i -e "s/VirtualHost.*:80$/VirtualHost $ip:80/" /etc/apache2/sites-available/default
  sed -i "s/_default_/$ip/" /etc/apache2/sites-available/default-ssl
  sed -i 's/ssl-cert-snakeoil.key/realhostip.key/' /etc/apache2/sites-available/default-ssl
  sed -i 's/ssl-cert-snakeoil.pem/realhostip.crt/' /etc/apache2/sites-available/default-ssl
}

copy_certs() {
  local certdir=$(dirname $0)/certs
  local mydir=$(dirname $0)
@@ -25,16 +36,37 @@ copy_certs() {
  return 1
}

copy_certs_apache2() {
  local certdir=$(dirname $0)/certs
  local mydir=$(dirname $0)
  if [ -d $certdir ] && [ -f $certdir/realhostip.key ] && [ -f $certdir/realhostip.crt ] ; then
    cp $certdir/realhostip.key /etc/ssl/private/ && cp $certdir/realhostip.crt /etc/ssl/certs/
    return $?
  fi
  return 1
}

if [ $# -ne 2 ] ; then
  echo $"Usage: `basename $0` ipaddr servername "
  exit 0
fi

if [ -d /etc/apache2 ]
then
  copy_certs_apache2
else
  copy_certs
fi

if [ $? -ne 0 ]
then
  echo "Failed to copy certificates"
  exit 2
fi

if [ -d /etc/apache2 ]
then
  config_apache2_conf $1 $2
else
  config_httpd_conf $1 $2
fi
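The script above configures whichever web server layout it finds: httpd (Fedora/CentOS) or apache2 (Debian/Ubuntu). A hypothetical invocation, with placeholder values for the two positional arguments (the script name here is illustrative, not the actual file name):
{{{
./config_ssl.sh 192.168.10.2 realhostip.com
}}}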
@@ -213,4 +213,6 @@ public interface AgentManager extends Manager {
    public boolean reconnect(final long hostId) throws AgentUnavailableException;

    public List<HostVO> discoverHosts(long dcId, Long podId, Long clusterId, URI url, String username, String password) throws DiscoveryException;

    Answer easySend(Long hostId, Command cmd, int timeout);
}

@@ -0,0 +1,52 @@
package com.cloud.agent.api.storage;

import com.cloud.storage.Storage.ImageFormat;

public class AbstractUploadCommand extends StorageCommand {

    private String url;
    private ImageFormat format;
    private long accountId;
    private String name;

    protected AbstractUploadCommand() {
    }

    protected AbstractUploadCommand(String name, String url, ImageFormat format, long accountId) {
        this.url = url;
        this.format = format;
        this.accountId = accountId;
        this.name = name;
    }

    protected AbstractUploadCommand(AbstractUploadCommand that) {
        this(that.name, that.url, that.format, that.accountId);
    }

    public String getUrl() {
        return url;
    }

    public String getName() {
        return name;
    }

    public ImageFormat getFormat() {
        return format;
    }

    public long getAccountId() {
        return accountId;
    }

    @Override
    public boolean executeInSequence() {
        return true;
    }

    public void setUrl(String url) {
        this.url = url;
    }

}
@@ -64,7 +64,7 @@ public class CreateCommand extends Command {
    this.pool = new StoragePoolTO(pool);
    this.templateUrl = null;
    this.size = size;
    this.instanceName = vm.getInstanceName();
    //this.instanceName = vm.getInstanceName();
}

@Override

103
core/src/com/cloud/agent/api/storage/UploadAnswer.java
Normal file
@@ -0,0 +1,103 @@
package com.cloud.agent.api.storage;

import java.io.File;

import com.cloud.agent.api.Answer;
import com.cloud.agent.api.Command;
import com.cloud.storage.VMTemplateHostVO;
import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;

public class UploadAnswer extends Answer {

    private String jobId;
    private int uploadPct;
    private String errorString;
    private VMTemplateHostVO.Status uploadStatus;
    private String uploadPath;
    private String installPath;
    public Long templateSize = 0L;

    public int getUploadPct() {
        return uploadPct;
    }

    public String getErrorString() {
        return errorString;
    }

    public String getUploadStatusString() {
        return uploadStatus.toString();
    }

    public VMTemplateHostVO.Status getUploadStatus() {
        return uploadStatus;
    }

    public String getUploadPath() {
        return uploadPath;
    }

    protected UploadAnswer() {
    }

    public String getJobId() {
        return jobId;
    }

    public void setJobId(String jobId) {
        this.jobId = jobId;
    }

    public UploadAnswer(String jobId, int uploadPct, String errorString,
            Status uploadStatus, String fileSystemPath, String installPath, long templateSize) {
        super();
        this.jobId = jobId;
        this.uploadPct = uploadPct;
        this.errorString = errorString;
        this.uploadStatus = uploadStatus;
        this.uploadPath = fileSystemPath;
        this.installPath = fixPath(installPath);
        this.templateSize = templateSize;
    }

    public UploadAnswer(String jobId, int uploadPct, Command command,
            Status uploadStatus, String fileSystemPath, String installPath) {
        super(command);
        this.jobId = jobId;
        this.uploadPct = uploadPct;
        this.uploadStatus = uploadStatus;
        this.uploadPath = fileSystemPath;
        this.installPath = installPath;
    }

    // strips any leading/trailing File.separator so install paths are stored in relative form
    private static String fixPath(String path) {
        if (path == null)
            return path;
        if (path.startsWith(File.separator)) {
            path = path.substring(File.separator.length());
        }
        if (path.endsWith(File.separator)) {
            path = path.substring(0, path.length() - File.separator.length());
        }
        return path;
    }

    public void setUploadStatus(VMTemplateHostVO.Status uploadStatus) {
        this.uploadStatus = uploadStatus;
    }

    public String getInstallPath() {
        return installPath;
    }

    public void setInstallPath(String installPath) {
        this.installPath = fixPath(installPath);
    }

    public void setTemplateSize(long templateSize) {
        this.templateSize = templateSize;
    }

    public Long getTemplateSize() {
        return templateSize;
    }

}
115
core/src/com/cloud/agent/api/storage/UploadCommand.java
Normal file
@@ -0,0 +1,115 @@
package com.cloud.agent.api.storage;

import com.cloud.storage.VMTemplateHostVO;
import com.cloud.storage.VMTemplateVO;
import com.cloud.agent.api.storage.AbstractUploadCommand;
import com.cloud.agent.api.storage.DownloadCommand.PasswordAuth;

public class UploadCommand extends AbstractUploadCommand {

    private VMTemplateVO template;
    private String url;
    private String installPath;
    private boolean hvm;
    private String description;
    private String checksum;
    private PasswordAuth auth;
    private long templateSizeInBytes;
    private long id;

    public UploadCommand(VMTemplateVO template, String url, VMTemplateHostVO vmTemplateHost) {
        this.template = template;
        this.url = url;
        this.installPath = vmTemplateHost.getInstallPath();
        this.checksum = template.getChecksum();
        this.id = template.getId();
        this.templateSizeInBytes = vmTemplateHost.getSize();
    }

    protected UploadCommand() {
    }

    // copies the identifying fields only; hvm, description, auth and size are left at their defaults
    public UploadCommand(UploadCommand that) {
        this.template = that.template;
        this.url = that.url;
        this.installPath = that.installPath;
        this.checksum = that.getChecksum();
        this.id = that.id;
    }

    public String getDescription() {
        return description;
    }

    public VMTemplateVO getTemplate() {
        return template;
    }

    public void setTemplate(VMTemplateVO template) {
        this.template = template;
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public boolean isHvm() {
        return hvm;
    }

    public void setHvm(boolean hvm) {
        this.hvm = hvm;
    }

    public PasswordAuth getAuth() {
        return auth;
    }

    public void setAuth(PasswordAuth auth) {
        this.auth = auth;
    }

    public Long getTemplateSizeInBytes() {
        return templateSizeInBytes;
    }

    public void setTemplateSizeInBytes(Long templateSizeInBytes) {
        this.templateSizeInBytes = templateSizeInBytes;
    }

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public void setInstallPath(String installPath) {
        this.installPath = installPath;
    }

    public void setDescription(String description) {
        this.description = description;
    }

    public void setChecksum(String checksum) {
        this.checksum = checksum;
    }

    public String getInstallPath() {
        return installPath;
    }

    public String getChecksum() {
        return checksum;
    }
}
@@ -0,0 +1,32 @@
package com.cloud.agent.api.storage;

public class UploadProgressCommand extends UploadCommand {

    public static enum RequestType {GET_STATUS, ABORT, RESTART, PURGE, GET_OR_RESTART}
    private String jobId;
    private RequestType request;

    protected UploadProgressCommand() {
        super();
    }

    public UploadProgressCommand(UploadCommand cmd, String jobId, RequestType req) {
        super(cmd);
        this.jobId = jobId;
        this.setRequest(req);
    }

    public String getJobId() {
        return jobId;
    }

    public void setRequest(RequestType request) {
        this.request = request;
    }

    public RequestType getRequest() {
        return request;
    }

}
@@ -78,7 +78,10 @@ public class EventTypes {
    public static final String EVENT_TEMPLATE_COPY = "TEMPLATE.COPY";
    public static final String EVENT_TEMPLATE_DOWNLOAD_START = "TEMPLATE.DOWNLOAD.START";
    public static final String EVENT_TEMPLATE_DOWNLOAD_SUCCESS = "TEMPLATE.DOWNLOAD.SUCCESS";
    public static final String EVENT_TEMPLATE_DOWNLOAD_FAILED = "TEMPLATE.DOWNLOAD.FAILED";
    public static final String EVENT_TEMPLATE_UPLOAD_FAILED = "TEMPLATE.UPLOAD.FAILED";
    public static final String EVENT_TEMPLATE_UPLOAD_START = "TEMPLATE.UPLOAD.START";
    public static final String EVENT_TEMPLATE_UPLOAD_SUCCESS = "TEMPLATE.UPLOAD.SUCCESS";

    // Volume Events
    public static final String EVENT_VOLUME_CREATE = "VOLUME.CREATE";

163
core/src/com/cloud/hypervisor/xen/resource/CitrixHelper.java
Normal file
@@ -0,0 +1,163 @@
/**
* Copyright (C) 2010 Cloud.com. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later
* version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.hypervisor.xen.resource;

import java.util.ArrayList;
import java.util.HashMap;

/**
* Reduce bloat inside CitrixResourceBase: maps CloudStack guest OS names
* to the guest OS labels XenServer expects.
*/
public class CitrixHelper {
    private static final HashMap<String, String> _guestOsMap = new HashMap<String, String>(70);
    private static final ArrayList<String> _guestOsList = new ArrayList<String>(70);

    static {
        _guestOsMap.put("CentOS 4.5 (32-bit)", "CentOS 4.5");
        _guestOsMap.put("CentOS 4.6 (32-bit)", "CentOS 4.6");
        _guestOsMap.put("CentOS 4.7 (32-bit)", "CentOS 4.7");
        _guestOsMap.put("CentOS 4.8 (32-bit)", "CentOS 4.8");
        _guestOsMap.put("CentOS 5.0 (32-bit)", "CentOS 5.0");
        _guestOsMap.put("CentOS 5.0 (64-bit)", "CentOS 5.0 x64");
        _guestOsMap.put("CentOS 5.1 (32-bit)", "CentOS 5.1");
        _guestOsMap.put("CentOS 5.1 (64-bit)", "CentOS 5.1 x64");
        _guestOsMap.put("CentOS 5.2 (32-bit)", "CentOS 5.2");
        _guestOsMap.put("CentOS 5.2 (64-bit)", "CentOS 5.2 x64");
        _guestOsMap.put("CentOS 5.3 (32-bit)", "CentOS 5.3");
        _guestOsMap.put("CentOS 5.3 (64-bit)", "CentOS 5.3 x64");
        _guestOsMap.put("CentOS 5.4 (32-bit)", "CentOS 5.4");
        _guestOsMap.put("CentOS 5.4 (64-bit)", "CentOS 5.4 x64");
        _guestOsMap.put("Debian Lenny 5.0 (32-bit)", "Debian Lenny 5.0 (32-bit)");
        _guestOsMap.put("Oracle Enterprise Linux 5.0 (32-bit)", "Oracle Enterprise Linux 5.0");
        _guestOsMap.put("Oracle Enterprise Linux 5.0 (64-bit)", "Oracle Enterprise Linux 5.0 x64");
        _guestOsMap.put("Oracle Enterprise Linux 5.1 (32-bit)", "Oracle Enterprise Linux 5.1");
        _guestOsMap.put("Oracle Enterprise Linux 5.1 (64-bit)", "Oracle Enterprise Linux 5.1 x64");
        _guestOsMap.put("Oracle Enterprise Linux 5.2 (32-bit)", "Oracle Enterprise Linux 5.2");
        _guestOsMap.put("Oracle Enterprise Linux 5.2 (64-bit)", "Oracle Enterprise Linux 5.2 x64");
        _guestOsMap.put("Oracle Enterprise Linux 5.3 (32-bit)", "Oracle Enterprise Linux 5.3");
        _guestOsMap.put("Oracle Enterprise Linux 5.3 (64-bit)", "Oracle Enterprise Linux 5.3 x64");
        _guestOsMap.put("Oracle Enterprise Linux 5.4 (32-bit)", "Oracle Enterprise Linux 5.4");
        _guestOsMap.put("Oracle Enterprise Linux 5.4 (64-bit)", "Oracle Enterprise Linux 5.4 x64");
        _guestOsMap.put("Red Hat Enterprise Linux 4.5 (32-bit)", "Red Hat Enterprise Linux 4.5");
        _guestOsMap.put("Red Hat Enterprise Linux 4.6 (32-bit)", "Red Hat Enterprise Linux 4.6");
        _guestOsMap.put("Red Hat Enterprise Linux 4.7 (32-bit)", "Red Hat Enterprise Linux 4.7");
        _guestOsMap.put("Red Hat Enterprise Linux 4.8 (32-bit)", "Red Hat Enterprise Linux 4.8");
        _guestOsMap.put("Red Hat Enterprise Linux 5.0 (32-bit)", "Red Hat Enterprise Linux 5.0");
        _guestOsMap.put("Red Hat Enterprise Linux 5.0 (64-bit)", "Red Hat Enterprise Linux 5.0 x64");
        _guestOsMap.put("Red Hat Enterprise Linux 5.1 (32-bit)", "Red Hat Enterprise Linux 5.1");
        _guestOsMap.put("Red Hat Enterprise Linux 5.1 (64-bit)", "Red Hat Enterprise Linux 5.1 x64");
        _guestOsMap.put("Red Hat Enterprise Linux 5.2 (32-bit)", "Red Hat Enterprise Linux 5.2");
        _guestOsMap.put("Red Hat Enterprise Linux 5.2 (64-bit)", "Red Hat Enterprise Linux 5.2 x64");
        _guestOsMap.put("Red Hat Enterprise Linux 5.3 (32-bit)", "Red Hat Enterprise Linux 5.3");
        _guestOsMap.put("Red Hat Enterprise Linux 5.3 (64-bit)", "Red Hat Enterprise Linux 5.3 x64");
        _guestOsMap.put("Red Hat Enterprise Linux 5.4 (32-bit)", "Red Hat Enterprise Linux 5.4");
        _guestOsMap.put("Red Hat Enterprise Linux 5.4 (64-bit)", "Red Hat Enterprise Linux 5.4 x64");
        _guestOsMap.put("SUSE Linux Enterprise Server 9 SP4 (32-bit)", "SUSE Linux Enterprise Server 9 SP4");
        _guestOsMap.put("SUSE Linux Enterprise Server 10 SP1 (32-bit)", "SUSE Linux Enterprise Server 10 SP1");
        _guestOsMap.put("SUSE Linux Enterprise Server 10 SP1 (64-bit)", "SUSE Linux Enterprise Server 10 SP1 x64");
        _guestOsMap.put("SUSE Linux Enterprise Server 10 SP2 (32-bit)", "SUSE Linux Enterprise Server 10 SP2");
        _guestOsMap.put("SUSE Linux Enterprise Server 10 SP2 (64-bit)", "SUSE Linux Enterprise Server 10 SP2 x64");
        _guestOsMap.put("SUSE Linux Enterprise Server 10 SP3 (64-bit)", "Other install media");
        _guestOsMap.put("SUSE Linux Enterprise Server 11 (32-bit)", "SUSE Linux Enterprise Server 11");
        _guestOsMap.put("SUSE Linux Enterprise Server 11 (64-bit)", "SUSE Linux Enterprise Server 11 x64");
        _guestOsMap.put("Windows 7 (32-bit)", "Windows 7");
        _guestOsMap.put("Windows 7 (64-bit)", "Windows 7 x64");
        _guestOsMap.put("Windows Server 2003 (32-bit)", "Windows Server 2003");
        _guestOsMap.put("Windows Server 2003 (64-bit)", "Windows Server 2003 x64");
        _guestOsMap.put("Windows Server 2008 (32-bit)", "Windows Server 2008");
        _guestOsMap.put("Windows Server 2008 (64-bit)", "Windows Server 2008 x64");
        _guestOsMap.put("Windows Server 2008 R2 (64-bit)", "Windows Server 2008 R2 x64");
        _guestOsMap.put("Windows 2000 SP4 (32-bit)", "Windows 2000 SP4");
        _guestOsMap.put("Windows Vista (32-bit)", "Windows Vista");
        _guestOsMap.put("Windows XP SP2 (32-bit)", "Windows XP SP2");
        _guestOsMap.put("Windows XP SP3 (32-bit)", "Windows XP SP3");
        _guestOsMap.put("Other install media", "Other install media");

        // access by index
        _guestOsList.add("CentOS 4.5");
        _guestOsList.add("CentOS 4.6");
        _guestOsList.add("CentOS 4.7");
        _guestOsList.add("CentOS 4.8");
        _guestOsList.add("CentOS 5.0");
        _guestOsList.add("CentOS 5.0 x64");
        _guestOsList.add("CentOS 5.1");
        _guestOsList.add("CentOS 5.1 x64");
        _guestOsList.add("CentOS 5.2");
        _guestOsList.add("CentOS 5.2 x64");
        _guestOsList.add("CentOS 5.3");
        _guestOsList.add("CentOS 5.3 x64");
        _guestOsList.add("CentOS 5.4");
        _guestOsList.add("CentOS 5.4 x64");
        _guestOsList.add("Debian Lenny 5.0 (32-bit)");
        _guestOsList.add("Oracle Enterprise Linux 5.0");
        _guestOsList.add("Oracle Enterprise Linux 5.0 x64");
        _guestOsList.add("Oracle Enterprise Linux 5.1");
        _guestOsList.add("Oracle Enterprise Linux 5.1 x64");
        _guestOsList.add("Oracle Enterprise Linux 5.2");
        _guestOsList.add("Oracle Enterprise Linux 5.2 x64");
        _guestOsList.add("Oracle Enterprise Linux 5.3");
        _guestOsList.add("Oracle Enterprise Linux 5.3 x64");
        _guestOsList.add("Oracle Enterprise Linux 5.4");
        _guestOsList.add("Oracle Enterprise Linux 5.4 x64");
        _guestOsList.add("Red Hat Enterprise Linux 4.5");
        _guestOsList.add("Red Hat Enterprise Linux 4.6");
        _guestOsList.add("Red Hat Enterprise Linux 4.7");
        _guestOsList.add("Red Hat Enterprise Linux 4.8");
        _guestOsList.add("Red Hat Enterprise Linux 5.0");
        _guestOsList.add("Red Hat Enterprise Linux 5.0 x64");
        _guestOsList.add("Red Hat Enterprise Linux 5.1");
        _guestOsList.add("Red Hat Enterprise Linux 5.1 x64");
        _guestOsList.add("Red Hat Enterprise Linux 5.2");
        _guestOsList.add("Red Hat Enterprise Linux 5.2 x64");
        _guestOsList.add("Red Hat Enterprise Linux 5.3");
        _guestOsList.add("Red Hat Enterprise Linux 5.3 x64");
        _guestOsList.add("Red Hat Enterprise Linux 5.4");
        _guestOsList.add("Red Hat Enterprise Linux 5.4 x64");
        _guestOsList.add("SUSE Linux Enterprise Server 9 SP4");
        _guestOsList.add("SUSE Linux Enterprise Server 10 SP1");
        _guestOsList.add("SUSE Linux Enterprise Server 10 SP1 x64");
        _guestOsList.add("SUSE Linux Enterprise Server 10 SP2");
        _guestOsList.add("SUSE Linux Enterprise Server 10 SP2 x64");
        _guestOsList.add("Other install media");
        _guestOsList.add("SUSE Linux Enterprise Server 11");
        _guestOsList.add("SUSE Linux Enterprise Server 11 x64");
        _guestOsList.add("Windows 7");
        _guestOsList.add("Windows 7 x64");
        _guestOsList.add("Windows Server 2003");
        _guestOsList.add("Windows Server 2003 x64");
        _guestOsList.add("Windows Server 2008");
        _guestOsList.add("Windows Server 2008 x64");
        _guestOsList.add("Windows Server 2008 R2 x64");
        _guestOsList.add("Windows 2000 SP4");
        _guestOsList.add("Windows Vista");
        _guestOsList.add("Windows XP SP2");
        _guestOsList.add("Windows XP SP3");
        _guestOsList.add("Other install media");
    }

    public static String getGuestOsType(String stdType) {
        return _guestOsMap.get(stdType);
    }

    // guest OS ids are treated as 1-based indices into the list
    public static String getGuestOsType(long guestOsId) {
        return _guestOsList.get((int) (guestOsId-1));
    }
}
@ -1,5 +1,5 @@
|
||||
/**
|
||||
: * Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
|
||||
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
|
||||
*
|
||||
* This software is licensed under the GNU General Public License v3 or later.
|
||||
*
|
||||
@ -234,6 +234,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
|
||||
protected int _wait;
|
||||
protected IAgentControl _agentControl;
|
||||
protected boolean _isRemoteAgent = false;
|
||||
|
||||
|
||||
protected final XenServerHost _host = new XenServerHost();
|
||||
|
||||
@ -270,69 +271,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
|
||||
s_statesTable.put(Types.VmPowerState.UNKNOWN, State.Unknown);
|
||||
s_statesTable.put(Types.VmPowerState.UNRECOGNIZED, State.Unknown);
|
||||
}
|
||||
private static HashMap<String, String> _guestOsType = new HashMap<String, String>(50);
|
||||
static {
|
||||
_guestOsType.put("CentOS 4.5 (32-bit)", "CentOS 4.5");
|
||||
_guestOsType.put("CentOS 4.6 (32-bit)", "CentOS 4.6");
|
||||
_guestOsType.put("CentOS 4.7 (32-bit)", "CentOS 4.7");
|
||||
_guestOsType.put("CentOS 4.8 (32-bit)", "CentOS 4.8");
|
||||
_guestOsType.put("CentOS 5.0 (32-bit)", "CentOS 5.0");
|
||||
_guestOsType.put("CentOS 5.0 (64-bit)", "CentOS 5.0 x64");
|
||||
_guestOsType.put("CentOS 5.1 (32-bit)", "CentOS 5.1");
|
||||
_guestOsType.put("CentOS 5.1 (64-bit)", "CentOS 5.1 x64");
|
||||
_guestOsType.put("CentOS 5.2 (32-bit)", "CentOS 5.2");
|
||||
_guestOsType.put("CentOS 5.2 (64-bit)", "CentOS 5.2 x64");
|
||||
_guestOsType.put("CentOS 5.3 (32-bit)", "CentOS 5.3");
|
||||
_guestOsType.put("CentOS 5.3 (64-bit)", "CentOS 5.3 x64");
|
||||
_guestOsType.put("CentOS 5.4 (32-bit)", "CentOS 5.4");
|
||||
_guestOsType.put("CentOS 5.4 (64-bit)", "CentOS 5.4 x64");
|
||||
_guestOsType.put("Debian Lenny 5.0 (32-bit)", "Debian Lenny 5.0");
|
||||
_guestOsType.put("Oracle Enterprise Linux 5.0 (32-bit)", "Oracle Enterprise Linux 5.0");
|
||||
_guestOsType.put("Oracle Enterprise Linux 5.0 (64-bit)", "Oracle Enterprise Linux 5.0 x64");
|
||||
_guestOsType.put("Oracle Enterprise Linux 5.1 (32-bit)", "Oracle Enterprise Linux 5.1");
|
||||
_guestOsType.put("Oracle Enterprise Linux 5.1 (64-bit)", "Oracle Enterprise Linux 5.1 x64");
|
||||
_guestOsType.put("Oracle Enterprise Linux 5.2 (32-bit)", "Oracle Enterprise Linux 5.2");
|
||||
_guestOsType.put("Oracle Enterprise Linux 5.2 (64-bit)", "Oracle Enterprise Linux 5.2 x64");
|
||||
_guestOsType.put("Oracle Enterprise Linux 5.3 (32-bit)", "Oracle Enterprise Linux 5.3");
|
||||
_guestOsType.put("Oracle Enterprise Linux 5.3 (64-bit)", "Oracle Enterprise Linux 5.3 x64");
|
||||
_guestOsType.put("Oracle Enterprise Linux 5.4 (32-bit)", "Oracle Enterprise Linux 5.4");
|
||||
_guestOsType.put("Oracle Enterprise Linux 5.4 (64-bit)", "Oracle Enterprise Linux 5.4 x64");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 4.5 (32-bit)", "Red Hat Enterprise Linux 4.5");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 4.6 (32-bit)", "Red Hat Enterprise Linux 4.6");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 4.7 (32-bit)", "Red Hat Enterprise Linux 4.7");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 4.8 (32-bit)", "Red Hat Enterprise Linux 4.8");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 5.0 (32-bit)", "Red Hat Enterprise Linux 5.0");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 5.0 (64-bit)", "Red Hat Enterprise Linux 5.0 x64");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 5.1 (32-bit)", "Red Hat Enterprise Linux 5.1");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 5.1 (64-bit)", "Red Hat Enterprise Linux 5.1 x64");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 5.2 (32-bit)", "Red Hat Enterprise Linux 5.2");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 5.2 (64-bit)", "Red Hat Enterprise Linux 5.2 x64");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 5.3 (32-bit)", "Red Hat Enterprise Linux 5.3");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 5.3 (64-bit)", "Red Hat Enterprise Linux 5.3 x64");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 5.4 (32-bit)", "Red Hat Enterprise Linux 5.4");
|
||||
_guestOsType.put("Red Hat Enterprise Linux 5.4 (64-bit)", "Red Hat Enterprise Linux 5.4 x64");
|
||||
_guestOsType.put("SUSE Linux Enterprise Server 9 SP4 (32-bit)", "SUSE Linux Enterprise Server 9 SP4");
|
||||
_guestOsType.put("SUSE Linux Enterprise Server 10 SP1 (32-bit)", "SUSE Linux Enterprise Server 10 SP1");
|
||||
_guestOsType.put("SUSE Linux Enterprise Server 10 SP1 (64-bit)", "SUSE Linux Enterprise Server 10 SP1 x64");
|
||||
_guestOsType.put("SUSE Linux Enterprise Server 10 SP2 (32-bit)", "SUSE Linux Enterprise Server 10 SP2");
|
||||
_guestOsType.put("SUSE Linux Enterprise Server 10 SP2 (64-bit)", "SUSE Linux Enterprise Server 10 SP2 x64");
|
||||
_guestOsType.put("SUSE Linux Enterprise Server 10 SP3 (64-bit)", "Other install media");
|
||||
_guestOsType.put("SUSE Linux Enterprise Server 11 (32-bit)", "SUSE Linux Enterprise Server 11");
|
||||
_guestOsType.put("SUSE Linux Enterprise Server 11 (64-bit)", "SUSE Linux Enterprise Server 11 x64");
|
||||
_guestOsType.put("Windows 7 (32-bit)", "Windows 7");
|
||||
_guestOsType.put("Windows 7 (64-bit)", "Windows 7 x64");
|
||||
_guestOsType.put("Windows Server 2003 (32-bit)", "Windows Server 2003");
|
||||
_guestOsType.put("Windows Server 2003 (64-bit)", "Windows Server 2003 x64");
|
||||
_guestOsType.put("Windows Server 2008 (32-bit)", "Windows Server 2008");
|
||||
_guestOsType.put("Windows Server 2008 (64-bit)", "Windows Server 2008 x64");
|
||||
_guestOsType.put("Windows Server 2008 R2 (64-bit)", "Windows Server 2008 R2 x64");
|
||||
_guestOsType.put("Windows 2000 SP4 (32-bit)", "Windows 2000 SP4");
|
||||
_guestOsType.put("Windows Vista (32-bit)", "Windows Vista");
|
||||
_guestOsType.put("Windows XP SP2 (32-bit)", "Windows XP SP2");
|
||||
_guestOsType.put("Windows XP SP3 (32-bit)", "Windows XP SP3");
|
||||
_guestOsType.put("Other install media", "Other install media");
|
||||
|
||||
}
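// (The map above pairs CloudStack guest OS display names with XenServer template
// name-labels; display names with no matching XenServer template -- e.g.
// "SUSE Linux Enterprise Server 10 SP3 (64-bit)" -- fall back to "Other install media".)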
protected boolean isRefNull(XenAPIObject object) {
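// XenAPI represents a null reference on the wire as the literal string "OpaqueRef:NULL".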
return (object == null || object.toWireString().equals("OpaqueRef:NULL"));

@@ -1171,7 +1110,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
bootArgs += " pod=" + _pod;
bootArgs += " localgw=" + _localGateway;
String result = startSystemVM(vmName, storage.getVlanId(), network, cmd.getVolumes(), bootArgs, storage.getGuestMacAddress(), storage.getGuestIpAddress(), storage
.getPrivateMacAddress(), storage.getPublicMacAddress(), cmd.getProxyCmdPort(), storage.getRamSize());
.getPrivateMacAddress(), storage.getPublicMacAddress(), cmd.getProxyCmdPort(), storage.getRamSize(), storage.getGuestOSId());
if (result == null) {
return new StartSecStorageVmAnswer(cmd);
}

@@ -2078,6 +2017,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
/* Does the template exist in primary storage pool? If yes, no copy */
VDI vmtmpltvdi = null;
VDI snapshotvdi = null;

Set<VDI> vdis = VDI.getByNameLabel(conn, "Template " + cmd.getName());

@@ -2110,19 +2050,21 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
return new DownloadAnswer(null, 0, msg, com.cloud.storage.VMTemplateStorageResourceAssoc.Status.DOWNLOAD_ERROR, "", "", 0);
}
vmtmpltvdi = cloudVDIcopy(tmpltvdi, poolsr);

vmtmpltvdi.setNameLabel(conn, "Template " + cmd.getName());
snapshotvdi = vmtmpltvdi.snapshot(conn, new HashMap<String, String>());
vmtmpltvdi.destroy(conn);
snapshotvdi.setNameLabel(conn, "Template " + cmd.getName());
// vmtmpltvdi.setNameDescription(conn, cmd.getDescription());
uuid = vmtmpltvdi.getUuid(conn);
uuid = snapshotvdi.getUuid(conn);
vmtmpltvdi = snapshotvdi;

} else
uuid = vmtmpltvdi.getUuid(conn);

// Determine the size of the template
long createdSize = vmtmpltvdi.getVirtualSize(conn);
long phySize = vmtmpltvdi.getPhysicalUtilisation(conn);

DownloadAnswer answer = new DownloadAnswer(null, 100, cmd, com.cloud.storage.VMTemplateStorageResourceAssoc.Status.DOWNLOADED, uuid, uuid);
answer.setTemplateSize(createdSize);
answer.setTemplateSize(phySize);

return answer;

@@ -2593,9 +2535,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
return vm;
}

protected String getGuestOsType(String stdType) {
return _guestOsType.get(stdType);
}

public boolean joinPool(String address, String username, String password) {
Connection conn = getConnection();

@@ -3139,7 +3079,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
String bootArgs = cmd.getBootArgs();

String result = startSystemVM(vmName, router.getVlanId(), network, cmd.getVolumes(), bootArgs, router.getGuestMacAddress(), router.getPrivateIpAddress(), router
.getPrivateMacAddress(), router.getPublicMacAddress(), 3922, router.getRamSize());
.getPrivateMacAddress(), router.getPublicMacAddress(), 3922, router.getRamSize(), router.getGuestOSId());
if (result == null) {
networkUsage(router.getPrivateIpAddress(), "create", null);
return new StartRouterAnswer(cmd);

@@ -3154,7 +3094,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
}

protected String startSystemVM(String vmName, String vlanId, Network nw0, List<VolumeVO> vols, String bootArgs, String guestMacAddr, String privateIp, String privateMacAddr,
String publicMacAddr, int cmdPort, long ramSize) {
String publicMacAddr, int cmdPort, long ramSize, long guestOsId) {

setupLinkLocalNetwork();
VM vm = null;

@@ -3172,14 +3112,12 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR

Ternary<SR, VDI, VolumeVO> mount = mounts.get(0);

Set<VM> templates = VM.getByNameLabel(conn, "CentOS 5.3");
Set<VM> templates = VM.getByNameLabel(conn, CitrixHelper.getGuestOsType(guestOsId));
if (templates.size() == 0) {
templates = VM.getByNameLabel(conn, "CentOS 5.3 (64-bit)");
if (templates.size() == 0) {
String msg = " can not find template CentOS 5.3 ";
s_logger.warn(msg);
return msg;
}
String msg = " can not find systemvm template " + CitrixHelper.getGuestOsType(guestOsId) ;
s_logger.warn(msg);
return msg;

}

VM template = templates.iterator().next();

@@ -3340,7 +3278,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
bootArgs += " localgw=" + _localGateway;

String result = startSystemVM(vmName, proxy.getVlanId(), network, cmd.getVolumes(), bootArgs, proxy.getGuestMacAddress(), proxy.getGuestIpAddress(), proxy
.getPrivateMacAddress(), proxy.getPublicMacAddress(), cmd.getProxyCmdPort(), proxy.getRamSize());
.getPrivateMacAddress(), proxy.getPublicMacAddress(), cmd.getProxyCmdPort(), proxy.getRamSize(), proxy.getGuestOSId());
if (result == null) {
return new StartConsoleProxyAnswer(cmd);
}

@@ -3477,8 +3415,12 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
return false;
return true;
}

protected String callHostPlugin(String plugin, String cmd, String... params) {
//default time out is 300 s
return callHostPluginWithTimeOut(plugin, cmd, 300, params);
}
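// Note: the varargs are flattened key/value pairs -- e.g. the calls further below pass
// ("vmopsSnapshot", "backupSnapshot", ..., "volumeId", volumeId.toString(), ...).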

protected String callHostPluginWithTimeOut(String plugin, String cmd, int timeout, String... params) {
Map<String, String> args = new HashMap<String, String>();
Session slaveSession = null;
Connection slaveConn = null;

@@ -3490,7 +3432,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
// TODO Auto-generated catch block
e.printStackTrace();
}
slaveConn = new Connection(slaveUrl, 1800);
slaveConn = new Connection(slaveUrl, timeout);
slaveSession = Session.slaveLocalLoginWithPassword(slaveConn, _username, _password);

if (s_logger.isDebugEnabled()) {

@@ -4451,9 +4393,8 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
SR.Record srr = sr.getRecord(conn);
Set<PBD> pbds = sr.getPBDs(conn);
if (pbds.size() == 0) {
String msg = "There is no PBDs for this SR: " + _host.uuid;
String msg = "There is no PBDs for this SR: " + srr.nameLabel + " on host:" + _host.uuid;
s_logger.warn(msg);
removeSR(sr);
return false;
}
Set<Host> hosts = null;

@@ -4507,15 +4448,11 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR

protected Answer execute(ModifyStoragePoolCommand cmd) {
StoragePoolVO pool = cmd.getPool();
StoragePoolTO poolTO = new StoragePoolTO(pool);
try {
Connection conn = getConnection();

SR sr = getStorageRepository(conn, pool);
if (!checkSR(sr)) {
String msg = "ModifyStoragePoolCommand checkSR failed! host:" + _host.uuid + " pool: " + pool.getName() + pool.getHostAddress() + pool.getPath();
s_logger.warn(msg);
return new Answer(cmd, false, msg);
}
SR sr = getStorageRepository(conn, poolTO);
long capacity = sr.getPhysicalSize(conn);
long available = capacity - sr.getPhysicalUtilisation(conn);
if (capacity == -1) {

@@ -4540,14 +4477,10 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR

protected Answer execute(DeleteStoragePoolCommand cmd) {
StoragePoolVO pool = cmd.getPool();
StoragePoolTO poolTO = new StoragePoolTO(pool);
try {
Connection conn = getConnection();
SR sr = getStorageRepository(conn, pool);
if (!checkSR(sr)) {
String msg = "DeleteStoragePoolCommand checkSR failed! host:" + _host.uuid + " pool: " + pool.getName() + pool.getHostAddress() + pool.getPath();
s_logger.warn(msg);
return new Answer(cmd, false, msg);
}
SR sr = getStorageRepository(conn, poolTO);
sr.setNameLabel(conn, pool.getUuid());
sr.setNameDescription(conn, pool.getName());

@@ -4957,119 +4890,10 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
s_logger.warn(msg, e);
throw new CloudRuntimeException(msg, e);
}

}

protected SR getIscsiSR(Connection conn, StoragePoolVO pool) {

synchronized (pool.getUuid().intern()) {
Map<String, String> deviceConfig = new HashMap<String, String>();
try {
String target = pool.getHostAddress().trim();
String path = pool.getPath().trim();
if (path.endsWith("/")) {
path = path.substring(0, path.length() - 1);
}

String tmp[] = path.split("/");
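// path has the form "/<targetIQN>/<LUN>", so splitting on "/" yields
// ["", targetIQN, lunid] -- hence the length-3 check and indices 1 and 2 below.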
if (tmp.length != 3) {
String msg = "Wrong iscsi path " + pool.getPath() + " it should be /targetIQN/LUN";
s_logger.warn(msg);
throw new CloudRuntimeException(msg);
}
String targetiqn = tmp[1].trim();
String lunid = tmp[2].trim();
String scsiid = "";

Set<SR> srs = SR.getByNameLabel(conn, pool.getUuid());
for (SR sr : srs) {
if (!SRType.LVMOISCSI.equals(sr.getType(conn)))
continue;

Set<PBD> pbds = sr.getPBDs(conn);
if (pbds.isEmpty())
continue;

PBD pbd = pbds.iterator().next();

Map<String, String> dc = pbd.getDeviceConfig(conn);

if (dc == null)
continue;

if (dc.get("target") == null)
continue;

if (dc.get("targetIQN") == null)
continue;

if (dc.get("lunid") == null)
continue;

if (target.equals(dc.get("target")) && targetiqn.equals(dc.get("targetIQN")) && lunid.equals(dc.get("lunid"))) {
return sr;
}

}
deviceConfig.put("target", target);
deviceConfig.put("targetIQN", targetiqn);

Host host = Host.getByUuid(conn, _host.uuid);
SR sr = null;
try {
sr = SR.create(conn, host, deviceConfig, new Long(0), pool.getUuid(), pool.getName(), SRType.LVMOISCSI.toString(), "user", true, new HashMap<String, String>());
} catch (XenAPIException e) {
String errmsg = e.toString();
if (errmsg.contains("SR_BACKEND_FAILURE_107")) {
String lun[] = errmsg.split("<LUN>");
boolean found = false;
for (int i = 1; i < lun.length; i++) {
int blunindex = lun[i].indexOf("<LUNid>") + 7;
int elunindex = lun[i].indexOf("</LUNid>");
String ilun = lun[i].substring(blunindex, elunindex);
ilun = ilun.trim();
if (ilun.equals(lunid)) {
int bscsiindex = lun[i].indexOf("<SCSIid>") + 8;
int escsiindex = lun[i].indexOf("</SCSIid>");
scsiid = lun[i].substring(bscsiindex, escsiindex);
scsiid = scsiid.trim();
found = true;
break;
}
}
if (!found) {
String msg = "can not find LUN " + lunid + " in " + errmsg;
s_logger.warn(msg);
throw new CloudRuntimeException(msg);
}
} else {
String msg = "Unable to create Iscsi SR " + deviceConfig + " due to " + e.toString();
s_logger.warn(msg, e);
throw new CloudRuntimeException(msg, e);
}
}
deviceConfig.put("SCSIid", scsiid);
sr = SR.create(conn, host, deviceConfig, new Long(0), pool.getUuid(), pool.getName(), SRType.LVMOISCSI.toString(), "user", true, new HashMap<String, String>());
if( !checkSR(sr) ) {
throw new Exception("no attached PBD");
}
sr.scan(conn);
return sr;

} catch (XenAPIException e) {
String msg = "Unable to create Iscsi SR " + deviceConfig + " due to " + e.toString();
s_logger.warn(msg, e);
throw new CloudRuntimeException(msg, e);
} catch (Exception e) {
String msg = "Unable to create Iscsi SR " + deviceConfig + " due to " + e.getMessage();
s_logger.warn(msg, e);
throw new CloudRuntimeException(msg, e);
}
}
}

protected SR getIscsiSR(Connection conn, StoragePoolTO pool) {

protected SR getIscsiSR(StoragePoolTO pool) {
Connection conn = getConnection();
synchronized (pool.getUuid().intern()) {
Map<String, String> deviceConfig = new HashMap<String, String>();
try {

@@ -5118,6 +4942,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
if (checkSR(sr)) {
return sr;
}
throw new CloudRuntimeException("SR check failed for storage pool: " + pool.getUuid() + "on host:" + _host.uuid);
}

}

@@ -5177,13 +5002,12 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
}
}

protected SR getNfsSR(StoragePoolVO pool) {
protected SR getNfsSR(StoragePoolTO pool) {
Connection conn = getConnection();

Map<String, String> deviceConfig = new HashMap<String, String>();
try {

String server = pool.getHostAddress();
String server = pool.getHost();
String serverpath = pool.getPath();
serverpath = serverpath.replace("//", "/");
Set<SR> srs = SR.getAll(conn);

@@ -5212,59 +5036,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
if (checkSR(sr)) {
return sr;
}
}

}

deviceConfig.put("server", server);
deviceConfig.put("serverpath", serverpath);
Host host = Host.getByUuid(conn, _host.uuid);
SR sr = SR.create(conn, host, deviceConfig, new Long(0), pool.getUuid(), pool.getName(), SRType.NFS.toString(), "user", true, new HashMap<String, String>());
sr.scan(conn);
return sr;

} catch (XenAPIException e) {
String msg = "Unable to create NFS SR " + deviceConfig + " due to " + e.toString();
s_logger.warn(msg, e);
throw new CloudRuntimeException(msg, e);
} catch (Exception e) {
String msg = "Unable to create NFS SR " + deviceConfig + " due to " + e.getMessage();
s_logger.warn(msg);
throw new CloudRuntimeException(msg, e);
}
}

protected SR getNfsSR(Connection conn, StoragePoolTO pool) {
Map<String, String> deviceConfig = new HashMap<String, String>();

String server = pool.getHost();
String serverpath = pool.getPath();
serverpath = serverpath.replace("//", "/");
try {
Set<SR> srs = SR.getAll(conn);
for (SR sr : srs) {
if (!SRType.NFS.equals(sr.getType(conn)))
continue;

Set<PBD> pbds = sr.getPBDs(conn);
if (pbds.isEmpty())
continue;

PBD pbd = pbds.iterator().next();

Map<String, String> dc = pbd.getDeviceConfig(conn);

if (dc == null)
continue;

if (dc.get("server") == null)
continue;

if (dc.get("serverpath") == null)
continue;

if (server.equals(dc.get("server")) && serverpath.equals(dc.get("serverpath"))) {
return sr;
throw new CloudRuntimeException("SR check failed for storage pool: " + pool.getUuid() + "on host:" + _host.uuid);
}

}

@@ -5351,6 +5123,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
public CopyVolumeAnswer execute(final CopyVolumeCommand cmd) {
String volumeUUID = cmd.getVolumePath();
StoragePoolVO pool = cmd.getPool();
StoragePoolTO poolTO = new StoragePoolTO(pool);
String secondaryStorageURL = cmd.getSecondaryStorageURL();

URI uri = null;

@@ -5403,7 +5176,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
}

// Copy the volume to the primary storage pool
primaryStoragePool = getStorageRepository(conn, pool);
primaryStoragePool = getStorageRepository(conn, poolTO);
destVolume = cloudVDIcopy(srcVolume, primaryStoragePool);
}

@@ -6277,40 +6050,6 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
throw new CloudRuntimeException("Unable to get SR " + pool.getUuid() + " due to " + e.getMessage(), e);
}

if (srs.size() > 1) {
throw new CloudRuntimeException("More than one storage repository was found for pool with uuid: " + pool.getUuid());
}

if (srs.size() == 1) {
SR sr = srs.iterator().next();
if (s_logger.isDebugEnabled()) {
s_logger.debug("SR retrieved for " + pool.getId() + " is mapped to " + sr.toString());
}

if (checkSR(sr)) {
return sr;
}
}

if (pool.getType() == StoragePoolType.NetworkFilesystem)
return getNfsSR(conn, pool);
else if (pool.getType() == StoragePoolType.IscsiLUN)
return getIscsiSR(conn, pool);
else
throw new CloudRuntimeException("The pool type: " + pool.getType().name() + " is not supported.");

}

protected SR getStorageRepository(Connection conn, StoragePoolVO pool) {
Set<SR> srs;
try {
srs = SR.getByNameLabel(conn, pool.getUuid());
} catch (XenAPIException e) {
throw new CloudRuntimeException("Unable to get SR " + pool.getUuid() + " due to " + e.toString(), e);
} catch (Exception e) {
throw new CloudRuntimeException("Unable to get SR " + pool.getUuid() + " due to " + e.getMessage(), e);
}

if (srs.size() > 1) {
throw new CloudRuntimeException("More than one storage repository was found for pool with uuid: " + pool.getUuid());
} else if (srs.size() == 1) {

@@ -6322,15 +6061,15 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
if (checkSR(sr)) {
return sr;
}
throw new CloudRuntimeException("Check this SR failed");
throw new CloudRuntimeException("SR check failed for storage pool: " + pool.getUuid() + "on host:" + _host.uuid);
} else {

if (pool.getPoolType() == StoragePoolType.NetworkFilesystem)
if (pool.getType() == StoragePoolType.NetworkFilesystem)
return getNfsSR(pool);
else if (pool.getPoolType() == StoragePoolType.IscsiLUN)
return getIscsiSR(conn, pool);
else if (pool.getType() == StoragePoolType.IscsiLUN)
return getIscsiSR(pool);
else
throw new CloudRuntimeException("The pool type: " + pool.getPoolType().name() + " is not supported.");
throw new CloudRuntimeException("The pool type: " + pool.getType().name() + " is not supported.");
}

}

@@ -6405,7 +6144,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
checksum = "";
}

String result = callHostPlugin("vmopsSnapshot", "post_create_private_template", "remoteTemplateMountPath", remoteTemplateMountPath, "templateDownloadFolder", templateDownloadFolder,
String result = callHostPluginWithTimeOut("vmopsSnapshot", "post_create_private_template", 110*60, "remoteTemplateMountPath", remoteTemplateMountPath, "templateDownloadFolder", templateDownloadFolder,
"templateInstallFolder", templateInstallFolder, "templateFilename", templateFilename, "templateName", templateName, "templateDescription", templateDescription,
"checksum", checksum, "virtualSize", String.valueOf(virtualSize), "templateId", String.valueOf(templateId));

@@ -6443,7 +6182,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR

// Each argument is put in a separate line for readability.
// Using more lines does not harm the environment.
String results = callHostPlugin("vmopsSnapshot", "backupSnapshot", "primaryStorageSRUuid", primaryStorageSRUuid, "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId",
String results = callHostPluginWithTimeOut("vmopsSnapshot", "backupSnapshot", 110*60, "primaryStorageSRUuid", primaryStorageSRUuid, "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId",
volumeId.toString(), "secondaryStorageMountPath", secondaryStorageMountPath, "snapshotUuid", snapshotUuid, "prevSnapshotUuid", prevSnapshotUuid, "prevBackupUuid",
prevBackupUuid, "isFirstSnapshotOfRootVolume", isFirstSnapshotOfRootVolume.toString(), "isISCSI", isISCSI.toString());

@@ -6546,7 +6285,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR

String failureString = "Could not create volume from " + backedUpSnapshotUuid;
templatePath = (templatePath == null) ? "" : templatePath;
String results = callHostPlugin("vmopsSnapshot", "createVolumeFromSnapshot", "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId", volumeId.toString(),
String results = callHostPluginWithTimeOut("vmopsSnapshot","createVolumeFromSnapshot", 110*60, "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId", volumeId.toString(),
"secondaryStorageMountPath", secondaryStorageMountPath, "backedUpSnapshotUuid", backedUpSnapshotUuid, "templatePath", templatePath, "templateDownloadFolder",
templateDownloadFolder, "isISCSI", isISCSI.toString());

@@ -6699,4 +6438,8 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
return virtualSize;
}
}

protected String getGuestOsType(String stdType) {
return CitrixHelper.getGuestOsType(stdType);
}
}

@@ -76,7 +76,8 @@ public class Criteria {
public static final String TARGET_IQN = "targetiqn";
public static final String SCOPE = "scope";
public static final String NETWORKGROUP = "networkGroup";

public static final String GROUP = "group";
public static final String EMPTY_GROUP = "emptyGroup";

public Criteria(String orderBy, Boolean ascending, Long offset, Long limit) {
this.offset = offset;

16
core/src/com/cloud/server/ManagementServer.java
Normal file → Executable file
@@ -615,8 +615,8 @@ public interface ManagementServer {
* @volumeId
* @throws InvalidParameterValueException, InternalErrorException
*/
void detachVolumeFromVM(long volumeId, long startEventId) throws InternalErrorException;
long detachVolumeFromVMAsync(long volumeId) throws InvalidParameterValueException;
void detachVolumeFromVM(long volumeId, long startEventId, long deviceId, long instanceId) throws InternalErrorException;
long detachVolumeFromVMAsync(long volumeId, long deviceId, long instanceId) throws InvalidParameterValueException;

/**
* Attaches an ISO to the virtual CDROM device of the specified VM. Will fail if the VM already has an ISO mounted.

@@ -2186,7 +2186,17 @@ public interface ManagementServer {
boolean validateCustomVolumeSizeRange(long size) throws InvalidParameterValueException;

boolean checkIfMaintenable(long hostId);
/**
* Extracts the template to a particular location.
* @param url - the url where the template needs to be extracted to
* @param zoneId - zone id of the template
* @param templateId - the id of the template
*
*/
void extractTemplate(String url, Long templateId, Long zoneId) throws URISyntaxException;
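// Illustrative call (variable names hypothetical):
// managementServer.extractTemplate(uploadUrl, templateId, zoneId);
// where uploadUrl is the location the template should be pushed to.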

Map<String, String> listCapabilities();
GuestOSCategoryVO getGuestOsCategory(Long guestOsId);
GuestOSVO getGuestOs(Long guestOsId);
VolumeVO findVolumeByInstanceAndDeviceId(long instanceId, long deviceId);
VolumeVO getRootVolume(Long instanceId);
}

@@ -234,5 +234,9 @@ public class DiskOfferingVO implements DiskOffering {
buf.delete(buf.length() - 1, buf.length());

setTags(buf.toString());
}
}

public void setUseLocalStorage(boolean useLocalStorage) {
this.useLocalStorage = useLocalStorage;
}
}

@@ -53,12 +53,14 @@ import com.cloud.agent.api.storage.ShareAnswer;
import com.cloud.agent.api.storage.ShareCommand;
import com.cloud.agent.api.storage.UpgradeDiskAnswer;
import com.cloud.agent.api.storage.UpgradeDiskCommand;
import com.cloud.agent.api.storage.UploadCommand;
import com.cloud.host.Host;
import com.cloud.resource.ServerResource;
import com.cloud.resource.ServerResourceBase;
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.template.DownloadManager;
import com.cloud.storage.template.TemplateInfo;
import com.cloud.storage.template.UploadManager;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.script.OutputInterpreter;

@@ -112,6 +114,7 @@ public abstract class StorageResource extends ServerResourceBase implements Serv
protected String _zfsScriptsDir;

protected DownloadManager _downloadManager;
protected UploadManager _uploadManager;

protected Map<Long, VolumeSnapshotRequest> _volumeHourlySnapshotRequests = new HashMap<Long, VolumeSnapshotRequest>();
protected Map<Long, VolumeSnapshotRequest> _volumeDailySnapshotRequests = new HashMap<Long, VolumeSnapshotRequest>();

@@ -127,6 +130,8 @@ public abstract class StorageResource extends ServerResourceBase implements Serv
return execute((PrimaryStorageDownloadCommand)cmd);
} else if (cmd instanceof DownloadCommand) {
return execute((DownloadCommand)cmd);
} else if (cmd instanceof UploadCommand) {
return execute((UploadCommand)cmd);
} else if (cmd instanceof GetStorageStatsCommand) {
return execute((GetStorageStatsCommand)cmd);
} else if (cmd instanceof UpgradeDiskCommand) {

@@ -159,6 +164,11 @@ public abstract class StorageResource extends ServerResourceBase implements Serv
protected Answer execute(final PrimaryStorageDownloadCommand cmd) {
return Answer.createUnsupportedCommandAnswer(cmd);
}

private Answer execute(UploadCommand cmd) {
s_logger.warn(" Nitin got the cmd " +cmd);
return _uploadManager.handleUploadCommand(cmd);
}

protected Answer execute(final DownloadCommand cmd) {
return _downloadManager.handleDownloadCommand(cmd);

@@ -59,7 +59,10 @@ public class VMTemplateHostVO implements VMTemplateStorageResourceAssoc {
private Date lastUpdated = null;

@Column (name="download_pct")
private int downloadPercent;

@Column (name="upload_pct")
private int uploadPercent;

@Column (name="size")
private long size;

@@ -67,15 +70,25 @@ public class VMTemplateHostVO implements VMTemplateStorageResourceAssoc {
@Column (name="download_state")
@Enumerated(EnumType.STRING)
private Status downloadState;

@Column (name="upload_state")
@Enumerated(EnumType.STRING)
private Status uploadState;

@Column (name="local_path")
private String localDownloadPath;

@Column (name="error_str")
private String errorString;

@Column (name="upload_error_str")
private String upload_errorString;

@Column (name="job_id")
private String jobId;

@Column (name="upload_job_id")
private String uploadJobId;

@Column (name="pool_id")
private Long poolId;

@@ -85,7 +98,10 @@ public class VMTemplateHostVO implements VMTemplateStorageResourceAssoc {

@Column (name="url")
private String downloadUrl;

@Column (name="upload_url")
private String uploadUrl;

@Column(name="is_copy")
private boolean isCopy = false;

@@ -262,5 +278,45 @@ public class VMTemplateHostVO implements VMTemplateStorageResourceAssoc {

public boolean isCopy() {
return isCopy;
}

public int getUploadPercent() {
return uploadPercent;
}

public void setUploadPercent(int uploadPercent) {
this.uploadPercent = uploadPercent;
}

public Status getUploadState() {
return uploadState;
}

public void setUploadState(Status uploadState) {
this.uploadState = uploadState;
}

public String getUpload_errorString() {
return upload_errorString;
}

public void setUpload_errorString(String uploadErrorString) {
upload_errorString = uploadErrorString;
}

public String getUploadUrl() {
return uploadUrl;
}

public void setUploadUrl(String uploadUrl) {
this.uploadUrl = uploadUrl;
}

public String getUploadJobId() {
return uploadJobId;
}

public void setUploadJobId(String uploadJobId) {
this.uploadJobId = uploadJobId;
}
}

@@ -24,7 +24,7 @@ import java.util.Date;
*
*/
public interface VMTemplateStorageResourceAssoc {
public static enum Status {UNKNOWN, DOWNLOAD_ERROR, NOT_DOWNLOADED, DOWNLOAD_IN_PROGRESS, DOWNLOADED, ABANDONED}
public static enum Status {UNKNOWN, DOWNLOAD_ERROR, NOT_DOWNLOADED, DOWNLOAD_IN_PROGRESS, DOWNLOADED, ABANDONED, UPLOADED, NOT_UPLOADED, UPLOAD_ERROR, UPLOAD_IN_PROGRESS}

public String getInstallPath();

@@ -90,6 +90,10 @@ public class VolumeVO implements Volume {
@Column(name="created")
Date created;

@Column(name="attached")
@Temporal(value=TemporalType.TIMESTAMP)
Date attached;

@Column(name="data_center_id")
long dataCenterId;

@@ -539,4 +543,15 @@ public class VolumeVO implements Volume {
public Long getSourceId(){
return this.sourceId;
}

@Override
public Date getAttached(){
return this.attached;
}

@Override
public void setAttached(Date attached){
this.attached = attached;
}

}

@@ -101,5 +101,7 @@ public interface StoragePoolDao extends GenericDao<StoragePoolVO, Long> {
List<String> searchForStoragePoolDetails(long poolId, String value);

long countBy(long podId, Status... statuses);

List<StoragePoolVO> findIfDuplicatePoolsExistByUUID(String uuid);

}

@@ -61,6 +61,7 @@ public class StoragePoolDaoImpl extends GenericDaoBase<StoragePoolVO, Long> imp
protected final SearchBuilder<StoragePoolVO> DeleteLvmSearch;
protected final GenericSearchBuilder<StoragePoolVO, Long> MaintenanceCountSearch;

protected final StoragePoolDetailsDao _detailsDao;

private final String DetailsSqlPrefix = "SELECT storage_pool.* from storage_pool LEFT JOIN storage_pool_details ON storage_pool.id = storage_pool_details.pool_id WHERE storage_pool.data_center_id = ? and (storage_pool.pod_id = ? or storage_pool.pod_id is null) and (";

@@ -144,6 +145,13 @@ public class StoragePoolDaoImpl extends GenericDaoBase<StoragePoolVO, Long> imp
return findOneBy(sc);
}

@Override
public List<StoragePoolVO> findIfDuplicatePoolsExistByUUID(String uuid) {
SearchCriteria<StoragePoolVO> sc = UUIDSearch.create();
sc.setParameters("uuid", uuid);
return listActiveBy(sc);
}

@Override
public List<StoragePoolVO> listByDataCenterId(long datacenterId) {

7
core/src/com/cloud/storage/dao/VMTemplateHostDao.java
Normal file → Executable file
@@ -18,9 +18,11 @@

package com.cloud.storage.dao;

import java.util.Date;
import java.util.List;

import com.cloud.storage.VMTemplateHostVO;
import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
import com.cloud.utils.db.GenericDao;

public interface VMTemplateHostDao extends GenericDao<VMTemplateHostVO, Long> {

@@ -41,6 +43,9 @@ public interface VMTemplateHostDao extends GenericDao<VMTemplateHostVO, Long> {
List<VMTemplateHostVO> listByTemplatePool(long templateId, long poolId);

void update(VMTemplateHostVO instance);

void updateUploadStatus(long hostId, long templateId, int uploadPercent, Status uploadState,
String jobId, String uploadUrl);

List<VMTemplateHostVO> listByTemplateStatus(long templateId, VMTemplateHostVO.Status downloadState);

@@ -53,4 +58,6 @@ public interface VMTemplateHostDao extends GenericDao<VMTemplateHostVO, Long> {
List<VMTemplateHostVO> listDestroyed(long hostId);

boolean templateAvailable(long templateId, long hostId);

List<VMTemplateHostVO> listByTemplateUploadStatus(long templateId, Status uploadState);
}

53
core/src/com/cloud/storage/dao/VMTemplateHostDaoImpl.java
Normal file → Executable file
@@ -49,16 +49,26 @@ public class VMTemplateHostDaoImpl extends GenericDaoBase<VMTemplateHostVO, Long
protected final SearchBuilder<VMTemplateHostVO> PoolTemplateSearch;
protected final SearchBuilder<VMTemplateHostVO> HostTemplatePoolSearch;
protected final SearchBuilder<VMTemplateHostVO> TemplateStatusSearch;
protected final SearchBuilder<VMTemplateHostVO> TemplateStatesSearch;
protected final SearchBuilder<VMTemplateHostVO> TemplateUploadStatusSearch;

protected static final String UPDATE_TEMPLATE_HOST_REF =
"UPDATE template_host_ref SET download_state = ?, download_pct= ?, last_updated = ? "
+ ", error_str = ?, local_path = ?, job_id = ? "
+ "WHERE host_id = ? and template_id = ?";

// upload_url is included in the SET list so updateUploadStatus() below can bind
// every argument it receives, in column order.
protected static final String UPDATE_UPLOAD_INFO =
"UPDATE template_host_ref SET upload_state = ?, upload_pct= ?, last_updated = ? "
+ ", upload_error_str = ?, upload_job_id = ?, upload_url = ? "
+ "WHERE host_id = ? and template_id = ?";

protected static final String DOWNLOADS_STATE_DC=
"SELECT * FROM template_host_ref t, host h where t.host_id = h.id and h.data_center_id=? "
+ " and t.template_id=? and t.download_state = ?" ;

protected static final String UPLOADS_STATE_DC=
"SELECT * FROM template_host_ref t, host h where t.host_id = h.id and h.data_center_id=? "
+ " and t.template_id=? and t.upload_state = ?" ;

protected static final String DOWNLOADS_STATE_DC_POD=
"SELECT * FROM template_host_ref t, host h where t.host_id = h.id and h.data_center_id=? and h.pod_id=? "

@@ -67,7 +77,12 @@ public class VMTemplateHostDaoImpl extends GenericDaoBase<VMTemplateHostVO, Long
protected static final String DOWNLOADS_STATE=
"SELECT * FROM template_host_ref t "
+ " where t.template_id=? and t.download_state=?";

protected static final String UPLOADS_STATE=
"SELECT * FROM template_host_ref t "
+ " where t.template_id=? and t.upload_state=?";

public VMTemplateHostDaoImpl () {
HostSearch = createSearchBuilder();
HostSearch.and("host_id", HostSearch.entity().getHostId(), SearchCriteria.Op.EQ);

@@ -98,6 +113,11 @@ public class VMTemplateHostDaoImpl extends GenericDaoBase<VMTemplateHostVO, Long
TemplateStatusSearch.and("template_id", TemplateStatusSearch.entity().getTemplateId(), SearchCriteria.Op.EQ);
TemplateStatusSearch.and("download_state", TemplateStatusSearch.entity().getDownloadState(), SearchCriteria.Op.EQ);
TemplateStatusSearch.done();

TemplateUploadStatusSearch = createSearchBuilder();
TemplateUploadStatusSearch.and("template_id", TemplateUploadStatusSearch.entity().getTemplateId(), SearchCriteria.Op.EQ);
TemplateUploadStatusSearch.and("upload_state", TemplateUploadStatusSearch.entity().getUploadState(), SearchCriteria.Op.EQ);
TemplateUploadStatusSearch.done();

TemplateStatesSearch = createSearchBuilder();
TemplateStatesSearch.and("template_id", TemplateStatesSearch.entity().getTemplateId(), SearchCriteria.Op.EQ);

@@ -129,6 +149,27 @@ public class VMTemplateHostDaoImpl extends GenericDaoBase<VMTemplateHostVO, Long
} catch (Exception e) {
s_logger.warn("Exception: ", e);
}
}

public void updateUploadStatus(long hostId, long templateId, int uploadPercent, Status uploadState,
String uploadJobId, String uploadUrl) {
Transaction txn = Transaction.currentTxn();
PreparedStatement pstmt = null;
try {
Date now = new Date();
String sql = UPDATE_UPLOAD_INFO;
pstmt = txn.prepareAutoCloseStatement(sql);
// Bind in the same order as the columns in UPDATE_UPLOAD_INFO.
pstmt.setString(1, uploadState.toString());
pstmt.setInt(2, uploadPercent);
pstmt.setString(3, DateUtil.getDateDisplayString(TimeZone.getTimeZone("GMT"), now));
pstmt.setString(4, null); // no error text is passed in; clear any stale value
pstmt.setString(5, uploadJobId);
pstmt.setString(6, uploadUrl);
pstmt.setLong(7, hostId);
pstmt.setLong(8, templateId);
pstmt.executeUpdate();
} catch (Exception e) {
s_logger.warn("Exception: ", e);
}
}

@Override

@@ -160,6 +201,14 @@ public class VMTemplateHostDaoImpl extends GenericDaoBase<VMTemplateHostVO, Long
sc.setParameters("template_id", templateId);
return findOneBy(sc);
}

@Override
public List<VMTemplateHostVO> listByTemplateUploadStatus(long templateId, VMTemplateHostVO.Status uploadState) {
SearchCriteria<VMTemplateHostVO> sc = TemplateUploadStatusSearch.create();
sc.setParameters("template_id", templateId);
sc.setParameters("upload_state", uploadState.toString());
return listBy(sc);
}

@Override
public List<VMTemplateHostVO> listByTemplateStatus(long templateId, VMTemplateHostVO.Status downloadState) {

@@ -46,4 +46,5 @@ public interface VolumeDao extends GenericDao<VolumeVO, Long> {
List<VolumeVO> listRemovedButNotDestroyed();
List<VolumeVO> findCreatedByInstance(long id);
List<VolumeVO> findByPoolId(long poolId);
List<VolumeVO> findByInstanceAndDeviceId(long instanceId, long deviceId);
}

@@ -61,6 +61,7 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
protected final GenericSearchBuilder<VolumeVO, Long> ActiveTemplateSearch;
protected final SearchBuilder<VolumeVO> RemovedButNotDestroyedSearch;
protected final SearchBuilder<VolumeVO> PoolIdSearch;
protected final SearchBuilder<VolumeVO> InstanceAndDeviceIdSearch;

protected static final String SELECT_VM_SQL = "SELECT DISTINCT instance_id from volumes v where v.host_id = ? and v.mirror_state = ?";
protected static final String SELECT_VM_ID_SQL = "SELECT DISTINCT instance_id from volumes v where v.host_id = ?";

@@ -117,6 +118,14 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
sc.setParameters("instanceId", id);
return listActiveBy(sc);
}

@Override
public List<VolumeVO> findByInstanceAndDeviceId(long instanceId, long deviceId){
SearchCriteria<VolumeVO> sc = InstanceAndDeviceIdSearch.create();
sc.setParameters("instanceId", instanceId);
sc.setParameters("deviceId", deviceId);
return listActiveBy(sc);
}

@Override
public List<VolumeVO> findByPoolId(long poolId) {

@@ -234,6 +243,7 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
volume.setInstanceId(vmId);
volume.setDeviceId(deviceId);
volume.setUpdated(new Date());
volume.setAttached(new Date());
update(volumeId, volume);
}

@@ -243,6 +253,7 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
volume.setInstanceId(null);
volume.setDeviceId(null);
volume.setUpdated(new Date());
volume.setAttached(null);
update(volumeId, volume);
}

@@ -302,6 +313,11 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
InstanceIdSearch.and("instanceId", InstanceIdSearch.entity().getInstanceId(), SearchCriteria.Op.EQ);
InstanceIdSearch.done();

InstanceAndDeviceIdSearch = createSearchBuilder();
InstanceAndDeviceIdSearch.and("instanceId", InstanceAndDeviceIdSearch.entity().getInstanceId(), SearchCriteria.Op.EQ);
InstanceAndDeviceIdSearch.and("deviceId", InstanceAndDeviceIdSearch.entity().getDeviceId(), SearchCriteria.Op.EQ);
InstanceAndDeviceIdSearch.done();

PoolIdSearch = createSearchBuilder();
PoolIdSearch.and("poolId", PoolIdSearch.entity().getPoolId(), SearchCriteria.Op.EQ);
PoolIdSearch.done();

8
core/src/com/cloud/storage/resource/NfsSecondaryStorageResource.java
Normal file → Executable file
@@ -48,6 +48,7 @@ import com.cloud.agent.api.SecStorageFirewallCfgCommand.PortConfig;
import com.cloud.agent.api.storage.DeleteTemplateCommand;
import com.cloud.agent.api.storage.DownloadCommand;
import com.cloud.agent.api.storage.DownloadProgressCommand;
import com.cloud.agent.api.storage.UploadCommand;
import com.cloud.host.Host;
import com.cloud.host.Host.Type;
import com.cloud.resource.ServerResource;

@@ -58,6 +59,8 @@ import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.template.DownloadManager;
import com.cloud.storage.template.DownloadManagerImpl;
import com.cloud.storage.template.TemplateInfo;
import com.cloud.storage.template.UploadManager;
import com.cloud.storage.template.UploadManagerImpl;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.component.ComponentLocator;
import com.cloud.utils.exception.CloudRuntimeException;

@@ -85,6 +88,7 @@ public class NfsSecondaryStorageResource extends ServerResourceBase implements S
Random _rand = new Random(System.currentTimeMillis());

DownloadManager _dlMgr;
UploadManager _upldMgr;
private String _configSslScr;
private String _configAuthScr;
private String _publicIp;

@@ -111,6 +115,8 @@ public class NfsSecondaryStorageResource extends ServerResourceBase implements S
return _dlMgr.handleDownloadCommand((DownloadProgressCommand)cmd);
} else if (cmd instanceof DownloadCommand) {
return _dlMgr.handleDownloadCommand((DownloadCommand)cmd);
} else if (cmd instanceof UploadCommand) {
return _upldMgr.handleUploadCommand((UploadCommand)cmd);
} else if (cmd instanceof GetStorageStatsCommand) {
return execute((GetStorageStatsCommand)cmd);
} else if (cmd instanceof CheckHealthCommand) {

@@ -413,6 +419,8 @@ public class NfsSecondaryStorageResource extends ServerResourceBase implements S
_params.put(StorageLayer.InstanceConfigKey, _storage);
_dlMgr = new DownloadManagerImpl();
_dlMgr.configure("DownloadManager", _params);
_upldMgr = new UploadManagerImpl();
_upldMgr.configure("UploadManager", params);
} catch (ConfigurationException e) {
s_logger.warn("Caught problem while configuring DownloadManager", e);
return false;

224
core/src/com/cloud/storage/template/FtpTemplateUploader.java
Normal file
@@ -0,0 +1,224 @@
package com.cloud.storage.template;

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import java.util.Date;

import org.apache.log4j.Logger;


public class FtpTemplateUploader implements TemplateUploader {

public static final Logger s_logger = Logger.getLogger(FtpTemplateUploader.class.getName());
public TemplateUploader.Status status = TemplateUploader.Status.NOT_STARTED;
public String errorString = "";
public long totalBytes = 0;
public long templateSizeinBytes;
private String sourcePath;
private String ftpUrl;
private UploadCompleteCallback completionCallback;
private boolean resume;
private BufferedInputStream inputStream = null;
private BufferedOutputStream outputStream = null;
private static final int CHUNK_SIZE = 1024*1024; //1M

public FtpTemplateUploader(String sourcePath, String url, UploadCompleteCallback callback, long templateSizeinBytes){

this.sourcePath = sourcePath;
this.ftpUrl = url;
this.completionCallback = callback;
this.templateSizeinBytes = templateSizeinBytes;
s_logger.warn("Nitin in FtpTemplateUploader " +url + " "+sourcePath);
}

public long upload(UploadCompleteCallback callback )
{

switch (status) {
case ABORTED:
case UNRECOVERABLE_ERROR:
case UPLOAD_FINISHED:
return 0;
default:

}

Date start = new Date();
s_logger.warn("Nitin in FtpTemplateUploader ");
StringBuffer sb = new StringBuffer();
// check for authentication else assume its anonymous access.
/* if (user != null && password != null)
{
sb.append( user );
sb.append( ':' );
sb.append( password );
sb.append( '@' );
}*/
sb.append( ftpUrl );
/*sb.append( '/' );
sb.append( fileName ); filename where u want to dld it */
/*ftp://10.91.18.14/
* type ==> a=ASCII mode, i=image (binary) mode, d= file directory
* listing
*/
sb.append( ";type=i" );

try
{
URL url = new URL( sb.toString() );
URLConnection urlc = url.openConnection();

outputStream = new BufferedOutputStream( urlc.getOutputStream() );
inputStream = new BufferedInputStream( new FileInputStream( new File(sourcePath) ) );

status = TemplateUploader.Status.IN_PROGRESS;

int bytes = 0;
byte[] block = new byte[CHUNK_SIZE];
boolean done = false;
while (!done && status != Status.ABORTED ) {
if ( (bytes = inputStream.read(block, 0, CHUNK_SIZE)) > -1) {
outputStream.write(block, 0, bytes);
totalBytes += bytes;
} else {
done = true;
}
}
status = TemplateUploader.Status.UPLOAD_FINISHED;
s_logger.warn("Nitin in FtpTemplateUploader " +status);
return totalBytes;
} catch (MalformedURLException e) {
status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
errorString = e.getMessage();
s_logger.error("Nitin in FtpTemplateUploader " +errorString);
} catch (IOException e) {
status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
errorString = e.getMessage();
s_logger.error("Nitin in FtpTemplateUploader " +errorString);
}
finally
{
try
{
if (inputStream != null){
inputStream.close();
}
if (outputStream != null){
outputStream.close();
}
} catch (IOException ioe){
s_logger.error(" Caught exception while closing the resources" );
}
if (callback != null) {
callback.uploadComplete(status);
}
}

return 0;
}

@Override
public void run() {
try {
upload(completionCallback);
} catch (Throwable t) {
s_logger.warn("Caught exception during upload "+ t.getMessage(), t);
errorString = "Failed to install: " + t.getMessage();
status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
}

}

@Override
public Status getStatus() {
return status;
}

@Override
public String getUploadError() {
return errorString;
}

@Override
public String getUploadLocalPath() {
return null;
}

@Override
public int getUploadPercent() {
if (templateSizeinBytes == 0) {
return 0;
}
return (int)(100.0*totalBytes/templateSizeinBytes);
}

@Override
public long getUploadTime() {
// TODO Auto-generated method stub
return 0;
}

@Override
public long getUploadedBytes() {
return totalBytes;
}

@Override
public boolean isInited() {
return false;
}

@Override
public void setResume(boolean resume) {
this.resume = resume;

}

@Override
public void setStatus(Status status) {
this.status = status;
}

@Override
public void setUploadError(String string) {
errorString = string;
}

@Override
public boolean stopUpload() {
switch (getStatus()) {
case IN_PROGRESS:
try {
if(outputStream != null) {
outputStream.close();
}
if (inputStream != null){
inputStream.close();
}
} catch (IOException e) {
s_logger.error(" Caught exception while closing the resources" );
}
status = TemplateUploader.Status.ABORTED;
return true;
case UNKNOWN:
case NOT_STARTED:
case RECOVERABLE_ERROR:
case UNRECOVERABLE_ERROR:
case ABORTED:
status = TemplateUploader.Status.ABORTED;
case UPLOAD_FINISHED:
return true;

default:
return true;
}
}

}
77
core/src/com/cloud/storage/template/TemplateUploader.java
Normal file
@@ -0,0 +1,77 @@
package com.cloud.storage.template;

import com.cloud.storage.template.TemplateUploader.UploadCompleteCallback;
import com.cloud.storage.template.TemplateUploader.Status;

public interface TemplateUploader extends Runnable {

/**
* Callback used to notify completion of upload
* @author nitin
*
*/
public interface UploadCompleteCallback {
void uploadComplete( Status status);

}

public static enum Status {UNKNOWN, NOT_STARTED, IN_PROGRESS, ABORTED, UNRECOVERABLE_ERROR, RECOVERABLE_ERROR, UPLOAD_FINISHED, POST_UPLOAD_FINISHED}


/**
* Initiate upload
* @param callback completion callback to be called after upload is complete
* @return bytes uploaded
*/
public long upload(UploadCompleteCallback callback);

/**
* @return true if the upload was stopped (or had already finished)
*/
public boolean stopUpload();

/**
* @return percent of file uploaded
*/
public int getUploadPercent();

/**
* Get the status of the upload
* @return status of upload
*/
public TemplateUploader.Status getStatus();


/**
* Get time taken to upload so far
* @return time in seconds taken to upload
*/
public long getUploadTime();

/**
* Get bytes uploaded
* @return bytes uploaded so far
*/
public long getUploadedBytes();

/**
* Get the error if any
* @return error string if any
*/
public String getUploadError();

/** Get local path of the uploaded file
* @return local path of the file uploaded
*/
public String getUploadLocalPath();

public void setStatus(TemplateUploader.Status status);

public void setUploadError(String string);

public void setResume(boolean resume);

public boolean isInited();

}
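(Illustrative sketch, not part of this commit: one way the TemplateUploader contract
above could be driven, using the FtpTemplateUploader from this changeset. The file
path, FTP URL, size, and the polling loop are assumptions for demonstration only.)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.cloud.storage.template.FtpTemplateUploader;
import com.cloud.storage.template.TemplateUploader;

public class UploaderUsageSketch {
    public static void main(String[] args) throws InterruptedException {
        // Completion callback: fires once with the terminal status.
        TemplateUploader.UploadCompleteCallback callback = new TemplateUploader.UploadCompleteCallback() {
            public void uploadComplete(TemplateUploader.Status status) {
                System.out.println("upload finished with status " + status);
            }
        };
        // Hypothetical source file, FTP target, and expected size (1 GiB).
        TemplateUploader uploader = new FtpTemplateUploader(
                "/mnt/secondary/template.vhd", "ftp://10.91.18.14/template.vhd",
                callback, 1024L * 1024L * 1024L);
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.execute(uploader); // TemplateUploader extends Runnable; run() delegates to upload()
        // Poll progress until the uploader leaves the in-progress states.
        while (uploader.getStatus() == TemplateUploader.Status.NOT_STARTED
                || uploader.getStatus() == TemplateUploader.Status.IN_PROGRESS) {
            System.out.println(uploader.getUploadPercent() + "% (" + uploader.getUploadedBytes() + " bytes)");
            Thread.sleep(1000);
        }
        pool.shutdown();
    }
}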
68
core/src/com/cloud/storage/template/UploadManager.java
Normal file
@@ -0,0 +1,68 @@
package com.cloud.storage.template;

import java.util.List;
import java.util.Map;

import com.cloud.agent.api.storage.UploadAnswer;
import com.cloud.agent.api.storage.UploadCommand;
import com.cloud.storage.StorageResource;
import com.cloud.storage.VMTemplateHostVO;
import com.cloud.storage.Storage.ImageFormat;
import com.cloud.utils.component.Manager;

public interface UploadManager extends Manager {


/**
* Get the status of an upload job
* @param jobId job Id
* @return status of the upload job
*/
public TemplateUploader.Status getUploadStatus(String jobId);

/**
* Get the status of an upload job
* @param jobId job Id
* @return status of the upload job
*/
public VMTemplateHostVO.Status getUploadStatus2(String jobId);

/**
* Get the upload percent of an upload job
* @param jobId job Id
* @return
*/
public int getUploadPct(String jobId);

/**
* Get the upload error if any
* @param jobId job Id
* @return
*/
public String getUploadError(String jobId);

/**
* Get the local path for the upload
* @param jobId job Id
* @return
public String getUploadLocalPath(String jobId);
*/

/** Handle upload commands from the management server
* @param cmd cmd from server
* @return answer representing status of upload.
*/
public UploadAnswer handleUploadCommand(UploadCommand cmd);

public String setRootDir(String rootDir, StorageResource storage);

public String getPublicTemplateRepo();


String uploadPublicTemplate(long id, String url, String name,
ImageFormat format, Long accountId, String descr,
String cksum, String installPathPrefix, String user,
String password, long maxTemplateSizeInBytes);

}
597
core/src/com/cloud/storage/template/UploadManagerImpl.java
Normal file
@ -0,0 +1,597 @@
|
||||
package com.cloud.storage.template;

import java.io.File;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.naming.ConfigurationException;

import org.apache.log4j.Logger;

import com.cloud.agent.api.storage.UploadAnswer;
import com.cloud.agent.api.storage.UploadProgressCommand;
import com.cloud.agent.api.storage.UploadCommand;
import com.cloud.storage.StorageLayer;
import com.cloud.storage.StorageResource;
import com.cloud.storage.VMTemplateHostVO;
import com.cloud.storage.Storage.ImageFormat;
import com.cloud.storage.template.TemplateUploader.UploadCompleteCallback;
import com.cloud.storage.template.TemplateUploader.Status;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.UUID;
import com.cloud.utils.component.Adapters;
import com.cloud.utils.component.ComponentLocator;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.script.Script;

public class UploadManagerImpl implements UploadManager {

    public class Completion implements UploadCompleteCallback {
        private final String jobId;

        public Completion(String jobId) {
            this.jobId = jobId;
        }

        @Override
        public void uploadComplete(Status status) {
            setUploadStatus(jobId, status);
        }
    }
    private static class UploadJob {
        private final TemplateUploader td;
        private final String jobId;
        private final String tmpltName;
        private final ImageFormat format;
        private String tmpltPath;
        private String description;
        private String checksum;
        private Long accountId;
        private String installPathPrefix;
        private long templatesize;
        private long id;

        public UploadJob(TemplateUploader td, String jobId, long id, String tmpltName, ImageFormat format, boolean hvm, Long accountId, String descr, String cksum, String installPathPrefix) {
            super();
            this.td = td;
            this.jobId = jobId;
            this.tmpltName = tmpltName;
            this.format = format;
            this.accountId = accountId;
            this.description = descr;
            this.checksum = cksum;
            this.installPathPrefix = installPathPrefix;
            this.templatesize = 0;
            this.id = id;
        }

        public TemplateUploader getTd() {
            return td;
        }

        public String getDescription() {
            return description;
        }

        public String getChecksum() {
            return checksum;
        }

        public UploadJob(TemplateUploader td, String jobId, UploadCommand cmd) {
            this.td = td;
            this.jobId = jobId;
            this.tmpltName = cmd.getName();
            this.format = cmd.getFormat();
        }

        public TemplateUploader getTemplateUploader() {
            return td;
        }

        public String getJobId() {
            return jobId;
        }

        public String getTmpltName() {
            return tmpltName;
        }

        public ImageFormat getFormat() {
            return format;
        }

        public Long getAccountId() {
            return accountId;
        }

        public long getId() {
            return id;
        }

        public void setTmpltPath(String tmpltPath) {
            this.tmpltPath = tmpltPath;
        }

        public String getTmpltPath() {
            return tmpltPath;
        }

        public String getInstallPathPrefix() {
            return installPathPrefix;
        }

        public void cleanup() {
        }

        public void setTemplatesize(long templatesize) {
            this.templatesize = templatesize;
        }

        public long getTemplatesize() {
            return templatesize;
        }
    }
    public static final Logger s_logger = Logger.getLogger(UploadManagerImpl.class);
    private ExecutorService threadPool;
    private final Map<String, UploadJob> jobs = new ConcurrentHashMap<String, UploadJob>();
    private String parentDir;
    private Adapters<Processor> _processors;
    private String publicTemplateRepo;
    private StorageLayer _storage;
    private int installTimeoutPerGig;
    private boolean _sslCopy;
    private String _name;
    private boolean hvm;

    @Override
    public String uploadPublicTemplate(long id, String url, String name,
            ImageFormat format, Long accountId, String descr,
            String cksum, String installPathPrefix, String userName,
            String passwd, long templateSizeInBytes) {

        UUID uuid = new UUID();
        String jobId = uuid.toString();

        String completePath = parentDir + File.separator + installPathPrefix;
        s_logger.debug("Starting upload from " + completePath);

        URI uri;
        try {
            uri = new URI(url);
        } catch (URISyntaxException e) {
            s_logger.error("URI is incorrect: " + url);
            throw new CloudRuntimeException("URI is incorrect: " + url);
        }
        TemplateUploader tu;
        if ((uri != null) && (uri.getScheme() != null)) {
            if (uri.getScheme().equalsIgnoreCase("ftp")) {
                tu = new FtpTemplateUploader(completePath, url, new Completion(jobId), templateSizeInBytes);
            } else {
                s_logger.error("Scheme is not supported " + url);
                throw new CloudRuntimeException("Scheme is not supported " + url);
            }
        } else {
            s_logger.error("Unable to upload from URL: " + url);
            throw new CloudRuntimeException("Unable to upload from URL: " + url);
        }
        UploadJob uj = new UploadJob(tu, jobId, id, name, format, hvm, accountId, descr, cksum, installPathPrefix);
        jobs.put(jobId, uj);
        threadPool.execute(tu);

        return jobId;
    }
    @Override
    public String getUploadError(String jobId) {
        UploadJob uj = jobs.get(jobId);
        if (uj != null) {
            return uj.getTemplateUploader().getUploadError();
        }
        return null;
    }

    @Override
    public int getUploadPct(String jobId) {
        UploadJob uj = jobs.get(jobId);
        if (uj != null) {
            return uj.getTemplateUploader().getUploadPercent();
        }
        return 0;
    }

    @Override
    public Status getUploadStatus(String jobId) {
        UploadJob job = jobs.get(jobId);
        if (job != null) {
            TemplateUploader tu = job.getTemplateUploader();
            if (tu != null) {
                return tu.getStatus();
            }
        }
        return Status.UNKNOWN;
    }

    public static VMTemplateHostVO.Status convertStatus(Status tds) {
        switch (tds) {
        case ABORTED:
            return VMTemplateHostVO.Status.NOT_UPLOADED;
        case UPLOAD_FINISHED:
            return VMTemplateHostVO.Status.UPLOAD_IN_PROGRESS;
        case IN_PROGRESS:
            return VMTemplateHostVO.Status.UPLOAD_IN_PROGRESS;
        case NOT_STARTED:
            return VMTemplateHostVO.Status.NOT_UPLOADED;
        case RECOVERABLE_ERROR:
            return VMTemplateHostVO.Status.NOT_UPLOADED;
        case UNKNOWN:
            return VMTemplateHostVO.Status.UNKNOWN;
        case UNRECOVERABLE_ERROR:
            return VMTemplateHostVO.Status.UPLOAD_ERROR;
        case POST_UPLOAD_FINISHED:
            return VMTemplateHostVO.Status.UPLOADED;
        default:
            return VMTemplateHostVO.Status.UNKNOWN;
        }
    }
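    // Illustrative note, not part of the patch: convertStatus() is what lets
    // the management server see uploader states in template-host terms. For
    // example (run with -ea; assumes these classes on the classpath):
    //
    //   // a recoverable error maps to NOT_UPLOADED so the job can be retried
    //   assert convertStatus(Status.RECOVERABLE_ERROR) == VMTemplateHostVO.Status.NOT_UPLOADED;
    //   // only the post-upload step marks the template as fully UPLOADED
    //   assert convertStatus(Status.POST_UPLOAD_FINISHED) == VMTemplateHostVO.Status.UPLOADED;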
    @Override
    public com.cloud.storage.VMTemplateHostVO.Status getUploadStatus2(String jobId) {
        return convertStatus(getUploadStatus(jobId));
    }

    @Override
    public String getPublicTemplateRepo() {
        // TODO Auto-generated method stub
        return null;
    }

    private UploadAnswer handleUploadProgressCmd(UploadProgressCommand cmd) {
        String jobId = cmd.getJobId();
        UploadAnswer answer;
        UploadJob uj = null;
        if (jobId != null)
            uj = jobs.get(jobId);
        if (uj == null) {
            return new UploadAnswer(null, 0, "Cannot find job", com.cloud.storage.VMTemplateHostVO.Status.UNKNOWN, "", "", 0);
        }
        TemplateUploader td = uj.getTemplateUploader();
        switch (cmd.getRequest()) {
        case GET_STATUS:
            break;
        case ABORT:
            td.stopUpload();
            sleep();
            break;
        /*case RESTART:
            td.stopUpload();
            sleep();
            threadPool.execute(td);
            break;*/
        case PURGE:
            td.stopUpload();
            answer = new UploadAnswer(jobId, getUploadPct(jobId), getUploadError(jobId), getUploadStatus2(jobId), getUploadLocalPath(jobId), getInstallPath(jobId), getUploadTemplateSize(jobId));
            jobs.remove(jobId);
            return answer;
        default:
            break; // TODO
        }
        return new UploadAnswer(jobId, getUploadPct(jobId), getUploadError(jobId), getUploadStatus2(jobId), getUploadLocalPath(jobId), getInstallPath(jobId),
                getUploadTemplateSize(jobId));
    }

    @Override
    public UploadAnswer handleUploadCommand(UploadCommand cmd) {
        s_logger.warn("Handling the upload " + cmd.getInstallPath() + " " + cmd.getId());
        if (cmd instanceof UploadProgressCommand) {
            return handleUploadProgressCmd((UploadProgressCommand) cmd);
        }
        /*
        if (cmd.getUrl() == null) {
            return new UploadAnswer(null, 0, "Template is corrupted on storage due to an invalid url, cannot upload", com.cloud.storage.VMTemplateStorageResourceAssoc.Status.UPLOAD_ERROR, "", "", 0);
        }

        if (cmd.getName() == null) {
            return new UploadAnswer(null, 0, "Invalid Name", com.cloud.storage.VMTemplateStorageResourceAssoc.Status.UPLOAD_ERROR, "", "", 0);
        }*/

        // String installPathPrefix = null;
        // installPathPrefix = publicTemplateRepo;

        String user = null;
        String password = null;
        String jobId = uploadPublicTemplate(cmd.getId(), cmd.getUrl(), cmd.getName(),
                cmd.getFormat(), cmd.getAccountId(), cmd.getDescription(),
                cmd.getChecksum(), cmd.getInstallPath(), user, password,
                cmd.getTemplateSizeInBytes());
        sleep();
        if (jobId == null) {
            return new UploadAnswer(null, 0, "Internal Error", com.cloud.storage.VMTemplateStorageResourceAssoc.Status.UPLOAD_ERROR, "", "", 0);
        }
        return new UploadAnswer(jobId, getUploadPct(jobId), getUploadError(jobId), getUploadStatus2(jobId), getUploadLocalPath(jobId), getInstallPath(jobId),
                getUploadTemplateSize(jobId));
    }
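    // Illustrative call flow, not part of the patch: the storage resource is
    // expected to hand each UploadCommand to this manager. A hypothetical
    // dispatch hook (name assumed, not the real agent plumbing) would be:
    //
    //   public UploadAnswer executeRequest(UploadCommand cmd) {
    //       // kicks off (or reports on) the FTP upload and returns a
    //       // snapshot of pct/error/status for the management server
    //       return uploadManager.handleUploadCommand(cmd);
    //   }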
    private String getInstallPath(String jobId) {
        // TODO Auto-generated method stub
        return null;
    }

    private String getUploadLocalPath(String jobId) {
        // TODO Auto-generated method stub
        return null;
    }

    private long getUploadTemplateSize(String jobId) {
        UploadJob uj = jobs.get(jobId);
        if (uj != null) {
            return uj.getTemplatesize();
        }
        return 0;
    }
    @Override
    public String setRootDir(String rootDir, StorageResource storage) {
        this.publicTemplateRepo = rootDir + publicTemplateRepo;
        return null;
    }

    @Override
    public boolean configure(String name, Map<String, Object> params)
            throws ConfigurationException {
        _name = name;

        String value = null;

        _storage = (StorageLayer) params.get(StorageLayer.InstanceConfigKey);
        if (_storage == null) {
            value = (String) params.get(StorageLayer.ClassConfigKey);
            if (value == null) {
                throw new ConfigurationException("Unable to find the storage layer");
            }

            Class<StorageLayer> clazz;
            try {
                clazz = (Class<StorageLayer>) Class.forName(value);
            } catch (ClassNotFoundException e) {
                throw new ConfigurationException("Unable to instantiate " + value);
            }
            _storage = ComponentLocator.inject(clazz);
        }
        String useSsl = (String) params.get("sslcopy");
        if (useSsl != null) {
            _sslCopy = Boolean.parseBoolean(useSsl);
        }
        configureFolders(name, params);
        String inSystemVM = (String) params.get("secondary.storage.vm");
        if (inSystemVM != null && "true".equalsIgnoreCase(inSystemVM)) {
            s_logger.info("UploadManager: starting additional services since we are inside system vm");
            startAdditionalServices();
            blockOutgoingOnPrivate();
        }

        value = (String) params.get("install.timeout.pergig");
        this.installTimeoutPerGig = NumbersUtil.parseInt(value, 15 * 60) * 1000;

        value = (String) params.get("install.numthreads");
        final int numInstallThreads = NumbersUtil.parseInt(value, 10);

        String scriptsDir = (String) params.get("template.scripts.dir");
        if (scriptsDir == null) {
            scriptsDir = "scripts/storage/secondary";
        }

        List<Processor> processors = new ArrayList<Processor>();
        _processors = new Adapters<Processor>("processors", processors);
        Processor processor = new VhdProcessor();

        processor.configure("VHD Processor", params);
        processors.add(processor);

        processor = new IsoProcessor();
        processor.configure("ISO Processor", params);
        processors.add(processor);

        processor = new QCOW2Processor();
        processor.configure("QCOW2 Processor", params);
        processors.add(processor);
        // Add more processors here.
        threadPool = Executors.newFixedThreadPool(numInstallThreads);
        return true;
    }
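    // Illustrative wiring sketch, not part of the patch; the keys mirror the
    // params.get(...) calls in configure() above, and the values shown are
    // assumed defaults, not a shipped configuration:
    //
    //   Map<String, Object> params = new HashMap<String, Object>();
    //   params.put(StorageLayer.InstanceConfigKey, storage); // pre-built StorageLayer
    //   params.put("template.parent", "/mnt/secondary");     // required by configureFolders()
    //   params.put("sslcopy", "false");
    //   params.put("secondary.storage.vm", "false");         // skip iptables/httpd setup
    //   params.put("install.timeout.pergig", "900");         // seconds per gigabyte
    //   params.put("install.numthreads", "10");              // upload thread pool size
    //
    //   UploadManager mgr = new UploadManagerImpl();
    //   mgr.configure("UploadManager", params);
    //   mgr.start();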
    protected void configureFolders(String name, Map<String, Object> params) throws ConfigurationException {
        parentDir = (String) params.get("template.parent");
        if (parentDir == null) {
            throw new ConfigurationException("Unable to find the parent root for the templates");
        }

        String value = (String) params.get("public.templates.root.dir");
        if (value == null) {
            value = TemplateConstants.DEFAULT_TMPLT_ROOT_DIR;
        }

        if (value.startsWith(File.separator)) {
            publicTemplateRepo = value;
        } else {
            publicTemplateRepo = parentDir + File.separator + value;
        }

        if (!publicTemplateRepo.endsWith(File.separator)) {
            publicTemplateRepo += File.separator;
        }

        publicTemplateRepo += TemplateConstants.DEFAULT_TMPLT_FIRST_LEVEL_DIR;

        if (!_storage.mkdirs(publicTemplateRepo)) {
            throw new ConfigurationException("Unable to create public templates directory");
        }
    }

    @Override
    public String getName() {
        return _name;
    }

    @Override
    public boolean start() {
        return true;
    }

    @Override
    public boolean stop() {
        return true;
    }
    /**
     * Get notified of a change of job status. Executed in the context of the uploader thread.
     *
     * @param jobId
     *            the id of the job
     * @param status
     *            the status of the job
     */
    public void setUploadStatus(String jobId, Status status) {
        UploadJob uj = jobs.get(jobId);
        if (uj == null) {
            s_logger.warn("setUploadStatus for jobId: " + jobId + ", status=" + status + " no job found");
            return;
        }
        TemplateUploader tu = uj.getTemplateUploader();
        s_logger.warn("Upload Completion for jobId: " + jobId + ", status=" + status);
        s_logger.warn("UploadedBytes=" + tu.getUploadedBytes() + ", error=" + tu.getUploadError() + ", pct=" + tu.getUploadPercent());

        switch (status) {
        case ABORTED:
        case NOT_STARTED:
        case UNRECOVERABLE_ERROR:
            // TODO
            uj.cleanup();
            break;
        case UNKNOWN:
            return;
        case IN_PROGRESS:
            s_logger.info("Resuming jobId: " + jobId + ", status=" + status);
            tu.setResume(true);
            threadPool.execute(tu);
            break;
        case RECOVERABLE_ERROR:
            threadPool.execute(tu);
            break;
        case UPLOAD_FINISHED:
            tu.setUploadError("Upload success, starting install ");
            String result = postUpload(jobId);
            if (result != null) {
                s_logger.error("Failed post upload script: " + result);
                tu.setStatus(Status.UNRECOVERABLE_ERROR);
                tu.setUploadError("Failed post upload script: " + result);
            } else {
                s_logger.warn("Upload completed successfully at " + new SimpleDateFormat().format(new Date()));
                tu.setStatus(Status.POST_UPLOAD_FINISHED);
                tu.setUploadError("Upload completed successfully at " + new SimpleDateFormat().format(new Date()));
            }
            uj.cleanup();
            break;
        default:
            break;
        }
    }
    private String postUpload(String jobId) {
        return null;
    }

    private void sleep() {
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            // ignore
        }
    }
    private void blockOutgoingOnPrivate() {
        Script command = new Script("/bin/bash", s_logger);
        String intf = "eth1";
        command.add("-c");
        command.add("iptables -A OUTPUT -o " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "80" + " -j REJECT;" +
                "iptables -A OUTPUT -o " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "443" + " -j REJECT;");

        String result = command.execute();
        if (result != null) {
            s_logger.warn("Error in blocking outgoing to port 80/443 err=" + result);
            return;
        }
    }

    private void startAdditionalServices() {

        Script command = new Script("/bin/bash", s_logger);
        command.add("-c");
        command.add("service httpd stop ");
        String result = command.execute();
        if (result != null) {
            s_logger.warn("Error in stopping httpd service err=" + result);
        }
        String port = Integer.toString(TemplateConstants.DEFAULT_TMPLT_COPY_PORT);
        String intf = TemplateConstants.DEFAULT_TMPLT_COPY_INTF;

        command = new Script("/bin/bash", s_logger);
        command.add("-c");
        command.add("iptables -D INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + port + " -j DROP;" +
                "iptables -D INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + port + " -j HTTP;" +
                "iptables -D INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "443" + " -j DROP;" +
                "iptables -D INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "443" + " -j HTTP;" +
                "iptables -F HTTP;" +
                "iptables -X HTTP;" +
                "iptables -N HTTP;" +
                "iptables -I INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + port + " -j DROP;" +
                "iptables -I INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "443" + " -j DROP;" +
                "iptables -I INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + port + " -j HTTP;" +
                "iptables -I INPUT -i " + intf + " -p tcp -m state --state NEW -m tcp --dport " + "443" + " -j HTTP;");

        result = command.execute();
        if (result != null) {
            s_logger.warn("Error in opening up httpd port err=" + result);
            return;
        }

        command = new Script("/bin/bash", s_logger);
        command.add("-c");
        command.add("service httpd start ");
        result = command.execute();
        if (result != null) {
            s_logger.warn("Error in starting httpd service err=" + result);
            return;
        }
        command = new Script("mkdir", s_logger);
        command.add("-p");
        command.add("/var/www/html/copy/template");
        result = command.execute();
        if (result != null) {
            s_logger.warn("Error in creating directory =" + result);
            return;
        }

        command = new Script("/bin/bash", s_logger);
        command.add("-c");
        command.add("ln -sf " + publicTemplateRepo + " /var/www/html/copy/template");
        result = command.execute();
        if (result != null) {
            s_logger.warn("Error in linking err=" + result);
            return;
        }
    }
}
@@ -116,6 +116,8 @@ public class UserVmDaoImpl extends GenericDaoBase<UserVmVO, Long> implements Use
        DestroySearch.and("updateTime", DestroySearch.entity().getUpdateTime(), SearchCriteria.Op.LT);
        DestroySearch.done();

        _updateTimeAttr = _allAttributes.get("updateTime");
        assert _updateTimeAttr != null : "Couldn't get this updateTime attribute";
    }
debian/cloud-agent-scripts.install (vendored, 2 lines changed)
@@ -12,8 +12,6 @@
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/id_rsa.cloud
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/make_migratable.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/network_info.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/networkUsage.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/prepsystemvm.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/setup_iscsi.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/setupxenserver.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/vmops
debian/cloud-client.postinst (vendored, 2 lines changed)
@@ -17,8 +17,6 @@ case "$1" in
        chgrp cloud $i
    done

    test -f /var/lib/cloud/management/.ssh/id_rsa || su - cloud -c 'yes "" | ssh-keygen -t rsa -q -N ""' < /dev/null

    for i in /etc/cloud/management/db.properties
    do
        chmod 0640 $i
debian/cloud-setup.install (vendored, 3 lines changed)
@@ -12,3 +12,6 @@
/usr/share/cloud/setup/index-212to213.sql
/usr/share/cloud/setup/postprocess-20to21.sql
/usr/share/cloud/setup/schema-20to21.sql
/usr/share/cloud/setup/schema-level.sql
/usr/share/cloud/setup/schema-21to22.sql
/usr/share/cloud/setup/data-21to22.sql
debian/control (vendored, 4 lines changed)
@@ -2,7 +2,7 @@ Source: cloud
Section: libs
Priority: extra
Maintainer: Manuel Amador (Rudd-O) <manuel@cloud.com>
Build-Depends: debhelper (>= 7), openjdk-6-jdk, tomcat6, libws-commons-util-java, libcommons-dbcp-java, libcommons-collections-java, libcommons-httpclient-java, libservlet2.5-java, genisoimage
Build-Depends: debhelper (>= 7), openjdk-6-jdk, tomcat6, libws-commons-util-java, libcommons-dbcp-java, libcommons-collections-java, libcommons-httpclient-java, libservlet2.5-java, genisoimage, python-mysqldb
Standards-Version: 3.8.1
Homepage: http://techcenter.cloud.com/software/cloudstack

@@ -128,7 +128,7 @@ Provides: vmops-setup
Conflicts: vmops-setup
Replaces: vmops-setup
Architecture: any
Depends: openjdk-6-jre, python, cloud-utils (= ${source:Version}), mysql-client, cloud-deps (= ${source:Version}), cloud-server (= ${source:Version}), cloud-python (= ${source:Version}), python-mysqldb
Depends: openjdk-6-jre, python, cloud-utils (= ${source:Version}), cloud-deps (= ${source:Version}), cloud-server (= ${source:Version}), cloud-python (= ${source:Version}), python-mysqldb
Description: Cloud.com client
The Cloud.com setup tools let you set up your Management Server and Usage Server.
debian/rules (vendored, 2 lines changed)
@@ -91,7 +91,7 @@ binary-common:
	dh_testdir
	dh_testroot
	dh_installchangelogs
	dh_installdocs -A README INSTALL HACKING README.html
	dh_installdocs -A README.html
#	dh_installexamples
#	dh_installmenu
#	dh_installdebconf
(deleted file)
@@ -1,223 +0,0 @@
#! /bin/bash
# chkconfig: 35 09 90
# description: pre-boot configuration using boot line parameters
# This file exists in /etc/init.d/

replace_in_file() {
  local filename=$1
  local keyname=$2
  local value=$3
  sed -i /$keyname=/d $filename
  echo "$keyname=$value" >> $filename
  return $?
}

setup_interface() {
  local intfnum=$1
  local ip=$2
  local mask=$3

  cfg=/etc/sysconfig/network-scripts/ifcfg-eth${intfnum}
  replace_in_file ${cfg} IPADDR ${ip}
  replace_in_file ${cfg} NETMASK ${mask}
  replace_in_file ${cfg} BOOTPROTO STATIC
  if [ "$ip" == "0.0.0.0" ]
  then
    replace_in_file ${cfg} ONBOOT No
  else
    replace_in_file ${cfg} ONBOOT Yes
  fi
}

setup_common() {
  setup_interface "0" $ETH0_IP $ETH0_MASK
  setup_interface "1" $ETH1_IP $ETH1_MASK
  setup_interface "2" $ETH2_IP $ETH2_MASK

  replace_in_file /etc/sysconfig/network GATEWAY $GW
  replace_in_file /etc/sysconfig/network HOSTNAME $NAME
  echo "NOZEROCONF=yes" >> /etc/sysconfig/network
  hostname $NAME

  # Nameserver
  if [ -n "$NS1" ]
  then
    echo "nameserver $NS1" > /etc/dnsmasq-resolv.conf
    echo "nameserver $NS1" > /etc/resolv.conf
  fi

  if [ -n "$NS2" ]
  then
    echo "nameserver $NS2" >> /etc/dnsmasq-resolv.conf
    echo "nameserver $NS2" >> /etc/resolv.conf
  fi
  if [[ -n "$MGMTNET" && -n "$LOCAL_GW" ]]
  then
    echo "$MGMTNET via $LOCAL_GW dev eth1" > /etc/sysconfig/network-scripts/route-eth1
  fi
}

setup_router() {
  setup_common
  [ -z $DHCP_RANGE ] && DHCP_RANGE=$ETH0_IP
  if [ -n "$DOMAIN" ]
  then
    # send domain name to dhcp clients
    sed -i s/[#]*dhcp-option=15.*$/dhcp-option=15,\"$DOMAIN\"/ /etc/dnsmasq.conf
    # DNS server will append $DOMAIN to local queries
    sed -r -i s/^[#]?domain=.*$/domain=$DOMAIN/ /etc/dnsmasq.conf
    # answer all local domain queries
    sed -i -e "s/^[#]*local=.*$/local=\/$DOMAIN\//" /etc/dnsmasq.conf
  fi
  sed -i -e "s/^dhcp-range=.*$/dhcp-range=$DHCP_RANGE,static/" /etc/dnsmasq.conf
  sed -i -e "s/^[#]*listen-address=.*$/listen-address=$ETH0_IP/" /etc/dnsmasq.conf
  sed -i /gateway/d /etc/hosts
  echo "$ETH0_IP $NAME" >> /etc/hosts
  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*$/Listen $ETH0_IP:80/" /etc/httpd/conf/httpd.conf
  [ -f /etc/httpd/conf.d/ssl.conf ] && mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
  [ -f /etc/ssh/sshd_config ] && sed -i -e "s/^[#]*ListenAddress.*$/ListenAddress $ETH1_IP/" /etc/ssh/sshd_config
}

setup_dhcpsrvr() {
  setup_common
  [ -z $DHCP_RANGE ] && DHCP_RANGE=$ETH0_IP
  if [ -n "$DOMAIN" ]
  then
    # send domain name to dhcp clients
    sed -i s/[#]*dhcp-option=15.*$/dhcp-option=15,\"$DOMAIN\"/ /etc/dnsmasq.conf
    # DNS server will append $DOMAIN to local queries
    sed -r -i s/^[#]?domain=.*$/domain=$DOMAIN/ /etc/dnsmasq.conf
    # answer all local domain queries
    sed -i -e "s/^[#]*local=.*$/local=\/$DOMAIN\//" /etc/dnsmasq.conf
  else
    # delete domain option
    sed -i /^dhcp-option=15.*$/d /etc/dnsmasq.conf
    sed -i /^domain=.*$/d /etc/dnsmasq.conf
    sed -i -e "/^local=.*$/d" /etc/dnsmasq.conf
  fi
  sed -i -e "s/^dhcp-range=.*$/dhcp-range=$DHCP_RANGE,static/" /etc/dnsmasq.conf
  sed -i -e "s/^[#]*dhcp-option=option:router.*$/dhcp-option=option:router,$GW/" /etc/dnsmasq.conf
  echo "dhcp-option=6,$NS1,$NS2" >> /etc/dnsmasq.conf
  sed -i /gateway/d /etc/hosts
  echo "$ETH0_IP $NAME" >> /etc/hosts
  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*$/Listen $ETH0_IP:80/" /etc/httpd/conf/httpd.conf
  [ -f /etc/httpd/conf.d/ssl.conf ] && mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
}

setup_secstorage() {
  setup_common
  sed -i /gateway/d /etc/hosts
  public_ip=$ETH2_IP
  [ "$ETH2_IP" == "0.0.0.0" ] && public_ip=$ETH1_IP
  echo "$public_ip $NAME" >> /etc/hosts
  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*:80$/Listen $public_ip:80/" /etc/httpd/conf/httpd.conf
  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*:443$/Listen $public_ip:443/" /etc/httpd/conf/httpd.conf
}

setup_console_proxy() {
  setup_common
  public_ip=$ETH2_IP
  [ "$ETH2_IP" == "0.0.0.0" ] && public_ip=$ETH1_IP
  sed -i /gateway/d /etc/hosts
  echo "$public_ip $NAME" >> /etc/hosts
}

if [ -f /mnt/cmdline ]
then
  CMDLINE=$(cat /mnt/cmdline)
else
  CMDLINE=$(cat /proc/cmdline)
fi

TYPE="router"

for i in $CMDLINE
do
  # search for foo=bar pattern and cut out foo
  KEY=$(echo $i | cut -d= -f1)
  VALUE=$(echo $i | cut -d= -f2)
  case $KEY in
    eth0ip)
      ETH0_IP=$VALUE
      ;;
    eth1ip)
      ETH1_IP=$VALUE
      ;;
    eth2ip)
      ETH2_IP=$VALUE
      ;;
    gateway)
      GW=$VALUE
      ;;
    eth0mask)
      ETH0_MASK=$VALUE
      ;;
    eth1mask)
      ETH1_MASK=$VALUE
      ;;
    eth2mask)
      ETH2_MASK=$VALUE
      ;;
    dns1)
      NS1=$VALUE
      ;;
    dns2)
      NS2=$VALUE
      ;;
    domain)
      DOMAIN=$VALUE
      ;;
    mgmtcidr)
      MGMTNET=$VALUE
      ;;
    localgw)
      LOCAL_GW=$VALUE
      ;;
    template)
      TEMPLATE=$VALUE
      ;;
    name)
      NAME=$VALUE
      ;;
    dhcprange)
      DHCP_RANGE=$(echo $VALUE | tr ':' ',')
      ;;
    type)
      TYPE=$VALUE
      ;;
  esac
done

case $TYPE in
  router)
    [ "$NAME" == "" ] && NAME=router
    setup_router
    ;;
  dhcpsrvr)
    [ "$NAME" == "" ] && NAME=dhcpsrvr
    setup_dhcpsrvr
    ;;
  secstorage)
    [ "$NAME" == "" ] && NAME=secstorage
    setup_secstorage;
    ;;
  consoleproxy)
    [ "$NAME" == "" ] && NAME=consoleproxy
    setup_console_proxy;
    ;;
esac

if [ ! -d /root/.ssh ]
then
  mkdir /root/.ssh
  chmod 700 /root/.ssh
fi
if [ -f /mnt/id_rsa.pub ]
then
  cat /mnt/id_rsa.pub > /root/.ssh/authorized_keys
  chmod 600 /root/.ssh/authorized_keys
fi
(deleted file)
@@ -1,33 +0,0 @@
# Generated by iptables-save v1.3.8 on Thu Oct 1 18:16:05 2009
# @VERSION@
*nat
:PREROUTING ACCEPT [499:70846]
:POSTROUTING ACCEPT [1:85]
:OUTPUT ACCEPT [1:85]
COMMIT
# Completed on Thu Oct 1 18:16:06 2009
# Generated by iptables-save v1.3.8 on Thu Oct 1 18:16:06 2009
*filter
#:INPUT DROP [288:42467]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [65:9665]
-A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 3922 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 8080 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8001 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 8001 -j ACCEPT
-A INPUT -i eth2 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A INPUT -i eth2 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A FORWARD -i eth0 -o eth1 -j ACCEPT
-A FORWARD -i eth0 -o eth2 -j ACCEPT
-A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eth2 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Thu Oct 1 18:16:06 2009
(deleted file)
@@ -1,48 +0,0 @@
# Load additional iptables modules (nat helpers)
#   Default: -none-
# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
# are loaded after the firewall rules are applied. Options for the helpers are
# stored in /etc/modprobe.conf.
IPTABLES_MODULES="ip_conntrack_ftp nf_nat_ftp"

# Unload modules on restart and stop
#   Value: yes|no, default: yes
# This option has to be 'yes' to get to a sane state for a firewall
# restart or stop. Only set to 'no' if there are problems unloading netfilter
# modules.
IPTABLES_MODULES_UNLOAD="yes"

# Save current firewall rules on stop.
#   Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets stopped
# (e.g. on system shutdown).
IPTABLES_SAVE_ON_STOP="no"

# Save current firewall rules on restart.
#   Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets
# restarted.
IPTABLES_SAVE_ON_RESTART="no"

# Save (and restore) rule and chain counter.
#   Value: yes|no, default: no
# Save counters for rules and chains to /etc/sysconfig/iptables if
# 'service iptables save' is called or on stop or restart if SAVE_ON_STOP or
# SAVE_ON_RESTART is enabled.
IPTABLES_SAVE_COUNTER="no"

# Numeric status output
#   Value: yes|no, default: yes
# Print IP addresses and port numbers in numeric format in the status output.
IPTABLES_STATUS_NUMERIC="yes"

# Verbose status output
#   Value: yes|no, default: yes
# Print info about the number of packets and bytes plus the "input-" and
# "outputdevice" in the status output.
IPTABLES_STATUS_VERBOSE="no"

# Status output with numbered lines
#   Value: yes|no, default: yes
# Print a counter/number for every rule in the status output.
IPTABLES_STATUS_LINENUMBERS="yes"
@@ -74,13 +74,15 @@ resolv-file=/etc/dnsmasq-resolv.conf
interface=eth0
# Or you can specify which interface _not_ to listen on
except-interface=eth1
except-interface=eth2
# Or which to listen on by address (remember to include 127.0.0.1 if
# you use this.)
#listen-address=
# If you want dnsmasq to provide only DNS service on an interface,
# configure it as shown above, and then use the following line to
# disable DHCP on it.
#no-dhcp-interface=eth1
no-dhcp-interface=eth1
no-dhcp-interface=eth2

# On systems which support it, dnsmasq binds the wildcard address,
# even when it is listening on only some interfaces. It then discards

@@ -109,7 +111,7 @@ expand-hosts
# 2) Sets the "domain" DHCP option thereby potentially setting the
#    domain of all systems configured by DHCP
# 3) Provides the domain part for "expand-hosts"
domain=foo.com
#domain=foo.com

# Uncomment this to enable the integrated DHCP server, you need
# to supply the range of addresses available for lease and optionally

@@ -248,7 +250,7 @@ dhcp-hostsfile=/etc/dhcphosts.txt
#dhcp-option=27,1

# Set the domain
dhcp-option=15,"foo.com"
#dhcp-option=15,"foo.com"

# Send the etherboot magic flag and then etherboot options (a string).
#dhcp-option=128,e4:45:74:68:00:00
@@ -26,7 +26,14 @@ setup_console_proxy() {
  echo "$public_ip $NAME" >> /etc/hosts
}

CMDLINE=$(cat /proc/cmdline)

if [ -f /mnt/cmdline ]
then
  CMDLINE=$(cat /mnt/cmdline)
else
  CMDLINE=$(cat /proc/cmdline)
fi

TYPE="router"
BOOTPROTO="static"

@@ -118,11 +118,12 @@ setup_dhcpsrvr() {
  sed -i -e "s/^dhcp-range=.*$/dhcp-range=$DHCP_RANGE,static/" /etc/dnsmasq.conf
  sed -i -e "s/^[#]*dhcp-option=option:router.*$/dhcp-option=option:router,$GW/" /etc/dnsmasq.conf
  #for now set up ourself as the dns server as well
  #echo "dhcp-option=6,$NS1,$NS2" >> /etc/dnsmasq.conf
  sed -i s/[#]*dhcp-option=6.*$/dhcp-option=6,\"$NS1\",\"$NS2\"/ /etc/dnsmasq.conf
  sed -i /gateway/d /etc/hosts
  echo "$ETH0_IP $NAME" >> /etc/hosts
  [ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*$/Listen $ETH0_IP:80/" /etc/httpd/conf/httpd.conf
  [ -f /etc/httpd/conf.d/ssl.conf ] && mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
  [ -f /etc/ssh/sshd_config ] && sed -i -e "s/^[#]*ListenAddress.*$/ListenAddress $ETH1_IP/" /etc/ssh/sshd_config
}

setup_secstorage() {

@@ -143,7 +144,25 @@ setup_console_proxy() {
  echo "$public_ip $NAME" >> /etc/hosts
}

CMDLINE=$(cat /proc/cmdline)
if [ -f /mnt/cmdline ]
then
  CMDLINE=$(cat /mnt/cmdline)
else
  CMDLINE=$(cat /proc/cmdline)
fi

if [ ! -d /root/.ssh ]
then
  mkdir /root/.ssh
  chmod 700 /root/.ssh
fi
if [ -f /mnt/id_rsa.pub ]
then
  cat /mnt/id_rsa.pub > /root/.ssh/authorized_keys
  chmod 600 /root/.ssh/authorized_keys
fi

TYPE="router"
BOOTPROTO="static"
@@ -4,26 +4,15 @@ bld.substitute("*/**",name="patchsubst")

for virttech in Utils.to_list(bld.path.ant_glob("*",dir=True)):
    if virttech in ["shared","wscript_build"]: continue
    patchfiles = bld.path.ant_glob('%s/** shared/**'%virttech,src=True,bld=True,dir=False,flat=True)
    patchfiles = bld.path.ant_glob('shared/** %s/**'%virttech,src=False,bld=True,dir=False,flat=True)
    tgen = bld(
        features = 'tar',#Utils.tar_up,
        source = patchfiles,
        target = '%s-patch.tgz'%virttech,
        name = '%s-patch_tgz'%virttech,
        root = "patches/%s"%virttech,
        root = os.path.join("patches",virttech),
        rename = lambda x: re.sub(".subst$","",x),
        after = 'patchsubst',
    )
    bld.process_after(tgen)
    if virttech != "xenserver":
        # xenserver uses the patch.tgz file later to make an ISO, so we do not need to install it
        bld.install_as("${AGENTLIBDIR}/scripts/vm/hypervisor/%s/patch.tgz"%virttech, "%s-patch.tgz"%virttech)

tgen = bld(
    rule = 'cp ${SRC} ${TGT}',
    source = 'xenserver-patch.tgz',
    target = 'patch.tgz',
    after = 'xenserver-patch_tgz',
    name = 'patch_tgz'
)
bld.process_after(tgen)
(deleted file; same iptables-config content as the deletion above, shipped under a second path)
@@ -1,48 +0,0 @@
# Load additional iptables modules (nat helpers)
#   Default: -none-
# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
# are loaded after the firewall rules are applied. Options for the helpers are
# stored in /etc/modprobe.conf.
IPTABLES_MODULES="ip_conntrack_ftp nf_nat_ftp"

# Unload modules on restart and stop
#   Value: yes|no, default: yes
# This option has to be 'yes' to get to a sane state for a firewall
# restart or stop. Only set to 'no' if there are problems unloading netfilter
# modules.
IPTABLES_MODULES_UNLOAD="yes"

# Save current firewall rules on stop.
#   Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets stopped
# (e.g. on system shutdown).
IPTABLES_SAVE_ON_STOP="no"

# Save current firewall rules on restart.
#   Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets
# restarted.
IPTABLES_SAVE_ON_RESTART="no"

# Save (and restore) rule and chain counter.
#   Value: yes|no, default: no
# Save counters for rules and chains to /etc/sysconfig/iptables if
# 'service iptables save' is called or on stop or restart if SAVE_ON_STOP or
# SAVE_ON_RESTART is enabled.
IPTABLES_SAVE_COUNTER="no"

# Numeric status output
#   Value: yes|no, default: yes
# Print IP addresses and port numbers in numeric format in the status output.
IPTABLES_STATUS_NUMERIC="yes"

# Verbose status output
#   Value: yes|no, default: yes
# Print info about the number of packets and bytes plus the "input-" and
# "outputdevice" in the status output.
IPTABLES_STATUS_VERBOSE="no"

# Status output with numbered lines
#   Value: yes|no, default: yes
# Print a counter/number for every rule in the status output.
IPTABLES_STATUS_LINENUMBERS="yes"
(file diff suppressed because it is too large)
@@ -1,3 +1,2 @@
if bld.env.DISTRO not in ['Windows','Mac']:
    obj = bld(features = 'py',name='pythonmodules')
    obj.find_sources_in_dirs('lib', exts=['.py'])
obj = bld(features = 'py',name='pythonmodules')
obj.find_sources_in_dirs('lib', exts=['.py'])
@@ -85,7 +85,7 @@ do
  esac
done

CERT="$(dirname $0)/id_rsa"
cert="/root/.ssh/id_rsa.cloud"

# Check if DomR is up and running. If not, exit with error code 1.
check_gw "$domRIp"

@@ -114,7 +114,7 @@ then
  exit 2
fi

ssh -p 3922 -q -o StrictHostKeyChecking=no -i $CERT root@$domRIp "/root/firewall.sh $*"
ssh -p 3922 -q -o StrictHostKeyChecking=no -i $cert root@$domRIp "/root/firewall.sh $*"
exit $?

@@ -26,7 +26,7 @@ copy_haproxy() {
  local domRIp=$1
  local cfg=$2

  scp -P 3922 -q -o StrictHostKeyChecking=no -i $CERT $cfg root@$domRIp:/etc/haproxy/haproxy.cfg.new
  scp -P 3922 -q -o StrictHostKeyChecking=no -i $cert $cfg root@$domRIp:/etc/haproxy/haproxy.cfg.new
  return $?
}

@@ -56,7 +56,7 @@ do
  esac
done

CERT="$(dirname $0)/id_rsa"
cert="/root/.ssh/id_rsa.cloud"

if [ "$iflag$fflag" != "11" ]
then

@@ -79,5 +79,5 @@ then
  exit 1
fi

ssh -p 3922 -q -o StrictHostKeyChecking=no -i $CERT root@$domRIp "/root/loadbalancer.sh $*"
ssh -p 3922 -q -o StrictHostKeyChecking=no -i $cert root@$domRIp "/root/loadbalancer.sh $*"
exit $?
@@ -57,7 +57,7 @@ add_nat_entry() {
  ssh -p 3922 -o StrictHostKeyChecking=no -i $cert root@$dRIp "\
    ip addr add dev $correctVif $pubIp
    iptables -t nat -I POSTROUTING -j SNAT -o $correctVif --to-source $pubIp ;
    /sbin/arping -c 3 -I $correctVif -A -U -s $pubIp $pubIp;
    arping -c 3 -I $correctVif -A -U -s $pubIp $pubIp;
    "
  if [ $? -gt 0 -a $? -ne 2 ]
  then

@@ -91,7 +91,7 @@ add_an_ip () {
  ssh -p 3922 -o StrictHostKeyChecking=no -i $cert root@$dRIp "\
    ifconfig $correctVif up;
    ip addr add dev $correctVif $pubIp ;
    /sbin/arping -c 3 -I $correctVif -A -U -s $pubIp $pubIp;
    arping -c 3 -I $correctVif -A -U -s $pubIp $pubIp;
    "
  return $?
}

@@ -18,7 +18,7 @@ check_gw() {
  return $?;
}

cert="$(dirname $0)/id_rsa"
cert="/root/.ssh/id_rsa.cloud"

create_usage_rules () {
  local dRIp=$1

@@ -10,7 +10,7 @@ usage() {
}

set -x
CERT="/root/.ssh/id_rsa.cloud"
cert="/root/.ssh/id_rsa.cloud"
PORT=3922

create_htaccess() {

@@ -24,7 +24,7 @@ create_htaccess() {
  entry="RewriteRule ^$file$ ../$folder/%{REMOTE_ADDR}/$file [L,NC,QSA]"
  htaccessFolder="/var/www/html/latest"
  htaccessFile=$htaccessFolder/.htaccess
  ssh -p $PORT -o StrictHostKeyChecking=no -i $CERT root@$domrIp "mkdir -p $htaccessFolder; touch $htaccessFile; grep -F \"$entry\" $htaccessFile; if [ \$? -gt 0 ]; then echo -e \"$entry\" >> $htaccessFile; fi" >/dev/null
  ssh -p $PORT -o StrictHostKeyChecking=no -i $cert root@$domrIp "mkdir -p $htaccessFolder; touch $htaccessFile; grep -F \"$entry\" $htaccessFile; if [ \$? -gt 0 ]; then echo -e \"$entry\" >> $htaccessFile; fi" >/dev/null
  result=$?

  if [ $result -eq 0 ]

@@ -32,7 +32,7 @@ create_htaccess() {
    entry="Options -Indexes\\nOrder Deny,Allow\\nDeny from all\\nAllow from $vmIp"
    htaccessFolder="/var/www/html/$folder/$vmIp"
    htaccessFile=$htaccessFolder/.htaccess
    ssh -p $PORT -o StrictHostKeyChecking=no -i $CERT root@$domrIp "mkdir -p $htaccessFolder; echo -e \"$entry\" > $htaccessFile" >/dev/null
    ssh -p $PORT -o StrictHostKeyChecking=no -i $cert root@$domrIp "mkdir -p $htaccessFolder; echo -e \"$entry\" > $htaccessFile" >/dev/null
    result=$?
  fi

@@ -47,7 +47,7 @@ copy_vm_data_file() {
  local dataFile=$5

  chmod +r $dataFile
  scp -P $PORT -o StrictHostKeyChecking=no -i $CERT $dataFile root@$domrIp:/var/www/html/$folder/$vmIp/$file >/dev/null
  scp -P $PORT -o StrictHostKeyChecking=no -i $cert $dataFile root@$domrIp:/var/www/html/$folder/$vmIp/$file >/dev/null
  return $?
}

@@ -58,7 +58,7 @@ delete_vm_data_file() {
  local file=$4

  vmDataFilePath="/var/www/html/$folder/$vmIp/$file"
  ssh -p $PORT -o StrictHostKeyChecking=no -i $CERT root@$domrIp "if [ -f $vmDataFilePath ]; then rm -rf $vmDataFilePath; fi" >/dev/null
  ssh -p $PORT -o StrictHostKeyChecking=no -i $cert root@$domrIp "if [ -f $vmDataFilePath ]; then rm -rf $vmDataFilePath; fi" >/dev/null
  return $?
}
@@ -78,6 +78,7 @@ create_from_file() {
  then
    rm -f $tmpltimg
  fi
  chmod a+r /$tmpltfs/$tmpltname
}

create_from_snapshot() {

@@ -92,6 +93,8 @@ create_from_snapshot() {
    printf "Failed to create template /$tmplfs/$tmpltname from snapshot $snapshotName on disk $tmpltImg "
    exit 2
  fi

  chmod a+r /$tmpltfs/$tmpltname
}

tflag=

@@ -165,6 +168,7 @@ else
fi

touch /$tmpltfs/template.properties
chmod a+r /$tmpltfs/template.properties
echo -n "" > /$tmpltfs/template.properties

today=$(date '+%m_%d_%Y')
@@ -43,8 +43,24 @@ create_snapshot() {
destroy_snapshot() {
  local disk=$1
  local snapshotname=$2
  local deleteDir=$3
  local failed=0

  if [ -d $disk ]
  then
    if [ -f $disk/$snapshotname ]
    then
      rm -rf $disk/$snapshotname >& /dev/null
    fi

    if [ "$deleteDir" == "1" ]
    then
      rm -rf $disk >& /dev/null
    fi

    return $failed
  fi

  if [ ! -f $disk ]
  then
    failed=1

@@ -119,8 +135,9 @@ nflag=
pathval=
snapshot=
tmplName=
deleteDir=

while getopts 'c:d:r:n:b:p:t:' OPTION
while getopts 'c:d:r:n:b:p:t:f' OPTION
do
  case $OPTION in
    c) cflag=1

@@ -142,6 +159,8 @@ do
       ;;
    t) tmplName="$OPTARG"
       ;;
    f) deleteDir=1
       ;;
    ?) usage
       ;;
  esac

@@ -154,7 +173,7 @@ then
  exit $?
elif [ "$dflag" == "1" ]
then
  destroy_snapshot $pathval $snapshot
  destroy_snapshot $pathval $snapshot $deleteDir
  exit $?
elif [ "$bflag" == "1" ]
then
@@ -3,7 +3,7 @@
# createtmplt.sh -- install a template

usage() {
  printf "Usage: %s: -t <template-fs> -n <templatename> -f <root disk file> -s <size in Gigabytes> -c <md5 cksum> -d <descr> -h [-u]\n" $(basename $0) >&2
  printf "Usage: %s: -t <template-fs> -n <templatename> -f <root disk file> -c <md5 cksum> -d <descr> -h [-u]\n" $(basename $0) >&2
}

@@ -67,7 +67,7 @@ uncompress() {
    return 1
  fi

  rm $1
  rm -f $1
  printf $tmpfile

  return 0

@@ -77,16 +77,10 @@ create_from_file() {
  local tmpltfs=$1
  local tmpltimg=$2
  local tmpltname=$3
  local volsize=$4
  local cleanup=$5

  #copy the file to the disk
  mv $tmpltimg /$tmpltfs/$tmpltname

#  if [ "$cleanup" == "true" ]
#  then
#    rm -f $tmpltimg
#  fi
}

tflag=

@@ -112,7 +106,6 @@ do
       tmpltimg="$OPTARG"
       ;;
    s) sflag=1
       volsize="$OPTARG"
       ;;
    c) cflag=1
       cksum="$OPTARG"
       ;;

@@ -161,33 +154,18 @@ rollback_if_needed $tmpltfs $? "failed to uncompress $tmpltimg\n"
tmpltimg2=$(untar $tmpltimg2)
rollback_if_needed $tmpltfs $? "tar archives not supported\n"

if [ ${tmpltname%.vhd} = ${tmpltname} ]
if [ ${tmpltname%.vhd} != ${tmpltname} ]
then
  vhd-util check -n ${tmpltimg2} > /dev/null
  rollback_if_needed $tmpltfs $? "vhd tool check $tmpltimg2 failed\n"
fi

# need the 'G' suffix on volume size
if [ ${volsize:(-1)} != G ]
then
  volsize=${volsize}G
fi

#determine source file size -- it needs to be less than or equal to volsize
imgsize=$(ls -lh $tmpltimg2| awk -F" " '{print $5}')
if [ ${imgsize:(-1)} == G ]
then
  imgsize=${imgsize%G}  #strip out the G
  imgsize=${imgsize%.*} #...and any decimal part
  let imgsize=imgsize+1 # add 1 to compensate for decimal part
  volsizetmp=${volsize%G}
  if [ $volsizetmp -lt $imgsize ]
  then
    volsize=${imgsize}G
  if which vhd-util 2>/dev/null
  then
    vhd-util check -n ${tmpltimg2} > /dev/null
    rollback_if_needed $tmpltfs $? "vhd tool check $tmpltimg2 failed\n"
  fi
fi

create_from_file $tmpltfs $tmpltimg2 $tmpltname $volsize $cleanup
imgsize=$(ls -l $tmpltimg2| awk -F" " '{print $5}')

create_from_file $tmpltfs $tmpltimg2 $tmpltname

touch /$tmpltfs/template.properties
rollback_if_needed $tmpltfs $? "Failed to create template.properties file"

@@ -195,13 +173,10 @@ echo -n "" > /$tmpltfs/template.properties

today=$(date '+%m_%d_%Y')
echo "filename=$tmpltname" > /$tmpltfs/template.properties
echo "snapshot.name=$today" >> /$tmpltfs/template.properties
echo "description=$descr" >> /$tmpltfs/template.properties
echo "name=$tmpltname" >> /$tmpltfs/template.properties
echo "checksum=$cksum" >> /$tmpltfs/template.properties
echo "hvm=$hvm" >> /$tmpltfs/template.properties
echo "volume.size=$volsize" >> /$tmpltfs/template.properties
echo "size=$imgsize" >> /$tmpltfs/template.properties

if [ "$cleanup" == "true" ]
then
@@ -116,6 +116,9 @@
then
  echo "Failed to install routing template $tmpltimg to $destdir"
fi

tmpltfile=$destdir/$tmpfile
tmpltsize=$(ls -l $tmpltfile| awk -F" " '{print $5}')

echo "vhd=true" >> $destdir/template.properties
echo "id=1" >> $destdir/template.properties
echo "public=true" >> $destdir/template.properties

@@ -123,6 +126,6 @@ echo "vhd.filename=$localfile" >> $destdir/template.properties
echo "uniquename=routing" >> $destdir/template.properties
echo "vhd.virtualsize=2147483648" >> $destdir/template.properties
echo "virtualsize=2147483648" >> $destdir/template.properties
echo "vhd.size=2101252608" >> $destdir/template.properties
echo "vhd.size=$tmpltsize" >> $destdir/template.properties

echo "Successfully installed routing template $tmpltimg to $destdir"
@ -1,232 +0,0 @@
|
||||
#/bin/bash
|
||||
# $Id: prepsystemvm.sh 10800 2010-07-16 13:48:39Z edison $ $HeadURL: svn://svn.lab.vmops.com/repos/vmdev/java/scripts/vm/hypervisor/xenserver/prepsystemvm.sh $
|
||||
|
||||
#set -x
|
||||
|
||||
mntpath() {
|
||||
local vmname=$1
|
||||
echo "/mnt/$vmname"
|
||||
}
|
||||
|
||||
mount_local() {
|
||||
local vmname=$1
|
||||
local disk=$2
|
||||
local path=$(mntpath $vmname)
|
||||
|
||||
mkdir -p ${path}
|
||||
mount $disk ${path}
|
||||
|
||||
return $?
|
||||
}
|
||||
|
||||
umount_local() {
|
||||
local vmname=$1
|
||||
local path=$(mntpath $vmname)
|
||||
|
||||
umount $path
|
||||
local ret=$?
|
||||
|
||||
rm -rf $path
|
||||
return $ret
|
||||
}
|
||||
|
||||
|
||||
patch_scripts() {
|
||||
local vmname=$1
|
||||
local patchfile=$2
|
||||
local path=$(mntpath $vmname)
|
||||
|
||||
local oldmd5=
|
||||
local md5file=${path}/md5sum
|
||||
[ -f ${md5file} ] && oldmd5=$(cat ${md5file})
|
||||
local newmd5=$(md5sum $patchfile | awk '{print $1}')
|
||||
|
||||
if [ "$oldmd5" != "$newmd5" ]
|
||||
then
|
||||
tar xzf $patchfile -C ${path}
|
||||
echo ${newmd5} > ${md5file}
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
#
|
||||
# To use existing console proxy .zip-based package file
|
||||
#
|
||||
patch_console_proxy() {
|
||||
local vmname=$1
|
||||
local patchfile=$2
|
||||
local path=$(mntpath $vmname)
|
||||
local oldmd5=
|
||||
local md5file=${path}/usr/local/cloud/systemvm/md5sum
|
||||
|
||||
[ -f ${md5file} ] && oldmd5=$(cat ${md5file})
|
||||
local newmd5=$(md5sum $patchfile | awk '{print $1}')
|
||||
|
||||
if [ "$oldmd5" != "$newmd5" ]
|
||||
then
|
||||
echo "All" | unzip $patchfile -d ${path}/usr/local/cloud/systemvm >/dev/null 2>&1
|
||||
chmod 555 ${path}/usr/local/cloud/systemvm/run.sh
|
||||
find ${path}/usr/local/cloud/systemvm/ -name \*.sh | xargs chmod 555
|
||||
echo ${newmd5} > ${md5file}
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
consoleproxy_svcs() {
|
||||
local vmname=$1
|
||||
local path=$(mntpath $vmname)
|
||||
|
||||
chroot ${path} /sbin/chkconfig cloud on
|
||||
chroot ${path} /sbin/chkconfig postinit on
|
||||
chroot ${path} /sbin/chkconfig domr_webserver off
|
||||
chroot ${path} /sbin/chkconfig haproxy off ;
|
||||
chroot ${path} /sbin/chkconfig dnsmasq off
|
||||
chroot ${path} /sbin/chkconfig sshd on
|
||||
chroot ${path} /sbin/chkconfig httpd off
|
||||
chroot ${path} /sbin/chkconfig nfs off
|
||||
chroot ${path} /sbin/chkconfig nfslock off
|
||||
chroot ${path} /sbin/chkconfig rpcbind off
|
||||
chroot ${path} /sbin/chkconfig rpcidmap off
|
||||
|
||||
cp ${path}/etc/sysconfig/iptables-consoleproxy ${path}/etc/sysconfig/iptables
|
||||
}
|
||||
|
||||
secstorage_svcs() {
|
||||
local vmname=$1
|
||||
local path=$(mntpath $vmname)
|
||||
|
||||
chroot ${path} /sbin/chkconfig cloud on
|
||||
chroot ${path} /sbin/chkconfig postinit on
|
||||
chroot ${path} /sbin/chkconfig domr_webserver off
|
||||
chroot ${path} /sbin/chkconfig haproxy off ;
|
||||
chroot ${path} /sbin/chkconfig dnsmasq off
|
||||
chroot ${path} /sbin/chkconfig sshd on
|
||||
chroot ${path} /sbin/chkconfig httpd off
|
||||
|
||||
|
||||
cp ${path}/etc/sysconfig/iptables-secstorage ${path}/etc/sysconfig/iptables
|
||||
mkdir -p ${path}/var/log/cloud
|
||||
}
|
||||
|
||||
routing_svcs() {
|
||||
local vmname=$1
|
||||
local path=$(mntpath $vmname)
|
||||
|
||||
chroot ${path} /sbin/chkconfig cloud off
|
||||
chroot ${path} /sbin/chkconfig domr_webserver on ;
|
||||
chroot ${path} /sbin/chkconfig haproxy on ;
|
||||
chroot ${path} /sbin/chkconfig dnsmasq on
|
||||
chroot ${path} /sbin/chkconfig sshd on
|
||||
chroot ${path} /sbin/chkconfig nfs off
|
||||
chroot ${path} /sbin/chkconfig nfslock off
|
||||
chroot ${path} /sbin/chkconfig rpcbind off
|
||||
chroot ${path} /sbin/chkconfig rpcidmap off
|
||||
cp ${path}/etc/sysconfig/iptables-domr ${path}/etc/sysconfig/iptables
|
||||
}
|
||||
|
||||
lflag=
|
||||
dflag=
|
||||
|
||||
while getopts 't:l:d:' OPTION
|
||||
do
|
||||
case $OPTION in
|
||||
l) lflag=1
|
||||
vmname="$OPTARG"
|
||||
;;
|
||||
t) tflag=1
|
||||
vmtype="$OPTARG"
|
||||
;;
|
||||
d) dflag=1
|
||||
rootdisk="$OPTARG"
|
||||
;;
|
||||
*) ;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [ "$lflag$tflag$dflag" != "111" ]
|
||||
then
|
||||
printf "Error: Not enough parameter\n" >&2
|
||||
exit 1
|
||||
fi

mount_local $vmname $rootdisk

if [ $? -gt 0 ]
then
    printf "Failed to mount disk $rootdisk for $vmname\n" >&2
    exit 1
fi

if [ -f $(dirname $0)/patch.tgz ]
then
    patch_scripts $vmname $(dirname $0)/patch.tgz
    if [ $? -gt 0 ]
    then
        printf "Failed to apply patch patch.tgz to $vmname\n" >&2
        umount_local $vmname
        exit 4
    fi
fi

cpfile=$(dirname $0)/systemvm-premium.zip
# left-associative shell logic, i.e. (consoleproxy OR secstorage) AND file exists
if [ "$vmtype" == "consoleproxy" ] || [ "$vmtype" == "secstorage" ] && [ -f $cpfile ]
then
    patch_console_proxy $vmname $cpfile
    if [ $? -gt 0 ]
    then
        printf "Failed to apply patch $cpfile to $vmname\n" >&2
        umount_local $vmname
        exit 5
    fi
fi

# domr is 64-bit; copy the host's 32-bit chkconfig into it
# (a workaround until a 32-bit domr is used)
dompath=$(mntpath $vmname)
cp /sbin/chkconfig $dompath/sbin
# copy the public key into the system VM
cp $(dirname $0)/id_rsa.pub $dompath/root/.ssh/authorized_keys
# start with an empty known_hosts file
echo "" > $dompath/root/.ssh/known_hosts

if [ "$vmtype" == "router" ]
then
    routing_svcs $vmname
    if [ $? -gt 0 ]
    then
        printf "Failed to execute routing_svcs\n" >&2
        umount_local $vmname
        exit 6
    fi
fi

if [ "$vmtype" == "consoleproxy" ]
then
    consoleproxy_svcs $vmname
    if [ $? -gt 0 ]
    then
        printf "Failed to execute consoleproxy_svcs\n" >&2
        umount_local $vmname
        exit 7
    fi
fi

if [ "$vmtype" == "secstorage" ]
then
    secstorage_svcs $vmname
    if [ $? -gt 0 ]
    then
        printf "Failed to execute secstorage_svcs\n" >&2
        umount_local $vmname
        exit 8
    fi
fi

umount_local $vmname

exit $?
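
# Example run (illustrative values; the actual script name and the root disk
# argument depend on how the system VM image is exposed on this host):
#   <this script> -l v-2-VM -t consoleproxy -d /path/to/systemvm/rootdisk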

@@ -21,13 +21,11 @@ vmopsSnapshot=..,0755,/etc/xapi.d/plugins
xs_cleanup.sh=..,0755,/opt/xensource/bin
systemvm.iso=../../../../../vms,0644,/opt/xensource/packages/iso
hostvmstats.py=..,0755,/opt/xensource/sm
id_rsa.cloud=..,0600,/opt/xensource/bin
id_rsa.cloud=..,0600,/root/.ssh
network_info.sh=..,0755,/opt/xensource/bin
prepsystemvm.sh=..,0755,/opt/xensource/bin
setupxenserver.sh=..,0755,/opt/xensource/bin
make_migratable.sh=..,0755,/opt/xensource/bin
networkUsage.sh=..,0755,/opt/xensource/bin
setup_iscsi.sh=..,0755,/opt/xensource/bin
version=..,0755,/opt/xensource/bin
pingtest.sh=../../..,0755,/opt/xensource/bin
@@ -35,5 +33,6 @@ dhcp_entry.sh=../../../../network/domr/,0755,/opt/xensource/bin
ipassoc.sh=../../../../network/domr/,0755,/opt/xensource/bin
vm_data.sh=../../../../network/domr/,0755,/opt/xensource/bin
save_password_to_domr.sh=../../../../network/domr/,0755,/opt/xensource/bin
networkUsage.sh=../../../../network/domr/,0755,/opt/xensource/bin
call_firewall.sh=../../../../network/domr/,0755,/opt/xensource/bin
call_loadbalancer.sh=../../../../network/domr/,0755,/opt/xensource/bin

@@ -551,6 +551,7 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
host.setGuid(null);
host.setClusterId(null);
_hostDao.update(host.getId(), host);

_hostDao.remove(hostId);

//delete the associated primary storage from db
@@ -614,6 +615,8 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
templateHostSC.addAnd("hostId", SearchCriteria.Op.EQ, secStorageHost.getId());
_vmTemplateHostDao.remove(templateHostSC);

/*Disconnected agent needs special handling here*/
secStorageHost.setGuid(null);
txn.commit();
return true;
}catch (Throwable t) {
@@ -1142,11 +1145,16 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
}
}
}

@Override
public Answer easySend(final Long hostId, final Command cmd) {
return easySend(hostId, cmd, _wait);
}

@Override
public Answer easySend(final Long hostId, final Command cmd) {
public Answer easySend(final Long hostId, final Command cmd, int timeout) {
try {
final Answer answer = send(hostId, cmd, _wait);
final Answer answer = send(hostId, cmd, timeout);
if (answer == null) {
s_logger.warn("send returns null answer");
return null;
@@ -1764,6 +1772,7 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {

}

@Override
public Host findHost(VmCharacteristics vm, Set<? extends Host> avoids) {
return null;
}

@@ -51,7 +51,6 @@ import com.cloud.utils.DateUtil;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.Pair;
import com.cloud.utils.component.Inject;
import com.cloud.utils.db.GlobalLock;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.vm.State;
import com.cloud.vm.UserVmVO;
@@ -78,7 +77,6 @@ public class UserConcentratedAllocator implements PodAllocator {
@Inject VMInstanceDao _vmInstanceDao;

Random _rand = new Random(System.currentTimeMillis());
private final GlobalLock m_capacityCheckLock = GlobalLock.getInternLock("capacity.check");
private int _hoursToSkipStoppedVMs = 24;

private int _secStorageVmRamSize = 1024;
@@ -145,7 +143,7 @@ public class UserConcentratedAllocator implements PodAllocator {
}

if (availablePods.size() == 0) {
s_logger.debug("There are no pods with enough memory/CPU capacity in zone" + zone.getName());
s_logger.debug("There are no pods with enough memory/CPU capacity in zone " + zone.getName());
return null;
} else {
// Return a random pod
@@ -158,30 +156,14 @@ public class UserConcentratedAllocator implements PodAllocator {

private boolean dataCenterAndPodHasEnoughCapacity(long dataCenterId, long podId, long capacityNeeded, short capacityType, long[] hostCandidate) {
List<CapacityVO> capacities = null;
if (m_capacityCheckLock.lock(120)) { // 2 minutes
try {
SearchCriteria<CapacityVO> sc = _capacityDao.createSearchCriteria();
sc.addAnd("capacityType", SearchCriteria.Op.EQ, capacityType);
sc.addAnd("dataCenterId", SearchCriteria.Op.EQ, dataCenterId);
sc.addAnd("podId", SearchCriteria.Op.EQ, podId);
capacities = _capacityDao.search(sc, null);
} finally {
m_capacityCheckLock.unlock();
}
} else {
s_logger.error("Unable to acquire synchronization lock for pod allocation");

// we now try to enforce reservation-style allocation, waiting time has been adjusted
// to 2 minutes
return false;

/*
// If we can't lock the table, just return that there is enough capacity and allow instance creation to fail on the agent
// if there is not enough capacity. All that does is skip the optimization of checking for capacity before sending the
// command to the agent.
return true;
*/
}

SearchCriteria<CapacityVO> sc = _capacityDao.createSearchCriteria();
sc.addAnd("capacityType", SearchCriteria.Op.EQ, capacityType);
sc.addAnd("dataCenterId", SearchCriteria.Op.EQ, dataCenterId);
sc.addAnd("podId", SearchCriteria.Op.EQ, podId);
s_logger.trace("Executing search");
capacities = _capacityDao.search(sc, null);
s_logger.trace("Done with a search");

boolean enoughCapacity = false;
if (capacities != null) {

@@ -65,8 +65,9 @@ import com.cloud.storage.dao.VolumeDao;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.Pair;
import com.cloud.utils.component.ComponentLocator;
import com.cloud.utils.db.GlobalLock;
import com.cloud.utils.db.DB;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
import com.cloud.vm.ConsoleProxyVO;
import com.cloud.vm.DomainRouterVO;
import com.cloud.vm.SecondaryStorageVmVO;
@@ -118,8 +119,6 @@ public class AlertManagerImpl implements AlertManager {
private double _publicIPCapacityThreshold = 0.75;
private double _privateIPCapacityThreshold = 0.75;

private final GlobalLock m_capacityCheckLock = GlobalLock.getInternLock("capacity.check");

@Override
public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
_name = name;
@@ -319,7 +318,7 @@ public class AlertManagerImpl implements AlertManager {
}
}

@Override
@Override @DB
public void recalculateCapacity() {
// FIXME: the right way to do this is to register a listener (see RouterStatsListener, VMSyncListener)
// for the vm sync state. The listener model has connects/disconnects to keep things in sync much better
@@ -435,25 +434,23 @@ public class AlertManagerImpl implements AlertManager {
newCapacities.add(newPrivateIPCapacity);
}

if (m_capacityCheckLock.lock(5)) { // 5 second timeout
try {
// delete the old records
_capacityDao.clearNonStorageCapacities();
Transaction txn = Transaction.currentTxn();
try {
txn.start();
// delete the old records
_capacityDao.clearNonStorageCapacities();

for (CapacityVO newCapacity : newCapacities) {
_capacityDao.persist(newCapacity);
}
} finally {
m_capacityCheckLock.unlock();
}

if (s_logger.isTraceEnabled()) {
s_logger.trace("done recalculating system capacity");
}
} else {
if (s_logger.isTraceEnabled()) {
s_logger.trace("Skipping capacity check, unable to lock the capacity table for recalculation.");
}
for (CapacityVO newCapacity : newCapacities) {
s_logger.trace("Executing capacity update");
_capacityDao.persist(newCapacity);
s_logger.trace("Done with capacity update");
}
txn.commit();
} catch (Exception ex) {
txn.rollback();
s_logger.error("Unable to start transaction for capacity update");
} finally {
txn.close();
}
}

@@ -153,6 +153,7 @@ public abstract class BaseCmd {
CPU_ALLOCATED("cpuallocated", BaseCmd.TYPE_LONG, "cpuallocated"),
CPU_USED("cpuused", BaseCmd.TYPE_LONG, "cpuused"),
CREATED("created", BaseCmd.TYPE_DATE, "created"),
ATTACHED("attached", BaseCmd.TYPE_DATE, "attached"),
CROSS_ZONES("crossZones", BaseCmd.TYPE_BOOLEAN, "crosszones"),
DAILY_MAX("dailymax", BaseCmd.TYPE_INT, "dailyMax"),
DATA_DISK_OFFERING_ID("datadiskofferingid", BaseCmd.TYPE_LONG, "dataDiskOfferingId"),
@@ -198,6 +199,7 @@ public abstract class BaseCmd {
GROUP("group", BaseCmd.TYPE_STRING, "group"),
GROUP_ID("group", BaseCmd.TYPE_LONG, "groupId"),
GROUP_IDS("groupids", BaseCmd.TYPE_STRING, "groupIds"),
GUEST_OS_ID("guestosid", BaseCmd.TYPE_LONG, "guestOsId"),
HA_ENABLE("haenable", BaseCmd.TYPE_BOOLEAN, "haEnable"),
HAS_CHILD("haschild", BaseCmd.TYPE_BOOLEAN, "haschild"),
HOST_ID("hostid", BaseCmd.TYPE_LONG, "hostId"),
@@ -308,6 +310,8 @@ public abstract class BaseCmd {
RESOURCE_TYPE("resourcetype", BaseCmd.TYPE_INT, "resourcetype"),
RESPONSE_TYPE("response",BaseCmd.TYPE_STRING,"response"),
ROOT_DISK_OFFERING_ID("rootdiskofferingid", BaseCmd.TYPE_LONG, "rootDiskOfferingId"),
ROOT_DEVICE_ID("rootdeviceid", BaseCmd.TYPE_LONG, "rootDeviceId"),
ROOT_DEVICE_TYPE("rootdevicetype", BaseCmd.TYPE_STRING, "rootDeviceType"),
RULE_ID("ruleid", BaseCmd.TYPE_LONG, "ruleId"),
RUNNING_VMS("runningvms", BaseCmd.TYPE_LONG, "runningvms"),
SCHEDULE("schedule", BaseCmd.TYPE_STRING, "schedule"),

@@ -37,7 +37,9 @@ public class DetachVolumeCmd extends BaseCmd {

static {
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.ACCOUNT_OBJ, Boolean.FALSE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.ID, Boolean.TRUE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.ID, Boolean.FALSE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.DEVICE_ID, Boolean.FALSE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.VIRTUAL_MACHINE_ID, Boolean.FALSE));
}

public String getName() {
@@ -56,6 +58,23 @@ public class DetachVolumeCmd extends BaseCmd {
public List<Pair<String, Object>> execute(Map<String, Object> params) {
Account account = (Account) params.get(BaseCmd.Properties.ACCOUNT_OBJ.getName());
Long volumeId = (Long) params.get(BaseCmd.Properties.ID.getName());
Long deviceId = (Long) params.get(BaseCmd.Properties.DEVICE_ID.getName());
Long instanceId = (Long) params.get(BaseCmd.Properties.VIRTUAL_MACHINE_ID.getName());
VolumeVO volume = null;

if((volumeId==null && (deviceId==null && instanceId==null)) || (volumeId!=null && (deviceId!=null || instanceId!=null)) || (volumeId==null && (deviceId==null || instanceId==null)))
{
throw new ServerApiException(BaseCmd.PARAM_ERROR, "Please provide either a volume id, or a tuple (device id, instance id)");
}

if(volumeId!=null)
{
deviceId = instanceId = Long.valueOf("0");
}
else
{
volumeId = Long.valueOf("0");
}

boolean isAdmin;
if (account == null) {
@@ -67,9 +86,18 @@ public class DetachVolumeCmd extends BaseCmd {
}

// Check that the volume ID is valid
VolumeVO volume = getManagementServer().findVolumeById(volumeId);
if (volume == null)
throw new ServerApiException(BaseCmd.PARAM_ERROR, "Unable to find volume with ID: " + volumeId);
if(volumeId != 0)
{
volume = getManagementServer().findVolumeById(volumeId);
if (volume == null)
throw new ServerApiException(BaseCmd.PARAM_ERROR, "Unable to find volume with ID: " + volumeId);
}
else
{
volume = getManagementServer().findVolumeByInstanceAndDeviceId(instanceId, deviceId);
if (volume == null)
throw new ServerApiException(BaseCmd.PARAM_ERROR, "Unable to find volume with ID: " + volumeId);
}

// If the account is not an admin, check that the volume is owned by the account that was passed in
if (!isAdmin) {
@@ -82,7 +110,7 @@ public class DetachVolumeCmd extends BaseCmd {
}

try {
long jobId = getManagementServer().detachVolumeFromVMAsync(volumeId);
long jobId = getManagementServer().detachVolumeFromVMAsync(volumeId, deviceId, instanceId);

if (jobId == 0) {
s_logger.warn("Unable to schedule async-job for DetachVolume command");

server/src/com/cloud/api/commands/ExtractTemplateCmd.java (new file, 90 lines)
@@ -0,0 +1,90 @@
package com.cloud.api.commands;

import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.log4j.Logger;

import com.cloud.api.BaseCmd;
import com.cloud.api.ServerApiException;
import com.cloud.dc.DataCenterVO;
import com.cloud.server.ManagementServer;
import com.cloud.storage.VMTemplateVO;
import com.cloud.user.Account;
import com.cloud.utils.Pair;

public class ExtractTemplateCmd extends BaseCmd {

public static final Logger s_logger = Logger.getLogger(ExtractTemplateCmd.class.getName());

private static final String s_name = "extracttemplateresponse";
private static final List<Pair<Enum, Boolean>> s_properties = new ArrayList<Pair<Enum, Boolean>>();

static {
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.URL, Boolean.TRUE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.ID, Boolean.TRUE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.ZONE_ID, Boolean.TRUE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.ACCOUNT_OBJ, Boolean.FALSE));
}

@Override
public List<Pair<String, Object>> execute(Map<String, Object> params) {
String url = (String) params.get(BaseCmd.Properties.URL.getName());
Long templateId = (Long) params.get(BaseCmd.Properties.ID.getName());
Long zoneId = (Long) params.get(BaseCmd.Properties.ZONE_ID.getName());
Account account = (Account) params.get(BaseCmd.Properties.ACCOUNT_OBJ.getName());

ManagementServer managementServer = getManagementServer();
VMTemplateVO template = managementServer.findTemplateById(templateId.longValue());
if (template == null) {
throw new ServerApiException(BaseCmd.INTERNAL_ERROR, "Unable to find template with id " + templateId);
}
if (template.getName().startsWith("xs-tools")) {
throw new ServerApiException(BaseCmd.INTERNAL_ERROR, "Unable to extract the template " + template.getName() + "; it is not supported yet");
}

if (url.toLowerCase().contains("file://")) {
throw new ServerApiException(BaseCmd.PARAM_ERROR, "file:// type urls are currently unsupported");
}

if (account != null) {
if (!isAdmin(account.getType())) {
if (template.getAccountId() != account.getId()) {
throw new ServerApiException(BaseCmd.PARAM_ERROR, "Unable to find template with ID: " + templateId + " for account: " + account.getAccountName());
}
} else if (!managementServer.isChildDomain(account.getDomainId(), managementServer.findDomainIdByAccountId(template.getAccountId()))) {
throw new ServerApiException(BaseCmd.PARAM_ERROR, "Unable to extract template " + templateId + " to " + url + ", permission denied.");
}
}

try {
managementServer.extractTemplate(url, templateId, zoneId);
} catch (Exception e) {
s_logger.error(e.getMessage(), e);
throw new ServerApiException(BaseCmd.INTERNAL_ERROR, "Internal Error Extracting the template " + e.getMessage());
}
DataCenterVO zone = managementServer.getDataCenterBy(zoneId);
List<Pair<String, Object>> response = new ArrayList<Pair<String, Object>>();
response.add(new Pair<String, Object>(BaseCmd.Properties.TEMPLATE_ID.getName(), templateId));
response.add(new Pair<String, Object>(BaseCmd.Properties.NAME.getName(), template.getName()));
response.add(new Pair<String, Object>(BaseCmd.Properties.DISPLAY_TEXT.getName(), template.getDisplayText()));
response.add(new Pair<String, Object>(BaseCmd.Properties.URL.getName(), url));
response.add(new Pair<String, Object>(BaseCmd.Properties.ZONE_ID.getName(), zoneId));
response.add(new Pair<String, Object>(BaseCmd.Properties.ZONE_NAME.getName(), zone.getName()));
response.add(new Pair<String, Object>(BaseCmd.Properties.TEMPLATE_STATUS.getName(), "Processing"));
return response;
}

@Override
public String getName() {
return s_name;
}

@Override
public List<Pair<Enum, Boolean>> getProperties() {
return s_properties;
}

}

@@ -35,7 +35,10 @@ import com.cloud.server.Criteria;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.storage.GuestOSCategoryVO;
import com.cloud.storage.GuestOSVO;
import com.cloud.storage.StoragePool;
import com.cloud.storage.StoragePoolVO;
import com.cloud.storage.VMTemplateVO;
import com.cloud.storage.VolumeVO;
import com.cloud.user.Account;
import com.cloud.uservm.UserVm;
import com.cloud.utils.Pair;
@@ -53,6 +56,7 @@ public class ListVMsCmd extends BaseCmd {
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.STATE, Boolean.FALSE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.ZONE_ID, Boolean.FALSE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.POD_ID, Boolean.FALSE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.GROUP, Boolean.FALSE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.HOST_ID, Boolean.FALSE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.KEYWORD, Boolean.FALSE));
s_properties.add(new Pair<Enum, Boolean>(BaseCmd.Properties.ACCOUNT, Boolean.FALSE));
@@ -82,6 +86,7 @@ public class ListVMsCmd extends BaseCmd {
Long zoneId = (Long)params.get(BaseCmd.Properties.ZONE_ID.getName());
Long podId = (Long)params.get(BaseCmd.Properties.POD_ID.getName());
Long hostId = (Long)params.get(BaseCmd.Properties.HOST_ID.getName());
String group = (String)params.get(BaseCmd.Properties.GROUP.getName());
String keyword = (String)params.get(BaseCmd.Properties.KEYWORD.getName());
Integer page = (Integer)params.get(BaseCmd.Properties.PAGE.getName());
Integer pageSize = (Integer)params.get(BaseCmd.Properties.PAGESIZE.getName());
@@ -140,6 +145,14 @@ public class ListVMsCmd extends BaseCmd {
if(zoneId != null)
c.addCriteria(Criteria.DATACENTERID, zoneId);

if(group != null)
{
if(group.equals(""))
c.addCriteria(Criteria.EMPTY_GROUP, group);
else
c.addCriteria(Criteria.GROUP, group);
}

// ignore these search requests if it's not an admin
if (isAdmin == true) {
c.addCriteria(Criteria.DOMAINID, domainId);
@@ -169,6 +182,14 @@ public class ListVMsCmd extends BaseCmd {
}

for (UserVm vmInstance : virtualMachines) {

//if the account is deleted, do not return the user vm
Account currentVmAccount = getManagementServer().getAccount(vmInstance.getAccountId());
if(currentVmAccount.getRemoved()!=null)
{
continue; //not returning this vm
}

List<Pair<String, Object>> vmData = new ArrayList<Pair<String, Object>>();
AsyncJobVO asyncJob = getManagementServer().findInstancePendingAsyncJob("vm_instance", vmInstance.getId());
if(asyncJob != null) {
@@ -260,14 +281,22 @@ public class ListVMsCmd extends BaseCmd {
long networkKbWrite = (long)vmStats.getNetworkWriteKBs();
vmData.add(new Pair<String, Object>(BaseCmd.Properties.NETWORK_KB_WRITE.getName(), networkKbWrite));
}
vmData.add(new Pair<String, Object>(BaseCmd.Properties.GUEST_OS_ID.getName(), vmInstance.getGuestOSId()));

GuestOSCategoryVO guestOsCategory = getManagementServer().getGuestOsCategory(vmInstance.getGuestOSId());
if(guestOsCategory!=null)
vmData.add(new Pair<String, Object>(BaseCmd.Properties.OS_TYPE_ID.getName(),guestOsCategory.getId()));
GuestOSVO guestOs = getManagementServer().getGuestOs(vmInstance.getGuestOSId());
if(guestOs!=null)
vmData.add(new Pair<String, Object>(BaseCmd.Properties.OS_TYPE_ID.getName(),guestOs.getCategoryId()));

//network groups
vmData.add(new Pair<String, Object>(BaseCmd.Properties.NETWORK_GROUP_LIST.getName(), getManagementServer().getNetworkGroupsNamesForVm(vmInstance.getId())));

//root device related
VolumeVO rootVolume = getManagementServer().findRootVolume(vmInstance.getId());
vmData.add(new Pair<String, Object>(BaseCmd.Properties.ROOT_DEVICE_ID.getName(), rootVolume.getDeviceId()));

StoragePoolVO storagePool = getManagementServer().findPoolById(rootVolume.getPoolId());
vmData.add(new Pair<String, Object>(BaseCmd.Properties.ROOT_DEVICE_TYPE.getName(), storagePool.getPoolType().toString()));

vmTag[i++] = vmData;
}
List<Pair<String, Object>> returnTags = new ArrayList<Pair<String, Object>>();

@@ -143,7 +143,7 @@ public class ListVolumesCmd extends BaseCmd{

List<VolumeVO> volumes = getManagementServer().searchForVolumes(c);

if (volumes == null || volumes.size()==0) {
if (volumes == null) {
throw new ServerApiException(BaseCmd.INTERNAL_ERROR, "unable to find volumes");
}

@@ -194,6 +194,7 @@ public class ListVolumesCmd extends BaseCmd{
volumeData.add(new Pair<String, Object>(BaseCmd.Properties.SIZE.getName(), virtualSizeInBytes));

volumeData.add(new Pair<String, Object>(BaseCmd.Properties.CREATED.getName(), getDateString(volume.getCreated())));
volumeData.add(new Pair<String, Object>(BaseCmd.Properties.ATTACHED.getName(), getDateString(volume.getAttached())));
volumeData.add(new Pair<String, Object>(BaseCmd.Properties.STATE.getName(),volume.getStatus()));

Account accountTemp = getManagementServer().findAccountById(volume.getAccountId());

@@ -27,7 +27,6 @@ import org.apache.log4j.Logger;
import com.cloud.api.BaseCmd;
import com.cloud.api.ServerApiException;
import com.cloud.exception.InvalidParameterValueException;
import com.cloud.host.HostVO;
import com.cloud.host.Status;
import com.cloud.storage.StoragePoolVO;
import com.cloud.user.Account;

@@ -86,7 +86,7 @@ public class VolumeOperationExecutor extends BaseAsyncJobExecutor {
eventType = EventTypes.EVENT_VOLUME_DETACH;
failureDescription = "Failed to detach volume";

asyncMgr.getExecutorContext().getManagementServer().detachVolumeFromVM(param.getVolumeId(), param.getEventId());
asyncMgr.getExecutorContext().getManagementServer().detachVolumeFromVM(param.getVolumeId(), param.getEventId(), param.getDeviceId(), param.getVmId());
success = true;
asyncMgr.completeAsyncJob(getJob().getId(), AsyncJobResult.STATUS_SUCCEEDED, 0, null);
} else {

@@ -41,7 +41,7 @@ public class VolumeOperationParam {
private long volumeId;
private long eventId;
private Long deviceId;


public VolumeOperationParam() {
}

(Some files were not shown because too many files have changed in this diff.)