This is the REAL 2.1.x tip source

Manuel Amador (Rudd-O) 2010-08-11 18:48:43 -07:00
parent 004c1a2675
commit 50459fd30f
609 changed files with 18063 additions and 31512 deletions

HACKING
View File

@ -1,49 +1,24 @@
---------------------------------------------------------------------
THE QUICK GUIDE TO CLOUDSTACK DEVELOPMENT
QUICK GUIDE TO DEVELOPING, BUILDING AND INSTALLING FROM SOURCE
---------------------------------------------------------------------
=== Overview of the development lifecycle ===
To hack on a CloudStack component, you will generally:
1. Configure the source code:
./waf configure --prefix=/home/youruser/cloudstack
(see below, "./waf configure")
2. Build and install the CloudStack
./waf install
(see below, "./waf install")
3. Set the CloudStack component up
(see below, "Running the CloudStack components from source")
4. Run the CloudStack component
(see below, "Running the CloudStack components from source")
5. Modify the source code
6. Build and install the CloudStack again
./waf install --preserve-config
(see below, "./waf install")
7. GOTO 4 (a complete example session is sketched below)
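Put together, a first session might look like this sketch (the
prefix path is only an example; adjust it to your environment):

    # one-time setup
    ./waf configure --prefix=$HOME/cloudstack
    ./waf install
    # set the component up and run it (see below), then iterate:
    ./waf install --preserve-config    # rebuild + reinstall, keeping your config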
=== What is this waf thing in my development lifecycle? ===
waf is a self-contained, advanced build system written by Thomas Nagy,
in the spirit of SCons or the GNU autotools suite.
* To run waf on Linux / Mac: ./waf [...commands...]
* To run waf on Windows: waf.bat [...commands...]
It all starts with waf.
./waf --help should be your first discovery point to find out both the
configure-time options and the different processes that you can run
using waf.
using waf. Your second discovery point should be the files:
1. wscript: contains the processes you can run when invoking waf
2. wscript_build: contains a manifest of *what* is built and installed
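For example, a quick way to explore these entry points (a sketch):

    ./waf --help         # configure-time options and available commands
    less wscript         # the commands you can run when invoking waf
    less wscript_build   # the manifest of what is built and installed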
=== What do the different waf commands above do? ===
Your normal development process should be:
1. ./waf configure --prefix=/some/path, ONCE
2. ./waf, then hack, then ./waf, then hack, then ./waf
3. ./waf install, then hack, then ./waf install
In detail:
1. ./waf configure --prefix=/some/path
@ -54,10 +29,15 @@ using waf.
variables and options that waf will use for compilation and
installation, including the installation directory (PREFIX).
If you have already configured your source, and you are reconfiguring
it, then you *must* run ./waf clean so the source files are rebuilt
with the proper variables. Otherwise, ./waf install will install
stale files.
For convenience, if you forget to run configure, waf
will proceed with some default configuration options. By
default, PREFIX is /usr/local, but you can set it e.g. to
/home/youruser/cloudstack if you plan to do a non-root
/home/yourusername/cloudstack if you plan to do a non-root
install. Beware that, while you can later install the stack as a
regular user, most components need to *run* as root.
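For example, reconfiguring an already-configured tree for a non-root
install looks like this (a sketch; the prefix path is an example):

    ./waf configure --prefix=$HOME/cloudstack
    ./waf clean      # mandatory after reconfiguring, see above
    ./waf install    # now installs freshly-built, non-stale files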
@ -130,64 +110,65 @@ using waf.
=== Running the CloudStack components from source (for debugging / coding) ===
It is not technically possible to run the CloudStack components from
the source. That, however, is fine -- each component can be run
independently from the install directory:
the source. That, however, is fine -- you do not have to stop and start
the services each time you run ./waf install. Each component can be run
independently:
- Management Server
1) Execute ./waf install as your current user (or as root if the
Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
WARNING: if any CloudStack configuration files have already
been configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set up the management server database:
- either run ./waf deploydb_kvm, or
- run $BINDIR/cloud-setup-databases
3) Execute ./waf run as your current user (or as root if the
Then execute ./waf run as your current user (or as root if the
installation path is only writable by root). Alternatively,
you can use ./waf debug and this will run with debugging enabled.
This will compile the stack, reinstall it, then run the Management
Server in the installed environment, as your current user, in
the foreground.
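In short, a typical Management Server session boils down to this
sketch (assuming a local MySQL server is available):

    ./waf install --preserve-config   # keep already-edited configuration files
    ./waf deploydb_kvm                # once, to deploy the database
    ./waf run                         # or ./waf debug, to enable debugging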
- Agent (Linux-only):
1) Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
NOTE: if you have not yet deployed a database to the local MySQL
server, you should run ./waf deploydb_kvm once so the database is
deployed. Failure to do that will cause the Management Server
to fail on startup.
WARNING: if any CloudStack configuration files have already
been configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set the Agent up:
- run $BINDIR/cloud-setup-agent
3) Execute $LIBEXECDIR/agent-runner as root
- Agent:
- Console Proxy (Linux-only):
1) Execute ./waf install as your current user (or as root if the
Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
Then execute $LIBEXECDIR/agent-runner as root
These steps will compile, reinstall, and run the Agent in the
foreground. You must run this runner as root.
WARNING: if any CloudStack configuration files have already
been configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set the Console Proxy up:
- run $BINDIR/cloud-setup-console-proxy
- Console Proxy:
3) Execute $LIBEXECDIR/console-proxy-runner as root
Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
Then execute $LIBEXECDIR/console-proxy-runner as root
These steps will compile, reinstall, and run the Console Proxy in the
foreground. You must run this runner as root.
WARNING: if any CloudStack configuration files have already
been configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
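Summing up, the run loop for both runner-based components looks
roughly like this (a sketch; $BINDIR and $LIBEXECDIR stand for the
directories chosen at configure time, and sudo is one way to run the
runners as root):

    ./waf install --preserve-config
    $BINDIR/cloud-setup-agent                # once, to set the Agent up
    sudo $LIBEXECDIR/agent-runner            # runs the Agent in the foreground
    $BINDIR/cloud-setup-console-proxy        # once, to set the Console Proxy up
    sudo $LIBEXECDIR/console-proxy-runner    # runs the Console Proxy in the foreground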
---------------------------------------------------------------------
@ -219,17 +200,6 @@ other later-generation build systems:
language is available to use in the build process.
=== Hacking on the build system: what are these wscript files? ===
1. wscript: contains most commands you can run from within waf
2. wscript_configure: contains the process that discovers the software
on the system and configures the build to fit that
3. wscript_build: contains a manifest of *what* is built and installed
Refer to the waf book for general information on waf:
http://freehackers.org/~tnagy/wafbook/index.html
=== What happens when waf runs ===
When you run waf, this happens behind the scenes:

View File

@ -1,16 +1,15 @@
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" path="src"/>
<classpathentry kind="src" path="test"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
<classpathentry combineaccessrules="false" kind="src" path="/utils"/>
<classpathentry kind="lib" path="/thirdparty/log4j-1.2.15.jar"/>
<classpathentry combineaccessrules="false" kind="src" path="/core"/>
<classpathentry kind="lib" path="/thirdparty/commons-httpclient-3.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/junit-4.8.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/xmlrpc-client-3.1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/xmlrpc-common-3.1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/libvirt-0.4.5.jar"/>
<classpathentry combineaccessrules="false" kind="src" path="/api"/>
<classpathentry kind="output" path="bin"/>
</classpath>
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" path="src"/>
<classpathentry kind="src" path="test"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
<classpathentry combineaccessrules="false" kind="src" path="/utils"/>
<classpathentry kind="lib" path="/thirdparty/log4j-1.2.15.jar"/>
<classpathentry combineaccessrules="false" kind="src" path="/core"/>
<classpathentry kind="lib" path="/thirdparty/commons-httpclient-3.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/junit-4.8.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/xmlrpc-client-3.1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/xmlrpc-common-3.1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/libvirt-0.4.5.jar"/>
<classpathentry kind="output" path="bin"/>
</classpath>

View File

@ -83,7 +83,7 @@ try:
stderr(str(e))
bail(cloud_utils.E_SETUPFAILED,"Cloud Agent setup failed")
setup_agent_config(configfile)
setup_agent_config(configfile,brname)
stderr("Enabling and starting the Cloud Agent")
stop_service(servicename)
enable_service(servicename)

View File

@ -19,12 +19,10 @@ pod=default
zone=default
#private.network.device= the private nic device
# if this is commented, it is autodetected on service startup
# private.network.device=cloudbr0
private.network.device=cloudbr0
#public.network.device= the public nic device
# if this is commented, it is autodetected on service startup
# public.network.device=cloudbr0
public.network.device=cloudbr0
#guid= a GUID to identify the agent

View File

@ -23,20 +23,6 @@ cd "@AGENTLIBDIR@"
echo Current directory is "$PWD"
echo CLASSPATH to run the agent: "$CLASSPATH"
export PATH=/sbin:/usr/sbin:"$PATH"
SERVICEARGS=
for x in private public ; do
configuration=`grep "^$x.network.device" "@AGENTSYSCONFDIR@"/agent.properties || true`
if [ -n "$configuration" ] ; then
echo "Using manually-configured network device $configuration"
else
defaultroute=`ip route | grep ^default | cut -d ' ' -f 5`
test -n "$defaultroute"
echo "Using auto-discovered network device $defaultroute which is the default route"
SERVICEARGS="$SERVICEARGS -D$x.network.device="$defaultroute
fi
done
function termagent() {
if [ "$agentpid" != "" ] ; then
echo Killing VMOps Agent "(PID $agentpid)" with SIGTERM >&2
@ -52,7 +38,7 @@ function termagent() {
trap termagent TERM
while true ; do
java -Xms128M -Xmx384M -cp "$CLASSPATH" $SERVICEARGS "$@" com.cloud.agent.AgentShell &
java -Xms128M -Xmx384M -cp "$CLASSPATH" "$@" com.cloud.agent.AgentShell &
agentpid=$!
echo "Agent started. PID: $!" >&2
wait $agentpid

View File

@ -1,3 +1,3 @@
#!/usr/bin/env bash
#run.sh runs the agent client.
java $1 -Xms128M -Xmx384M -cp cglib-nodep-2.2.jar:xenserver-5.5.0-1.jar:trilead-ssh2-build213.jar:cloud-api.jar:cloud-core-extras.jar:cloud-utils.jar:cloud-agent.jar:cloud-console-proxy.jar:cloud-console-common.jar:freemarker.jar:log4j-1.2.15.jar:ws-commons-util-1.0.2.jar:xmlrpc-client-3.1.3.jar:cloud-core.jar:xmlrpc-common-3.1.3.jar:javaee-api-5.0-1.jar:gson-1.3.jar:commons-httpclient-3.1.jar:commons-logging-1.1.1.jar:commons-codec-1.4.jar:commons-collections-3.2.1.jar:commons-pool-1.4.jar:apache-log4j-extras-1.0.jar:libvirt-0.4.5.jar:jna.jar:.:/etc/cloud:./conf com.cloud.agent.AgentShell
java $1 -Xms128M -Xmx384M -cp cglib-nodep-2.2.jar:xenserver-5.5.0-1.jar:trilead-ssh2-build213.jar:cloud-core-extras.jar:cloud-utils.jar:cloud-agent.jar:cloud-console-proxy.jar:cloud-console-common.jar:freemarker.jar:log4j-1.2.15.jar:ws-commons-util-1.0.2.jar:xmlrpc-client-3.1.3.jar:cloud-core.jar:xmlrpc-common-3.1.3.jar:javaee-api-5.0-1.jar:gson-1.3.jar:commons-httpclient-3.1.jar:commons-logging-1.1.1.jar:commons-codec-1.4.jar:commons-collections-3.2.1.jar:commons-pool-1.4.jar:apache-log4j-extras-1.0.jar:libvirt-0.4.5.jar:jna.jar:.:/etc/cloud:./conf com.cloud.agent.AgentShell

View File

@ -157,8 +157,7 @@ public class Agent implements HandlerFactory, IAgentControl {
_shell.getPort(),
_shell.getWorkers(),
this);
// ((NioClient)_connection).setBindAddress(_shell.getPrivateIp());
((NioClient)_connection).setBindAddress(_shell.getPrivateIp());
s_logger.debug("Adding shutdown hook");
Runtime.getRuntime().addShutdownHook(new ShutdownThread(this));

View File

@ -55,7 +55,6 @@ import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.DomainInfo;
import org.libvirt.DomainInterfaceStats;
import org.libvirt.DomainSnapshot;
import org.libvirt.LibvirtException;
import org.libvirt.Network;
import org.libvirt.NodeInfo;
@ -69,20 +68,12 @@ import com.cloud.agent.api.Answer;
import com.cloud.agent.api.AttachIsoCommand;
import com.cloud.agent.api.AttachVolumeAnswer;
import com.cloud.agent.api.AttachVolumeCommand;
import com.cloud.agent.api.BackupSnapshotAnswer;
import com.cloud.agent.api.BackupSnapshotCommand;
import com.cloud.agent.api.CheckHealthAnswer;
import com.cloud.agent.api.CheckHealthCommand;
import com.cloud.agent.api.CheckStateCommand;
import com.cloud.agent.api.CheckVirtualMachineAnswer;
import com.cloud.agent.api.CheckVirtualMachineCommand;
import com.cloud.agent.api.Command;
import com.cloud.agent.api.CreatePrivateTemplateFromSnapshotCommand;
import com.cloud.agent.api.CreateVolumeFromSnapshotAnswer;
import com.cloud.agent.api.CreateVolumeFromSnapshotCommand;
import com.cloud.agent.api.DeleteSnapshotBackupAnswer;
import com.cloud.agent.api.DeleteSnapshotBackupCommand;
import com.cloud.agent.api.DeleteSnapshotsDirCommand;
import com.cloud.agent.api.DeleteStoragePoolCommand;
import com.cloud.agent.api.GetHostStatsAnswer;
import com.cloud.agent.api.GetHostStatsCommand;
@ -159,7 +150,6 @@ import com.cloud.hypervisor.Hypervisor;
import com.cloud.network.NetworkEnums.RouterPrivateIpStrategy;
import com.cloud.resource.ServerResource;
import com.cloud.resource.ServerResourceBase;
import com.cloud.storage.StorageLayer;
import com.cloud.storage.StoragePoolVO;
import com.cloud.storage.Volume;
import com.cloud.storage.VolumeVO;
@ -167,15 +157,9 @@ import com.cloud.storage.Storage.ImageFormat;
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.Volume.StorageResourceType;
import com.cloud.storage.Volume.VolumeType;
import com.cloud.storage.template.Processor;
import com.cloud.storage.template.QCOW2Processor;
import com.cloud.storage.template.TemplateInfo;
import com.cloud.storage.template.TemplateLocation;
import com.cloud.storage.template.Processor.FormatInfo;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.Pair;
import com.cloud.utils.PropertiesUtil;
import com.cloud.utils.component.ComponentLocator;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.net.NetUtils;
import com.cloud.utils.script.OutputInterpreter;
@ -214,8 +198,6 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
private String _versionstringpath;
private String _patchdomrPath;
private String _createvmPath;
private String _manageSnapshotPath;
private String _createTmplPath;
private String _host;
private String _dcId;
private String _pod;
@ -224,7 +206,6 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
private final String _SSHPRVKEYPATH = _SSHKEYSPATH + File.separator + "id_rsa.cloud";
private final String _SSHPUBKEYPATH = _SSHKEYSPATH + File.separator + "id_rsa.pub.cloud";
private final String _mountPoint = "/mnt";
StorageLayer _storage;
private static final class KeyValueInterpreter extends OutputInterpreter {
private final Map<String, String> map = new HashMap<String, String>();
@ -400,14 +381,6 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
" <target dev=''{1}'' bus=''scsi''/>" +
" </disk>");
protected static MessageFormat SnapshotXML = new MessageFormat(
" <domainsnapshot>" +
" <name>{0}</name>" +
" <domain>" +
" <uuid>{1}</uuid>" +
" </domain>" +
" </domainsnapshot>");
protected Connect _conn;
protected String _hypervisorType;
protected String _hypervisorURI;
@ -422,7 +395,6 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
protected String _domrRamdisk;
protected String _pool;
private boolean _can_bridge_firewall;
private Pair<String, String> _pifs;
private final Map<String, vmStats> _vmStats = new ConcurrentHashMap<String, vmStats>();
protected boolean _disconnected = true;
@ -588,16 +560,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
if (_createvmPath == null) {
throw new ConfigurationException("Unable to find the createvm.sh");
}
_manageSnapshotPath = Script.findScript(storageScriptsDir, "managesnapshot.sh");
if (_manageSnapshotPath == null) {
throw new ConfigurationException("Unable to find the managesnapshot.sh");
}
_createTmplPath = Script.findScript(storageScriptsDir, "createtmplt.sh");
if (_createTmplPath == null) {
throw new ConfigurationException("Unable to find the createtmplt.sh");
}
s_logger.info("createvm.sh found in " + _createvmPath);
String value = (String)params.get("developer");
boolean isDeveloper = Boolean.parseBoolean(value);
@ -719,15 +682,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
try {
Class<?> clazz = Class.forName("com.cloud.storage.JavaStorageLayer");
_storage = (StorageLayer)ComponentLocator.inject(clazz);
_storage.configure("StorageLayer", params);
} catch (ClassNotFoundException e) {
throw new ConfigurationException("Unable to find class " + "com.cloud.storage.JavaStorageLayer");
}
//_can_bridge_firewall = can_bridge_firewall();
_can_bridge_firewall = can_bridge_firewall();
Network vmopsNw = null;
try {
@ -763,34 +718,12 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
s_logger.info("Found private network " + _privNwName + " already defined");
}
_pifs = getPifs();
if (_pifs.first() == null) {
s_logger.debug("Failed to get private nic name");
throw new ConfigurationException("Failed to get private nic name");
}
if (_pifs.second() == null) {
s_logger.debug("Failed to get public nic name");
throw new ConfigurationException("Failed to get public nic name");
}
s_logger.debug("Found pif: " + _pifs.first() + " on " + _privBridgeName + ", pif: " + _pifs.second() + " on " + _publicBridgeName);
return true;
}
private Pair<String, String> getPifs() {
/*get pifs from bridge*/
String pubPif = null;
String privPif = null;
if (_publicBridgeName != null) {
pubPif = Script.runSimpleBashScript("ls /sys/class/net/" + _publicBridgeName + "/brif/ |egrep eth[0-9]+");
}
if (_privBridgeName != null) {
privPif = Script.runSimpleBashScript("ls /sys/class/net/" + _privBridgeName + "/brif/ |egrep eth[0-9]+");
}
return new Pair<String, String>(privPif, pubPif);
}
private String getVnetId(String vnetId) {
return vnetId;
String id = "0000" + vnetId;
return id.substring(id.length() - 4);
}
private void patchSystemVm(String cmdLine, String dataDiskPath, String vmName) throws InternalErrorException {
@ -914,7 +847,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
guestDef guest = new guestDef();
guest.setGuestType(guestDef.guestType.KVM);
guest.setGuestArch(arch);
guest.setMachineType("pc");
guest.setBootOrder(guestDef.bootOrder.CDROM);
guest.setBootOrder(guestDef.bootOrder.HARDISK);
@ -1057,6 +990,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
s_logger.info("Rule " + (created?" ":" not ") + " created");
test.stop();
}
@Override
@ -1119,16 +1053,6 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
return execute((GetStorageStatsCommand) cmd);
} else if (cmd instanceof ManageSnapshotCommand) {
return execute((ManageSnapshotCommand) cmd);
} else if (cmd instanceof BackupSnapshotCommand) {
return execute((BackupSnapshotCommand) cmd);
} else if (cmd instanceof DeleteSnapshotBackupCommand) {
return execute((DeleteSnapshotBackupCommand) cmd);
} else if (cmd instanceof DeleteSnapshotsDirCommand) {
return execute((DeleteSnapshotsDirCommand) cmd);
} else if (cmd instanceof CreateVolumeFromSnapshotCommand) {
return execute((CreateVolumeFromSnapshotCommand) cmd);
} else if (cmd instanceof CreatePrivateTemplateFromSnapshotCommand) {
return execute((CreatePrivateTemplateFromSnapshotCommand) cmd);
} else if (cmd instanceof ModifyStoragePoolCommand) {
return execute((ModifyStoragePoolCommand) cmd);
} else if (cmd instanceof NetworkIngressRulesCmd) {
@ -1222,127 +1146,10 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
}
protected ManageSnapshotAnswer execute(final ManageSnapshotCommand cmd) {
String snapshotName = cmd.getSnapshotName();
String VolPath = cmd.getVolumePath();
try {
StorageVol vol = getVolume(VolPath);
if (vol == null) {
return new ManageSnapshotAnswer(cmd, false, null);
}
Domain vm = getDomain(cmd.getVmName());
String vmUuid = vm.getUUIDString();
Object[] args = new Object[] {snapshotName, vmUuid};
String snapshot = SnapshotXML.format(args);
s_logger.debug(snapshot);
if (cmd.getCommandSwitch().equalsIgnoreCase(ManageSnapshotCommand.CREATE_SNAPSHOT)) {
vm.snapshotCreateXML(snapshot);
} else {
DomainSnapshot snap = vm.snapshotLookupByName(snapshotName);
snap.delete(0);
}
} catch (LibvirtException e) {
s_logger.debug("Failed to manage snapshot: " + e.toString());
return new ManageSnapshotAnswer(cmd, false, "Failed to manage snapshot: " + e.toString());
}
/*TODO: no snapshot support for KVM right now, but create_private_template needs us to return true here*/
return new ManageSnapshotAnswer(cmd, cmd.getSnapshotId(), cmd.getVolumePath(), true, null);
}
protected BackupSnapshotAnswer execute(final BackupSnapshotCommand cmd) {
Long dcId = cmd.getDataCenterId();
Long accountId = cmd.getAccountId();
Long volumeId = cmd.getVolumeId();
String secondaryStoragePoolURL = cmd.getSecondaryStoragePoolURL();
String snapshotName = cmd.getSnapshotName();
String snapshotPath = cmd.getSnapshotUuid();
String snapshotDestPath = null;
try {
StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(secondaryStoragePoolURL));
String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-b", snapshotPath);
command.add("-n", snapshotName);
command.add("-p", snapshotDestPath);
String result = command.execute();
if (result != null) {
s_logger.debug("Failed to backup snaptshot: " + result);
return new BackupSnapshotAnswer(cmd, false, result, null);
}
} catch (LibvirtException e) {
return new BackupSnapshotAnswer(cmd, false, e.toString(), null);
} catch (URISyntaxException e) {
return new BackupSnapshotAnswer(cmd, false, e.toString(), null);
}
return new BackupSnapshotAnswer(cmd, true, null, snapshotDestPath + File.separator + snapshotName);
}
protected DeleteSnapshotBackupAnswer execute(final DeleteSnapshotBackupCommand cmd) {
return new DeleteSnapshotBackupAnswer(cmd, true, null);
}
protected Answer execute(DeleteSnapshotsDirCommand cmd) {
return new Answer(cmd, true, null);
}
protected CreateVolumeFromSnapshotAnswer execute(final CreateVolumeFromSnapshotCommand cmd) {
String snapshotPath = cmd.getSnapshotUuid();
String primaryUuid = cmd.getPrimaryStoragePoolNameLabel();
String primaryPath = _mountPoint + File.separator + primaryUuid;
String volUuid = UUID.randomUUID().toString();
String volPath = primaryPath + File.separator + volUuid;
String result = Script.runSimpleBashScript("cp " + snapshotPath + " " + volPath);
if (result != null) {
return new CreateVolumeFromSnapshotAnswer(cmd, false, result, null);
}
return new CreateVolumeFromSnapshotAnswer(cmd, true, "", volPath);
}
protected CreatePrivateTemplateAnswer execute(final CreatePrivateTemplateFromSnapshotCommand cmd) {
String orignalTmplPath = cmd.getOrigTemplateInstallPath();
String templateFolder = cmd.getAccountId() + File.separator + cmd.getNewTemplateId();
String templateInstallFolder = "template/tmpl/" + templateFolder;
String snapshotPath = cmd.getSnapshotUuid();
String tmplName = UUID.randomUUID().toString();
String tmplFileName = tmplName + ".qcow2";
StoragePool secondaryPool;
try {
secondaryPool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
/*TODO: assuming all the storage pools mounted under _mountPoint, the mount point should be got from pool.dumpxml*/
String templatePath = _mountPoint + File.separator + secondaryPool.getUUIDString() + File.separator + templateInstallFolder;
String tmplPath = templateInstallFolder + File.separator + tmplFileName;
Script command = new Script(_createTmplPath, _timeout, s_logger);
command.add("-t", templatePath);
command.add("-n", tmplFileName);
command.add("-f", snapshotPath);
String result = command.execute();
Map<String, Object> params = new HashMap<String, Object>();
params.put(StorageLayer.InstanceConfigKey, _storage);
Processor qcow2Processor = new QCOW2Processor();
qcow2Processor.configure("QCOW2 Processor", params);
FormatInfo info = qcow2Processor.process(templatePath, null, tmplName);
TemplateLocation loc = new TemplateLocation(_storage, templatePath);
loc.create(1, true, tmplName);
loc.addFormat(info);
loc.save();
return new CreatePrivateTemplateAnswer(cmd, true, "", tmplPath, info.virtualSize, tmplName, info.format);
} catch (LibvirtException e) {
return new CreatePrivateTemplateAnswer(cmd, false, e.getMessage(), null, 0, null, null);
} catch (URISyntaxException e) {
return new CreatePrivateTemplateAnswer(cmd, false, e.getMessage(), null, 0, null, null);
} catch (ConfigurationException e) {
return new CreatePrivateTemplateAnswer(cmd, false, e.getMessage(), null, 0, null, null);
} catch (InternalErrorException e) {
return new CreatePrivateTemplateAnswer(cmd, false, e.getMessage(), null, 0, null, null);
} catch (IOException e) {
return new CreatePrivateTemplateAnswer(cmd, false, e.getMessage(), null, 0, null, null);
}
}
protected GetStorageStatsAnswer execute(final GetStorageStatsCommand cmd) {
StoragePool sp = null;
StoragePoolInfo spi = null;
@ -1483,6 +1290,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
if (primaryPool == null) {
return new Answer(cmd, false, " Can't find primary storage pool");
}
LibvirtStorageVolumeDef vol = new LibvirtStorageVolumeDef(UUID.randomUUID().toString(), tmplVol.getInfo().capacity, volFormat.QCOW2, null, null);
s_logger.debug(vol.toString());
primaryVol = copyVolume(primaryPool, vol, tmplVol);
@ -1908,7 +1716,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
final String vnet = getVnetId(cmd.getVnet());
if (vnet != null) {
try {
createVnet(vnet, _pifs.first()); /*TODO: Need to add public network for domR*/
createVnet(vnet);
} catch (InternalErrorException e) {
return new PrepareForMigrationAnswer(cmd, false, result);
}
@ -1922,11 +1730,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
return new PrepareForMigrationAnswer(cmd, result == null, result);
}
public void createVnet(String vnetId, String pif) throws InternalErrorException {
final Script command = new Script(_modifyVlanPath, _timeout, s_logger);
public void createVnet(String vnetId) throws InternalErrorException {
final Script command = new Script(_createvnetPath, _timeout, s_logger);
command.add("-v", vnetId);
command.add("-p", pif);
command.add("-o", "add");
final String result = command.execute();
if (result != null) {
@ -2263,12 +2069,12 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
isoPath = isoVol.getPath();
diskDef iso = new diskDef();
iso.defFileBasedDisk(isoPath, "hdc", diskDef.diskBus.IDE, diskDef.diskFmtType.RAW);
iso.defFileBasedDisk(isoPath, "hdc", diskDef.diskBus.IDE);
iso.setDeviceType(diskDef.deviceType.CDROM);
isoXml = iso.toString();
} else {
diskDef iso = new diskDef();
iso.defFileBasedDisk(null, "hdc", diskDef.diskBus.IDE, diskDef.diskFmtType.RAW);
iso.defFileBasedDisk(null, "hdc", diskDef.diskBus.IDE);
iso.setDeviceType(diskDef.deviceType.CDROM);
isoXml = iso.toString();
}
@ -2320,9 +2126,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
diskDef disk = new diskDef();
String guestOSType = getGuestType(vmName);
if (isGuestPVEnabled(guestOSType)) {
disk.defFileBasedDisk(sourceFile, diskDev, diskDef.diskBus.VIRTIO, diskDef.diskFmtType.QCOW2);
disk.defFileBasedDisk(sourceFile, diskDev, diskDef.diskBus.VIRTIO);
} else {
disk.defFileBasedDisk(sourceFile, diskDev, diskDef.diskBus.SCSI, diskDef.diskFmtType.QCOW2);
disk.defFileBasedDisk(sourceFile, diskDev, diskDef.diskBus.SCSI);
}
String xml = disk.toString();
return attachOrDetachDevice(attach, vmName, xml);
@ -2822,8 +2628,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
}
final Script command = new Script(_modifyVlanPath, _timeout, s_logger);
command.add("-o", "delete");
final Script command = new Script(_vnetcleanupPath, _timeout, s_logger);
command.add("-v", vnetId);
return command.execute();
}
@ -2887,21 +2692,16 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
private String getHypervisorPath() {
File f =new File("/usr/bin/cloud-qemu-kvm");
if (f.exists()) {
return "/usr/bin/cloud-qemu-kvm";
} else {
if (_conn == null)
return null;
if (_conn == null)
return null;
LibvirtCapXMLParser parser = new LibvirtCapXMLParser();
try {
parser.parseCapabilitiesXML(_conn.getCapabilities());
} catch (LibvirtException e) {
LibvirtCapXMLParser parser = new LibvirtCapXMLParser();
try {
parser.parseCapabilitiesXML(_conn.getCapabilities());
} catch (LibvirtException e) {
}
return parser.getEmulator();
}
return parser.getEmulator();
}
private String getGuestType(String vmName) {
@ -2992,10 +2792,10 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
private String setVnetBrName(String vnetId) {
return "cloudVirBr" + vnetId;
return "vnbr" + vnetId;
}
private String getVnetIdFromBrName(String vnetBrName) {
return vnetBrName.replaceAll("cloudVirBr", "");
return vnetBrName.replaceAll("vnbr", "");
}
private List<interfaceDef> createUserVMNetworks(StartCommand cmd) throws InternalErrorException {
List<interfaceDef> nics = new ArrayList<interfaceDef>();
@ -3015,8 +2815,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
/*guest network is vnet*/
String vnetId = getVnetId(cmd.getGuestNetworkId());
brName = setVnetBrName(vnetId);
createVnet(vnetId, _pifs.first());
pubNic.setHostNetType(hostNicType.VLAN);
createVnet(vnetId);
pubNic.setHostNetType(hostNicType.VNET);
}
pubNic.defBridgeNet(brName, null, guestMac, nicModel);
nics.add(pubNic);
@ -3033,35 +2833,33 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
interfaceDef pubNic = new interfaceDef();
interfaceDef privNic = new interfaceDef();
interfaceDef vnetNic = new interfaceDef();
/*nic 0, guest network*/
if ("untagged".equalsIgnoreCase(router.getVnet())){
/*guest network is direct attached with domr DHCP server*/
/*0 is on private nic, 1 is link local*/
vnetNic.defBridgeNet(_privBridgeName, null, guestMac, interfaceDef.nicModel.VIRTIO);
vnetNic.setHostNetType(hostNicType.DIRECT_ATTACHED_WITH_DHCP);
nics.add(vnetNic);
privNic.defPrivateNet(_privNwName, null, privateMac, interfaceDef.nicModel.VIRTIO);
privNic.setHostNetType(hostNicType.DIRECT_ATTACHED_WITH_DHCP);
nics.add(privNic);
} else {
/*guest network is vnet: 0 is vnet, 1 is link local, 2 is pub nic*/
String vnetId = getVnetId(router.getVnet());
brName = setVnetBrName(vnetId);
String vnetDev = "vtap" + vnetId;
createVnet(vnetId, _pifs.first());
createVnet(vnetId);
vnetNic.defBridgeNet(brName, vnetDev, guestMac, interfaceDef.nicModel.VIRTIO);
vnetNic.setHostNetType(hostNicType.VNET);
nics.add(vnetNic);
privNic.defPrivateNet(_privNwName, null, privateMac, interfaceDef.nicModel.VIRTIO);
nics.add(privNic);
String pubDev = "tap" + vnetId;
pubNic.defBridgeNet(_publicBridgeName, pubDev, pubMac, interfaceDef.nicModel.VIRTIO);
nics.add(pubNic);
}
nics.add(vnetNic);
/*nic 1: link local*/
privNic.defPrivateNet(_privNwName, null, privateMac, interfaceDef.nicModel.VIRTIO);
nics.add(privNic);
/*nic 2: public */
if ("untagged".equalsIgnoreCase(router.getVlanId())) {
pubNic.defBridgeNet(_publicBridgeName, null, pubMac, interfaceDef.nicModel.VIRTIO);
} else {
String vnetId = getVnetId(router.getVlanId());
brName = setVnetBrName(vnetId);
String vnetDev = "vtap" + vnetId;
createVnet(vnetId, _pifs.second());
pubNic.defBridgeNet(brName, vnetDev, pubMac, interfaceDef.nicModel.VIRTIO);
}
nics.add(pubNic);
return nics;
}
@ -3078,7 +2876,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
vnetNic.defPrivateNet("default", null, null, interfaceDef.nicModel.VIRTIO);
nics.add(vnetNic);
privNic.defPrivateNet(_privNwName, null, privateMac, interfaceDef.nicModel.VIRTIO);
privNic.defPrivateNet(_linkLocalBridgeName, null, privateMac, interfaceDef.nicModel.VIRTIO);
nics.add(privNic);
pubNic.defBridgeNet(_publicBridgeName, null, pubMac, interfaceDef.nicModel.VIRTIO);
@ -3111,11 +2909,11 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
String datadiskPath = tmplVol.getKey();
diskDef hda = new diskDef();
hda.defFileBasedDisk(rootkPath, "vda", diskDef.diskBus.IDE, diskDef.diskFmtType.QCOW2);
hda.defFileBasedDisk(rootkPath, "hda", diskDef.diskBus.IDE);
disks.add(hda);
diskDef hdb = new diskDef();
hdb.defFileBasedDisk(datadiskPath, "vdb", diskDef.diskBus.IDE, diskDef.diskFmtType.QCOW2);
hdb.defFileBasedDisk(datadiskPath, "hdb", diskDef.diskBus.IDE);
disks.add(hdb);
return disks;
@ -3150,13 +2948,13 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
diskDef hda = new diskDef();
hda.defFileBasedDisk(rootVolume.getPath(), "vda", diskBusType, diskDef.diskFmtType.QCOW2);
hda.defFileBasedDisk(rootVolume.getPath(), "hda", diskBusType);
disks.add(hda);
/*Centos doesn't support scsi hotplug. For other host OSes, we attach the disk after the vm is running, so that we can hotplug it.*/
if (dataVolume != null) {
diskDef hdb = new diskDef();
hdb.defFileBasedDisk(dataVolume.getPath(), "vdb", diskBusType, diskDef.diskFmtType.QCOW2);
hdb.defFileBasedDisk(dataVolume.getPath(), "hdb", diskBusType);
if (!isCentosHost()) {
hdb.setAttachDeferred(true);
}
@ -3165,7 +2963,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
if (isoPath != null) {
diskDef hdc = new diskDef();
hdc.defFileBasedDisk(isoPath, "hdc", diskDef.diskBus.IDE, diskDef.diskFmtType.RAW);
hdc.defFileBasedDisk(isoPath, "hdc", diskDef.diskBus.IDE);
hdc.setDeviceType(diskDef.deviceType.CDROM);
disks.add(hdc);
}
@ -3469,9 +3267,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
return vol;
}
private StorageVol getVolume(String volKey) throws LibvirtException{
private StorageVol getVolume(String volKey) {
StorageVol vol = null;
try {
vol = _conn.storageVolLookupByKey(volKey);
} catch (LibvirtException e) {
@ -3479,16 +3276,31 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
if (vol == null) {
StoragePool pool = null;
String token[] = volKey.split("/");
if (token.length <= 2) {
s_logger.debug("what the heck of volkey: " + volKey);
try {
String token[] = volKey.split("/");
if (token.length <= 2) {
s_logger.debug("what the heck of volkey: " + volKey);
return null;
}
String poolUUID = token[token.length - 2];
pool = _conn.storagePoolLookupByUUIDString(poolUUID);
} catch (LibvirtException e) {
s_logger.debug("Failed to get pool, with volKey: " + volKey + "due to" + e.toString());
return null;
}
String poolUUID = token[token.length - 2];
pool = _conn.storagePoolLookupByUUIDString(poolUUID);
pool.refresh(0);
vol = _conn.storageVolLookupByKey(volKey);
try {
pool.refresh(0);
} catch (LibvirtException e) {
}
try {
vol = _conn.storageVolLookupByKey(volKey);
} catch (LibvirtException e) {
}
}
return vol;
}

View File

@ -248,37 +248,23 @@ public class LibvirtVMDef {
return _bus;
}
}
enum diskFmtType {
RAW("raw"),
QCOW2("qcow2");
String _fmtType;
diskFmtType(String fmt) {
_fmtType = fmt;
}
@Override
public String toString() {
return _fmtType;
}
}
private deviceType _deviceType; /*floppy, disk, cdrom*/
private diskType _diskType;
private String _sourcePath;
private String _diskLabel;
private diskBus _bus;
private diskFmtType _diskFmtType; /*qcow2, raw etc.*/
private boolean _readonly = false;
private boolean _shareable = false;
private boolean _deferAttach = false;
public void setDeviceType(deviceType deviceType) {
_deviceType = deviceType;
}
public void defFileBasedDisk(String filePath, String diskLabel, diskBus bus, diskFmtType diskFmtType) {
public void defFileBasedDisk(String filePath, String diskLabel, diskBus bus) {
_diskType = diskType.FILE;
_deviceType = deviceType.DISK;
_sourcePath = filePath;
_diskLabel = diskLabel;
_diskFmtType = diskFmtType;
_bus = bus;
}
@ -316,7 +302,6 @@ public class LibvirtVMDef {
}
diskBuilder.append(" type='" + _diskType + "'");
diskBuilder.append(">\n");
diskBuilder.append("<driver name='qemu'" + " type='" + _diskFmtType + "'/>\n");
if (_diskType == diskType.FILE) {
diskBuilder.append("<source ");
if (_sourcePath != null) {
@ -611,14 +596,14 @@ public class LibvirtVMDef {
vm.addComp(term);
devicesDef devices = new devicesDef();
devices.setEmulatorPath("/usr/bin/cloud-qemu-system-x86_64");
devices.setEmulatorPath("/usr/bin/qemu-kvm");
diskDef hda = new diskDef();
hda.defFileBasedDisk("/path/to/hda1", "hda", diskDef.diskBus.IDE, diskDef.diskFmtType.QCOW2);
hda.defFileBasedDisk("/path/to/hda1", "hda", diskDef.diskBus.IDE);
devices.addDevice(hda);
diskDef hdb = new diskDef();
hdb.defFileBasedDisk("/path/to/hda2", "hdb", diskDef.diskBus.IDE, diskDef.diskFmtType.QCOW2);
hdb.defFileBasedDisk("/path/to/hda2", "hdb", diskDef.diskBus.IDE);
devices.addDevice(hdb);
interfaceDef pubNic = new interfaceDef();

View File

@ -10,15 +10,7 @@
</description>
<dirname property="base.dir" file="${ant.file.Cloud.com Cloud Stack Build Dispatch}"/>
<condition property="build.dir" value="${base.dir}/build" else="${base.dir}/build"> <!-- silly no-op -->
<and>
<available file="cloudstack-proprietary/build/build-cloud-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<property name="build.dir" location="${base.dir}/build"/>
<condition property="build-cloud.properties.file" value="${build.dir}/override/build-cloud.properties" else="${build.dir}/build-cloud.properties">
<available file="${build.dir}/override/build-cloud.properties" />
@ -29,57 +21,57 @@
<property name="dist.dir" location="${base.dir}/dist"/>
<property name="target.dir" location="${base.dir}/target"/>
<condition property="build.file" value="cloudstack-proprietary/build/build-cloud-premium.xml" else="${build.dir}/build-cloud.xml">
<condition property="build.file" value="premium/build-cloud-premium.xml" else="build-cloud.xml">
<and>
<available file="cloudstack-proprietary/build/build-cloud-premium.xml"/>
<available file="build/premium/build-cloud-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<condition property="package.file" value="cloudstack-proprietary/build/package-premium.xml" else="${build.dir}/package.xml">
<condition property="package.file" value="premium/package-premium.xml" else="package.xml">
<and>
<available file="cloudstack-proprietary/build/package-premium.xml"/>
<available file="build/premium/package-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<condition property="developer.file" value="cloudstack-proprietary/build/developer-premium.xml" else="${build.dir}/developer.xml">
<condition property="developer.file" value="premium/developer-premium.xml" else="developer.xml">
<and>
<available file="cloudstack-proprietary/build/developer-premium.xml"/>
<available file="build/premium/developer-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<condition property="docs.file" value="cloudstack-proprietary/build/build-docs-premium.xml" else="${build.dir}/build-docs.xml">
<condition property="docs.file" value="premium/build-docs-premium.xml" else="build-docs.xml">
<and>
<available file="cloudstack-proprietary/build/build-docs-premium.xml"/>
<available file="build/premium/build-docs-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<condition property="test.file" value="cloudstack-proprietary/build/build-tests-premium.xml" else="${build.dir}/build-tests.xml">
<condition property="test.file" value="premium/build-tests-premium.xml" else="build-tests.xml">
<and>
<available file="cloudstack-proprietary/build/build-tests-premium.xml"/>
<available file="build/premium/build-tests-premium.xml"/>
<not>
<isset property="OSS"/>
</not>
</and>
</condition>
<import file="${base.dir}/cloudstack-proprietary/plugins/zynga/build.xml" optional='true'/>
<import file="${build.file}" optional="false"/>
<import file="${docs.file}" optional="true"/>
<import file="${test.file}" optional="true"/>
<import file="${package.file}" optional="true"/>
<import file="${developer.file}" optional="true"/>
<import file="${base.dir}/plugins/zynga/build.xml" optional='true'/>
<import file="${build.dir}/${build.file}" optional="false"/>
<import file="${build.dir}/${docs.file}" optional="true"/>
<import file="${build.dir}/${test.file}" optional="true"/>
<import file="${build.dir}/${package.file}" optional="true"/>
<import file="${build.dir}/${developer.file}" optional="true"/>
</project>

View File

@ -60,9 +60,7 @@
<property name="dep.cache.dir" location="${target.dir}/dep-cache" />
<property name="build.log" location="${target.dir}/ant_verbose.txt" />
<property name="proprietary.dir" location="${base.dir}/cloudstack-proprietary" />
<property name="thirdparty.dir" location="${proprietary.dir}/thirdparty" />
<property name="thirdparty.dir" location="${base.dir}/thirdparty" />
<property name="deps.dir" location="${base.dir}/deps" />
<!-- directories for client compilation-->
@ -119,7 +117,6 @@
<property name="agent.jar" value="cloud-agent.jar" />
<property name="console-common.jar" value="cloud-console-common.jar" />
<property name="console-proxy.jar" value="cloud-console-proxy.jar" />
<property name="api.jar" value="cloud-api.jar"/>
<!--
Import information about the build version and company information
@ -166,21 +163,11 @@
<compile-java jar.name="${utils.jar}" top.dir="${utils.dir}" classpath="utils.classpath" />
</target>
<property name="api.dir" location="${base.dir}/api" />
<property name="api.test.dir" location="${api.dir}/test/" />
<path id="api.classpath">
<path refid="thirdparty.classpath" />
<path refid="dist.classpath"/>
</path>
<target name="compile-api" depends="-init, compile-utils" description="Compile the utilities jar that is shared.">
<compile-java jar.name="${api.jar}" top.dir="${api.dir}" classpath="api.classpath" />
</target>
<path id="core.classpath">
<path refid="thirdparty.classpath" />
<path refid="dist.classpath" />
</path>
<target name="compile-core" depends="-init, compile-utils, compile-api" description="Compile the core business logic.">
<target name="compile-core" depends="-init, compile-utils" description="Compile the core business logic.">
<compile-java jar.name="${core.jar}" top.dir="${core.dir}" classpath="core.classpath" />
</target>
@ -222,7 +209,6 @@
<fileset dir="${ui.user.dir}">
<include name="**/*.html" />
<include name="**/*.js"/>
<include name="**/*.jsp"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
</fileset>
@ -235,7 +221,6 @@
<include name="**/*"/>
<exclude name="**/*.html" />
<exclude name="**/*.js"/>
<exclude name="**/*.jsp"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
</fileset>
@ -417,10 +402,9 @@
<fileset dir="${target.dir}">
<include name="**/${core.jar}" />
<include name="**/${utils.jar}" />
<include name="**/${api.jar}"/>
</fileset>
</path>
<target name="compile-agent" depends="-init, compile-utils, compile-core, compile-api" description="Compile the management agent.">
<target name="compile-agent" depends="-init, compile-utils, compile-core" description="Compile the management agent.">
<compile-java jar.name="${agent.jar}" top.dir="${agent.dir}" classpath="agent.classpath" />
</target>

View File

@ -1,3 +1,3 @@
#Build Number for ANT. Do not edit!
#Sat Aug 07 12:54:57 PDT 2010
build.number=927
#Wed Jul 21 16:28:37 PDT 2010
build.number=5

Binary file not shown. (Before: 1.4 KiB)

Binary file not shown. (Before: 5.2 KiB)

Binary file not shown. (Before: 3.0 KiB)

Binary file not shown. (Before: 2.7 KiB)

Binary file not shown. (Before: 2.8 KiB)

View File

@ -1,103 +0,0 @@
#!/usr/bin/env bash
# deploy-db.sh -- deploys the database configuration.
# set -x
if [ "$1" == "" ]; then
printf "Usage: %s [path to additional sql] [root password]\n" $(basename $0) >&2
exit 1;
fi
if [ ! -f $1 ]; then
echo "Error: Unable to find $1"
exit 2
fi
if [ "$2" != "" ]; then
if [ ! -f $2 ]; then
echo "Error: Unable to find $2"
exit 3
fi
fi
if [ ! -f create-database.sql ]; then
printf "Error: Unable to find create-database.sql\n"
exit 4
fi
if [ ! -f create-schema.sql ]; then
printf "Error: Unable to find create-schema.sql\n"
exit 5
fi
if [ ! -f create-index-fk.sql ]; then
printf "Error: Unable to find create-index-fk.sql\n"
exit 6;
fi
PATHSEP=':'
if [[ $OSTYPE == "cygwin" ]] ; then
export CATALINA_HOME=`cygpath -m $CATALINA_HOME`
PATHSEP=';'
else
mysql="mysql"
service mysql status > /dev/null 2>/dev/null
if [ $? -eq 1 ]; then
mysql="mysqld"
service mysqld status > /dev/null 2>/dev/null
if [ $? -ne 0 ]; then
printf "Unable to find mysql daemon\n"
exit 7
fi
fi
echo "Starting mysql"
service $mysql start > /dev/null 2>/dev/null
fi
echo "Recreating Database."
mysql --user=root --password=$3 < create-database.sql > /dev/null 2>/dev/null
mysqlout=$?
if [ $mysqlout -eq 1 ]; then
printf "Please enter root password for MySQL.\n"
mysql --user=root --password < create-database.sql
if [ $? -ne 0 ]; then
printf "Error: Cannot execute create-database.sql\n"
exit 10
fi
elif [ $mysqlout -ne 0 ]; then
printf "Error: Cannot execute create-database.sql\n"
exit 11
fi
mysql --user=cloud --password=cloud cloud < create-schema.sql
if [ $? -ne 0 ]; then
printf "Error: Cannot execute create-schema.sql\n"
exit 11
fi
if [ "$1" != "" ]; then
mysql --user=cloud --password=cloud cloud < $1
if [ $? -ne 0 ]; then
printf "Error: Cannot execute $1\n"
exit 12
fi
fi
if [ "$2" != "" ]; then
echo "Adding Templates"
mysql --user=cloud --password=cloud cloud < $2
if [ $? -ne 0 ]; then
printf "Error: Cannot execute $2\n"
exit 12
fi
fi
echo "Creating Indice and Foreign Keys"
mysql --user=cloud --password=cloud cloud < create-index-fk.sql
if [ $? -ne 0 ]; then
printf "Error: Cannot execute create-index-fk.sql\n"
exit 13
fi

View File

@ -1,7 +0,0 @@
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
log4j.appender.stdout.threshold=ERROR
log4j.rootLogger=INFO, stdout
log4j.category.org.apache=INFO, stdout

View File

@ -1,217 +0,0 @@
#!/usr/bin/env bash
# install.sh -- installs an agent
#
#
usage() {
printf "Usage: %s: -d [directory to deploy to] -t [routing|storage|computing] -z [zip file] -h [host] -p [pod] -c [data center] -m [expert|novice|setup]\n" $(basename $0) >&2
}
mode=
host=
pod=
zone=
deploydir=
confdir=
zipfile=
typ=
#set -x
while getopts 'd:z:t:x:m:h:p:c:' OPTION
do
case "$OPTION" in
d) deploydir="$OPTARG"
;;
z) zipfile="$OPTARG"
;;
t) typ="$OPTARG"
;;
m) mode="$OPTARG"
;;
h) host="$OPTARG"
;;
p) pod="$OPTARG"
;;
c) zone="$OPTARG"
;;
?) usage
exit 2
;;
esac
done
printf "NOTE: You must have root privileges to install and run this program.\n"
if [ "$typ" == "" ]; then
if [ "$mode" != "expert" ]
then
printf "Type of agent to install [routing|computing|storage]: "
read typ
fi
fi
if [ "$typ" != "computing" ] && [ "$typ" != "routing" ] && [ "$typ" != "storage" ]
then
printf "ERROR: The choices are computing, routing, or storage.\n"
exit 4
fi
if [ "$host" == "" ]; then
if [ "$mode" != "expert" ]
then
printf "Host name or ip address of management server [Required]: "
read host
if [ "$host" == "" ]; then
printf "ERROR: Host is required\n"
exit 23;
fi
fi
fi
port=
if [ "$mode" != "expert" ]
then
printf "Port number of management server [defaults to 8250]: "
read port
fi
if [ "$port" == "" ]
then
port=8250
fi
if [ "$zone" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Availability Zone [Required]: "
read zone
if [ "$zone" == "" ]; then
printf "ERROR: Zone is required\n";
exit 21;
fi
fi
fi
if [ "$pod" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Pod [Required]: "
read pod
if [ "$pod" == "" ]; then
printf "ERROR: Pod is required\n";
exit 22;
fi
fi
fi
workers=
if [ "$mode" != "expert" ]; then
printf "# of workers to start [defaults to 3]: "
read workers
fi
if [ "$workers" == "" ]; then
workers=3
fi
if [ "$deploydir" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Directory to deploy to [defaults to /usr/local/vmops/agent]: "
read deploydir
fi
if [ "$deploydir" == "" ]; then
deploydir="/usr/local/vmops/agent"
fi
fi
if ! mkdir -p $deploydir
then
printf "ERROR: Unable to create $deploydir\n"
exit 5
fi
if [ "$zipfile" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Path of the zip file [defaults to agent.zip]: "
read zipfile
fi
if [ "$zipfile" == "" ]; then
zipfile="agent.zip"
fi
fi
if ! unzip -o $zipfile -d $deploydir
then
printf "ERROR: Unable to unzip $zipfile to $deploydir\n"
exit 6
fi
#if ! chmod -R +x $deploydir/scripts/*.sh
#then
# printf "ERROR: Unable to change scripts to executable.\n"
# exit 7
#fi
#if ! chmod -R +x $deploydir/scripts/iscsi/*.sh
#then
# printf "ERROR: Unable to change scripts to executable.\n"
# exit 8
#fi
#if ! chmod -R +x $deploydir/*.sh
#then
# printf "ERROR: Unable to change scripts to executable.\n"
# exit 9
#fi
if [ "$mode" == "setup" ]; then
mode="expert"
deploydir="/usr/local/vmops/agent"
confdir="/etc/vmops"
/bin/cp -f $deploydir/conf/agent.properties $confdir/agent.properties
if [ $? -gt 0 ]; then
printf "ERROR: Failed to copy the agent.properties file into the right place."
exit 10;
fi
else
confdir="$deploydir/conf"
fi
if [ "$typ" != "" ]; then
sed s/@TYPE@/"$typ"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Type is not set\n"
fi
if [ "$host" != "" ]; then
sed s/@HOST@/"$host"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: host is not set\n"
fi
if [ "$port" != "" ]; then
sed s/@PORT@/"$port"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Port is not set\n"
fi
if [ "$pod" != "" ]; then
sed s/@POD@/"$pod"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Pod is not set\n"
fi
if [ "$zone" != "" ]; then
sed s/@ZONE@/"$zone"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Zone is not set\n"
fi
if [ "$workers" != "" ]; then
sed s/@WORKERS@/"$workers"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Workers is not set\n"
fi
printf "SUCCESS: Installation is now complete. If you like to make changes, edit $confdir/agent.properties\n"
exit 0

View File

@ -1,73 +0,0 @@
#!/usr/bin/env bash
# Deploy console proxy package to an existing VM template
#
usage() {
printf "Usage: %s: -d [work directory to deploy to] -z [zip file]" $(basename $0) >&2
}
deploydir=
zipfile=
#set -x
while getopts 'd:z:' OPTION
do
case "$OPTION" in
d) deploydir="$OPTARG"
;;
z) zipfile="$OPTARG"
;;
?) usage
exit 2
;;
esac
done
printf "NOTE: You must have root privileges to install and run this program.\n"
if [ "$deploydir" == "" ]; then
printf "ERROR: Unable to find deployment work directory $deploydir\n"
exit 3;
fi
if [ ! -f $deploydir/consoleproxy.tar.gz ]
then
printf "ERROR: Unable to find existing console proxy template file (consoleproxy.tar.gz) to work on at $deploydir\n"
exit 5
fi
if [ "$zipfile" == "" ]; then
zipfile="console-proxy.zip"
fi
if ! mkdir -p /mnt/consoleproxy
then
printf "ERROR: Unable to create /mnt/consoleproxy for mounting template image\n"
exit 5
fi
tar xvfz $deploydir/consoleproxy.tar.gz -C $deploydir
mount -o loop $deploydir/vmi-root-fc8-x86_64-domP /mnt/consoleproxy
if ! unzip -o $zipfile -d /mnt/consoleproxy/usr/local/vmops/consoleproxy
then
printf "ERROR: Unable to unzip $zipfile to $deploydir\n"
exit 6
fi
umount /mnt/consoleproxy
pushd $deploydir
tar cvf consoleproxy.tar vmi-root-fc8-x86_64-domP
mv -f consoleproxy.tar.gz consoleproxy.tar.gz.old
gzip consoleproxy.tar
popd
if [ ! -f $deploydir/consoleproxy.tar.gz ]
then
mv consoleproxy.tar.gz.old consoleproxy.tar.gz
printf "ERROR: failed to deploy and recreate the template at $deploydir\n"
fi
printf "SUCCESS: Installation is now complete. please go to $deploydir to review it\n"
exit 0

View File

@ -1,106 +0,0 @@
#!/usr/bin/env bash
# deploy.sh -- deploys a management server
#
#
usage() {
printf "Usage: %s: -d [tomcat directory to deploy to] -z [zip file to use]\n" $(basename $0) >&2
}
dflag=
zflag=
tflag=
iflag=
deploydir=
zipfile="client.zip"
typ=
#set -x
while getopts 'd:z:x:h:' OPTION
do
case "$OPTION" in
d) dflag=1
deploydir="$OPTARG"
;;
z) zflag=1
zipfile="$OPTARG"
;;
h) iflag="$OPTARG"
;;
?) usage
exit 2
;;
esac
done
if [ "$deploydir" == "" ]
then
if [ "$CATALINA_HOME" == "" ]
then
printf "Tomcat Directory to deploy to: "
read deploydir
else
deploydir="$CATALINA_HOME"
fi
fi
if [ "$deploydir" == "" ]
then
printf "Tomcat directory was not specified\n";
exit 15;
fi
printf "Check to see if the Tomcat directory exist: $deploydir\n"
if [ ! -d $deploydir ]
then
printf "Tomcat directory does not exist\n";
exit 16;
fi
if [ "$zipfile" == "" ]
then
printf "Path of the zip file [defaults to client.zip]: "
read zipfile
if [ "$zipfile" == "" ]
then
zipfile="client.zip"
fi
fi
if ! unzip -o $zipfile client.war
then
exit 6
fi
rm -fr $deploydir/webapps/client
if ! unzip -o ./client.war -d $deploydir/webapps/client
then
exit 10;
fi
rm -f ./client.war
if ! unzip -o $zipfile lib/* -d $deploydir
then
exit 11;
fi
if ! unzip -o $zipfile conf/* -d $deploydir
then
exit 12;
fi
if ! unzip -o $zipfile bin/* -d $deploydir
then
exit 13;
fi
printf "Adding the conf directory to the class loader for tomcat\n"
sed 's/shared.loader=$/shared.loader=\$\{catalina.home\},\$\{catalina.home\}\/conf\
/' $deploydir/conf/catalina.properties > $deploydir/conf/catalina.properties.tmp
mv $deploydir/conf/catalina.properties.tmp $deploydir/conf/catalina.properties
printf "Installation is now complete\n"
exit 0

View File

@ -1,185 +0,0 @@
#!/usr/bin/env bash
# install.sh -- installs an agent
#
#
usage() {
printf "Usage: %s: -d [directory to deploy to] -z [zip file] -h [host] -p [pod] -c [data center] -m [expert|novice|setup]\n" $(basename $0) >&2
}
mode=
host=
pod=
zone=
deploydir=
confdir=
zipfile=
typ=
#set -x
while getopts 'd:z:x:m:h:p:c:' OPTION
do
case "$OPTION" in
d) deploydir="$OPTARG"
;;
z) zipfile="$OPTARG"
;;
m) mode="$OPTARG"
;;
h) host="$OPTARG"
;;
p) pod="$OPTARG"
;;
c) zone="$OPTARG"
;;
?) usage
exit 2
;;
esac
done
printf "NOTE: You must have root privileges to install and run this program.\n"
if [ "$mode" == "setup" ]; then
mode="expert"
deploydir="/usr/local/vmops/agent-simulator"
confdir="/etc/vmops"
/bin/cp -f $deploydir/conf/agent.properties $confdir/agent.properties
if [ $? -gt 0 ]; then
printf "ERROR: Failed to copy the agent.properties file into the right place."
exit 10;
fi
else
confdir="$deploydir/conf"
fi
if [ "$host" == "" ]; then
if [ "$mode" != "expert" ]
then
printf "Host name or ip address of management server [Required]: "
read host
if [ "$host" == "" ]; then
printf "ERROR: Host is required\n"
exit 23;
fi
fi
fi
port=
if [ "$mode" != "expert" ]
then
printf "Port number of management server [defaults to 8250]: "
read port
fi
if [ "$port" == "" ]
then
port=8250
fi
if [ "$zone" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Availability Zone [Required]: "
read zone
if [ "$zone" == "" ]; then
printf "ERROR: Zone is required\n";
exit 21;
fi
fi
fi
if [ "$pod" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Pod [Required]: "
read pod
if ["$pod" == ""]; then
printf "ERROR: Pod is required\n";
exit 22;
fi
fi
fi
workers=
if [ "$mode" != "expert" ]; then
printf "# of workers to start [defaults to 3]: "
read workers
fi
if [ "$workers" == "" ]; then
workers=3
fi
if [ "$deploydir" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Directory to deploy to [defaults to /usr/local/vmops/agent-simulator]: "
read deploydir
fi
if [ "$deploydir" == "" ]; then
deploydir="/usr/local/vmops/agent-simulator"
fi
fi
if ! mkdir -p "$deploydir"
then
printf "ERROR: Unable to create $deploydir\n"
exit 5
fi
if [ "$zipfile" == "" ]; then
if [ "$mode" != "expert" ]; then
printf "Path of the zip file [defaults to agent-simulator.zip]: "
read zipfile
fi
if [ "$zipfile" == "" ]; then
zipfile="agent-simulator.zip"
fi
fi
if ! unzip -o "$zipfile" -d "$deploydir"
then
  printf "ERROR: Unable to unzip $zipfile to $deploydir\n"
  exit 6
fi
if ! chmod +x "$deploydir"/*.sh
then
  printf "ERROR: Unable to change scripts to executable.\n"
  exit 9
fi
if [ "$host" != "" ]; then
sed s/@HOST@/"$host"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: host is not set\n"
fi
if [ "$port" != "" ]; then
sed s/@PORT@/"$port"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Port is not set\n"
fi
if [ "$pod" != "" ]; then
sed s/@POD@/"$pod"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Pod is not set\n"
fi
if [ "$zone" != "" ]; then
sed s/@ZONE@/"$zone"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Zone is not set\n"
fi
if [ "$workers" != "" ]; then
sed s/@WORKERS@/"$workers"/ $confdir/agent.properties > $confdir/tmp
/bin/mv -f $confdir/tmp $confdir/agent.properties
else
printf "INFO: Workers is not set\n"
fi
printf "SUCCESS: Installation is now complete. If you like to make changes, edit $confdir/agent.properties\n"
exit 0
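
The five token-substitution blocks above all follow the same pattern; as a
sketch only, they could be collapsed into one loop (same @TOKEN@ placeholders
as the script; the ${!var} indirect expansion is a bash-ism):

    # Replace each @TOKEN@ placeholder with the matching shell variable
    for var in host port pod zone workers; do
        token=$(echo "$var" | tr '[:lower:]' '[:upper:]')   # host -> HOST
        value="${!var}"                                     # bash indirect expansion
        if [ "$value" != "" ]; then
            sed "s/@$token@/$value/" "$confdir/agent.properties" > "$confdir/tmp"
            /bin/mv -f "$confdir/tmp" "$confdir/agent.properties"
        else
            printf "INFO: %s is not set\n" "$var"
        fi
    done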

View File

@ -1,133 +0,0 @@
#!/usr/bin/env bash
# install-storage-server.sh: Installs a VMOps Storage Server
#
choose_correct_filename() {
local default_filename=$1
local user_specified_filename=$2
if [ -f "$user_specified_filename" ]
then
echo $user_specified_filename
return 0
else
if [ -f "$default_filename" ]
then
echo $default_filename
return 0
else
echo ""
return 1
fi
fi
}
install_opensolaris_package() {
  local pkg_name=$1
  if pkg info "$pkg_name" > /dev/null 2>&1
  then
    # The package is already installed
    return 0
  else
    # The package is not installed, so install it
    pkg install "$pkg_name"
    return $?
  fi
}
exit_if_error() {
  local return_code=$1
  local msg=$2
  if [ "$return_code" -gt 0 ]
  then
    echo "$msg"
    exit 1
  fi
}
usage() {
printf "Usage: ./install-storage-server.sh <path to agent.zip> <path to templates.tar.gz>"
}
AGENT_FILE=$(choose_correct_filename "./agent.zip" "$1")
exit_if_error $? "Please download agent.zip to your Storage Server."
TEMPLATES_FILE=$(choose_correct_filename "./templates.tar.gz" "$2")
exit_if_error $? "Please download templates.tar.gz to your Storage Server."
VMOPS_DIR="/usr/local/vmops"
AGENT_DIR="/usr/local/vmops/agent"
CONF_DIR="/etc/vmops"
TEMPLATES_DIR="/root/template"
# Make all the necessary directories if they don't already exist
echo "Creating VMOps directories..."
for dir in $VMOPS_DIR $CONF_DIR $TEMPLATES_DIR
do
mkdir -p $dir
done
# Unzip agent.zip to $AGENT_DIR
echo "Uncompressing and installing VMOps Storage Agent..."
unzip -o "$AGENT_FILE" -d "$AGENT_DIR" > /dev/null
# Remove agent/conf/agent.properties, since we should use the file in the real configuration directory
rm "$AGENT_DIR/conf/agent.properties"
# Back up any existing VMOps configuration files, if there aren't any backups already
if [ ! -d "$CONF_DIR/BACKUP" ]
then
  echo "Backing up existing configuration files..."
  mkdir -p "$CONF_DIR/BACKUP"
  cp "$CONF_DIR"/*.properties "$CONF_DIR/BACKUP" > /dev/null
fi
# Copy all the files in storagehdpatch to their proper places
echo "Installing system files..."
(cd $AGENT_DIR/storagehdpatch; tar cf - .) | (cd /; tar xf -)
exit_if_error $? "There was a problem with installing system files. Please contact VMOps Support."
# Make vsetup executable
chmod +x /usr/sbin/vsetup
# Make vmops executable
chmod +x /lib/svc/method/vmops
# Uncompress the templates and copy them to the templates directory
echo "Uncompressing templates..."
tar -xzf "$TEMPLATES_FILE" -C "$TEMPLATES_DIR" > /dev/null
exit_if_error $? "There was a problem with uncompressing templates. Please contact VMOps Support."
# Install the storage-server package, if it is not already installed
echo "Installing OpenSolaris storage server package..."
install_opensolaris_package "storage-server"
exit_if_error $? "There was a problem with installing the storage server package. Please contact VMOps Support."
echo "Installing COMSTAR..."
install_opensolaris_package "SUNWiscsit"
exit_if_error $? "Unable to install COMSTAR iscsi target. Please contact VMOps Support."
# Install the SUNWinstall-test package, if it is not already installed
echo "Installing OpenSolaris test tools package..."
install_opensolaris_package "SUNWinstall-test"
exit_if_error $? "There was a problem with installing the test tools package. Please contact VMOps Support."
# Print a success message
printf "\nSuccessfully installed the VMOps Storage Server.\n"
printf "Please complete the following steps to configure your networking settings and storage pools:\n\n"
printf "1. Specify networking settings in /etc/vmops/network.properties\n"
printf "2. Run \"vsetup networking\" and then specify disk settings in /etc/vmops/disks.properties\n"
printf "3. Run \"vsetup zpool\" and reboot the machine when prompted.\n\n"

View File

@ -1,139 +0,0 @@
#!/bin/bash
# install.sh -- installs MySQL, Java, Tomcat, and the VMOps server
#set -x
set -e
EX_NOHOSTNAME=15
EX_SELINUX=16
function usage() {
printf "Usage: %s [path to server-setup.xml]\n" $(basename $0) >&2
exit 64
}
function checkhostname() {
if ! hostname | grep -qF . ; then
echo "You need to have a fully-qualified host name for the setup to work." > /dev/stderr
echo "Please use your operating system's network setup tools to set one." > /dev/stderr
exit $EX_NOHOSTNAME
fi
}
function checkselinux() {
  #### before checking arguments, make sure SELINUX is "permissive" in /etc/selinux/config
  if /usr/sbin/getenforce | grep -qi enforcing ; then borked=1 ; fi
  if grep -qi '^SELINUX=enforcing' /etc/selinux/config ; then borked=1 ; fi
  if [ "$borked" == "1" ] ; then
    echo "SELINUX is set to enforcing, please set it to permissive in /etc/selinux/config" > /dev/stderr
    echo "then reboot the machine, after which you can run the install script again." > /dev/stderr
    exit $EX_SELINUX
  fi
}
checkhostname
checkselinux
if [ "$1" == "" ]; then
usage
fi
if [ ! -f "$1" ]; then
echo "Error: Unable to find $1" > /dev/stderr
exit 2
fi
#### check that all files exist
if [ ! -f apache-tomcat-6.0.18.tar.gz ]; then
printf "Error: Unable to find apache-tomcat-6.0.18.tar.gz\n" > /dev/stderr
exit 3
fi
if [ ! -f MySQL-client-5.1.30-0.glibc23.x86_64.rpm ]; then
printf "Error: Unable to find MySQL-client-5.1.30-0.glibc23.x86_64.rpm\n" > /dev/stderr
exit 4
fi
if [ ! -f MySQL-server-5.1.30-0.glibc23.x86_64.rpm ]; then
printf "Error: Unable to find MySQL-server-5.1.30-0.glibc23.x86_64.rpm\n" > /dev/stderr
exit 5
fi
if [ ! -f jdk-6u13-linux-amd64.rpm.bin ]; then
printf "Error: Unable to find jdk-6u13-linux-amd64.rpm.bin\n" > /dev/stderr
exit 6
fi
#if [ ! -f osol.tar.bz2 ]; then
# printf "Error: Unable to find osol.tar.bz2\n"
# exit 7
#fi
if [ ! -f vmops-*.zip ]; then
printf "Error: Unable to find vmops install file\n" > /dev/stderr
exit 9
fi
if [ ! -f catalina ] ; then
printf "Error: Unable to find catalina initscript\n" > /dev/stderr
exit 10
fi
if [ ! -f usageserver ] ; then
printf "Error: Unable to find usageserver initscript\n" > /dev/stderr
exit 11
fi
###### install Apache
# if [ ! -d /usr/local/tomcat ] ; then
echo "installing Apache..."
mkdir -p /usr/local/tomcat
tar xfz apache-tomcat-6.0.18.tar.gz -C /usr/local/tomcat
ln -s /usr/local/tomcat/apache-tomcat-6.0.18 /usr/local/tomcat/current
# fi
# if [ ! -f /etc/profile.d/catalinahome.sh ] ; then
# echo "export CATALINA_HOME=/usr/local/tomcat/current" >> /etc/profile.d/catalinahome.sh
# fi
source /etc/profile.d/catalinahome.sh
# if [ ! -f /etc/init.d/catalina ] ; then
cp -f catalina /etc/init.d
/sbin/chkconfig catalina on
# fi
####### set up usage server as a service
if [ ! -f /etc/init.d/usageserver ] ; then
cp -f usageserver /etc/init.d
/sbin/chkconfig usageserver on
fi
##### set up mysql
if rpm -q MySQL-server MySQL-client > /dev/null 2>&1 ; then true ; else
echo "installing MySQL..."
yum localinstall --nogpgcheck -y MySQL-*.rpm
fi
#### install JDK
echo "installing JDK..."
sh jdk-6u13-linux-amd64.rpm.bin
rm -f /usr/bin/java
ln -s /usr/java/default/bin/java /usr/bin/java
#### setting up OSOL image
#mkdir -p $CATALINA_HOME/webapps/images
#echo "copying Open Solaris image, this may take a few moments..."
#cp osol.tar.bz2 $CATALINA_HOME/webapps/images
#### deploying database
unzip -o vmops-*.zip
cd vmops-*
sh deploy-server.sh -d "$CATALINA_HOME"
cd db
sh deploy-db.sh "../../$1" templates.sql
exit 0
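
A note on the "[ ! -f vmops-*.zip ]" test above: it relies on the glob
expanding to exactly one file. With several vmops-*.zip files present, test
receives multiple arguments and errors out (and set -e then aborts the script
with a confusing message). A sketch of a more explicit check:

    # Count glob matches instead of passing the raw glob to test
    found=0
    for f in vmops-*.zip; do
        [ -f "$f" ] && found=1 && break
    done
    if [ "$found" -eq 0 ]; then
        printf "Error: Unable to find vmops install file\n" > /dev/stderr
        exit 9
    fi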

View File

@ -1,38 +0,0 @@
#
# Copyright 2005 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License, Version 1.0 only
# (the "License"). You may not use this file except in compliance
# with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#ident "%Z%%M% %I% %E% SMI"
#
# This file is /etc/default/init. /etc/TIMEZONE is a symlink to this file.
# This file looks like a shell script, but it is not. To maintain
# compatibility with old versions of /etc/TIMEZONE, some shell constructs
# (i.e., export commands) are allowed in this file, but are ignored.
#
# Lines of this file should be of the form VAR=value, where VAR is one of
# TZ, LANG, CMASK, or any of the LC_* environment variables. value may
# be enclosed in double quotes (") or single quotes (').
#
TZ=GMT
CMASK=022
LANG=en_US.UTF-8

View File

@ -1,6 +0,0 @@
driftfile /var/lib/ntp/ntp.drift
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
server 3.pool.ntp.org

View File

@ -1,70 +0,0 @@
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
# Copyright 2007 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "%Z%%M% %I% %E% SMI"
#
# /etc/nsswitch.dns:
#
# An example file that could be copied over to /etc/nsswitch.conf; it uses
# DNS for hosts lookups, otherwise it does not use any other naming service.
#
# "hosts:" and "services:" in this file are used only if the
# /etc/netconfig file has a "-" for nametoaddr_libs of "inet" transports.
# DNS service expects that an instance of svc:/network/dns/client be
# enabled and online.
passwd: files
group: files
# You must also set up the /etc/resolv.conf file for DNS name
# server lookup. See resolv.conf(4). For lookup via mdns
# svc:/network/dns/multicast:default must also be enabled. See mdnsd(1M)
hosts: files dns
# Note that IPv4 addresses are searched for in all of the ipnodes databases
# before searching the hosts databases.
ipnodes: files dns
networks: files
protocols: files
rpc: files
ethers: files
netmasks: files
bootparams: files
publickey: files
# At present there isn't a 'files' backend for netgroup; the system will
# figure it out pretty quickly, and won't use netgroups at all.
netgroup: files
automount: files
aliases: files
services: files
printers: user files
auth_attr: files
prof_attr: files
project: files
tnrhtp: files
tnrhdb: files

View File

@ -1,154 +0,0 @@
#
# Copyright 2008 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Configuration file for sshd(1m)
# Protocol versions supported
#
# The sshd shipped in this release of Solaris has support for major versions
# 1 and 2. It is recommended due to security weaknesses in the v1 protocol
# that sites run only v2 if possible. Support for v1 is provided to help sites
# with existing ssh v1 clients/servers to transition.
# Support for v1 may not be available in a future release of Solaris.
#
# To enable support for v1 an RSA1 key must be created with ssh-keygen(1).
# RSA and DSA keys for protocol v2 are created by /etc/init.d/sshd if they
# do not already exist, RSA1 keys for protocol v1 are not automatically created.
# Uncomment ONLY ONE of the following Protocol statements.
# Only v2 (recommended)
Protocol 2
# Both v1 and v2 (not recommended)
#Protocol 2,1
# Only v1 (not recommended)
#Protocol 1
# Listen port (the IANA registered port number for ssh is 22)
Port 22
# The default listen address is all interfaces, this may need to be changed
# if you wish to restrict the interfaces sshd listens on for a multi homed host.
# Multiple ListenAddress entries are allowed.
# IPv4 only
#ListenAddress 0.0.0.0
# IPv4 & IPv6
ListenAddress ::
# Port forwarding
AllowTcpForwarding no
# If port forwarding is enabled, specify if the server can bind to INADDR_ANY.
# This allows the local port forwarding to work when connections are received
# from any remote host.
GatewayPorts no
# X11 tunneling options
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
# The maximum number of concurrent unauthenticated connections to sshd.
# start:rate:full see sshd(1) for more information.
# The default is 10 unauthenticated clients.
#MaxStartups 10:30:60
# Banner to be printed before authentication starts.
#Banner /etc/issue
# Should sshd print the /etc/motd file and check for mail.
# On Solaris it is assumed that the login shell will do these (eg /etc/profile).
PrintMotd no
# KeepAlive specifies whether keep alive messages are sent to the client.
# See sshd(1) for detailed description of what this means.
# Note that the client may also be sending keep alive messages to the server.
KeepAlive yes
# Syslog facility and level
SyslogFacility auth
LogLevel info
#
# Authentication configuration
#
# Host private key files
# Must be on a local disk and readable only by the root user (root:sys 600).
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
# Length of the server key
# Default 768, Minimum 512
ServerKeyBits 768
# sshd regenerates the key every KeyRegenerationInterval seconds.
# The key is never stored anywhere except the memory of sshd.
# The default is 1 hour (3600 seconds).
KeyRegenerationInterval 3600
# Ensure secure permissions on users .ssh directory.
StrictModes yes
# Length of time in seconds before a client that hasn't completed
# authentication is disconnected.
# Default is 600 seconds. 0 means no time limit.
LoginGraceTime 600
# Maximum number of retries for authentication
# Default is 6. Default (if unset) for MaxAuthTriesLog is MaxAuthTries / 2
MaxAuthTries 6
MaxAuthTriesLog 3
# Are logins to accounts with empty passwords allowed.
# If PermitEmptyPasswords is no, pass PAM_DISALLOW_NULL_AUTHTOK
# to pam_authenticate(3PAM).
PermitEmptyPasswords no
# To disable tunneled clear text passwords, change PasswordAuthentication to no.
PasswordAuthentication yes
# Use PAM via keyboard interactive method for authentication.
# Depending on the setup of pam.conf(4) this may allow tunneled clear text
# passwords even when PasswordAuthentication is set to no. This is dependent
# on what the individual modules request and is out of the control of sshd
# or the protocol.
PAMAuthenticationViaKBDInt yes
# Are root logins permitted using sshd.
# Note that sshd uses pam_authenticate(3PAM) so the root (or any other) user
# maybe denied access by a PAM module regardless of this setting.
# Valid options are yes, without-password, no.
PermitRootLogin yes
# sftp subsystem
Subsystem sftp /usr/lib/ssh/sftp-server
# SSH protocol v1 specific options
#
# The following options only apply to the v1 protocol and provide
# some form of backwards compatibility with the very weak security
# of /usr/bin/rsh. Their use is not recommended and the functionality
# will be removed when support for v1 protocol is removed.
# Should sshd use .rhosts and .shosts for password less authentication.
IgnoreRhosts yes
RhostsAuthentication no
# Rhosts RSA Authentication
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts.
# If the user on the client side is not root then this won't work on
# Solaris since /usr/bin/ssh is not installed setuid.
RhostsRSAAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication.
#IgnoreUserKnownHosts yes
# Is pure RSA authentication allowed.
# Default is yes
RSAAuthentication yes

View File

@ -1,101 +0,0 @@
*ident "%Z%%M% %I% %E% SMI" /* SVR4 1.5 */
*
* CDDL HEADER START
*
* The contents of this file are subject to the terms of the
* Common Development and Distribution License, Version 1.0 only
* (the "License"). You may not use this file except in compliance
* with the License.
*
* You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
* or http://www.opensolaris.org/os/licensing.
* See the License for the specific language governing permissions
* and limitations under the License.
*
* When distributing Covered Code, include this CDDL HEADER in each
* file and include the License file at usr/src/OPENSOLARIS.LICENSE.
* If applicable, add the following below this CDDL HEADER, with the
* fields enclosed by brackets "[]" replaced with your own identifying
* information: Portions Copyright [yyyy] [name of copyright owner]
*
* CDDL HEADER END
*
*
* SYSTEM SPECIFICATION FILE
*
* moddir:
*
* Set the search path for modules. This has a format similar to the
* csh path variable. If the module isn't found in the first directory
* it tries the second and so on. The default is /kernel /usr/kernel
*
* Example:
* moddir: /kernel /usr/kernel /other/modules
* root device and root filesystem configuration:
*
* The following may be used to override the defaults provided by
* the boot program:
*
* rootfs: Set the filesystem type of the root.
*
* rootdev: Set the root device. This should be a fully
* expanded physical pathname. The default is the
* physical pathname of the device where the boot
* program resides. The physical pathname is
* highly platform and configuration dependent.
*
* Example:
* rootfs:ufs
* rootdev:/sbus@1,f8000000/esp@0,800000/sd@3,0:a
*
* (Swap device configuration should be specified in /etc/vfstab.)
* exclude:
*
* Modules appearing in the moddir path which are NOT to be loaded,
* even if referenced. Note that `exclude' accepts either a module name,
* or a filename which includes the directory.
*
* Examples:
* exclude: win
* exclude: sys/shmsys
* forceload:
*
* Cause these modules to be loaded at boot time, (just before mounting
* the root filesystem) rather than at first reference. Note that
* forceload expects a filename which includes the directory. Also
* note that loading a module does not necessarily imply that it will
* be installed.
*
* Example:
* forceload: drv/foo
* set:
*
* Set an integer variable in the kernel or a module to a new value.
* This facility should be used with caution. See system(4).
*
* Examples:
*
* To set variables in 'unix':
*
* set nautopush=32
* set maxusers=40
*
* To set a variable named 'debug' in the module named 'test_module'
*
* set test_module:debug = 0x13
* set zfs:zfs_arc_max=0x4002000
set zfs:zfs_vdev_cache_size=0

View File

@ -1,7 +0,0 @@
# Specify disks in this file
# D: Data
# C: Cache
# L: Intent Log
# S: Spare
# U: Unused

View File

@ -1,35 +0,0 @@
# Host Settings
hostname=
domain=
dns1=
dns2=
# Private/Storage Network Settings (required)
storage.ip=
storage.netmask=
storage.gateway=
# Second Storage Network Settings (optional)
storage.ip.2=
storage.netmask.2=
storage.gateway.2=
# Datacenter Settings
pod=
zone=
host=
port=
# Storage Appliance Settings (optional)
# Specify if you would like to use this Storage Server with an external storage appliance
iscsi.iqn=
iscsi.ip=
iscsi.port=
# VMOps IQN (optional)
# Specify if you would like to manually change the IQN of the Storage Server's iSCSI target
vmops.iqn=
# MTU (optional)
mtu=
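
As an illustration only (every value below is made up), a minimally filled-in
file for a storage server on a single private network might look like:

    # Hypothetical example values; substitute your own
    hostname=storage1
    domain=lab.example.com
    dns1=192.168.10.253
    storage.ip=192.168.10.80
    storage.netmask=255.255.255.0
    storage.gateway=192.168.10.1
    pod=1
    zone=1
    host=192.168.10.5
    port=8250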

View File

@ -1,106 +0,0 @@
#!/bin/bash
#
# vmops Script to start and stop the VMOps Agent.
#
# Author: Chiradeep Vittal <chiradeep@vmops.com>
# chkconfig: 2345 99 01
# description: Start up the VMOps agent
# Source function library.
if [ -f /etc/init.d/functions ]
then
. /etc/init.d/functions
fi
_success() {
if [ -f /etc/init.d/functions ]
then
success
else
echo "Success"
fi
}
_failure() {
if [ -f /etc/init.d/functions ]
then
failure
else
echo "Failed"
fi
}
RETVAL=$?
VMOPS_HOME="/usr/local/vmops"
mkdir -p /var/log/vmops
get_pids() {
local i
for i in $(ps -ef | grep agent.sh | grep -v grep | awk '{print $2}');
do
echo $(pwdx $i) | grep "$VMOPS_HOME" | grep agent | awk -F: '{print $1}';
done
}
start() {
local pid=$(get_pids)
echo -n "Starting VMOps agent: "
if [ -f $VMOPS_HOME/agent/agent.sh ];
then
if [ "$pid" == "" ]
then
(cd $VMOPS_HOME/agent; nohup ./agent.sh > /var/log/vmops/vmops.out 2>&1 & )
pid=$(get_pids)
echo $pid > /var/run/vmops.pid
fi
_success
else
_failure
fi
echo
}
stop() {
local pid
echo -n "Stopping VMOps agent: "
for pid in $(get_pids)
do
pgid=$(ps -o pgid= -p "$pid" | tr -d ' ')
kill -- -"$pgid"
done
rm -f /var/run/vmops.pid
_success
echo
}
status() {
local pids=$(get_pids)
if [ "$pids" == "" ]
then
echo "VMOps agent is not running"
return 1
fi
echo "VMOps agent (pid $pids) is running"
return 0
}
case "$1" in
start) start
;;
stop) stop
;;
status) status
;;
restart) stop
sleep 1.5
start
;;
*) echo $"Usage: $0 {start|stop|status|restart}"
exit 1
;;
esac
exit $RETVAL
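
The stop path above kills the whole process group (kill -- -PGID) so that
agent.sh and the JVM it launched go down together. The idiom in isolation
(the pid is a placeholder):

    PID=12345                                  # placeholder pid
    PGID=$(ps -o pgid= -p "$PID" | tr -d ' ')  # look up its process group
    kill -- -"$PGID"                           # a negative pid targets the group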

View File

@ -1,44 +0,0 @@
#! /bin/bash
stage=$1
option=$2
export VMOPS_HOME=/usr/local/vmops
usage() {
echo "Usage: vsetup [networking|zpool]"
echo " networking: probe NICs, configure networking, and detect disks"
echo " zpool: create ZFS storage pool"
}
if [ "$stage" != "networking" ] && [ "$stage" != "zpool" ] && [ "$stage" != "detectdisks" ]
then
usage
exit 1
fi
if [ "$option" != "" ] && [ "$option" != "-listonly" ]
then
usage
exit 1
fi
$VMOPS_HOME/agent/scripts/installer/run_installer.sh storage $stage $option
if [ $? -eq 0 ]
then
  if [ "$stage" == "networking" ]
  then
    echo "Please edit /etc/vmops/disks.properties and then run \"vsetup zpool\"."
  elif [ "$stage" == "zpool" ]
  then
    echo "Press enter to reboot the computer..."
    read
    reboot
  fi
fi

View File

@ -1,43 +0,0 @@
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='cloud'>
<service
name='application/cloud'
type='service'
version='0.1.0'>
<!-- This is the cloud storage agent
-->
<create_default_instance enabled='false' />
<single_instance />
<dependency
name='iscsi_target'
grouping='require_all'
restart_on='error'
type='service'>
<service_fmri value='svc:/network/iscsi/target:default' />
</dependency>
<exec_method
type='method'
name='start'
exec='/lib/svc/method/cloud start'
timeout_seconds='60'>
</exec_method>
<exec_method
type='method'
name='stop'
exec='/lib/svc/method/cloud stop'
timeout_seconds='60'>
</exec_method>
</service>
</service_bundle>
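
For reference, a manifest like this is normally loaded and enabled with the
standard SMF tools (the manifest path below is a placeholder):

    svccfg import /var/svc/manifest/application/cloud.xml
    svcadm enable application/cloud
    svcs -l application/cloud    # verify the instance is online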

View File

@ -1,6 +0,0 @@
consoleproxy.tcpListenPort=0
consoleproxy.httpListenPort=80
consoleproxy.httpCmdListenPort=8001
consoleproxy.jarDir=./applet/
consoleproxy.viewerLinger=180
consoleproxy.reconnectMaxRetry=5

View File

@ -1,532 +0,0 @@
<?xml version="1.0" encoding="ISO-8859-1"?>
<data>
<version>2.0</version>
<zones>
<zone>
<id>1</id>
<name>AH</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>100-199</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
<zone>
<id>2</id>
<name>KM</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>200-299</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
<zone>
<id>3</id>
<name>KY</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>300-399</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
<zone>
<id>4</id>
<name>WC</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>400-499</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
<zone>
<id>5</id>
<name>CV</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>500-599</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
<zone>
<id>6</id>
<name>KS</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>600-699</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
<zone>
<id>7</id>
<name>ES</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>700-799</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
<zone>
<id>8</id>
<name>RC</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>800-899</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
<zone>
<id>9</id>
<name>AX</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>900-999</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
<zone>
<id>10</id>
<name>JW</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>900-999</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
<zone>
<id>11</id>
<name>AJ</name>
<dns1>72.52.126.11</dns1>
<dns2>72.52.126.12</dns2>
<internalDns1>192.168.10.253</internalDns1>
<internalDns2>192.168.10.254</internalDns2>
<vnet>1000-1099</vnet>
<guestNetworkCidr>10.1.1.0/24</guestNetworkCidr>
</zone>
</zones>
<!--
<storagePools>
<storagePool>
<zoneId>5</zoneId>
<name>sol10-2</name>
<hostAddress>sol10-2</hostAddress>
<hostPath>/tank/cloud-nfs/</hostPath>
</storagePool>
</storagePools>
-->
<vlans>
<vlan>
<zoneId>1</zoneId>
<vlanId>31</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.31.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.31.150-192.168.31.159</ipAddressRange>
</vlan>
<vlan>
<zoneId>2</zoneId>
<vlanId>32</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.32.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.32.150-192.168.32.159</ipAddressRange>
</vlan>
<vlan>
<zoneId>3</zoneId>
<vlanId>33</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.33.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.33.150-192.168.33.159</ipAddressRange>
</vlan>
<vlan>
<zoneId>4</zoneId>
<vlanId>34</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.34.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.34.150-192.168.34.159</ipAddressRange>
</vlan>
<vlan>
<zoneId>5</zoneId>
<vlanId>35</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.35.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.35.150-192.168.35.159</ipAddressRange>
</vlan>
<vlan>
<zoneId>6</zoneId>
<vlanId>36</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.36.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.36.150-192.168.36.159</ipAddressRange>
</vlan>
<vlan>
<zoneId>7</zoneId>
<vlanId>37</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.37.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.37.150-192.168.37.159</ipAddressRange>
</vlan>
<vlan>
<zoneId>8</zoneId>
<vlanId>38</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.38.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.38.150-192.168.38.159</ipAddressRange>
</vlan>
<vlan>
<zoneId>9</zoneId>
<vlanId>39</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.39.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.39.150-192.168.39.159</ipAddressRange>
</vlan>
<vlan>
<zoneId>10</zoneId>
<vlanId>40</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.40.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.40.150-192.168.40.159</ipAddressRange>
</vlan>
<vlan>
<zoneId>11</zoneId>
<vlanId>41</vlanId>
<vlanType>VirtualNetwork</vlanType>
<gateway>192.168.41.1</gateway>
<netmask>255.255.255.0</netmask>
<ipAddressRange>192.168.41.150-192.168.41.159</ipAddressRange>
</vlan>
</vlans>
<pods>
<pod>
<id>1</id>
<name>AH</name>
<zoneId>1</zoneId>
<ipAddressRange>192.168.10.20-192.168.10.24</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
<pod>
<id>2</id>
<name>KM</name>
<zoneId>2</zoneId>
<ipAddressRange>192.168.10.25-192.168.10.29</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
<pod>
<id>3</id>
<name>KY</name>
<zoneId>3</zoneId>
<ipAddressRange>192.168.10.30-192.168.10.34</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
<pod>
<id>4</id>
<name>WC</name>
<zoneId>4</zoneId>
<ipAddressRange>192.168.10.35-192.168.10.39</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
<pod>
<id>5</id>
<name>CV</name>
<zoneId>5</zoneId>
<ipAddressRange>192.168.10.40-192.168.10.44</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
<pod>
<id>6</id>
<name>KS</name>
<zoneId>6</zoneId>
<ipAddressRange>192.168.10.45-192.168.10.49</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
<pod>
<id>7</id>
<name>ES</name>
<zoneId>7</zoneId>
<ipAddressRange>192.168.10.50-192.168.10.54</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
<pod>
<id>8</id>
<name>RC</name>
<zoneId>8</zoneId>
<ipAddressRange>192.168.10.55-192.168.10.59</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
<pod>
<id>9</id>
<name>AX</name>
<zoneId>9</zoneId>
<ipAddressRange>192.168.10.62-192.168.10.64</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
<pod>
<id>10</id>
<name>JW</name>
<zoneId>10</zoneId>
<ipAddressRange>192.168.10.65-192.168.10.69</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
<pod>
<id>11</id>
<name>AJ</name>
<zoneId>11</zoneId>
<ipAddressRange>192.168.10.70-192.168.10.74</ipAddressRange>
<cidr>192.168.10.0/24</cidr>
</pod>
</pods>
<!--
* cpu is the number of CPUs for the offering
* ramSize is total memory in MB
* speed is the CPU speed for each core in MHZ
* diskSpace is the storage space in MB
* price is the price of the offering per hour
-->
<serviceOfferings>
<serviceOffering>
<id>1</id>
<name>Small Instance</name>
<displayText>Small Instance [500MHZ CPU, 512MB MEM, 16GB Disk] - $0.10 per hour</displayText>
<cpu>1</cpu>
<ramSize>512</ramSize>
<speed>500</speed>
<mirrored>false</mirrored>
</serviceOffering>
<serviceOffering>
<id>2</id>
<name>Medium Instance</name>
<displayText>Medium Instance [500MHZ CPU, 1GB MEM, 32GB Disk] - $0.20 per hour</displayText>
<cpu>1</cpu>
<ramSize>1024</ramSize>
<speed>512</speed>
</serviceOffering>
<serviceOffering>
<id>3</id>
<name>Large Instance</name>
<displayText>Large Instance [2GHZ CPU, 4GB MEM, 64GB Disk] - $0.30 per hour</displayText>
<cpu>2</cpu>
<ramSize>4096</ramSize>
<speed>2000</speed>
</serviceOffering>
</serviceOfferings>
<diskOfferings>
<diskOffering>
<id>1</id>
<domainId>1</domainId>
<name>Small Disk</name>
<displayText>Small Disk [16GB Disk]</displayText>
<diskSpace>16000</diskSpace>
</diskOffering>
<diskOffering>
<id>2</id>
<domainId>1</domainId>
<name>Medium Disk</name>
<displayText>Medium Disk [32GB Disk]</displayText>
<diskSpace>32000</diskSpace>
</diskOffering>
<diskOffering>
<id>3</id>
<domainId>1</domainId>
<name>Large Disk</name>
<displayText>Large Disk [64GB Disk]</displayText>
<diskSpace>64000</diskSpace>
</diskOffering>
</diskOfferings>
<!--
* firstname/lastname are optional parameters
* id, username, password are required parameters
-->
<users>
<user>
<id>2</id>
<username>admin</username>
<password>password</password>
<firstname>Admin</firstname>
<lastname>User</lastname>
<email>admin@mailprovider.com</email>
</user>
</users>
<configurationEntries>
<configuration>
<name>default.zone</name>
<value>AH</value>
</configuration>
<configuration>
<name>domain.suffix</name>
<value>cloud-test.cloud.com</value>
</configuration>
<configuration>
<name>instance.name</name>
<value>AH</value>
</configuration>
<configuration>
<name>consoleproxy.ram.size</name>
<value>256</value>
</configuration>
<configuration>
<name>host.stats.interval</name>
<value>3600000</value>
</configuration>
<configuration>
<name>storage.stats.interval</name>
<value>120000</value>
</configuration>
<configuration>
<name>volume.stats.interval</name>
<value>-1</value>
</configuration>
<configuration>
<name>ping.interval</name>
<value>60</value>
</configuration>
<configuration>
<name>alert.wait</name>
<value>1800</value>
</configuration>
<configuration>
<name>expunge.interval</name>
<value>86400</value>
</configuration>
<configuration>
<name>usage.aggregation.timezone</name>
<value>GMT</value>
</configuration>
<!-- RSA Keys -->
<configuration>
<name>ssh.privatekey</name>
<value>-----BEGIN RSA PRIVATE KEY-----\nMIIEoQIBAAKCAQEAnNUMVgQS87EzAQN9ufGgH3T1kOpqcvTmUrp8RVZyeA5qwptS\nrZxONRbhLK709pZFBJLmeFqiqciWoA/srVIFk+rPmBlVsMw8BK53hTGoax7iSe8s\nLFCAATm6vp0HnZzYqNfrzR2by36ET5aQD/VAyA55u+uUgAlxQuhKff2xjyahEHs+\nUiRlReiAgItygm9g3co3+8fJDOuRse+s0TOip1D0jPdo2AJFscyxrG9hWqQH86R/\nZlLJ7DqsiaAcUmn52u6Nsmd3BkRmGVx/D35Mq6upJqrk/QDfug9LF66yiIP/BEIn\n08N/wQ6m/O37WUtqqyl3rRKqs5TJ9ZnhsqeO9QIBIwKCAQA6QIDsv69EkkYk8qsK\njPJU06uq2rnS7T+bEhDmjdK+4MiRbOQx2vh6HnDktgM3BJ1K13oss/NGYHJ190lH\nsMA+QUXKx5TbRItSMixkrAta/Ne1D7FSScklBtBVbYZ8XtQhdMVML5GjWuCv2NZs\nU8eaw4xNHPyklcr7mBurI7b6p13VK5BNUWR/VNuigT4U89YzRcoEZ/sTlR+4ACYr\nxbUJJGBA03+NhdSAe2vodlMh5lGflD0JmHMFqqg9BcAtVb73JsOsxFQArbXwRd/q\nNckdoAvgJfhTOvXF5GMPLI0lGb6skJkS229F4GaBB2Iz4A9O0aHZob8I8zsWUbiu\npvBrAoGBAMjUDfF2x13NjH1cFHietO5O1oM0nZaAxKodxoAUvHVMUd5DIY50tqYw\n7ecKi2Cw43ONpdj0nP9Nc2NV3NDRqLopwkKUsTtq9AKQ2cIuw3+uS5vm0VZBzmTP\nuF04Qo4bXh/jFRA62u9bXsmIFtaehKxE1Gp6zi393GcbWP4HX/3dAoGBAMfq0KD3\ngeU1PHi9uI3Ss89nXzJsiGcwC5Iunu1aTzJCYhMlJkfmRcXYMAqSfg0nGWnfvlDh\nuOO26CHKjG182mTwYXdgQzIPpBc8suvgUWDBTrIzJI+zuyBLtPbd9DJEVrZkRVQX\nXrOV3Y5oOWsba4F+b20jaaHFAiY7s6OtrX/5AoGBAMMXI3zZyPwJgSlSIoPNX03m\nL3gke9QID4CvNduB26UlkVuRq5GzNRZ4rJdMEl3tqcC1fImdKswfWiX7o06ChqY3\nMb0FePfkPX7V2tnkSOJuzRsavLoxTCdqsxi6T0g318c0XZq81K4A/P5Jr8ksRl40\nPA+qfyVdAf3Cy3ptkHLzAoGASkFGLSi7N+CSzcLPhSJgCzUGGgsOF7LCeB/x4yGL\nIUvbSPCKj7vuB6gR2AqGlyvHnFprQpz7h8eYDI0PlmGS8kqn2+HtEpgYYGcAoMEI\nSIJQbhL+84vmaxTOL87IanEnhZL1LdzLZ0ZK+mE55fQ936P9gE77WVfNmSweJtob\n3xMCgYAl0aLeGf4oUZbI56eEaCbu8U7dEe6MF54VbozyiXqbp455QnUpuBrRn5uf\nc079dNcqTNDuk1+hYX9qNn1aXsvWeuofBXqWoFXu/c4yoWxJAPhEVhzZ9xrXI76I\nBKiPCyKrOa7bSLvs6SQPpuf5AQ8+NJrOxkEB9hbMuaAr2N5rCw==\n-----END RSA PRIVATE KEY-----
</value>
<category>Hidden</category>
</configuration>
<configuration>
<name>ssh.publickey</name>
<value>
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnNUMVgQS87EzAQN9ufGgH3T1kOpqcvTmUrp8RVZyeA5qwptSrZxONRbhLK709pZFBJLmeFqiqciWoA/srVIFk+rPmBlVsMw8BK53hTGoax7iSe8sLFCAATm6vp0HnZzYqNfrzR2by36ET5aQD/VAyA55u+uUgAlxQuhKff2xjyahEHs+UiRlReiAgItygm9g3co3+8fJDOuRse+s0TOip1D0jPdo2AJFscyxrG9hWqQH86R/ZlLJ7DqsiaAcUmn52u6Nsmd3BkRmGVx/D35Mq6upJqrk/QDfug9LF66yiIP/BEIn08N/wQ6m/O37WUtqqyl3rRKqs5TJ9ZnhsqeO9Q== root@test2.lab.vmops.com
</value>
<category>Hidden</category>
</configuration>
<!-- the following are for configuring alerts and need to be changed to proper configuration values -->
<!--
<configuration>
<name>alert.smtp.host</name>
<value>smtp.host.com</value>
</configuration>
<configuration>
<name>alert.smtp.port</name>
<value>25</value>
</configuration>
<configuration>
<name>alert.smtp.useAuth</name>
<value>false</value>
</configuration>
<configuration>
<name>alert.smtp.username</name>
<value>some.user@example.com</value>
</configuration>
<configuration>
<name>alert.smtp.password</name>
<value>password</value>
</configuration>
<configuration>
<name>alert.email.sender</name>
<value>some.user@example.com</value>
</configuration>
<configuration>
<name>alert.email.addresses</name>
<value>some.admin@example.com</value>
</configuration>
<configuration>
<name>alert.smtp.debug</name>
<value>false</value>
</configuration>
-->
<configuration>
<name>memory.capacity.threshold</name>
<value>0.85</value>
</configuration>
<configuration>
<name>cpu.capacity.threshold</name>
<value>0.85</value>
</configuration>
<configuration>
<name>storage.capacity.threshold</name>
<value>0.85</value>
</configuration>
<configuration>
<name>storage.allocated.capacity.threshold</name>
<value>0.85</value>
</configuration>
<configuration>
<name>capacity.check.period</name>
<value>3600000</value>
</configuration>
<configuration>
<name>wait</name>
<value>240</value>
</configuration>
<configuration>
<name>network.throttling.rate</name>
<value>200</value>
</configuration>
<configuration>
<name>multicast.throttling.rate</name>
<value>10</value>
</configuration>
</configurationEntries>
<!--
It is possible to specify a single IP address. For example, to add 192.168.1.1
as the only address, specify as follows.
<publicIpAddresses>
<zoneId>1</zoneId>
<ipAddressRange>192.168.1.1</ipAddressRange>
</publicIpAddresses>
For each ip address range, create a new object. For example, to add the range 192.168.2.1 to 192.168.2.255
copy the following object tag into the privateIpRange
<privateIpAddresses>
<zoneId>1</zoneId>
<podId>1</podId>
<ipAddressRange>192.168.2.1-192.168.2.255</ipAddressRange>
</privateIpAddresses>
-->
<!--
It is possible to specify a single IP address. For example, to add 65.37.141.29
as the only address, specify as follows.
<publicIpAddresses>
<zoneId>1</zoneId>
<ipAddressRange>65.37.141.29</ipAddressRange>
</publicIpAddresses>
For each ip address range, create a new object. For example, to add the range 65.37.141.29 to 65.37.141.39
copy the following object tag into the publicIpRange
<publicIpAddresses>
<zoneId>1</zoneId>
<ipAddressRange>65.37.141.29-65.37.141.39</ipAddressRange>
</publicIpAddresses>
-->
</data>

View File

@ -1,14 +0,0 @@
INSERT INTO `vmops`.`vm_template` (id, unique_name, name, public, path, created, type, hvm, bits, created_by, url, checksum, ready, display_text, enable_password)
VALUES (1, 'routing', 'DomR Template', 0, 'tank/volumes/demo/template/private/u000000/os/routing', now(), 'ext3', 0, 64, 1, 'http://vmopsserver.lab.vmops.com/images/routing/vmi-root-fc8-x86_64-domR.img.bz2', 'd00927f863a23b98cc6df6e377c9d0c6', 0, 'DomR Template', 0);
INSERT INTO `vmops`.`vm_template` (id, unique_name, name, public, path, created, type, hvm, bits, created_by, url, checksum, ready, display_text, enable_password)
VALUES (3, 'centos53-x86_64', 'Centos 5.3(x86_64) no GUI', 1, 'tank/volumes/demo/template/public/os/centos53-x86_64', now(), 'ext3', 0, 64, 1, 'http://vmopsserver.lab.vmops.com/images/centos52-x86_64/vmi-root-centos.5-2.64.pv.img.gz', 'd4ca80825d936db00eedf26620f13d69', 0, 'Centos 5.3(x86_64) no GUI', 0);
#INSERT INTO `vmops`.`vm_template` (id, unique_name, name, public, path, created, type, hvm, bits, created_by, url, checksum, ready, display_text, enable_password)
# VALUES (4, 'centos52-x86_64-gui', 'Centos 5.2(x86_64) GUI', 1, 'tank/volumes/demo/template/public/os/centos52-x86_64-gui', now(), 'ext3', 0, 64, 1, 'http://vmopsserver.lab.vmops.com/images/centos52-x86_64/vmi-root-centos.5-2.64.pv.img.gz', 'd4ca80825d936db00eedf26620f13d69', 0, 'Centos 5.2(x86_64) GUI', 0);
INSERT INTO `vmops`.`vm_template` (id, unique_name, name, public, path, created, type, hvm, bits, created_by, url, checksum, ready, display_text, enable_password)
VALUES (5, 'winxpsp3', 'Windows XP SP3 (32-bit)', 1, 'tank/volumes/demo/template/public/os/winxpsp3', now(), 'ntfs', 1, 32, 1, 'http://vmopsserver.lab.vmops.com/images/fedora10-x86_64/vmi-root-fedora10.64.img.gz', 'c76d42703f14108b15acc9983307c759', 0, 'Windows XP SP3 (32-bit)', 0);
INSERT INTO `vmops`.`vm_template` (id, unique_name, name, public, path, created, type, hvm, bits, created_by, url, checksum, ready, display_text, enable_password)
VALUES (7, 'win2003sp2', 'Windows 2003 SP2 (32-bit)', 1, 'tank/volumes/demo/template/public/os/win2003sp2', now(), 'ntfs', 1, 32, 1, 'http://vmopsserver.lab.vmops.com/images/win2003sp2/vmi-root-win2003sp2.img.gz', '4d2cc51898d05c0f7a2852c15bcdc77b', 0, 'Windows 2003 SP2 (32-bit)', 0);
INSERT INTO `vmops`.`vm_template` (id, unique_name, name, public, path, created, type, hvm, bits, created_by, url, checksum, ready, display_text, enable_password)
VALUES (8, 'win2003sp2-x64', 'Windows 2003 SP2 (64-bit)', 1, 'tank/volumes/demo/template/public/os/win2003sp2-x64', now(), 'ntfs', 1, 64, 1, 'http://vmopsserver.lab.vmops.com/images/win2003sp2-x86_64/vmi-root-win2003sp2-x64.img.gz', '35d4de1c38eb4fb9d81a31c1d989c482', 0, 'Windows 2003 SP2 (64-bit)', 0);
INSERT INTO `vmops`.`vm_template` (id, unique_name, name, public, path, created, type, hvm, bits, created_by, url, checksum, ready, display_text, enable_password)
VALUES (9, 'fedora12-GUI-x86_64', 'Fedora 12 Desktop(64-bit)', 1, 'tank/volumes/demo/template/public/os/fedora12-GUI-x86_64', now(), 'ext3', 1, 64, 1, 'http://vmopsserver.lab.vmops.com/images/fedora12-GUI-x86_64/vmi-root-fedora12-GUI-x86_64.qcow2.gz', '', 0, 'Fedora 12 Desktop (with httpd,java and mysql)', 0);

View File

@ -1,68 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="false">
<!-- ============================== -->
<!-- Append messages to the console -->
<!-- ============================== -->
<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
<param name="Target" value="System.out"/>
<param name="Threshold" value="INFO"/>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ABSOLUTE}{GMT} %5p %c{1}:%L - %m%n"/>
</layout>
</appender>
<!-- ================================ -->
<!-- Append messages to the usage log -->
<!-- ================================ -->
<!-- A time/date based rolling appender -->
<appender name="USAGE" class="org.apache.log4j.rolling.RollingFileAppender">
<param name="Append" value="true"/>
<param name="Threshold" value="DEBUG"/>
<rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
<param name="FileNamePattern" value="/var/log/cloud/cloud_usage.log.%d{yyyy-MM-dd}{GMT}.gz"/>
<param name="ActiveFileName" value="/var/log/cloud/cloud_usage.log"/>
</rollingPolicy>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ISO8601}{GMT} %-5p [%c{3}] (%t:%x) %m%n"/>
</layout>
</appender>
<!-- ================ -->
<!-- Limit categories -->
<!-- ================ -->
<category name="com.cloud">
<priority value="DEBUG"/>
</category>
<!-- Limit the org.apache category to INFO as its DEBUG is verbose -->
<category name="org.apache">
<priority value="INFO"/>
</category>
<category name="org">
<priority value="INFO"/>
</category>
<category name="net">
<priority value="INFO"/>
</category>
<!-- ======================= -->
<!-- Setup the Root category -->
<!-- ======================= -->
<root>
<level value="INFO"/>
<appender-ref ref="CONSOLE"/>
<appender-ref ref="USAGE"/>
</root>
</log4j:configuration>

View File

@ -1,68 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="false">
<!-- ============================== -->
<!-- Append messages to the console -->
<!-- ============================== -->
<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
<param name="Target" value="System.out"/>
<param name="Threshold" value="INFO"/>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ABSOLUTE}{GMT} %5p %c{1}:%L - %m%n"/>
</layout>
</appender>
<!-- ================================ -->
<!-- Append messages to the usage log -->
<!-- ================================ -->
<!-- A time/date based rolling appender -->
<appender name="USAGE" class="org.apache.log4j.rolling.RollingFileAppender">
<param name="Append" value="true"/>
<param name="Threshold" value="DEBUG"/>
<rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
<param name="FileNamePattern" value="@logdir@/cloud_usage.log.%d{yyyy-MM-dd}{GMT}.gz"/>
<param name="ActiveFileName" value="@logdir@/cloud_usage.log"/>
</rollingPolicy>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ISO8601}{GMT} %-5p [%c{3}] (%t:%x) %m%n"/>
</layout>
</appender>
<!-- ================ -->
<!-- Limit categories -->
<!-- ================ -->
<category name="com.cloud">
<priority value="DEBUG"/>
</category>
<!-- Limit the org.apache category to INFO as its DEBUG is verbose -->
<category name="org.apache">
<priority value="INFO"/>
</category>
<category name="org">
<priority value="INFO"/>
</category>
<category name="net">
<priority value="INFO"/>
</category>
<!-- ======================= -->
<!-- Setup the Root category -->
<!-- ======================= -->
<root>
<level value="INFO"/>
<appender-ref ref="CONSOLE"/>
<appender-ref ref="USAGE"/>
</root>
</log4j:configuration>

View File

@ -1,48 +0,0 @@
<?xml version="1.0"?>
<!--
usage-components.xml is the configuration file for the VM Ops
usage servers.
Here are some places to look for information.
- To find out the general functionality that each Manager
or Adapter provide, look at the javadoc for the interface
that it implements. The interface is usually the
"key" attribute in the declaration.
- To find specific implementation of each Manager or
Adapter, look at the javadoc for the actual class. The
class can be found in the <class> element.
- To find out the configuration parameters for each Manager
or Adapter, look at the javadoc for the actual implementation
class. It should be documented in the description of the
class.
- To know more about the components.xml in general, look for
the javadoc for ComponentLocator.java.
If you find that a Manager or Adapter is not properly
documented, please contact the author.
-->
<components.xml>
<usage-server>
<dao name="VM Instance" class="com.cloud.vm.dao.VMInstanceDaoImpl"/>
<dao name="User VM" class="com.cloud.vm.dao.UserVmDaoImpl"/>
<dao name="ServiceOffering" class="com.cloud.service.dao.ServiceOfferingDaoImpl">
<param name="cache.size">50</param>
<param name="cache.time.to.live">-1</param>
</dao>
<dao name="Events" class="com.cloud.event.dao.EventDaoImpl"/>
<dao name="UserStats" class="com.cloud.user.dao.UserStatisticsDaoImpl"/>
<dao name="IP Addresses" class="com.cloud.network.dao.IPAddressDaoImpl"/>
<dao name="Usage" class="com.cloud.usage.dao.UsageDaoImpl"/>
<dao name="Domain" class="com.cloud.domain.dao.DomainDaoImpl"/>
<dao name="Account" class="com.cloud.user.dao.AccountDaoImpl"/>
<dao name="UserAccount" class="com.cloud.user.dao.UserAccountDaoImpl"/>
<dao name="Usage VmInstance" class="com.cloud.usage.dao.UsageVMInstanceDaoImpl"/>
<dao name="Usage Network" class="com.cloud.usage.dao.UsageNetworkDaoImpl"/>
<dao name="Usage IPAddress" class="com.cloud.usage.dao.UsageIPAddressDaoImpl"/>
<dao name="Usage Job" class="com.cloud.usage.dao.UsageJobDaoImpl"/>
<dao name="Configuration" class="com.cloud.configuration.dao.ConfigurationDaoImpl"/>
<manager name="usage manager" class="com.cloud.usage.UsageManagerImpl">
<param name="period">DAILY</param> <!-- DAILY, WEEKLY, MONTHLY; how often it creates usage records -->
</manager>
</usage-server>
</components.xml>

View File

@ -1 +0,0 @@
agent.minimal.version=@agent.min.version@

View File

@ -1,527 +0,0 @@
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="ehcache.xsd">
<!--
CacheManager Configuration
==========================
An ehcache.xml corresponds to a single CacheManager.
See instructions below or the ehcache schema (ehcache.xsd) on how to configure.
System property tokens can be specified in this file which are replaced when the configuration is loaded.
For example multicastGroupPort=${multicastGroupPort} can be replaced with the System property either
from an environment variable or a system property specified with a command line switch such as
-DmulticastGroupPort=4446.
DiskStore configuration
=======================
The diskStore element is optional. To turn off disk store path creation, comment out the diskStore
element below.
Configure it if you have overflowToDisk or diskPersistent enabled for any cache.
If it is not configured, and a cache is created which requires a disk store, a warning will be
issued and java.io.tmpdir will automatically be used.
diskStore has only one attribute - "path". It is the path to the directory where
.data and .index files will be created.
If the path is one of the following Java System Property it is replaced by its value in the
running VM. For backward compatibility these are not specified without being enclosed in the ${token}
replacement syntax.
The following properties are translated:
* user.home - User's home directory
* user.dir - User's current working directory
* java.io.tmpdir - Default temp file path
* ehcache.disk.store.dir - A system property you would normally specify on the command line
e.g. java -Dehcache.disk.store.dir=/u01/myapp/diskdir ...
Subdirectories can be specified below the property e.g. java.io.tmpdir/one
-->
<!-- diskStore path="java.io.tmpdir"/ -->
<!--
CacheManagerEventListener
=========================
Specifies a CacheManagerEventListenerFactory which will be used to create a
CacheManagerEventListener, which is notified when Caches are added or removed from the CacheManager.
The attributes of CacheManagerEventListenerFactory are:
* class - a fully qualified factory class name
* properties - comma separated properties having meaning only to the factory.
Sets the fully qualified class name to be registered as the CacheManager event listener.
The events include:
* adding a Cache
* removing a Cache
Callbacks to listener methods are synchronous and unsynchronized. It is the responsibility
of the implementer to safely handle the potential performance and thread safety issues
depending on what their listener is doing.
If no class is specified, no listener is created. There is no default.
-->
<cacheManagerEventListenerFactory class="" properties=""/>
<!--
CacheManagerPeerProvider
========================
(Enable for distributed operation)
Specifies a CacheManagerPeerProviderFactory which will be used to create a
CacheManagerPeerProvider, which discovers other CacheManagers in the cluster.
The attributes of cacheManagerPeerProviderFactory are:
* class - a fully qualified factory class name
* properties - comma separated properties having meaning only to the factory.
Ehcache comes with a built-in RMI-based distribution system with two means of discovery of
CacheManager peers participating in the cluster:
* automatic, using a multicast group. This one automatically discovers peers and detects
changes such as peers entering and leaving the group
* manual, using manual rmiURL configuration. A hardcoded list of peers is provided at
configuration time.
Configuring Automatic Discovery:
Automatic discovery is configured as per the following example:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
multicastGroupPort=4446, timeToLive=32"/>
Valid properties are:
* peerDiscovery (mandatory) - specify "automatic"
* multicastGroupAddress (mandatory) - specify a valid multicast group address
* multicastGroupPort (mandatory) - specify a dedicated port for the multicast heartbeat
traffic
* timeToLive - specify a value between 0 and 255 which determines how far the packets will
propagate.
By convention, the restrictions are:
0 - the same host
1 - the same subnet
32 - the same site
64 - the same region
128 - the same continent
255 - unrestricted
Configuring Manual Discovery:
Manual discovery is configured as per the following example:
<cacheManagerPeerProviderFactory class=
"net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=manual,
rmiUrls=//server1:40000/sampleCache1|//server2:40000/sampleCache1
| //server1:40000/sampleCache2|//server2:40000/sampleCache2"
propertySeparator="," />
Valid properties are:
* peerDiscovery (mandatory) - specify "manual"
* rmiUrls (mandatory) - specify a pipe separated list of rmiUrls, in the form
//hostname:port
The hostname is the hostname of the remote CacheManager peer. The port is the listening
port of the RMICacheManagerPeerListener of the remote CacheManager peer.
Configuring JGroups replication:
<cacheManagerPeerProviderFactory class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="connect=UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_ttl=32;
mcast_send_buf_size=150000;mcast_recv_buf_size=80000):
PING(timeout=2000;num_initial_members=6):
MERGE2(min_interval=5000;max_interval=10000):
FD_SOCK:VERIFY_SUSPECT(timeout=1500):
pbcast.NAKACK(gc_lag=10;retransmit_timeout=3000):
UNICAST(timeout=5000):
pbcast.STABLE(desired_avg_gossip=20000):
FRAG:
pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=false)"
propertySeparator="::"
/>
The only property necessary is the connect String used by JGroups to configure itself. Refer to the JGroups documentation for an explanation
of all the protocols. The example above uses UDP multicast. If the connect property is not specified, the default JGroups connection will
be used.
-->
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=automatic,
multicastGroupAddress=230.0.0.1,
multicastGroupPort=4446, timeToLive=1"
propertySeparator=","
/>
<!--
CacheManagerPeerListener
========================
(Enable for distributed operation)
Specifies a CacheManagerPeerListenerFactory which will be used to create a
CacheManagerPeerListener, which
listens for messages from cache replicators participating in the cluster.
The attributes of cacheManagerPeerListenerFactory are:
class - a fully qualified factory class name
properties - comma separated properties having meaning only to the factory.
Ehcache comes with a built-in RMI-based distribution system. The listener component is
RMICacheManagerPeerListener which is configured using
RMICacheManagerPeerListenerFactory. It is configured as per the following example:
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="hostName=fully_qualified_hostname_or_ip,
port=40001,
socketTimeoutMillis=120000"
propertySeparator="," />
All properties are optional. They are:
* hostName - the hostName of the host the listener is running on. Specify
where the host is multihomed and you want to control the interface over which cluster
messages are received. Defaults to the host name of the default interface if not
specified.
* port - the port the RMI Registry listener listens on. This defaults to a free port if not specified.
* remoteObjectPort - the port number on which the remote objects bound in the registry receive calls.
This defaults to a free port if not specified.
* socketTimeoutMillis - the number of ms client sockets will stay open when sending
messages to the listener. This should be long enough for the slowest message.
If not specified, it defaults to 120000 ms.
-->
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"/>
<!--
Cache configuration
===================
The following attributes are required.
name:
Sets the name of the cache. This is used to identify the cache. It must be unique.
maxElementsInMemory:
Sets the maximum number of objects that will be created in memory
maxElementsOnDisk:
Sets the maximum number of objects that will be maintained in the DiskStore.
The default value is zero, meaning unlimited.
eternal:
Sets whether elements are eternal. If eternal, timeouts are ignored and the
element is never expired.
overflowToDisk:
Sets whether elements can overflow to disk when the memory store
has reached the maxElementsInMemory limit.
The following attributes and elements are optional.
timeToIdleSeconds:
Sets the time to idle for an element before it expires,
i.e. the maximum amount of time between accesses before an element expires.
Used only if the element is not eternal.
Optional attribute. A value of 0 means that an element can idle indefinitely.
The default value is 0.
timeToLiveSeconds:
Sets the time to live for an element before it expires,
i.e. the maximum time between creation and when an element expires.
Used only if the element is not eternal.
Optional attribute. A value of 0 means that an element can live indefinitely.
The default value is 0.
diskPersistent:
Whether the disk store persists between restarts of the Virtual Machine.
The default value is false.
diskExpiryThreadIntervalSeconds:
The number of seconds between runs of the disk expiry thread. The default value
is 120 seconds.
diskSpoolBufferSizeMB:
This is the size to allocate to the DiskStore for a spool buffer. Writes are made
to this area and then asynchronously written to disk. The default size is 30 MB.
Each spool buffer is used only by its cache. If you get OutOfMemory errors, consider
lowering this value. To improve DiskStore performance, consider increasing it. Trace-level
logging in the DiskStore will show whether put backups are occurring.
memoryStoreEvictionPolicy:
The policy enforced upon reaching the maxElementsInMemory limit. The default
policy is Least Recently Used (specified as LRU). The other available policies are
First In First Out (specified as FIFO) and Less Frequently Used
(specified as LFU).
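The attributes above can also be set programmatically. A minimal Java sketch,
assuming the Ehcache 1.x API (the cache name and values here are illustrative only):

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    // Create a manager from ehcache.xml on the classpath, then register a cache
    // equivalent to maxElementsInMemory=10000, overflowToDisk=false, eternal=false,
    // timeToLiveSeconds=600, timeToIdleSeconds=300.
    CacheManager manager = CacheManager.create();
    Cache cache = new Cache("exampleCache", 10000, false, false, 600, 300);
    manager.addCache(cache);
    cache.put(new Element("key", "value"));
    Element hit = cache.get("key"); // null once time-to-live / time-to-idle has elapsed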
Cache elements can also contain sub elements which take the same format of a factory class
and properties. Defined sub-elements are:
* cacheEventListenerFactory - Enables registration of listeners for cache events, such as
put, remove, update, and expire.
* bootstrapCacheLoaderFactory - Specifies a BootstrapCacheLoader, which is called by a
cache on initialisation to prepopulate itself.
* cacheExtensionFactory - Specifies a CacheExtension, a generic mechanism to tie a class
which holds a reference to a cache to the cache lifecycle.
* cacheExceptionHandlerFactory - Specifies a CacheExceptionHandler, which is called when
cache exceptions occur.
* cacheLoaderFactory - Specifies a CacheLoader, which can be used both asynchronously and
synchronously to load objects into a cache.
RMI Cache Replication
Each cache that will be distributed needs to set a cache event listener which replicates
messages to the other CacheManager peers. For the built-in RMI implementation this is done
by adding a cacheEventListenerFactory element of type RMICacheReplicatorFactory to each
distributed cache's configuration as per the following example:
<cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="replicateAsynchronously=true,
replicatePuts=true,
replicateUpdates=true,
replicateUpdatesViaCopy=true,
replicateRemovals=true,
asynchronousReplicationIntervalMillis=<number of milliseconds>"
propertySeparator="," />
The RMICacheReplicatorFactory recognises the following properties:
* replicatePuts=true|false - whether new elements placed in a cache are
replicated to others. Defaults to true.
* replicateUpdates=true|false - whether new elements which override an
element already existing with the same key are replicated. Defaults to true.
* replicateRemovals=true|false - whether element removals are replicated. Defaults to true.
* replicateAsynchronously=true | false - whether replications are
asynchronous (true) or synchronous (false). Defaults to true.
* replicateUpdatesViaCopy=true | false - whether the new elements are
copied to other caches (true), or whether a remove message is sent. Defaults to true.
* asynchronousReplicationIntervalMillis=<number of milliseconds> - The asynchronous
replicator runs at a set interval of milliseconds. The default is 1000. The minimum
is 10. This property is only applicable if replicateAsynchronously=true.
For JGroups replication this is done with:
<cacheEventListenerFactory class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=true,
replicateUpdates=true, replicateUpdatesViaCopy=false,
replicateRemovals=true,asynchronousReplicationIntervalMillis=1000"/>
This listener supports the same properties as the RMICacheReplicatorFactory.
Cluster Bootstrapping
The RMIBootstrapCacheLoader bootstraps caches in clusters where RMICacheReplicators are
used. It is configured as per the following example:
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
properties="bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000"
propertySeparator="," />
The RMIBootstrapCacheLoaderFactory recognises the following optional properties:
* bootstrapAsynchronously=true|false - whether the bootstrap happens in the background
after the cache has started. If false, bootstrapping must complete before the cache is
made available. The default value is true.
* maximumChunkSizeBytes=<integer> - Caches can potentially be very large, larger than the
memory limits of the VM. This property allows the bootstrapper to fetch elements in
chunks. The default chunk size is 5000000 (5 MB).
Cache Exception Handling
By default, most cache operations will propagate a runtime CacheException on failure. An
interceptor, using a dynamic proxy, may be configured so that a CacheExceptionHandler
intercepts Exceptions. Errors are not intercepted.
It is configured as per the following example:
<cacheExceptionHandlerFactory class="com.example.ExampleExceptionHandlerFactory"
properties="logLevel=FINE"/>
Caches with ExceptionHandling configured are not of type Cache, but are of type Ehcache only,
and are not available using CacheManager.getCache(), but using CacheManager.getEhcache().
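A minimal Java sketch of retrieving such a cache (the cache name is hypothetical):

    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Ehcache;

    // A cache with an exception handler is wrapped in a dynamic proxy, so it must be
    // fetched as an Ehcache via getEhcache(), not as a Cache via getCache().
    Ehcache decorated = CacheManager.getInstance().getEhcache("someCache");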
Cache Loader
A default CacheLoader may be set which loads objects into the cache through asynchronous and
synchronous methods on Cache. This is different from the bootstrap cache loader, which is used
only in distributed caching.
It is configured as per the following example:
<cacheLoaderFactory class="com.example.ExampleCacheLoaderFactory"
properties="type=int,startCounter=10"/>
Cache Extension
CacheExtensions are a general purpose mechanism to allow generic extensions to a Cache.
CacheExtensions are tied into the Cache lifecycle.
CacheExtensions are created using the CacheExtensionFactory, which has a
<code>createCacheExtension()</code> method that takes a Cache and properties as
parameters. It can thus call back into any public method on Cache, including, of
course, the load methods.
Extensions are added as per the following example:
<cacheExtensionFactory class="com.example.FileWatchingCacheRefresherExtensionFactory"
properties="refreshIntervalMillis=18000, loaderTimeout=3000,
flushPeriod=whatever, someOtherProperty=someValue ..."/>
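As a rough sketch of the shape such a factory takes (assuming the Ehcache 1.x
extension API; this is a do-nothing illustration, not an implementation of the
FileWatchingCacheRefresherExtensionFactory named above):

    import java.util.Properties;
    import net.sf.ehcache.CacheException;
    import net.sf.ehcache.Ehcache;
    import net.sf.ehcache.Status;
    import net.sf.ehcache.extension.CacheExtension;
    import net.sf.ehcache.extension.CacheExtensionFactory;

    public class NoOpCacheExtensionFactory extends CacheExtensionFactory {
        public CacheExtension createCacheExtension(final Ehcache cache, Properties properties) {
            return new CacheExtension() {
                private Status status = Status.STATUS_UNINITIALISED;
                public void init() { status = Status.STATUS_ALIVE; }         // cache coming up
                public void dispose() throws CacheException { status = Status.STATUS_SHUTDOWN; }
                public CacheExtension clone(Ehcache newCache) throws CloneNotSupportedException {
                    throw new CloneNotSupportedException();                   // one instance per cache
                }
                public Status getStatus() { return status; }
            };
        }
    }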
-->
<!--
Mandatory Default Cache configuration. These settings will be applied to caches
created programmatically using CacheManager.addCache(String cacheName).
The defaultCache has an implicit name "default" which is a reserved cache name.
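For instance (a minimal sketch; "myNewCache" is a hypothetical name):

    // Caches added by name alone are created from the defaultCache template below.
    CacheManager manager = CacheManager.create();
    manager.addCache("myNewCache");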
-->
<defaultCache
maxElementsInMemory="10000"
eternal="false"
timeToIdleSeconds="120"
timeToLiveSeconds="120"
overflowToDisk="false"
diskSpoolBufferSizeMB="30"
maxElementsOnDisk="10000000"
diskPersistent="false"
diskExpiryThreadIntervalSeconds="120"
memoryStoreEvictionPolicy="LRU"
/>
<!--
Sample caches. Following are some example caches. Remove these before use.
-->
<!--
Sample cache named sampleCache1
This cache holds a maximum of 10000 elements in memory, and will expire
an element if it is idle for more than 5 minutes or has lived for more than
10 minutes.
If there are more than 10000 elements it will overflow to the
disk cache, which in this configuration is kept wherever java.io.tmpdir is
defined on your system. On a standard Linux system this will be /tmp.
-->
<!--
<cache name="sampleCache1"
maxElementsInMemory="10000"
maxElementsOnDisk="1000"
eternal="false"
overflowToDisk="true"
diskSpoolBufferSizeMB="20"
timeToIdleSeconds="300"
timeToLiveSeconds="600"
memoryStoreEvictionPolicy="LFU"
/>
-->
<!--
Sample cache named sampleCache2
This cache has a maximum of 1000 elements in memory. There is no overflow to disk, so 1000
is also the maximum cache size. Note that when a cache is eternal, timeToLive and
timeToIdle are not used and do not need to be specified.
-->
<!--
<cache name="sampleCache2"
maxElementsInMemory="1000"
eternal="true"
overflowToDisk="false"
memoryStoreEvictionPolicy="FIFO"
/>
-->
<!--
Sample cache named sampleCache3. This cache overflows to disk. The disk store is
persistent between cache and VM restarts. The disk expiry thread interval is set to 10
minutes, overriding the default of 2 minutes.
-->
<!--
<cache name="sampleCache3"
maxElementsInMemory="500"
eternal="false"
overflowToDisk="true"
timeToIdleSeconds="300"
timeToLiveSeconds="600"
diskPersistent="true"
diskExpiryThreadIntervalSeconds="600"
memoryStoreEvictionPolicy="LFU"
/>
-->
<!--
Sample distributed cache named sampleDistributedCache1.
This cache replicates using defaults.
It also bootstraps from the cluster, using default properties.
-->
<!--
<cache name="sampleDistributedCache1"
maxElementsInMemory="10"
eternal="false"
timeToIdleSeconds="100"
timeToLiveSeconds="100"
overflowToDisk="false">
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"/>
</cache>
-->
<!--
Sample distributed cache named sampleDistributedCache2.
This cache replicates using specific properties.
It only replicates updates, and does so synchronously via copy.
-->
<!--
<cache name="sampleDistributedCache2"
maxElementsInMemory="10"
eternal="false"
timeToIdleSeconds="100"
timeToLiveSeconds="100"
overflowToDisk="false">
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="replicateAsynchronously=false, replicatePuts=false,
replicateUpdates=true, replicateUpdatesViaCopy=true,
replicateRemovals=false"/>
</cache>
-->
<!--
Sample distributed cache named sampleDistributedCache3.
This cache replicates using defaults except that the asynchronous replication
interval is set to 200ms.
-->
<!--
<cache name="sampleDistributedCache3"
maxElementsInMemory="10"
eternal="false"
timeToIdleSeconds="100"
timeToLiveSeconds="100"
overflowToDisk="false">
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="asynchronousReplicationIntervalMillis=200"/>
</cache>
-->
</ehcache>

View File

@ -1,90 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="false">
<!-- ================================= -->
<!-- Preserve messages in a local file -->
<!-- ================================= -->
<!-- A time/date based rolling appender -->
<appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
<param name="Append" value="true"/>
<param name="Threshold" value="DEBUG"/>
<rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
<param name="FileNamePattern" value="/var/log/cloud/cloud.log.%d{yyyy-MM-dd}{GMT}.gz"/>
<param name="ActiveFileName" value="/var/log/cloud/cloud.log"/>
</rollingPolicy>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ISO8601}{GMT} %-5p [%c{3}] (%t:%x) %m%n"/>
</layout>
</appender>
<appender name="APISERVER" class="org.apache.log4j.rolling.RollingFileAppender">
<param name="Append" value="true"/>
<param name="Threshold" value="DEBUG"/>
<rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
<param name="FileNamePattern" value="/var/log/cloud/api-server.log.%d{yyyy-MM-dd}{GMT}.gz"/>
<param name="ActiveFileName" value="/var/log/cloud/api-server.log"/>
</rollingPolicy>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ISO8601}{GMT} %m%n"/>
</layout>
</appender>
<!-- ============================== -->
<!-- Append messages to the console -->
<!-- ============================== -->
<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
<param name="Target" value="System.out"/>
<param name="Threshold" value="INFO"/>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ABSOLUTE}{GMT} %5p %c{1}:%L - %m%n"/>
</layout>
</appender>
<!-- ================ -->
<!-- Limit categories -->
<!-- ================ -->
<category name="com.cloud">
<priority value="DEBUG"/>
</category>
<!-- Limit the org.apache category to INFO as its DEBUG is verbose -->
<category name="org.apache">
<priority value="INFO"/>
</category>
<category name="org">
<priority value="INFO"/>
</category>
<category name="net">
<priority value="INFO"/>
</category>
<category name="apiserver.com.cloud">
<priority value="DEBUG"/>
</category>
<logger name="apiserver.com.cloud" additivity="false">
<level value="DEBUG"/>
<appender-ref ref="APISERVER"/>
</logger>
<!-- ======================= -->
<!-- Setup the Root category -->
<!-- ======================= -->
<root>
<level value="INFO"/>
<appender-ref ref="CONSOLE"/>
<appender-ref ref="FILE"/>
</root>
</log4j:configuration>

View File

@ -1,90 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="false">
<!-- ================================= -->
<!-- Preserve messages in a local file -->
<!-- ================================= -->
<!-- A time/date based rolling appender -->
<appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
<param name="Append" value="true"/>
<param name="Threshold" value="DEBUG"/>
<rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
<param name="FileNamePattern" value="@logdir@/cloud.log.%d{yyyy-MM-dd}{GMT}.gz"/>
<param name="ActiveFileName" value="@logdir@/cloud.log"/>
</rollingPolicy>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ISO8601}{GMT} %-5p [%c{3}] (%t:%x) %m%n"/>
</layout>
</appender>
<appender name="APISERVER" class="org.apache.log4j.rolling.RollingFileAppender">
<param name="Append" value="true"/>
<param name="Threshold" value="DEBUG"/>
<rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
<param name="FileNamePattern" value="@logdir@/api-server.log.%d{yyyy-MM-dd}{GMT}.gz"/>
<param name="ActiveFileName" value="@logdir@/api-server.log"/>
</rollingPolicy>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ISO8601}{GMT} %m%n"/>
</layout>
</appender>
<!-- ============================== -->
<!-- Append messages to the console -->
<!-- ============================== -->
<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
<param name="Target" value="System.out"/>
<param name="Threshold" value="INFO"/>
<layout class="org.apache.log4j.EnhancedPatternLayout">
<param name="ConversionPattern" value="%d{ABSOLUTE}{GMT} %5p %c{1}:%L - %m%n"/>
</layout>
</appender>
<!-- ================ -->
<!-- Limit categories -->
<!-- ================ -->
<category name="com.cloud">
<priority value="DEBUG"/>
</category>
<!-- Limit the org.apache category to INFO as its DEBUG is verbose -->
<category name="org.apache">
<priority value="INFO"/>
</category>
<category name="org">
<priority value="INFO"/>
</category>
<category name="net">
<priority value="INFO"/>
</category>
<category name="apiserver.com.cloud">
<priority value="DEBUG"/>
</category>
<logger name="apiserver.com.cloud" additivity="false">
<level value="DEBUG"/>
<appender-ref ref="APISERVER"/>
</logger>
<!-- ======================= -->
<!-- Setup the Root category -->
<!-- ======================= -->
<root>
<level value="INFO"/>
<appender-ref ref="CONSOLE"/>
<appender-ref ref="FILE"/>
</root>
</log4j:configuration>

View File

@ -1,147 +0,0 @@
<?xml version='1.0' encoding='utf-8'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- Note: A "Server" is not itself a "Container", so you may not
define subcomponents such as "Valves" at this level.
Documentation at /docs/config/server.html
-->
<Server port="8005" shutdown="SHUTDOWN">
<!--APR library loader. Documentation at /docs/apr.html -->
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
<!--Initialize Jasper prior to webapps being loaded. Documentation at /docs/jasper-howto.html -->
<Listener className="org.apache.catalina.core.JasperListener" />
<!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
<Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<!-- Global JNDI resources
Documentation at /docs/jndi-resources-howto.html
-->
<GlobalNamingResources>
<!-- Editable user database that can also be used by
UserDatabaseRealm to authenticate users
-->
<Resource name="UserDatabase" auth="Container"
type="org.apache.catalina.UserDatabase"
description="User database that can be updated and saved"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>
<!-- A "Service" is a collection of one or more "Connectors" that share
a single "Container" Note: A "Service" is not itself a "Container",
so you may not define subcomponents such as "Valves" at this level.
Documentation at /docs/config/service.html
-->
<Service name="Catalina">
<!--The connectors can use a shared executor, you can define one or more named thread pools-->
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="150" minSpareThreads="25"/>
<!-- A "Connector" represents an endpoint by which requests are received
and responses are returned. Documentation at :
Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
Java AJP Connector: /docs/config/ajp.html
APR (HTTP/AJP) Connector: /docs/apr.html
Define a non-SSL HTTP/1.1 Connector on port 8080
-->
<!--
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
-->
<!-- A "Connector" using the shared thread pool-->
<Connector executor="tomcatThreadPool"
port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
connectionTimeout="20000" disableUploadTimeout="true"
acceptCount="150" enableLookups="false" maxThreads="150"
maxHttpHeaderSize="8192" redirectPort="8443" />
<!-- Define a SSL HTTP/1.1 Connector on port 8443
This connector uses the JSSE configuration, when using APR, the
connector should be using the OpenSSL style configuration
described in the APR documentation -->
<!--
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS"
keystoreType="PKCS12"
keystoreFile="conf\cloud-localhost.pk12"
keystorePass="password"
/>
-->
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
<!-- An Engine represents the entry point (within Catalina) that processes
every request. The Engine implementation for Tomcat stand alone
analyzes the HTTP headers included with the request, and passes them
on to the appropriate Host (virtual host).
Documentation at /docs/config/engine.html -->
<!-- You should set jvmRoute to support load-balancing via AJP ie :
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
-->
<Engine name="Catalina" defaultHost="localhost">
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<!-- The request dumper valve dumps useful debugging information about
the request and response data received and sent by Tomcat.
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.valves.RequestDumperValve"/>
-->
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase". Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
<!-- Define the default virtual host
Note: XML Schema validation will not work with Xerces 2.2.
-->
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true"
xmlValidation="false" xmlNamespaceAware="false">
<!-- SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log valve: processes all requests for this Host.
Documentation at: /docs/config/valve.html -->
<Valve className="org.apache.catalina.valves.FastCommonAccessLogValve" directory="logs"
prefix="access_log." suffix=".txt" pattern="common" resolveHosts="false"/>
</Host>
</Engine>
</Service>
</Server>

View File

@ -72,7 +72,6 @@
<include name="${agent.jar}" />
<include name="${utils.jar}" />
<include name="${core.jar}" />
<include name="${api.jar}" />
</zipfileset>
<zipfileset dir="${agent.dist.dir}" filemode="770">
@ -134,7 +133,9 @@
<include name="**/*" />
</fileset>
<lib dir="${jar.dir}">
<include name="cloud-*.jar" />
<include name="${core.jar}" />
<include name="${utils.jar}" />
<include name="${server.jar}" />
</lib>
<zipfileset dir="${scripts.target.dir}" prefix="WEB-INF/lib/scripts" filemode="555">
<include name="**/*" />
@ -164,7 +165,6 @@
<include name="${agent.jar}" />
<include name="${utils.jar}" />
<include name="${core.jar}" />
<include name="${api.jar}" />
</zipfileset>
<zipfileset dir="${console-proxy.dist.dir}">
<exclude name="**/*.sh" />
@ -232,8 +232,7 @@
</delete>
</target>
<!-- The following target is OBSOLETE. If you need to add a jar file / target, go to the function def runant(target): in wscript_build, and list the jar file and the target in the appropriate places -->
<target name="sendjarfiles" depends="compile-utils, compile-core, compile-server, compile-agent, compile-console-common, compile-console-proxy, build-console-viewer">
<target name="sendjarfiles" depends="compile-utils, compile-core, compile-server, compile-agent, compile-console-common, compile-console-proxy, build-console-viewer">
<copy todir="${waf.artifacts}">
<fileset dir="${target.dir}/jar"/>
</copy>

View File

@ -70,8 +70,6 @@ registerIso=com.cloud.api.commands.RegisterIsoCmd;15
updateIso=com.cloud.api.commands.UpdateIsoCmd;15
deleteIso=com.cloud.api.commands.DeleteIsoCmd;15
copyIso=com.cloud.api.commands.CopyIsoCmd;15
updateIsoPermissions=com.cloud.api.commands.UpdateIsoPermissionsCmd;15
listIsoPermissions=com.cloud.api.commands.ListIsoPermissionsCmd;15
#### guest OS commands
listOsTypes=com.cloud.api.commands.ListGuestOsCmd;15
@ -138,7 +136,6 @@ listSystemVms=com.cloud.api.commands.ListSystemVMsCmd;1
#### configuration commands
updateConfiguration=com.cloud.api.commands.UpdateCfgCmd;1
listConfigurations=com.cloud.api.commands.ListCfgsByCmd;1
addConfig=com.cloud.api.commands.AddConfigCmd;15
#### pod commands
createPod=com.cloud.api.commands.CreatePodCmd;1
@ -195,8 +192,6 @@ createStoragePool=com.cloud.api.commands.CreateStoragePoolCmd;1
updateStoragePool=com.cloud.api.commands.UpdateStoragePoolCmd;1
deleteStoragePool=com.cloud.api.commands.DeletePoolCmd;1
listClusters=com.cloud.api.commands.ListClustersCmd;1
enableStorageMaintenance=com.cloud.api.commands.PreparePrimaryStorageForMaintenanceCmd;1
cancelStorageMaintenance=com.cloud.api.commands.CancelPrimaryStorageMaintenanceCmd;1
#### network group commands
createNetworkGroup=com.cloud.api.commands.CreateNetworkGroupCmd;11

View File

@ -34,10 +34,7 @@
<param name="cache.size">50</param>
<param name="cache.time.to.live">-1</param>
</dao>
<dao name="DiskOffering" class="com.cloud.storage.dao.DiskOfferingDaoImpl">
<param name="cache.size">50</param>
<param name="cache.time.to.live">-1</param>
</dao>
<dao name="DiskOffering" class="com.cloud.storage.dao.DiskOfferingDaoImpl"/>
<dao name="host zone" class="com.cloud.dc.dao.DataCenterDaoImpl">
<param name="cache.size">50</param>
<param name="cache.time.to.live">-1</param>
@ -80,6 +77,7 @@
<dao name="Network Group Rules" class="com.cloud.network.security.dao.NetworkGroupRulesDaoImpl"/>
<dao name="Network Group Work" class="com.cloud.network.security.dao.NetworkGroupWorkDaoImpl"/>
<dao name="Network Group VM Ruleset log" class="com.cloud.network.security.dao.VmRulesetLogDaoImpl"/>
<dao name="Pricing" class="com.cloud.pricing.dao.PricingDaoImpl"/>
<dao name="Alert" class="com.cloud.alert.dao.AlertDaoImpl"/>
<dao name="Capacity" class="com.cloud.capacity.dao.CapacityDaoImpl"/>
<dao name="Domain" class="com.cloud.domain.dao.DomainDaoImpl"/>
@ -146,7 +144,6 @@
<adapters key="com.cloud.resource.Discoverer">
<adapter name="SecondaryStorage" class="com.cloud.storage.secondary.SecondaryStorageDiscoverer"/>
<adapter name="XenServer" class="com.cloud.hypervisor.xen.discoverer.XcpServerDiscoverer"/>
</adapters>
<manager name="Cluster Manager" class="com.cloud.cluster.DummyClusterManagerImpl">
@ -224,7 +221,6 @@
<param name="cache.time.to.live">-1</param>
</dao>
<dao name="DiskOffering configuration server" class="com.cloud.storage.dao.DiskOfferingDaoImpl"/>
<dao name="Snapshot policy defaults" class="com.cloud.storage.dao.SnapshotPolicyDaoImpl"/>
<dao name="Events configuration server" class="com.cloud.event.dao.EventDaoImpl"/>
</configuration-server>

View File

@ -4,7 +4,7 @@
# DISABLE the post-percentinstall java repacking and line number stripping
# we need to find a way to just disable the java repacking and line number stripping, but not the autodeps
%define _ver 2.1.98
%define _ver 2.1.2.1
%define _rel 1
Name: cloud
@ -35,7 +35,7 @@ BuildRequires: jpackage-utils
BuildRequires: gcc
BuildRequires: glibc-devel
%global _premium %(tar jtvmf %{SOURCE0} '*/cloudstack-proprietary/' --occurrence=1 2>/dev/null | wc -l)
%global _premium %(tar jtvmf %{SOURCE0} '*/premium/' --occurrence=1 2>/dev/null | wc -l)
%description
This is the Cloud.com Stack, a highly-scalable elastic, open source,
@ -190,22 +190,6 @@ Group: System Environment/Libraries
%description setup
The Cloud.com setup tools let you set up your Management Server and Usage Server.
%package agent-libs
Summary: Cloud.com agent libraries
Requires: java >= 1.6.0
Requires: %{name}-utils = %{version}-%{release}, %{name}-core = %{version}-%{release}, %{name}-deps = %{version}-%{release}
Requires: commons-httpclient
#Requires: commons-codec
Requires: commons-collections
Requires: commons-pool
Requires: commons-dbcp
Requires: jakarta-commons-logging
Requires: jpackage-utils
Group: System Environment/Libraries
%description agent-libs
The Cloud.com agent libraries are used by the Cloud Agent and the Cloud
Console Proxy.
%package agent
Summary: Cloud.com agent
Obsoletes: vmops-agent < %{version}-%{release}
@ -213,10 +197,8 @@ Obsoletes: vmops-console < %{version}-%{release}
Obsoletes: cloud-console < %{version}-%{release}
Requires: java >= 1.6.0
Requires: %{name}-utils = %{version}-%{release}, %{name}-core = %{version}-%{release}, %{name}-deps = %{version}-%{release}
Requires: %{name}-agent-libs = %{version}-%{release}
Requires: %{name}-agent-scripts = %{version}-%{release}
Requires: %{name}-vnet = %{version}-%{release}
Requires: python
Requires: %{name}-python = %{version}-%{release}
Requires: commons-httpclient
#Requires: commons-codec
@ -235,8 +217,6 @@ Requires: libcgroup
Requires: /usr/bin/uuidgen
Requires: augeas >= 0.7.1
Requires: rsync
Requires: /bin/egrep
Requires: /sbin/ip
Group: System Environment/Libraries
%description agent
The Cloud.com agent is in charge of managing shared computing resources in
@ -246,9 +226,7 @@ will participate in your cloud.
%package console-proxy
Summary: Cloud.com console proxy
Requires: java >= 1.6.0
Requires: %{name}-utils = %{version}-%{release}, %{name}-core = %{version}-%{release}, %{name}-deps = %{version}-%{release}, %{name}-agent-libs = %{version}-%{release}
Requires: python
Requires: %{name}-python = %{version}-%{release}
Requires: %{name}-utils = %{version}-%{release}, %{name}-core = %{version}-%{release}, %{name}-deps = %{version}-%{release}, %{name}-agent = %{version}-%{release}
Requires: commons-httpclient
#Requires: commons-codec
Requires: commons-collections
@ -261,8 +239,6 @@ Requires: /sbin/service
Requires: /sbin/chkconfig
Requires: /usr/bin/uuidgen
Requires: augeas >= 0.7.1
Requires: /bin/egrep
Requires: /sbin/ip
Group: System Environment/Libraries
%description console-proxy
The Cloud.com console proxy is the service in charge of granting console
@ -445,9 +421,7 @@ fi
%files utils
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-utils.jar
%{_javadir}/%{name}-api.jar
%doc %{_docdir}/%{name}-%{version}/sccs-info
%doc %{_docdir}/%{name}-%{version}/version-info
%doc %{_docdir}/%{name}-%{version}/configure-info
%doc README
%doc HACKING
@ -485,17 +459,19 @@ fi
%{_libdir}/%{name}/agent/scripts/installer/*
%{_libdir}/%{name}/agent/scripts/network/domr/*.sh
%{_libdir}/%{name}/agent/scripts/storage/*.sh
%{_libdir}/%{name}/agent/scripts/storage/zfs/*
%{_libdir}/%{name}/agent/scripts/storage/qcow2/*
%{_libdir}/%{name}/agent/scripts/storage/secondary/*
%{_libdir}/%{name}/agent/scripts/util/*
%{_libdir}/%{name}/agent/scripts/vm/*.sh
%{_libdir}/%{name}/agent/scripts/vm/storage/nfs/*
%{_libdir}/%{name}/agent/scripts/vm/storage/iscsi/*
%{_libdir}/%{name}/agent/scripts/vm/network/*
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/*.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/kvm/*
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xen/*
%{_libdir}/%{name}/agent/vms/systemvm.zip
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/*
%{_libdir}/%{name}/agent/vms/systemvm-premium.zip
%doc README
%doc HACKING
%doc debian/copyright
@ -617,15 +593,9 @@ fi
%doc HACKING
%doc debian/copyright
%files agent-libs
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-agent.jar
%doc README
%doc HACKING
%doc debian/copyright
%files agent
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-agent.jar
%config(noreplace) %{_sysconfdir}/%{name}/agent/agent.properties
%config %{_sysconfdir}/%{name}/agent/developer.properties.template
%config %{_sysconfdir}/%{name}/agent/environment.properties

View File

@ -2,19 +2,6 @@
import sys, os, subprocess, errno, re
# ---- This snippet of code adds the sources path and the waf configured PYTHONDIR to the Python path ----
# ---- We do this so cloud_utils can be looked up in the following order:
# ---- 1) Sources directory
# ---- 2) waf configured PYTHONDIR
# ---- 3) System Python path
for pythonpath in (
"@PYTHONDIR@",
os.path.join(os.path.dirname(__file__),os.path.pardir,os.path.pardir,"python","lib"),
):
if os.path.isdir(pythonpath): sys.path.insert(0,pythonpath)
# ---- End snippet of code ----
import cloud_utils
E_GENERIC= 1
E_NOKVM = 2
E_NODEFROUTE = 3
@ -131,57 +118,98 @@ CentOS = os.path.exists("/etc/centos-release") or ( os.path.exists("/etc/redhat-
#--------------- procedure starts here ------------
def main():
servicename = "@PACKAGE@-console-proxy"
stderr("Welcome to the Cloud Console Proxy setup")
stderr("")
stderr("Welcome to the Cloud Console Proxy setup")
stderr("")
try:
check_hostname()
stderr("The hostname of this machine is properly set up")
except CalledProcessError,e:
bail(E_NOFQDN,"This machine does not have an FQDN (fully-qualified domain name) for a hostname")
try:
check_hostname()
stderr("The hostname of this machine is properly set up")
except CalledProcessError,e:
bail(E_NOFQDN,"This machine does not have an FQDN (fully-qualified domain name) for a hostname")
try:
service("@PACKAGE@-console-proxy","status")
except CalledProcessError,e:
stderr("Stopping the Cloud Console Proxy")
cloud_utils.stop_service(servicename)
m = service("@PACKAGE@-console-proxy","stop")
print m.stdout + m.stderr
stderr("Cloud Console Proxy stopped")
ports = "8002".split()
if Fedora or CentOS:
try:
o = chkconfig("--list","iptables")
if ":on" in o.stdout and os.path.exists("/etc/sysconfig/iptables"):
stderr("Setting up firewall rules to permit traffic to Cloud services")
service.iptables.start() ; print o.stdout + o.stderr
for p in ports: iptables("-I","INPUT","1","-p","tcp","--dport",p,'-j','ACCEPT')
o = service.iptables.save() ; print o.stdout + o.stderr
except CalledProcessError,e:
print e.stdout+e.stderr
bail(E_FWRECONFIGFAILED,"Firewall rules could not be set")
else:
stderr("Setting up firewall rules to permit traffic to Cloud services")
try:
for p in ports: ufw.allow(p)
stderr("Rules set")
except CalledProcessError,e:
print e.stdout+e.stderr
bail(E_FWRECONFIGFAILED,"Firewall rules could not be set")
stderr("Determining default route")
routes = ip.route().stdout.splitlines()
defaultroute = [ x for x in routes if x.startswith("default") ]
if not defaultroute: bail(E_NODEFROUTE,"Your network configuration does not have a default route")
dev = defaultroute[0].split()[4]
stderr("Default route assigned to device %s"%dev)
ports = "8002".split()
if Fedora or CentOS:
try:
o = chkconfig("--list","iptables")
if ":on" in o.stdout and os.path.exists("/etc/sysconfig/iptables"):
stderr("Setting up firewall rules to permit traffic to Cloud services")
service.iptables.start() ; print o.stdout + o.stderr
for p in ports: iptables("-I","INPUT","1","-p","tcp","--dport",p,'-j','ACCEPT')
o = service.iptables.save() ; print o.stdout + o.stderr
except CalledProcessError,e:
print e.stdout+e.stderr
bail(E_FWRECONFIGFAILED,"Firewall rules could not be set")
else:
stderr("Setting up firewall rules to permit traffic to Cloud services")
try:
for p in ports: ufw.allow(p)
stderr("Rules set")
except CalledProcessError,e:
print e.stdout+e.stderr
bail(E_FWRECONFIGFAILED,"Firewall rules could not be set")
stderr("We are going to enable ufw now. This may disrupt network connectivity and service availability. See the ufw documentation for information on how to manage ufw firewall policies.")
try:
o = ufw.enable < "y\n" ; print o.stdout + o.stderr
except CalledProcessError,e:
print e.stdout+e.stderr
bail(E_FWRECONFIGFAILED,"Firewall could not be enabled")
stderr("We are going to enable ufw now. This may disrupt network connectivity and service availability. See the ufw documentation for information on how to manage ufw firewall policies.")
try:
o = ufw.enable < "y\n" ; print o.stdout + o.stderr
except CalledProcessError,e:
print e.stdout+e.stderr
bail(E_FWRECONFIGFAILED,"Firewall could not be enabled")
cloud_utils.setup_consoleproxy_config("@CPSYSCONFDIR@/agent.properties")
stderr("Enabling and starting the Cloud Console Proxy")
cloud_utils.enable_service(servicename)
stderr("Cloud Console Proxy restarted")
stderr("Examining console-proxy configuration")
fn = "@CPSYSCONFDIR@/agent.properties"
text = file(fn).read(-1)
lines = [ s.strip() for s in text.splitlines() ]
confopts = dict([ m.split("=",1) for m in lines if "=" in m and not m.startswith("#") ])
confposes = dict([ (m.split("=",1)[0],n) for n,m in enumerate(lines) if "=" in m and not m.startswith("#") ])
if not "guid" in confopts:
stderr("Generating GUID for this console-proxy")
confopts['guid'] = uuidgen().stdout.strip()
try: host = confopts["host"]
except KeyError: host = "localhost"
stderr("Please enter the host name of the management server that this console-proxy will connect to: (just hit ENTER to go with %s)",host)
newhost = raw_input().strip()
if newhost: host = newhost
confopts["host"] = host
confopts["private.network.device"] = dev
confopts["public.network.device"] = dev
for opt,val in confopts.items():
line = "=".join([opt,val])
if opt not in confposes: lines.append(line)
else: lines[confposes[opt]] = line
text = "\n".join(lines)
try: file(fn,"w").write(text)
except Exception: bail(E_CPRECONFIGFAILED,"Console Proxy configuration failed")
stderr("")
stderr("Cloud Console Proxy setup completed successfully")
stderr("Starting the Cloud Console Proxy")
try:
m = service("@PACKAGE@-console-proxy","start")
print m.stdout + m.stderr
except CalledProcessError,e:
print e.stdout + e.stderr
bail(E_CPFAILEDTOSTART,"@PACKAGE@-console-proxy failed to start")
if __name__ == "__main__":
main()
# FIXMES: 1) nullify networkmanager on ubuntu (asking the user first) and enable the networking service permanently

View File

@ -19,11 +19,9 @@ pod=default
zone=default
#private.network.device= the private nic device
# if this is commented, it is autodetected on service startup
# private.network.device=cloudbr0
private.network.device=cloudbr0
#public.network.device= the public nic device
# if this is commented, it is autodetected on service startup
# public.network.device=cloudbr0
public.network.device=cloudbr0
#guid= a GUID to identify the agent

View File

@ -21,21 +21,7 @@ export CLASSPATH
set -e
cd "@CPLIBDIR@"
echo Current directory is "$PWD"
echo CLASSPATH to run the console proxy: "$CLASSPATH"
export PATH=/sbin:/usr/sbin:"$PATH"
SERVICEARGS=
for x in private public ; do
# note: grep must print the matching line (no -q) for the test below to work
configuration=`grep "^$x.network.device" "@CPSYSCONFDIR@"/agent.properties || true`
if [ -n "$configuration" ] ; then
echo "Using manually-configured network device $configuration"
else
defaultroute=`ip route | grep ^default | cut -d ' ' -f 5`
test -n "$defaultroute"
echo "Using auto-discovered network device $defaultroute which is the default route"
SERVICEARGS="$SERVICEARGS -D$x.network.device="$defaultroute
fi
done
echo CLASSPATH to run the agent: "$CLASSPATH"
function termagent() {
if [ "$agentpid" != "" ] ; then
@ -52,7 +38,7 @@ function termagent() {
trap termagent TERM
while true ; do
java -Xms128M -Xmx384M -cp "$CLASSPATH" $SERVICEARGS "$@" com.cloud.agent.AgentShell &
java -Xms128M -Xmx384M -cp "$CLASSPATH" "$@" com.cloud.agent.AgentShell &
agentpid=$!
echo "Console Proxy started. PID: $!" >&2
wait $agentpid

View File

@ -145,8 +145,7 @@ public class ConsoleProxyViewer implements java.lang.Runnable, RfbViewer, RfbPro
if(rfbThread.isAlive()) {
dropMe = true;
viewerInReuse = true;
if(rfb != null)
rfb.close();
rfb.close();
try {
rfbThread.join();
@ -159,7 +158,8 @@ public class ConsoleProxyViewer implements java.lang.Runnable, RfbViewer, RfbPro
dropMe = false;
rfbThread = new Thread(this);
rfbThread.setName("RFB Thread " + rfbThread.getId() + " >" + host + ":" + port);
rfbThread.setName("RFB Thread " + rfbThread.getId() + " >" + host + ":"
+ port);
rfbThread.start();
tileDirtyEvent = new Object();

View File

@ -17,6 +17,5 @@
<classpathentry kind="lib" path="/thirdparty/trilead-ssh2-build213.jar"/>
<classpathentry kind="lib" path="/thirdparty/commons-httpclient-3.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/commons-codec-1.4.jar"/>
<classpathentry combineaccessrules="false" kind="src" path="/api"/>
<classpathentry kind="output" path="bin"/>
</classpath>

View File

@ -35,14 +35,14 @@ import com.cloud.host.HostStats;
import com.cloud.host.HostVO;
import com.cloud.host.Status;
import com.cloud.host.Status.Event;
import com.cloud.offering.ServiceOffering;
import com.cloud.service.ServiceOffering;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.storage.StoragePoolVO;
import com.cloud.storage.VMTemplateVO;
import com.cloud.storage.VirtualMachineTemplate;
import com.cloud.uservm.UserVm;
import com.cloud.utils.Pair;
import com.cloud.utils.component.Manager;
import com.cloud.vm.UserVm;
import com.cloud.vm.VMInstanceVO;
/**

View File

@ -51,14 +51,13 @@ public class BackupSnapshotCommand extends SnapshotCommand {
Long accountId,
Long volumeId,
String snapshotUuid,
String snapshotName,
String prevSnapshotUuid,
String prevBackupUuid,
String firstBackupUuid,
boolean isFirstSnapshotOfRootVolume,
boolean isVolumeInactive)
{
super(primaryStoragePoolNameLabel, secondaryStoragePoolURL, snapshotUuid, snapshotName, dcId, accountId, volumeId);
super(primaryStoragePoolNameLabel, secondaryStoragePoolURL, snapshotUuid, dcId, accountId, volumeId);
this.prevSnapshotUuid = prevSnapshotUuid;
this.prevBackupUuid = prevBackupUuid;
this.firstBackupUuid = firstBackupUuid;

View File

@ -49,12 +49,11 @@ public class CreatePrivateTemplateFromSnapshotCommand extends SnapshotCommand {
Long accountId,
Long volumeId,
String backedUpSnapshotUuid,
String backedUpSnapshotName,
String origTemplateInstallPath,
Long newTemplateId,
String templateName)
{
super(primaryStoragePoolNameLabel, secondaryStoragePoolURL, backedUpSnapshotUuid, backedUpSnapshotName, dcId, accountId, volumeId);
super(primaryStoragePoolNameLabel, secondaryStoragePoolURL, backedUpSnapshotUuid, dcId, accountId, volumeId);
this.origTemplateInstallPath = origTemplateInstallPath;
this.newTemplateId = newTemplateId;
this.templateName = templateName;

View File

@ -51,10 +51,9 @@ public class CreateVolumeFromSnapshotCommand extends SnapshotCommand {
Long accountId,
Long volumeId,
String backedUpSnapshotUuid,
String backedUpSnapshotName,
String templatePath)
{
super(primaryStoragePoolNameLabel, secondaryStoragePoolURL, backedUpSnapshotUuid, backedUpSnapshotName, dcId, accountId, volumeId);
super(primaryStoragePoolNameLabel, secondaryStoragePoolURL, backedUpSnapshotUuid, dcId, accountId, volumeId);
this.templatePath = templatePath;
}

View File

@ -59,10 +59,9 @@ public class DeleteSnapshotBackupCommand extends SnapshotCommand {
Long accountId,
Long volumeId,
String backupUUID,
String backupName,
String childUUID)
{
super(primaryStoragePoolNameLabel, secondaryStoragePoolURL, backupUUID, backupName, dcId, accountId, volumeId);
super(primaryStoragePoolNameLabel, secondaryStoragePoolURL, backupUUID, dcId, accountId, volumeId);
this.childUUID = childUUID;
}

View File

@ -57,10 +57,9 @@ public class DeleteSnapshotsDirCommand extends SnapshotCommand {
Long dcId,
Long accountId,
Long volumeId,
String snapshotUUID,
String snapshotName)
String snapshotUUID)
{
super(primaryStoragePoolNameLabel, secondaryStoragePoolURL, snapshotUUID, snapshotName, dcId, accountId, volumeId);
super(primaryStoragePoolNameLabel, secondaryStoragePoolURL, snapshotUUID, dcId, accountId, volumeId);
}
}

View File

@ -33,12 +33,11 @@ public class ManageSnapshotCommand extends Command {
// Information about the snapshot
private String _snapshotPath = null;
private String _snapshotName = null;
private long _snapshotId;
private String _vmName = null;
private long _snapshotId;
public ManageSnapshotCommand() {}
public ManageSnapshotCommand(String commandSwitch, long snapshotId, String path, String snapshotName, String vmName) {
public ManageSnapshotCommand(String commandSwitch, long snapshotId, String path, String snapshotName) {
_commandSwitch = commandSwitch;
if (commandSwitch.equals(ManageSnapshotCommand.CREATE_SNAPSHOT)) {
_volumePath = path;
@ -47,8 +46,7 @@ public class ManageSnapshotCommand extends Command {
_snapshotPath = path;
}
_snapshotName = snapshotName;
_snapshotId = snapshotId;
_vmName = vmName;
_snapshotId = snapshotId;
}
@Override
@ -74,10 +72,6 @@ public class ManageSnapshotCommand extends Command {
public long getSnapshotId() {
return _snapshotId;
}
public String getVmName() {
return _vmName;
}
}

View File

@ -27,7 +27,6 @@ package com.cloud.agent.api;
public class SnapshotCommand extends Command {
private String primaryStoragePoolNameLabel;
private String snapshotUuid;
private String snapshotName;
private String secondaryStoragePoolURL;
private Long dcId;
private Long accountId;
@ -47,7 +46,6 @@ public class SnapshotCommand extends Command {
public SnapshotCommand(String primaryStoragePoolNameLabel,
String secondaryStoragePoolURL,
String snapshotUuid,
String snapshotName,
Long dcId,
Long accountId,
Long volumeId)
@ -58,7 +56,6 @@ public class SnapshotCommand extends Command {
this.dcId = dcId;
this.accountId = accountId;
this.volumeId = volumeId;
this.snapshotName = snapshotName;
}
/**
@ -75,10 +72,6 @@ public class SnapshotCommand extends Command {
return snapshotUuid;
}
public String getSnapshotName() {
return snapshotName;
}
/**
* @return the secondaryStoragePoolURL
*/

View File

@ -20,10 +20,10 @@ package com.cloud.agent.api;
import java.util.List;
import java.util.Map;
import com.cloud.offering.ServiceOffering;
import com.cloud.service.ServiceOffering;
import com.cloud.storage.VolumeVO;
import com.cloud.uservm.UserVm;
import com.cloud.vm.DomainRouter;
import com.cloud.vm.UserVm;
import com.cloud.vm.UserVmVO;
public class StartCommand extends AbstractStartCommand {

View File

@ -17,6 +17,7 @@
*/
package com.cloud.agent.api;
import com.cloud.vm.ConsoleProxyVO;
import com.cloud.vm.VirtualMachine;
public class StopCommand extends RebootCommand {
@ -26,7 +27,6 @@ public class StopCommand extends RebootCommand {
private String vncPort=null;
private String urlPort=null;
private String publicConsoleProxyIpAddress=null;
private String privateRouterIpAddress=null;
protected StopCommand() {
}
@ -42,17 +42,15 @@ public class StopCommand extends RebootCommand {
public StopCommand(VirtualMachine vm, String vnet) {
super(vm);
this.vnet = vnet;
this.mirroredVolumes = vm.isMirroredVols();
}
public StopCommand(VirtualMachine vm, String vmName, String vnet) {
super(vmName);
this.vnet = vnet;
}
public StopCommand(VirtualMachine vm, String vmName, String vnet, String privateRouterIpAddress) {
super(vmName);
this.vnet = vnet;
this.privateRouterIpAddress = privateRouterIpAddress;
if (vm != null) {
this.mirroredVolumes = vm.isMirroredVols();
}
}
public String getVnet() {
@ -87,9 +85,5 @@ public class StopCommand extends RebootCommand {
public String getPublicConsoleProxyIpAddress() {
return this.publicConsoleProxyIpAddress;
}
public String getPrivateRouterIpAddress() {
return privateRouterIpAddress;
}
}

View File

@ -0,0 +1,52 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.api;
import java.util.Collection;
import java.util.HashMap;
public class WatchNetworkAnswer extends Answer {
HashMap<String, Long> transmitted;
HashMap<String, Long> received;
protected WatchNetworkAnswer() {
}
public WatchNetworkAnswer(WatchNetworkCommand cmd) {
super(cmd);
transmitted = new HashMap<String, Long>();
received = new HashMap<String, Long>();
}
public void addStats(String vmName, long txn, long rcvd) {
transmitted.put(vmName, txn);
received.put(vmName, rcvd);
}
public long[] getStats(String vmName) {
long[] stats = new long[2];
stats[0] = transmitted.get(vmName);
stats[1] = received.get(vmName);
return stats;
}
public Collection<String> getAllVms() {
return transmitted.keySet();
}
}

View File

@ -17,29 +17,22 @@
*/
package com.cloud.agent.api;
public class NetworkUsageCommand extends Command {
private String privateIP;
public class WatchNetworkCommand extends Command implements CronCommand {
int interval;
protected NetworkUsageCommand() {
protected WatchNetworkCommand() {
}
public NetworkUsageCommand(String privateIP)
{
this.privateIP = privateIP;
}
public String getPrivateIP() {
return privateIP;
public WatchNetworkCommand(int interval) {
this.interval = interval;
}
public int getInterval() {
return interval;
}
/**
* {@inheritDoc}
*/
@Override
public boolean executeInSequence() {
return false;
}
}
}

View File

@ -18,10 +18,10 @@
package com.cloud.alert.dao;
import java.util.List;
import javax.ejb.Local;
import java.util.List;
import javax.ejb.Local;
import com.cloud.alert.AlertVO;
import com.cloud.utils.db.Filter;
import com.cloud.utils.db.GenericDaoBase;
@ -32,7 +32,7 @@ public class AlertDaoImpl extends GenericDaoBase<AlertVO, Long> implements Alert
@Override
public AlertVO getLastAlert(short type, long dataCenterId, Long podId) {
Filter searchFilter = new Filter(AlertVO.class, "createdDate", Boolean.FALSE, Long.valueOf(1), Long.valueOf(1));
SearchCriteria<AlertVO> sc = createSearchCriteria();
SearchCriteria sc = createSearchCriteria();
sc.addAnd("type", SearchCriteria.Op.EQ, Short.valueOf(type));
sc.addAnd("dataCenterId", SearchCriteria.Op.EQ, Long.valueOf(dataCenterId));

View File

@ -19,12 +19,12 @@
package com.cloud.async.dao;
import java.util.Date;
import java.util.List;
import javax.ejb.Local;
import org.apache.log4j.Logger;
import java.util.List;
import javax.ejb.Local;
import org.apache.log4j.Logger;
import com.cloud.async.AsyncJobResult;
import com.cloud.async.AsyncJobVO;
import com.cloud.utils.db.Filter;
@ -56,7 +56,7 @@ public class AsyncJobDaoImpl extends GenericDaoBase<AsyncJobVO, Long> implements
}
public AsyncJobVO findInstancePendingAsyncJob(String instanceType, long instanceId) {
SearchCriteria<AsyncJobVO> sc = pendingAsyncJobSearch.create();
SearchCriteria sc = pendingAsyncJobSearch.create();
sc.setParameters("instanceType", instanceType);
sc.setParameters("instanceId", instanceId);
sc.setParameters("status", AsyncJobResult.STATUS_IN_PROGRESS);
@ -73,7 +73,7 @@ public class AsyncJobDaoImpl extends GenericDaoBase<AsyncJobVO, Long> implements
}
public List<AsyncJobVO> getExpiredJobs(Date cutTime, int limit) {
SearchCriteria<AsyncJobVO> sc = expiringAsyncJobSearch.create();
SearchCriteria sc = expiringAsyncJobSearch.create();
sc.setParameters("created", cutTime);
Filter filter = new Filter(AsyncJobVO.class, "created", true, 0L, (long)limit);
return listBy(sc, filter);

View File

@ -18,15 +18,15 @@
package com.cloud.async.dao;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Date;
import java.util.TimeZone;
import javax.ejb.Local;
import org.apache.log4j.Logger;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Date;
import java.util.TimeZone;
import javax.ejb.Local;
import org.apache.log4j.Logger;
import com.cloud.async.SyncQueueVO;
import com.cloud.utils.DateUtil;
import com.cloud.utils.db.GenericDaoBase;
@ -36,9 +36,7 @@ import com.cloud.utils.db.Transaction;
@Local(value = { SyncQueueDao.class })
public class SyncQueueDaoImpl extends GenericDaoBase<SyncQueueVO, Long> implements SyncQueueDao {
private static final Logger s_logger = Logger.getLogger(SyncQueueDaoImpl.class.getName());
SearchBuilder<SyncQueueVO> TypeIdSearch = createSearchBuilder();
private static final Logger s_logger = Logger.getLogger(SyncQueueDaoImpl.class.getName());
@Override
public void ensureQueue(String syncObjType, long syncObjId) {
@ -63,17 +61,15 @@ public class SyncQueueDaoImpl extends GenericDaoBase<SyncQueueVO, Long> implemen
@Override
public SyncQueueVO find(String syncObjType, long syncObjId) {
SearchCriteria<SyncQueueVO> sc = TypeIdSearch.create();
SearchBuilder<SyncQueueVO> sb = createSearchBuilder();
sb.and("syncObjType", sb.entity().getSyncObjType(), SearchCriteria.Op.EQ);
sb.and("syncObjId", sb.entity().getSyncObjId(), SearchCriteria.Op.EQ);
sb.done();
SearchCriteria sc = sb.create();
sc.setParameters("syncObjType", syncObjType);
sc.setParameters("syncObjId", syncObjId);
return findOneActiveBy(sc);
}
protected SyncQueueDaoImpl() {
super();
TypeIdSearch = createSearchBuilder();
TypeIdSearch.and("syncObjType", TypeIdSearch.entity().getSyncObjType(), SearchCriteria.Op.EQ);
TypeIdSearch.and("syncObjId", TypeIdSearch.entity().getSyncObjId(), SearchCriteria.Op.EQ);
TypeIdSearch.done();
}
}

View File

@ -46,10 +46,11 @@ public class SyncQueueItemDaoImpl extends GenericDaoBase<SyncQueueItemVO, Long>
SearchBuilder<SyncQueueItemVO> sb = createSearchBuilder();
sb.and("queueId", sb.entity().getQueueId(), SearchCriteria.Op.EQ);
sb.and("lastProcessNumber", sb.entity().getLastProcessNumber(), SearchCriteria.Op.NULL);
sb.and("lastProcessNumber", sb.entity().getLastProcessNumber(),
SearchCriteria.Op.NULL);
sb.done();
SearchCriteria<SyncQueueItemVO> sc = sb.create();
SearchCriteria sc = sb.create();
sc.setParameters("queueId", queueId);
Filter filter = new Filter(SyncQueueItemVO.class, "created", true, 0L, 1L);
@ -101,7 +102,7 @@ public class SyncQueueItemDaoImpl extends GenericDaoBase<SyncQueueItemVO, Long>
SearchCriteria.Op.EQ);
sb.done();
SearchCriteria<SyncQueueItemVO> sc = sb.create();
SearchCriteria sc = sb.create();
sc.setParameters("lastProcessMsid", msid);
Filter filter = new Filter(SyncQueueItemVO.class, "created", true, 0L, 1L);

View File

@ -18,15 +18,15 @@
package com.cloud.cluster.dao;
import java.sql.PreparedStatement;
import java.util.Date;
import java.util.List;
import java.util.TimeZone;
import javax.ejb.Local;
import org.apache.log4j.Logger;
import java.sql.PreparedStatement;
import java.util.Date;
import java.util.List;
import java.util.TimeZone;
import javax.ejb.Local;
import org.apache.log4j.Logger;
import com.cloud.cluster.ManagementServerHostVO;
import com.cloud.utils.DateUtil;
import com.cloud.utils.db.GenericDaoBase;
@ -41,7 +41,7 @@ public class ManagementServerHostDaoImpl extends GenericDaoBase<ManagementServer
private final SearchBuilder<ManagementServerHostVO> MsIdSearch;
public ManagementServerHostVO findByMsid(long msid) {
SearchCriteria<ManagementServerHostVO> sc = MsIdSearch.create();
SearchCriteria sc = MsIdSearch.create();
sc.setParameters("msid", msid);
List<ManagementServerHostVO> l = listBy(sc);
@ -98,7 +98,7 @@ public class ManagementServerHostDaoImpl extends GenericDaoBase<ManagementServer
activeSearch.and("lastUpdateTime", activeSearch.entity().getLastUpdateTime(), SearchCriteria.Op.GT);
activeSearch.and("removed", activeSearch.entity().getRemoved(), SearchCriteria.Op.NULL);
SearchCriteria<ManagementServerHostVO> sc = activeSearch.create();
SearchCriteria sc = activeSearch.create();
sc.setParameters("lastUpdateTime", cutTime);
return listBy(sc);

View File

@ -36,13 +36,9 @@ public interface ResourceCount {
public void setType(ResourceType type);
public Long getAccountId();
public long getAccountId();
public void setAccountId(Long accountId);
public Long getDomainId();
public void setDomainId(Long domainId);
public void setAccountId(long accountId);
public long getCount();

View File

@ -27,6 +27,8 @@ import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import com.cloud.configuration.ResourceCount.ResourceType;
@Entity
@Table(name="resource_count")
public class ResourceCountVO implements ResourceCount {
@ -41,19 +43,15 @@ public class ResourceCountVO implements ResourceCount {
private ResourceCount.ResourceType type;
@Column(name="account_id")
private Long accountId;
@Column(name="domain_id")
private Long domainId;
private long accountId;
@Column(name="count")
private long count;
public ResourceCountVO() {}
public ResourceCountVO(Long accountId, Long domainId, ResourceCount.ResourceType type, long count) {
public ResourceCountVO(long accountId, ResourceCount.ResourceType type, long count) {
this.accountId = accountId;
this.domainId = domainId;
this.type = type;
this.count = count;
}
@@ -74,22 +72,14 @@ public class ResourceCountVO implements ResourceCount {
this.type = type;
}
public Long getAccountId() {
public long getAccountId() {
return accountId;
}
public void setAccountId(Long accountId) {
public void setAccountId(long accountId) {
this.accountId = accountId;
}
public Long getDomainId() {
return domainId;
}
public void setDomainId(Long domainId) {
this.domainId = domainId;
}
public long getCount() {
return count;
}
@@ -97,4 +87,5 @@ public class ResourceCountVO implements ResourceCount {
public void setCount(long count) {
this.count = count;
}
}
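
With domain_id and its accessors gone, the value object is keyed by account alone and accountId becomes a primitive long. A sketch of constructing the slimmed-down VO (the literal id is illustrative, and user_vm is taken from the resource-type examples in the DAO javadoc that follows):

    // Old (removed): new ResourceCountVO(accountId, domainId, type, count)
    // New: account-only constructor with a primitive long id (42L is illustrative)
    ResourceCountVO vo = new ResourceCountVO(42L, ResourceType.user_vm, 0);
    vo.setCount(vo.getCount() + 1);   // count stays a plain long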


@@ -58,7 +58,7 @@ public class ConfigurationDaoImpl extends GenericDaoBase<ConfigurationVO, String
if (_configs == null) {
_configs = new HashMap<String, String>();
SearchCriteria<ConfigurationVO> sc = InstanceSearch.create();
SearchCriteria sc = InstanceSearch.create();
sc.setParameters("instance", "DEFAULT");
List<ConfigurationVO> configurations = listBy(sc);
@@ -124,7 +124,7 @@ public class ConfigurationDaoImpl extends GenericDaoBase<ConfigurationVO, String
@Override
public String getValue(String name) {
SearchCriteria<ConfigurationVO> sc = NameSearch.create();
SearchCriteria sc = NameSearch.create();
sc.setParameters("name", name);
List<ConfigurationVO> configurations = listBy(sc);


@@ -24,37 +24,6 @@ import com.cloud.utils.db.GenericDao;
public interface ResourceCountDao extends GenericDao<ResourceCountVO, Long> {
/**
* Get the count of in use resources for an account by type
* @param accountId the id of the account to get the resource count
* @param type the type of resource (e.g. user_vm, public_ip, volume)
* @return the count of resources in use for the given type and account
*/
public long getAccountCount(long accountId, ResourceType type);
/**
* Get the count of in use resources for a domain by type
* @param domainId the id of the domain to get the resource count
* @param type the type of resource (e.g. user_vm, public_ip, volume)
* @return the count of resources in use for the given type and domain
*/
public long getDomainCount(long domainId, ResourceType type);
/**
* Update the count of resources in use for the given account and given resource type
* @param accountId the id of the account to update resource count
* @param type the type of resource (e.g. user_vm, public_ip, volume)
* @param increment whether the change is adding or subtracting from the current count
* @param delta the number of resources being added/released
*/
public void updateAccountCount(long accountId, ResourceType type, boolean increment, long delta);
/**
* Update the count of resources in use for the given domain and given resource type
* @param domainId the id of the domain to update resource count
* @param type the type of resource (e.g. user_vm, public_ip, volume)
* @param increment whether the change is adding or subtracting from the current count
* @param delta the number of resources being added/released
*/
public void updateDomainCount(long domainId, ResourceType type, boolean increment, long delta);
public long getCount(long accountId, ResourceType type);
public void updateCount(long accountId, ResourceType type, boolean increment, long delta);
}
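
The account/domain split is removed here, so the four per-scope methods collapse into a single getCount/updateCount pair. A hedged usage sketch (the resourceCountDao reference, the accountId value, and the use of user_vm are assumed; user_vm is one of the types named in the javadoc above):

    resourceCountDao.updateCount(accountId, ResourceType.user_vm, true, 2);   // reserve two VMs
    resourceCountDao.updateCount(accountId, ResourceType.user_vm, false, 1);  // release one
    long inUse = resourceCountDao.getCount(accountId, ResourceType.user_vm);  // net +1 from this sequence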


@@ -20,88 +20,48 @@ package com.cloud.configuration.dao;
import javax.ejb.Local;
import com.cloud.configuration.ResourceCount.ResourceType;
import com.cloud.configuration.ResourceCountVO;
import com.cloud.configuration.ResourceLimitVO;
import com.cloud.configuration.ResourceCount.ResourceType;
import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
@Local(value={ResourceCountDao.class})
public class ResourceCountDaoImpl extends GenericDaoBase<ResourceCountVO, Long> implements ResourceCountDao {
private SearchBuilder<ResourceCountVO> IdTypeSearch;
private SearchBuilder<ResourceCountVO> DomainIdTypeSearch;
SearchBuilder<ResourceCountVO> IdTypeSearch;
public ResourceCountDaoImpl() {
IdTypeSearch = createSearchBuilder();
IdTypeSearch.and("type", IdTypeSearch.entity().getType(), SearchCriteria.Op.EQ);
IdTypeSearch.and("accountId", IdTypeSearch.entity().getAccountId(), SearchCriteria.Op.EQ);
IdTypeSearch.done();
DomainIdTypeSearch = createSearchBuilder();
DomainIdTypeSearch.and("type", DomainIdTypeSearch.entity().getType(), SearchCriteria.Op.EQ);
DomainIdTypeSearch.and("domainId", DomainIdTypeSearch.entity().getDomainId(), SearchCriteria.Op.EQ);
DomainIdTypeSearch.done();
}
private ResourceCountVO findByAccountIdAndType(long accountId, ResourceType type) {
if (type == null) {
return null;
}
SearchCriteria<ResourceCountVO> sc = IdTypeSearch.create();
SearchCriteria sc = IdTypeSearch.create();
sc.setParameters("accountId", accountId);
sc.setParameters("type", type);
return findOneBy(sc);
}
private ResourceCountVO findByDomainIdAndType(long domainId, ResourceType type) {
if (type == null) {
return null;
}
SearchCriteria<ResourceCountVO> sc = DomainIdTypeSearch.create();
sc.setParameters("domainId", domainId);
sc.setParameters("type", type);
return findOneBy(sc);
}
@Override
public long getAccountCount(long accountId, ResourceType type) {
public long getCount(long accountId, ResourceType type) {
ResourceCountVO resourceCountVO = findByAccountIdAndType(accountId, type);
return (resourceCountVO != null) ? resourceCountVO.getCount() : 0;
}
@Override
public long getDomainCount(long domainId, ResourceType type) {
ResourceCountVO resourceCountVO = findByDomainIdAndType(domainId, type);
return (resourceCountVO != null) ? resourceCountVO.getCount() : 0;
}
@Override
public void updateAccountCount(long accountId, ResourceType type, boolean increment, long delta) {
delta = increment ? delta : delta * -1;
ResourceCountVO resourceCountVO = findByAccountIdAndType(accountId, type);
if (resourceCountVO == null) {
resourceCountVO = new ResourceCountVO(accountId, null, type, 0);
resourceCountVO.setCount(resourceCountVO.getCount() + delta);
persist(resourceCountVO);
} else {
resourceCountVO.setCount(resourceCountVO.getCount() + delta);
update(resourceCountVO.getId(), resourceCountVO);
}
}
@Override
public void updateDomainCount(long domainId, ResourceType type, boolean increment, long delta) {
public void updateCount(long accountId, ResourceType type, boolean increment, long delta) {
ResourceCountVO resourceCountVO = findByAccountIdAndType(accountId, type);
delta = increment ? delta : delta * -1;
ResourceCountVO resourceCountVO = findByDomainIdAndType(domainId, type);
if (resourceCountVO == null) {
resourceCountVO = new ResourceCountVO(null, domainId, type, 0);
resourceCountVO = new ResourceCountVO(accountId, type, 0);
resourceCountVO.setCount(resourceCountVO.getCount() + delta);
persist(resourceCountVO);
} else {
@@ -109,4 +69,5 @@ public class ResourceCountDaoImpl extends GenericDaoBase<ResourceCountVO, Long>
update(resourceCountVO.getId(), resourceCountVO);
}
}
}
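
Note that updateCount behaves as an upsert: the sign is applied to delta, then either a fresh row seeded with that delta is persisted or the existing (account, type) row is updated in place. Condensed from the method above:

    long signed = increment ? delta : -delta;
    ResourceCountVO row = findByAccountIdAndType(accountId, type);
    if (row == null) {
        row = new ResourceCountVO(accountId, type, 0);
        row.setCount(signed);                     // new rows start at 0 + signed delta
        persist(row);
    } else {
        row.setCount(row.getCount() + signed);
        update(row.getId(), row);
    }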


@@ -22,6 +22,7 @@ import java.util.List;
import com.cloud.configuration.ResourceCount;
import com.cloud.configuration.ResourceLimitVO;
import com.cloud.configuration.ResourceCount.ResourceType;
import com.cloud.utils.db.GenericDao;
public interface ResourceLimitDao extends GenericDao<ResourceLimitVO, Long> {
@@ -32,4 +33,5 @@ public interface ResourceLimitDao extends GenericDao<ResourceLimitVO, Long> {
public List<ResourceLimitVO> listByDomainId(Long domainId);
public boolean update(Long id, Long max);
public ResourceCount.ResourceType getLimitType(String type);
}


@@ -30,8 +30,9 @@ import com.cloud.utils.db.SearchCriteria;
@Local(value={ResourceLimitDao.class})
public class ResourceLimitDaoImpl extends GenericDaoBase<ResourceLimitVO, Long> implements ResourceLimitDao {
private SearchBuilder<ResourceLimitVO> IdTypeSearch;
SearchBuilder<ResourceLimitVO> IdTypeSearch;
public ResourceLimitDaoImpl () {
IdTypeSearch = createSearchBuilder();
IdTypeSearch.and("type", IdTypeSearch.entity().getType(), SearchCriteria.Op.EQ);
@@ -39,12 +40,12 @@ public class ResourceLimitDaoImpl extends GenericDaoBase<ResourceLimitVO, Long>
IdTypeSearch.and("accountId", IdTypeSearch.entity().getAccountId(), SearchCriteria.Op.EQ);
IdTypeSearch.done();
}
public ResourceLimitVO findByDomainIdAndType(Long domainId, ResourceCount.ResourceType type) {
if (domainId == null || type == null)
return null;
SearchCriteria<ResourceLimitVO> sc = IdTypeSearch.create();
SearchCriteria sc = IdTypeSearch.create();
sc.setParameters("domainId", domainId);
sc.setParameters("type", type);
@@ -55,7 +56,7 @@ public class ResourceLimitDaoImpl extends GenericDaoBase<ResourceLimitVO, Long>
if (domainId == null)
return null;
SearchCriteria<ResourceLimitVO> sc = IdTypeSearch.create();
SearchCriteria sc = IdTypeSearch.create();
sc.setParameters("domainId", domainId);
return listBy(sc);
@@ -65,7 +66,7 @@ public class ResourceLimitDaoImpl extends GenericDaoBase<ResourceLimitVO, Long>
if (accountId == null || type == null)
return null;
SearchCriteria<ResourceLimitVO> sc = IdTypeSearch.create();
SearchCriteria sc = IdTypeSearch.create();
sc.setParameters("accountId", accountId);
sc.setParameters("type", type);
@@ -76,7 +77,7 @@ public class ResourceLimitDaoImpl extends GenericDaoBase<ResourceLimitVO, Long>
if (accountId == null)
return null;
SearchCriteria<ResourceLimitVO> sc = IdTypeSearch.create();
SearchCriteria sc = IdTypeSearch.create();
sc.setParameters("accountId", accountId);
return listBy(sc);


@@ -36,21 +36,21 @@ public class AccountVlanMapDaoImpl extends GenericDaoBase<AccountVlanMapVO, Long
@Override
public List<AccountVlanMapVO> listAccountVlanMapsByAccount(long accountId) {
SearchCriteria<AccountVlanMapVO> sc = AccountSearch.create();
SearchCriteria sc = AccountSearch.create();
sc.setParameters("accountId", accountId);
return listBy(sc);
}
@Override
public List<AccountVlanMapVO> listAccountVlanMapsByVlan(long vlanDbId) {
SearchCriteria<AccountVlanMapVO> sc = VlanSearch.create();
SearchCriteria sc = VlanSearch.create();
sc.setParameters("vlanDbId", vlanDbId);
return listBy(sc);
}
@Override
public AccountVlanMapVO findAccountVlanMap(long accountId, long vlanDbId) {
SearchCriteria<AccountVlanMapVO> sc = AccountVlanSearch.create();
SearchCriteria sc = AccountVlanSearch.create();
sc.setParameters("accountId", accountId);
sc.setParameters("vlanDbId", vlanDbId);
return findOneBy(sc);


@@ -42,7 +42,7 @@ public class ClusterDaoImpl extends GenericDaoBase<ClusterVO, Long> implements C
@Override
public List<ClusterVO> listByPodId(long podId) {
SearchCriteria<ClusterVO> sc = PodSearch.create();
SearchCriteria sc = PodSearch.create();
sc.setParameters("pod", podId);
return listActiveBy(sc);
@@ -50,7 +50,7 @@ public class ClusterDaoImpl extends GenericDaoBase<ClusterVO, Long> implements C
@Override
public ClusterVO findBy(String name, long podId) {
SearchCriteria<ClusterVO> sc = PodSearch.create();
SearchCriteria sc = PodSearch.create();
sc.setParameters("pod", podId);
sc.setParameters("name", name);


@@ -64,7 +64,7 @@ public class DataCenterDaoImpl extends GenericDaoBase<DataCenterVO, Long> implem
@Override
public DataCenterVO findByName(String name) {
SearchCriteria<DataCenterVO> sc = NameSearch.create();
SearchCriteria sc = NameSearch.create();
sc.setParameters("name", name);
return findOneActiveBy(sc);
}


@@ -49,7 +49,7 @@ public class DataCenterIpAddressDaoImpl extends GenericDaoBase<DataCenterIpAddre
private final SearchBuilder<DataCenterIpAddressVO> FreePodDcIpSearch;
public DataCenterIpAddressVO takeIpAddress(long dcId, long podId, long instanceId) {
SearchCriteria<DataCenterIpAddressVO> sc = FreeIpSearch.create();
SearchCriteria sc = FreeIpSearch.create();
sc.setParameters("dc", dcId);
sc.setParameters("pod", podId);
@@ -86,7 +86,7 @@ public class DataCenterIpAddressDaoImpl extends GenericDaoBase<DataCenterIpAddre
}
public boolean mark(long dcId, long podId, String ip) {
SearchCriteria<DataCenterIpAddressVO> sc = FreePodDcIpSearch.create();
SearchCriteria sc = FreePodDcIpSearch.create();
sc.setParameters("podId", podId);
sc.setParameters("dcId", dcId);
sc.setParameters("ipAddress", ip);
@@ -124,7 +124,7 @@ public class DataCenterIpAddressDaoImpl extends GenericDaoBase<DataCenterIpAddre
if (s_logger.isDebugEnabled()) {
s_logger.debug("Releasing ip address: " + ipAddress + " data center " + dcId);
}
SearchCriteria<DataCenterIpAddressVO> sc = IpDcSearch.create();
SearchCriteria sc = IpDcSearch.create();
sc.setParameters("ip", ipAddress);
sc.setParameters("dc", dcId);
sc.setParameters("instance", instanceId);
@@ -170,14 +170,14 @@ public class DataCenterIpAddressDaoImpl extends GenericDaoBase<DataCenterIpAddre
}
public List<DataCenterIpAddressVO> listByPodIdDcId(long podId, long dcId) {
SearchCriteria<DataCenterIpAddressVO> sc = PodDcSearch.create();
SearchCriteria sc = PodDcSearch.create();
sc.setParameters("podId", podId);
sc.setParameters("dataCenterId", dcId);
return listBy(sc);
}
public List<DataCenterIpAddressVO> listByPodIdDcIdIpAddress(long podId, long dcId, String ipAddress) {
SearchCriteria<DataCenterIpAddressVO> sc = PodDcIpSearch.create();
SearchCriteria sc = PodDcIpSearch.create();
sc.setParameters("dcId", dcId);
sc.setParameters("podId", podId);
sc.setParameters("ipAddress", ipAddress);


@@ -49,7 +49,7 @@ public class DataCenterLinkLocalIpAddressDaoImpl extends GenericDaoBase<DataCent
private final SearchBuilder<DataCenterLinkLocalIpAddressVO> FreePodDcIpSearch;
public DataCenterLinkLocalIpAddressVO takeIpAddress(long dcId, long podId, long instanceId) {
SearchCriteria<DataCenterLinkLocalIpAddressVO> sc = FreeIpSearch.create();
SearchCriteria sc = FreeIpSearch.create();
sc.setParameters("dc", dcId);
sc.setParameters("pod", podId);
@@ -86,7 +86,7 @@ public class DataCenterLinkLocalIpAddressDaoImpl extends GenericDaoBase<DataCent
}
public boolean mark(long dcId, long podId, String ip) {
SearchCriteria<DataCenterLinkLocalIpAddressVO> sc = FreePodDcIpSearch.create();
SearchCriteria sc = FreePodDcIpSearch.create();
sc.setParameters("podId", podId);
sc.setParameters("dcId", dcId);
sc.setParameters("ipAddress", ip);
@@ -124,7 +124,7 @@ public class DataCenterLinkLocalIpAddressDaoImpl extends GenericDaoBase<DataCent
if (s_logger.isDebugEnabled()) {
s_logger.debug("Releasing ip address: " + ipAddress + " data center " + dcId);
}
SearchCriteria<DataCenterLinkLocalIpAddressVO> sc = IpDcSearch.create();
SearchCriteria sc = IpDcSearch.create();
sc.setParameters("ip", ipAddress);
sc.setParameters("dc", dcId);
sc.setParameters("instance", instanceId);
@@ -170,14 +170,14 @@ public class DataCenterLinkLocalIpAddressDaoImpl extends GenericDaoBase<DataCent
}
public List<DataCenterLinkLocalIpAddressVO> listByPodIdDcId(long podId, long dcId) {
SearchCriteria<DataCenterLinkLocalIpAddressVO> sc = PodDcSearch.create();
SearchCriteria sc = PodDcSearch.create();
sc.setParameters("podId", podId);
sc.setParameters("dataCenterId", dcId);
return listBy(sc);
}
public List<DataCenterLinkLocalIpAddressVO> listByPodIdDcIdIpAddress(long podId, long dcId, String ipAddress) {
SearchCriteria<DataCenterLinkLocalIpAddressVO> sc = PodDcIpSearch.create();
SearchCriteria sc = PodDcIpSearch.create();
sc.setParameters("dcId", dcId);
sc.setParameters("podId", podId);
sc.setParameters("ipAddress", ipAddress);


@@ -20,9 +20,11 @@ package com.cloud.dc.dao;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Date;
import java.util.Formatter;
import java.util.List;
import com.cloud.dc.DataCenterVnetVO;
import com.cloud.exception.InternalErrorException;
import com.cloud.utils.db.GenericDao;
import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
@@ -41,13 +43,13 @@ public class DataCenterVnetDaoImpl extends GenericDaoBase<DataCenterVnetVO, Long
private final SearchBuilder<DataCenterVnetVO> DcSearchAllocated;
public List<DataCenterVnetVO> listAllocatedVnets(long dcId) {
SearchCriteria<DataCenterVnetVO> sc = DcSearchAllocated.create();
SearchCriteria sc = DcSearchAllocated.create();
sc.setParameters("dc", dcId);
return listActiveBy(sc);
}
public List<DataCenterVnetVO> findVnet(long dcId, String vnet) {
SearchCriteria<DataCenterVnetVO> sc = VnetDcSearch.create();
SearchCriteria sc = VnetDcSearch.create();
sc.setParameters("dc", dcId);
sc.setParameters("vnet", vnet);
return listActiveBy(sc);
@@ -86,7 +88,7 @@ public class DataCenterVnetDaoImpl extends GenericDaoBase<DataCenterVnetVO, Long
}
public DataCenterVnetVO take(long dcId, long accountId) {
SearchCriteria<DataCenterVnetVO> sc = FreeVnetSearch.create();
SearchCriteria sc = FreeVnetSearch.create();
sc.setParameters("dc", dcId);
Date now = new Date();
Transaction txn = Transaction.currentTxn();
@@ -109,7 +111,7 @@ public class DataCenterVnetDaoImpl extends GenericDaoBase<DataCenterVnetVO, Long
}
public void release(String vnet, long dcId, long accountId) {
SearchCriteria<DataCenterVnetVO> sc = VnetDcSearchAllocated.create();
SearchCriteria sc = VnetDcSearchAllocated.create();
sc.setParameters("vnet", vnet);
sc.setParameters("dc", dcId);
sc.setParameters("account", accountId);


@@ -57,14 +57,14 @@ public class HostPodDaoImpl extends GenericDaoBase<HostPodVO, Long> implements H
}
public List<HostPodVO> listByDataCenterId(long id) {
SearchCriteria<HostPodVO> sc = DataCenterIdSearch.create();
SearchCriteria sc = DataCenterIdSearch.create();
sc.setParameters("dcId", id);
return listActiveBy(sc);
}
public HostPodVO findByName(String name, long dcId) {
SearchCriteria<HostPodVO> sc = DataCenterAndNameSearch.create();
SearchCriteria sc = DataCenterAndNameSearch.create();
sc.setParameters("dc", dcId);
sc.setParameters("name", name);


@@ -40,7 +40,7 @@ public class PodVlanDaoImpl extends GenericDaoBase<PodVlanVO, Long> implements G
private final SearchBuilder<PodVlanVO> PodSearchAllocated;
public List<PodVlanVO> listAllocatedVnets(long podId) {
SearchCriteria<PodVlanVO> sc = PodSearchAllocated.create();
SearchCriteria sc = PodSearchAllocated.create();
sc.setParameters("podId", podId);
return listActiveBy(sc);
}
@@ -78,7 +78,7 @@ public class PodVlanDaoImpl extends GenericDaoBase<PodVlanVO, Long> implements G
}
public PodVlanVO take(long podId, long accountId) {
SearchCriteria<PodVlanVO> sc = FreeVlanSearch.create();
SearchCriteria sc = FreeVlanSearch.create();
sc.setParameters("podId", podId);
Date now = new Date();
Transaction txn = Transaction.currentTxn();
@@ -101,7 +101,7 @@ public class PodVlanDaoImpl extends GenericDaoBase<PodVlanVO, Long> implements G
}
public void release(String vlan, long podId, long accountId) {
SearchCriteria<PodVlanVO> sc = VlanPodSearch.create();
SearchCriteria sc = VlanPodSearch.create();
sc.setParameters("vlan", vlan);
sc.setParameters("podId", podId);
sc.setParameters("account", accountId);


@@ -36,21 +36,21 @@ public class PodVlanMapDaoImpl extends GenericDaoBase<PodVlanMapVO, Long> implem
@Override
public List<PodVlanMapVO> listPodVlanMapsByPod(long podId) {
SearchCriteria<PodVlanMapVO> sc = PodSearch.create();
SearchCriteria sc = PodSearch.create();
sc.setParameters("podId", podId);
return listBy(sc);
}
@Override
public List<PodVlanMapVO> listPodVlanMapsByVlan(long vlanDbId) {
SearchCriteria<PodVlanMapVO> sc = VlanSearch.create();
SearchCriteria sc = VlanSearch.create();
sc.setParameters("vlanDbId", vlanDbId);
return listBy(sc);
}
@Override
public PodVlanMapVO findPodVlanMap(long podId, long vlanDbId) {
SearchCriteria<PodVlanMapVO> sc = PodVlanSearch.create();
SearchCriteria sc = PodVlanSearch.create();
sc.setParameters("podId", podId);
sc.setParameters("vlanDbId", vlanDbId);
return findOneBy(sc);


@@ -25,6 +25,8 @@ import java.util.Map;
import javax.ejb.Local;
import javax.naming.ConfigurationException;
import org.apache.log4j.Logger;
import com.cloud.dc.AccountVlanMapVO;
import com.cloud.dc.PodVlanMapVO;
import com.cloud.dc.Vlan;
@@ -53,7 +55,7 @@ public class VlanDaoImpl extends GenericDaoBase<VlanVO, Long> implements VlanDao
@Override
public VlanVO findByZoneAndVlanId(long zoneId, String vlanId) {
SearchCriteria<VlanVO> sc = ZoneVlanIdSearch.create();
SearchCriteria sc = ZoneVlanIdSearch.create();
sc.setParameters("zoneId", zoneId);
sc.setParameters("vlanId", vlanId);
return findOneActiveBy(sc);
@@ -61,7 +63,7 @@ public class VlanDaoImpl extends GenericDaoBase<VlanVO, Long> implements VlanDao
@Override
public List<VlanVO> findByZone(long zoneId) {
SearchCriteria<VlanVO> sc = ZoneSearch.create();
SearchCriteria sc = ZoneSearch.create();
sc.setParameters("zoneId", zoneId);
return listBy(sc);
}
@@ -84,7 +86,7 @@ public class VlanDaoImpl extends GenericDaoBase<VlanVO, Long> implements VlanDao
@Override
public List<VlanVO> listByZoneAndType(long zoneId, VlanType vlanType) {
SearchCriteria<VlanVO> sc = ZoneTypeSearch.create();
SearchCriteria sc = ZoneTypeSearch.create();
sc.setParameters("zoneId", zoneId);
sc.setParameters("vlanType", vlanType);
return listBy(sc);
@@ -226,7 +228,7 @@ public class VlanDaoImpl extends GenericDaoBase<VlanVO, Long> implements VlanDao
@Override
public boolean zoneHasDirectAttachUntaggedVlans(long zoneId) {
SearchCriteria<VlanVO> sc = ZoneTypeAllPodsSearch.create();
SearchCriteria sc = ZoneTypeAllPodsSearch.create();
sc.setParameters("zoneId", zoneId);
sc.setParameters("vlanType", VlanType.DirectAttached);
@@ -237,7 +239,7 @@ public class VlanDaoImpl extends GenericDaoBase<VlanVO, Long> implements VlanDao
@Override
public Pair<String, VlanVO> assignPodDirectAttachIpAddress(long zoneId,
long podId, long accountId, long domainId) {
SearchCriteria<VlanVO> sc = ZoneTypePodSearch.create();
SearchCriteria sc = ZoneTypePodSearch.create();
sc.setParameters("zoneId", zoneId);
sc.setParameters("vlanType", VlanType.DirectAttached);
sc.setJoinParameters("vlan", "podId", podId);


@@ -186,7 +186,7 @@ public class DomainDaoImpl extends GenericDaoBase<DomainVO, Long> implements Dom
@Override
public DomainVO findDomainByPath(String domainPath) {
SearchCriteria<DomainVO> sc = createSearchCriteria();
SearchCriteria sc = createSearchCriteria();
sc.addAnd("path", SearchCriteria.Op.EQ, domainPath);
return findOneActiveBy(sc);
}
@@ -202,7 +202,7 @@ public class DomainDaoImpl extends GenericDaoBase<DomainVO, Long> implements Dom
}
boolean result = false;
SearchCriteria<DomainVO> sc = DomainPairSearch.create();
SearchCriteria sc = DomainPairSearch.create();
sc.setParameters("id", parentId, childId);
List<DomainVO> domainPair = listActiveBy(sc);


@@ -19,15 +19,15 @@
package com.cloud.event.dao;
import java.util.Date;
import java.util.List;
import com.cloud.event.EventVO;
import com.cloud.utils.db.Filter;
import com.cloud.utils.db.GenericDao;
import com.cloud.utils.db.SearchCriteria;
public interface EventDao extends GenericDao<EventVO, Long> {
public List<EventVO> searchAllEvents(SearchCriteria<EventVO> sc, Filter filter);
public List<EventVO> searchAllEvents(SearchCriteria sc, Filter filter);
public List<EventVO> listOlderEvents(Date oldTime);


@@ -19,14 +19,15 @@
package com.cloud.event.dao;
import java.util.Date;
import java.util.List;
import javax.ejb.Local;
import org.apache.log4j.Logger;
import com.cloud.event.EventState;
import com.cloud.event.EventVO;
import com.cloud.utils.db.DB;
import com.cloud.utils.db.Filter;
import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
@@ -52,14 +53,20 @@ public class EventDaoImpl extends GenericDaoBase<EventVO, Long> implements Event
}
@Override
public List<EventVO> searchAllEvents(SearchCriteria<EventVO> sc, Filter filter) {
@DB
public List<EventVO> searchAllEvents(SearchCriteria sc, Filter filter) {
return listBy(sc, filter);
}
@Override
public List<EventVO> search(final SearchCriteria sc, final Filter filter) {
return super.search(sc, filter);
}
@Override
public List<EventVO> listOlderEvents(Date oldTime) {
if (oldTime == null) return null;
SearchCriteria<EventVO> sc = createSearchCriteria();
SearchCriteria sc = createSearchCriteria();
sc.addAnd("createDate", SearchCriteria.Op.LT, oldTime);
return listBy(sc, null);
@@ -68,7 +75,7 @@ public class EventDaoImpl extends GenericDaoBase<EventVO, Long> implements Event
@Override
public List<EventVO> listStartedEvents(Date minTime, Date maxTime) {
if (minTime == null || maxTime == null) return null;
SearchCriteria<EventVO> sc = StartedEventsSearch.create();
SearchCriteria sc = StartedEventsSearch.create();
sc.setParameters("state", EventState.Completed);
sc.setParameters("startId", 0);
sc.setParameters("createDate", minTime, maxTime);
@@ -77,7 +84,7 @@ public class EventDaoImpl extends GenericDaoBase<EventVO, Long> implements Event
@Override
public EventVO findCompletedEvent(long startId) {
SearchCriteria<EventVO> sc = CompletedEventSearch.create();
SearchCriteria sc = CompletedEventSearch.create();
sc.setParameters("state", EventState.Completed);
sc.setParameters("startId", startId);
return findOneBy(sc);
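
searchAllEvents now takes a raw SearchCriteria plus a Filter, so callers page and order results the same way the DAOs above do. A sketch written as if inside EventDaoImpl, mirroring listOlderEvents above (the cutoff of "now" and the 50-row page size are illustrative):

    SearchCriteria sc = createSearchCriteria();
    sc.addAnd("createDate", SearchCriteria.Op.LT, new Date());
    Filter filter = new Filter(EventVO.class, "createDate", false, 0L, 50L);  // newest 50
    List<EventVO> page = searchAllEvents(sc, filter);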


@@ -15,35 +15,31 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.api;
package com.cloud.exception;
public class NetworkUsageAnswer extends Answer {
Long bytesSent;
Long bytesReceived;
import com.cloud.utils.SerialVersionUID;
/**
* This exception is thrown when the agent is unavailable to accept a
* command.
*
*/
public class AgentUnavailableException extends Exception {
protected NetworkUsageAnswer() {
private static final long serialVersionUID = SerialVersionUID.AgentUnavailableException;
long _agentId;
public AgentUnavailableException(String msg, long agentId) {
super("Host " + agentId + ": " + msg);
_agentId = agentId;
}
public NetworkUsageAnswer(NetworkUsageCommand cmd, String details, Long bytesSent, Long bytesReceived) {
super(cmd, true, details);
this.bytesReceived = bytesReceived;
this.bytesSent = bytesSent;
public AgentUnavailableException(long agentId) {
this("Unable to reach host.", agentId);
}
public void setBytesReceived(Long bytesReceived) {
this.bytesReceived = bytesReceived;
}
public Long getBytesReceived() {
return bytesReceived;
}
public void setBytesSent(Long bytesSent) {
this.bytesSent = bytesSent;
}
public Long getBytesSent() {
return bytesSent;
public long getAgentId() {
return _agentId;
}
}
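
The exception wraps the offending host id and prefixes it into the message, so callers can both log and route on the agent. A throw/catch sketch (the literal id and message are illustrative):

    try {
        throw new AgentUnavailableException("Connection lost", 42L);
    } catch (AgentUnavailableException e) {
        long hostId = e.getAgentId();   // 42
        // e.getMessage() reads: "Host 42: Connection lost"
    }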


@@ -0,0 +1,29 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
public class ConcurrentOperationException extends Exception {
private static final long serialVersionUID = SerialVersionUID.ConcurrentOperationException;
public ConcurrentOperationException(String msg) {
super(msg);
}
}


@@ -0,0 +1,33 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
public class DiscoveryException extends Exception {
private static final long serialVersionUID = SerialVersionUID.DiscoveryException;
public DiscoveryException(String msg) {
this(msg, null);
}
public DiscoveryException(String msg, Throwable cause) {
super(msg, cause);
}
}


@@ -0,0 +1,35 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
/**
* This exception is thrown when a machine is in HA state and an operation,
* such as start or stop, is attempted on it. Machines that are in HA
* states need to be properly cleaned up before anything special can be
* done with them. Hence this special state.
*/
public class HAStateException extends ManagementServerException {
private static final long serialVersionUID = SerialVersionUID.HAStateException;
public HAStateException(String msg) {
super(msg);
}
}
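
All three new exception classes follow the same convention: serialVersionUID comes from the project-wide SerialVersionUID registry rather than a literal. A sketch of one more exception in that style (the class name and its registry constant are hypothetical):

    public class ExampleOperationException extends Exception {
        // assumes a matching constant is added to com.cloud.utils.SerialVersionUID
        private static final long serialVersionUID = SerialVersionUID.ExampleOperationException;

        public ExampleOperationException(String msg) {
            super(msg);
        }
    }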


@@ -16,22 +16,23 @@
*
*/
package com.cloud.api;
package com.cloud.exception;
import static java.lang.annotation.ElementType.FIELD;
import com.cloud.utils.SerialVersionUID;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
/**
* Exception thrown when there are not enough IP addresses in the system.
*/
public class InsufficientAddressCapacityException extends InsufficientCapacityException {
import com.cloud.api.BaseCmd.CommandType;
private static final long serialVersionUID = SerialVersionUID.InsufficientAddressCapacityException;
public InsufficientAddressCapacityException(String msg) {
super(msg);
}
protected InsufficientAddressCapacityException() {
super();
}
@Retention(RetentionPolicy.RUNTIME)
@Target({FIELD})
public @interface Parameter {
String name() default "";
boolean required() default false;
CommandType type() default CommandType.OBJECT;
CommandType collectionType() default CommandType.OBJECT;
}
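
Because InsufficientAddressCapacityException extends InsufficientCapacityException, callers can catch the base type and handle every capacity shortfall in one place. A sketch (the allocation call is a hypothetical stand-in):

    try {
        allocatePublicIp();   // hypothetical path that may exhaust the address pool
    } catch (InsufficientCapacityException e) {
        // the base class also covers the address-specific subclass declared above
    }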

Some files were not shown because too many files have changed in this diff.