diff --git a/INSTALL.md b/INSTALL.md
index 37415dc25ec..893f5551e44 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -96,25 +96,22 @@ Clean and build:
$ mvn clean install -P systemvm,developer
-In case you want support for VMWare, SRX and other non-Apache (referred to as nonoss)
-compliant libs, you may download the following jar artifacts from respective vendors:
+CloudStack supports several plugins that depend on libraries with distribution restrictions.
+Because of this, they are not included in the default build. To enable these additional plugins,
+activate their respective profiles. For convenience, adding -Dnoredist will enable all plugins
+that depend on libraries with distribution restrictions. The build procedure expects that the
+required libraries are present in the maven repository.
- deps/cloud-iControl.jar
- deps/cloud-manageontap.jar
- deps/cloud-netscaler-sdx.jar
- deps/cloud-netscaler.jar
- deps/vmware-apputils.jar
- deps/vmware-vim.jar
- deps/vmware-vim25.jar
-
-Install them to ~/.m2 so maven can get them as dependencies:
+The following procedure can be used to add the libraries to the local maven repository. Details
+on obtaining the required libraries can be found in this file. Note that this will vary between
+releases of CloudStack.
$ cd deps
$ ./install-non-oss.sh
-To build with nonoss components, use the build command with the nonoss flag:
+To build all non-redistributable components, add the noredist flag to the build command:
- $ mvn clean install -P systemvm,developer -Dnonoss
+ $ mvn clean install -P systemvm,developer -Dnoredist
Clear old database (if any) and deploy the database schema:
@@ -153,7 +150,7 @@ This section describes packaging and installation.
To create debs:
- $ mvn -P deps # -D nonoss, for nonoss as described in the "Building" section above
+ $ mvn -P deps # -D noredist, for noredist as described in the "Building" section above
$ dpkg-buildpackage
All the deb packages will be created in ../$PWD
@@ -183,15 +180,15 @@ Install needed packages, apt-get upgrade for upgrading:
To create rpms:
- $ mvn -P deps # -D nonoss, for nonoss as described in the "Building" section above
- $ ./waf rpm
+ $ cd packaging/centos63
+ $ bash packaging.sh [ -p NOREDIST ]
-All the rpm packages will be create in artifacts/rpmbuild/RPMS/x86_64
+All the rpm packages will be created in dist/rpmbuild/RPMS/x86_64
To create a yum repo: (assuming appropriate user privileges)
$ path=/path/to/your/webserver/cloudstack
- $ cd artifacts/rpmbuild/RPMS/x86_64
+ $ cd dist/rpmbuild/RPMS/x86_64
$ mv *.rpm $path
$ createrepo $path
@@ -208,10 +205,10 @@ Installation:
Install needed packages:
$ yum update
- $ yum install cloud-client # management server
+ $ yum install cloudstack-management # management server
$ yum install mysql-server # mysql server
- $ yum install cloud-agent # agent (kvm)
- $ yum install cloud-usage # usage server
+ $ yum install cloudstack-agent # agent (kvm)
+ $ yum install cloudstack-usage # usage server
## Installing CloudMonkey CLI
diff --git a/agent/bindir/cloudstack-agent-upgrade.in b/agent/bindir/cloudstack-agent-upgrade.in
new file mode 100644
index 00000000000..72b0fae5853
--- /dev/null
+++ b/agent/bindir/cloudstack-agent-upgrade.in
@@ -0,0 +1,63 @@
+#!/usr/bin/python
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from cloudutils.networkConfig import networkConfig
+from cloudutils.utilities import bash
+import logging
+import re
+def isOldStyleBridge(brName):
+    if brName.find("cloudVirBr") == 0:
+        return True
+    else:
+        return False
+def upgradeBridgeName(brName, enslavedDev):
+    print("upgrade bridge: %s, %s"%(brName, enslavedDev))
+    vlanId = brName.replace("cloudVirBr", "")
+    print("find vlan Id: %s"%vlanId)
+    phyDev = enslavedDev.split(".")[0]
+    print("find physical device %s"%phyDev)
+    newBrName = "br" + phyDev + "-" + vlanId
+    print("new bridge name %s"%newBrName)
+    bash("ip link set %s down"%brName)
+    bash("ip link set %s name %s"%(brName, newBrName))
+    bash("ip link set %s up" %newBrName)
+    cmd = "iptables-save | grep FORWARD | grep -w " + brName
+    rules = bash(cmd).stdout.split('\n')
+    rules.pop()
+    for rule in rules:
+        try:
+            delrule = re.sub("-A", "-D", rule)
+            newrule = re.sub(" " + brName + " ", " " + newBrName + " ", rule)
+            bash("iptables " + delrule)
+            bash("iptables " + newrule)
+        except:
+            logging.exception("Ignoring failure to update rules for rule " + rule + " on bridge " + brName)
+if __name__ == '__main__':
+    netlib = networkConfig()
+    bridges = netlib.listNetworks()
+    bridges = filter(isOldStyleBridge, bridges)
+    for br in bridges:
+        enslavedDev = netlib.getEnslavedDev(br, 1)
+        if enslavedDev is not None:
+            upgradeBridgeName(br, enslavedDev)
+
+    bridges = netlib.listNetworks()
+    bridges = filter(isOldStyleBridge, bridges)
+    if len(bridges) > 0:
+        print("Warning: upgrade is not finished, some bridges still have the old-style name: " + str(bridges))
+    else:
+        print("Upgrade succeeded")
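The renaming rule applied by this upgrade script can be exercised in isolation. The sketch below is a hypothetical standalone function (not part of the patch) that reproduces the cloudVirBr-to-new-style mapping without touching any network state:

```python
def new_bridge_name(br_name, enslaved_dev):
    # "cloudVirBr<vlan>" carries only the VLAN id; the physical device
    # name comes from the enslaved interface, e.g. "eth0.100" -> "eth0"
    vlan_id = br_name.replace("cloudVirBr", "")
    phy_dev = enslaved_dev.split(".")[0]
    return "br" + phy_dev + "-" + vlan_id

print(new_bridge_name("cloudVirBr100", "eth0.100"))  # breth0-100
```

So a host with bridge cloudVirBr100 enslaving eth0.100 ends up with breth0-100, which is the name the agent expects after the upgrade.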
diff --git a/agent/bindir/libvirtqemuhook.in b/agent/bindir/libvirtqemuhook.in
new file mode 100755
index 00000000000..7bf9634fdf5
--- /dev/null
+++ b/agent/bindir/libvirtqemuhook.in
@@ -0,0 +1,53 @@
+#!/usr/bin/python
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import sys
+from xml.dom.minidom import parse
+from cloudutils.configFileOps import configFileOps
+from cloudutils.networkConfig import networkConfig
+def isOldStyleBridge(brName):
+    if brName.find("cloudVirBr") == 0:
+        return True
+    else:
+        return False
+def getGuestNetworkDevice():
+    netlib = networkConfig()
+    cfo = configFileOps("/etc/cloudstack/agent/agent.properties")
+    guestDev = cfo.getEntry("guest.network.device")
+    enslavedDev = netlib.getEnslavedDev(guestDev, 1)
+    return enslavedDev
+def handleMigrateBegin():
+    try:
+        domain = parse(sys.stdin)
+        for interface in domain.getElementsByTagName("interface"):
+            source = interface.getElementsByTagName("source")[0]
+            bridge = source.getAttribute("bridge")
+            if not isOldStyleBridge(bridge):
+                continue
+            vlanId = bridge.replace("cloudVirBr","")
+            phyDev = getGuestNetworkDevice()
+            newBrName = "br" + phyDev + "-" + vlanId
+            source.setAttribute("bridge", newBrName)
+        print(domain.toxml())
+    except:
+        pass
+if __name__ == '__main__':
+    if len(sys.argv) != 5:
+        sys.exit(0)
+
+    if sys.argv[2] == "migrate" and sys.argv[3] == "begin":
+        handleMigrateBegin()
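The hook's XML rewrite can be sketched without libvirt or stdin. The helper below is hypothetical (the real hook reads the domain XML from stdin and looks up the physical device in agent.properties; here `phy_dev` is passed in directly) and shows how an old-style bridge attribute is rewritten in the domain definition:

```python
from xml.dom.minidom import parseString

def rewrite_bridges(domain_xml, phy_dev):
    # stand-in for handleMigrateBegin: rewrite old-style bridge names
    # in the incoming domain XML; phy_dev replaces the agent.properties lookup
    dom = parseString(domain_xml)
    for iface in dom.getElementsByTagName("interface"):
        source = iface.getElementsByTagName("source")[0]
        bridge = source.getAttribute("bridge")
        if bridge.startswith("cloudVirBr"):
            vlan_id = bridge.replace("cloudVirBr", "")
            source.setAttribute("bridge", "br" + phy_dev + "-" + vlan_id)
    return dom.documentElement.toxml()

xml = '<domain><devices><interface type="bridge"><source bridge="cloudVirBr200"/></interface></devices></domain>'
print(rewrite_bridges(xml, "eth0"))
```

Printing the rewritten XML is exactly how a libvirt hook hands the modified domain definition back to libvirtd during `migrate begin`.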
diff --git a/agent/pom.xml b/agent/pom.xml
index 7b00a93963f..14133226053 100644
--- a/agent/pom.xml
+++ b/agent/pom.xml
@@ -36,6 +36,10 @@
       <artifactId>cloud-utils</artifactId>
       <version>${project.version}</version>
     </dependency>
+    <dependency>
+      <groupId>commons-io</groupId>
+      <artifactId>commons-io</artifactId>
+    </dependency>
     <dependency>
       <groupId>commons-daemon</groupId>
       <artifactId>commons-daemon</artifactId>
diff --git a/agent/scripts/run.sh b/agent/scripts/run.sh
deleted file mode 100755
index 1fa427539fd..00000000000
--- a/agent/scripts/run.sh
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-#run.sh runs the agent client.
-java $1 -Xms128M -Xmx384M -cp cglib-nodep-2.2.jar:trilead-ssh2-build213.jar:cloud-api.jar:cloud-core-extras.jar:cloud-utils.jar:cloud-agent.jar:cloud-console-proxy.jar:cloud-console-common.jar:freemarker.jar:log4j-1.2.15.jar:ws-commons-util-1.0.2.jar:xmlrpc-client-3.1.3.jar:cloud-core.jar:xmlrpc-common-3.1.3.jar:javaee-api-5.0-1.jar:gson-1.3.jar:commons-httpclient-3.1.jar:commons-logging-1.1.1.jar:commons-codec-1.4.jar:commons-collections-3.2.1.jar:commons-pool-1.4.jar:apache-log4j-extras-1.0.jar:libvirt-0.4.5.jar:jna.jar:.:/etc/cloud:./*:/usr/share/java/*:./conf com.cloud.agent.AgentShell
diff --git a/agent/src/com/cloud/agent/Agent.java b/agent/src/com/cloud/agent/Agent.java
index f309474fbc5..c4f17b24ae7 100755
--- a/agent/src/com/cloud/agent/Agent.java
+++ b/agent/src/com/cloud/agent/Agent.java
@@ -26,7 +26,6 @@ import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Timer;
-import java.util.TimerTask;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
@@ -36,6 +35,7 @@ import java.util.concurrent.atomic.AtomicInteger;
import javax.naming.ConfigurationException;
+import org.apache.cloudstack.managed.context.ManagedContextTimerTask;
import org.apache.log4j.Logger;
import com.cloud.agent.api.AgentControlAnswer;
@@ -45,7 +45,6 @@ import com.cloud.agent.api.Command;
import com.cloud.agent.api.CronCommand;
import com.cloud.agent.api.MaintainAnswer;
import com.cloud.agent.api.MaintainCommand;
-import com.cloud.agent.api.ModifySshKeysCommand;
import com.cloud.agent.api.PingCommand;
import com.cloud.agent.api.ReadyCommand;
import com.cloud.agent.api.ShutdownCommand;
@@ -731,7 +730,7 @@ public class Agent implements HandlerFactory, IAgentControl {
}
}
- public class WatchTask extends TimerTask {
+ public class WatchTask extends ManagedContextTimerTask {
protected Request _request;
protected Agent _agent;
protected Link _link;
@@ -744,7 +743,7 @@ public class Agent implements HandlerFactory, IAgentControl {
}
@Override
- public void run() {
+ protected void runInContext() {
if (s_logger.isTraceEnabled()) {
s_logger.trace("Scheduling " + (_request instanceof Response ? "Ping" : "Watch Task"));
}
@@ -760,7 +759,7 @@ public class Agent implements HandlerFactory, IAgentControl {
}
}
- public class StartupTask extends TimerTask {
+ public class StartupTask extends ManagedContextTimerTask {
protected Link _link;
protected volatile boolean cancelled = false;
@@ -782,7 +781,7 @@ public class Agent implements HandlerFactory, IAgentControl {
}
@Override
- public synchronized void run() {
+ protected synchronized void runInContext() {
if (!cancelled) {
if (s_logger.isInfoEnabled()) {
s_logger.info("The startup command is now cancelled");
diff --git a/agent/src/com/cloud/agent/AgentShell.java b/agent/src/com/cloud/agent/AgentShell.java
index bf1e8180e44..900a13f4ab1 100644
--- a/agent/src/com/cloud/agent/AgentShell.java
+++ b/agent/src/com/cloud/agent/AgentShell.java
@@ -19,14 +19,12 @@ package com.cloud.agent;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
-import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
import java.util.ArrayList;
import java.util.Collections;
-import java.util.Date;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.List;
@@ -39,9 +37,8 @@ import javax.naming.ConfigurationException;
import org.apache.commons.daemon.Daemon;
import org.apache.commons.daemon.DaemonContext;
import org.apache.commons.daemon.DaemonInitException;
-import org.apache.commons.httpclient.HttpClient;
-import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
-import org.apache.commons.httpclient.methods.GetMethod;
+import org.apache.commons.io.IOUtils;
+import org.apache.commons.lang.math.NumberUtils;
import org.apache.log4j.Logger;
import org.apache.log4j.xml.DOMConfigurator;
@@ -56,12 +53,10 @@ import com.cloud.utils.PropertiesUtil;
import com.cloud.utils.backoff.BackoffAlgorithm;
import com.cloud.utils.backoff.impl.ConstantTimeBackoff;
import com.cloud.utils.exception.CloudRuntimeException;
-import com.cloud.utils.script.Script;
public class AgentShell implements IAgentShell, Daemon {
private static final Logger s_logger = Logger.getLogger(AgentShell.class
.getName());
- private static final MultiThreadedHttpConnectionManager s_httpClientManager = new MultiThreadedHttpConnectionManager();
private final Properties _properties = new Properties();
    private final Map<String, Object> _cmdLineProperties = new HashMap<String, Object>();
@@ -172,7 +167,7 @@ public class AgentShell implements IAgentShell, Daemon {
_storage.persist(name, value);
}
- private void loadProperties() throws ConfigurationException {
+ void loadProperties() throws ConfigurationException {
final File file = PropertiesUtil.findConfigFile("agent.properties");
if (file == null) {
throw new ConfigurationException("Unable to find agent.properties.");
@@ -180,14 +175,18 @@ public class AgentShell implements IAgentShell, Daemon {
s_logger.info("agent.properties found at " + file.getAbsolutePath());
+ InputStream propertiesStream = null;
try {
- _properties.load(new FileInputStream(file));
+ propertiesStream = new FileInputStream(file);
+ _properties.load(propertiesStream);
} catch (final FileNotFoundException ex) {
throw new CloudRuntimeException("Cannot find the file: "
+ file.getAbsolutePath(), ex);
} catch (final IOException ex) {
throw new CloudRuntimeException("IOException in reading "
+ file.getAbsolutePath(), ex);
+ } finally {
+ IOUtils.closeQuietly(propertiesStream);
}
}
@@ -199,30 +198,32 @@ public class AgentShell implements IAgentShell, Daemon {
String zone = null;
String pod = null;
String guid = null;
- for (int i = 0; i < args.length; i++) {
- final String[] tokens = args[i].split("=");
+ for (String param : args) {
+ final String[] tokens = param.split("=");
if (tokens.length != 2) {
- System.out.println("Invalid Parameter: " + args[i]);
+ System.out.println("Invalid Parameter: " + param);
continue;
}
+ final String paramName = tokens[0];
+ final String paramValue = tokens[1];
// save command line properties
- _cmdLineProperties.put(tokens[0], tokens[1]);
+ _cmdLineProperties.put(paramName, paramValue);
- if (tokens[0].equalsIgnoreCase("port")) {
- port = tokens[1];
- } else if (tokens[0].equalsIgnoreCase("threads") || tokens[0].equalsIgnoreCase("workers")) {
- workers = tokens[1];
- } else if (tokens[0].equalsIgnoreCase("host")) {
- host = tokens[1];
- } else if (tokens[0].equalsIgnoreCase("zone")) {
- zone = tokens[1];
- } else if (tokens[0].equalsIgnoreCase("pod")) {
- pod = tokens[1];
- } else if (tokens[0].equalsIgnoreCase("guid")) {
- guid = tokens[1];
- } else if (tokens[0].equalsIgnoreCase("eth1ip")) {
- _privateIp = tokens[1];
+ if (paramName.equalsIgnoreCase("port")) {
+ port = paramValue;
+ } else if (paramName.equalsIgnoreCase("threads") || paramName.equalsIgnoreCase("workers")) {
+ workers = paramValue;
+ } else if (paramName.equalsIgnoreCase("host")) {
+ host = paramValue;
+ } else if (paramName.equalsIgnoreCase("zone")) {
+ zone = paramValue;
+ } else if (paramName.equalsIgnoreCase("pod")) {
+ pod = paramValue;
+ } else if (paramName.equalsIgnoreCase("guid")) {
+ guid = paramValue;
+ } else if (paramName.equalsIgnoreCase("eth1ip")) {
+ _privateIp = paramValue;
}
}
@@ -230,16 +231,16 @@ public class AgentShell implements IAgentShell, Daemon {
port = getProperty(null, "port");
}
- _port = NumbersUtil.parseInt(port, 8250);
+ _port = NumberUtils.toInt(port, 8250);
- _proxyPort = NumbersUtil.parseInt(
+ _proxyPort = NumberUtils.toInt(
getProperty(null, "consoleproxy.httpListenPort"), 443);
if (workers == null) {
workers = getProperty(null, "workers");
}
- _workers = NumbersUtil.parseInt(workers, 5);
+ _workers = NumberUtils.toInt(workers, 5);
if (host == null) {
host = getProperty(null, "host");
@@ -309,7 +310,7 @@ public class AgentShell implements IAgentShell, Daemon {
// For KVM agent, do it specially here
File file = new File("/etc/cloudstack/agent/log4j-cloud.xml");
- if(file == null || !file.exists()) {
+ if(!file.exists()) {
file = PropertiesUtil.findConfigFile("log4j-cloud.xml");
}
DOMConfigurator.configureAndWatch(file.getAbsolutePath());
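The refactored parseCommand loop splits each `key=value` token, skips malformed parameters, and later resolves numeric settings through NumberUtils.toInt with a default. A hypothetical Python sketch of that behavior (names like `parse_agent_args` and `to_int` are illustrative, not CloudStack APIs):

```python
def to_int(value, default):
    # analogue of commons-lang NumberUtils.toInt: fall back to the
    # default on None or unparsable input instead of raising
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

def parse_agent_args(args):
    # mirror of the parseCommand loop: malformed tokens are reported
    # and skipped, well-formed ones are collected as properties
    props = {}
    for param in args:
        tokens = param.split("=")
        if len(tokens) != 2:
            print("Invalid Parameter: " + param)
            continue
        props[tokens[0].lower()] = tokens[1]
    return props

props = parse_agent_args(["port=55555", "workers=oops", "badtoken"])
print(to_int(props.get("port"), 8250))   # 55555
print(to_int(props.get("workers"), 5))   # 5, "oops" is not a number
```

This is why the switch from NumbersUtil.parseInt to NumberUtils.toInt is behavior-preserving for the defaults (8250 for port, 5 for workers): both return the fallback when the string is absent or invalid.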
diff --git a/agent/src/com/cloud/agent/dao/impl/PropertiesStorage.java b/agent/src/com/cloud/agent/dao/impl/PropertiesStorage.java
index 2bf26f48642..411d946a294 100755
--- a/agent/src/com/cloud/agent/dao/impl/PropertiesStorage.java
+++ b/agent/src/com/cloud/agent/dao/impl/PropertiesStorage.java
@@ -17,7 +17,6 @@
package com.cloud.agent.dao.impl;
import java.io.File;
-import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
@@ -26,6 +25,7 @@ import java.util.Properties;
import javax.ejb.Local;
+import org.apache.commons.io.IOUtils;
import org.apache.log4j.Logger;
import com.cloud.agent.dao.StorageComponent;
@@ -59,18 +59,10 @@ public class PropertiesStorage implements StorageComponent {
_properties.store(output, _name);
output.flush();
output.close();
- } catch (FileNotFoundException e) {
- s_logger.error("Who deleted the file? ", e);
} catch (IOException e) {
s_logger.error("Uh-oh: ", e);
} finally {
- if (output != null) {
- try {
- output.close();
- } catch (IOException e) {
- // ignore.
- }
- }
+ IOUtils.closeQuietly(output);
}
}
@@ -99,7 +91,7 @@ public class PropertiesStorage implements StorageComponent {
}
try {
- _properties.load(new FileInputStream(file));
+ PropertiesUtil.loadFromFile(_properties, file);
_file = file;
} catch (FileNotFoundException e) {
s_logger.error("How did we get here? ", e);
diff --git a/agent/src/com/cloud/agent/resource/consoleproxy/ConsoleProxyResource.java b/agent/src/com/cloud/agent/resource/consoleproxy/ConsoleProxyResource.java
index ee5c36176c8..6f49f47a1ed 100644
--- a/agent/src/com/cloud/agent/resource/consoleproxy/ConsoleProxyResource.java
+++ b/agent/src/com/cloud/agent/resource/consoleproxy/ConsoleProxyResource.java
@@ -32,6 +32,7 @@ import java.util.Properties;
import javax.naming.ConfigurationException;
+import org.apache.cloudstack.managed.context.ManagedContextRunnable;
import org.apache.log4j.Logger;
import com.cloud.agent.Agent.ExitStatus;
@@ -357,8 +358,9 @@ public class ConsoleProxyResource extends ServerResourceBase implements
private void launchConsoleProxy(final byte[] ksBits, final String ksPassword, final String encryptorPassword) {
final Object resource = this;
if (_consoleProxyMain == null) {
- _consoleProxyMain = new Thread(new Runnable() {
- public void run() {
+ _consoleProxyMain = new Thread(new ManagedContextRunnable() {
+ @Override
+ protected void runInContext() {
try {
                        Class<?> consoleProxyClazz = Class.forName("com.cloud.consoleproxy.ConsoleProxy");
try {
diff --git a/agent/test/com/cloud/agent/AgentShellTest.java b/agent/test/com/cloud/agent/AgentShellTest.java
new file mode 100644
index 00000000000..d92accbd7e8
--- /dev/null
+++ b/agent/test/com/cloud/agent/AgentShellTest.java
@@ -0,0 +1,48 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+package com.cloud.agent;
+
+import java.util.UUID;
+
+import javax.naming.ConfigurationException;
+
+import junit.framework.Assert;
+
+import org.junit.Test;
+
+public class AgentShellTest {
+ @Test
+ public void parseCommand() throws ConfigurationException {
+ AgentShell shell = new AgentShell();
+ UUID anyUuid = UUID.randomUUID();
+ shell.parseCommand(new String[] { "port=55555", "threads=4",
+ "host=localhost", "pod=pod1", "guid=" + anyUuid, "zone=zone1" });
+ Assert.assertEquals(55555, shell.getPort());
+ Assert.assertEquals(4, shell.getWorkers());
+ Assert.assertEquals("localhost", shell.getHost());
+ Assert.assertEquals(anyUuid.toString(), shell.getGuid());
+ Assert.assertEquals("pod1", shell.getPod());
+ Assert.assertEquals("zone1", shell.getZone());
+ }
+ @Test
+ public void loadProperties() throws ConfigurationException {
+ AgentShell shell = new AgentShell();
+ shell.loadProperties();
+ Assert.assertNotNull(shell.getProperties());
+ Assert.assertFalse(shell.getProperties().entrySet().isEmpty());
+ }
+}
diff --git a/agent/test/com/cloud/agent/dao/impl/PropertiesStorageTest.java b/agent/test/com/cloud/agent/dao/impl/PropertiesStorageTest.java
new file mode 100644
index 00000000000..adaebc61287
--- /dev/null
+++ b/agent/test/com/cloud/agent/dao/impl/PropertiesStorageTest.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package com.cloud.agent.dao.impl;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+
+import junit.framework.Assert;
+
+import org.apache.commons.io.FileUtils;
+import org.junit.Test;
+
+public class PropertiesStorageTest {
+ @Test
+ public void configureWithNotExistingFile() {
+ String fileName = "target/notyetexistingfile"
+ + System.currentTimeMillis();
+ File file = new File(fileName);
+
+ PropertiesStorage storage = new PropertiesStorage();
+        HashMap<String, Object> params = new HashMap<String, Object>();
+ params.put("path", fileName);
+ Assert.assertTrue(storage.configure("test", params));
+ Assert.assertTrue(file.exists());
+ storage.persist("foo", "bar");
+ Assert.assertEquals("bar", storage.get("foo"));
+
+ storage.stop();
+ file.delete();
+ }
+
+ @Test
+ public void configureWithExistingFile() throws IOException {
+ String fileName = "target/existingfile"
+ + System.currentTimeMillis();
+ File file = new File(fileName);
+
+ FileUtils.writeStringToFile(file, "a=b\n\n");
+
+ PropertiesStorage storage = new PropertiesStorage();
+        HashMap<String, Object> params = new HashMap<String, Object>();
+ params.put("path", fileName);
+ Assert.assertTrue(storage.configure("test", params));
+ Assert.assertEquals("b", storage.get("a"));
+ Assert.assertTrue(file.exists());
+ storage.persist("foo", "bar");
+ Assert.assertEquals("bar", storage.get("foo"));
+
+ storage.stop();
+ file.delete();
+ }
+}
diff --git a/docs/runbook/publican.cfg b/api/resources/META-INF/cloudstack/api-planner/module.properties
similarity index 76%
rename from docs/runbook/publican.cfg
rename to api/resources/META-INF/cloudstack/api-planner/module.properties
index 72722cd8ab5..8eed8791149 100644
--- a/docs/runbook/publican.cfg
+++ b/api/resources/META-INF/cloudstack/api-planner/module.properties
@@ -1,13 +1,12 @@
-# Config::Simple 4.59
-# Fri May 25 12:50:59 2012
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information#
+# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
-# http://www.apache.org/licenses/LICENSE-2.0
+#
+# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
@@ -15,8 +14,5 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
-
-xml_lang: "en-US"
-type: Book
-brand: cloudstack
-
+name=api-planner
+parent=planner
\ No newline at end of file
diff --git a/api/resources/META-INF/cloudstack/api-planner/spring-api-planner-context.xml b/api/resources/META-INF/cloudstack/api-planner/spring-api-planner-context.xml
new file mode 100644
index 00000000000..2fd34a8ee0a
--- /dev/null
+++ b/api/resources/META-INF/cloudstack/api-planner/spring-api-planner-context.xml
@@ -0,0 +1,34 @@
+
+
+
+
+
+
+
+
diff --git a/api/src/com/cloud/agent/api/to/DataStoreTO.java b/api/src/com/cloud/agent/api/to/DataStoreTO.java
index 9014f8e2b81..b79ba7d64be 100644
--- a/api/src/com/cloud/agent/api/to/DataStoreTO.java
+++ b/api/src/com/cloud/agent/api/to/DataStoreTO.java
@@ -20,7 +20,7 @@ package com.cloud.agent.api.to;
import com.cloud.storage.DataStoreRole;
-
public interface DataStoreTO {
public DataStoreRole getRole();
+ public String getUuid();
}
diff --git a/api/src/com/cloud/agent/api/to/DiskTO.java b/api/src/com/cloud/agent/api/to/DiskTO.java
index 556ccd4db46..a577689bdef 100644
--- a/api/src/com/cloud/agent/api/to/DiskTO.java
+++ b/api/src/com/cloud/agent/api/to/DiskTO.java
@@ -18,21 +18,35 @@
*/
package com.cloud.agent.api.to;
+import java.util.Map;
+
import com.cloud.storage.Volume;
public class DiskTO {
+ public static final String CHAP_INITIATOR_USERNAME = "chapInitiatorUsername";
+ public static final String CHAP_INITIATOR_SECRET = "chapInitiatorSecret";
+ public static final String CHAP_TARGET_USERNAME = "chapTargetUsername";
+ public static final String CHAP_TARGET_SECRET = "chapTargetSecret";
+ public static final String MANAGED = "managed";
+ public static final String IQN = "iqn";
+ public static final String STORAGE_HOST = "storageHost";
+ public static final String STORAGE_PORT = "storagePort";
+ public static final String VOLUME_SIZE = "volumeSize";
+
private DataTO data;
private Long diskSeq;
- private String vdiUuid;
+ private String path;
private Volume.Type type;
+    private Map<String, String> _details;
+
public DiskTO() {
}
- public DiskTO(DataTO data, Long diskSeq, String vdiUuid, Volume.Type type) {
+ public DiskTO(DataTO data, Long diskSeq, String path, Volume.Type type) {
this.data = data;
this.diskSeq = diskSeq;
- this.vdiUuid = vdiUuid;
+ this.path = path;
this.type = type;
}
@@ -52,12 +66,12 @@ public class DiskTO {
this.diskSeq = diskSeq;
}
- public String getVdiUuid() {
- return vdiUuid;
+ public String getPath() {
+ return path;
}
- public void setVdiUuid(String vdiUuid) {
- this.vdiUuid = vdiUuid;
+ public void setPath(String path) {
+ this.path = path;
}
public Volume.Type getType() {
@@ -67,4 +81,12 @@ public class DiskTO {
public void setType(Volume.Type type) {
this.type = type;
}
+
+    public void setDetails(Map<String, String> details) {
+ _details = details;
+ }
+
+    public Map<String, String> getDetails() {
+ return _details;
+ }
}
diff --git a/api/src/com/cloud/agent/api/to/NfsTO.java b/api/src/com/cloud/agent/api/to/NfsTO.java
index 415c95ce3f5..54683c7f410 100644
--- a/api/src/com/cloud/agent/api/to/NfsTO.java
+++ b/api/src/com/cloud/agent/api/to/NfsTO.java
@@ -22,6 +22,7 @@ public class NfsTO implements DataStoreTO {
private String _url;
private DataStoreRole _role;
+ private String uuid;
public NfsTO() {
@@ -55,6 +56,12 @@ public class NfsTO implements DataStoreTO {
this._role = _role;
}
+ @Override
+ public String getUuid() {
+ return uuid;
+ }
-
+ public void setUuid(String uuid) {
+ this.uuid = uuid;
+ }
}
diff --git a/api/src/com/cloud/agent/api/to/S3TO.java b/api/src/com/cloud/agent/api/to/S3TO.java
index b1b692a8bad..350b9ca8b60 100644
--- a/api/src/com/cloud/agent/api/to/S3TO.java
+++ b/api/src/com/cloud/agent/api/to/S3TO.java
@@ -39,6 +39,7 @@ public final class S3TO implements S3Utils.ClientOptions, DataStoreTO {
private Integer socketTimeout;
private Date created;
private boolean enableRRS;
+ private long maxSingleUploadSizeInBytes;
public S3TO() {
@@ -50,7 +51,7 @@ public final class S3TO implements S3Utils.ClientOptions, DataStoreTO {
final String secretKey, final String endPoint,
final String bucketName, final Boolean httpsFlag,
final Integer connectionTimeout, final Integer maxErrorRetry,
- final Integer socketTimeout, final Date created, final boolean enableRRS) {
+ final Integer socketTimeout, final Date created, final boolean enableRRS, final long maxUploadSize) {
super();
@@ -66,6 +67,7 @@ public final class S3TO implements S3Utils.ClientOptions, DataStoreTO {
this.socketTimeout = socketTimeout;
this.created = created;
this.enableRRS = enableRRS;
+ this.maxSingleUploadSizeInBytes = maxUploadSize;
}
@@ -268,7 +270,6 @@ public final class S3TO implements S3Utils.ClientOptions, DataStoreTO {
}
-
public boolean getEnableRRS() {
return enableRRS;
}
@@ -277,5 +278,28 @@ public final class S3TO implements S3Utils.ClientOptions, DataStoreTO {
this.enableRRS = enableRRS;
}
+ public long getMaxSingleUploadSizeInBytes() {
+ return maxSingleUploadSizeInBytes;
+ }
+ public void setMaxSingleUploadSizeInBytes(long maxSingleUploadSizeInBytes) {
+ this.maxSingleUploadSizeInBytes = maxSingleUploadSizeInBytes;
+ }
+
+ public boolean getSingleUpload(long objSize){
+ if ( maxSingleUploadSizeInBytes < 0 ){
+ // always use single part upload
+ return true;
+ } else if ( maxSingleUploadSizeInBytes == 0 ){
+ // always use multi part upload
+ return false;
+ } else {
+ // check object size to set flag
+ if (objSize < maxSingleUploadSizeInBytes){
+ return true;
+ } else{
+ return false;
+ }
+ }
+ }
}
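The new getSingleUpload method encodes a three-way threshold: a negative maxSingleUploadSizeInBytes forces single-part upload, zero forces multipart, and any positive value acts as a size cutoff. A hypothetical Python restatement of that decision table:

```python
def use_single_upload(max_single_upload_bytes, obj_size):
    # mirrors S3TO.getSingleUpload: negative threshold -> always
    # single-part, zero -> always multipart, positive -> size cutoff
    if max_single_upload_bytes < 0:
        return True
    if max_single_upload_bytes == 0:
        return False
    return obj_size < max_single_upload_bytes

print(use_single_upload(-1, 10**12))            # True: single-part regardless of size
print(use_single_upload(0, 1024))               # False: always multipart
print(use_single_upload(5 * 1024**3, 1024**3))  # True: 1 GB is under the 5 GB cutoff
```

Note the comparison is strict: an object exactly at the threshold goes through the multipart path.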
diff --git a/api/src/com/cloud/agent/api/to/SwiftTO.java b/api/src/com/cloud/agent/api/to/SwiftTO.java
index 7349d7779ac..3ad131ac4d8 100644
--- a/api/src/com/cloud/agent/api/to/SwiftTO.java
+++ b/api/src/com/cloud/agent/api/to/SwiftTO.java
@@ -29,8 +29,7 @@ public class SwiftTO implements DataStoreTO, SwiftUtil.SwiftClientCfg {
public SwiftTO() { }
- public SwiftTO(Long id, String url, String account, String userName, String key
- ) {
+ public SwiftTO(Long id, String url, String account, String userName, String key) {
this.id = id;
this.url = url;
this.account = account;
@@ -46,14 +45,17 @@ public class SwiftTO implements DataStoreTO, SwiftUtil.SwiftClientCfg {
return url;
}
+ @Override
public String getAccount() {
return account;
}
+ @Override
public String getUserName() {
return userName;
}
+ @Override
public String getKey() {
return key;
}
@@ -67,4 +69,9 @@ public class SwiftTO implements DataStoreTO, SwiftUtil.SwiftClientCfg {
public String getEndPoint() {
return this.url;
}
+
+ @Override
+ public String getUuid() {
+ return null;
+ }
}
diff --git a/api/src/com/cloud/event/EventTypes.java b/api/src/com/cloud/event/EventTypes.java
index 076a7c59211..0406c3e852d 100755
--- a/api/src/com/cloud/event/EventTypes.java
+++ b/api/src/com/cloud/event/EventTypes.java
@@ -76,6 +76,7 @@ public class EventTypes {
public static final String EVENT_VM_MIGRATE = "VM.MIGRATE";
public static final String EVENT_VM_MOVE = "VM.MOVE";
public static final String EVENT_VM_RESTORE = "VM.RESTORE";
+ public static final String EVENT_VM_EXPUNGE = "VM.EXPUNGE";
// Domain Router
public static final String EVENT_ROUTER_CREATE = "ROUTER.CREATE";
@@ -188,6 +189,8 @@ public class EventTypes {
public static final String EVENT_VOLUME_DETAIL_UPDATE = "VOLUME.DETAIL.UPDATE";
public static final String EVENT_VOLUME_DETAIL_ADD = "VOLUME.DETAIL.ADD";
public static final String EVENT_VOLUME_DETAIL_REMOVE = "VOLUME.DETAIL.REMOVE";
+ public static final String EVENT_VOLUME_UPDATE = "VOLUME.UPDATE";
+
// Domains
public static final String EVENT_DOMAIN_CREATE = "DOMAIN.CREATE";
@@ -197,6 +200,7 @@ public class EventTypes {
// Snapshots
public static final String EVENT_SNAPSHOT_CREATE = "SNAPSHOT.CREATE";
public static final String EVENT_SNAPSHOT_DELETE = "SNAPSHOT.DELETE";
+ public static final String EVENT_SNAPSHOT_REVERT = "SNAPSHOT.REVERT";
public static final String EVENT_SNAPSHOT_POLICY_CREATE = "SNAPSHOTPOLICY.CREATE";
public static final String EVENT_SNAPSHOT_POLICY_UPDATE = "SNAPSHOTPOLICY.UPDATE";
public static final String EVENT_SNAPSHOT_POLICY_DELETE = "SNAPSHOTPOLICY.DELETE";
@@ -456,6 +460,9 @@ public class EventTypes {
public static final String EVENT_ACL_GROUP_GRANT = "ACLGROUP.GRANT";
public static final String EVENT_ACL_GROUP_REVOKE = "ACLGROUP.REVOKE";
+ // Object store migration
+ public static final String EVENT_MIGRATE_PREPARE_SECONDARY_STORAGE = "MIGRATE.PREPARE.SS";
+
static {
// TODO: need a way to force author adding event types to declare the entity details as well, without breaking
diff --git a/api/src/com/cloud/exception/ConcurrentOperationException.java b/api/src/com/cloud/exception/ConcurrentOperationException.java
index cfe6ba3fa0a..018dba55f2e 100644
--- a/api/src/com/cloud/exception/ConcurrentOperationException.java
+++ b/api/src/com/cloud/exception/ConcurrentOperationException.java
@@ -17,8 +17,9 @@
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
+import com.cloud.utils.exception.CloudRuntimeException;
-public class ConcurrentOperationException extends CloudException {
+public class ConcurrentOperationException extends CloudRuntimeException {
private static final long serialVersionUID = SerialVersionUID.ConcurrentOperationException;
diff --git a/api/src/com/cloud/network/Networks.java b/api/src/com/cloud/network/Networks.java
index 7069282a669..0412bf45982 100755
--- a/api/src/com/cloud/network/Networks.java
+++ b/api/src/com/cloud/network/Networks.java
@@ -108,6 +108,7 @@ public class Networks {
},
Mido("mido", String.class),
Pvlan("pvlan", String.class),
+ Vxlan("vxlan", Long.class),
UnDecided(null, null);
private final String scheme;
diff --git a/api/src/com/cloud/network/PhysicalNetwork.java b/api/src/com/cloud/network/PhysicalNetwork.java
index f6cb1a6e0b6..55b18e67ba9 100644
--- a/api/src/com/cloud/network/PhysicalNetwork.java
+++ b/api/src/com/cloud/network/PhysicalNetwork.java
@@ -39,7 +39,8 @@ public interface PhysicalNetwork extends Identity, InternalIdentity {
STT,
VNS,
MIDO,
- SSP;
+ SSP,
+ VXLAN;
}
public enum BroadcastDomainRange {
diff --git a/api/src/com/cloud/network/RemoteAccessVpn.java b/api/src/com/cloud/network/RemoteAccessVpn.java
index 058b2f486e6..4f61334db1e 100644
--- a/api/src/com/cloud/network/RemoteAccessVpn.java
+++ b/api/src/com/cloud/network/RemoteAccessVpn.java
@@ -31,6 +31,7 @@ public interface RemoteAccessVpn extends ControlledEntity, InternalIdentity, Ide
String getIpRange();
String getIpsecPresharedKey();
String getLocalIp();
- long getNetworkId();
+ Long getNetworkId();
+ Long getVpcId();
State getState();
}
diff --git a/api/src/com/cloud/network/VirtualRouterProvider.java b/api/src/com/cloud/network/VirtualRouterProvider.java
index f67686e6b08..02efb93db5a 100644
--- a/api/src/com/cloud/network/VirtualRouterProvider.java
+++ b/api/src/com/cloud/network/VirtualRouterProvider.java
@@ -20,14 +20,14 @@ import org.apache.cloudstack.api.Identity;
import org.apache.cloudstack.api.InternalIdentity;
public interface VirtualRouterProvider extends InternalIdentity, Identity {
- public enum VirtualRouterProviderType {
+ public enum Type {
VirtualRouter,
ElasticLoadBalancerVm,
VPCVirtualRouter,
InternalLbVm
}
- public VirtualRouterProviderType getType();
+ public Type getType();
public boolean isEnabled();
diff --git a/api/src/com/cloud/network/element/RemoteAccessVPNServiceProvider.java b/api/src/com/cloud/network/element/RemoteAccessVPNServiceProvider.java
index 4950ed92cab..b9233755249 100644
--- a/api/src/com/cloud/network/element/RemoteAccessVPNServiceProvider.java
+++ b/api/src/com/cloud/network/element/RemoteAccessVPNServiceProvider.java
@@ -19,7 +19,6 @@ package com.cloud.network.element;
import java.util.List;
import com.cloud.exception.ResourceUnavailableException;
-import com.cloud.network.Network;
import com.cloud.network.RemoteAccessVpn;
import com.cloud.network.VpnUser;
import com.cloud.utils.component.Adapter;
@@ -27,7 +26,7 @@ import com.cloud.utils.component.Adapter;
public interface RemoteAccessVPNServiceProvider extends Adapter {
String[] applyVpnUsers(RemoteAccessVpn vpn, List<? extends VpnUser> users) throws ResourceUnavailableException;
- boolean startVpn(Network network, RemoteAccessVpn vpn) throws ResourceUnavailableException;
+ boolean startVpn(RemoteAccessVpn vpn) throws ResourceUnavailableException;
- boolean stopVpn(Network network, RemoteAccessVpn vpn) throws ResourceUnavailableException;
+ boolean stopVpn(RemoteAccessVpn vpn) throws ResourceUnavailableException;
}
diff --git a/api/src/com/cloud/network/element/VirtualRouterElementService.java b/api/src/com/cloud/network/element/VirtualRouterElementService.java
index ea971b89c5d..b0db3d9bce2 100644
--- a/api/src/com/cloud/network/element/VirtualRouterElementService.java
+++ b/api/src/com/cloud/network/element/VirtualRouterElementService.java
@@ -22,12 +22,12 @@ import org.apache.cloudstack.api.command.admin.router.ConfigureVirtualRouterElem
import org.apache.cloudstack.api.command.admin.router.ListVirtualRouterElementsCmd;
import com.cloud.network.VirtualRouterProvider;
-import com.cloud.network.VirtualRouterProvider.VirtualRouterProviderType;
+import com.cloud.network.VirtualRouterProvider.Type;
import com.cloud.utils.component.PluggableService;
public interface VirtualRouterElementService extends PluggableService{
VirtualRouterProvider configure(ConfigureVirtualRouterElementCmd cmd);
- VirtualRouterProvider addElement(Long nspId, VirtualRouterProviderType providerType);
+ VirtualRouterProvider addElement(Long nspId, Type providerType);
VirtualRouterProvider getCreatedElement(long id);
List<? extends VirtualRouterProvider> searchForVirtualRouterElement(ListVirtualRouterElementsCmd cmd);
}
diff --git a/api/src/com/cloud/network/vpn/RemoteAccessVpnService.java b/api/src/com/cloud/network/vpn/RemoteAccessVpnService.java
index 285e714122a..de7692d38af 100644
--- a/api/src/com/cloud/network/vpn/RemoteAccessVpnService.java
+++ b/api/src/com/cloud/network/vpn/RemoteAccessVpnService.java
@@ -31,7 +31,7 @@ import com.cloud.utils.Pair;
public interface RemoteAccessVpnService {
static final String RemoteAccessVpnClientIpRangeCK = "remote.access.vpn.client.iprange";
- RemoteAccessVpn createRemoteAccessVpn(long vpnServerAddressId, String ipRange, boolean openFirewall, long networkId)
+ RemoteAccessVpn createRemoteAccessVpn(long vpnServerAddressId, String ipRange, boolean openFirewall)
throws NetworkRuleConflictException;
void destroyRemoteAccessVpnForIp(long vpnServerAddressId, Account caller) throws ResourceUnavailableException;
RemoteAccessVpn startRemoteAccessVpn(long vpnServerAddressId, boolean openFirewall) throws ResourceUnavailableException;
@@ -47,5 +47,4 @@ public interface RemoteAccessVpnService {
List<? extends RemoteAccessVpn> listRemoteAccessVpns(long networkId);
RemoteAccessVpn getRemoteAccessVpn(long vpnAddrId);
-
}
diff --git a/api/src/com/cloud/offering/NetworkOffering.java b/api/src/com/cloud/offering/NetworkOffering.java
index 6c5573e0368..749dae32fc9 100644
--- a/api/src/com/cloud/offering/NetworkOffering.java
+++ b/api/src/com/cloud/offering/NetworkOffering.java
@@ -130,4 +130,6 @@ public interface NetworkOffering extends InfrastructureEntity, InternalIdentity,
boolean getEgressDefaultPolicy();
Integer getConcurrentConnections();
+
+ boolean isKeepAliveEnabled();
}
diff --git a/api/src/com/cloud/server/ResourceMetaDataService.java b/api/src/com/cloud/server/ResourceMetaDataService.java
index 556f97453a1..a71cfe7f1ee 100644
--- a/api/src/com/cloud/server/ResourceMetaDataService.java
+++ b/api/src/com/cloud/server/ResourceMetaDataService.java
@@ -19,19 +19,19 @@ package com.cloud.server;
import java.util.List;
import java.util.Map;
-import com.cloud.server.ResourceTag.TaggedResourceType;
+import org.apache.cloudstack.api.ResourceDetail;
+
+import com.cloud.server.ResourceTag.ResourceObjectType;
public interface ResourceMetaDataService {
- TaggedResourceType getResourceType (String resourceTypeStr);
-
/**
* @param resourceId TODO
* @param resourceType
* @param details
* @return
*/
- boolean addResourceMetaData(String resourceId, TaggedResourceType resourceType, Map details);
+ boolean addResourceMetaData(String resourceId, ResourceObjectType resourceType, Map details);
/**
@@ -41,7 +41,14 @@ public interface ResourceMetaDataService {
* @param key
* @return
*/
- public boolean deleteResourceMetaData(String resourceId, TaggedResourceType resourceType, String key);
+ public boolean deleteResourceMetaData(String resourceId, ResourceObjectType resourceType, String key);
- }
+ ResourceDetail getDetail(long resourceId, ResourceObjectType resourceType, String key);
+
+
+ Map<String, String> getDetailsMap(long resourceId, ResourceObjectType resourceType, Boolean forDisplay);
+
+ List<? extends ResourceDetail> getDetailsList(long resourceId, ResourceObjectType resourceType, Boolean forDisplay);
+
+}
diff --git a/api/src/com/cloud/server/ResourceTag.java b/api/src/com/cloud/server/ResourceTag.java
index f1d31e4e0d0..ab74d260dc3 100644
--- a/api/src/com/cloud/server/ResourceTag.java
+++ b/api/src/com/cloud/server/ResourceTag.java
@@ -22,25 +22,45 @@ import org.apache.cloudstack.api.InternalIdentity;
public interface ResourceTag extends ControlledEntity, Identity, InternalIdentity {
- public enum TaggedResourceType {
- UserVm,
- Template,
- ISO,
- Volume,
- Snapshot,
- Network,
- Nic,
- LoadBalancer,
- PortForwardingRule,
- FirewallRule,
- SecurityGroup,
- PublicIpAddress,
- Project,
- Vpc,
- NetworkACL,
- StaticRoute,
- VMSnapshot,
- RemoteAccessVpn
+ //FIXME - extract enum to another interface as its used both by resourceTags and resourceMetaData code
+ public enum ResourceObjectType {
+ UserVm (true, true),
+ Template (true, true),
+ ISO (true, false),
+ Volume (true, true),
+ Snapshot (true, false),
+ Network (true, true),
+ Nic (false, true),
+ LoadBalancer (true, false),
+ PortForwardingRule (true, false),
+ FirewallRule (true, true),
+ SecurityGroup (true, false),
+ PublicIpAddress (true, false),
+ Project (true, false),
+ Vpc (true, false),
+ NetworkACL (true, false),
+ StaticRoute (true, false),
+ VMSnapshot (true, false),
+ RemoteAccessVpn (true, false),
+ Zone (false, true),
+ ServiceOffering (false, true),
+ Storage(false, true);
+
+ ResourceObjectType(boolean resourceTagsSupport, boolean resourceMetadataSupport) {
+ this.resourceTagsSupport = resourceTagsSupport;
+ this.metadataSupport = resourceMetadataSupport;
+ }
+
+ private final boolean resourceTagsSupport;
+ private final boolean metadataSupport;
+
+ public boolean resourceTagsSupport() {
+ return this.resourceTagsSupport;
+ }
+
+ public boolean resourceMetadataSupport() {
+ return this.metadataSupport;
+ }
}
/**
@@ -61,7 +81,7 @@ public interface ResourceTag extends ControlledEntity, Identity, InternalIdentit
/**
* @return
*/
- TaggedResourceType getResourceType();
+ ResourceObjectType getResourceType();
/**
* @return
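The replacement enum above attaches two capability flags to each resource type, so callers can reject unsupported tag or metadata operations up front. A minimal self-contained sketch of the pattern (hypothetical `TagGate` helper with a trimmed-down copy of the enum, not part of this patch):

```java
// Trimmed copy of ResourceObjectType's (resourceTagsSupport, metadataSupport)
// constructor-flag pattern, plus a gate that rejects untaggable types.
enum ResourceObjectType {
    UserVm(true, true),
    Nic(false, true),    // metadata only
    ISO(true, false);    // tags only

    private final boolean resourceTagsSupport;
    private final boolean metadataSupport;

    ResourceObjectType(boolean resourceTagsSupport, boolean resourceMetadataSupport) {
        this.resourceTagsSupport = resourceTagsSupport;
        this.metadataSupport = resourceMetadataSupport;
    }

    boolean resourceTagsSupport() { return resourceTagsSupport; }
    boolean resourceMetadataSupport() { return metadataSupport; }
}

public class TagGate {
    static void checkTaggable(ResourceObjectType type) {
        if (!type.resourceTagsSupport()) {
            throw new IllegalArgumentException("Tags are not supported for " + type);
        }
    }

    public static void main(String[] args) {
        checkTaggable(ResourceObjectType.UserVm);   // ok
        checkTaggable(ResourceObjectType.Nic);      // throws IllegalArgumentException
    }
}
```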
diff --git a/api/src/com/cloud/server/TaggedResourceService.java b/api/src/com/cloud/server/TaggedResourceService.java
index 46b185480bb..97046ac3950 100644
--- a/api/src/com/cloud/server/TaggedResourceService.java
+++ b/api/src/com/cloud/server/TaggedResourceService.java
@@ -19,12 +19,10 @@ package com.cloud.server;
import java.util.List;
import java.util.Map;
-import com.cloud.server.ResourceTag.TaggedResourceType;
+import com.cloud.server.ResourceTag.ResourceObjectType;
public interface TaggedResourceService {
- TaggedResourceType getResourceType (String resourceTypeStr);
-
/**
* @param resourceIds TODO
* @param resourceType
@@ -32,14 +30,7 @@ public interface TaggedResourceService {
* @param customer TODO
* @return
*/
- List createTags(List resourceIds, TaggedResourceType resourceType, Map tags, String customer);
-
- /**
- * @param resourceId
- * @param resourceType
- * @return
- */
- String getUuid(String resourceId, TaggedResourceType resourceType);
+ List createTags(List resourceIds, ResourceObjectType resourceType, Map tags, String customer);
/**
@@ -48,10 +39,19 @@ public interface TaggedResourceService {
* @param tags
* @return
*/
- boolean deleteTags(List resourceIds, TaggedResourceType resourceType, Map tags);
+ boolean deleteTags(List resourceIds, ResourceObjectType resourceType, Map tags);
- List<? extends ResourceTag> listByResourceTypeAndId(TaggedResourceType type, long resourceId);
+ List<? extends ResourceTag> listByResourceTypeAndId(ResourceObjectType type, long resourceId);
- public Long getResourceId(String resourceId, TaggedResourceType resourceType);
+ //FIXME - the methods below should be extracted to its separate manager/service responsible just for retrieving object details
+ ResourceObjectType getResourceType (String resourceTypeStr);
- }
+ /**
+ * @param resourceId
+ * @param resourceType
+ * @return
+ */
+ String getUuid(String resourceId, ResourceObjectType resourceType);
+
+ public long getResourceId(String resourceId, ResourceObjectType resourceType);
+}
diff --git a/api/src/com/cloud/storage/StorageService.java b/api/src/com/cloud/storage/StorageService.java
index 1ae1d3a7102..cbbc1f33559 100644
--- a/api/src/com/cloud/storage/StorageService.java
+++ b/api/src/com/cloud/storage/StorageService.java
@@ -22,9 +22,9 @@ import org.apache.cloudstack.api.command.admin.storage.AddImageStoreCmd;
import org.apache.cloudstack.api.command.admin.storage.CancelPrimaryStorageMaintenanceCmd;
import org.apache.cloudstack.api.command.admin.storage.CreateSecondaryStagingStoreCmd;
import org.apache.cloudstack.api.command.admin.storage.CreateStoragePoolCmd;
-import org.apache.cloudstack.api.command.admin.storage.DeleteSecondaryStagingStoreCmd;
import org.apache.cloudstack.api.command.admin.storage.DeleteImageStoreCmd;
import org.apache.cloudstack.api.command.admin.storage.DeletePoolCmd;
+import org.apache.cloudstack.api.command.admin.storage.DeleteSecondaryStagingStoreCmd;
import org.apache.cloudstack.api.command.admin.storage.UpdateStoragePoolCmd;
import com.cloud.exception.DiscoveryException;
@@ -97,4 +97,18 @@ public interface StorageService{
ImageStore discoverImageStore(AddImageStoreCmd cmd) throws IllegalArgumentException, DiscoveryException, InvalidParameterValueException;
+ /**
+ * Prepare NFS secondary storage for object store migration
+ *
+ * @param cmd
+ * - the command specifying secondaryStorageId
+ * @return the storage pool
+ * @throws ResourceUnavailableException
+ * TODO
+ * @throws InsufficientCapacityException
+ * TODO
+ */
+ public ImageStore prepareSecondaryStorageForObjectStoreMigration(Long storeId) throws ResourceUnavailableException,
+ InsufficientCapacityException;
+
}
diff --git a/api/src/com/cloud/storage/VolumeApiService.java b/api/src/com/cloud/storage/VolumeApiService.java
index 0194c817cac..4806ae7c06f 100644
--- a/api/src/com/cloud/storage/VolumeApiService.java
+++ b/api/src/com/cloud/storage/VolumeApiService.java
@@ -84,7 +84,7 @@ public interface VolumeApiService {
Snapshot allocSnapshot(Long volumeId, Long policyId)
throws ResourceAllocationException;
- Volume updateVolume(UpdateVolumeCmd updateVolumeCmd);
+ Volume updateVolume(long volumeId, String path, String state, Long storageId, Boolean displayVolume);
/**
* Extracts the volume to a particular location.
diff --git a/api/src/com/cloud/storage/snapshot/SnapshotApiService.java b/api/src/com/cloud/storage/snapshot/SnapshotApiService.java
index 23e65220ff9..4f135107f07 100644
--- a/api/src/com/cloud/storage/snapshot/SnapshotApiService.java
+++ b/api/src/com/cloud/storage/snapshot/SnapshotApiService.java
@@ -106,4 +106,6 @@ public interface SnapshotApiService {
* @return
*/
Long getHostIdForSnapshotOperation(Volume vol);
+
+ boolean revertSnapshot(Long snapshotId);
}
diff --git a/api/src/com/cloud/user/DomainService.java b/api/src/com/cloud/user/DomainService.java
index 7c302e377fd..f10728f6fb4 100644
--- a/api/src/com/cloud/user/DomainService.java
+++ b/api/src/com/cloud/user/DomainService.java
@@ -33,6 +33,9 @@ public interface DomainService {
Domain getDomain(String uuid);
+ Domain getDomainByName(String name, long parentId);
+
+
/**
* Return whether a domain is a child domain of a given domain.
*
diff --git a/api/src/com/cloud/vm/UserVmService.java b/api/src/com/cloud/vm/UserVmService.java
index 7d459b99a9e..0b142e83b72 100755
--- a/api/src/com/cloud/vm/UserVmService.java
+++ b/api/src/com/cloud/vm/UserVmService.java
@@ -23,6 +23,7 @@ import javax.naming.InsufficientResourcesException;
import org.apache.cloudstack.api.BaseCmd.HTTPMethod;
import org.apache.cloudstack.api.command.admin.vm.AssignVMCmd;
+import org.apache.cloudstack.api.command.admin.vm.ExpungeVMCmd;
import org.apache.cloudstack.api.command.admin.vm.RecoverVMCmd;
import org.apache.cloudstack.api.command.user.vm.AddNicToVMCmd;
import org.apache.cloudstack.api.command.user.vm.DeployVMCmd;
@@ -463,4 +464,8 @@ public interface UserVmService {
UserVm upgradeVirtualMachine(ScaleVMCmd cmd) throws ResourceUnavailableException, ConcurrentOperationException, ManagementServerException, VirtualMachineMigrationException;
+ UserVm expungeVm(ExpungeVMCmd cmd) throws ResourceUnavailableException, ConcurrentOperationException;
+
+ UserVm expungeVm(long vmId) throws ResourceUnavailableException, ConcurrentOperationException;
+
}
diff --git a/api/src/com/cloud/vm/VmDetailConstants.java b/api/src/com/cloud/vm/VmDetailConstants.java
index 5ff3ce02fe4..87f4b5dc5de 100644
--- a/api/src/com/cloud/vm/VmDetailConstants.java
+++ b/api/src/com/cloud/vm/VmDetailConstants.java
@@ -21,4 +21,5 @@ public interface VmDetailConstants {
public static final String NIC_ADAPTER = "nicAdapter";
public static final String ROOK_DISK_CONTROLLER = "rootDiskController";
public static final String NESTED_VIRTUALIZATION_FLAG = "nestedVirtualizationFlag";
+ public static final String HYPERVISOR_TOOLS_VERSION = "hypervisortoolsversion";
}
diff --git a/api/src/org/apache/cloudstack/api/APICommand.java b/api/src/org/apache/cloudstack/api/APICommand.java
index 621b3476066..008bd1ed4d8 100644
--- a/api/src/org/apache/cloudstack/api/APICommand.java
+++ b/api/src/org/apache/cloudstack/api/APICommand.java
@@ -22,6 +22,7 @@ import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
+import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.ResponseObject.ResponseView;
@Retention(RetentionPolicy.RUNTIME)
@@ -40,4 +41,6 @@ public @interface APICommand {
String since() default "";
ResponseView responseView() default ResponseView.Admin;
+
+ RoleType[] authorized() default {};
}
diff --git a/api/src/org/apache/cloudstack/api/ApiCommandJobType.java b/api/src/org/apache/cloudstack/api/ApiCommandJobType.java
index c48e6494477..6f9ac2dbf33 100644
--- a/api/src/org/apache/cloudstack/api/ApiCommandJobType.java
+++ b/api/src/org/apache/cloudstack/api/ApiCommandJobType.java
@@ -28,6 +28,7 @@ public enum ApiCommandJobType {
SystemVm,
Host,
StoragePool,
+ ImageStore,
IpAddress,
PortableIpAddress,
SecurityGroup,
diff --git a/api/src/org/apache/cloudstack/api/ApiConstants.java b/api/src/org/apache/cloudstack/api/ApiConstants.java
index 78200e58b50..32c2c5e9637 100755
--- a/api/src/org/apache/cloudstack/api/ApiConstants.java
+++ b/api/src/org/apache/cloudstack/api/ApiConstants.java
@@ -34,6 +34,7 @@ public class ApiConstants {
public static final String BYTES_READ_RATE = "bytesreadrate";
public static final String BYTES_WRITE_RATE = "byteswriterate";
public static final String CATEGORY = "category";
+ public static final String CAN_REVERT = "canrevert";
public static final String CERTIFICATE = "certificate";
public static final String PRIVATE_KEY = "privatekey";
public static final String DOMAIN_SUFFIX = "domainsuffix";
@@ -142,6 +143,7 @@ public class ApiConstants {
public static final String MAX_SNAPS = "maxsnaps";
public static final String MEMORY = "memory";
public static final String MODE = "mode";
+ public static final String KEEPALIVE_ENABLED = "keepaliveenabled";
public static final String NAME = "name";
public static final String METHOD_NAME = "methodname";
public static final String NETWORK_DOMAIN = "networkdomain";
@@ -186,6 +188,7 @@ public class ApiConstants {
public static final String REQUIRES_HVM = "requireshvm";
public static final String RESOURCE_TYPE = "resourcetype";
public static final String RESPONSE = "response";
+ public static final String REVERTABLE = "revertable";
public static final String QUERY_FILTER = "queryfilter";
public static final String SCHEDULE = "schedule";
public static final String SCOPE = "scope";
@@ -530,6 +533,11 @@ public class ApiConstants {
public static final String ENTITY_ID = "entityid";
public static final String ACCESS_TYPE = "accesstype";
+ public static final String RESOURCE_DETAILS = "resourcedetails";
+ public static final String EXPUNGE = "expunge";
+ public static final String FOR_DISPLAY = "fordisplay";
+
+
public enum HostDetails {
all, capacity, events, stats, min;
}
diff --git a/api/src/org/apache/cloudstack/api/BaseCmd.java b/api/src/org/apache/cloudstack/api/BaseCmd.java
index 84095546cec..b1ee0877428 100644
--- a/api/src/org/apache/cloudstack/api/BaseCmd.java
+++ b/api/src/org/apache/cloudstack/api/BaseCmd.java
@@ -21,7 +21,6 @@ import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
-import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;
@@ -76,7 +75,6 @@ import com.cloud.user.Account;
import com.cloud.user.AccountService;
import com.cloud.user.DomainService;
import com.cloud.user.ResourceLimitService;
-import com.cloud.utils.Pair;
import com.cloud.utils.db.EntityManager;
import com.cloud.vm.UserVmService;
import com.cloud.vm.snapshot.VMSnapshotService;
@@ -303,172 +301,6 @@ public abstract class BaseCmd {
return lowercaseParams;
}
- public String buildResponse(ServerApiException apiException, String responseType) {
- StringBuffer sb = new StringBuffer();
- if (RESPONSE_TYPE_JSON.equalsIgnoreCase(responseType)) {
- // JSON response
- sb.append("{ \"" + getCommandName() + "\" : { " + "\"@attributes\":{\"cloud-stack-version\":\"" + _mgr.getVersion() + "\"},");
- sb.append("\"errorcode\" : \"" + apiException.getErrorCode() + "\", \"description\" : \"" + apiException.getDescription() + "\" } }");
- } else {
- sb.append("<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>");
- sb.append("<" + getCommandName() + ">");
- sb.append("<errorcode>" + apiException.getErrorCode() + "</errorcode>");
- sb.append("<description>" + escapeXml(apiException.getDescription()) + "</description>");
- sb.append("</" + getCommandName() + " cloud-stack-version=\"" + _mgr.getVersion() + "\">");
- }
- return sb.toString();
- }
-
- public String buildResponse(List<Pair<String, Object>> tagList, String responseType) {
- StringBuffer prefixSb = new StringBuffer();
- StringBuffer suffixSb = new StringBuffer();
-
- // set up the return value with the name of the response
- if (RESPONSE_TYPE_JSON.equalsIgnoreCase(responseType)) {
- prefixSb.append("{ \"" + getCommandName() + "\" : { \"@attributes\":{\"cloud-stack-version\":\"" + _mgr.getVersion() + "\"},");
- } else {
- prefixSb.append("<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>");
- prefixSb.append("<" + getCommandName() + " cloud-stack-version=\"" + _mgr.getVersion() + "\">");
- }
-
- int i = 0;
- for (Pair<String, Object> tagData : tagList) {
- String tagName = tagData.first();
- Object tagValue = tagData.second();
- if (tagValue instanceof Object[]) {
- Object[] subObjects = (Object[]) tagValue;
- if (subObjects.length < 1) {
- continue;
- }
- writeObjectArray(responseType, suffixSb, i++, tagName, subObjects);
- } else {
- writeNameValuePair(suffixSb, tagName, tagValue, responseType, i++);
- }
- }
-
- if (suffixSb.length() > 0) {
- if (RESPONSE_TYPE_JSON.equalsIgnoreCase(responseType)) { // append comma only if we have some suffix else
- // not as per strict Json syntax.
- prefixSb.append(",");
- }
- prefixSb.append(suffixSb);
- }
- // close the response
- if (RESPONSE_TYPE_JSON.equalsIgnoreCase(responseType)) {
- prefixSb.append("} }");
- } else {
- prefixSb.append("</" + getCommandName() + ">");
- }
- return prefixSb.toString();
- }
-
- private void writeNameValuePair(StringBuffer sb, String tagName, Object tagValue, String responseType, int propertyCount) {
- if (tagValue == null) {
- return;
- }
-
- if (tagValue instanceof Object[]) {
- Object[] subObjects = (Object[]) tagValue;
- if (subObjects.length < 1) {
- return;
- }
- writeObjectArray(responseType, sb, propertyCount, tagName, subObjects);
- } else {
- if (RESPONSE_TYPE_JSON.equalsIgnoreCase(responseType)) {
- String seperator = ((propertyCount > 0) ? ", " : "");
- sb.append(seperator + "\"" + tagName + "\" : \"" + escapeJSON(tagValue.toString()) + "\"");
- } else {
- sb.append("<" + tagName + ">" + escapeXml(tagValue.toString()) + "</" + tagName + ">");
- }
- }
- }
-
- @SuppressWarnings("rawtypes")
- private void writeObjectArray(String responseType, StringBuffer sb, int propertyCount, String tagName, Object[] subObjects) {
- if (RESPONSE_TYPE_JSON.equalsIgnoreCase(responseType)) {
- String separator = ((propertyCount > 0) ? ", " : "");
- sb.append(separator);
- }
- int j = 0;
- for (Object subObject : subObjects) {
- if (subObject instanceof List) {
- List subObjList = (List) subObject;
- writeSubObject(sb, tagName, subObjList, responseType, j++);
- }
- }
-
- if (RESPONSE_TYPE_JSON.equalsIgnoreCase(responseType)) {
- sb.append("]");
- }
- }
-
- @SuppressWarnings("rawtypes")
- private void writeSubObject(StringBuffer sb, String tagName, List tagList, String responseType, int objectCount) {
- if (RESPONSE_TYPE_JSON.equalsIgnoreCase(responseType)) {
- sb.append(((objectCount == 0) ? "\"" + tagName + "\" : [ { " : ", { "));
- } else {
- sb.append("<" + tagName + ">");
- }
-
- int i = 0;
- for (Object tag : tagList) {
- if (tag instanceof Pair) {
- Pair nameValuePair = (Pair) tag;
- writeNameValuePair(sb, (String) nameValuePair.first(), nameValuePair.second(), responseType, i++);
- }
- }
-
- if (RESPONSE_TYPE_JSON.equalsIgnoreCase(responseType)) {
- sb.append("}");
- } else {
- sb.append("</" + tagName + ">");
- }
- }
-
- /**
- * Escape xml response set to false by default. API commands to override this method to allow escaping
- */
- public boolean requireXmlEscape() {
- return true;
- }
-
- private String escapeXml(String xml) {
- if (!requireXmlEscape()) {
- return xml;
- }
- int iLen = xml.length();
- if (iLen == 0) {
- return xml;
- }
- StringBuffer sOUT = new StringBuffer(iLen + 256);
- int i = 0;
- for (; i < iLen; i++) {
- char c = xml.charAt(i);
- if (c == '<') {
- sOUT.append("&lt;");
- } else if (c == '>') {
- sOUT.append("&gt;");
- } else if (c == '&') {
- sOUT.append("&amp;");
- } else if (c == '"') {
- sOUT.append("&quot;");
- } else if (c == '\'') {
- sOUT.append("&apos;");
- } else {
- sOUT.append(c);
- }
- }
- return sOUT.toString();
- }
-
- private static String escapeJSON(String str) {
- if (str == null) {
- return str;
- }
-
- return str.replace("\"", "\\\"");
- }
-
protected long getInstanceIdFromJobSuccessResult(String result) {
s_logger.debug("getInstanceIdFromJobSuccessResult not overridden in subclass " + this.getClass().getName());
return 0;
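The `escapeXml` helper deleted from `BaseCmd` above walks the string character by character and replaces the XML metacharacters with named entities. A standalone sketch of that behavior (hypothetical `XmlEscapeSketch` class, assuming the usual five named entities; not part of this patch):

```java
// Standalone version of the character-by-character XML escaping removed
// from BaseCmd: maps the five XML metacharacters to their named entities.
public class XmlEscapeSketch {
    public static String escapeXml(String xml) {
        StringBuilder out = new StringBuilder(xml.length() + 16);
        for (char c : xml.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&apos;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // prints: &lt;b&gt;bold &amp; &quot;quoted&quot;&lt;/b&gt;
        System.out.println(escapeXml("<b>bold & \"quoted\"</b>"));
    }
}
```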
diff --git a/api/src/org/apache/cloudstack/api/ResourceDetail.java b/api/src/org/apache/cloudstack/api/ResourceDetail.java
new file mode 100644
index 00000000000..4914c7806c1
--- /dev/null
+++ b/api/src/org/apache/cloudstack/api/ResourceDetail.java
@@ -0,0 +1,29 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+package org.apache.cloudstack.api;
+
+public interface ResourceDetail extends InternalIdentity{
+
+ public long getResourceId();
+
+ public String getName();
+
+ public String getValue();
+
+ public boolean isDisplay();
+
+}
diff --git a/api/src/org/apache/cloudstack/api/command/admin/network/CreateNetworkOfferingCmd.java b/api/src/org/apache/cloudstack/api/command/admin/network/CreateNetworkOfferingCmd.java
index bdad904c1dd..7296d5315d8 100644
--- a/api/src/org/apache/cloudstack/api/command/admin/network/CreateNetworkOfferingCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/admin/network/CreateNetworkOfferingCmd.java
@@ -96,12 +96,15 @@ public class CreateNetworkOfferingCmd extends BaseCmd {
private Boolean isPersistent;
@Parameter(name=ApiConstants.DETAILS, type=CommandType.MAP, since="4.2.0", description="Network offering details in key/value pairs." +
- " Supported keys are internallbprovider/publiclbprovider with service provider as a value")
+ " Supported keys are internallbprovider/publiclbprovider with service provider as a value")
protected Map details;
@Parameter(name=ApiConstants.EGRESS_DEFAULT_POLICY, type=CommandType.BOOLEAN, description="true if default guest network egress policy is allow; false if default egress policy is deny")
private Boolean egressDefaultPolicy;
+ @Parameter(name=ApiConstants.KEEPALIVE_ENABLED, type=CommandType.BOOLEAN, required=false, description="if true keepalive will be turned on in the loadbalancer. At the time of writing this has only an effect on haproxy; the mode http and httpclose options are unset in the haproxy conf file.")
+ private Boolean keepAliveEnabled;
+
@Parameter(name=ApiConstants.MAX_CONNECTIONS, type=CommandType.INTEGER, description="maximum number of concurrent connections supported by the network offering")
private Integer maxConnections;
@@ -175,6 +178,10 @@ public class CreateNetworkOfferingCmd extends BaseCmd {
return egressDefaultPolicy;
}
+ public Boolean getKeepAliveEnabled() {
+ return keepAliveEnabled;
+ }
+
public Integer getMaxconnections() {
return maxConnections;
}
diff --git a/api/src/org/apache/cloudstack/api/command/admin/network/UpdateNetworkOfferingCmd.java b/api/src/org/apache/cloudstack/api/command/admin/network/UpdateNetworkOfferingCmd.java
index c9c4c8ad3b4..f9bdadb4547 100644
--- a/api/src/org/apache/cloudstack/api/command/admin/network/UpdateNetworkOfferingCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/admin/network/UpdateNetworkOfferingCmd.java
@@ -57,6 +57,9 @@ public class UpdateNetworkOfferingCmd extends BaseCmd {
@Parameter(name=ApiConstants.STATE, type=CommandType.STRING, description="update state for the network offering")
private String state;
+ @Parameter(name=ApiConstants.KEEPALIVE_ENABLED, type=CommandType.BOOLEAN, required=false, description="if true, keepalive will be turned on in the load balancer. At the time of writing this has an effect only on haproxy; the 'mode http' and 'httpclose' options are unset in the haproxy conf file.")
+ private Boolean keepAliveEnabled;
+
@Parameter(name=ApiConstants.MAX_CONNECTIONS, type=CommandType.INTEGER, description="maximum number of concurrent connections supported by the network offering")
private Integer maxConnections;
@@ -91,6 +94,10 @@ public class UpdateNetworkOfferingCmd extends BaseCmd {
public Integer getMaxconnections() {
return maxConnections;
}
+
+ public Boolean getKeepAliveEnabled() {
+ return keepAliveEnabled;
+ }
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
diff --git a/api/src/org/apache/cloudstack/api/command/admin/offering/UpdateDiskOfferingCmd.java b/api/src/org/apache/cloudstack/api/command/admin/offering/UpdateDiskOfferingCmd.java
index 1e421a13d3f..a7b8dcd5439 100644
--- a/api/src/org/apache/cloudstack/api/command/admin/offering/UpdateDiskOfferingCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/admin/offering/UpdateDiskOfferingCmd.java
@@ -49,6 +49,9 @@ public class UpdateDiskOfferingCmd extends BaseCmd{
@Parameter(name=ApiConstants.SORT_KEY, type=CommandType.INTEGER, description="sort key of the disk offering, integer")
private Integer sortKey;
+ @Parameter(name=ApiConstants.DISPLAY_OFFERING, type=CommandType.BOOLEAN, description="an optional field, whether to display the offering to the end user or not.")
+ private Boolean displayOffering;
+
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
@@ -69,8 +72,11 @@ public class UpdateDiskOfferingCmd extends BaseCmd{
return sortKey;
}
+ public Boolean getDisplayOffering() {
+ return displayOffering;
+ }
- /////////////////////////////////////////////////////
+/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
diff --git a/api/src/org/apache/cloudstack/api/command/admin/router/CreateVirtualRouterElementCmd.java b/api/src/org/apache/cloudstack/api/command/admin/router/CreateVirtualRouterElementCmd.java
index 66c8ae5cb74..35da69778f3 100644
--- a/api/src/org/apache/cloudstack/api/command/admin/router/CreateVirtualRouterElementCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/admin/router/CreateVirtualRouterElementCmd.java
@@ -36,7 +36,7 @@ import com.cloud.event.EventTypes;
import com.cloud.exception.InvalidParameterValueException;
import com.cloud.exception.ResourceAllocationException;
import com.cloud.network.VirtualRouterProvider;
-import com.cloud.network.VirtualRouterProvider.VirtualRouterProviderType;
+import com.cloud.network.VirtualRouterProvider.Type;
import com.cloud.network.element.VirtualRouterElementService;
import com.cloud.user.Account;
@@ -70,15 +70,15 @@ public class CreateVirtualRouterElementCmd extends BaseAsyncCreateCmd {
return nspId;
}
- public VirtualRouterProviderType getProviderType() {
+ public Type getProviderType() {
if (providerType != null) {
- if (providerType.equalsIgnoreCase(VirtualRouterProviderType.VirtualRouter.toString())) {
- return VirtualRouterProviderType.VirtualRouter;
- } else if (providerType.equalsIgnoreCase(VirtualRouterProviderType.VPCVirtualRouter.toString())) {
- return VirtualRouterProviderType.VPCVirtualRouter;
+ if (providerType.equalsIgnoreCase(Type.VirtualRouter.toString())) {
+ return Type.VirtualRouter;
+ } else if (providerType.equalsIgnoreCase(Type.VPCVirtualRouter.toString())) {
+ return Type.VPCVirtualRouter;
} else throw new InvalidParameterValueException("Invalid providerType specified");
}
- return VirtualRouterProviderType.VirtualRouter;
+ return Type.VirtualRouter;
}
/////////////////////////////////////////////////////
diff --git a/api/src/org/apache/cloudstack/api/command/admin/storage/ListStoragePoolsCmd.java b/api/src/org/apache/cloudstack/api/command/admin/storage/ListStoragePoolsCmd.java
index 26351bb7755..ddf0391a905 100644
--- a/api/src/org/apache/cloudstack/api/command/admin/storage/ListStoragePoolsCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/admin/storage/ListStoragePoolsCmd.java
@@ -59,7 +59,7 @@ public class ListStoragePoolsCmd extends BaseListCmd {
@Parameter(name=ApiConstants.ZONE_ID, type=CommandType.UUID, entityType = ZoneResponse.class,
description="the Zone ID for the storage pool")
private Long zoneId;
-
+
@Parameter(name=ApiConstants.ID, type=CommandType.UUID, entityType = StoragePoolResponse.class,
description="the ID of the storage pool")
private Long id;
@@ -109,6 +109,7 @@ public class ListStoragePoolsCmd extends BaseListCmd {
return s_name;
}
+ @Override
public ApiCommandJobType getInstanceType() {
return ApiCommandJobType.StoragePool;
}
diff --git a/api/src/org/apache/cloudstack/api/command/admin/storage/PrepareSecondaryStorageForMigrationCmd.java b/api/src/org/apache/cloudstack/api/command/admin/storage/PrepareSecondaryStorageForMigrationCmd.java
new file mode 100644
index 00000000000..d0c995a64f1
--- /dev/null
+++ b/api/src/org/apache/cloudstack/api/command/admin/storage/PrepareSecondaryStorageForMigrationCmd.java
@@ -0,0 +1,109 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+package org.apache.cloudstack.api.command.admin.storage;
+
+import org.apache.log4j.Logger;
+
+import org.apache.cloudstack.api.APICommand;
+import org.apache.cloudstack.api.ApiCommandJobType;
+import org.apache.cloudstack.api.ApiConstants;
+import org.apache.cloudstack.api.ApiErrorCode;
+import org.apache.cloudstack.api.BaseAsyncCmd;
+import org.apache.cloudstack.api.Parameter;
+import org.apache.cloudstack.api.ServerApiException;
+import org.apache.cloudstack.api.response.ImageStoreResponse;
+import org.apache.cloudstack.context.CallContext;
+
+import com.cloud.event.EventTypes;
+import com.cloud.exception.InsufficientCapacityException;
+import com.cloud.exception.ResourceUnavailableException;
+import com.cloud.storage.ImageStore;
+import com.cloud.user.Account;
+
+@APICommand(name = "prepareSecondaryStorageForMigration", description = "Prepare an NFS secondary storage to migrate to an object store like S3", responseObject = ImageStoreResponse.class)
+public class PrepareSecondaryStorageForMigrationCmd extends BaseAsyncCmd {
+ public static final Logger s_logger = Logger.getLogger(PrepareSecondaryStorageForMigrationCmd.class.getName());
+ private static final String s_name = "preparesecondarystorageformigrationresponse";
+
+ /////////////////////////////////////////////////////
+ //////////////// API parameters /////////////////////
+ /////////////////////////////////////////////////////
+
+ @Parameter(name = ApiConstants.ID, type = CommandType.UUID, entityType = ImageStoreResponse.class,
+ required = true, description = "Secondary image store ID")
+ private Long id;
+
+ /////////////////////////////////////////////////////
+ /////////////////// Accessors ///////////////////////
+ /////////////////////////////////////////////////////
+
+ public Long getId() {
+ return id;
+ }
+
+ /////////////////////////////////////////////////////
+ /////////////// API Implementation///////////////////
+ /////////////////////////////////////////////////////
+
+ @Override
+ public String getCommandName() {
+ return s_name;
+ }
+
+ @Override
+ public ApiCommandJobType getInstanceType() {
+ return ApiCommandJobType.ImageStore;
+ }
+
+ @Override
+ public Long getInstanceId() {
+ return getId();
+ }
+
+ @Override
+ public long getEntityOwnerId() {
+ Account account = CallContext.current().getCallingAccount();
+ if (account != null) {
+ return account.getId();
+ }
+
+ return Account.ACCOUNT_ID_SYSTEM; // no account info given, parent this command to SYSTEM so ERROR events are tracked
+ }
+
+ @Override
+ public String getEventType() {
+ return EventTypes.EVENT_MIGRATE_PREPARE_SECONDARY_STORAGE;
+ }
+
+ @Override
+ public String getEventDescription() {
+ return "preparing secondary storage: " + getId() + " for object store migration";
+ }
+
+ @Override
+ public void execute() throws ResourceUnavailableException, InsufficientCapacityException{
+ ImageStore result = _storageService.prepareSecondaryStorageForObjectStoreMigration(getId());
+ if (result != null){
+ ImageStoreResponse response = _responseGenerator.createImageStoreResponse(result);
+ response.setResponseName(getCommandName());
+ response.setObjectName("secondarystorage");
+ setResponseObject(response);
+ } else {
+ throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to prepare secondary storage for object store migration");
+ }
+ }
+}
diff --git a/api/src/org/apache/cloudstack/api/command/admin/vm/AssignVMCmd.java b/api/src/org/apache/cloudstack/api/command/admin/vm/AssignVMCmd.java
index 2a60e192ca3..6da4b6c9034 100644
--- a/api/src/org/apache/cloudstack/api/command/admin/vm/AssignVMCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/admin/vm/AssignVMCmd.java
@@ -39,7 +39,7 @@ import com.cloud.uservm.UserVm;
public class AssignVMCmd extends BaseCmd {
public static final Logger s_logger = Logger.getLogger(AssignVMCmd.class.getName());
- private static final String s_name = "moveuservmresponse";
+ private static final String s_name = "assignvirtualmachineresponse";
/////////////////////////////////////////////////////
//////////////// API parameters /////////////////////
diff --git a/api/src/org/apache/cloudstack/api/command/admin/vm/ExpungeVMCmd.java b/api/src/org/apache/cloudstack/api/command/admin/vm/ExpungeVMCmd.java
new file mode 100644
index 00000000000..387a0e986b2
--- /dev/null
+++ b/api/src/org/apache/cloudstack/api/command/admin/vm/ExpungeVMCmd.java
@@ -0,0 +1,116 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+package org.apache.cloudstack.api.command.admin.vm;
+
+import org.apache.cloudstack.api.APICommand;
+import org.apache.cloudstack.api.ApiCommandJobType;
+import org.apache.cloudstack.api.ApiConstants;
+import org.apache.cloudstack.api.ApiErrorCode;
+import org.apache.cloudstack.api.BaseAsyncCmd;
+import org.apache.cloudstack.api.Parameter;
+import org.apache.cloudstack.api.ServerApiException;
+import org.apache.cloudstack.api.response.SuccessResponse;
+import org.apache.cloudstack.api.response.UserVmResponse;
+import org.apache.cloudstack.context.CallContext;
+import org.apache.log4j.Logger;
+
+import com.cloud.event.EventTypes;
+import com.cloud.exception.ConcurrentOperationException;
+import com.cloud.exception.InvalidParameterValueException;
+import com.cloud.exception.ResourceUnavailableException;
+import com.cloud.user.Account;
+import com.cloud.uservm.UserVm;
+import com.cloud.utils.exception.CloudRuntimeException;
+
+@APICommand(name = "expungeVirtualMachine", description="Expunge a virtual machine. Once expunged, it cannot be recovered.", responseObject=SuccessResponse.class)
+public class ExpungeVMCmd extends BaseAsyncCmd {
+ public static final Logger s_logger = Logger.getLogger(ExpungeVMCmd.class.getName());
+
+ private static final String s_name = "expungevirtualmachineresponse";
+
+ /////////////////////////////////////////////////////
+ //////////////// API parameters /////////////////////
+ /////////////////////////////////////////////////////
+
+ @Parameter(name=ApiConstants.ID, type=CommandType.UUID, entityType=UserVmResponse.class,
+ required=true, description="The ID of the virtual machine")
+ private Long id;
+
+ /////////////////////////////////////////////////////
+ /////////////////// Accessors ///////////////////////
+ /////////////////////////////////////////////////////
+
+ public Long getId() {
+ return id;
+ }
+
+ /////////////////////////////////////////////////////
+ /////////////// API Implementation///////////////////
+ /////////////////////////////////////////////////////
+
+ @Override
+ public String getCommandName() {
+ return s_name;
+ }
+
+ @Override
+ public long getEntityOwnerId() {
+ UserVm vm = _responseGenerator.findUserVmById(getId());
+ if (vm != null) {
+ return vm.getAccountId();
+ }
+
+ return Account.ACCOUNT_ID_SYSTEM; // no account info given, parent this command to SYSTEM so ERROR events are tracked
+ }
+
+ @Override
+ public String getEventType() {
+ return EventTypes.EVENT_VM_EXPUNGE;
+ }
+
+ @Override
+ public String getEventDescription() {
+ return "Expunging vm: " + getId();
+ }
+
+ @Override
+ public ApiCommandJobType getInstanceType() {
+ return ApiCommandJobType.VirtualMachine;
+ }
+
+ @Override
+ public Long getInstanceId() {
+ return getId();
+ }
+
+ @Override
+ public void execute() throws ResourceUnavailableException, ConcurrentOperationException{
+ CallContext.current().setEventDetails("Vm Id: "+getId());
+ try {
+ UserVm result = _userVmService.expungeVm(this);
+
+ if (result != null) {
+ SuccessResponse response = new SuccessResponse(getCommandName());
+ this.setResponseObject(response);
+ } else {
+ throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to expunge vm");
+ }
+ } catch (InvalidParameterValueException ipve) {
+ throw new ServerApiException(ApiErrorCode.PARAM_ERROR, ipve.getMessage());
+ } catch (CloudRuntimeException cre) {
+ throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, cre.getMessage());
+ }
+ }
+}
diff --git a/api/src/org/apache/cloudstack/api/command/user/network/UpdateNetworkCmd.java b/api/src/org/apache/cloudstack/api/command/user/network/UpdateNetworkCmd.java
index e0cd7133e11..a44b76828fb 100644
--- a/api/src/org/apache/cloudstack/api/command/user/network/UpdateNetworkCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/network/UpdateNetworkCmd.java
@@ -153,9 +153,7 @@ public class UpdateNetworkCmd extends BaseAsyncCmd {
@Override
public String getEventDescription() {
-
-
- StringBuffer eventMsg = new StringBuffer("Updating network: " + getId());
+ StringBuilder eventMsg = new StringBuilder("Updating network: " + getId());
if (getNetworkOfferingId() != null) {
Network network = _networkService.getNetwork(getId());
if (network == null) {
diff --git a/api/src/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java b/api/src/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java
index ca16cdc7efe..60eb438050f 100644
--- a/api/src/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java
@@ -16,6 +16,11 @@
// under the License.
package org.apache.cloudstack.api.command.user.offering;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseListCmd;
@@ -24,9 +29,10 @@ import org.apache.cloudstack.api.response.DomainResponse;
import org.apache.cloudstack.api.response.ListResponse;
import org.apache.cloudstack.api.response.ServiceOfferingResponse;
import org.apache.cloudstack.api.response.UserVmResponse;
-
import org.apache.log4j.Logger;
+import com.cloud.exception.InvalidParameterValueException;
+
@APICommand(name = "listServiceOfferings", description="Lists all available service offerings.", responseObject=ServiceOfferingResponse.class)
public class ListServiceOfferingsCmd extends BaseListCmd {
public static final Logger s_logger = Logger.getLogger(ListServiceOfferingsCmd.class.getName());
@@ -98,7 +104,6 @@ public class ListServiceOfferingsCmd extends BaseListCmd {
@Override
public void execute(){
-
ListResponse response = _queryService.searchForServiceOfferings(this);
response.setResponseName(getCommandName());
this.setResponseObject(response);
diff --git a/api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java b/api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java
new file mode 100644
index 00000000000..6e790e1c170
--- /dev/null
+++ b/api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.cloudstack.api.command.user.snapshot;
+
+import org.apache.cloudstack.api.APICommand;
+import org.apache.cloudstack.api.ApiCommandJobType;
+import org.apache.cloudstack.api.ApiConstants;
+import org.apache.cloudstack.api.ApiErrorCode;
+import org.apache.cloudstack.api.BaseAsyncCmd;
+import org.apache.cloudstack.api.BaseCmd;
+import org.apache.cloudstack.api.Parameter;
+import org.apache.cloudstack.api.ServerApiException;
+import org.apache.cloudstack.api.response.SnapshotResponse;
+import org.apache.cloudstack.api.response.SuccessResponse;
+import org.apache.cloudstack.context.CallContext;
+
+import com.cloud.event.EventTypes;
+import com.cloud.storage.Snapshot;
+import com.cloud.user.Account;
+
+@APICommand(name = "revertSnapshot", description = "Revert a volume snapshot.", responseObject = SnapshotResponse.class)
+public class RevertSnapshotCmd extends BaseAsyncCmd {
+ private static final String s_name = "revertsnapshotresponse";
+ @Parameter(name=ApiConstants.ID, type=BaseCmd.CommandType.UUID, entityType = SnapshotResponse.class,
+ required=true, description="The ID of the snapshot")
+ private Long id;
+
+ public Long getId() {
+ return id;
+ }
+
+
+ @Override
+ public String getCommandName() {
+ return s_name;
+ }
+
+ @Override
+ public long getEntityOwnerId() {
+ Snapshot snapshot = _entityMgr.findById(Snapshot.class, getId());
+ if (snapshot != null) {
+ return snapshot.getAccountId();
+ }
+
+ return Account.ACCOUNT_ID_SYSTEM; // no account info given, parent this command to SYSTEM so ERROR events are tracked
+ }
+
+ @Override
+ public String getEventType() {
+ return EventTypes.EVENT_SNAPSHOT_REVERT;
+ }
+
+ @Override
+ public String getEventDescription() {
+ return "revert snapshot: " + getId();
+ }
+
+ @Override
+ public ApiCommandJobType getInstanceType() {
+ return ApiCommandJobType.Snapshot;
+ }
+
+ @Override
+ public Long getInstanceId() {
+ return getId();
+ }
+
+ @Override
+ public void execute(){
+ CallContext.current().setEventDetails("Snapshot Id: "+getId());
+ boolean result = _snapshotService.revertSnapshot(getId());
+ if (result) {
+ SuccessResponse response = new SuccessResponse(getCommandName());
+ response.setResponseName(getCommandName());
+ this.setResponseObject(response);
+ } else {
+ throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to revert snapshot");
+ }
+ }
+}
diff --git a/api/src/org/apache/cloudstack/api/command/user/ssh/CreateSSHKeyPairCmd.java b/api/src/org/apache/cloudstack/api/command/user/ssh/CreateSSHKeyPairCmd.java
index 6f1a081da12..e36bd73263b 100644
--- a/api/src/org/apache/cloudstack/api/command/user/ssh/CreateSSHKeyPairCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/ssh/CreateSSHKeyPairCmd.java
@@ -20,16 +20,16 @@ import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseCmd;
import org.apache.cloudstack.api.Parameter;
+import org.apache.cloudstack.api.response.CreateSSHKeyPairResponse;
import org.apache.cloudstack.api.response.DomainResponse;
import org.apache.cloudstack.api.response.ProjectResponse;
-import org.apache.cloudstack.api.response.SSHKeyPairResponse;
import org.apache.cloudstack.context.CallContext;
import org.apache.log4j.Logger;
import com.cloud.user.SSHKeyPair;
-@APICommand(name = "createSSHKeyPair", description="Create a new keypair and returns the private key", responseObject=SSHKeyPairResponse.class)
+@APICommand(name = "createSSHKeyPair", description="Create a new keypair and returns the private key", responseObject=CreateSSHKeyPairResponse.class)
public class CreateSSHKeyPairCmd extends BaseCmd {
public static final Logger s_logger = Logger.getLogger(CreateSSHKeyPairCmd.class.getName());
private static final String s_name = "createsshkeypairresponse";
@@ -91,7 +91,7 @@ public class CreateSSHKeyPairCmd extends BaseCmd {
@Override
public void execute() {
SSHKeyPair r = _mgr.createSSHKeyPair(this);
- SSHKeyPairResponse response = new SSHKeyPairResponse(r.getName(), r.getFingerprint(), r.getPrivateKey());
+ CreateSSHKeyPairResponse response = new CreateSSHKeyPairResponse(r.getName(), r.getFingerprint(), r.getPrivateKey());
response.setResponseName(getCommandName());
response.setObjectName("keypair");
this.setResponseObject(response);
diff --git a/api/src/org/apache/cloudstack/api/command/user/tag/CreateTagsCmd.java b/api/src/org/apache/cloudstack/api/command/user/tag/CreateTagsCmd.java
index a01bac39a4b..84226d7cc99 100644
--- a/api/src/org/apache/cloudstack/api/command/user/tag/CreateTagsCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/tag/CreateTagsCmd.java
@@ -34,7 +34,7 @@ import org.apache.log4j.Logger;
import com.cloud.event.EventTypes;
import com.cloud.server.ResourceTag;
-import com.cloud.server.ResourceTag.TaggedResourceType;
+import com.cloud.server.ResourceTag.ResourceObjectType;
@APICommand(name = "createTags", description = "Creates resource tag(s)", responseObject = SuccessResponse.class, since = "4.0.0")
public class CreateTagsCmd extends BaseAsyncCmd{
public static final Logger s_logger = Logger.getLogger(CreateTagsCmd.class.getName());
@@ -64,7 +64,7 @@ public class CreateTagsCmd extends BaseAsyncCmd{
/////////////////////////////////////////////////////
- public TaggedResourceType getResourceType(){
+ public ResourceObjectType getResourceType(){
return _taggedResourceService.getResourceType(resourceType);
}
diff --git a/api/src/org/apache/cloudstack/api/command/user/tag/DeleteTagsCmd.java b/api/src/org/apache/cloudstack/api/command/user/tag/DeleteTagsCmd.java
index a6ba0da82b7..5ce2e3795d1 100644
--- a/api/src/org/apache/cloudstack/api/command/user/tag/DeleteTagsCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/tag/DeleteTagsCmd.java
@@ -33,7 +33,7 @@ import org.apache.cloudstack.api.response.SuccessResponse;
import org.apache.log4j.Logger;
import com.cloud.event.EventTypes;
-import com.cloud.server.ResourceTag.TaggedResourceType;
+import com.cloud.server.ResourceTag.ResourceObjectType;
@APICommand(name = "deleteTags", description = "Deleting resource tag(s)", responseObject = SuccessResponse.class, since = "4.0.0")
public class DeleteTagsCmd extends BaseAsyncCmd{
public static final Logger s_logger = Logger.getLogger(DeleteTagsCmd.class.getName());
@@ -59,7 +59,7 @@ public class DeleteTagsCmd extends BaseAsyncCmd{
/////////////////////////////////////////////////////
- public TaggedResourceType getResourceType(){
+ public ResourceObjectType getResourceType(){
return _taggedResourceService.getResourceType(resourceType);
}
diff --git a/api/src/org/apache/cloudstack/api/command/user/vm/DestroyVMCmd.java b/api/src/org/apache/cloudstack/api/command/user/vm/DestroyVMCmd.java
index 06959c15e75..b3e8d1f83fe 100644
--- a/api/src/org/apache/cloudstack/api/command/user/vm/DestroyVMCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/vm/DestroyVMCmd.java
@@ -16,6 +16,8 @@
// under the License.
package org.apache.cloudstack.api.command.user.vm;
+import java.util.List;
+
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiCommandJobType;
import org.apache.cloudstack.api.ApiConstants;
@@ -25,13 +27,11 @@ import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.ServerApiException;
import org.apache.cloudstack.api.response.UserVmResponse;
import org.apache.cloudstack.context.CallContext;
-
import org.apache.log4j.Logger;
import com.cloud.event.EventTypes;
import com.cloud.exception.ConcurrentOperationException;
import com.cloud.exception.ResourceUnavailableException;
-import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.user.Account;
import com.cloud.uservm.UserVm;
@@ -48,7 +48,12 @@ public class DestroyVMCmd extends BaseAsyncCmd {
@Parameter(name=ApiConstants.ID, type=CommandType.UUID, entityType=UserVmResponse.class,
required=true, description="The ID of the virtual machine")
private Long id;
-
+
+
+ @Parameter(name=ApiConstants.EXPUNGE, type=CommandType.BOOLEAN,
+ description="If true is passed, the VM is expunged immediately. False by default. This parameter can be passed only by a ROOT/Domain admin.", since="4.2.1")
+ private Boolean expunge;
+
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
@@ -56,6 +61,13 @@ public class DestroyVMCmd extends BaseAsyncCmd {
public Long getId() {
return id;
}
+
+ public boolean getExpunge() {
+ if (expunge == null) {
+ return false;
+ }
+ return expunge;
+ }
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
@@ -97,11 +109,14 @@ public class DestroyVMCmd extends BaseAsyncCmd {
@Override
public void execute() throws ResourceUnavailableException, ConcurrentOperationException{
CallContext.current().setEventDetails("Vm Id: "+getId());
- UserVm result;
- result = _userVmService.destroyVm(this);
+ UserVm result = _userVmService.destroyVm(this);
+ UserVmResponse response = new UserVmResponse();
if (result != null) {
- UserVmResponse response = _responseGenerator.createUserVmResponse("virtualmachine", result).get(0);
+ List responses = _responseGenerator.createUserVmResponse("virtualmachine", result);
+ if (responses != null && !responses.isEmpty()) {
+ response = responses.get(0);
+ }
response.setResponseName("virtualmachine");
this.setResponseObject(response);
} else {
diff --git a/api/src/org/apache/cloudstack/api/command/user/volume/AddResourceDetailCmd.java b/api/src/org/apache/cloudstack/api/command/user/volume/AddResourceDetailCmd.java
index a3b92478d1e..1384b58b2d0 100644
--- a/api/src/org/apache/cloudstack/api/command/user/volume/AddResourceDetailCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/volume/AddResourceDetailCmd.java
@@ -71,7 +71,7 @@ public class AddResourceDetailCmd extends BaseAsyncCmd {
return detailsMap;
}
- public ResourceTag.TaggedResourceType getResourceType() {
+ public ResourceTag.ResourceObjectType getResourceType() {
return _taggedResourceService.getResourceType(resourceType);
}
diff --git a/api/src/org/apache/cloudstack/api/command/user/volume/ListResourceDetailsCmd.java b/api/src/org/apache/cloudstack/api/command/user/volume/ListResourceDetailsCmd.java
index c02d4b4c6ef..1e522b2d53b 100644
--- a/api/src/org/apache/cloudstack/api/command/user/volume/ListResourceDetailsCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/volume/ListResourceDetailsCmd.java
@@ -17,7 +17,8 @@
package org.apache.cloudstack.api.command.user.volume;
-import com.cloud.server.ResourceTag;
+import java.util.List;
+
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseListProjectAndAccountResourcesCmd;
@@ -25,40 +26,27 @@ import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.response.ListResponse;
import org.apache.cloudstack.api.response.ResourceDetailResponse;
import org.apache.cloudstack.api.response.ResourceTagResponse;
+import org.apache.cloudstack.context.CallContext;
-import java.util.List;
+import com.cloud.server.ResourceTag;
@APICommand(name = "listResourceDetails", description = "List resource detail(s)", responseObject = ResourceTagResponse.class, since = "4.2")
public class ListResourceDetailsCmd extends BaseListProjectAndAccountResourcesCmd{
private static final String s_name = "listresourcedetailsresponse";
- @Parameter(name=ApiConstants.RESOURCE_TYPE, type=CommandType.STRING, description="list by resource type")
+ @Parameter(name=ApiConstants.RESOURCE_TYPE, type=CommandType.STRING, description="list by resource type", required=true)
private String resourceType;
- @Parameter(name=ApiConstants.RESOURCE_ID, type=CommandType.STRING, description="list by resource id")
+ @Parameter(name=ApiConstants.RESOURCE_ID, type=CommandType.STRING, description="list by resource id", required=true)
private String resourceId;
@Parameter(name=ApiConstants.KEY, type=CommandType.STRING, description="list by key")
private String key;
-
- /////////////////////////////////////////////////////
- /////////////////// Accessors ///////////////////////
- /////////////////////////////////////////////////////
-
- @Override
- public void execute() {
-
- ListResponse response = new ListResponse();
- List resourceDetailResponse = _queryService.listResource(this);
- response.setResponses(resourceDetailResponse);
- response.setResponseName(getCommandName());
- this.setResponseObject(response);
- }
-
- public ResourceTag.TaggedResourceType getResourceType() {
- return _taggedResourceService.getResourceType(resourceType);
- }
-
+
+ @Parameter(name=ApiConstants.FOR_DISPLAY, type=CommandType.BOOLEAN, description="if set to true, only details marked with display=true are returned." +
+ " Always false if the call is made by a regular user", since="4.3")
+ private Boolean forDisplay;
+
public String getResourceId() {
return resourceId;
}
@@ -71,5 +59,33 @@ public class ListResourceDetailsCmd extends BaseListProjectAndAccountResourcesCm
public String getCommandName() {
return s_name;
}
+
+ public Boolean forDisplay() {
+ if (!_accountService.isAdmin(CallContext.current().getCallingAccount().getType())) {
+ return true;
+ }
+
+ return forDisplay;
+ }
+
+ /////////////////////////////////////////////////////
+ /////////////////// Accessors ///////////////////////
+ /////////////////////////////////////////////////////
+
+ @Override
+ public void execute() {
+
+        ListResponse<ResourceDetailResponse> response = new ListResponse<ResourceDetailResponse>();
+        List<ResourceDetailResponse> resourceDetailResponse = _queryService.listResourceDetails(this);
+ response.setResponses(resourceDetailResponse);
+ response.setResponseName(getCommandName());
+ this.setResponseObject(response);
+ }
+
+ public ResourceTag.ResourceObjectType getResourceType() {
+ return _taggedResourceService.getResourceType(resourceType);
+ }
+
+
}
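The forDisplay() accessor above forces non-admin callers to see only details flagged display=true, while admins may pass forDisplay themselves. A minimal standalone sketch of that gating (hypothetical DetailVisibilitySketch class, not CloudStack code; the filter semantics are an assumption based on the accessor):

```java
import java.util.ArrayList;
import java.util.List;

public class DetailVisibilitySketch {
    static class Detail {
        final String key;
        final boolean display;

        Detail(String key, boolean display) {
            this.key = key;
            this.display = display;
        }
    }

    // Non-admin callers are always restricted to display=true details;
    // admins may pass forDisplay=null (or false) and see everything.
    static List<Detail> filter(List<Detail> all, boolean callerIsAdmin, Boolean forDisplay) {
        Boolean effective = callerIsAdmin ? forDisplay : Boolean.TRUE;
        List<Detail> out = new ArrayList<Detail>();
        for (Detail d : all) {
            if (!Boolean.TRUE.equals(effective) || d.display) {
                out.add(d);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Detail> all = new ArrayList<Detail>();
        all.add(new Detail("hidden", false));
        all.add(new Detail("visible", true));
        // Regular user: only the display=true detail survives.
        System.out.println(filter(all, false, null).size()); // prints 1
        // Admin with no filter: both details returned.
        System.out.println(filter(all, true, null).size()); // prints 2
    }
}
```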
diff --git a/api/src/org/apache/cloudstack/api/command/user/volume/RemoveResourceDetailCmd.java b/api/src/org/apache/cloudstack/api/command/user/volume/RemoveResourceDetailCmd.java
index 8be70f348d0..5f2e131f514 100644
--- a/api/src/org/apache/cloudstack/api/command/user/volume/RemoveResourceDetailCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/volume/RemoveResourceDetailCmd.java
@@ -16,32 +16,21 @@
// under the License.
package org.apache.cloudstack.api.command.user.volume;
-import com.cloud.server.ResourceTag;
-
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiCommandJobType;
import org.apache.cloudstack.api.ApiConstants;
-import org.apache.cloudstack.api.ApiErrorCode;
import org.apache.cloudstack.api.BaseAsyncCmd;
import org.apache.cloudstack.api.Parameter;
-import org.apache.cloudstack.api.ServerApiException;
import org.apache.cloudstack.api.response.SuccessResponse;
-import org.apache.cloudstack.api.response.UserVmResponse;
-import org.apache.cloudstack.api.response.VolumeResponse;
-import org.apache.cloudstack.context.CallContext;
-
import org.apache.log4j.Logger;
import com.cloud.event.EventTypes;
-import com.cloud.storage.Volume;
-import com.cloud.user.Account;
-
-import java.util.*;
+import com.cloud.server.ResourceTag;
@APICommand(name = "removeResourceDetail", description="Removes detail for the Resource.", responseObject=SuccessResponse.class)
public class RemoveResourceDetailCmd extends BaseAsyncCmd {
public static final Logger s_logger = Logger.getLogger(RemoveResourceDetailCmd.class.getName());
- private static final String s_name = "RemoveResourceDetailresponse";
+ private static final String s_name = "removeresourcedetailresponse";
/////////////////////////////////////////////////////
//////////////// API parameters /////////////////////
@@ -62,7 +51,7 @@ public class RemoveResourceDetailCmd extends BaseAsyncCmd {
/////////////////////////////////////////////////////
- public ResourceTag.TaggedResourceType getResourceType(){
+ public ResourceTag.ResourceObjectType getResourceType(){
return _taggedResourceService.getResourceType(resourceType);
}
diff --git a/api/src/org/apache/cloudstack/api/command/user/volume/UpdateVolumeCmd.java b/api/src/org/apache/cloudstack/api/command/user/volume/UpdateVolumeCmd.java
index ad7c9920ad4..d4e3a6c1643 100644
--- a/api/src/org/apache/cloudstack/api/command/user/volume/UpdateVolumeCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/volume/UpdateVolumeCmd.java
@@ -23,32 +23,39 @@ import org.apache.cloudstack.api.ApiErrorCode;
import org.apache.cloudstack.api.BaseAsyncCmd;
import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.ServerApiException;
-import org.apache.cloudstack.api.response.UserVmResponse;
+import org.apache.cloudstack.api.response.StoragePoolResponse;
import org.apache.cloudstack.api.response.VolumeResponse;
import org.apache.cloudstack.context.CallContext;
-
import org.apache.log4j.Logger;
import com.cloud.event.EventTypes;
+import com.cloud.exception.InvalidParameterValueException;
import com.cloud.storage.Volume;
-import com.cloud.user.Account;
@APICommand(name = "updateVolume", description="Updates the volume.", responseObject=VolumeResponse.class)
public class UpdateVolumeCmd extends BaseAsyncCmd {
public static final Logger s_logger = Logger.getLogger(UpdateVolumeCmd.class.getName());
- private static final String s_name = "addVolumeresponse";
+ private static final String s_name = "updatevolumeresponse";
/////////////////////////////////////////////////////
//////////////// API parameters /////////////////////
/////////////////////////////////////////////////////
- @Parameter(name=ApiConstants.ID, type=CommandType.UUID, entityType=VolumeResponse.class,
- required=true, description="the ID of the disk volume")
+ @Parameter(name=ApiConstants.ID, type=CommandType.UUID, entityType=VolumeResponse.class, description="the ID of the disk volume")
private Long id;
- @Parameter(name=ApiConstants.PATH, type=CommandType.STRING,
- required=true, description="the path of the volume")
+ @Parameter(name=ApiConstants.PATH, type=CommandType.STRING, description="The path of the volume")
private String path;
+
+ @Parameter(name=ApiConstants.STORAGE_ID, type=CommandType.UUID, entityType=StoragePoolResponse.class,
+ description="Destination storage pool UUID for the volume", since="4.3")
+ private Long storageId;
+
+ @Parameter(name=ApiConstants.STATE, type=CommandType.STRING, description="The state of the volume", since="4.3")
+ private String state;
+
+    @Parameter(name=ApiConstants.DISPLAY_VOLUME, type=CommandType.BOOLEAN, description="an optional field, whether to display the volume to the end user or not.")
+ private Boolean displayVolume;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
@@ -61,8 +68,20 @@ public class UpdateVolumeCmd extends BaseAsyncCmd {
public Long getId() {
return id;
}
+
+ public Long getStorageId() {
+ return storageId;
+ }
- /////////////////////////////////////////////////////
+ public String getState() {
+ return state;
+ }
+
+ public Boolean getDisplayVolume() {
+ return displayVolume;
+ }
+
+/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
@@ -83,25 +102,37 @@ public class UpdateVolumeCmd extends BaseAsyncCmd {
public long getEntityOwnerId() {
Volume volume = _responseGenerator.findVolumeById(getId());
if (volume == null) {
- return Account.ACCOUNT_ID_SYSTEM; // bad id given, parent this command to SYSTEM so ERROR events are tracked
+ throw new InvalidParameterValueException("Invalid volume id was provided");
}
return volume.getAccountId();
}
@Override
public String getEventType() {
- return EventTypes.EVENT_VOLUME_ATTACH;
+ return EventTypes.EVENT_VOLUME_UPDATE;
}
@Override
public String getEventDescription() {
- return "adding detail to the volume: " + getId();
+ StringBuilder desc = new StringBuilder("Updating volume: ");
+ desc.append(getId()).append(" with");
+ if (getPath() != null) {
+ desc.append(" path " + getPath());
+ }
+ if (getStorageId() != null) {
+ desc.append(", storage id " + getStorageId());
+ }
+
+ if (getState() != null) {
+ desc.append(", state " + getState());
+ }
+ return desc.toString();
}
@Override
public void execute(){
CallContext.current().setEventDetails("Volume Id: "+getId());
- Volume result = _volumeService.updateVolume(this);
+ Volume result = _volumeService.updateVolume(getId(), getPath(), getState(), getStorageId(), getDisplayVolume());
if (result != null) {
VolumeResponse response = _responseGenerator.createVolumeResponse(result);
response.setResponseName(getCommandName());
diff --git a/api/src/org/apache/cloudstack/api/command/user/vpn/CreateRemoteAccessVpnCmd.java b/api/src/org/apache/cloudstack/api/command/user/vpn/CreateRemoteAccessVpnCmd.java
index ff681a9d1e6..523101d67fc 100644
--- a/api/src/org/apache/cloudstack/api/command/user/vpn/CreateRemoteAccessVpnCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/vpn/CreateRemoteAccessVpnCmd.java
@@ -126,25 +126,10 @@ public class CreateRemoteAccessVpnCmd extends BaseAsyncCreateCmd {
return EventTypes.EVENT_REMOTE_ACCESS_VPN_CREATE;
}
- public long getNetworkId() {
- IpAddress ip = _entityMgr.findById(IpAddress.class, getPublicIpId());
- Long ntwkId = null;
-
- if (ip.getAssociatedWithNetworkId() != null) {
- ntwkId = ip.getAssociatedWithNetworkId();
- }
-
- if (ntwkId == null) {
- throw new InvalidParameterValueException("Unable to create remote access vpn for the ipAddress id=" + getPublicIpId() +
- " as ip is not associated with any network and no networkId is passed in");
- }
- return ntwkId;
- }
-
@Override
public void create() {
try {
- RemoteAccessVpn vpn = _ravService.createRemoteAccessVpn(publicIpId, ipRange, getOpenFirewall(), getNetworkId());
+ RemoteAccessVpn vpn = _ravService.createRemoteAccessVpn(publicIpId, ipRange, getOpenFirewall());
if (vpn != null) {
this.setEntityId(vpn.getServerAddressId());
// find uuid for server ip address
diff --git a/api/src/org/apache/cloudstack/api/command/user/zone/ListZonesByCmd.java b/api/src/org/apache/cloudstack/api/command/user/zone/ListZonesByCmd.java
index 4cf3b58a0a8..2a98cfbe928 100644
--- a/api/src/org/apache/cloudstack/api/command/user/zone/ListZonesByCmd.java
+++ b/api/src/org/apache/cloudstack/api/command/user/zone/ListZonesByCmd.java
@@ -16,21 +16,21 @@
// under the License.
package org.apache.cloudstack.api.command.user.zone;
-import java.util.ArrayList;
-import java.util.List;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseListCmd;
import org.apache.cloudstack.api.Parameter;
-import org.apache.cloudstack.api.BaseCmd.CommandType;
import org.apache.cloudstack.api.response.DomainResponse;
import org.apache.cloudstack.api.response.ListResponse;
-import org.apache.cloudstack.api.response.ServiceOfferingResponse;
import org.apache.cloudstack.api.response.ZoneResponse;
import org.apache.log4j.Logger;
-import com.cloud.dc.DataCenter;
+import com.cloud.exception.InvalidParameterValueException;
@APICommand(name = "listZones", description="Lists zones", responseObject=ZoneResponse.class)
public class ListZonesByCmd extends BaseListCmd {
@@ -62,6 +62,9 @@ public class ListZonesByCmd extends BaseListCmd {
@Parameter(name=ApiConstants.SHOW_CAPACITIES, type=CommandType.BOOLEAN, description="flag to display the capacity of the zones")
private Boolean showCapacities;
+
+ @Parameter(name = ApiConstants.TAGS, type = CommandType.MAP, description = "List zones by resource tags (key/value pairs)", since="4.3")
+ private Map tags;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
@@ -90,6 +93,25 @@ public class ListZonesByCmd extends BaseListCmd {
public Boolean getShowCapacities() {
return showCapacities;
}
+
+    public Map<String, String> getTags() {
+        Map<String, String> tagsMap = null;
+        if (tags != null && !tags.isEmpty()) {
+            tagsMap = new HashMap<String, String>();
+            Collection<?> servicesCollection = tags.values();
+            Iterator<?> iter = servicesCollection.iterator();
+            while (iter.hasNext()) {
+                HashMap<String, String> services = (HashMap<String, String>)iter.next();
+                String key = services.get("key");
+                String value = services.get("value");
+                if (value == null) {
+                    throw new InvalidParameterValueException("No value is passed in for key " + key);
+                }
+                tagsMap.put(key, value);
+            }
+        }
+        return tagsMap;
+    }
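The getTags() pattern above unpacks a MAP-type API parameter, which the command layer delivers as a map of index keys to key/value sub-maps (e.g. tags[0].key=region&tags[0].value=canada arrives as {"0": {"key": "region", "value": "canada"}}). A hypothetical standalone sketch of that flattening (TagParamFlattener is illustrative, not CloudStack code):

```java
import java.util.HashMap;
import java.util.Map;

public class TagParamFlattener {
    // Collapses {"0": {"key": "region", "value": "canada"}} into {"region": "canada"}.
    // A missing value is rejected, mirroring the InvalidParameterValueException
    // thrown by getTags() above.
    public static Map<String, String> flatten(Map<String, Map<String, String>> raw) {
        Map<String, String> out = new HashMap<String, String>();
        for (Map<String, String> entry : raw.values()) {
            String key = entry.get("key");
            String value = entry.get("value");
            if (value == null) {
                throw new IllegalArgumentException("No value is passed in for key " + key);
            }
            out.put(key, value);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> raw = new HashMap<String, Map<String, String>>();
        Map<String, String> pair = new HashMap<String, String>();
        pair.put("key", "region");
        pair.put("value", "canada");
        raw.put("0", pair);
        System.out.println(flatten(raw)); // prints {region=canada}
    }
}
```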
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
diff --git a/api/src/org/apache/cloudstack/api/response/CreateSSHKeyPairResponse.java b/api/src/org/apache/cloudstack/api/response/CreateSSHKeyPairResponse.java
new file mode 100644
index 00000000000..e247fb4dcbc
--- /dev/null
+++ b/api/src/org/apache/cloudstack/api/response/CreateSSHKeyPairResponse.java
@@ -0,0 +1,41 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+package org.apache.cloudstack.api.response;
+
+import com.cloud.serializer.Param;
+import com.google.gson.annotations.SerializedName;
+
+public class CreateSSHKeyPairResponse extends SSHKeyPairResponse {
+
+ @SerializedName("privatekey") @Param(description="Private key")
+ private String privateKey;
+
+ public CreateSSHKeyPairResponse() {}
+
+ public CreateSSHKeyPairResponse(String name, String fingerprint, String privateKey) {
+ super(name, fingerprint);
+ this.privateKey = privateKey;
+ }
+
+ public String getPrivateKey() {
+ return privateKey;
+ }
+
+ public void setPrivateKey(String privateKey) {
+ this.privateKey = privateKey;
+ }
+}
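The new CreateSSHKeyPairResponse subclass moves the private key out of the shared SSHKeyPairResponse, so list calls can never serialize it and it is returned exactly once, at creation time. A hypothetical mirror of that split (illustrative class names, not the CloudStack types):

```java
// Base response: only public key metadata, safe for list APIs.
class KeyPairInfo {
    private final String name;
    private final String fingerprint;

    KeyPairInfo(String name, String fingerprint) {
        this.name = name;
        this.fingerprint = fingerprint;
    }

    String getName() { return name; }
    String getFingerprint() { return fingerprint; }
}

// Create-time response: adds the private key, returned only on creation.
class CreatedKeyPairInfo extends KeyPairInfo {
    private final String privateKey;

    CreatedKeyPairInfo(String name, String fingerprint, String privateKey) {
        super(name, fingerprint);
        this.privateKey = privateKey;
    }

    String getPrivateKey() { return privateKey; }
}

public class KeyPairSplitSketch {
    public static void main(String[] args) {
        CreatedKeyPairInfo created = new CreatedKeyPairInfo("test", "ab:cd", "-----BEGIN RSA PRIVATE KEY-----");
        // A list API hands back the base type; the private key is not reachable there.
        KeyPairInfo listed = created;
        System.out.println(listed.getName() + " " + listed.getFingerprint());
    }
}
```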
diff --git a/api/src/org/apache/cloudstack/api/response/ResourceDetailResponse.java b/api/src/org/apache/cloudstack/api/response/ResourceDetailResponse.java
index 0e917d71904..989a126a1ae 100644
--- a/api/src/org/apache/cloudstack/api/response/ResourceDetailResponse.java
+++ b/api/src/org/apache/cloudstack/api/response/ResourceDetailResponse.java
@@ -16,14 +16,8 @@
// under the License.
package org.apache.cloudstack.api.response;
-import java.util.Date;
-import java.util.HashSet;
-import java.util.LinkedHashSet;
-import java.util.Set;
-
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseResponse;
-import org.apache.cloudstack.api.EntityReference;
import com.cloud.serializer.Param;
import com.google.gson.annotations.SerializedName;
@@ -47,6 +41,11 @@ public class ResourceDetailResponse extends BaseResponse{
@Param(description = "value of the resource detail")
private String value;
+
+ @SerializedName(ApiConstants.FOR_DISPLAY)
+    @Param(description = "true if the detail is returned to the regular user", since = "4.3")
+ private boolean forDisplay;
+
public String getResourceId() {
return resourceId;
}
@@ -78,4 +77,8 @@ public class ResourceDetailResponse extends BaseResponse{
public void setValue(String value) {
this.value = value;
}
+
+ public void setForDisplay(boolean forDisplay) {
+ this.forDisplay = forDisplay;
+ }
}
diff --git a/api/src/org/apache/cloudstack/api/response/SSHKeyPairResponse.java b/api/src/org/apache/cloudstack/api/response/SSHKeyPairResponse.java
index 2791853d4a2..e102bab0394 100644
--- a/api/src/org/apache/cloudstack/api/response/SSHKeyPairResponse.java
+++ b/api/src/org/apache/cloudstack/api/response/SSHKeyPairResponse.java
@@ -30,19 +30,11 @@ public class SSHKeyPairResponse extends BaseResponse {
@SerializedName("fingerprint") @Param(description="Fingerprint of the public key")
private String fingerprint;
- @SerializedName("privatekey") @Param(description="Private key")
- private String privateKey;
-
public SSHKeyPairResponse() {}
public SSHKeyPairResponse(String name, String fingerprint) {
- this(name, fingerprint, null);
- }
-
- public SSHKeyPairResponse(String name, String fingerprint, String privateKey) {
this.name = name;
this.fingerprint = fingerprint;
- this.privateKey = privateKey;
}
public String getName() {
@@ -61,12 +53,4 @@ public class SSHKeyPairResponse extends BaseResponse {
this.fingerprint = fingerprint;
}
- public String getPrivateKey() {
- return privateKey;
- }
-
- public void setPrivateKey(String privateKey) {
- this.privateKey = privateKey;
- }
-
}
diff --git a/api/src/org/apache/cloudstack/api/response/ServiceOfferingResponse.java b/api/src/org/apache/cloudstack/api/response/ServiceOfferingResponse.java
index 5c5b369ec25..e305ee95e70 100644
--- a/api/src/org/apache/cloudstack/api/response/ServiceOfferingResponse.java
+++ b/api/src/org/apache/cloudstack/api/response/ServiceOfferingResponse.java
@@ -17,19 +17,15 @@
package org.apache.cloudstack.api.response;
import java.util.Date;
-
-
-import com.google.gson.annotations.SerializedName;
-
import java.util.Map;
-
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseResponse;
import org.apache.cloudstack.api.EntityReference;
import com.cloud.offering.ServiceOffering;
import com.cloud.serializer.Param;
+import com.google.gson.annotations.SerializedName;
@EntityReference(value = ServiceOffering.class)
public class ServiceOfferingResponse extends BaseResponse {
@@ -108,6 +104,10 @@ public class ServiceOfferingResponse extends BaseResponse {
@SerializedName(ApiConstants.SERVICE_OFFERING_DETAILS)
@Param(description = "additional key/value details tied with this service offering", since = "4.2.0")
private Map details;
+
+
+ public ServiceOfferingResponse(){
+ }
public String getId() {
return id;
@@ -287,4 +287,5 @@ public class ServiceOfferingResponse extends BaseResponse {
public void setDetails(Map details) {
this.details = details;
}
+
}
diff --git a/api/src/org/apache/cloudstack/api/response/SnapshotResponse.java b/api/src/org/apache/cloudstack/api/response/SnapshotResponse.java
index e9cb109bf31..7c2b4a99770 100644
--- a/api/src/org/apache/cloudstack/api/response/SnapshotResponse.java
+++ b/api/src/org/apache/cloudstack/api/response/SnapshotResponse.java
@@ -26,18 +26,8 @@ import org.apache.cloudstack.api.EntityReference;
import com.cloud.serializer.Param;
import com.cloud.storage.Snapshot;
import com.google.gson.annotations.SerializedName;
-import com.cloud.serializer.Param;
-import com.cloud.storage.Snapshot;
-import com.google.gson.annotations.SerializedName;
-import org.apache.cloudstack.api.ApiConstants;
-import org.apache.cloudstack.api.BaseResponse;
-import org.apache.cloudstack.api.EntityReference;
-
-import java.util.Date;
-import java.util.List;
@EntityReference(value=Snapshot.class)
-@SuppressWarnings("unused")
public class SnapshotResponse extends BaseResponse implements ControlledEntityResponse {
@SerializedName(ApiConstants.ID)
@Param(description = "ID of the snapshot")
@@ -100,6 +90,9 @@ public class SnapshotResponse extends BaseResponse implements ControlledEntityRe
@SerializedName(ApiConstants.TAGS) @Param(description="the list of resource tags associated with snapshot", responseObject = ResourceTagResponse.class)
private List tags;
+ @SerializedName(ApiConstants.REVERTABLE)
+ @Param(description="indicates whether the underlying storage supports reverting the volume to this snapshot")
+ private boolean revertable;
@Override
public String getObjectId() {
@@ -118,6 +111,7 @@ public class SnapshotResponse extends BaseResponse implements ControlledEntityRe
return accountName;
}
+ @Override
public void setAccountName(String accountName) {
this.accountName = accountName;
}
@@ -131,6 +125,7 @@ public class SnapshotResponse extends BaseResponse implements ControlledEntityRe
this.domainId = domainId;
}
+ @Override
public void setDomainName(String domainName) {
this.domainName = domainName;
}
@@ -180,8 +175,16 @@ public class SnapshotResponse extends BaseResponse implements ControlledEntityRe
public void setZoneId(String zoneId) {
this.zoneId = zoneId;
}
-
+
public void setTags(List tags) {
this.tags = tags;
}
+
+ public boolean isRevertable() {
+ return revertable;
+ }
+
+ public void setRevertable(boolean revertable) {
+ this.revertable = revertable;
+ }
}
diff --git a/api/src/org/apache/cloudstack/api/response/UserVmResponse.java b/api/src/org/apache/cloudstack/api/response/UserVmResponse.java
index d9bb2a976ee..9a7f91c70f6 100644
--- a/api/src/org/apache/cloudstack/api/response/UserVmResponse.java
+++ b/api/src/org/apache/cloudstack/api/response/UserVmResponse.java
@@ -18,6 +18,7 @@ package org.apache.cloudstack.api.response;
import java.util.Date;
import java.util.LinkedHashSet;
+import java.util.Map;
import java.util.Set;
import org.apache.cloudstack.affinity.AffinityGroupResponse;
@@ -177,6 +178,9 @@ public class UserVmResponse extends BaseResponse implements ControlledEntityResp
@SerializedName(ApiConstants.TAGS) @Param(description="the list of resource tags associated with vm", responseObject = ResourceTagResponse.class)
private Set tags;
+
+ @SerializedName(ApiConstants.DETAILS) @Param(description="Template details in key/value pairs.", since="4.2.1")
+ private Map details;
@SerializedName(ApiConstants.SSH_KEYPAIR) @Param(description="ssh key-pair")
private String keyPairName;
@@ -653,5 +657,8 @@ public class UserVmResponse extends BaseResponse implements ControlledEntityResp
public void setServiceState(String state) {
this.serviceState = state;
}
-
+
+ public void setDetails(Map details) {
+ this.details = details;
+ }
}
diff --git a/api/src/org/apache/cloudstack/api/response/VolumeResponse.java b/api/src/org/apache/cloudstack/api/response/VolumeResponse.java
index 338fcaae5a4..56c007f2d69 100644
--- a/api/src/org/apache/cloudstack/api/response/VolumeResponse.java
+++ b/api/src/org/apache/cloudstack/api/response/VolumeResponse.java
@@ -17,7 +17,6 @@
package org.apache.cloudstack.api.response;
import java.util.Date;
-import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;
@@ -178,12 +177,30 @@ public class VolumeResponse extends BaseResponse implements ControlledViewEntity
@Param(description="the status of the volume")
private String status;
- @SerializedName(ApiConstants.TAGS) @Param(description="the list of resource tags associated with volume", responseObject = ResourceTagResponse.class)
+ @SerializedName(ApiConstants.TAGS)
+ @Param(description="the list of resource tags associated with volume", responseObject = ResourceTagResponse.class)
private Set tags;
- @SerializedName(ApiConstants.DISPLAY_VOLUME) @Param(description="an optional field whether to the display the volume to the end user or not.")
+ @SerializedName(ApiConstants.DISPLAY_VOLUME)
+    @Param(description="an optional field whether to display the volume to the end user or not.")
private Boolean displayVm;
+ @SerializedName(ApiConstants.PATH)
+ @Param(description="The path of the volume")
+ private String path;
+
+ @SerializedName(ApiConstants.STORAGE_ID)
+ @Param(description = "id of the primary storage hosting the disk volume; returned to admin user only", since="4.3")
+ private String storagePoolId;
+
+ public String getPath() {
+ return path;
+ }
+
+ public void setPath(String path) {
+ this.path = path;
+ }
+
public VolumeResponse(){
tags = new LinkedHashSet();
}
@@ -388,4 +405,7 @@ public class VolumeResponse extends BaseResponse implements ControlledViewEntity
this.displayVm = displayVm;
}
+ public void setStoragePoolId(String storagePoolId) {
+ this.storagePoolId = storagePoolId;
+ }
}
diff --git a/api/src/org/apache/cloudstack/api/response/ZoneResponse.java b/api/src/org/apache/cloudstack/api/response/ZoneResponse.java
index 2ebb15a1ecf..2f93e9159ce 100644
--- a/api/src/org/apache/cloudstack/api/response/ZoneResponse.java
+++ b/api/src/org/apache/cloudstack/api/response/ZoneResponse.java
@@ -16,7 +16,10 @@
// under the License.
package org.apache.cloudstack.api.response;
+import java.util.LinkedHashSet;
import java.util.List;
+import java.util.Map;
+import java.util.Set;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseResponse;
@@ -98,6 +101,19 @@ public class ZoneResponse extends BaseResponse {
@SerializedName(ApiConstants.LOCAL_STORAGE_ENABLED) @Param(description="true if local storage offering enabled, false otherwise")
private boolean localStorageEnabled;
+
+ @SerializedName(ApiConstants.TAGS) @Param(description="the list of resource tags associated with zone.",
+ responseObject = ResourceTagResponse.class, since="4.3")
+ private Set tags;
+
+ @SerializedName(ApiConstants.RESOURCE_DETAILS)
+ @Param(description = "Meta data associated with the zone (key/value pairs)", since = "4.3.0")
+ private Map resourceDetails;
+
+
+ public ZoneResponse(){
+ tags = new LinkedHashSet();
+ }
public void setId(String id) {
this.id = id;
@@ -198,4 +214,12 @@ public class ZoneResponse extends BaseResponse {
public void setIp6Dns2(String ip6Dns2) {
this.ip6Dns2 = ip6Dns2;
}
+
+ public void addTag(ResourceTagResponse tag){
+ this.tags.add(tag);
+ }
+
+ public void setResourceDetails(Map details) {
+ this.resourceDetails = details;
+ }
}
diff --git a/api/src/org/apache/cloudstack/context/CallContext.java b/api/src/org/apache/cloudstack/context/CallContext.java
index a62a3da72c4..5439aee7062 100644
--- a/api/src/org/apache/cloudstack/context/CallContext.java
+++ b/api/src/org/apache/cloudstack/context/CallContext.java
@@ -18,8 +18,10 @@ package org.apache.cloudstack.context;
import java.util.HashMap;
import java.util.Map;
+import java.util.Stack;
import java.util.UUID;
+import org.apache.cloudstack.managed.threadlocal.ManagedThreadLocal;
import org.apache.log4j.Logger;
import org.apache.log4j.NDC;
@@ -37,18 +39,27 @@ import com.cloud.utils.exception.CloudRuntimeException;
*/
public class CallContext {
private static final Logger s_logger = Logger.getLogger(CallContext.class);
-    private static ThreadLocal<CallContext> s_currentContext = new ThreadLocal<CallContext>();
+    private static ManagedThreadLocal<CallContext> s_currentContext = new ManagedThreadLocal<CallContext>();
+    private static ManagedThreadLocal<Stack<CallContext>> s_currentContextStack =
+            new ManagedThreadLocal<Stack<CallContext>>() {
+                @Override
+                protected Stack<CallContext> initialValue() {
+                    return new Stack<CallContext>();
+                }
+            };
private String contextId;
private Account account;
+ private long accountId;
private long startEventId = 0;
private String eventDescription;
private String eventDetails;
private String eventType;
private User user;
+ private long userId;
private final Maporg.apache.axis2axis2
@@ -72,7 +73,6 @@
log4jlog4j
- ${cs.log4j.version}org.apache.cloudstack
@@ -97,22 +97,19 @@
org.apache.ws.commons.axiomaxiom-impl
-
+
com.google.code.gsongson
- ${cs.gson.version}commons-codeccommons-codec
- ${cs.codec.version}javax.servletservlet-api
- ${cs.servlet.version}provided
@@ -123,7 +120,6 @@
org.jasyptjasypt
- ${cs.jasypt.version}com.caringo.client
@@ -137,15 +133,15 @@
mar
- bouncycastle
- bcprov-jdk14
+ bouncycastle
+ bcprov-jdk14
- org.apache.xalan
- xalan
+ org.apache.xalan
+ xalan
- org.opensaml
+ org.opensamlopensaml
@@ -157,126 +153,127 @@
mar
- bouncycastle
- bcprov-jdk14
+ bouncycastle
+ bcprov-jdk14
- org.apache.xalan
- xalan
+ org.apache.xalan
+ xalan
- org.opensaml
+ org.opensamlopensaml
- org.apache.rampart
- rampart-core
- ${cs.rampart.version}
- runtime
+ org.apache.rampart
+ rampart-core
+ ${cs.rampart.version}
+ runtime
- org.apache.xalan
- xalan
+ org.apache.xalan
+ xalan
- org.opensaml
+ org.opensamlopensaml
- org.apache.rampart
- rampart-policy
- ${cs.rampart.version}
- runtime
+ org.apache.rampart
+ rampart-policy
+ ${cs.rampart.version}
+ runtime
- org.apache.xalan
- xalan
+ org.apache.xalan
+ xalan
- org.opensaml
+ org.opensamlopensaml
- org.apache.rampart
- rampart-trust
- ${cs.rampart.version}
- runtime
+ org.apache.rampart
+ rampart-trust
+ ${cs.rampart.version}
+ runtime
- org.apache.xalan
- xalan
+ org.apache.xalan
+ xalan
- org.opensaml
+ org.opensamlopensaml
- org.slf4j
- slf4j-jdk14
- 1.6.1
- runtime
+ org.slf4j
+ slf4j-jdk14
+ 1.6.1
+ runtime
- org.slf4j
- slf4j-api
- 1.6.1
- runtime
+ org.slf4j
+ slf4j-api
+ 1.6.1
+ runtime
- org.apache.ws.security
- wss4j
- 1.6.1
- runtime
+ org.apache.ws.security
+ wss4j
+ 1.6.1
+ runtime
- joda-time
- joda-time
- 1.5.2
- runtime
+ joda-time
+ joda-time
+ 1.5.2
+ runtime
- org.opensaml
- xmltooling
- 1.3.1
- runtime
+ org.opensaml
+ xmltooling
+ 1.3.1
+ runtime
- org.opensaml
- openws
- 1.4.1
- runtime
+ org.opensaml
+ openws
+ 1.4.1
+ runtime
- velocity
- velocity
- 1.5
- runtime
+ velocity
+ velocity
+ 1.5
+ runtime
- org.opensaml
- opensaml
- 2.5.1-1
- runtime
+ org.opensaml
+ opensaml
+ 2.5.1-1
+ runtime
- org.apache.santuario
- xmlsec
- 1.4.2
- runtime
+ org.apache.santuario
+ xmlsec
+ 1.4.2
+ runtime
- org.bouncycastle
- bcprov-jdk16
- 1.45
- runtime
+ org.bouncycastle
+ bcprov-jdk16
+
+ 1.45
+ runtimemysql
@@ -302,7 +299,7 @@
org.apache.cloudstackcloud-framework-db${project.version}
-
+
@@ -313,7 +310,7 @@
- ../utils/conf/
+ ../utils/conf/${basedir}/resource/AmazonEC2
@@ -331,7 +328,7 @@
-
+ org.apache.maven.pluginsmaven-war-plugin2.3
@@ -358,7 +355,6 @@
maven-antrun-plugin
- 1.7generate-resource
@@ -368,52 +364,49 @@
-
+
-
+
-
+
-
-
+
+
-
-
-
- org.apache.axis2
- axis2-aar-maven-plugin
- 1.6.2
- true
-
- false
- cloud-ec2
- ${project.build.directory}/WEB-INF/services
-
-
- resource/AmazonEC2
- META-INF
-
- services.xml
-
-
-
-
-
-
-
- aar
-
-
-
-
+
+
+
+ org.apache.axis2
+ axis2-aar-maven-plugin
+ 1.6.2
+ true
+
+ false
+ cloud-ec2
+ ${project.build.directory}/WEB-INF/services
+
+
+ resource/AmazonEC2
+ META-INF
+
+ services.xml
+
+
+
+
+
+
+
+ aar
+
+
+
+
@@ -426,19 +419,15 @@
-
- org.apache.maven.plugins
-
-
- maven-antrun-plugin
-
+ org.apache.maven.plugins
+ maven-antrun-plugin [1.7,)run
-
+
@@ -451,7 +440,7 @@
-
+
@@ -483,14 +472,10 @@
-
-
-
+
+
+
@@ -555,27 +540,27 @@
-
- org.codehaus.mojo
- exec-maven-plugin
- 1.2.1
-
-
- clean
-
- exec
-
-
- rm
-
- -rf
- ${basedir}/wsdl/
- ${basedir}/resources/AmazonEC2.wsdl
-
-
-
-
-
+
+ org.codehaus.mojo
+ exec-maven-plugin
+ 1.2.1
+
+
+ clean
+
+ exec
+
+
+ rm
+
+ -rf
+ ${basedir}/wsdl/
+ ${basedir}/resources/AmazonEC2.wsdl
+
+
+
+
+
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/BucketPolicyDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/BucketPolicyDaoImpl.java
index dd354a39ffb..00486cbaceb 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/BucketPolicyDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/BucketPolicyDaoImpl.java
@@ -26,7 +26,7 @@ import com.cloud.bridge.model.BucketPolicyVO;
import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
-import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={BucketPolicyDao.class})
@@ -42,7 +42,7 @@ public class BucketPolicyDaoImpl extends GenericDaoBase im
public BucketPolicyVO getByName( String bucketName ) {
SearchBuilder searchByBucket = createSearchBuilder();
searchByBucket.and("BucketName", searchByBucket.entity().getBucketName(), SearchCriteria.Op.EQ);
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria sc = searchByBucket.create();
@@ -59,7 +59,7 @@ public class BucketPolicyDaoImpl extends GenericDaoBase im
public void deletePolicy( String bucketName ) {
SearchBuilder deleteByBucket = createSearchBuilder();
deleteByBucket.and("BucketName", deleteByBucket.entity().getBucketName(), SearchCriteria.Op.EQ);
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria sc = deleteByBucket.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/CloudStackAccountDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/CloudStackAccountDaoImpl.java
index 8fbc7c8e3af..75a693e4955 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/CloudStackAccountDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/CloudStackAccountDaoImpl.java
@@ -25,6 +25,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={CloudStackAccountDao.class})
@@ -34,7 +35,7 @@ public class CloudStackAccountDaoImpl extends GenericDaoBase
         SearchBuilder<CloudStackAccountVO> SearchByUUID = createSearchBuilder();
- Transaction txn = Transaction.open(Transaction.CLOUD_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.CLOUD_DB);
try {
txn.start();
SearchByUUID.and("uuid", SearchByUUID.entity().getUuid(),
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/CloudStackConfigurationDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/CloudStackConfigurationDaoImpl.java
index bc77ea1d886..644dcdcef37 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/CloudStackConfigurationDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/CloudStackConfigurationDaoImpl.java
@@ -27,6 +27,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={CloudStackConfigurationDao.class})
@@ -42,7 +43,7 @@ public class CloudStackConfigurationDaoImpl extends GenericDaoBase sc = NameSearch.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/CloudStackSvcOfferingDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/CloudStackSvcOfferingDaoImpl.java
index 8021eb618e9..cb8d129f528 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/CloudStackSvcOfferingDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/CloudStackSvcOfferingDaoImpl.java
@@ -29,6 +29,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={CloudStackSvcOfferingDao.class})
@@ -42,7 +43,7 @@ public class CloudStackSvcOfferingDaoImpl extends GenericDaoBase searchByName = createSearchBuilder();
searchByName.and("name", searchByName.entity().getName(), SearchCriteria.Op.EQ);
searchByName.done();
- Transaction txn = Transaction.open(Transaction.CLOUD_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.CLOUD_DB);
try {
txn.start();
SearchCriteria<CloudStackServiceOfferingVO> sc = searchByName.create();
@@ -61,7 +62,7 @@ public class CloudStackSvcOfferingDaoImpl extends GenericDaoBase searchByID = createSearchBuilder();
searchByID.and("uuid", searchByID.entity().getUuid(), SearchCriteria.Op.EQ);
searchByID.done();
- Transaction txn = Transaction.open(Transaction.CLOUD_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.CLOUD_DB);
try {
txn.start();
SearchCriteria<CloudStackServiceOfferingVO> sc = searchByID.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/CloudStackUserDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/CloudStackUserDaoImpl.java
index f7e1da65dc6..7fe1dabee4d 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/CloudStackUserDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/CloudStackUserDaoImpl.java
@@ -26,6 +26,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
import com.cloud.utils.crypt.DBEncryptionUtil;
@Component
@@ -43,7 +44,7 @@ public class CloudStackUserDaoImpl extends GenericDaoBase searchByAccessKey = createSearchBuilder();
searchByAccessKey.and("apiKey", searchByAccessKey.entity().getApiKey(), SearchCriteria.Op.EQ);
searchByAccessKey.done();
- Transaction txn = Transaction.open(Transaction.CLOUD_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.CLOUD_DB);
try {
txn.start();
SearchCriteria<CloudStackUserVO> sc = searchByAccessKey.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/MHostDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/MHostDaoImpl.java
index 222325498b9..b52fcaf221b 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/MHostDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/MHostDaoImpl.java
@@ -25,6 +25,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={MHostDao.class})
@@ -38,7 +39,7 @@ public class MHostDaoImpl extends GenericDaoBase implements MHost
@Override
public MHostVO getByHostKey(String hostKey) {
NameSearch.and("MHostKey", NameSearch.entity().getHostKey(), SearchCriteria.Op.EQ);
- Transaction txn = Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
+ TransactionLegacy txn = TransactionLegacy.open("cloudbridge", TransactionLegacy.AWSAPI_DB, true);
try {
txn.start();
SearchCriteria<MHostVO> sc = NameSearch.create();
@@ -52,7 +53,7 @@ public class MHostDaoImpl extends GenericDaoBase implements MHost
@Override
public void updateHeartBeat(MHostVO mhost) {
- Transaction txn = Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
+ TransactionLegacy txn = TransactionLegacy.open("cloudbridge", TransactionLegacy.AWSAPI_DB, true);
try {
txn.start();
update(mhost.getId(), mhost);
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/MHostMountDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/MHostMountDaoImpl.java
index 8b99f487911..8a7153a9b85 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/MHostMountDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/MHostMountDaoImpl.java
@@ -25,6 +25,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={MHostMountDao.class})
@@ -37,7 +38,7 @@ public class MHostMountDaoImpl extends GenericDaoBase implem
public MHostMountVO getHostMount(long mHostId, long sHostId) {
SearchByMHostID.and("MHostID", SearchByMHostID.entity().getmHostID(), SearchCriteria.Op.EQ);
SearchByMHostID.and("SHostID", SearchByMHostID.entity().getsHostID(), SearchCriteria.Op.EQ);
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<MHostMountVO> sc = SearchByMHostID.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/MultiPartPartsDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/MultiPartPartsDaoImpl.java
index 6f314951697..f1472e675aa 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/MultiPartPartsDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/MultiPartPartsDaoImpl.java
@@ -28,6 +28,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={MultiPartPartsDao.class})
@@ -42,7 +43,7 @@ public class MultiPartPartsDaoImpl extends GenericDaoBase sc = ByUploadID.create();
@@ -61,7 +62,7 @@ public class MultiPartPartsDaoImpl extends GenericDaoBase byUploadID = createSearchBuilder();
byUploadID.and("UploadID", byUploadID.entity().getUploadid(), SearchCriteria.Op.EQ);
byUploadID.and("partNumber", byUploadID.entity().getPartNumber(), SearchCriteria.Op.GT);
- Transaction txn = Transaction.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
+ TransactionLegacy txn = TransactionLegacy.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
try {
txn.start();
SearchCriteria<MultiPartPartsVO> sc = byUploadID.create();
@@ -82,7 +83,7 @@ public class MultiPartPartsDaoImpl extends GenericDaoBase byUploadID = createSearchBuilder();
byUploadID.and("UploadID", byUploadID.entity().getUploadid(), SearchCriteria.Op.EQ);
byUploadID.and("partNumber", byUploadID.entity().getPartNumber(), SearchCriteria.Op.EQ);
- Transaction txn = Transaction.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
+ TransactionLegacy txn = TransactionLegacy.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
try {
txn.start();
SearchCriteria<MultiPartPartsVO> sc = byUploadID.create();
@@ -102,7 +103,7 @@ public class MultiPartPartsDaoImpl extends GenericDaoBase byUploadID = createSearchBuilder();
byUploadID.and("UploadID", byUploadID.entity().getUploadid(), SearchCriteria.Op.EQ);
byUploadID.and("partNumber", byUploadID.entity().getPartNumber(), SearchCriteria.Op.EQ);
- Transaction txn = Transaction.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
+ TransactionLegacy txn = TransactionLegacy.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
try {
txn.start();
SearchCriteria<MultiPartPartsVO> sc = byUploadID.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/MultiPartUploadsDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/MultiPartUploadsDaoImpl.java
index 0f76e80a952..41133a06e92 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/MultiPartUploadsDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/MultiPartUploadsDaoImpl.java
@@ -33,6 +33,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={MultiPartUploadsDao.class})
@@ -42,9 +43,9 @@ public class MultiPartUploadsDaoImpl extends GenericDaoBase multipartExits( int uploadId ) {
MultiPartUploadsVO uploadvo = null;
- Transaction txn = null;
+ TransactionLegacy txn = null;
try {
- txn = Transaction.open(Transaction.AWSAPI_DB);
+ txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
uploadvo = findById(new Long(uploadId));
if (null != uploadvo)
return new OrderedPair(uploadvo.getAccessKey(), uploadvo.getNameKey());
@@ -58,9 +59,9 @@ public class MultiPartUploadsDaoImpl extends GenericDaoBase sc = byBucket.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/MultipartLoadDao.java b/awsapi/src/com/cloud/bridge/persist/dao/MultipartLoadDao.java
index c1a69dc5e47..4e6ff3d1b25 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/MultipartLoadDao.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/MultipartLoadDao.java
@@ -34,6 +34,7 @@ import com.cloud.bridge.service.core.s3.S3MultipartPart;
import com.cloud.bridge.service.core.s3.S3MultipartUpload;
import com.cloud.bridge.util.OrderedPair;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
public class MultipartLoadDao {
public static final Logger logger = Logger.getLogger(MultipartLoadDao.class);
@@ -94,9 +95,9 @@ public class MultipartLoadDao {
*/
public int initiateUpload( String accessKey, String bucketName, String key, String cannedAccess, S3MetaDataEntry[] meta ) {
int uploadId = -1;
- Transaction txn = null;
+ TransactionLegacy txn = null;
try {
- txn = Transaction.open(Transaction.AWSAPI_DB);
+ txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
Date tod = new Date();
MultiPartUploadsVO uploadVO = new MultiPartUploadsVO(accessKey,
bucketName, key, cannedAccess, tod);
@@ -315,9 +316,9 @@ public class MultipartLoadDao {
private void saveMultipartMeta( int uploadId, S3MetaDataEntry[] meta ) {
if (null == meta) return;
- Transaction txn = null;
+ TransactionLegacy txn = null;
try {
- txn = Transaction.open(Transaction.AWSAPI_DB);
+ txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
for( int i=0; i < meta.length; i++ )
{
S3MetaDataEntry entry = meta[i];
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/MultipartMetaDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/MultipartMetaDaoImpl.java
index 7ab93599d22..fec0a2c1280 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/MultipartMetaDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/MultipartMetaDaoImpl.java
@@ -27,6 +27,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={MultipartMetaDao.class})
@@ -37,7 +38,7 @@ public class MultipartMetaDaoImpl extends GenericDaoBase
SearchBuilder<MultipartMetaVO> searchByUID = createSearchBuilder();
searchByUID.and("UploadID", searchByUID.entity().getUploadID(), SearchCriteria.Op.EQ);
searchByUID.done();
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<MultipartMetaVO> sc = searchByUID.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/OfferingDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/OfferingDaoImpl.java
index ea7d264f80c..963f1084134 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/OfferingDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/OfferingDaoImpl.java
@@ -29,6 +29,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={OfferingDao.class})
@@ -39,7 +40,7 @@ public class OfferingDaoImpl extends GenericDaoBase impl
@Override
public int getOfferingCount() {
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
return listAll().size();
@@ -56,7 +57,7 @@ public class OfferingDaoImpl extends GenericDaoBase impl
SearchBuilder<OfferingBundleVO> searchByAmazon = createSearchBuilder();
searchByAmazon.and("AmazonEC2Offering", searchByAmazon.entity().getAmazonOffering() , SearchCriteria.Op.EQ);
searchByAmazon.done();
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<OfferingBundleVO> sc = searchByAmazon.create();
@@ -74,7 +75,7 @@ public class OfferingDaoImpl extends GenericDaoBase impl
SearchBuilder<OfferingBundleVO> searchByAmazon = createSearchBuilder();
searchByAmazon.and("CloudStackOffering", searchByAmazon.entity().getAmazonOffering() , SearchCriteria.Op.EQ);
searchByAmazon.done();
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<OfferingBundleVO> sc = searchByAmazon.create();
@@ -93,7 +94,7 @@ public class OfferingDaoImpl extends GenericDaoBase impl
searchByAmazon.and("CloudStackOffering", searchByAmazon.entity().getAmazonOffering() , SearchCriteria.Op.EQ);
searchByAmazon.and("AmazonEC2Offering", searchByAmazon.entity().getCloudstackOffering() , SearchCriteria.Op.EQ);
searchByAmazon.done();
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
OfferingBundleVO offering = null;
try {
txn.start();
@@ -122,7 +123,7 @@ public class OfferingDaoImpl extends GenericDaoBase impl
SearchBuilder<OfferingBundleVO> searchByAmazon = createSearchBuilder();
searchByAmazon.and("AmazonEC2Offering", searchByAmazon.entity().getAmazonOffering() , SearchCriteria.Op.EQ);
searchByAmazon.done();
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<OfferingBundleVO> sc = searchByAmazon.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/SAclDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/SAclDaoImpl.java
index d88660e05c9..d4b4c90fedc 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/SAclDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/SAclDaoImpl.java
@@ -32,6 +32,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={SAclDao.class})
@@ -46,7 +47,7 @@ public class SAclDaoImpl extends GenericDaoBase implements SAclDao
SearchByTarget.and("TargetID", SearchByTarget.entity().getTargetId(), SearchCriteria.Op.EQ);
SearchByTarget.done();
Filter filter = new Filter(SAclVO.class, "grantOrder", Boolean.TRUE, null, null);
- Transaction txn = Transaction.open( Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open( TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<SAclVO> sc = SearchByTarget.create();
@@ -66,7 +67,7 @@ public class SAclDaoImpl extends GenericDaoBase implements SAclDao
SearchByAcl.and("TargetID", SearchByAcl.entity().getTargetId(), SearchCriteria.Op.EQ);
SearchByAcl.and("GranteeCanonicalID", SearchByAcl.entity().getGranteeCanonicalId(), SearchCriteria.Op.EQ);
Filter filter = new Filter(SAclVO.class, "grantOrder", Boolean.TRUE, null, null);
- Transaction txn = Transaction.open( Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open( TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<SAclVO> sc = SearchByAcl.create();
@@ -85,7 +86,7 @@ public class SAclDaoImpl extends GenericDaoBase implements SAclDao
SearchByTarget.and("Target", SearchByTarget.entity().getTarget(), SearchCriteria.Op.EQ);
SearchByTarget.and("TargetID", SearchByTarget.entity().getTargetId(), SearchCriteria.Op.EQ);
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<SAclVO> sc = SearchByTarget.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/SBucketDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/SBucketDaoImpl.java
index 817c682a946..552281d8b85 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/SBucketDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/SBucketDaoImpl.java
@@ -29,6 +29,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={SBucketDao.class})
@@ -42,7 +43,7 @@ public class SBucketDaoImpl extends GenericDaoBase implements S
SearchBuilder<SBucketVO> SearchByName = createSearchBuilder();
SearchByName.and("Name", SearchByName.entity().getName(), SearchCriteria.Op.EQ);
//Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
- Transaction txn = Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
+ TransactionLegacy txn = TransactionLegacy.open("cloudbridge", TransactionLegacy.AWSAPI_DB, true);
try {
txn.start();
SearchCriteria<SBucketVO> sc = SearchByName.create();
@@ -59,7 +60,7 @@ public class SBucketDaoImpl extends GenericDaoBase implements S
SearchBuilder<SBucketVO> ByCanonicalID = createSearchBuilder();
ByCanonicalID.and("OwnerCanonicalID", ByCanonicalID.entity().getOwnerCanonicalId(), SearchCriteria.Op.EQ);
Filter filter = new Filter(SBucketVO.class, "createTime", Boolean.TRUE, null, null);
- Transaction txn = Transaction.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
+ TransactionLegacy txn = TransactionLegacy.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
try {
txn.start();
SearchCriteria<SBucketVO> sc = ByCanonicalID.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/SHostDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/SHostDaoImpl.java
index 9b6b5359759..5d2e9b901b3 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/SHostDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/SHostDaoImpl.java
@@ -25,6 +25,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={SHostDao.class})
@@ -36,7 +37,7 @@ public class SHostDaoImpl extends GenericDaoBase implements SHost
SearchBuilder<SHostVO> HostSearch = createSearchBuilder();
HostSearch.and("Host", HostSearch.entity().getHost(), SearchCriteria.Op.EQ);
HostSearch.done();
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<SHostVO> sc = HostSearch.create();
@@ -55,7 +56,7 @@ public class SHostDaoImpl extends GenericDaoBase implements SHost
LocalStorageHostSearch.and("MHostID", LocalStorageHostSearch.entity().getMhostid(), SearchCriteria.Op.EQ);
LocalStorageHostSearch.and("ExportRoot", LocalStorageHostSearch.entity().getExportRoot(), SearchCriteria.Op.EQ);
LocalStorageHostSearch.done();
- Transaction txn = Transaction.currentTxn();
+ TransactionLegacy txn = TransactionLegacy.currentTxn();
try {
txn.start();
SearchCriteria<SHostVO> sc = LocalStorageHostSearch.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/SMetaDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/SMetaDaoImpl.java
index 8fdc9493d82..95355b92689 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/SMetaDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/SMetaDaoImpl.java
@@ -28,6 +28,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={SMetaDao.class})
@@ -41,7 +42,7 @@ public class SMetaDaoImpl extends GenericDaoBase implements SMeta
SearchByTarget.and("Target", SearchByTarget.entity().getTarget(), SearchCriteria.Op.EQ);
SearchByTarget.and("TargetID", SearchByTarget.entity().getTargetId(), SearchCriteria.Op.EQ);
SearchByTarget.done();
- Transaction txn = Transaction.open( Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open( TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<SMetaVO> sc = SearchByTarget.create();
@@ -71,7 +72,7 @@ public class SMetaDaoImpl extends GenericDaoBase implements SMeta
SearchBuilder<SMetaVO> SearchByTarget = createSearchBuilder();
SearchByTarget.and("Target", SearchByTarget.entity().getTarget(), SearchCriteria.Op.EQ);
SearchByTarget.and("TargetID", SearchByTarget.entity().getTargetId(), SearchCriteria.Op.EQ);
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<SMetaVO> sc = SearchByTarget.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/SObjectDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/SObjectDaoImpl.java
index 6d23757b8b5..e6370feca1b 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/SObjectDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/SObjectDaoImpl.java
@@ -33,6 +33,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={SObjectDao.class})
@@ -47,7 +48,7 @@ public class SObjectDaoImpl extends GenericDaoBase implements S
SearchBuilder<SObjectVO> SearchByName = createSearchBuilder();
SearchByName.and("SBucketID", SearchByName.entity().getBucketID() , SearchCriteria.Op.EQ);
SearchByName.and("NameKey", SearchByName.entity().getNameKey() , SearchCriteria.Op.EQ);
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<SObjectVO> sc = SearchByName.create();
@@ -76,7 +77,7 @@ public class SObjectDaoImpl extends GenericDaoBase implements S
SearchByBucket.and("SBucketID", SearchByBucket.entity().getBucketID(), SearchCriteria.Op.EQ);
SearchByBucket.and("DeletionMark", SearchByBucket.entity().getDeletionMark(), SearchCriteria.Op.NULL);
- Transaction txn = Transaction.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
+ TransactionLegacy txn = TransactionLegacy.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
try {
txn.start();
SearchCriteria<SObjectVO> sc = SearchByBucket.create();
@@ -100,7 +101,7 @@ public class SObjectDaoImpl extends GenericDaoBase implements S
List<SObjectVO> objects = new ArrayList<SObjectVO>();
getAllBuckets.and("SBucketID", getAllBuckets.entity().getBucketID(), SearchCriteria.Op.EQ);
- Transaction txn = Transaction.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
+ TransactionLegacy txn = TransactionLegacy.currentTxn(); // Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
try {
txn.start();
SearchCriteria<SObjectVO> sc = getAllBuckets.create();
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/SObjectItemDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/SObjectItemDaoImpl.java
index 57140c49072..294b32d4d4f 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/SObjectItemDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/SObjectItemDaoImpl.java
@@ -27,6 +27,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={SObjectItemDao.class})
@@ -39,7 +40,7 @@ public class SObjectItemDaoImpl extends GenericDaoBase impl
@Override
public SObjectItemVO getByObjectIdNullVersion(long id) {
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
SearchBuilder<SObjectItemVO> SearchByID = createSearchBuilder();
SearchByID.and("ID", SearchByID.entity().getId(), SearchCriteria.Op.EQ);
@@ -56,7 +57,7 @@ public class SObjectItemDaoImpl extends GenericDaoBase impl
@Override
public List<SObjectItemVO> getItems(long sobjectID) {
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
SearchBuilder<SObjectItemVO> SearchBySobjectID = createSearchBuilder();
SearchBySobjectID.and("SObjectID", SearchBySobjectID.entity().getId(), SearchCriteria.Op.EQ);
diff --git a/awsapi/src/com/cloud/bridge/persist/dao/UserCredentialsDaoImpl.java b/awsapi/src/com/cloud/bridge/persist/dao/UserCredentialsDaoImpl.java
index c45886f794c..b60a717a3ee 100644
--- a/awsapi/src/com/cloud/bridge/persist/dao/UserCredentialsDaoImpl.java
+++ b/awsapi/src/com/cloud/bridge/persist/dao/UserCredentialsDaoImpl.java
@@ -29,6 +29,7 @@ import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
@Local(value={UserCredentialsDao.class})
@@ -41,7 +42,7 @@ public class UserCredentialsDaoImpl extends GenericDaoBase SearchByAccessKey = createSearchBuilder();
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchByAccessKey.and("AccessKey", SearchByAccessKey.entity()
@@ -60,7 +61,7 @@ public class UserCredentialsDaoImpl extends GenericDaoBase SearchByCertID = createSearchBuilder();
SearchByCertID.and("CertUniqueId", SearchByCertID.entity().getCertUniqueId(), SearchCriteria.Op.EQ);
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
SearchCriteria<UserCredentialsVO> sc = SearchByCertID.create();
diff --git a/awsapi/src/com/cloud/bridge/service/EC2RestServlet.java b/awsapi/src/com/cloud/bridge/service/EC2RestServlet.java
index 50ac26f2901..1ef04a4aebd 100644
--- a/awsapi/src/com/cloud/bridge/service/EC2RestServlet.java
+++ b/awsapi/src/com/cloud/bridge/service/EC2RestServlet.java
@@ -161,6 +161,7 @@ import com.cloud.bridge.util.ConfigurationHelper;
import com.cloud.bridge.util.EC2RestAuth;
import com.cloud.stack.models.CloudStackAccount;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component("EC2RestServlet")
public class EC2RestServlet extends HttpServlet {
@@ -377,7 +378,7 @@ public class EC2RestServlet extends HttpServlet {
private void setUserKeys( HttpServletRequest request, HttpServletResponse response ) {
String[] accessKey = null;
String[] secretKey = null;
- Transaction txn = null;
+ TransactionLegacy txn = null;
try {
// -> all these parameters are required
accessKey = request.getParameterValues( "accesskey" );
@@ -398,7 +399,7 @@ public class EC2RestServlet extends HttpServlet {
return;
}
try {
- txn = Transaction.open(Transaction.AWSAPI_DB);
+ txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
txn.start();
// -> use the keys to see if the account actually exists
ServiceProvider.getInstance().getEC2Engine().validateAccount( accessKey[0], secretKey[0] );
@@ -434,7 +435,7 @@ public class EC2RestServlet extends HttpServlet {
*/
private void setCertificate( HttpServletRequest request, HttpServletResponse response )
throws Exception {
- Transaction txn = null;
+ TransactionLegacy txn = null;
try {
// [A] Pull the cert and cloud AccessKey from the request
String[] certificate = request.getParameterValues( "cert" );
@@ -470,7 +471,7 @@ public class EC2RestServlet extends HttpServlet {
// [C] Associate the cert's uniqueId with the Cloud API keys
String uniqueId = AuthenticationUtils.X509CertUniqueId( userCert );
logger.debug( "SetCertificate, uniqueId: " + uniqueId );
- txn = Transaction.open(Transaction.AWSAPI_DB);
+ txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
txn.start();
UserCredentialsVO user = ucDao.getByAccessKey(accessKey[0]);
user.setCertUniqueId(uniqueId);
@@ -505,7 +506,7 @@ public class EC2RestServlet extends HttpServlet {
*/
private void deleteCertificate( HttpServletRequest request, HttpServletResponse response )
throws Exception {
- Transaction txn = null;
+ TransactionLegacy txn = null;
try {
String [] accessKey = request.getParameterValues( "AWSAccessKeyId" );
if ( null == accessKey || 0 == accessKey.length ) {
@@ -527,7 +528,7 @@ public class EC2RestServlet extends HttpServlet {
/* UserCredentialsDao credentialDao = new UserCredentialsDao();
credentialDao.setCertificateId( accessKey[0], null );
- */ txn = Transaction.open(Transaction.AWSAPI_DB);
+ */ txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
UserCredentialsVO user = ucDao.getByAccessKey(accessKey[0]);
user.setCertUniqueId(null);
ucDao.update(user.getId(), user);
diff --git a/awsapi/src/com/cloud/bridge/service/S3RestServlet.java b/awsapi/src/com/cloud/bridge/service/S3RestServlet.java
index 7e69fd65087..192e1a28e51 100644
--- a/awsapi/src/com/cloud/bridge/service/S3RestServlet.java
+++ b/awsapi/src/com/cloud/bridge/service/S3RestServlet.java
@@ -67,6 +67,7 @@ import com.cloud.bridge.util.RestAuth;
import com.cloud.bridge.util.S3SoapAuth;
import com.cloud.utils.db.DB;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
public class S3RestServlet extends HttpServlet {
private static final long serialVersionUID = -6168996266762804877L;
public static final String ENABLE_S3_API="enable.s3.api";
@@ -139,7 +140,7 @@ public class S3RestServlet extends HttpServlet {
*/
private void processRequest( HttpServletRequest request, HttpServletResponse response, String method )
{
- Transaction txn = Transaction.open("cloudbridge", Transaction.AWSAPI_DB, true);
+ TransactionLegacy txn = TransactionLegacy.open("cloudbridge", TransactionLegacy.AWSAPI_DB, true);
try {
logRequest(request);
@@ -274,7 +275,7 @@ public class S3RestServlet extends HttpServlet {
// -> use the keys to see if the account actually exists
//ServiceProvider.getInstance().getEC2Engine().validateAccount( accessKey[0], secretKey[0] );
//UserCredentialsDaoImpl credentialDao = new UserCredentialsDao();
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
txn.start();
UserCredentialsVO user = new UserCredentialsVO(accessKey[0], secretKey[0]);
user = ucDao.persist(user);
diff --git a/awsapi/src/com/cloud/bridge/service/controller/s3/S3BucketAction.java b/awsapi/src/com/cloud/bridge/service/controller/s3/S3BucketAction.java
index c98de34a698..4d7c41a75b3 100644
--- a/awsapi/src/com/cloud/bridge/service/controller/s3/S3BucketAction.java
+++ b/awsapi/src/com/cloud/bridge/service/controller/s3/S3BucketAction.java
@@ -94,6 +94,7 @@ import com.cloud.bridge.util.XSerializer;
import com.cloud.bridge.util.XSerializerXmlAdapter;
import com.cloud.bridge.util.XmlHelper;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
public class S3BucketAction implements ServletAction {
@@ -371,7 +372,7 @@ public class S3BucketAction implements ServletAction {
response.setStatus(403);
return;
}
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
// [B] Place the policy into the database, overwriting an existing policy
try {
// -> first make sure that the policy is valid by parsing it
diff --git a/awsapi/src/com/cloud/bridge/service/controller/s3/ServiceProvider.java b/awsapi/src/com/cloud/bridge/service/controller/s3/ServiceProvider.java
index a0892cc979b..0854741699f 100644
--- a/awsapi/src/com/cloud/bridge/service/controller/s3/ServiceProvider.java
+++ b/awsapi/src/com/cloud/bridge/service/controller/s3/ServiceProvider.java
@@ -35,6 +35,7 @@ import javax.annotation.PostConstruct;
import javax.inject.Inject;
import org.apache.axis2.AxisFault;
+import org.apache.cloudstack.managed.context.ManagedContextTimerTask;
import org.apache.log4j.Logger;
import org.apache.log4j.xml.DOMConfigurator;
import org.springframework.stereotype.Component;
@@ -61,6 +62,7 @@ import com.cloud.bridge.util.OrderedPair;
import com.cloud.utils.component.ManagerBase;
import com.cloud.utils.db.DB;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
@Component
public class ServiceProvider extends ManagerBase {
@@ -89,7 +91,7 @@ public class ServiceProvider extends ManagerBase {
protected ServiceProvider() throws IOException {
// register service implementation object
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
txn.close();
}
@@ -182,7 +184,7 @@ public class ServiceProvider extends ManagerBase {
public UserInfo getUserInfo(String accessKey) {
UserInfo info = new UserInfo();
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
try {
txn.start();
UserCredentialsVO cloudKeys = ucDao.getByAccessKey( accessKey );
@@ -252,7 +254,7 @@ public class ServiceProvider extends ManagerBase {
multipartDir = properties.getProperty("storage.multipartDir");
- Transaction txn1 = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn1 = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
timer.schedule(getHeartbeatTask(), HEARTBEAT_INTERVAL, HEARTBEAT_INTERVAL);
txn1.close();
@@ -280,10 +282,9 @@ public class ServiceProvider extends ManagerBase {
}
private TimerTask getHeartbeatTask() {
- return new TimerTask() {
-
+ return new ManagedContextTimerTask() {
@Override
- public void run() {
+ protected void runInContext() {
try {
mhost.setLastHeartbeatTime(DateHelper.currentGMTTime());
mhostDao.updateHeartBeat(mhost);
diff --git a/awsapi/src/com/cloud/bridge/service/core/s3/S3Engine.java b/awsapi/src/com/cloud/bridge/service/core/s3/S3Engine.java
index 7beb012d4b7..05e87d788db 100644
--- a/awsapi/src/com/cloud/bridge/service/core/s3/S3Engine.java
+++ b/awsapi/src/com/cloud/bridge/service/core/s3/S3Engine.java
@@ -86,6 +86,7 @@ import com.cloud.bridge.util.StringHelper;
import com.cloud.bridge.util.Triple;
import com.cloud.utils.db.DB;
import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionLegacy;
/**
* The CRUD control actions to be invoked from S3BucketAction or S3ObjectAction.
@@ -195,7 +196,7 @@ public class S3Engine {
String cannedAccessPolicy = request.getCannedAccess();
String bucketName = request.getBucketName();
response.setBucketName( bucketName );
- Transaction txn= null;
+ TransactionLegacy txn= null;
verifyBucketName( bucketName, false );
S3PolicyContext context = new S3PolicyContext( PolicyActions.CreateBucket, bucketName );
@@ -205,7 +206,7 @@ public class S3Engine {
OrderedPair shost_storagelocation_pair = null;
boolean success = false;
try {
- txn = Transaction.open(Transaction.AWSAPI_DB);
+ txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
if (bucketDao.getByName(request.getBucketName()) != null)
throw new ObjectAlreadyExistsException("Bucket already exists");
@@ -257,10 +258,10 @@ public class S3Engine {
String bucketName = request.getBucketName();
SBucketVO sbucket = bucketDao.getByName(bucketName);
- Transaction txn = null;
+ TransactionLegacy txn = null;
if ( sbucket != null )
{
- txn = Transaction.open(Transaction.AWSAPI_DB);
+ txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
txn.start();
S3PolicyContext context = new S3PolicyContext( PolicyActions.DeleteBucket, bucketName );
switch( verifyPolicy( context ))
@@ -699,7 +700,7 @@ public class S3Engine {
if (null != version)
httpResp.addHeader("x-amz-version-id", version);
httpResp.flushBuffer();
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
// [C] Re-assemble the object from its uploaded file parts
try {
// explicit transaction control to avoid holding transaction during
@@ -752,11 +753,11 @@ public class S3Engine {
S3BucketAdapter bucketAdapter = getStorageHostBucketAdapter(host_storagelocation_pair.getFirst());
String itemFileName = object_objectitem_pair.getSecond().getStoredPath();
InputStream is = null;
- Transaction txn = null;
+ TransactionLegacy txn = null;
try {
// explicit transaction control to avoid holding transaction during file-copy process
- txn = Transaction.open(Transaction.AWSAPI_DB);
+ txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
txn.start();
is = request.getDataInputStream();
String md5Checksum = bucketAdapter.saveObject(is, host_storagelocation_pair.getSecond(), bucket.getName(), itemFileName);
@@ -813,11 +814,11 @@ public class S3Engine {
S3BucketAdapter bucketAdapter = getStorageHostBucketAdapter(host_storagelocation_pair.getFirst());
String itemFileName = object_objectitem_pair.getSecond().getStoredPath();
InputStream is = null;
- Transaction txn = null;
+ TransactionLegacy txn = null;
try {
// explicit transaction control to avoid holding transaction during file-copy process
- txn = Transaction.open(Transaction.AWSAPI_DB);
+ txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
txn.start();
is = request.getInputStream();
@@ -1505,7 +1506,7 @@ public class S3Engine {
context.setEvalParam( ConditionKeys.Acl, cannedAccessPolicy);
verifyAccess( context, "SBucket", bucket.getId(), SAcl.PERMISSION_WRITE ); // TODO - check this validates plain POSTs
- Transaction txn = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
txn.start();
// [B] If versioning is off then we overwrite a null object item
@@ -1554,7 +1555,7 @@ public class S3Engine {
}
else
{
- Transaction txn1 = Transaction.open(Transaction.AWSAPI_DB);
+ TransactionLegacy txn1 = TransactionLegacy.open(TransactionLegacy.AWSAPI_DB);
txn1.start();
// -> there is no object nor an object item
object = new SObjectVO();
diff --git a/client/WEB-INF/classes/resources/messages.properties b/client/WEB-INF/classes/resources/messages.properties
index bc1e43692a3..12d2a11a294 100644
--- a/client/WEB-INF/classes/resources/messages.properties
+++ b/client/WEB-INF/classes/resources/messages.properties
@@ -14,6 +14,12 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
+label.hypervisors=Hypervisors
+label.home=Home
+label.sockets=Sockets
+label.root.disk.size=Root disk size
+label.s3.nfs.server=S3 NFS Server
+label.s3.nfs.path=S3 NFS Path
label.delete.events=Delete events
label.delete.alerts=Delete alerts
label.archive.alerts=Archive alerts
@@ -39,7 +45,7 @@ message.acquire.ip.nic=Please confirm that you would like to acquire a new secon
message.select.affinity.groups=Please select any affinity groups you want this VM to belong to:
message.no.affinity.groups=You do not have any affinity groups. Please continue to the next step.
label.action.delete.nic=Remove NIC
-message.action.delete.nic=Please confirm that want to remove this NIC, which will also remove the associated network from the VM.
+message.action.delete.nic=Please confirm that you want to remove this NIC, which will also remove the associated network from the VM.
changed.item.properties=Changed item properties
confirm.enable.s3=Please fill in the following information to enable support for S3-backed Secondary Storage
confirm.enable.swift=Please fill in the following information to enable support for Swift
@@ -200,6 +206,8 @@ label.action.enable.user.processing=Enabling User....
label.action.enable.user=Enable User
label.action.enable.zone.processing=Enabling Zone....
label.action.enable.zone=Enable Zone
+label.action.expunge.instance=Expunge Instance
+label.action.expunge.instance.processing=Expunging Instance....
label.action.force.reconnect.processing=Reconnecting....
label.action.force.reconnect=Force Reconnect
label.action.generate.keys.processing=Generate Keys....
@@ -249,6 +257,8 @@ label.action.stop.systemvm.processing=Stopping System VM....
label.action.stop.systemvm=Stop System VM
label.action.take.snapshot.processing=Taking Snapshot....
label.action.take.snapshot=Take Snapshot
+label.action.revert.snapshot.processing=Reverting to Snapshot...
+label.action.revert.snapshot=Revert to Snapshot
label.action.unmanage.cluster.processing=Unmanaging Cluster....
label.action.unmanage.cluster=Unmanage Cluster
label.action.update.OS.preference.processing=Updating OS Preference....
@@ -315,6 +325,7 @@ label.add.template=Add Template
label.add.to.group=Add to group
label.add.user=Add User
label.add.vlan=Add VLAN
+label.add.vxlan=Add VXLAN
label.add.VM.to.tier=Add VM to tier
label.add.vm=Add VM
label.add.vms.to.lb=Add VM(s) to load balancer rule
@@ -412,6 +423,11 @@ label.cluster.type=Cluster Type
label.cluster=Cluster
label.clusters=Clusters
label.clvm=CLVM
+label.rbd=RBD
+label.rbd.monitor=Ceph monitor
+label.rbd.pool=Ceph pool
+label.rbd.id=Cephx user
+label.rbd.secret=Cephx secret
label.code=Code
label.community=Community
label.compute.and.storage=Compute and Storage
@@ -540,6 +556,7 @@ label.end.IP=End IP
label.end.port=End Port
label.end.reserved.system.IP=End Reserved system IP
label.end.vlan=End Vlan
+label.end.vxlan=End Vxlan
label.endpoint.or.operation=Endpoint or Operation
label.endpoint=Endpoint
label.enter.token=Enter token
@@ -551,6 +568,7 @@ label.ESP.lifetime=ESP Lifetime (second)
label.ESP.policy=ESP policy
label.esx.host=ESX/ESXi Host
label.example=Example
+label.expunge=Expunge
label.external.link=External link
label.f5=F5
label.failed=Failed
@@ -799,6 +817,7 @@ label.network.domain.text=Network domain
label.network.domain=Network Domain
label.network.id=Network ID
label.network.label.display.for.blank.value=Use default gateway
+label.network.limits=Network limits
label.network.name=Network Name
label.network.offering.display.text=Network Offering Display Text
label.network.offering.id=Network Offering ID
@@ -1026,12 +1045,14 @@ label.source.nat=Source NAT
label.source=Source
label.specify.IP.ranges=Specify IP ranges
label.specify.vlan=Specify VLAN
+label.specify.vxlan=Specify VXLAN
label.SR.name = SR Name-Label
label.srx=SRX
label.start.IP=Start IP
label.start.port=Start Port
label.start.reserved.system.IP=Start Reserved system IP
label.start.vlan=Start Vlan
+label.start.vxlan=Start Vxlan
label.state=State
label.static.nat.enabled=Static NAT Enabled
label.static.nat.to=Static NAT to
@@ -1158,6 +1179,9 @@ label.virtual.routers=Virtual Routers
label.vlan.id=VLAN ID
label.vlan.range=VLAN Range
label.vlan=VLAN
+label.vxlan.id=VXLAN ID
+label.vxlan.range=VXLAN Range
+label.vxlan=VXLAN
label.vm.add=Add Instance
label.vm.destroy=Destroy
label.vm.display.name=VM display name
@@ -1263,6 +1287,7 @@ message.action.enable.nexusVswitch=Please confirm that you want to enable this n
message.action.enable.physical.network=Please confirm that you want to enable this physical network.
message.action.enable.pod=Please confirm that you want to enable this pod.
message.action.enable.zone=Please confirm that you want to enable this zone.
+message.action.expunge.instance=Please confirm that you want to expunge this instance.
message.action.force.reconnect=Your host has been successfully forced to reconnect. This process can take up to several minutes.
message.action.host.enable.maintenance.mode=Enabling maintenance mode will cause a live migration of all running instances on this host to any available host.
message.action.instance.reset.password=Please confirm that you want to change the ROOT password for this virtual machine.
@@ -1283,6 +1308,7 @@ message.action.stop.instance=Please confirm that you want to stop this instance.
message.action.stop.router=All services provided by this virtual router will be interrupted. Please confirm that you want to stop this router.
message.action.stop.systemvm=Please confirm that you want to stop this system VM.
message.action.take.snapshot=Please confirm that you want to take a snapshot of this volume.
+message.action.revert.snapshot=Please confirm that you want to revert the owning volume to this snapshot.
message.action.unmanage.cluster=Please confirm that you want to unmanage the cluster.
message.action.vmsnapshot.delete=Please confirm that you want to delete this VM snapshot.
message.action.vmsnapshot.revert=Revert VM snapshot
diff --git a/client/WEB-INF/classes/resources/messages_de_DE.properties b/client/WEB-INF/classes/resources/messages_de_DE.properties
index 3c0c8deaabd..2f164609d00 100644
--- a/client/WEB-INF/classes/resources/messages_de_DE.properties
+++ b/client/WEB-INF/classes/resources/messages_de_DE.properties
@@ -224,6 +224,7 @@ label.add.system.service.offering=System-Service-Angebot hinzuf\u00fcgen
label.add.template=Vorlage hinzuf\u00fcgen
label.add.user=Benutzer hinzuf\u00fcgen
label.add.vlan=VLAN hinzuf\u00fcgen
+label.add.vxlan=VXLAN hinzuf\u00fcgen
label.add.volume=Volume hinzuf\u00fcgen
label.add.zone=Zone hinzuf\u00fcgen
label.admin.accounts=Administrator-Konten
@@ -621,6 +622,9 @@ label.virtual.network=Virtuelles Netzwerk
label.vlan.id=VLAN ID
label.vlan.range=VLAN Reichweite
label.vlan=VLAN
+label.vxlan.id=VXLAN ID
+label.vxlan.range=VXLAN Reichweite
+label.vxlan=VXLAN
label.vm.add=Instanz hinzuf\u00fcgen
label.vm.destroy=Zerst\u00f6ren
label.VMFS.datastore=VMFS Datenspeicher
diff --git a/client/WEB-INF/classes/resources/messages_es.properties b/client/WEB-INF/classes/resources/messages_es.properties
index 86eb596689c..3620047a275 100644
--- a/client/WEB-INF/classes/resources/messages_es.properties
+++ b/client/WEB-INF/classes/resources/messages_es.properties
@@ -238,6 +238,7 @@ label.add.template=A\u00c3\u00b1adir plantilla
label.add.to.group=Agregar al grupo
label.add.user=Agregar usuario
label.add.vlan=A\u00c3\u00b1adir VLAN
+label.add.vxlan=A\u00c3\u00b1adir VXLAN
label.add.volume=A\u00c3\u00b1adir volumen
label.add.zone=A\u00c3\u00b1adir Zona
label.admin.accounts=Administrador de Cuentas
@@ -606,6 +607,7 @@ label.snapshot.s=Instant\u00c3\u00a1nea (s)
label.snapshots=instant\u00c3\u00a1neas
label.source.nat=NAT Fuente
label.specify.vlan=Especifique VLAN
+label.specify.vxlan=Especifique VXLAN
label.SR.name = SR Nombre de etiqueta
label.start.port=Iniciar Puerto
label.state=Estado
@@ -685,6 +687,9 @@ label.virtual.network=Red Virtual
label.vlan.id=ID de VLAN
label.vlan.range=VLAN Gama
label.vlan=VLAN
+label.vxlan.id=ID de VXLAN
+label.vxlan.range=VXLAN Gama
+label.vxlan=VXLAN
label.vm.add=A\u00c3\u00b1adir Instancia
label.vm.destroy=Destroy
label.VMFS.datastore=VMFS de datos tienda
diff --git a/client/WEB-INF/classes/resources/messages_fr_FR.properties b/client/WEB-INF/classes/resources/messages_fr_FR.properties
index 284fde89386..db624221ddf 100644
--- a/client/WEB-INF/classes/resources/messages_fr_FR.properties
+++ b/client/WEB-INF/classes/resources/messages_fr_FR.properties
@@ -300,6 +300,7 @@ label.add.template=Ajouter un mod\u00e8le
label.add.to.group=Ajouter au groupe
label.add.user=Ajouter un utilisateur
label.add.vlan=Ajouter un VLAN
+label.add.vxlan=Ajouter un VXLAN
label.add.vm=Ajouter VM
label.add.vms=Ajouter VMs
label.add.vms.to.lb=Ajouter une/des VM(s) \u00e0 la r\u00e8gle de r\u00e9partition de charge
@@ -512,6 +513,7 @@ label.endpoint=Terminaison
label.end.port=Port de fin
label.end.reserved.system.IP=Adresse IP de fin r\u00e9serv\u00e9e Syst\u00e8me
label.end.vlan=VLAN de fin
+label.end.vxlan=VXLAN de fin
label.enter.token=Entrez le jeton unique
label.error.code=Code d\\'erreur
label.error=Erreur
@@ -995,12 +997,14 @@ label.source.nat=NAT Source
label.source=Origine
label.specify.IP.ranges=Sp\u00e9cifier des plages IP
label.specify.vlan=Pr\u00e9ciser le VLAN
+label.specify.vxlan=Pr\u00e9ciser le VXLAN
label.SR.name = Nom du point de montage
label.srx=SRX
label.start.IP=Plage de d\u00e9but IP
label.start.port=Port de d\u00e9but
label.start.reserved.system.IP=Adresse IP de d\u00e9but r\u00e9serv\u00e9e Syst\u00e8me
label.start.vlan=VLAN de d\u00e9part
+label.start.vxlan=VXLAN de d\u00e9part
label.state=\u00c9tat
label.static.nat.enabled=NAT statique activ\u00e9
label.static.nat=NAT Statique
@@ -1127,6 +1131,9 @@ label.virtual.routers=Routeurs virtuels
label.vlan.id=ID du VLAN
label.vlan.range=Plage du VLAN
label.vlan=VLAN
+label.vxlan.id=ID du VXLAN
+label.vxlan.range=Plage du VXLAN
+label.vxlan=VXLAN
label.vm.add=Ajouter une instance
label.vm.destroy=D\u00e9truire
label.vm.display.name=Nom commun VM
diff --git a/client/WEB-INF/classes/resources/messages_ja.properties b/client/WEB-INF/classes/resources/messages_ja.properties
index 56fa55a3e4c..d01efe88ff1 100644
--- a/client/WEB-INF/classes/resources/messages_ja.properties
+++ b/client/WEB-INF/classes/resources/messages_ja.properties
@@ -1,4 +1,4 @@
-# Licensed to the Apache Software Foundation (ASF) under one
+a# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
@@ -313,6 +313,7 @@ label.add.template=\u30c6\u30f3\u30d7\u30ec\u30fc\u30c8\u306e\u8ffd\u52a0
label.add.to.group=\u8ffd\u52a0\u5148\u30b0\u30eb\u30fc\u30d7
label.add.user=\u30e6\u30fc\u30b6\u30fc\u306e\u8ffd\u52a0
label.add.vlan=VLAN \u306e\u8ffd\u52a0
+label.add.vxlan=VXLAN \u306e\u8ffd\u52a0
label.add.VM.to.tier=\u968e\u5c64\u3078\u306e VM \u306e\u8ffd\u52a0
label.add.vm=VM \u306e\u8ffd\u52a0
label.add.vms.to.lb=\u8ca0\u8377\u5206\u6563\u898f\u5247\u3078\u306e VM \u306e\u8ffd\u52a0
@@ -532,6 +533,7 @@ label.end.IP=\u7d42\u4e86 IP \u30a2\u30c9\u30ec\u30b9
label.end.port=\u7d42\u4e86\u30dd\u30fc\u30c8
label.end.reserved.system.IP=\u4e88\u7d04\u6e08\u307f\u7d42\u4e86\u30b7\u30b9\u30c6\u30e0 IP \u30a2\u30c9\u30ec\u30b9
label.end.vlan=\u7d42\u4e86 VLAN
+label.end.vxlan=\u7d42\u4e86 VXLAN
label.endpoint.or.operation=\u30a8\u30f3\u30c9\u30dd\u30a4\u30f3\u30c8\u307e\u305f\u306f\u64cd\u4f5c
label.endpoint=\u30a8\u30f3\u30c9\u30dd\u30a4\u30f3\u30c8
label.enter.token=\u30c8\u30fc\u30af\u30f3\u306e\u5165\u529b
@@ -1007,12 +1009,14 @@ label.source.nat=\u9001\u4fe1\u5143 NAT
label.source=\u9001\u4fe1\u5143
label.specify.IP.ranges=IP \u30a2\u30c9\u30ec\u30b9\u306e\u7bc4\u56f2\u306e\u6307\u5b9a
label.specify.vlan=VLAN \u3092\u6307\u5b9a\u3059\u308b
+label.specify.vxlan=VXLAN \u3092\u6307\u5b9a\u3059\u308b
label.SR.name = SR \u540d\u30e9\u30d9\u30eb
label.srx=SRX
label.start.IP=\u958b\u59cb IP \u30a2\u30c9\u30ec\u30b9
label.start.port=\u958b\u59cb\u30dd\u30fc\u30c8
label.start.reserved.system.IP=\u4e88\u7d04\u6e08\u307f\u958b\u59cb\u30b7\u30b9\u30c6\u30e0 IP \u30a2\u30c9\u30ec\u30b9
label.start.vlan=\u958b\u59cb VLAN
+label.start.vxlan=\u958b\u59cb VXLAN
label.state=\u72b6\u614b
label.static.nat.enabled=\u9759\u7684 NAT \u6709\u52b9
label.static.nat.to=\u9759\u7684 NAT \u306e\u8a2d\u5b9a\u5148:
@@ -1139,6 +1143,9 @@ label.virtual.routers=\u4eee\u60f3\u30eb\u30fc\u30bf\u30fc
label.vlan.id=VLAN ID
label.vlan.range=VLAN \u306e\u7bc4\u56f2
label.vlan=VLAN
+label.vxlan.id=VXLAN ID
+label.vxlan.range=VXLAN \u306e\u7bc4\u56f2
+label.vxlan=VXLAN
label.vm.add=\u30a4\u30f3\u30b9\u30bf\u30f3\u30b9\u306e\u8ffd\u52a0
label.vm.destroy=\u7834\u68c4
label.vm.display.name=VM \u8868\u793a\u540d
diff --git a/client/WEB-INF/classes/resources/messages_ko_KR.properties b/client/WEB-INF/classes/resources/messages_ko_KR.properties
index 7f3d5ebae75..b755072d613 100644
--- a/client/WEB-INF/classes/resources/messages_ko_KR.properties
+++ b/client/WEB-INF/classes/resources/messages_ko_KR.properties
@@ -289,6 +289,7 @@ label.add.to.group=\uadf8\ub8f9\uc5d0 \ucd94\uac00
label.add=\ucd94\uac00
label.add.user=\uc0ac\uc6a9\uc790 \ucd94\uac00
label.add.vlan=VLAN \ucd94\uac00
+label.add.vxlan=VXLAN \ucd94\uac00
label.add.vms.to.lb=\ub124\ud2b8\uc6cc\ud06c \ub85c\ub4dc \uacf5\uc720 \uaddc\uce59\uc5d0 VM \ucd94\uac00
label.add.vms=VM \ucd94\uac00
label.add.VM.to.tier=\uacc4\uce35\uc5d0 VM \ucd94\uac00
@@ -479,6 +480,7 @@ label.endpoint.or.operation=\uc5d4\ub4dc \ud3ec\uc778\ud2b8 \ub610\ub294 \uc791\
label.end.port=\uc885\ub8cc \ud3ec\ud1a0
label.end.reserved.system.IP=\uc608\uc57d\ub41c \uc885\ub8cc \uc2dc\uc2a4\ud15c IP \uc8fc\uc18c
label.end.vlan=\uc885\ub8cc VLAN
+label.end.vxlan=\uc885\ub8cc VXLAN
label.enter.token=\ud1a0\ud070 \uc785\ub825
label.error.code=\uc624\ub958 \ucf54\ub4dc
label.error=\uc624\ub958
@@ -925,12 +927,14 @@ label.source.nat=\uc804\uc1a1\uc6d0 NAT
label.source=\uc2dc\uc791 \uc704\uce58
label.specify.IP.ranges=IP \uc8fc\uc18c \ubc94\uc704 \uc9c0\uc815
label.specify.vlan=VLAN \uc9c0\uc815
+label.specify.vxlan=VXLAN \uc9c0\uc815
label.SR.name = SR \uba85 \ub77c\ubca8
label.srx=SRX
label.start.IP=\uc2dc\uc791 IP \uc8fc\uc18c
label.start.port=\uc2dc\uc791 \ud3ec\ud1a0
label.start.reserved.system.IP=\uc608\uc57d\ub41c \uc2dc\uc791 \uc2dc\uc2a4\ud15c IP \uc8fc\uc18c
label.start.vlan=\uc2dc\uc791 VLAN
+label.start.vxlan=\uc2dc\uc791 VXLAN
label.state=\uc0c1\ud0dc
label.static.nat.enabled=\uc815\uc801 NAT \uc720\ud6a8
label.static.nat.to=\uc815\uc801 NAT \uc124\uc815 \uc704\uce58\:
@@ -1055,6 +1059,9 @@ label.virtual.router=\uac00\uc0c1 \ub77c\uc6b0\ud130
label.vlan.id=VLAN ID
label.vlan.range=VLAN \ubc94\uc704
label.vlan=\uac00\uc0c1 \ub124\ud2b8\uc6cc\ud06c(VLAN)
+label.vxlan.id=VXLAN ID
+label.vxlan.range=VXLAN \ubc94\uc704
+label.vxlan=VXLAN
label.vm.add=\uc778\uc2a4\ud134\uc2a4 \ucd94\uac00
label.vm.destroy=\ud30c\uae30
label.vm.display.name=VM \ud45c\uc2dc\uba85
diff --git a/client/WEB-INF/classes/resources/messages_pt_BR.properties b/client/WEB-INF/classes/resources/messages_pt_BR.properties
index 9f7a663657f..86bb83177a8 100644
--- a/client/WEB-INF/classes/resources/messages_pt_BR.properties
+++ b/client/WEB-INF/classes/resources/messages_pt_BR.properties
@@ -288,6 +288,7 @@ label.add.template=Adicionar Template
label.add.to.group=Adicionar ao grupo
label.add.user=Adicionar Usu\u00e1rio
label.add.vlan=Adicionar VLAN
+label.add.vxlan=Adicionar VXLAN
label.add.vm=Adicionar VM
label.add.vms=Adicionar VMs
label.add.vms.to.lb=Add VM(s) na regra de balanceamento de carga
@@ -480,6 +481,7 @@ label.endpoint=Ponto de acesso
label.end.port=Porta Final
label.end.reserved.system.IP=Fim dos IPs reservados para o sistema
label.end.vlan=Vlan do fim
+label.end.vxlan=Vxlan do fim
label.enter.token=Digite o token
label.error.code=C\u00f3digo de Erro
label.error=Erro
@@ -931,12 +933,14 @@ label.source.nat=Source NAT
label.source=Origem
label.specify.IP.ranges=Especifique range de IP
label.specify.vlan=Especificar VLAN
+label.specify.vxlan=Especificar VXLAN
label.SR.name = SR Name-Label
label.srx=SRX
label.start.IP=IP do in\u00edcio
label.start.port=Porta de In\u00edcio
label.start.reserved.system.IP=In\u00edcio dos IPs reservados para o sistema
label.start.vlan=Vlan do in\u00edcio
+label.start.vxlan=Vxlan do in\u00edcio
label.state=Estado
label.static.nat.enabled=NAT est\u00e1tico Habilitado
label.static.nat=NAT Est\u00e1tico
@@ -1059,6 +1063,9 @@ label.virtual.routers=Roteadores Virtuais
label.vlan.id=VLAN ID
label.vlan.range=Intervalo de VLAN
label.vlan=VLAN
+label.vxlan.id=VXLAN ID
+label.vxlan.range=Intervalo de VXLAN
+label.vxlan=VXLAN
label.vm.add=Adicionar Cloud Server
label.vm.destroy=Apagar
label.vm.display.name=Nome de exibi\u00e7\u00e3o da VM
diff --git a/client/WEB-INF/classes/resources/messages_ru_RU.properties b/client/WEB-INF/classes/resources/messages_ru_RU.properties
index 37a36a9b022..62c791f61b9 100644
--- a/client/WEB-INF/classes/resources/messages_ru_RU.properties
+++ b/client/WEB-INF/classes/resources/messages_ru_RU.properties
@@ -283,6 +283,7 @@ label.add.to.group=\u0414\u043e\u0431\u0430\u0432\u0438\u0442\u044c \u0432 \u043
label.add=\u0414\u043e\u0431\u0430\u0432\u0438\u0442\u044c
label.add.user=\u0414\u043e\u0431\u0430\u0432\u0438\u0442\u044c \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044f
label.add.vlan=\u0414\u043e\u0431\u0430\u0432\u0438\u0442\u044c VLAN
+label.add.vxlan=\u0414\u043e\u0431\u0430\u0432\u0438\u0442\u044c VXLAN
label.add.vms.to.lb=\u0414\u043e\u0431\u0430\u0432\u0438\u0442\u044c \u0412\u041c \u0432 \u043f\u0440\u0430\u0432\u0438\u043b\u043e \u0431\u0430\u043b\u0430\u043d\u0441\u0438\u0440\u043e\u0432\u043a\u0438 \u043d\u0430\u0433\u0440\u0443\u0437\u043a\u0438
label.add.vms=\u0414\u043e\u0431\u0430\u0432\u0438\u0442\u044c \u0412\u041c
label.add.vm=\u0414\u043e\u0431\u0430\u0432\u0438\u0442\u044c \u0412\u041c
@@ -454,6 +455,7 @@ label.endpoint.or.operation=\u041a\u043e\u043d\u0435\u0447\u043d\u0430\u044f \u0
label.end.port=\u041a\u043e\u043d\u0435\u0447\u043d\u044b\u0439 \u043f\u043e\u0440\u0442
label.end.reserved.system.IP=\u041a\u043e\u043d\u0435\u0447\u043d\u044b\u0439 \u0437\u0430\u0440\u0435\u0437\u0435\u0440\u0432\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u044b\u0439 \u0441\u0438\u0441\u0442\u0435\u043c\u043d\u044b\u0439 IP-\u0430\u0434\u0440\u0435\u0441
label.end.vlan=\u041a\u043e\u043d\u0435\u0447\u043d\u044b\u0439 VLAN
+label.end.vxlan=\u041a\u043e\u043d\u0435\u0447\u043d\u044b\u0439 VXLAN
label.enter.token=\u0412\u0432\u0435\u0434\u0438\u0442\u0435 \u0442\u0430\u043b\u043e\u043d
label.error.code=\u041a\u043e\u0434 \u043e\u0448\u0438\u0431\u043a\u0438
label.error=\u041e\u0448\u0438\u0431\u043a\u0430
@@ -874,12 +876,14 @@ label.source.nat=Source NAT
label.source=\u0418\u0441\u0442\u043e\u0447\u043d\u0438\u043a
label.specify.IP.ranges=\u0423\u043a\u0430\u0436\u0438\u0442\u0435 \u0434\u0438\u0430\u043f\u0430\u0437\u043e\u043d IP-\u0430\u0434\u0440\u0435\u0441\u043e\u0432
label.specify.vlan=\u0423\u043a\u0430\u0436\u0438\u0442\u0435 VLAN
+label.specify.vxlan=\u0423\u043a\u0430\u0436\u0438\u0442\u0435 VXLAN
label.SR.name = SR Name-Label
label.srx=SRX
label.start.IP=\u041d\u0430\u0447\u0430\u043b\u044c\u043d\u044b\u0439 IP
label.start.port=\u041d\u0430\u0447\u0430\u043b\u044c\u043d\u044b\u0439 \u043f\u043e\u0440\u0442
label.start.reserved.system.IP=\u041d\u0430\u0447\u0430\u043b\u044c\u043d\u044b\u0439 \u0437\u0430\u0440\u0435\u0437\u0435\u0440\u0432\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u044b\u0439 \u0441\u0438\u0441\u0442\u0435\u043c\u043d\u044b\u0439 IP-\u0430\u0434\u0440\u0435\u0441
label.start.vlan=\u041d\u0430\u0447\u0430\u043b\u044c\u043d\u044b\u0439 VLAN
+label.start.vxlan=\u041d\u0430\u0447\u0430\u043b\u044c\u043d\u044b\u0439 VXLAN
label.state=\u0421\u043e\u0441\u0442\u043e\u044f\u043d\u0438\u0435
label.static.nat.enabled=\u0421\u0442\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u0438\u0439 NAT \u0432\u043a\u043b\u044e\u0447\u0435\u043d
label.static.nat.to=\u0421\u0442\u0430\u0442\u0438\u0447\u043d\u044b\u0439 NAT \u043a
@@ -1001,6 +1005,9 @@ label.virtual.router=\u0412\u0438\u0440\u0442\u0443\u0430\u043b\u044c\u043d\u044
label.vlan.id=ID VLAN
label.vlan.range=\u0414\u0438\u0430\u043f\u0430\u0437\u043e\u043d VLAN
label.vlan=VLAN
+label.vxlan.id=VXLAN ID
+label.vxlan.range=\u0414\u0438\u0430\u043f\u0430\u0437\u043e\u043d VXLAN
+label.vxlan=VXLAN
label.vm.add=\u0414\u043e\u0431\u0430\u0432\u0438\u0442\u044c \u043c\u0430\u0448\u0438\u043d\u044b
label.vm.destroy=\u0423\u043d\u0438\u0447\u0442\u043e\u0436\u0438\u0442\u044c
label.vm.display.name=\u041e\u0442\u043e\u0431\u0440\u0430\u0436\u0430\u0435\u043c\u043e\u0435 \u0438\u043c\u044f \u0412\u041c
diff --git a/client/WEB-INF/classes/resources/messages_zh_CN.properties b/client/WEB-INF/classes/resources/messages_zh_CN.properties
index 6ab251faaed..acb67bb51f7 100644
--- a/client/WEB-INF/classes/resources/messages_zh_CN.properties
+++ b/client/WEB-INF/classes/resources/messages_zh_CN.properties
@@ -314,6 +314,7 @@ label.add.template=\u6dfb\u52a0\u6a21\u677f
label.add.to.group=\u6dfb\u52a0\u5230\u7ec4
label.add.user=\u6dfb\u52a0\u7528\u6237
label.add.vlan=\u6dfb\u52a0 VLAN
+label.add.vxlan=\u6dfb\u52a0 VXLAN
label.add.VM.to.tier=\u5411\u5c42\u4e2d\u6dfb\u52a0 VM
label.add.vm=\u6dfb\u52a0 VM
label.add.vms.to.lb=\u5411\u8d1f\u8f7d\u5e73\u8861\u5668\u89c4\u5219\u4e2d\u6dfb\u52a0 VM
@@ -539,6 +540,7 @@ label.end.IP=\u7ed3\u675f IP
label.end.port=\u7ed3\u675f\u7aef\u53e3
label.end.reserved.system.IP=\u7ed3\u675f\u9884\u7559\u7cfb\u7edf IP
label.end.vlan=\u7ed3\u675f VLAN
+label.end.vxlan=\u7ed3\u675f VXLAN
label.endpoint.or.operation=\u7aef\u70b9\u6216\u64cd\u4f5c
label.endpoint=\u7aef\u70b9
label.enter.token=\u8f93\u5165\u4ee4\u724c
@@ -1025,12 +1027,14 @@ label.source.nat=\u6e90 NAT
label.source=\u6e90\u7b97\u6cd5
label.specify.IP.ranges=\u6307\u5b9a IP \u8303\u56f4
label.specify.vlan=\u6307\u5b9a VLAN
+label.specify.vxlan=\u6307\u5b9a VXLAN
label.SR.name = SR \u540d\u79f0\u6807\u7b7e
label.srx=SRX
label.start.IP=\u8d77\u59cb IP
label.start.port=\u8d77\u59cb\u7aef\u53e3
label.start.reserved.system.IP=\u8d77\u59cb\u9884\u7559\u7cfb\u7edf IP
label.start.vlan=\u8d77\u59cb VLAN
+label.start.vxlan=\u8d77\u59cb VXLAN
label.state=\u72b6\u6001
label.static.nat.enabled=\u5df2\u542f\u7528\u9759\u6001 NAT
label.static.nat.to=\u9759\u6001 NAT \u76ee\u6807
@@ -1157,6 +1161,9 @@ label.virtual.routers=\u865a\u62df\u8def\u7531\u5668
label.vlan.id=VLAN ID
label.vlan.range=VLAN \u8303\u56f4
label.vlan=VLAN
+label.vxlan.id=VXLAN ID
+label.vxlan.range=VXLAN \u8303\u56f4
+label.vxlan=VXLAN
label.vm.add=\u6dfb\u52a0\u5b9e\u4f8b
label.vm.destroy=\u9500\u6bc1
label.vm.display.name=VM \u663e\u793a\u540d\u79f0
diff --git a/client/WEB-INF/web.xml b/client/WEB-INF/web.xml
index e5c05d3fd20..1af38e14535 100644
--- a/client/WEB-INF/web.xml
+++ b/client/WEB-INF/web.xml
@@ -29,11 +29,11 @@
- org.springframework.web.context.ContextLoaderListener
+ org.apache.cloudstack.spring.module.web.CloudStackContextLoaderListener
-
+ contextConfigLocation
- classpath:applicationContext.xml, classpath:componentContext.xml
+ classpath:META-INF/cloudstack/webApplicationContext.xml
diff --git a/client/pom.xml b/client/pom.xml
index 99a3c3e1e92..8cbdaffe94f 100644
--- a/client/pom.xml
+++ b/client/pom.xml
@@ -7,7 +7,8 @@
the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
OF ANY KIND, either express or implied. See the License for the specific language
governing permissions and limitations under the License. -->
-4.0.0cloud-client-ui
@@ -19,6 +20,16 @@
4.3.0-SNAPSHOT
+
+ org.apache.cloudstack
+ cloud-framework-spring-module
+ ${project.version}
+
+
+ org.apache.cloudstack
+ cloud-framework-spring-lifecycle
+ ${project.version}
+ org.apache.cloudstackcloud-plugin-storage-volume-solidfire
@@ -109,6 +120,11 @@
cloud-plugin-network-internallb${project.version}
+
+ org.apache.cloudstack
+ cloud-plugin-network-vxlan
+ ${project.version}
+ org.apache.cloudstackcloud-plugin-hypervisor-xen
@@ -280,11 +296,6 @@
cloud-plugin-host-anti-affinity${project.version}
-
- org.apache.cloudstack
- cloud-console-proxy
- ${project.version}
-
@@ -353,27 +364,7 @@
maven-antrun-plugin
- 1.7
-
-
- copy-systemvm
- process-resources
-
- run
-
-
-
-
-
-
-
-
-
-
-
- generate-resourcegenerate-resources
@@ -382,95 +373,93 @@
-
-
+
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
-
+
@@ -483,77 +472,42 @@
- process-nonoss
+ process-noredistprocess-resourcesrun
-
+ test
+ replace="cloud-stack-components-specification=components-nonoss.xml" byline="true"
+ />
- process-simulator-context
+ process-noredist-spring-contextprocess-resourcesrun
-
- test
-
-
-
-
-
- process-nonoss-spring-context
- process-resources
-
- run
-
-
-
-
+
+
-
-
+
+
-
- process-quickcloud-spring-context
- process-resources
-
- run
-
-
-
- quickcloud
-
-
-
-
@@ -602,19 +556,15 @@
-
- org.apache.maven.plugins
-
-
- maven-antrun-plugin
-
+ org.apache.maven.plugins
+ maven-antrun-plugin [1.7,)run
-
+
@@ -625,6 +575,51 @@
+
+ systemvm
+
+
+ systemvm
+
+
+
+
+ org.apache.cloudstack
+ cloud-systemvm
+ ${project.version}
+ pom
+
+
+
+
+
+ maven-antrun-plugin
+ 1.7
+
+
+
+ copy-systemvm
+ process-resources
+
+ run
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ simulator
@@ -644,7 +639,7 @@
netapp
- nonoss
+ noredist
@@ -659,7 +654,7 @@
f5
- nonoss
+ noredist
@@ -674,7 +669,7 @@
netscaler
- nonoss
+ noredist
@@ -689,7 +684,7 @@
srx
- nonoss
+ noredist
@@ -704,7 +699,7 @@
vmware
- nonoss
+ noredist
@@ -725,5 +720,20 @@
+
+ quickcloud
+
+
+ quickcloud
+
+
+
+
+ org.apache.cloudstack
+ cloud-quickcloud
+ ${project.version}
+
+
+
diff --git a/client/resources/META-INF/cloudstack/webApplicationContext.xml b/client/resources/META-INF/cloudstack/webApplicationContext.xml
new file mode 100644
index 00000000000..fea2709747b
--- /dev/null
+++ b/client/resources/META-INF/cloudstack/webApplicationContext.xml
@@ -0,0 +1,32 @@
+
+
+
+
+
+
diff --git a/client/tomcatconf/commands.properties.in b/client/tomcatconf/commands.properties.in
index 9bb0ea25a55..428042a5137 100644
--- a/client/tomcatconf/commands.properties.in
+++ b/client/tomcatconf/commands.properties.in
@@ -71,6 +71,7 @@ assignVirtualMachine=7
migrateVirtualMachine=1
migrateVirtualMachineWithVolume=1
recoverVirtualMachine=7
+expungeVirtualMachine=1
#### snapshot commands
createSnapshot=15
@@ -79,7 +80,7 @@ deleteSnapshot=15
createSnapshotPolicy=15
deleteSnapshotPolicies=15
listSnapshotPolicies=15
-
+revertSnapshot=15
#### template commands
createTemplate=15
@@ -255,6 +256,7 @@ deleteImageStore=1
createSecondaryStagingStore=1
listSecondaryStagingStores=1
deleteSecondaryStagingStore=1
+prepareSecondaryStorageForMigration=1
#### host commands
addHost=3
@@ -471,7 +473,7 @@ listTags=15
#### Meta Data commands
addResourceDetail=1
removeResourceDetail=1
-listResourceDetails=1
+listResourceDetails=15
### Site-to-site VPN commands
createVpnCustomerGateway=15
@@ -493,7 +495,7 @@ listVirtualRouterElements=7
#### usage commands
generateUsageRecords=1
-listUsageRecords=1
+listUsageRecords=7
listUsageTypes=1
#### traffic monitor commands
@@ -678,6 +680,7 @@ addLdapConfiguration=3
deleteLdapConfiguration=3
listLdapUsers=3
ldapCreateAccount=3
+importLdapUsers=3
### Acl commands
createAclRole=7
@@ -695,3 +698,5 @@ removeAccountFromAclGroup=7
grantPermissionToAclGroup=7
revokePermissionFromAclGroup=7
+
+
diff --git a/client/tomcatconf/log4j-cloud.xml.in b/client/tomcatconf/log4j-cloud.xml.in
index d439b771f4f..08021f2077b 100755
--- a/client/tomcatconf/log4j-cloud.xml.in
+++ b/client/tomcatconf/log4j-cloud.xml.in
@@ -152,6 +152,14 @@ under the License.
+
+
+
+
+
+
+
+
diff --git a/client/tomcatconf/tomcat6-nonssl.conf.in b/client/tomcatconf/tomcat6-nonssl.conf.in
index 4a9a70f619e..5ce724c73b7 100644
--- a/client/tomcatconf/tomcat6-nonssl.conf.in
+++ b/client/tomcatconf/tomcat6-nonssl.conf.in
@@ -41,7 +41,7 @@ CATALINA_TMPDIR="@MSENVIRON@/temp"
# Use JAVA_OPTS to set java.library.path for libtcnative.so
#JAVA_OPTS="-Djava.library.path=/usr/lib64"
-JAVA_OPTS="-Djava.awt.headless=true -Dcom.sun.management.jmxremote.port=45219 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=@MSLOGDIR@ -XX:PermSize=512M -XX:MaxPermSize=800m"
+JAVA_OPTS="-Djava.awt.headless=true -Dcom.sun.management.jmxremote=false -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=@MSLOGDIR@ -XX:PermSize=512M -XX:MaxPermSize=800m"
# What user should run tomcat
TOMCAT_USER="@MSUSER@"
diff --git a/client/tomcatconf/tomcat6-ssl.conf.in b/client/tomcatconf/tomcat6-ssl.conf.in
index 0d2650871b6..c967a98be98 100644
--- a/client/tomcatconf/tomcat6-ssl.conf.in
+++ b/client/tomcatconf/tomcat6-ssl.conf.in
@@ -40,7 +40,7 @@ CATALINA_TMPDIR="@MSENVIRON@/temp"
# Use JAVA_OPTS to set java.library.path for libtcnative.so
#JAVA_OPTS="-Djava.library.path=/usr/lib64"
-JAVA_OPTS="-Djava.awt.headless=true -Djavax.net.ssl.trustStore=/etc/cloudstack/management/cloudmanagementserver.keystore -Djavax.net.ssl.trustStorePassword=vmops.com -Dcom.sun.management.jmxremote.port=45219 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=@MSLOGDIR@ -XX:MaxPermSize=800m -XX:PermSize=512M"
+JAVA_OPTS="-Djava.awt.headless=true -Dcom.sun.management.jmxremote=false -Djavax.net.ssl.trustStore=/etc/cloudstack/management/cloudmanagementserver.keystore -Djavax.net.ssl.trustStorePassword=vmops.com -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=@MSLOGDIR@ -XX:MaxPermSize=800m -XX:PermSize=512M"
# What user should run tomcat
TOMCAT_USER="@MSUSER@"
diff --git a/docs/qig/publican.cfg b/core/resources/META-INF/cloudstack/allocator/module.properties
similarity index 75%
rename from docs/qig/publican.cfg
rename to core/resources/META-INF/cloudstack/allocator/module.properties
index 52d434c3775..7866be06f30 100644
--- a/docs/qig/publican.cfg
+++ b/core/resources/META-INF/cloudstack/allocator/module.properties
@@ -1,13 +1,12 @@
-# Config::Simple 4.59
-# Fri May 25 12:50:59 2012
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information#
+# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
-# http://www.apache.org/licenses/LICENSE-2.0
+#
+# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
@@ -15,8 +14,5 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
-
-xml_lang: "en-US"
-type: Book
-brand: cloudstack
-docname: qig
+name=allocator
+parent=core
diff --git a/core/resources/META-INF/cloudstack/allocator/spring-core-allocator-context.xml b/core/resources/META-INF/cloudstack/allocator/spring-core-allocator-context.xml
new file mode 100644
index 00000000000..65ebc704400
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/allocator/spring-core-allocator-context.xml
@@ -0,0 +1,32 @@
+
+
+
+
+
+
\ No newline at end of file
diff --git a/core/resources/META-INF/cloudstack/allocator/spring-core-lifecycle-allocator-context-inheritable.xml b/core/resources/META-INF/cloudstack/allocator/spring-core-lifecycle-allocator-context-inheritable.xml
new file mode 100644
index 00000000000..ad00de8be2c
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/allocator/spring-core-lifecycle-allocator-context-inheritable.xml
@@ -0,0 +1,42 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/core/resources/META-INF/cloudstack/api/module.properties b/core/resources/META-INF/cloudstack/api/module.properties
new file mode 100644
index 00000000000..cc66a099a6c
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/api/module.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+name=api
+parent=core
diff --git a/core/resources/META-INF/cloudstack/api/spring-core-lifecycle-api-context-inheritable.xml b/core/resources/META-INF/cloudstack/api/spring-core-lifecycle-api-context-inheritable.xml
new file mode 100644
index 00000000000..b0ed228c0da
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/api/spring-core-lifecycle-api-context-inheritable.xml
@@ -0,0 +1,53 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/core/resources/META-INF/cloudstack/backend/module.properties b/core/resources/META-INF/cloudstack/backend/module.properties
new file mode 100644
index 00000000000..ab18ad18837
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/backend/module.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+name=backend
+parent=core
diff --git a/core/resources/META-INF/cloudstack/bootstrap/module.properties b/core/resources/META-INF/cloudstack/bootstrap/module.properties
new file mode 100644
index 00000000000..716bd002d47
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/bootstrap/module.properties
@@ -0,0 +1,17 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+name=bootstrap
diff --git a/core/resources/META-INF/cloudstack/bootstrap/spring-bootstrap-context-inheritable.xml b/core/resources/META-INF/cloudstack/bootstrap/spring-bootstrap-context-inheritable.xml
new file mode 100644
index 00000000000..adee3ed28e0
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/bootstrap/spring-bootstrap-context-inheritable.xml
@@ -0,0 +1,39 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/core/resources/META-INF/cloudstack/bootstrap/spring-bootstrap-context.xml b/core/resources/META-INF/cloudstack/bootstrap/spring-bootstrap-context.xml
new file mode 100644
index 00000000000..40fcc71c14e
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/bootstrap/spring-bootstrap-context.xml
@@ -0,0 +1,32 @@
+
+
+
+
+
+
diff --git a/core/resources/META-INF/cloudstack/compute/module.properties b/core/resources/META-INF/cloudstack/compute/module.properties
new file mode 100644
index 00000000000..0a12aae7c19
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/compute/module.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+name=compute
+parent=backend
diff --git a/core/resources/META-INF/cloudstack/compute/spring-core-lifecycle-compute-context-inheritable.xml b/core/resources/META-INF/cloudstack/compute/spring-core-lifecycle-compute-context-inheritable.xml
new file mode 100644
index 00000000000..b57f52fc2ef
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/compute/spring-core-lifecycle-compute-context-inheritable.xml
@@ -0,0 +1,45 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/core/resources/META-INF/cloudstack/core/module.properties b/core/resources/META-INF/cloudstack/core/module.properties
new file mode 100644
index 00000000000..fd5ecb7bf15
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/core/module.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+name=core
+parent=system
diff --git a/core/resources/META-INF/cloudstack/core/spring-core-context.xml b/core/resources/META-INF/cloudstack/core/spring-core-context.xml
new file mode 100644
index 00000000000..6cd00a40103
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/core/spring-core-context.xml
@@ -0,0 +1,36 @@
+
+
+
+
+
+
+
+
+
+
diff --git a/core/resources/META-INF/cloudstack/core/spring-core-lifecycle-core-context-inheritable.xml b/core/resources/META-INF/cloudstack/core/spring-core-lifecycle-core-context-inheritable.xml
new file mode 100644
index 00000000000..06b9f5e0748
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/core/spring-core-lifecycle-core-context-inheritable.xml
@@ -0,0 +1,41 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/core/resources/META-INF/cloudstack/core/spring-core-registry-core-context.xml b/core/resources/META-INF/cloudstack/core/spring-core-registry-core-context.xml
new file mode 100644
index 00000000000..c2467b1a850
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/core/spring-core-registry-core-context.xml
@@ -0,0 +1,273 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/core/resources/META-INF/cloudstack/discoverer/module.properties b/core/resources/META-INF/cloudstack/discoverer/module.properties
new file mode 100644
index 00000000000..e511fb5e37d
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/discoverer/module.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+name=discoverer
+parent=core
diff --git a/core/resources/META-INF/cloudstack/discoverer/spring-core-lifecycle-discoverer-context-inheritable.xml b/core/resources/META-INF/cloudstack/discoverer/spring-core-lifecycle-discoverer-context-inheritable.xml
new file mode 100644
index 00000000000..2c83a104b32
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/discoverer/spring-core-lifecycle-discoverer-context-inheritable.xml
@@ -0,0 +1,35 @@
+
+
+
+
+
+
+
+
+
diff --git a/core/resources/META-INF/cloudstack/network/module.properties b/core/resources/META-INF/cloudstack/network/module.properties
new file mode 100644
index 00000000000..1a15fb01131
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/network/module.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+name=network
+parent=backend
diff --git a/core/resources/META-INF/cloudstack/network/spring-core-lifecycle-network-context-inheritable.xml b/core/resources/META-INF/cloudstack/network/spring-core-lifecycle-network-context-inheritable.xml
new file mode 100644
index 00000000000..3388ca41284
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/network/spring-core-lifecycle-network-context-inheritable.xml
@@ -0,0 +1,94 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/core/resources/META-INF/cloudstack/planner/module.properties b/core/resources/META-INF/cloudstack/planner/module.properties
new file mode 100644
index 00000000000..96359fbe6e3
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/planner/module.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+name=planner
+parent=allocator
\ No newline at end of file
diff --git a/core/resources/META-INF/cloudstack/planner/spring-core-lifecycle-planner-context-inheritable.xml b/core/resources/META-INF/cloudstack/planner/spring-core-lifecycle-planner-context-inheritable.xml
new file mode 100644
index 00000000000..715f86d9c28
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/planner/spring-core-lifecycle-planner-context-inheritable.xml
@@ -0,0 +1,41 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/core/resources/META-INF/cloudstack/storage/module.properties b/core/resources/META-INF/cloudstack/storage/module.properties
new file mode 100644
index 00000000000..564e85e116e
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/storage/module.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+name=storage
+parent=backend
diff --git a/core/resources/META-INF/cloudstack/storage/spring-lifecycle-storage-context-inheritable.xml b/core/resources/META-INF/cloudstack/storage/spring-lifecycle-storage-context-inheritable.xml
new file mode 100644
index 00000000000..ad78cad8edc
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/storage/spring-lifecycle-storage-context-inheritable.xml
@@ -0,0 +1,80 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/core/resources/META-INF/cloudstack/system/module.properties b/core/resources/META-INF/cloudstack/system/module.properties
new file mode 100644
index 00000000000..0b07ebeb478
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/system/module.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+name=system
+parent=bootstrap
diff --git a/core/resources/META-INF/cloudstack/system/spring-core-system-context-inheritable.xml b/core/resources/META-INF/cloudstack/system/spring-core-system-context-inheritable.xml
new file mode 100644
index 00000000000..80c5da744bb
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/system/spring-core-system-context-inheritable.xml
@@ -0,0 +1,54 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/core/resources/META-INF/cloudstack/system/spring-core-system-context.xml b/core/resources/META-INF/cloudstack/system/spring-core-system-context.xml
new file mode 100644
index 00000000000..c2d540ca102
--- /dev/null
+++ b/core/resources/META-INF/cloudstack/system/spring-core-system-context.xml
@@ -0,0 +1,50 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/core/src/com/cloud/agent/api/AttachVolumeCommand.java b/core/src/com/cloud/agent/api/AttachVolumeCommand.java
index 49b2a706b4b..e9276198dbf 100644
--- a/core/src/com/cloud/agent/api/AttachVolumeCommand.java
+++ b/core/src/com/cloud/agent/api/AttachVolumeCommand.java
@@ -25,6 +25,7 @@ public class AttachVolumeCommand extends Command {
private StoragePoolType pooltype;
private String volumePath;
private String volumeName;
+ private Long volumeSize;
private Long deviceId;
private String chainInfo;
private String poolUuid;
@@ -45,13 +46,14 @@ public class AttachVolumeCommand extends Command {
public AttachVolumeCommand(boolean attach, boolean managed, String vmName,
StoragePoolType pooltype, String volumePath, String volumeName,
- Long deviceId, String chainInfo) {
+ Long volumeSize, Long deviceId, String chainInfo) {
this.attach = attach;
this._managed = managed;
this.vmName = vmName;
this.pooltype = pooltype;
this.volumePath = volumePath;
this.volumeName = volumeName;
+ this.volumeSize = volumeSize;
this.deviceId = deviceId;
this.chainInfo = chainInfo;
}
@@ -85,6 +87,10 @@ public class AttachVolumeCommand extends Command {
return volumeName;
}
+ public Long getVolumeSize() {
+ return volumeSize;
+ }
+
public Long getDeviceId() {
return deviceId;
}
diff --git a/core/src/com/cloud/agent/api/ClusterSyncAnswer.java b/core/src/com/cloud/agent/api/ClusterSyncAnswer.java
index 99fee2a9dd1..e5ea1f15aca 100644
--- a/core/src/com/cloud/agent/api/ClusterSyncAnswer.java
+++ b/core/src/com/cloud/agent/api/ClusterSyncAnswer.java
@@ -18,12 +18,12 @@ package com.cloud.agent.api;
import java.util.HashMap;
-import com.cloud.utils.Pair;
+import com.cloud.utils.Ternary;
import com.cloud.vm.VirtualMachine.State;
public class ClusterSyncAnswer extends Answer {
private long _clusterId;
- private HashMap> _newStates;
+ private HashMap> _newStates;
private boolean _isExecuted=false;
// this is here because a cron command answer is being sent twice
@@ -38,7 +38,7 @@ public class ClusterSyncAnswer extends Answer {
}
- public ClusterSyncAnswer(long clusterId, HashMap> newStates){
+ public ClusterSyncAnswer(long clusterId, HashMap> newStates){
_clusterId = clusterId;
_newStates = newStates;
result = true;
@@ -48,7 +48,7 @@ public class ClusterSyncAnswer extends Answer {
return _clusterId;
}
- public HashMap> getNewStates() {
+ public HashMap> getNewStates() {
return _newStates;
}
diff --git a/core/src/com/cloud/agent/api/CreateVMSnapshotAnswer.java b/core/src/com/cloud/agent/api/CreateVMSnapshotAnswer.java
index f9fb1642b3f..8b8e69e9c38 100644
--- a/core/src/com/cloud/agent/api/CreateVMSnapshotAnswer.java
+++ b/core/src/com/cloud/agent/api/CreateVMSnapshotAnswer.java
@@ -17,21 +17,21 @@
package com.cloud.agent.api;
-import java.util.List;
+import org.apache.cloudstack.storage.to.VolumeObjectTO;
-import com.cloud.agent.api.to.VolumeTO;
+import java.util.List;
public class CreateVMSnapshotAnswer extends Answer {
-    private List<VolumeTO> volumeTOs;
+    private List<VolumeObjectTO> volumeTOs;
private VMSnapshotTO vmSnapshotTo;
-    public List<VolumeTO> getVolumeTOs() {
+    public List<VolumeObjectTO> getVolumeTOs() {
return volumeTOs;
}
-    public void setVolumeTOs(List<VolumeTO> volumeTOs) {
+    public void setVolumeTOs(List<VolumeObjectTO> volumeTOs) {
this.volumeTOs = volumeTOs;
}
@@ -53,7 +53,7 @@ public class CreateVMSnapshotAnswer extends Answer {
}
public CreateVMSnapshotAnswer(CreateVMSnapshotCommand cmd,
-            VMSnapshotTO vmSnapshotTo, List<VolumeTO> volumeTOs) {
+            VMSnapshotTO vmSnapshotTo, List<VolumeObjectTO> volumeTOs) {
super(cmd, true, "");
this.vmSnapshotTo = vmSnapshotTo;
this.volumeTOs = volumeTOs;
diff --git a/core/src/com/cloud/agent/api/CreateVMSnapshotCommand.java b/core/src/com/cloud/agent/api/CreateVMSnapshotCommand.java
index 478987d993b..bfbc21d1c2b 100644
--- a/core/src/com/cloud/agent/api/CreateVMSnapshotCommand.java
+++ b/core/src/com/cloud/agent/api/CreateVMSnapshotCommand.java
@@ -18,12 +18,14 @@ package com.cloud.agent.api;
import java.util.List;
+import com.cloud.agent.api.to.DataTO;
import com.cloud.agent.api.to.VolumeTO;
import com.cloud.vm.VirtualMachine;
+import org.apache.cloudstack.storage.to.VolumeObjectTO;
public class CreateVMSnapshotCommand extends VMSnapshotBaseCommand {
-    public CreateVMSnapshotCommand(String vmName, VMSnapshotTO snapshot, List<VolumeTO> volumeTOs, String guestOSType, VirtualMachine.State vmState) {
+    public CreateVMSnapshotCommand(String vmName, VMSnapshotTO snapshot, List<VolumeObjectTO> volumeTOs, String guestOSType, VirtualMachine.State vmState) {
super(vmName, snapshot, volumeTOs, guestOSType);
this.vmState = vmState;
}
diff --git a/core/src/com/cloud/agent/api/DeleteVMSnapshotAnswer.java b/core/src/com/cloud/agent/api/DeleteVMSnapshotAnswer.java
index 8f4ecad3d80..d6ae95cb89d 100644
--- a/core/src/com/cloud/agent/api/DeleteVMSnapshotAnswer.java
+++ b/core/src/com/cloud/agent/api/DeleteVMSnapshotAnswer.java
@@ -16,12 +16,12 @@
// under the License.
package com.cloud.agent.api;
+import org.apache.cloudstack.storage.to.VolumeObjectTO;
+
import java.util.List;
-import com.cloud.agent.api.to.VolumeTO;
-
public class DeleteVMSnapshotAnswer extends Answer {
-    private List<VolumeTO> volumeTOs;
+    private List<VolumeObjectTO> volumeTOs;
public DeleteVMSnapshotAnswer() {
}
@@ -32,16 +32,16 @@ public class DeleteVMSnapshotAnswer extends Answer {
}
public DeleteVMSnapshotAnswer(DeleteVMSnapshotCommand cmd,
-            List<VolumeTO> volumeTOs) {
+            List<VolumeObjectTO> volumeTOs) {
super(cmd, true, "");
this.volumeTOs = volumeTOs;
}
-    public List<VolumeTO> getVolumeTOs() {
+    public List<VolumeObjectTO> getVolumeTOs() {
return volumeTOs;
}
-    public void setVolumeTOs(List<VolumeTO> volumeTOs) {
+    public void setVolumeTOs(List<VolumeObjectTO> volumeTOs) {
this.volumeTOs = volumeTOs;
}
diff --git a/core/src/com/cloud/agent/api/DeleteVMSnapshotCommand.java b/core/src/com/cloud/agent/api/DeleteVMSnapshotCommand.java
index c213448bf9c..1c64a2b6e97 100644
--- a/core/src/com/cloud/agent/api/DeleteVMSnapshotCommand.java
+++ b/core/src/com/cloud/agent/api/DeleteVMSnapshotCommand.java
@@ -19,10 +19,11 @@ package com.cloud.agent.api;
import java.util.List;
import com.cloud.agent.api.to.VolumeTO;
+import org.apache.cloudstack.storage.to.VolumeObjectTO;
public class DeleteVMSnapshotCommand extends VMSnapshotBaseCommand {
-    public DeleteVMSnapshotCommand(String vmName, VMSnapshotTO snapshot, List<VolumeTO> volumeTOs, String guestOSType) {
+    public DeleteVMSnapshotCommand(String vmName, VMSnapshotTO snapshot, List<VolumeObjectTO> volumeTOs, String guestOSType) {
super( vmName, snapshot, volumeTOs, guestOSType);
}
}
diff --git a/core/src/com/cloud/agent/api/MigrateCommand.java b/core/src/com/cloud/agent/api/MigrateCommand.java
index 5042b8c1971..0d8f70cf047 100644
--- a/core/src/com/cloud/agent/api/MigrateCommand.java
+++ b/core/src/com/cloud/agent/api/MigrateCommand.java
@@ -16,26 +16,33 @@
// under the License.
package com.cloud.agent.api;
+import com.cloud.agent.api.to.VirtualMachineTO;
+
public class MigrateCommand extends Command {
String vmName;
String destIp;
String hostGuid;
boolean isWindows;
-
+ VirtualMachineTO vmTO;
protected MigrateCommand() {
}
- public MigrateCommand(String vmName, String destIp, boolean isWindows) {
+ public MigrateCommand(String vmName, String destIp, boolean isWindows, VirtualMachineTO vmTO) {
this.vmName = vmName;
this.destIp = destIp;
this.isWindows = isWindows;
+ this.vmTO = vmTO;
}
public boolean isWindows() {
return isWindows;
}
+ public VirtualMachineTO getVirtualMachine() {
+ return vmTO;
+ }
+
public String getDestinationIp() {
return destIp;
}
diff --git a/core/src/com/cloud/agent/api/MigrateWithStorageAnswer.java b/core/src/com/cloud/agent/api/MigrateWithStorageAnswer.java
index d87a5f184c8..6468884f464 100644
--- a/core/src/com/cloud/agent/api/MigrateWithStorageAnswer.java
+++ b/core/src/com/cloud/agent/api/MigrateWithStorageAnswer.java
@@ -20,9 +20,6 @@ import java.util.List;
import org.apache.cloudstack.storage.to.VolumeObjectTO;
-import com.cloud.agent.api.to.DiskTO;
-import com.cloud.agent.api.to.VolumeTO;
-
public class MigrateWithStorageAnswer extends Answer {
List<VolumeObjectTO> volumeTos;
diff --git a/core/src/com/cloud/agent/api/MigrateWithStorageCompleteAnswer.java b/core/src/com/cloud/agent/api/MigrateWithStorageCompleteAnswer.java
index fd8f22f3579..ec8bd0f4b65 100644
--- a/core/src/com/cloud/agent/api/MigrateWithStorageCompleteAnswer.java
+++ b/core/src/com/cloud/agent/api/MigrateWithStorageCompleteAnswer.java
@@ -20,8 +20,6 @@ import java.util.List;
import org.apache.cloudstack.storage.to.VolumeObjectTO;
-import com.cloud.agent.api.to.VolumeTO;
-
public class MigrateWithStorageCompleteAnswer extends Answer {
List<VolumeObjectTO> volumeTos;
diff --git a/core/src/com/cloud/agent/api/RebootCommand.java b/core/src/com/cloud/agent/api/RebootCommand.java
index 49712b6fce5..299e61b76af 100755
--- a/core/src/com/cloud/agent/api/RebootCommand.java
+++ b/core/src/com/cloud/agent/api/RebootCommand.java
@@ -16,7 +16,6 @@
// under the License.
package com.cloud.agent.api;
-import com.cloud.hypervisor.Hypervisor;
import com.cloud.vm.VirtualMachine;
public class RebootCommand extends Command {
diff --git a/core/src/com/cloud/agent/api/RevertToVMSnapshotAnswer.java b/core/src/com/cloud/agent/api/RevertToVMSnapshotAnswer.java
index 848ffc0ebf8..6170864c08e 100644
--- a/core/src/com/cloud/agent/api/RevertToVMSnapshotAnswer.java
+++ b/core/src/com/cloud/agent/api/RevertToVMSnapshotAnswer.java
@@ -17,14 +17,14 @@
package com.cloud.agent.api;
-import java.util.List;
-
-import com.cloud.agent.api.to.VolumeTO;
import com.cloud.vm.VirtualMachine;
+import org.apache.cloudstack.storage.to.VolumeObjectTO;
+
+import java.util.List;
public class RevertToVMSnapshotAnswer extends Answer {
- private List<VolumeTO> volumeTOs;
+ private List<VolumeObjectTO> volumeTOs;
private VirtualMachine.State vmState;
public RevertToVMSnapshotAnswer(RevertToVMSnapshotCommand cmd, boolean result,
@@ -37,7 +37,7 @@ public class RevertToVMSnapshotAnswer extends Answer {
}
public RevertToVMSnapshotAnswer(RevertToVMSnapshotCommand cmd,
- List<VolumeTO> volumeTOs,
+ List<VolumeObjectTO> volumeTOs,
VirtualMachine.State vmState) {
super(cmd, true, "");
this.volumeTOs = volumeTOs;
@@ -48,11 +48,11 @@ public class RevertToVMSnapshotAnswer extends Answer {
return vmState;
}
- public List<VolumeTO> getVolumeTOs() {
+ public List<VolumeObjectTO> getVolumeTOs() {
return volumeTOs;
}
- public void setVolumeTOs(List<VolumeTO> volumeTOs) {
+ public void setVolumeTOs(List<VolumeObjectTO> volumeTOs) {
this.volumeTOs = volumeTOs;
}
diff --git a/core/src/com/cloud/agent/api/RevertToVMSnapshotCommand.java b/core/src/com/cloud/agent/api/RevertToVMSnapshotCommand.java
index 429a186e0dc..1e5fd6c9a68 100644
--- a/core/src/com/cloud/agent/api/RevertToVMSnapshotCommand.java
+++ b/core/src/com/cloud/agent/api/RevertToVMSnapshotCommand.java
@@ -19,10 +19,11 @@ package com.cloud.agent.api;
import java.util.List;
import com.cloud.agent.api.to.VolumeTO;
+import org.apache.cloudstack.storage.to.VolumeObjectTO;
public class RevertToVMSnapshotCommand extends VMSnapshotBaseCommand {
- public RevertToVMSnapshotCommand(String vmName, VMSnapshotTO snapshot, List<VolumeTO> volumeTOs, String guestOSType) {
+ public RevertToVMSnapshotCommand(String vmName, VMSnapshotTO snapshot, List<VolumeObjectTO> volumeTOs, String guestOSType) {
super(vmName, snapshot, volumeTOs, guestOSType);
}
diff --git a/core/src/com/cloud/agent/api/StartAnswer.java b/core/src/com/cloud/agent/api/StartAnswer.java
index 922d060cfae..f3e75dfb75d 100644
--- a/core/src/com/cloud/agent/api/StartAnswer.java
+++ b/core/src/com/cloud/agent/api/StartAnswer.java
@@ -16,11 +16,14 @@
// under the License.
package com.cloud.agent.api;
+import java.util.Map;
+
import com.cloud.agent.api.to.VirtualMachineTO;
public class StartAnswer extends Answer {
VirtualMachineTO vm;
String host_guid;
+ Map<String, String> _iqnToPath;
protected StartAnswer() {
}
@@ -54,4 +57,12 @@ public class StartAnswer extends Answer {
public String getHost_guid() {
return host_guid;
}
+
+ public void setIqnToPath(Map<String, String> iqnToPath) {
+ _iqnToPath = iqnToPath;
+ }
+
+ public Map<String, String> getIqnToPath() {
+ return _iqnToPath;
+ }
}
diff --git a/core/src/com/cloud/agent/api/StartupRoutingCommand.java b/core/src/com/cloud/agent/api/StartupRoutingCommand.java
index 5961ab0017e..d52666b7d9d 100755
--- a/core/src/com/cloud/agent/api/StartupRoutingCommand.java
+++ b/core/src/com/cloud/agent/api/StartupRoutingCommand.java
@@ -22,7 +22,7 @@ import java.util.Map;
import com.cloud.host.Host;
import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.network.Networks.RouterPrivateIpStrategy;
-import com.cloud.utils.Pair;
+import com.cloud.utils.Ternary;
import com.cloud.vm.VirtualMachine.State;
public class StartupRoutingCommand extends StartupCommand {
@@ -48,7 +48,7 @@ public class StartupRoutingCommand extends StartupCommand {
long dom0MinMemory;
boolean poolSync;
Map<String, State> vms;
- HashMap<String, Pair<String, State>> _clusterVMStates;
+ HashMap<String, Ternary<String, State, String>> _clusterVMStates;
String caps;
String pool;
HypervisorType hypervisorType;
@@ -129,7 +129,7 @@ getHostDetails().put(RouterPrivateIpStrategy.class.getCanonicalName(), privIpStr
}
}
- public void setClusterVMStateChanges(HashMap<String, Pair<String, State>> allStates){
+ public void setClusterVMStateChanges(HashMap<String, Ternary<String, State, String>> allStates){
_clusterVMStates = allStates;
}
@@ -157,7 +157,7 @@ getHostDetails().put(RouterPrivateIpStrategy.class.getCanonicalName(), privIpStr
return vms;
}
- public HashMap<String, Pair<String, State>> getClusterVMStateChanges() {
+ public HashMap<String, Ternary<String, State, String>> getClusterVMStateChanges() {
return _clusterVMStates;
}
diff --git a/core/src/com/cloud/agent/api/StopAnswer.java b/core/src/com/cloud/agent/api/StopAnswer.java
index 0af23853da5..614835e2a37 100755
--- a/core/src/com/cloud/agent/api/StopAnswer.java
+++ b/core/src/com/cloud/agent/api/StopAnswer.java
@@ -17,37 +17,34 @@
package com.cloud.agent.api;
public class StopAnswer extends RebootAnswer {
- Integer vncPort;
+
+ private String hypervisortoolsversion;
Integer timeOffset;
protected StopAnswer() {
}
- public StopAnswer(StopCommand cmd, String details, Integer vncPort, Integer timeOffset, boolean success) {
+ public StopAnswer(StopCommand cmd, String details, String hypervisortoolsversion, Integer timeOffset, boolean success) {
super(cmd, details, success);
- this.vncPort = vncPort;
+ this.hypervisortoolsversion = hypervisortoolsversion;
this.timeOffset = timeOffset;
}
- public StopAnswer(StopCommand cmd, String details, Integer vncPort, boolean success) {
+ public StopAnswer(StopCommand cmd, String details, boolean success) {
super(cmd, details, success);
- this.vncPort = vncPort;
+ this.hypervisortoolsversion = null;
this.timeOffset = null;
}
- public StopAnswer(StopCommand cmd, String details, boolean success) {
- super(cmd, details, success);
- vncPort = null;
- timeOffset = null;
- }
public StopAnswer(StopCommand cmd, Exception e) {
super(cmd, e);
+ this.hypervisortoolsversion = null;
+ this.timeOffset = null;
}
- @Override
- public Integer getVncPort() {
- return vncPort;
+ public String getHypervisorToolsVersion() {
+ return hypervisortoolsversion;
}
public Integer getTimeOffset() {
diff --git a/core/src/com/cloud/agent/api/VMSnapshotBaseCommand.java b/core/src/com/cloud/agent/api/VMSnapshotBaseCommand.java
index 2120f2f73b1..b2c524194ea 100644
--- a/core/src/com/cloud/agent/api/VMSnapshotBaseCommand.java
+++ b/core/src/com/cloud/agent/api/VMSnapshotBaseCommand.java
@@ -19,27 +19,29 @@ package com.cloud.agent.api;
import java.util.List;
+import com.cloud.agent.api.to.DataTO;
import com.cloud.agent.api.to.VolumeTO;
+import org.apache.cloudstack.storage.to.VolumeObjectTO;
public class VMSnapshotBaseCommand extends Command{
- protected List<VolumeTO> volumeTOs;
+ protected List<VolumeObjectTO> volumeTOs;
protected VMSnapshotTO target;
protected String vmName;
protected String guestOSType;
- public VMSnapshotBaseCommand(String vmName, VMSnapshotTO snapshot, List<VolumeTO> volumeTOs, String guestOSType) {
+ public VMSnapshotBaseCommand(String vmName, VMSnapshotTO snapshot, List<VolumeObjectTO> volumeTOs, String guestOSType) {
this.vmName = vmName;
this.target = snapshot;
this.volumeTOs = volumeTOs;
this.guestOSType = guestOSType;
}
- public List<VolumeTO> getVolumeTOs() {
+ public List<VolumeObjectTO> getVolumeTOs() {
return volumeTOs;
}
- public void setVolumeTOs(List<VolumeTO> volumeTOs) {
+ public void setVolumeTOs(List<VolumeObjectTO> volumeTOs) {
this.volumeTOs = volumeTOs;
}
diff --git a/core/src/com/cloud/agent/api/routing/HealthCheckLBConfigAnswer.java b/core/src/com/cloud/agent/api/routing/HealthCheckLBConfigAnswer.java
index dfca4ab5908..ee8033a7e28 100644
--- a/core/src/com/cloud/agent/api/routing/HealthCheckLBConfigAnswer.java
+++ b/core/src/com/cloud/agent/api/routing/HealthCheckLBConfigAnswer.java
@@ -20,7 +20,6 @@ import java.util.List;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.to.LoadBalancerTO;
-import com.cloud.agent.api.to.NicTO;
/**
* LoadBalancerConfigCommand sends the load balancer configuration
diff --git a/core/src/com/cloud/agent/api/routing/HealthCheckLBConfigCommand.java b/core/src/com/cloud/agent/api/routing/HealthCheckLBConfigCommand.java
index f705f6c9707..7206d2f5e35 100644
--- a/core/src/com/cloud/agent/api/routing/HealthCheckLBConfigCommand.java
+++ b/core/src/com/cloud/agent/api/routing/HealthCheckLBConfigCommand.java
@@ -17,7 +17,6 @@
package com.cloud.agent.api.routing;
import com.cloud.agent.api.to.LoadBalancerTO;
-import com.cloud.agent.api.to.NicTO;
/**
* LoadBalancerConfigCommand sends the load balancer configuration
diff --git a/core/src/com/cloud/agent/api/routing/LoadBalancerConfigCommand.java b/core/src/com/cloud/agent/api/routing/LoadBalancerConfigCommand.java
index ee29290b720..3a51e8ad6be 100644
--- a/core/src/com/cloud/agent/api/routing/LoadBalancerConfigCommand.java
+++ b/core/src/com/cloud/agent/api/routing/LoadBalancerConfigCommand.java
@@ -33,6 +33,7 @@ public class LoadBalancerConfigCommand extends NetworkElementCommand {
public String lbStatsAuth = "admin1:AdMiN123";
public String lbStatsUri = "/admin?stats";
public String maxconn ="";
+ public boolean keepAliveEnabled = false;
NicTO nic;
Long vpcId;
@@ -44,7 +45,7 @@ public class LoadBalancerConfigCommand extends NetworkElementCommand {
this.vpcId = vpcId;
}
- public LoadBalancerConfigCommand(LoadBalancerTO[] loadBalancers,String PublicIp,String GuestIp,String PrivateIp, NicTO nic, Long vpcId, String maxconn) {
+ public LoadBalancerConfigCommand(LoadBalancerTO[] loadBalancers,String PublicIp,String GuestIp,String PrivateIp, NicTO nic, Long vpcId, String maxconn, boolean keepAliveEnabled) {
this.loadBalancers = loadBalancers;
this.lbStatsPublicIP = PublicIp;
this.lbStatsPrivateIP = PrivateIp;
@@ -52,6 +53,7 @@ public class LoadBalancerConfigCommand extends NetworkElementCommand {
this.nic = nic;
this.vpcId = vpcId;
this.maxconn=maxconn;
+ this.keepAliveEnabled = keepAliveEnabled;
}
public NicTO getNic() {
diff --git a/core/src/com/cloud/agent/api/routing/RemoteAccessVpnCfgCommand.java b/core/src/com/cloud/agent/api/routing/RemoteAccessVpnCfgCommand.java
index 68d7caf016f..37278ee64d3 100644
--- a/core/src/com/cloud/agent/api/routing/RemoteAccessVpnCfgCommand.java
+++ b/core/src/com/cloud/agent/api/routing/RemoteAccessVpnCfgCommand.java
@@ -20,10 +20,13 @@ package com.cloud.agent.api.routing;
public class RemoteAccessVpnCfgCommand extends NetworkElementCommand {
boolean create;
+ private boolean vpcEnabled;
String vpnServerIp;
String ipRange;
String presharedKey;
String localIp;
+ private String localCidr;
+ private String publicInterface;
protected RemoteAccessVpnCfgCommand() {
this.create = false;
@@ -39,12 +42,18 @@ public class RemoteAccessVpnCfgCommand extends NetworkElementCommand {
}
- public RemoteAccessVpnCfgCommand(boolean create, String vpnServerAddress, String localIp, String ipRange, String ipsecPresharedKey) {
+ public RemoteAccessVpnCfgCommand(boolean create, String vpnServerAddress, String localIp, String ipRange, String ipsecPresharedKey, boolean vpcEnabled) {
this.vpnServerIp = vpnServerAddress;
this.ipRange = ipRange;
this.presharedKey = ipsecPresharedKey;
this.localIp = localIp;
this.create = create;
+ this.vpcEnabled = vpcEnabled;
+ if (vpcEnabled) {
+ this.setPublicInterface("eth1");
+ } else {
+ this.setPublicInterface("eth2");
+ }
}
public String getVpnServerIp() {
@@ -75,4 +84,28 @@ public class RemoteAccessVpnCfgCommand extends NetworkElementCommand {
return localIp;
}
+ public boolean isVpcEnabled() {
+ return vpcEnabled;
+ }
+
+ public void setVpcEnabled(boolean vpcEnabled) {
+ this.vpcEnabled = vpcEnabled;
+ }
+
+ public String getLocalCidr() {
+ return localCidr;
+ }
+
+ public void setLocalCidr(String localCidr) {
+ this.localCidr = localCidr;
+ }
+
+ public String getPublicInterface() {
+ return publicInterface;
+ }
+
+ public void setPublicInterface(String publicInterface) {
+ this.publicInterface = publicInterface;
+ }
+
}
diff --git a/core/src/com/cloud/agent/api/routing/SetNetworkACLCommand.java b/core/src/com/cloud/agent/api/routing/SetNetworkACLCommand.java
index 236e8ea907a..ba4b4b427bf 100644
--- a/core/src/com/cloud/agent/api/routing/SetNetworkACLCommand.java
+++ b/core/src/com/cloud/agent/api/routing/SetNetworkACLCommand.java
@@ -20,9 +20,7 @@ package com.cloud.agent.api.routing;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
-import java.util.HashSet;
import java.util.List;
-import java.util.Set;
import com.cloud.agent.api.to.NetworkACLTO;
import com.cloud.agent.api.to.NicTO;
diff --git a/core/src/com/cloud/agent/api/storage/UploadCommand.java b/core/src/com/cloud/agent/api/storage/UploadCommand.java
index 9b893e2abd5..98eebe423c3 100644
--- a/core/src/com/cloud/agent/api/storage/UploadCommand.java
+++ b/core/src/com/cloud/agent/api/storage/UploadCommand.java
@@ -41,30 +41,30 @@ public class UploadCommand extends AbstractUploadCommand implements InternalIden
this.template = new TemplateTO(template);
this.url = url;
this.installPath = installPath;
- this.checksum = template.getChecksum();
- this.id = template.getId();
- this.templateSizeInBytes = sizeInBytes;
+ checksum = template.getChecksum();
+ id = template.getId();
+ templateSizeInBytes = sizeInBytes;
}
public UploadCommand(String url, long id, long sizeInBytes, String installPath, Type type){
- this.template = null;
+ template = null;
this.url = url;
this.installPath = installPath;
this.id = id;
this.type = type;
- this.templateSizeInBytes = sizeInBytes;
+ templateSizeInBytes = sizeInBytes;
}
protected UploadCommand() {
}
public UploadCommand(UploadCommand that) {
- this.template = that.template;
- this.url = that.url;
- this.installPath = that.installPath;
- this.checksum = that.getChecksum();
- this.id = that.id;
+ template = that.template;
+ url = that.url;
+ installPath = that.installPath;
+ checksum = that.getChecksum();
+ id = that.id;
}
public String getDescription() {
@@ -114,7 +114,8 @@ public class UploadCommand extends AbstractUploadCommand implements InternalIden
this.templateSizeInBytes = templateSizeInBytes;
}
- public long getId() {
+ @Override
+ public long getId() {
return id;
}
diff --git a/core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java b/core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java
index 9e6216f5036..874146c6258 100755
--- a/core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java
+++ b/core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java
@@ -103,12 +103,10 @@ import com.cloud.utils.ssh.SshHelper;
@Local(value = {VirtualRoutingResource.class})
public class VirtualRoutingResource implements Manager {
private static final Logger s_logger = Logger.getLogger(VirtualRoutingResource.class);
- private String _savepasswordPath; // This script saves a random password to the DomR file system
private String _publicIpAddress;
private String _firewallPath;
private String _loadbPath;
private String _dhcpEntryPath;
- private String _vmDataPath;
private String _publicEthIf;
private String _privateEthIf;
private String _bumpUpPriorityPath;
@@ -215,6 +213,8 @@ public class VirtualRoutingResource implements Manager {
args += " -s ";
args += cmd.getVpnServerIp();
}
+ args += " -C " + cmd.getLocalCidr();
+ args += " -i " + cmd.getPublicInterface();
String result = routerProxy("vpn_l2tp.sh", cmd.getAccessDetail(NetworkElementCommand.ROUTER_IP), args);
if (result != null) {
return new Answer(cmd, false, "Configure VPN failed");
@@ -549,13 +549,14 @@ public class VirtualRoutingResource implements Manager {
final String vmIpAddress = cmd.getVmIpAddress();
final String local = vmName;
- // Run save_password_to_domr.sh
- final String result = savePassword(routerPrivateIPAddress, vmIpAddress, password, local);
+ String args = "-v " + vmIpAddress;
+ args += " -p " + password;
+
+ String result = routerProxy("savepassword.sh", routerPrivateIPAddress, args);
if (result != null) {
return new Answer(cmd, false, "Unable to save password to DomR.");
- } else {
- return new Answer(cmd);
}
+ return new Answer(cmd);
}
protected Answer execute(final DhcpEntryCommand cmd) {
@@ -814,16 +815,6 @@ public class VirtualRoutingResource implements Manager {
return new ConsoleProxyLoadAnswer(cmd, proxyVmId, proxyVmName, success, result);
}
- public String savePassword(final String privateIpAddress, final String vmIpAddress, final String password, final String localPath) {
- final Script command = new Script(_savepasswordPath, _startTimeout, s_logger);
- command.add("-r", privateIpAddress);
- command.add("-v", vmIpAddress);
- command.add("-p", password);
- command.add(localPath);
-
- return command.execute();
- }
-
public String assignGuestNetwork(final String dev, final String routerIP,
final String routerGIP, final String gateway, final String cidr,
final String netmask, final String dns, final String domainName) {
@@ -1129,11 +1120,6 @@ public class VirtualRoutingResource implements Manager {
throw new ConfigurationException("Unable to find the call_loadbalancer.sh");
}
- _savepasswordPath = findScript("save_password_to_domr.sh");
- if (_savepasswordPath == null) {
- throw new ConfigurationException("Unable to find save_password_to_domr.sh");
- }
-
_dhcpEntryPath = findScript("dhcp_entry.sh");
if (_dhcpEntryPath == null) {
throw new ConfigurationException("Unable to find dhcp_entry.sh");
@@ -1216,6 +1202,41 @@ public class VirtualRoutingResource implements Manager {
return "Unable to connect";
}
+ public boolean connect(final String ipAddress, int retry, int sleep) {
+ for (int i = 0; i <= retry; i++) {
+ SocketChannel sch = null;
+ try {
+ if (s_logger.isDebugEnabled()) {
+ s_logger.debug("Trying to connect to " + ipAddress);
+ }
+ sch = SocketChannel.open();
+ sch.configureBlocking(true);
+
+ final InetSocketAddress addr = new InetSocketAddress(ipAddress, _port);
+ sch.connect(addr);
+ return true;
+ } catch (final IOException e) {
+ if (s_logger.isDebugEnabled()) {
+ s_logger.debug("Could not connect to " + ipAddress);
+ }
+ } finally {
+ if (sch != null) {
+ try {
+ sch.close();
+ } catch (final IOException e) {}
+ }
+ }
+ try {
+ Thread.sleep(sleep);
+ } catch (final InterruptedException e) {
+ }
+ }
+
+ s_logger.debug("Unable to logon to " + ipAddress);
+
+ return false;
+ }
+
@Override
public String getName() {
return _name;
diff --git a/core/src/com/cloud/agent/transport/Request.java b/core/src/com/cloud/agent/transport/Request.java
index b0fa4cc2960..cbeb112fea7 100755
--- a/core/src/com/cloud/agent/transport/Request.java
+++ b/core/src/com/cloud/agent/transport/Request.java
@@ -31,14 +31,6 @@ import java.util.zip.GZIPOutputStream;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
-import com.cloud.agent.api.Answer;
-import com.cloud.agent.api.Command;
-import com.cloud.agent.api.SecStorageFirewallCfgCommand.PortConfig;
-import com.cloud.exception.UnsupportedVersionException;
-import com.cloud.serializer.GsonHelper;
-import com.cloud.utils.NumbersUtil;
-import com.cloud.utils.Pair;
-import com.cloud.utils.exception.CloudRuntimeException;
import com.google.gson.Gson;
import com.google.gson.JsonArray;
import com.google.gson.JsonDeserializationContext;
@@ -50,6 +42,15 @@ import com.google.gson.JsonSerializationContext;
import com.google.gson.JsonSerializer;
import com.google.gson.stream.JsonReader;
+import com.cloud.agent.api.Answer;
+import com.cloud.agent.api.Command;
+import com.cloud.agent.api.SecStorageFirewallCfgCommand.PortConfig;
+import com.cloud.exception.UnsupportedVersionException;
+import com.cloud.serializer.GsonHelper;
+import com.cloud.utils.NumbersUtil;
+import com.cloud.utils.Pair;
+import com.cloud.utils.exception.CloudRuntimeException;
+
/**
* Request is a simple wrapper around command and answer to add sequencing,
* versioning, and flags. Note that the version here represents the changes
@@ -107,7 +108,8 @@ public class Request {
protected long _agentId;
protected Command[] _cmds;
protected String _content;
-
+ protected String _agentName;
+
protected Request() {
}
@@ -141,6 +143,11 @@ public class Request {
setFromServer(fromServer);
}
+ public Request(long agentId, String agentName, long mgmtId, Command[] cmds, boolean stopOnError, boolean fromServer) {
+ this(agentId, mgmtId, cmds, stopOnError, fromServer);
+ setAgentName(agentName);
+ }
+
public void setSequence(long seq) {
_seq = seq;
}
@@ -158,14 +165,14 @@ public class Request {
}
protected Request(final Request that, final Command[] cmds) {
- this._ver = that._ver;
- this._seq = that._seq;
+ _ver = that._ver;
+ _seq = that._seq;
setInSequence(that.executeInSequence());
setStopOnError(that.stopOnError());
- this._cmds = cmds;
- this._mgmtId = that._mgmtId;
- this._via = that._via;
- this._agentId = that._agentId;
+ _cmds = cmds;
+ _mgmtId = that._mgmtId;
+ _via = that._via;
+ _agentId = that._agentId;
setFromServer(!that.isFromServer());
}
@@ -173,6 +180,10 @@ public class Request {
_flags |= (stopOnError ? FLAG_STOP_ON_ERROR : 0);
}
+ private final void setAgentName(String agentName) {
+ _agentName = agentName;
+ }
+
private final void setInSequence(boolean inSequence) {
_flags |= (inSequence ? FLAG_IN_SEQUENCE : 0);
}
@@ -287,7 +298,7 @@ public class Request {
retBuff.flip();
return retBuff;
}
-
+
public static ByteBuffer doCompress(ByteBuffer buffer, int length) {
ByteArrayOutputStream byteOut = new ByteArrayOutputStream(length);
byte[] array;
@@ -307,11 +318,11 @@ public class Request {
}
return ByteBuffer.wrap(byteOut.toByteArray());
}
-
+
public ByteBuffer[] toBytes() {
final ByteBuffer[] buffers = new ByteBuffer[2];
ByteBuffer tmp;
-
+
if (_content == null) {
_content = s_gson.toJson(_cmds, _cmds.getClass());
}
@@ -372,7 +383,7 @@ public class Request {
}
}
}
-
+
@Override
public String toString() {
return log("", true, Level.DEBUG);
@@ -421,7 +432,11 @@ public class Request {
buf.append(msg);
buf.append(" { ").append(getType());
- buf.append(", MgmtId: ").append(_mgmtId).append(", via: ").append(_via);
+ if (_agentName != null) {
+ buf.append(", MgmtId: ").append(_mgmtId).append(", via: ").append(_via).append("(" + _agentName + ")");
+ } else {
+ buf.append(", MgmtId: ").append(_mgmtId).append(", via: ").append(_via);
+ }
buf.append(", Ver: ").append(_ver.toString());
buf.append(", Flags: ").append(Integer.toBinaryString(getFlags())).append(", ");
buf.append(content);
@@ -447,7 +462,7 @@ public class Request {
if (version.ordinal() != Version.v1.ordinal() && version.ordinal() != Version.v3.ordinal()) {
throw new UnsupportedVersionException("This version is no longer supported: " + version.toString(), UnsupportedVersionException.IncompatibleVersion);
}
- final byte reserved = buff.get(); // tossed away for now.
+ buff.get();
final short flags = buff.getShort();
final boolean isRequest = (flags & FLAG_REQUEST) > 0;
@@ -456,7 +471,7 @@ public class Request {
final int size = buff.getInt();
final long mgmtId = buff.getLong();
final long agentId = buff.getLong();
-
+
long via;
if (version.ordinal() == Version.v1.ordinal()) {
via = buff.getLong();
@@ -467,7 +482,7 @@ public class Request {
if ((flags & FLAG_COMPRESSED) != 0) {
buff = doDecompress(buff, size);
}
-
+
byte[] command = null;
int offset = 0;
if (buff.hasArray()) {
@@ -519,7 +534,7 @@ public class Request {
public static long getViaAgentId(final byte[] bytes) {
return NumbersUtil.bytesToLong(bytes, 32);
}
-
+
public static boolean fromServer(final byte[] bytes) {
return (bytes[3] & FLAG_FROM_SERVER) > 0;
}
diff --git a/core/src/com/cloud/exception/UsageServerException.java b/core/src/com/cloud/exception/UsageServerException.java
index 68f83777b77..924934f0496 100644
--- a/core/src/com/cloud/exception/UsageServerException.java
+++ b/core/src/com/cloud/exception/UsageServerException.java
@@ -18,15 +18,20 @@ package com.cloud.exception;
public class UsageServerException extends CloudException {
- public UsageServerException() {
-
- }
+ /**
+ *
+ */
+ private static final long serialVersionUID = -8398313106067116466L;
+
+ public UsageServerException() {
+
+ }
+
+ public UsageServerException(String message) {
+ super(message);
+ }
- public UsageServerException(String message) {
- super(message);
- }
-
}
diff --git a/core/src/com/cloud/network/HAProxyConfigurator.java b/core/src/com/cloud/network/HAProxyConfigurator.java
index 230912595cf..ae49a2e236f 100644
--- a/core/src/com/cloud/network/HAProxyConfigurator.java
+++ b/core/src/com/cloud/network/HAProxyConfigurator.java
@@ -44,6 +44,7 @@ public class HAProxyConfigurator implements LoadBalancerConfigurator {
private static String[] globalSection = { "global",
"\tlog 127.0.0.1:3914 local0 warning",
"\tmaxconn 4096",
+ "\tmaxpipes 1024",
"\tchroot /var/lib/haproxy",
"\tuser haproxy",
"\tgroup haproxy",
@@ -122,7 +123,9 @@ public class HAProxyConfigurator implements LoadBalancerConfigurator {
sb = new StringBuilder();
// FIXME sb.append("\t").append("balance ").append(algorithm);
result.add(sb.toString());
- if (publicPort.equals(NetUtils.HTTP_PORT)) {
+ if (publicPort.equals(NetUtils.HTTP_PORT)
+ // && global option httpclose set (or maybe not in this spot???)
+ ) {
sb = new StringBuilder();
sb.append("\t").append("mode http");
result.add(sb.toString());
@@ -434,7 +437,7 @@ public class HAProxyConfigurator implements LoadBalancerConfigurator {
return sb.toString();
}
- private List<String> getRulesForPool(LoadBalancerTO lbTO) {
+ private List<String> getRulesForPool(LoadBalancerTO lbTO, boolean keepAliveEnabled) {
StringBuilder sb = new StringBuilder();
String poolName = sb.append(lbTO.getSrcIp().replace(".", "_"))
.append('-').append(lbTO.getSrcPort()).toString();
@@ -498,7 +501,9 @@ public class HAProxyConfigurator implements LoadBalancerConfigurator {
if ((stickinessSubRule != null) && !destsAvailable) {
s_logger.warn("Haproxy stickiness policy for lb rule: " + lbTO.getSrcIp() + ":" + lbTO.getSrcPort() +": Not Applied, cause: backends are unavailable");
}
- if ((publicPort.equals(NetUtils.HTTP_PORT)) || (httpbasedStickiness) ) {
+ if ((publicPort.equals(NetUtils.HTTP_PORT)
+ && !keepAliveEnabled
+ ) || (httpbasedStickiness) ) {
sb = new StringBuilder();
sb.append("\t").append("mode http");
result.add(sb.toString());
@@ -516,23 +521,58 @@ public class HAProxyConfigurator implements LoadBalancerConfigurator {
StringBuilder rule = new StringBuilder("\nlisten ").append(ruleName)
.append(" ").append(statsIp).append(":")
.append(lbCmd.lbStatsPort);
+ // TODO DH: write test for this in both cases
+ if(!lbCmd.keepAliveEnabled) {
+ s_logger.info("Haproxy mode http enabled");
+ rule.append("\n\tmode http\n\toption httpclose");
+ }
rule.append(
- "\n\tmode http\n\toption httpclose\n\tstats enable\n\tstats uri ")
+ "\n\tstats enable\n\tstats uri ")
.append(lbCmd.lbStatsUri)
.append("\n\tstats realm Haproxy\\ Statistics\n\tstats auth ")
.append(lbCmd.lbStatsAuth);
rule.append("\n");
- return rule.toString();
+ String result = rule.toString();
+ if(s_logger.isDebugEnabled()) {
+ s_logger.debug("Haproxystats rule: " + result);
+ }
+ return result;
}
@Override
public String[] generateConfiguration(LoadBalancerConfigCommand lbCmd) {
List<String> result = new ArrayList<String>();
List<String> gSection = Arrays.asList(globalSection);
+// note that this is overwritten on the String in the static ArrayList
gSection.set(2,"\tmaxconn " + lbCmd.maxconn);
+ // TODO DH: write test for this function
+ String pipesLine = "\tmaxpipes " + Long.toString(Long.parseLong(lbCmd.maxconn)/4);
+ gSection.set(3,pipesLine);
+ if(s_logger.isDebugEnabled()) {
+ for(String s : gSection) {
+ s_logger.debug("global section: " + s);
+ }
+ }
result.addAll(gSection);
+ // TODO decide under what circumstances these options are needed
+// result.add("\tnokqueue");
+// result.add("\tnopoll");
+
result.add(blankLine);
- result.addAll(Arrays.asList(defaultsSection));
+ List<String> dSection = Arrays.asList(defaultsSection);
+ if(lbCmd.keepAliveEnabled) {
+ dSection.set(6, "\t#no option set here :<");
+ dSection.set(7, "\tno option forceclose");
+ } else {
+ dSection.set(6, "\toption forwardfor");
+ dSection.set(7, "\toption forceclose");
+ }
+ if(s_logger.isDebugEnabled()) {
+ for(String s : dSection) {
+ s_logger.debug("default section: " + s);
+ }
+ }
+ result.addAll(dSection);
if (!lbCmd.lbStatsVisibility.equals("disabled")) {
/* new rule : listen admin_page guestip/link-local:8081 */
if (lbCmd.lbStatsVisibility.equals("global")) {
@@ -571,7 +611,7 @@ public class HAProxyConfigurator implements LoadBalancerConfigurator {
if ( lbTO.isRevoked() ) {
continue;
}
- List<String> poolRules = getRulesForPool(lbTO);
+ List<String> poolRules = getRulesForPool(lbTO, lbCmd.keepAliveEnabled);
result.addAll(poolRules);
has_listener = true;
}
diff --git a/core/src/com/cloud/storage/JavaStorageLayer.java b/core/src/com/cloud/storage/JavaStorageLayer.java
index bfaa767eaed..e2e28ee5c36 100644
--- a/core/src/com/cloud/storage/JavaStorageLayer.java
+++ b/core/src/com/cloud/storage/JavaStorageLayer.java
@@ -17,7 +17,6 @@
package com.cloud.storage;
import java.io.File;
-import java.io.FileFilter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
@@ -41,7 +40,7 @@ public class JavaStorageLayer implements StorageLayer {
public JavaStorageLayer(boolean makeWorldWriteable) {
this();
- this._makeWorldWriteable = makeWorldWriteable;
+ _makeWorldWriteable = makeWorldWriteable;
}
@Override
@@ -171,7 +170,7 @@ public class JavaStorageLayer implements StorageLayer {
File dir = new File(dirName);
if (dir.exists()) {
String uniqDirName = dir.getAbsolutePath() + File.separator + UUID.randomUUID().toString();
- if (this.mkdir(uniqDirName)) {
+ if (mkdir(uniqDirName)) {
return new File(uniqDirName);
}
}
@@ -219,6 +218,7 @@ public class JavaStorageLayer implements StorageLayer {
return dirPaths;
}
+ @Override
public boolean setWorldReadableAndWriteable(File file) {
return (file.setReadable(true, false) && file.setWritable(true, false));
}
diff --git a/core/src/com/cloud/storage/resource/StoragePoolResource.java b/core/src/com/cloud/storage/resource/StoragePoolResource.java
index 8dff97db9c0..f6d7896b34c 100644
--- a/core/src/com/cloud/storage/resource/StoragePoolResource.java
+++ b/core/src/com/cloud/storage/resource/StoragePoolResource.java
@@ -21,8 +21,6 @@ import com.cloud.agent.api.storage.CopyVolumeAnswer;
import com.cloud.agent.api.storage.CopyVolumeCommand;
import com.cloud.agent.api.storage.CreateAnswer;
import com.cloud.agent.api.storage.CreateCommand;
-import com.cloud.agent.api.storage.CreateVolumeOVAAnswer;
-import com.cloud.agent.api.storage.CreateVolumeOVACommand;
import com.cloud.agent.api.storage.DestroyCommand;
import com.cloud.agent.api.storage.PrimaryStorageDownloadAnswer;
import com.cloud.agent.api.storage.PrimaryStorageDownloadCommand;
diff --git a/core/src/com/cloud/storage/resource/StorageProcessor.java b/core/src/com/cloud/storage/resource/StorageProcessor.java
index 5fa9f8a86e3..29f4a677375 100644
--- a/core/src/com/cloud/storage/resource/StorageProcessor.java
+++ b/core/src/com/cloud/storage/resource/StorageProcessor.java
@@ -23,8 +23,10 @@ import org.apache.cloudstack.storage.command.CopyCommand;
 import org.apache.cloudstack.storage.command.CreateObjectCommand;
 import org.apache.cloudstack.storage.command.DeleteCommand;
 import org.apache.cloudstack.storage.command.DettachCommand;
+import org.apache.cloudstack.storage.command.ForgetObjectCmd;
+import org.apache.cloudstack.storage.command.IntroduceObjectCmd;
 import com.cloud.agent.api.Answer;
public interface StorageProcessor {
public Answer copyTemplateToPrimaryStorage(CopyCommand cmd);
@@ -43,4 +47,6 @@ public interface StorageProcessor {
public Answer deleteVolume(DeleteCommand cmd);
public Answer createVolumeFromSnapshot(CopyCommand cmd);
public Answer deleteSnapshot(DeleteCommand cmd);
+ Answer introduceObject(IntroduceObjectCmd cmd);
+ Answer forgetObject(ForgetObjectCmd cmd);
}
diff --git a/core/src/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java b/core/src/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java
index ab9aa2a3ee6..b43722a6418 100644
--- a/core/src/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java
+++ b/core/src/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java
@@ -24,6 +24,7 @@ import org.apache.cloudstack.storage.command.CreateObjectAnswer;
import org.apache.cloudstack.storage.command.CreateObjectCommand;
import org.apache.cloudstack.storage.command.DeleteCommand;
import org.apache.cloudstack.storage.command.DettachCommand;
+import org.apache.cloudstack.storage.command.IntroduceObjectCmd;
import org.apache.cloudstack.storage.command.StorageSubSystemCommand;
import org.apache.log4j.Logger;
@@ -33,7 +34,6 @@ import com.cloud.agent.api.to.DataObjectType;
import com.cloud.agent.api.to.DataStoreTO;
import com.cloud.agent.api.to.DataTO;
import com.cloud.agent.api.to.DiskTO;
-import com.cloud.agent.api.to.NfsTO;
import com.cloud.storage.DataStoreRole;
import com.cloud.storage.Volume;
@@ -55,6 +55,8 @@ public class StorageSubsystemCommandHandlerBase implements StorageSubsystemComma
return execute((AttachCommand)command);
} else if (command instanceof DettachCommand) {
return execute((DettachCommand)command);
+ } else if (command instanceof IntroduceObjectCmd) {
+ return processor.introduceObject((IntroduceObjectCmd)command);
}
return new Answer((Command)command, false, "not implemented yet");
}
@@ -65,7 +67,7 @@ public class StorageSubsystemCommandHandlerBase implements StorageSubsystemComma
DataStoreTO srcDataStore = srcData.getDataStore();
DataStoreTO destDataStore = destData.getDataStore();
- if ((srcData.getObjectType() == DataObjectType.TEMPLATE) && (srcDataStore instanceof NfsTO) && (destData.getDataStore().getRole() == DataStoreRole.Primary)) {
+ if (srcData.getObjectType() == DataObjectType.TEMPLATE && srcData.getDataStore().getRole() == DataStoreRole.Image && destData.getDataStore().getRole() == DataStoreRole.Primary) {
//copy template to primary storage
return processor.copyTemplateToPrimaryStorage(cmd);
} else if (srcData.getObjectType() == DataObjectType.TEMPLATE && srcDataStore.getRole() == DataStoreRole.Primary && destDataStore.getRole() == DataStoreRole.Primary) {
@@ -80,18 +82,19 @@ public class StorageSubsystemCommandHandlerBase implements StorageSubsystemComma
} else if (destData.getObjectType() == DataObjectType.TEMPLATE) {
return processor.createTemplateFromVolume(cmd);
}
- } else if (srcData.getObjectType() == DataObjectType.SNAPSHOT && srcData.getDataStore().getRole() == DataStoreRole.Primary) {
+ } else if (srcData.getObjectType() == DataObjectType.SNAPSHOT && destData.getObjectType() == DataObjectType.SNAPSHOT &&
+ srcData.getDataStore().getRole() == DataStoreRole.Primary) {
return processor.backupSnapshot(cmd);
} else if (srcData.getObjectType() == DataObjectType.SNAPSHOT && destData.getObjectType() == DataObjectType.VOLUME) {
- return processor.createVolumeFromSnapshot(cmd);
+ return processor.createVolumeFromSnapshot(cmd);
} else if (srcData.getObjectType() == DataObjectType.SNAPSHOT && destData.getObjectType() == DataObjectType.TEMPLATE) {
return processor.createTemplateFromSnapshot(cmd);
}
return new Answer(cmd, false, "not implemented yet");
}
-
-
+
+
protected Answer execute(CreateObjectCommand cmd) {
DataTO data = cmd.getData();
try {
@@ -106,21 +109,21 @@ public class StorageSubsystemCommandHandlerBase implements StorageSubsystemComma
return new CreateObjectAnswer(e.toString());
}
}
-
+
protected Answer execute(DeleteCommand cmd) {
DataTO data = cmd.getData();
Answer answer = null;
if (data.getObjectType() == DataObjectType.VOLUME) {
answer = processor.deleteVolume(cmd);
} else if (data.getObjectType() == DataObjectType.SNAPSHOT) {
- answer = processor.deleteSnapshot(cmd);
+ answer = processor.deleteSnapshot(cmd);
} else {
answer = new Answer(cmd, false, "unsupported type");
}
return answer;
}
-
+
protected Answer execute(AttachCommand cmd) {
DiskTO disk = cmd.getDisk();
if (disk.getType() == Volume.Type.ISO) {
@@ -129,7 +132,7 @@ public class StorageSubsystemCommandHandlerBase implements StorageSubsystemComma
return processor.attachVolume(cmd);
}
}
-
+
protected Answer execute(DettachCommand cmd) {
DiskTO disk = cmd.getDisk();
if (disk.getType() == Volume.Type.ISO) {
diff --git a/core/src/com/cloud/storage/template/FtpTemplateUploader.java b/core/src/com/cloud/storage/template/FtpTemplateUploader.java
index 61b1984634a..c3c9f1e74ad 100755
--- a/core/src/com/cloud/storage/template/FtpTemplateUploader.java
+++ b/core/src/com/cloud/storage/template/FtpTemplateUploader.java
@@ -30,203 +30,202 @@ import org.apache.log4j.Logger;
public class FtpTemplateUploader implements TemplateUploader {
-
- public static final Logger s_logger = Logger.getLogger(FtpTemplateUploader.class.getName());
- public TemplateUploader.Status status = TemplateUploader.Status.NOT_STARTED;
- public String errorString = "";
- public long totalBytes = 0;
- public long entitySizeinBytes;
- private String sourcePath;
- private String ftpUrl;
- private UploadCompleteCallback completionCallback;
- private boolean resume;
+
+ public static final Logger s_logger = Logger.getLogger(FtpTemplateUploader.class.getName());
+ public TemplateUploader.Status status = TemplateUploader.Status.NOT_STARTED;
+ public String errorString = "";
+ public long totalBytes = 0;
+ public long entitySizeinBytes;
+ private String sourcePath;
+ private String ftpUrl;
+ private UploadCompleteCallback completionCallback;
private BufferedInputStream inputStream = null;
private BufferedOutputStream outputStream = null;
- private static final int CHUNK_SIZE = 1024*1024; //1M
-
- public FtpTemplateUploader(String sourcePath, String url, UploadCompleteCallback callback, long entitySizeinBytes){
-
- this.sourcePath = sourcePath;
- this.ftpUrl = url;
- this.completionCallback = callback;
- this.entitySizeinBytes = entitySizeinBytes;
-
- }
-
- public long upload(UploadCompleteCallback callback )
- {
-
- switch (status) {
- case ABORTED:
- case UNRECOVERABLE_ERROR:
- case UPLOAD_FINISHED:
- return 0;
- default:
-
- }
-
- Date start = new Date();
-
- StringBuffer sb = new StringBuffer(ftpUrl);
- // check for authentication else assume its anonymous access.
- /* if (user != null && password != null)
+ private static final int CHUNK_SIZE = 1024*1024; //1M
+
+ public FtpTemplateUploader(String sourcePath, String url, UploadCompleteCallback callback, long entitySizeinBytes){
+
+ this.sourcePath = sourcePath;
+ ftpUrl = url;
+ completionCallback = callback;
+ this.entitySizeinBytes = entitySizeinBytes;
+
+ }
+
+ @Override
+ public long upload(UploadCompleteCallback callback )
+ {
+
+ switch (status) {
+ case ABORTED:
+ case UNRECOVERABLE_ERROR:
+ case UPLOAD_FINISHED:
+ return 0;
+ default:
+
+ }
+
+
+
+ StringBuffer sb = new StringBuffer(ftpUrl);
+ // check for authentication else assume its anonymous access.
+ /* if (user != null && password != null)
{
sb.append( user );
sb.append( ':' );
sb.append( password );
sb.append( '@' );
- }*/
- /*
- * type ==> a=ASCII mode, i=image (binary) mode, d= file directory
- * listing
- */
- sb.append( ";type=i" );
+ }*/
+ /*
+ * type ==> a=ASCII mode, i=image (binary) mode, d= file directory
+ * listing
+ */
+ sb.append( ";type=i" );
- try
- {
- URL url = new URL( sb.toString() );
- URLConnection urlc = url.openConnection();
- File sourceFile = new File(sourcePath);
- entitySizeinBytes = sourceFile.length();
+ try
+ {
+ URL url = new URL( sb.toString() );
+ URLConnection urlc = url.openConnection();
+ File sourceFile = new File(sourcePath);
+ entitySizeinBytes = sourceFile.length();
- outputStream = new BufferedOutputStream( urlc.getOutputStream() );
- inputStream = new BufferedInputStream( new FileInputStream(sourceFile) );
+ outputStream = new BufferedOutputStream( urlc.getOutputStream() );
+ inputStream = new BufferedInputStream( new FileInputStream(sourceFile) );
- status = TemplateUploader.Status.IN_PROGRESS;
+ status = TemplateUploader.Status.IN_PROGRESS;
- int bytes = 0;
- byte[] block = new byte[CHUNK_SIZE];
- boolean done=false;
- while (!done && status != Status.ABORTED ) {
- if ( (bytes = inputStream.read(block, 0, CHUNK_SIZE)) > -1) {
- outputStream.write(block,0, bytes);
- totalBytes += bytes;
- } else {
- done = true;
- }
- }
- status = TemplateUploader.Status.UPLOAD_FINISHED;
- return totalBytes;
- } catch (MalformedURLException e) {
- status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
- errorString = e.getMessage();
- s_logger.error(errorString);
- } catch (IOException e) {
- status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
- errorString = e.getMessage();
- s_logger.error(errorString);
- }
- finally
- {
- try
- {
- if (inputStream != null){
- inputStream.close();
- }
- if (outputStream != null){
- outputStream.close();
- }
- }catch (IOException ioe){
- s_logger.error(" Caught exception while closing the resources" );
- }
- if (callback != null) {
- callback.uploadComplete(status);
- }
- }
+ int bytes = 0;
+ byte[] block = new byte[CHUNK_SIZE];
+ boolean done=false;
+ while (!done && status != Status.ABORTED ) {
+ if ( (bytes = inputStream.read(block, 0, CHUNK_SIZE)) > -1) {
+ outputStream.write(block,0, bytes);
+ totalBytes += bytes;
+ } else {
+ done = true;
+ }
+ }
+ status = TemplateUploader.Status.UPLOAD_FINISHED;
+ return totalBytes;
+ } catch (MalformedURLException e) {
+ status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
+ errorString = e.getMessage();
+ s_logger.error(errorString);
+ } catch (IOException e) {
+ status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
+ errorString = e.getMessage();
+ s_logger.error(errorString);
+ }
+ finally
+ {
+ try
+ {
+ if (inputStream != null){
+ inputStream.close();
+ }
+ if (outputStream != null){
+ outputStream.close();
+ }
+ }catch (IOException ioe){
+                s_logger.error("Caught exception while closing the resources", ioe);
+ }
+ if (callback != null) {
+ callback.uploadComplete(status);
+ }
+ }
- return 0;
- }
+ return 0;
+ }
- @Override
- public void run() {
- try {
- upload(completionCallback);
- } catch (Throwable t) {
- s_logger.warn("Caught exception during upload "+ t.getMessage(), t);
- errorString = "Failed to install: " + t.getMessage();
- status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
- }
-
- }
+ @Override
+ public void run() {
+ try {
+ upload(completionCallback);
+ } catch (Throwable t) {
+ s_logger.warn("Caught exception during upload "+ t.getMessage(), t);
+ errorString = "Failed to install: " + t.getMessage();
+ status = TemplateUploader.Status.UNRECOVERABLE_ERROR;
+ }
- @Override
- public Status getStatus() {
- return status;
- }
+ }
- @Override
- public String getUploadError() {
- return errorString;
- }
+ @Override
+ public Status getStatus() {
+ return status;
+ }
- @Override
- public String getUploadLocalPath() {
- return sourcePath;
- }
+ @Override
+ public String getUploadError() {
+ return errorString;
+ }
- @Override
- public int getUploadPercent() {
- if (entitySizeinBytes == 0) {
- return 0;
- }
- return (int)(100.0*totalBytes/entitySizeinBytes);
- }
+ @Override
+ public String getUploadLocalPath() {
+ return sourcePath;
+ }
- @Override
- public long getUploadTime() {
- // TODO
- return 0;
- }
+ @Override
+ public int getUploadPercent() {
+ if (entitySizeinBytes == 0) {
+ return 0;
+ }
+ return (int)(100.0*totalBytes/entitySizeinBytes);
+ }
- @Override
- public long getUploadedBytes() {
- return totalBytes;
- }
+ @Override
+ public long getUploadTime() {
+ // TODO
+ return 0;
+ }
- @Override
- public void setResume(boolean resume) {
- this.resume = resume;
-
- }
+ @Override
+ public long getUploadedBytes() {
+ return totalBytes;
+ }
- @Override
- public void setStatus(Status status) {
- this.status = status;
- }
+ @Override
+ public void setResume(boolean resume) {
- @Override
- public void setUploadError(String string) {
- errorString = string;
- }
+ }
- @Override
- public boolean stopUpload() {
- switch (getStatus()) {
- case IN_PROGRESS:
- try {
- if(outputStream != null) {
- outputStream.close();
- }
- if (inputStream != null){
- inputStream.close();
- }
- } catch (IOException e) {
- s_logger.error(" Caught exception while closing the resources" );
- }
- status = TemplateUploader.Status.ABORTED;
- return true;
- case UNKNOWN:
- case NOT_STARTED:
- case RECOVERABLE_ERROR:
- case UNRECOVERABLE_ERROR:
- case ABORTED:
- status = TemplateUploader.Status.ABORTED;
- case UPLOAD_FINISHED:
- return true;
+ @Override
+ public void setStatus(Status status) {
+ this.status = status;
+ }
- default:
- return true;
- }
- }
+ @Override
+ public void setUploadError(String string) {
+ errorString = string;
+ }
+
+ @Override
+ public boolean stopUpload() {
+ switch (getStatus()) {
+ case IN_PROGRESS:
+ try {
+ if(outputStream != null) {
+ outputStream.close();
+ }
+ if (inputStream != null){
+ inputStream.close();
+ }
+ } catch (IOException e) {
+                s_logger.error("Caught exception while closing the resources", e);
+ }
+ status = TemplateUploader.Status.ABORTED;
+ return true;
+ case UNKNOWN:
+ case NOT_STARTED:
+ case RECOVERABLE_ERROR:
+ case UNRECOVERABLE_ERROR:
+ case ABORTED:
+ status = TemplateUploader.Status.ABORTED;
+ case UPLOAD_FINISHED:
+ return true;
+
+ default:
+ return true;
+ }
+ }
}
diff --git a/core/src/com/cloud/storage/template/HttpTemplateDownloader.java b/core/src/com/cloud/storage/template/HttpTemplateDownloader.java
index d87dd68bb81..f0f19629841 100644
--- a/core/src/com/cloud/storage/template/HttpTemplateDownloader.java
+++ b/core/src/com/cloud/storage/template/HttpTemplateDownloader.java
@@ -22,14 +22,9 @@ import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.RandomAccessFile;
-import java.net.Inet6Address;
-import java.net.InetAddress;
-import java.net.URI;
import java.net.URISyntaxException;
-import java.net.UnknownHostException;
import java.util.Date;
-import org.apache.cloudstack.storage.command.DownloadCommand.ResourceType;
import org.apache.commons.httpclient.ChunkedInputStream;
import org.apache.commons.httpclient.Credentials;
import org.apache.commons.httpclient.Header;
@@ -45,10 +40,11 @@ import org.apache.commons.httpclient.auth.AuthScope;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.params.HttpMethodParams;
import org.apache.log4j.Logger;
+import org.apache.cloudstack.managed.context.ManagedContextRunnable;
+import org.apache.cloudstack.storage.command.DownloadCommand.ResourceType;
import com.cloud.agent.api.storage.Proxy;
import com.cloud.storage.StorageLayer;
-import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.Pair;
import com.cloud.utils.UriUtils;
@@ -56,145 +52,146 @@ import com.cloud.utils.UriUtils;
* Download a template file using HTTP
*
*/
-public class HttpTemplateDownloader implements TemplateDownloader {
- public static final Logger s_logger = Logger.getLogger(HttpTemplateDownloader.class.getName());
+public class HttpTemplateDownloader extends ManagedContextRunnable implements TemplateDownloader {
+ public static final Logger s_logger = Logger.getLogger(HttpTemplateDownloader.class.getName());
private static final MultiThreadedHttpConnectionManager s_httpClientManager = new MultiThreadedHttpConnectionManager();
- private static final int CHUNK_SIZE = 1024*1024; //1M
- private String downloadUrl;
- private String toFile;
- public TemplateDownloader.Status status= TemplateDownloader.Status.NOT_STARTED;
- public String errorString = " ";
- private long remoteSize = 0;
- public long downloadTime = 0;
- public long totalBytes;
- private final HttpClient client;
- private GetMethod request;
- private boolean resume = false;
- private DownloadCompleteCallback completionCallback;
- StorageLayer _storage;
- boolean inited = true;
+ private static final int CHUNK_SIZE = 1024*1024; //1M
+ private String downloadUrl;
+ private String toFile;
+ public TemplateDownloader.Status status= TemplateDownloader.Status.NOT_STARTED;
+ public String errorString = " ";
+ private long remoteSize = 0;
+ public long downloadTime = 0;
+ public long totalBytes;
+ private final HttpClient client;
+ private GetMethod request;
+ private boolean resume = false;
+ private DownloadCompleteCallback completionCallback;
+ StorageLayer _storage;
+ boolean inited = true;
- private String toDir;
- private long MAX_TEMPLATE_SIZE_IN_BYTES;
- private ResourceType resourceType = ResourceType.TEMPLATE;
- private final HttpMethodRetryHandler myretryhandler;
+ private String toDir;
+ private long MAX_TEMPLATE_SIZE_IN_BYTES;
+ private ResourceType resourceType = ResourceType.TEMPLATE;
+ private final HttpMethodRetryHandler myretryhandler;
- public HttpTemplateDownloader (StorageLayer storageLayer, String downloadUrl, String toDir, DownloadCompleteCallback callback, long maxTemplateSizeInBytes, String user, String password, Proxy proxy, ResourceType resourceType) {
- this._storage = storageLayer;
- this.downloadUrl = downloadUrl;
- this.setToDir(toDir);
- this.status = TemplateDownloader.Status.NOT_STARTED;
- this.resourceType = resourceType;
- this.MAX_TEMPLATE_SIZE_IN_BYTES = maxTemplateSizeInBytes;
+ public HttpTemplateDownloader (StorageLayer storageLayer, String downloadUrl, String toDir, DownloadCompleteCallback callback, long maxTemplateSizeInBytes, String user, String password, Proxy proxy, ResourceType resourceType) {
+ _storage = storageLayer;
+ this.downloadUrl = downloadUrl;
+ setToDir(toDir);
+ status = TemplateDownloader.Status.NOT_STARTED;
+ this.resourceType = resourceType;
+ MAX_TEMPLATE_SIZE_IN_BYTES = maxTemplateSizeInBytes;
- this.totalBytes = 0;
- this.client = new HttpClient(s_httpClientManager);
+ totalBytes = 0;
+ client = new HttpClient(s_httpClientManager);
- myretryhandler = new HttpMethodRetryHandler() {
- public boolean retryMethod(
- final HttpMethod method,
- final IOException exception,
- int executionCount) {
- if (executionCount >= 2) {
- // Do not retry if over max retry count
- return false;
- }
- if (exception instanceof NoHttpResponseException) {
- // Retry if the server dropped connection on us
- return true;
- }
- if (!method.isRequestSent()) {
- // Retry if the request has not been sent fully or
- // if it's OK to retry methods that have been sent
- return true;
- }
- // otherwise do not retry
- return false;
- }
- };
+ myretryhandler = new HttpMethodRetryHandler() {
+ @Override
+ public boolean retryMethod(
+ final HttpMethod method,
+ final IOException exception,
+ int executionCount) {
+ if (executionCount >= 2) {
+ // Do not retry if over max retry count
+ return false;
+ }
+ if (exception instanceof NoHttpResponseException) {
+ // Retry if the server dropped connection on us
+ return true;
+ }
+ if (!method.isRequestSent()) {
+ // Retry if the request has not been sent fully or
+ // if it's OK to retry methods that have been sent
+ return true;
+ }
+ // otherwise do not retry
+ return false;
+ }
+ };
- try {
- this.request = new GetMethod(downloadUrl);
- this.request.getParams().setParameter(HttpMethodParams.RETRY_HANDLER, myretryhandler);
- this.completionCallback = callback;
- //this.request.setFollowRedirects(false);
+ try {
+ request = new GetMethod(downloadUrl);
+ request.getParams().setParameter(HttpMethodParams.RETRY_HANDLER, myretryhandler);
+ completionCallback = callback;
+ //this.request.setFollowRedirects(false);
- File f = File.createTempFile("dnld", "tmp_", new File(toDir));
+ File f = File.createTempFile("dnld", "tmp_", new File(toDir));
- if (_storage != null) {
- _storage.setWorldReadableAndWriteable(f);
- }
+ if (_storage != null) {
+ _storage.setWorldReadableAndWriteable(f);
+ }
- toFile = f.getAbsolutePath();
- Pair hostAndPort = UriUtils.validateUrl(downloadUrl);
+ toFile = f.getAbsolutePath();
+ Pair hostAndPort = UriUtils.validateUrl(downloadUrl);
- if (proxy != null) {
- client.getHostConfiguration().setProxy(proxy.getHost(), proxy.getPort());
- if (proxy.getUserName() != null) {
- Credentials proxyCreds = new UsernamePasswordCredentials(proxy.getUserName(), proxy.getPassword());
- client.getState().setProxyCredentials(AuthScope.ANY, proxyCreds);
- }
- }
- if ((user != null) && (password != null)) {
- client.getParams().setAuthenticationPreemptive(true);
- Credentials defaultcreds = new UsernamePasswordCredentials(user, password);
- client.getState().setCredentials(new AuthScope(hostAndPort.first(), hostAndPort.second(), AuthScope.ANY_REALM), defaultcreds);
- s_logger.info("Added username=" + user + ", password=" + password + "for host " + hostAndPort.first() + ":" + hostAndPort.second());
- } else {
- s_logger.info("No credentials configured for host=" + hostAndPort.first() + ":" + hostAndPort.second());
- }
- } catch (IllegalArgumentException iae) {
- errorString = iae.getMessage();
- status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
- inited = false;
- } catch (Exception ex){
- errorString = "Unable to start download -- check url? ";
- status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
- s_logger.warn("Exception in constructor -- " + ex.toString());
- } catch (Throwable th) {
- s_logger.warn("throwable caught ", th);
- }
- }
+ if (proxy != null) {
+ client.getHostConfiguration().setProxy(proxy.getHost(), proxy.getPort());
+ if (proxy.getUserName() != null) {
+ Credentials proxyCreds = new UsernamePasswordCredentials(proxy.getUserName(), proxy.getPassword());
+ client.getState().setProxyCredentials(AuthScope.ANY, proxyCreds);
+ }
+ }
+ if ((user != null) && (password != null)) {
+ client.getParams().setAuthenticationPreemptive(true);
+ Credentials defaultcreds = new UsernamePasswordCredentials(user, password);
+ client.getState().setCredentials(new AuthScope(hostAndPort.first(), hostAndPort.second(), AuthScope.ANY_REALM), defaultcreds);
+                s_logger.info("Added username=" + user + " for host " + hostAndPort.first() + ":" + hostAndPort.second());
+ } else {
+ s_logger.info("No credentials configured for host=" + hostAndPort.first() + ":" + hostAndPort.second());
+ }
+ } catch (IllegalArgumentException iae) {
+ errorString = iae.getMessage();
+ status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
+ inited = false;
+ } catch (Exception ex){
+ errorString = "Unable to start download -- check url? ";
+ status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
+ s_logger.warn("Exception in constructor -- " + ex.toString());
+ } catch (Throwable th) {
+ s_logger.warn("throwable caught ", th);
+ }
+ }
- @Override
- public long download(boolean resume, DownloadCompleteCallback callback) {
- switch (status) {
- case ABORTED:
- case UNRECOVERABLE_ERROR:
- case DOWNLOAD_FINISHED:
- return 0;
- default:
+ @Override
+ public long download(boolean resume, DownloadCompleteCallback callback) {
+ switch (status) {
+ case ABORTED:
+ case UNRECOVERABLE_ERROR:
+ case DOWNLOAD_FINISHED:
+ return 0;
+ default:
- }
+ }
int bytes=0;
- File file = new File(toFile);
- try {
+ File file = new File(toFile);
+ try {
- long localFileSize = 0;
- if (file.exists() && resume) {
- localFileSize = file.length();
- s_logger.info("Resuming download to file (current size)=" + localFileSize);
- }
+ long localFileSize = 0;
+ if (file.exists() && resume) {
+ localFileSize = file.length();
+ s_logger.info("Resuming download to file (current size)=" + localFileSize);
+ }
Date start = new Date();
- int responseCode=0;
+ int responseCode=0;
- if (localFileSize > 0 ) {
- // require partial content support for resume
- request.addRequestHeader("Range", "bytes=" + localFileSize + "-");
- if (client.executeMethod(request) != HttpStatus.SC_PARTIAL_CONTENT) {
- errorString = "HTTP Server does not support partial get";
- status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
- return 0;
- }
- } else if ((responseCode = client.executeMethod(request)) != HttpStatus.SC_OK) {
- status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
- errorString = " HTTP Server returned " + responseCode + " (expected 200 OK) ";
+ if (localFileSize > 0 ) {
+ // require partial content support for resume
+ request.addRequestHeader("Range", "bytes=" + localFileSize + "-");
+ if (client.executeMethod(request) != HttpStatus.SC_PARTIAL_CONTENT) {
+ errorString = "HTTP Server does not support partial get";
+ status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
+ return 0;
+ }
+ } else if ((responseCode = client.executeMethod(request)) != HttpStatus.SC_OK) {
+ status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
+ errorString = " HTTP Server returned " + responseCode + " (expected 200 OK) ";
return 0; //FIXME: retry?
}
@@ -202,16 +199,16 @@ public class HttpTemplateDownloader implements TemplateDownloader {
boolean chunked = false;
long remoteSize2 = 0;
if (contentLengthHeader == null) {
- Header chunkedHeader = request.getResponseHeader("Transfer-Encoding");
- if (chunkedHeader == null || !"chunked".equalsIgnoreCase(chunkedHeader.getValue())) {
- status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
- errorString=" Failed to receive length of download ";
- return 0; //FIXME: what status do we put here? Do we retry?
- } else if ("chunked".equalsIgnoreCase(chunkedHeader.getValue())){
- chunked = true;
- }
+ Header chunkedHeader = request.getResponseHeader("Transfer-Encoding");
+ if (chunkedHeader == null || !"chunked".equalsIgnoreCase(chunkedHeader.getValue())) {
+ status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
+ errorString=" Failed to receive length of download ";
+ return 0; //FIXME: what status do we put here? Do we retry?
+ } else if ("chunked".equalsIgnoreCase(chunkedHeader.getValue())){
+ chunked = true;
+ }
} else {
- remoteSize2 = Long.parseLong(contentLengthHeader.getValue());
+ remoteSize2 = Long.parseLong(contentLengthHeader.getValue());
if ( remoteSize2 == 0 ) {
status = TemplateDownloader.Status.DOWNLOAD_FINISHED;
String downloaded = "(download complete remote=" + remoteSize + "bytes)";
@@ -222,22 +219,22 @@ public class HttpTemplateDownloader implements TemplateDownloader {
}
if (remoteSize == 0) {
- remoteSize = remoteSize2;
+ remoteSize = remoteSize2;
}
if (remoteSize > MAX_TEMPLATE_SIZE_IN_BYTES) {
- s_logger.info("Remote size is too large: " + remoteSize + " , max=" + MAX_TEMPLATE_SIZE_IN_BYTES);
- status = Status.UNRECOVERABLE_ERROR;
- errorString = "Download file size is too large";
- return 0;
+ s_logger.info("Remote size is too large: " + remoteSize + " , max=" + MAX_TEMPLATE_SIZE_IN_BYTES);
+ status = Status.UNRECOVERABLE_ERROR;
+ errorString = "Download file size is too large";
+ return 0;
}
if (remoteSize == 0) {
- remoteSize = MAX_TEMPLATE_SIZE_IN_BYTES;
+ remoteSize = MAX_TEMPLATE_SIZE_IN_BYTES;
}
- InputStream in = !chunked?new BufferedInputStream(request.getResponseBodyAsStream())
- : new ChunkedInputStream(request.getResponseBodyAsStream());
+ InputStream in = !chunked ? new BufferedInputStream(request.getResponseBodyAsStream()) : new ChunkedInputStream(
+ request.getResponseBodyAsStream());
RandomAccessFile out = new RandomAccessFile(file, "rwd");
out.seek(localFileSize);
@@ -249,187 +246,193 @@ public class HttpTemplateDownloader implements TemplateDownloader {
boolean done=false;
status = TemplateDownloader.Status.IN_PROGRESS;
while (!done && status != Status.ABORTED && offset <= remoteSize) {
- if ( (bytes = in.read(block, 0, CHUNK_SIZE)) > -1) {
- out.write(block, 0, bytes);
- offset +=bytes;
- out.seek(offset);
- totalBytes += bytes;
- } else {
- done = true;
- }
+ if ( (bytes = in.read(block, 0, CHUNK_SIZE)) > -1) {
+ out.write(block, 0, bytes);
+ offset +=bytes;
+ out.seek(offset);
+ totalBytes += bytes;
+ } else {
+ done = true;
+ }
}
Date finish = new Date();
String downloaded = "(incomplete download)";
if (totalBytes >= remoteSize) {
- status = TemplateDownloader.Status.DOWNLOAD_FINISHED;
- downloaded = "(download complete remote=" + remoteSize + "bytes)";
+ status = TemplateDownloader.Status.DOWNLOAD_FINISHED;
+ downloaded = "(download complete remote=" + remoteSize + "bytes)";
}
errorString = "Downloaded " + totalBytes + " bytes " + downloaded;
downloadTime += finish.getTime() - start.getTime();
+ in.close();
out.close();
return totalBytes;
- }catch (HttpException hte) {
- status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
- errorString = hte.getMessage();
- } catch (IOException ioe) {
- status = TemplateDownloader.Status.UNRECOVERABLE_ERROR; //probably a file write error?
- errorString = ioe.getMessage();
- } finally {
- if (status == Status.UNRECOVERABLE_ERROR && file.exists() && !file.isDirectory()) {
- file.delete();
- }
- request.releaseConnection();
- if (callback != null) {
- callback.downloadComplete(status);
+ }catch (HttpException hte) {
+ status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
+ errorString = hte.getMessage();
+ } catch (IOException ioe) {
+ status = TemplateDownloader.Status.UNRECOVERABLE_ERROR; //probably a file write error?
+ errorString = ioe.getMessage();
+ } finally {
+ if (status == Status.UNRECOVERABLE_ERROR && file.exists() && !file.isDirectory()) {
+ file.delete();
}
- }
- return 0;
- }
+ request.releaseConnection();
+ if (callback != null) {
+ callback.downloadComplete(status);
+ }
+ }
+ return 0;
+ }
- public String getDownloadUrl() {
- return downloadUrl;
- }
+ public String getDownloadUrl() {
+ return downloadUrl;
+ }
- public String getToFile() {
+ public String getToFile() {
File file = new File(toFile);
- return file.getAbsolutePath();
- }
+ return file.getAbsolutePath();
+ }
- public TemplateDownloader.Status getStatus() {
- return status;
- }
+ @Override
+ public TemplateDownloader.Status getStatus() {
+ return status;
+ }
- public long getDownloadTime() {
- return downloadTime;
- }
+ @Override
+ public long getDownloadTime() {
+ return downloadTime;
+ }
- public long getDownloadedBytes() {
- return totalBytes;
- }
+ @Override
+ public long getDownloadedBytes() {
+ return totalBytes;
+ }
- @Override
- @SuppressWarnings("fallthrough")
- public boolean stopDownload() {
- switch (getStatus()) {
- case IN_PROGRESS:
- if (request != null) {
- request.abort();
- }
- status = TemplateDownloader.Status.ABORTED;
- return true;
- case UNKNOWN:
- case NOT_STARTED:
- case RECOVERABLE_ERROR:
- case UNRECOVERABLE_ERROR:
- case ABORTED:
- status = TemplateDownloader.Status.ABORTED;
- case DOWNLOAD_FINISHED:
- File f = new File(toFile);
- if (f.exists()) {
- f.delete();
- }
- return true;
+ @Override
+ @SuppressWarnings("fallthrough")
+ public boolean stopDownload() {
+ switch (getStatus()) {
+ case IN_PROGRESS:
+ if (request != null) {
+ request.abort();
+ }
+ status = TemplateDownloader.Status.ABORTED;
+ return true;
+ case UNKNOWN:
+ case NOT_STARTED:
+ case RECOVERABLE_ERROR:
+ case UNRECOVERABLE_ERROR:
+ case ABORTED:
+ status = TemplateDownloader.Status.ABORTED;
+ case DOWNLOAD_FINISHED:
+ File f = new File(toFile);
+ if (f.exists()) {
+ f.delete();
+ }
+ return true;
- default:
- return true;
- }
- }
+ default:
+ return true;
+ }
+ }
- @Override
- public int getDownloadPercent() {
- if (remoteSize == 0) {
- return 0;
- }
+ @Override
+ public int getDownloadPercent() {
+ if (remoteSize == 0) {
+ return 0;
+ }
- return (int)(100.0*totalBytes/remoteSize);
- }
+ return (int)(100.0*totalBytes/remoteSize);
+ }
- @Override
- public void run() {
- try {
- download(resume, completionCallback);
- } catch (Throwable t) {
- s_logger.warn("Caught exception during download "+ t.getMessage(), t);
- errorString = "Failed to install: " + t.getMessage();
- status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
- }
+ @Override
+ protected void runInContext() {
+ try {
+ download(resume, completionCallback);
+ } catch (Throwable t) {
+            s_logger.warn("Caught exception during download " + t.getMessage(), t);
+ errorString = "Failed to install: " + t.getMessage();
+ status = TemplateDownloader.Status.UNRECOVERABLE_ERROR;
+ }
- }
+ }
- @Override
- public void setStatus(TemplateDownloader.Status status) {
- this.status = status;
- }
+ @Override
+ public void setStatus(TemplateDownloader.Status status) {
+ this.status = status;
+ }
- public boolean isResume() {
- return resume;
- }
+ public boolean isResume() {
+ return resume;
+ }
- @Override
- public String getDownloadError() {
- return errorString;
- }
+ @Override
+ public String getDownloadError() {
+ return errorString;
+ }
- @Override
- public String getDownloadLocalPath() {
- return getToFile();
- }
+ @Override
+ public String getDownloadLocalPath() {
+ return getToFile();
+ }
- public void setResume(boolean resume) {
- this.resume = resume;
- }
+ @Override
+ public void setResume(boolean resume) {
+ this.resume = resume;
+ }
- public void setToDir(String toDir) {
- this.toDir = toDir;
- }
+ public void setToDir(String toDir) {
+ this.toDir = toDir;
+ }
- public String getToDir() {
- return toDir;
- }
+ public String getToDir() {
+ return toDir;
+ }
- public long getMaxTemplateSizeInBytes() {
- return this.MAX_TEMPLATE_SIZE_IN_BYTES;
- }
+ @Override
+ public long getMaxTemplateSizeInBytes() {
+ return MAX_TEMPLATE_SIZE_IN_BYTES;
+ }
- public static void main(String[] args) {
- String url ="http:// dev.mysql.com/get/Downloads/MySQL-5.0/mysql-noinstall-5.0.77-win32.zip/from/http://mirror.services.wisc.edu/mysql/";
- try {
- URI uri = new java.net.URI(url);
- } catch (URISyntaxException e) {
- // TODO Auto-generated catch block
- e.printStackTrace();
- }
- TemplateDownloader td = new HttpTemplateDownloader(null, url,"/tmp/mysql", null, TemplateDownloader.DEFAULT_MAX_TEMPLATE_SIZE_IN_BYTES, null, null, null, null);
- long bytes = td.download(true, null);
- if (bytes > 0) {
- System.out.println("Downloaded (" + bytes + " bytes)" + " in " + td.getDownloadTime()/1000 + " secs");
- } else {
- System.out.println("Failed download");
- }
+ public static void main(String[] args) {
+        String url = "http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-noinstall-5.0.77-win32.zip/from/http://mirror.services.wisc.edu/mysql/";
+ try {
+ new java.net.URI(url);
+ } catch (URISyntaxException e) {
+ // TODO Auto-generated catch block
+ e.printStackTrace();
+ }
+ TemplateDownloader td = new HttpTemplateDownloader(null, url,"/tmp/mysql", null, TemplateDownloader.DEFAULT_MAX_TEMPLATE_SIZE_IN_BYTES, null, null, null, null);
+ long bytes = td.download(true, null);
+ if (bytes > 0) {
+ System.out.println("Downloaded (" + bytes + " bytes)" + " in " + td.getDownloadTime()/1000 + " secs");
+ } else {
+ System.out.println("Failed download");
+ }
- }
+ }
- @Override
- public void setDownloadError(String error) {
- errorString = error;
- }
+ @Override
+ public void setDownloadError(String error) {
+ errorString = error;
+ }
- @Override
- public boolean isInited() {
- return inited;
- }
+ @Override
+ public boolean isInited() {
+ return inited;
+ }
- public ResourceType getResourceType() {
- return resourceType;
- }
+ public ResourceType getResourceType() {
+ return resourceType;
+ }
}
diff --git a/core/src/com/cloud/storage/template/LocalTemplateDownloader.java b/core/src/com/cloud/storage/template/LocalTemplateDownloader.java
index c8927a117d3..581524bb2f1 100644
--- a/core/src/com/cloud/storage/template/LocalTemplateDownloader.java
+++ b/core/src/com/cloud/storage/template/LocalTemplateDownloader.java
@@ -34,7 +34,7 @@ import com.cloud.storage.StorageLayer;
public class LocalTemplateDownloader extends TemplateDownloaderBase implements TemplateDownloader {
public static final Logger s_logger = Logger.getLogger(LocalTemplateDownloader.class);
-
+
public LocalTemplateDownloader(StorageLayer storageLayer, String downloadUrl, String toDir, long maxTemplateSizeInBytes, DownloadCompleteCallback callback) {
super(storageLayer, downloadUrl, toDir, maxTemplateSizeInBytes, callback);
String filename = downloadUrl.substring(downloadUrl.lastIndexOf(File.separator));
@@ -44,14 +44,14 @@ public class LocalTemplateDownloader extends TemplateDownloaderBase implements T
@Override
public long download(boolean resume, DownloadCompleteCallback callback) {
if (_status == Status.ABORTED ||
- _status == Status.UNRECOVERABLE_ERROR ||
- _status == Status.DOWNLOAD_FINISHED) {
+ _status == Status.UNRECOVERABLE_ERROR ||
+ _status == Status.DOWNLOAD_FINISHED) {
return 0;
}
_start = System.currentTimeMillis();
_resume = resume;
-
+
File src;
try {
src = new File(new URI(_downloadUrl));
@@ -61,18 +61,20 @@ public class LocalTemplateDownloader extends TemplateDownloaderBase implements T
return 0;
}
File dst = new File(_toFile);
-
+
FileChannel fic = null;
FileChannel foc = null;
-
+ FileInputStream fis = null;
+ FileOutputStream fos = null;
+
try {
- if (_storage != null) {
- dst.createNewFile();
- _storage.setWorldReadableAndWriteable(dst);
- }
-
+ if (_storage != null) {
+ dst.createNewFile();
+ _storage.setWorldReadableAndWriteable(dst);
+ }
+
ByteBuffer buffer = ByteBuffer.allocate(1024 * 512);
- FileInputStream fis;
+
try {
fis = new FileInputStream(src);
} catch (FileNotFoundException e) {
@@ -81,7 +83,6 @@ public class LocalTemplateDownloader extends TemplateDownloaderBase implements T
return -1;
}
fic = fis.getChannel();
- FileOutputStream fos;
try {
fos = new FileOutputStream(dst);
} catch (FileNotFoundException e) {
@@ -89,11 +90,11 @@ public class LocalTemplateDownloader extends TemplateDownloaderBase implements T
return -1;
}
foc = fos.getChannel();
-
+
_remoteSize = src.length();
- this._totalBytes = 0;
+ _totalBytes = 0;
_status = TemplateDownloader.Status.IN_PROGRESS;
-
+
try {
while (_status != Status.ABORTED && fic.read(buffer) != -1) {
buffer.flip();
@@ -104,13 +105,13 @@ public class LocalTemplateDownloader extends TemplateDownloaderBase implements T
} catch (IOException e) {
s_logger.warn("Unable to download", e);
}
-
+
String downloaded = "(incomplete download)";
if (_totalBytes == _remoteSize) {
_status = TemplateDownloader.Status.DOWNLOAD_FINISHED;
downloaded = "(download complete)";
}
-
+
_errorString = "Downloaded " + _remoteSize + " bytes " + downloaded;
_downloadTime += System.currentTimeMillis() - _start;
return _totalBytes;
@@ -125,14 +126,28 @@ public class LocalTemplateDownloader extends TemplateDownloaderBase implements T
} catch (IOException e) {
}
}
-
+
if (foc != null) {
try {
foc.close();
} catch (IOException e) {
}
}
-
+
+ if (fis != null) {
+ try {
+ fis.close();
+ } catch (IOException e) {
+ }
+ }
+
+ if (fos != null) {
+ try {
+ fos.close();
+ } catch (IOException e) {
+ }
+ }
+
if (_status == Status.UNRECOVERABLE_ERROR && dst.exists()) {
dst.delete();
}
@@ -141,7 +156,7 @@ public class LocalTemplateDownloader extends TemplateDownloaderBase implements T
}
}
}
-
+
public static void main(String[] args) {
String url ="file:///home/ahuang/Download/E3921_P5N7A-VM_manual.zip";
TemplateDownloader td = new LocalTemplateDownloader(null, url,"/tmp/mysql", TemplateDownloader.DEFAULT_MAX_TEMPLATE_SIZE_IN_BYTES, null);
diff --git a/core/src/com/cloud/storage/template/RawImageProcessor.java b/core/src/com/cloud/storage/template/RawImageProcessor.java
index 0e4c8c1822a..f516d75cfa5 100644
--- a/core/src/com/cloud/storage/template/RawImageProcessor.java
+++ b/core/src/com/cloud/storage/template/RawImageProcessor.java
@@ -25,9 +25,8 @@ import javax.naming.ConfigurationException;
import org.apache.log4j.Logger;
import com.cloud.exception.InternalErrorException;
-import com.cloud.storage.StorageLayer;
import com.cloud.storage.Storage.ImageFormat;
-import com.cloud.storage.template.Processor.FormatInfo;
+import com.cloud.storage.StorageLayer;
import com.cloud.utils.component.AdapterBase;
@Local(value=Processor.class)
diff --git a/core/src/com/cloud/storage/template/S3TemplateDownloader.java b/core/src/com/cloud/storage/template/S3TemplateDownloader.java
index 340e0dba868..9dacbd31282 100644
--- a/core/src/com/cloud/storage/template/S3TemplateDownloader.java
+++ b/core/src/com/cloud/storage/template/S3TemplateDownloader.java
@@ -24,7 +24,6 @@ import java.io.IOException;
import java.io.InputStream;
import java.util.Date;
-import org.apache.cloudstack.storage.command.DownloadCommand.ResourceType;
import org.apache.commons.httpclient.ChunkedInputStream;
import org.apache.commons.httpclient.Credentials;
import org.apache.commons.httpclient.Header;
@@ -43,15 +42,15 @@ import org.apache.commons.lang.StringUtils;
import org.apache.log4j.Logger;
import com.amazonaws.AmazonClientException;
-import com.amazonaws.auth.AWSCredentials;
-import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.ProgressEvent;
import com.amazonaws.services.s3.model.ProgressListener;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.StorageClass;
-import com.amazonaws.services.s3.transfer.TransferManager;
-import com.amazonaws.services.s3.transfer.Upload;
+
+import org.apache.cloudstack.managed.context.ManagedContextRunnable;
+import org.apache.cloudstack.storage.command.DownloadCommand.ResourceType;
+
import com.cloud.agent.api.storage.Proxy;
import com.cloud.agent.api.to.S3TO;
import com.cloud.utils.Pair;
@@ -62,7 +61,7 @@ import com.cloud.utils.UriUtils;
* Download a template file using HTTP
*
*/
-public class S3TemplateDownloader implements TemplateDownloader {
+public class S3TemplateDownloader extends ManagedContextRunnable implements TemplateDownloader {
public static final Logger s_logger = Logger.getLogger(S3TemplateDownloader.class.getName());
private static final MultiThreadedHttpConnectionManager s_httpClientManager = new MultiThreadedHttpConnectionManager();
@@ -89,15 +88,15 @@ public class S3TemplateDownloader implements TemplateDownloader {
public S3TemplateDownloader(S3TO storageLayer, String downloadUrl, String installPath,
DownloadCompleteCallback callback, long maxTemplateSizeInBytes, String user, String password, Proxy proxy,
ResourceType resourceType) {
- this.s3 = storageLayer;
+ s3 = storageLayer;
this.downloadUrl = downloadUrl;
this.installPath = installPath;
- this.status = TemplateDownloader.Status.NOT_STARTED;
+ status = TemplateDownloader.Status.NOT_STARTED;
this.resourceType = resourceType;
- this.maxTemplateSizeInByte = maxTemplateSizeInBytes;
+ maxTemplateSizeInByte = maxTemplateSizeInBytes;
- this.totalBytes = 0;
- this.client = new HttpClient(s_httpClientManager);
+ totalBytes = 0;
+ client = new HttpClient(s_httpClientManager);
myretryhandler = new HttpMethodRetryHandler() {
@Override
@@ -121,12 +120,12 @@ public class S3TemplateDownloader implements TemplateDownloader {
};
try {
- this.request = new GetMethod(downloadUrl);
- this.request.getParams().setParameter(HttpMethodParams.RETRY_HANDLER, myretryhandler);
- this.completionCallback = callback;
+ request = new GetMethod(downloadUrl);
+ request.getParams().setParameter(HttpMethodParams.RETRY_HANDLER, myretryhandler);
+ completionCallback = callback;
Pair hostAndPort = UriUtils.validateUrl(downloadUrl);
- this.fileName = StringUtils.substringAfterLast(downloadUrl, "/");
+ fileName = StringUtils.substringAfterLast(downloadUrl, "/");
if (proxy != null) {
client.getHostConfiguration().setProxy(proxy.getHost(), proxy.getPort());
@@ -226,9 +225,6 @@ public class S3TemplateDownloader implements TemplateDownloader {
// compute s3 key
s3Key = join(asList(installPath, fileName), S3Utils.SEPARATOR);
- // multi-part upload using S3 api to handle > 5G input stream
- TransferManager tm = new TransferManager(S3Utils.acquireClient(s3));
-
// download using S3 API
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(remoteSize);
@@ -261,11 +257,20 @@ public class S3TemplateDownloader implements TemplateDownloader {
}
});
- // TransferManager processes all transfers asynchronously,
- // so this call will return immediately.
- Upload upload = tm.upload(putObjectRequest);
+
- upload.waitForCompletion();
+            if (!s3.getSingleUpload(remoteSize)) {
+                // use TransferManager to do multipart upload
+                S3Utils.mputObject(s3, putObjectRequest);
+            } else {
+                // single part upload, with 5GB limit in Amazon
+                S3Utils.putObject(s3, putObjectRequest);
+                while (status != TemplateDownloader.Status.DOWNLOAD_FINISHED &&
+                    status != TemplateDownloader.Status.UNRECOVERABLE_ERROR &&
+                    status != TemplateDownloader.Status.ABORTED) {
+                    try { Thread.sleep(100); } catch (InterruptedException e) { break; } // sleep between polls to avoid busy-spinning
+                }
+            }
// finished or aborted
Date finish = new Date();
@@ -361,7 +366,7 @@ public class S3TemplateDownloader implements TemplateDownloader {
}
@Override
- public void run() {
+ protected void runInContext() {
try {
download(resume, completionCallback);
} catch (Throwable t) {
@@ -388,7 +393,7 @@ public class S3TemplateDownloader implements TemplateDownloader {
@Override
public String getDownloadLocalPath() {
- return this.s3Key;
+ return s3Key;
}
@Override
@@ -398,7 +403,7 @@ public class S3TemplateDownloader implements TemplateDownloader {
@Override
public long getMaxTemplateSizeInBytes() {
- return this.maxTemplateSizeInByte;
+ return maxTemplateSizeInByte;
}
@Override
diff --git a/core/src/com/cloud/storage/template/ScpTemplateDownloader.java b/core/src/com/cloud/storage/template/ScpTemplateDownloader.java
index 724392f812f..fbc756f16b1 100644
--- a/core/src/com/cloud/storage/template/ScpTemplateDownloader.java
+++ b/core/src/com/cloud/storage/template/ScpTemplateDownloader.java
@@ -22,9 +22,10 @@ import java.net.URISyntaxException;
import org.apache.log4j.Logger;
+import com.trilead.ssh2.SCPClient;
+
import com.cloud.storage.StorageLayer;
import com.cloud.utils.exception.CloudRuntimeException;
-import com.trilead.ssh2.SCPClient;
public class ScpTemplateDownloader extends TemplateDownloaderBase implements TemplateDownloader {
private static final Logger s_logger = Logger.getLogger(ScpTemplateDownloader.class);
@@ -83,7 +84,6 @@ public class ScpTemplateDownloader extends TemplateDownloaderBase implements Tem
if (port == -1) {
port = 22;
}
- long length = 0;
File file = new File(_toFile);
com.trilead.ssh2.Connection sshConnection = new com.trilead.ssh2.Connection(uri.getHost(), port);
diff --git a/core/src/com/cloud/storage/template/TemplateDownloaderBase.java b/core/src/com/cloud/storage/template/TemplateDownloaderBase.java
index bdbdd457be1..7cbd4efe02d 100644
--- a/core/src/com/cloud/storage/template/TemplateDownloaderBase.java
+++ b/core/src/com/cloud/storage/template/TemplateDownloaderBase.java
@@ -18,11 +18,12 @@ package com.cloud.storage.template;
import java.io.File;
+import org.apache.cloudstack.managed.context.ManagedContextRunnable;
import org.apache.log4j.Logger;
import com.cloud.storage.StorageLayer;
-public abstract class TemplateDownloaderBase implements TemplateDownloader {
+public abstract class TemplateDownloaderBase extends ManagedContextRunnable implements TemplateDownloader {
private static final Logger s_logger = Logger.getLogger(TemplateDownloaderBase.class);
protected String _downloadUrl;
@@ -123,7 +124,7 @@ public abstract class TemplateDownloaderBase implements TemplateDownloader {
}
@Override
- public void run() {
+ protected void runInContext() {
try {
download(_resume, _callback);
} catch (Exception e) {
diff --git a/core/src/com/cloud/storage/template/TemplateUploader.java b/core/src/com/cloud/storage/template/TemplateUploader.java
index 8e0373a5d15..32e877144e6 100755
--- a/core/src/com/cloud/storage/template/TemplateUploader.java
+++ b/core/src/com/cloud/storage/template/TemplateUploader.java
@@ -16,9 +16,6 @@
// under the License.
package com.cloud.storage.template;
-import com.cloud.storage.template.TemplateUploader.UploadCompleteCallback;
-import com.cloud.storage.template.TemplateUploader.Status;
-
public interface TemplateUploader extends Runnable{
/**
diff --git a/core/src/org/apache/cloudstack/storage/command/AttachCommand.java b/core/src/org/apache/cloudstack/storage/command/AttachCommand.java
index 44bce910d02..7e47ba4e317 100644
--- a/core/src/org/apache/cloudstack/storage/command/AttachCommand.java
+++ b/core/src/org/apache/cloudstack/storage/command/AttachCommand.java
@@ -24,14 +24,6 @@ import com.cloud.agent.api.to.DiskTO;
public final class AttachCommand extends Command implements StorageSubSystemCommand {
private DiskTO disk;
private String vmName;
- private String _storageHost;
- private int _storagePort;
- private boolean _managed;
- private String _iScsiName;
- private String _chapInitiatorUsername;
- private String _chapInitiatorPassword;
- private String _chapTargetUsername;
- private String _chapTargetPassword;
public AttachCommand(DiskTO disk, String vmName) {
super();
@@ -59,68 +51,4 @@ public final class AttachCommand extends Command implements StorageSubSystemComm
public void setVmName(String vmName) {
this.vmName = vmName;
}
-
- public void setStorageHost(String storageHost) {
- _storageHost = storageHost;
- }
-
- public String getStorageHost() {
- return _storageHost;
- }
-
- public void setStoragePort(int storagePort) {
- _storagePort = storagePort;
- }
-
- public int getStoragePort() {
- return _storagePort;
- }
-
- public void setManaged(boolean managed) {
- _managed = managed;
- }
-
- public boolean isManaged() {
- return _managed;
- }
-
- public void set_iScsiName(String iScsiName) {
- this._iScsiName = iScsiName;
- }
-
- public String get_iScsiName() {
- return _iScsiName;
- }
-
- public void setChapInitiatorUsername(String chapInitiatorUsername) {
- _chapInitiatorUsername = chapInitiatorUsername;
- }
-
- public String getChapInitiatorUsername() {
- return _chapInitiatorUsername;
- }
-
- public void setChapInitiatorPassword(String chapInitiatorPassword) {
- _chapInitiatorPassword = chapInitiatorPassword;
- }
-
- public String getChapInitiatorPassword() {
- return _chapInitiatorPassword;
- }
-
- public void setChapTargetUsername(String chapTargetUsername) {
- _chapTargetUsername = chapTargetUsername;
- }
-
- public String getChapTargetUsername() {
- return _chapTargetUsername;
- }
-
- public void setChapTargetPassword(String chapTargetPassword) {
- _chapTargetPassword = chapTargetPassword;
- }
-
- public String getChapTargetPassword() {
- return _chapTargetPassword;
- }
}
diff --git a/core/src/org/apache/cloudstack/storage/command/CopyCommand.java b/core/src/org/apache/cloudstack/storage/command/CopyCommand.java
index 629fafe545f..e9ec0b35f11 100644
--- a/core/src/org/apache/cloudstack/storage/command/CopyCommand.java
+++ b/core/src/org/apache/cloudstack/storage/command/CopyCommand.java
@@ -63,4 +63,8 @@ public final class CopyCommand extends Command implements StorageSubSystemComman
this.cacheTO = cacheTO;
}
+ public int getWaitInMillSeconds() {
+ return this.getWait() * 1000;
+ }
+
}
diff --git a/core/src/org/apache/cloudstack/storage/command/DownloadCommand.java b/core/src/org/apache/cloudstack/storage/command/DownloadCommand.java
index 84dd59db9f6..9cc3e497c19 100644
--- a/core/src/org/apache/cloudstack/storage/command/DownloadCommand.java
+++ b/core/src/org/apache/cloudstack/storage/command/DownloadCommand.java
@@ -26,7 +26,6 @@ import com.cloud.agent.api.storage.Proxy;
import com.cloud.agent.api.to.DataStoreTO;
import com.cloud.agent.api.to.NfsTO;
import com.cloud.storage.Storage.ImageFormat;
-import com.cloud.storage.Volume;
public class DownloadCommand extends AbstractDownloadCommand implements InternalIdentity {
@@ -53,29 +52,29 @@ public class DownloadCommand extends AbstractDownloadCommand implements Internal
public DownloadCommand(DownloadCommand that) {
super(that);
- this.hvm = that.hvm;
- this.checksum = that.checksum;
- this.id = that.id;
- this.description = that.description;
- this.auth = that.getAuth();
- this.setSecUrl(that.getSecUrl());
- this.maxDownloadSizeInBytes = that.getMaxDownloadSizeInBytes();
- this.resourceType = that.resourceType;
- this.installPath = that.installPath;
- this._store = that._store;
+ hvm = that.hvm;
+ checksum = that.checksum;
+ id = that.id;
+ description = that.description;
+ auth = that.getAuth();
+ setSecUrl(that.getSecUrl());
+ maxDownloadSizeInBytes = that.getMaxDownloadSizeInBytes();
+ resourceType = that.resourceType;
+ installPath = that.installPath;
+ _store = that._store;
}
public DownloadCommand(TemplateObjectTO template, Long maxDownloadSizeInBytes) {
super(template.getName(), template.getOrigUrl(), template.getFormat(), template.getAccountId());
- this._store = template.getDataStore();
- this.installPath = template.getPath();
- this.hvm = template.isRequiresHvm();
- this.checksum = template.getChecksum();
- this.id = template.getId();
- this.description = template.getDescription();
+ _store = template.getDataStore();
+ installPath = template.getPath();
+ hvm = template.isRequiresHvm();
+ checksum = template.getChecksum();
+ id = template.getId();
+ description = template.getDescription();
if (_store instanceof NfsTO) {
- this.setSecUrl(((NfsTO) _store).getUrl());
+ setSecUrl(((NfsTO) _store).getUrl());
}
this.maxDownloadSizeInBytes = maxDownloadSizeInBytes;
}
@@ -87,12 +86,12 @@ public class DownloadCommand extends AbstractDownloadCommand implements Internal
public DownloadCommand(VolumeObjectTO volume, Long maxDownloadSizeInBytes, String checkSum, String url, ImageFormat format) {
super(volume.getName(), url, format, volume.getAccountId());
- this.checksum = checkSum;
- this.id = volume.getVolumeId();
- this.installPath = volume.getPath();
- this._store = volume.getDataStore();
+ checksum = checkSum;
+ id = volume.getVolumeId();
+ installPath = volume.getPath();
+ _store = volume.getDataStore();
this.maxDownloadSizeInBytes = maxDownloadSizeInBytes;
- this.resourceType = ResourceType.VOLUME;
+ resourceType = ResourceType.VOLUME;
}
@Override
public long getId() {
@@ -184,6 +183,6 @@ public class DownloadCommand extends AbstractDownloadCommand implements Internal
}
public DataStoreTO getCacheStore() {
- return this.cacheStore;
+ return cacheStore;
}
}
diff --git a/core/src/org/apache/cloudstack/storage/command/ForgetObjectCmd.java b/core/src/org/apache/cloudstack/storage/command/ForgetObjectCmd.java
new file mode 100644
index 00000000000..58fb7802019
--- /dev/null
+++ b/core/src/org/apache/cloudstack/storage/command/ForgetObjectCmd.java
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.cloudstack.storage.command;
+
+import com.cloud.agent.api.Command;
+import com.cloud.agent.api.to.DataTO;
+
+public class ForgetObjectCmd extends Command implements StorageSubSystemCommand {
+ private DataTO dataTO;
+ public ForgetObjectCmd(DataTO data) {
+ this.dataTO = data;
+ }
+
+ public DataTO getDataTO() {
+ return this.dataTO;
+ }
+ @Override
+ public boolean executeInSequence() {
+ return false;
+ }
+}
diff --git a/core/src/org/apache/cloudstack/storage/command/IntroduceObjectAnswer.java b/core/src/org/apache/cloudstack/storage/command/IntroduceObjectAnswer.java
new file mode 100644
index 00000000000..03c74b8aaa0
--- /dev/null
+++ b/core/src/org/apache/cloudstack/storage/command/IntroduceObjectAnswer.java
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.cloudstack.storage.command;
+
+import com.cloud.agent.api.Answer;
+import com.cloud.agent.api.to.DataTO;
+
+public class IntroduceObjectAnswer extends Answer {
+ private DataTO dataTO;
+ public IntroduceObjectAnswer(DataTO dataTO) {
+ this.dataTO = dataTO;
+ }
+
+ public DataTO getDataTO() {
+ return this.dataTO;
+ }
+}
diff --git a/core/src/org/apache/cloudstack/storage/command/IntroduceObjectCmd.java b/core/src/org/apache/cloudstack/storage/command/IntroduceObjectCmd.java
new file mode 100644
index 00000000000..1aabed2d279
--- /dev/null
+++ b/core/src/org/apache/cloudstack/storage/command/IntroduceObjectCmd.java
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.cloudstack.storage.command;
+
+import com.cloud.agent.api.Command;
+import com.cloud.agent.api.to.DataTO;
+
+public class IntroduceObjectCmd extends Command implements StorageSubSystemCommand {
+ private DataTO dataTO;
+ public IntroduceObjectCmd(DataTO dataTO) {
+ this.dataTO = dataTO;
+ }
+
+ public DataTO getDataTO() {
+ return this.dataTO;
+ }
+
+ @Override
+ public boolean executeInSequence() {
+ return false;
+ }
+}
diff --git a/core/src/org/apache/cloudstack/storage/to/ImageStoreTO.java b/core/src/org/apache/cloudstack/storage/to/ImageStoreTO.java
index 0037ea57242..ec6c24092d3 100644
--- a/core/src/org/apache/cloudstack/storage/to/ImageStoreTO.java
+++ b/core/src/org/apache/cloudstack/storage/to/ImageStoreTO.java
@@ -26,6 +26,7 @@ public class ImageStoreTO implements DataStoreTO {
private String uri;
private String providerName;
private DataStoreRole role;
+ private String uuid;
public ImageStoreTO() {
@@ -76,4 +77,13 @@ public class ImageStoreTO implements DataStoreTO {
return new StringBuilder("ImageStoreTO[type=").append(type).append("|provider=").append(providerName)
.append("|role=").append(role).append("|uri=").append(uri).append("]").toString();
}
+
+ @Override
+ public String getUuid() {
+ return uuid;
+ }
+
+ public void setUuid(String uuid) {
+ this.uuid = uuid;
+ }
}
diff --git a/core/src/org/apache/cloudstack/storage/to/PrimaryDataStoreTO.java b/core/src/org/apache/cloudstack/storage/to/PrimaryDataStoreTO.java
index 5e870df3716..91d78a49350 100644
--- a/core/src/org/apache/cloudstack/storage/to/PrimaryDataStoreTO.java
+++ b/core/src/org/apache/cloudstack/storage/to/PrimaryDataStoreTO.java
@@ -46,6 +46,7 @@ public class PrimaryDataStoreTO implements DataStoreTO {
return this.id;
}
+ @Override
public String getUuid() {
return this.uuid;
}
diff --git a/core/src/org/apache/cloudstack/storage/to/VolumeObjectTO.java b/core/src/org/apache/cloudstack/storage/to/VolumeObjectTO.java
index 5685fad59c4..46659a3a2d0 100644
--- a/core/src/org/apache/cloudstack/storage/to/VolumeObjectTO.java
+++ b/core/src/org/apache/cloudstack/storage/to/VolumeObjectTO.java
@@ -38,6 +38,8 @@ public class VolumeObjectTO implements DataTO {
private String chainInfo;
private Storage.ImageFormat format;
private long id;
+
+ private Long deviceId;
private Long bytesReadRate;
private Long bytesWriteRate;
private Long iopsReadRate;
@@ -70,6 +72,7 @@ public class VolumeObjectTO implements DataTO {
this.iopsReadRate = volume.getIopsReadRate();
this.iopsWriteRate = volume.getIopsWriteRate();
this.hypervisorType = volume.getHypervisorType();
+ setDeviceId(volume.getDeviceId());
}
public String getUuid() {
@@ -220,4 +223,13 @@ public class VolumeObjectTO implements DataTO {
return iopsWriteRate;
}
+ public Long getDeviceId() {
+ return deviceId;
+ }
+
+ public void setDeviceId(Long deviceId) {
+ this.deviceId = deviceId;
+ }
+
+
}
diff --git a/core/test/com/cloud/network/HAProxyConfiguratorTest.java b/core/test/com/cloud/network/HAProxyConfiguratorTest.java
new file mode 100644
index 00000000000..d854231f985
--- /dev/null
+++ b/core/test/com/cloud/network/HAProxyConfiguratorTest.java
@@ -0,0 +1,97 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+package com.cloud.network;
+
+import static org.junit.Assert.assertTrue;
+
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import com.cloud.agent.api.routing.LoadBalancerConfigCommand;
+import com.cloud.agent.api.to.LoadBalancerTO;
+
+/**
+ * @author dhoogland
+ *
+ */
+public class HAProxyConfiguratorTest {
+
+ /**
+ * @throws java.lang.Exception
+ */
+ @BeforeClass
+ public static void setUpBeforeClass() throws Exception {
+ }
+
+ /**
+ * @throws java.lang.Exception
+ */
+ @AfterClass
+ public static void tearDownAfterClass() throws Exception {
+ }
+
+ /**
+ * @throws java.lang.Exception
+ */
+ @Before
+ public void setUp() throws Exception {
+ }
+
+ /**
+ * @throws java.lang.Exception
+ */
+ @After
+ public void tearDown() throws Exception {
+ }
+
+ /**
+ * Test method for {@link com.cloud.network.HAProxyConfigurator#generateConfiguration(com.cloud.agent.api.routing.LoadBalancerConfigCommand)}.
+ */
+ @Test
+ public void testGenerateConfigurationLoadBalancerConfigCommand() {
+ LoadBalancerTO lb = new LoadBalancerTO("1", "10.2.0.1", 80, "http", "bla", false, false, false, null);
+ LoadBalancerTO[] lba = new LoadBalancerTO[1];
+ lba[0] = lb;
+ HAProxyConfigurator hpg = new HAProxyConfigurator();
+ LoadBalancerConfigCommand cmd = new LoadBalancerConfigCommand(lba, "10.0.0.1", "10.1.0.1", "10.1.1.1", null, 1L, "12", false);
+ String result = genConfig(hpg, cmd);
+ assertTrue("keepalive disabled should result in 'mode http' in the resulting haproxy config", result.contains("mode http"));
+
+ cmd = new LoadBalancerConfigCommand(lba, "10.0.0.1", "10.1.0.1", "10.1.1.1", null, 1L, "4", true);
+ result = genConfig(hpg, cmd);
+ assertTrue("keepalive enabled should not result in 'mode http' in the resulting haproxy config", !result.contains("mode http"));
+ // TODO
+ // create lb command
+ // setup tests for
+ // maxconn (test for maxpipes as well)
+ // httpmode
+ }
+
+ private String genConfig(HAProxyConfigurator hpg, LoadBalancerConfigCommand cmd) {
+ String[] sa = hpg.generateConfiguration(cmd);
+ StringBuilder sb = new StringBuilder();
+ for(String s: sa) {
+ sb.append(s).append('\n');
+ }
+ return sb.toString();
+ }
+
+}
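The `genConfig` helper in the test above flattens the `String[]` returned by `generateConfiguration` into one newline-joined string so that substring assertions such as `contains("mode http")` can be made against the whole haproxy config. A minimal, self-contained sketch of that pattern (the `lines` array is a hypothetical stand-in for real generator output; `ConfigJoinSketch` is not part of the patch):

```java
// Sketch of the join-then-assert pattern used by genConfig above.
// The "lines" array stands in for the String[] a config generator returns.
public class ConfigJoinSketch {
    static String join(String[] lines) {
        StringBuilder sb = new StringBuilder();
        for (String s : lines) {
            sb.append(s).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] lines = {"global", "defaults", "mode http"};
        String cfg = join(lines);
        // Substring assertions work on the flattened config text.
        System.out.println(cfg.contains("mode http")); // prints "true"
    }
}
```

Flattening once and asserting on substrings keeps each test case to a single readable assertion instead of looping over the generated lines.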
diff --git a/core/test/org/apache/cloudstack/api/agent/test/AttachVolumeAnswerTest.java b/core/test/org/apache/cloudstack/api/agent/test/AttachVolumeAnswerTest.java
index 0b2bb1f4f3f..5262d3b78a6 100644
--- a/core/test/org/apache/cloudstack/api/agent/test/AttachVolumeAnswerTest.java
+++ b/core/test/org/apache/cloudstack/api/agent/test/AttachVolumeAnswerTest.java
@@ -27,7 +27,7 @@ import com.cloud.storage.Storage.StoragePoolType;
public class AttachVolumeAnswerTest {
AttachVolumeCommand avc = new AttachVolumeCommand(true, false, "vmname",
- StoragePoolType.Filesystem, "vPath", "vName",
+ StoragePoolType.Filesystem, "vPath", "vName", 1073741824L,
123456789L, "chainInfo");
AttachVolumeAnswer ava1 = new AttachVolumeAnswer(avc);
String results = "";
diff --git a/core/test/org/apache/cloudstack/api/agent/test/AttachVolumeCommandTest.java b/core/test/org/apache/cloudstack/api/agent/test/AttachVolumeCommandTest.java
index 6f413c0268d..1c5caca5f5c 100644
--- a/core/test/org/apache/cloudstack/api/agent/test/AttachVolumeCommandTest.java
+++ b/core/test/org/apache/cloudstack/api/agent/test/AttachVolumeCommandTest.java
@@ -26,7 +26,7 @@ import com.cloud.storage.Storage.StoragePoolType;
public class AttachVolumeCommandTest {
AttachVolumeCommand avc = new AttachVolumeCommand(true, false, "vmname",
- StoragePoolType.Filesystem, "vPath", "vName",
+ StoragePoolType.Filesystem, "vPath", "vName", 1073741824L,
123456789L, "chainInfo");
@Test
diff --git a/core/test/org/apache/cloudstack/api/agent/test/BackupSnapshotCommandTest.java b/core/test/org/apache/cloudstack/api/agent/test/BackupSnapshotCommandTest.java
index 0fee8c64d87..a7a1fd2a3c7 100644
--- a/core/test/org/apache/cloudstack/api/agent/test/BackupSnapshotCommandTest.java
+++ b/core/test/org/apache/cloudstack/api/agent/test/BackupSnapshotCommandTest.java
@@ -27,7 +27,6 @@ import java.util.Date;
import org.junit.Test;
import com.cloud.agent.api.BackupSnapshotCommand;
-import com.cloud.agent.api.to.StorageFilerTO;
import com.cloud.agent.api.to.SwiftTO;
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.StoragePool;
diff --git a/debian/changelog b/debian/changelog
index dc9c65d2066..d6af31f69dc 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,4 +1,4 @@
-cloudstack (4.3.0) unstable; urgency=low
+cloudstack (4.3.0-snapshot) unstable; urgency=low
* Update the version to 4.3.0.snapshot
diff --git a/debian/cloudstack-agent.install b/debian/cloudstack-agent.install
index a3cc86964dd..d708514fd14 100644
--- a/debian/cloudstack-agent.install
+++ b/debian/cloudstack-agent.install
@@ -21,6 +21,7 @@
/etc/init.d/cloudstack-agent
/usr/bin/cloudstack-setup-agent
/usr/bin/cloudstack-ssh
+/usr/bin/cloudstack-agent-upgrade
/var/log/cloudstack/agent
/usr/share/cloudstack-agent/lib/*
/usr/share/cloudstack-agent/plugins
diff --git a/debian/cloudstack-agent.postinst b/debian/cloudstack-agent.postinst
index 499ae6a695a..9bad1380bf0 100644
--- a/debian/cloudstack-agent.postinst
+++ b/debian/cloudstack-agent.postinst
@@ -34,7 +34,15 @@ case "$1" in
fi
done
fi
+
+ # Run cloudstack-agent-upgrade to update the bridge name when upgrading from CloudStack 4.0.x (or earlier) to CloudStack 4.1 (or later)

+ /usr/bin/cloudstack-agent-upgrade
+ if [ ! -d "/etc/libvirt/hooks" ] ; then
+ mkdir /etc/libvirt/hooks
+ fi
+ cp -a /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
+ /etc/init.d/libvirt-bin restart
;;
esac
-exit 0
\ No newline at end of file
+exit 0
diff --git a/debian/cloudstack-management.install b/debian/cloudstack-management.install
index a1325cdb2b5..f06ab86dda1 100644
--- a/debian/cloudstack-management.install
+++ b/debian/cloudstack-management.install
@@ -21,8 +21,6 @@
/etc/cloudstack/management/logging.properties
/etc/cloudstack/management/commands.properties
/etc/cloudstack/management/ehcache.xml
-/etc/cloudstack/management/componentContext.xml
-/etc/cloudstack/management/applicationContext.xml
/etc/cloudstack/management/server-ssl.xml
/etc/cloudstack/management/server-nonssl.xml
/etc/cloudstack/management/server.xml
@@ -33,7 +31,6 @@
/etc/cloudstack/management/tomcat6.conf
/etc/cloudstack/management/web.xml
/etc/cloudstack/management/environment.properties
-/etc/cloudstack/management/nonossComponentContext.xml
/etc/cloudstack/management/log4j-cloud.xml
/etc/cloudstack/management/tomcat-users.xml
/etc/cloudstack/management/context.xml
diff --git a/debian/control b/debian/control
index e6d1ef088f2..c756dcd0d8e 100644
--- a/debian/control
+++ b/debian/control
@@ -22,7 +22,7 @@ Description: CloudStack server library
Package: cloudstack-agent
Architecture: all
-Depends: openjdk-6-jre | openjdk-7-jre, cloudstack-common (= ${source:Version}), lsb-base (>= 3.2), libcommons-daemon-java, libjna-java, openssh-client, libvirt0, sysvinit-utils, qemu-kvm, libvirt-bin, uuid-runtime, rsync, grep, iproute, perl-base, perl-modules, ebtables, vlan, wget, jsvc, ipset, python-libvirt
+Depends: openjdk-6-jre | openjdk-7-jre, cloudstack-common (= ${source:Version}), lsb-base (>= 3.2), libcommons-daemon-java, openssh-client, libvirt0, sysvinit-utils, qemu-kvm, libvirt-bin, uuid-runtime, rsync, grep, iproute, perl-base, perl-modules, ebtables, vlan, wget, jsvc, ipset, python-libvirt, ethtool, iptables
Conflicts: cloud-agent, cloud-agent-libs, cloud-agent-deps, cloud-agent-scripts
Description: CloudStack agent
The CloudStack agent is in charge of managing shared computing resources in
diff --git a/debian/rules b/debian/rules
index 5e3d58c4da3..4edf8930605 100755
--- a/debian/rules
+++ b/debian/rules
@@ -12,6 +12,7 @@
DEBVERS := $(shell dpkg-parsechangelog | sed -n -e 's/^Version: //p')
VERSION := $(shell echo '$(DEBVERS)' | sed -e 's/^[[:digit:]]*://' -e 's/[~-].*//')
+MVNADD := $(shell if echo '$(DEBVERS)' | grep -q snapshot; then echo -SNAPSHOT; fi )
PACKAGE = $(shell dh_listpackages|head -n 1|cut -d '-' -f 1)
SYSCONFDIR = "/etc"
DESTDIR = "debian/tmp"
@@ -65,12 +66,14 @@ install:
mkdir $(DESTDIR)/var/log/$(PACKAGE)/agent
mkdir $(DESTDIR)/usr/share/$(PACKAGE)-agent
mkdir $(DESTDIR)/usr/share/$(PACKAGE)-agent/plugins
- install -D agent/target/cloud-agent-$(VERSION)-SNAPSHOT.jar $(DESTDIR)/usr/share/$(PACKAGE)-agent/lib/$(PACKAGE)-agent.jar
- install -D plugins/hypervisors/kvm/target/cloud-plugin-hypervisor-kvm-$(VERSION)-SNAPSHOT.jar $(DESTDIR)/usr/share/$(PACKAGE)-agent/lib/
+ install -D agent/target/cloud-agent-$(VERSION)$(MVNADD).jar $(DESTDIR)/usr/share/$(PACKAGE)-agent/lib/$(PACKAGE)-agent.jar
+ install -D plugins/hypervisors/kvm/target/cloud-plugin-hypervisor-kvm-$(VERSION)$(MVNADD).jar $(DESTDIR)/usr/share/$(PACKAGE)-agent/lib/
install -D plugins/hypervisors/kvm/target/dependencies/* $(DESTDIR)/usr/share/$(PACKAGE)-agent/lib/
install -D packaging/debian/init/cloud-agent $(DESTDIR)/$(SYSCONFDIR)/init.d/$(PACKAGE)-agent
install -D agent/target/transformed/cloud-setup-agent $(DESTDIR)/usr/bin/cloudstack-setup-agent
install -D agent/target/transformed/cloud-ssh $(DESTDIR)/usr/bin/cloudstack-ssh
+ install -D agent/target/transformed/cloudstack-agent-upgrade $(DESTDIR)/usr/bin/cloudstack-agent-upgrade
+ install -D agent/target/transformed/libvirtqemuhook $(DESTDIR)/usr/share/$(PACKAGE)-agent/lib/
install -D agent/target/transformed/* $(DESTDIR)/$(SYSCONFDIR)/$(PACKAGE)/agent
# cloudstack-management
@@ -90,7 +93,7 @@ install:
mkdir $(DESTDIR)/var/lib/$(PACKAGE)/management
mkdir $(DESTDIR)/var/lib/$(PACKAGE)/mnt
cp -r client/target/utilities/scripts/db/* $(DESTDIR)/usr/share/$(PACKAGE)-management/setup/
- cp -r client/target/cloud-client-ui-$(VERSION)-SNAPSHOT/* $(DESTDIR)/usr/share/$(PACKAGE)-management/webapps/client/
+ cp -r client/target/cloud-client-ui-$(VERSION)$(MVNADD)/* $(DESTDIR)/usr/share/$(PACKAGE)-management/webapps/client/
cp server/target/conf/* $(DESTDIR)/$(SYSCONFDIR)/$(PACKAGE)/server/
cp client/target/conf/* $(DESTDIR)/$(SYSCONFDIR)/$(PACKAGE)/management/
@@ -130,7 +133,7 @@ install:
install -D client/target/utilities/bin/cloud-setup-management $(DESTDIR)/usr/bin/cloudstack-setup-management
install -D client/target/utilities/bin/cloud-setup-encryption $(DESTDIR)/usr/bin/cloudstack-setup-encryption
install -D client/target/utilities/bin/cloud-sysvmadm $(DESTDIR)/usr/bin/cloudstack-sysvmadm
- install -D services/console-proxy/server/dist/systemvm.iso $(DESTDIR)/usr/share/$(PACKAGE)-common/vms/systemvm.iso
+ install -D systemvm/dist/systemvm.iso $(DESTDIR)/usr/share/$(PACKAGE)-common/vms/systemvm.iso
# We need jasypt for cloud-install-sys-tmplt, so this is a nasty hack to get it into the right place
install -D agent/target/dependencies/jasypt-1.9.0.jar $(DESTDIR)/usr/share/$(PACKAGE)-common/lib
@@ -143,7 +146,7 @@ install:
mkdir $(DESTDIR)/var/log/$(PACKAGE)/usage
mkdir $(DESTDIR)/usr/share/$(PACKAGE)-usage
mkdir $(DESTDIR)/usr/share/$(PACKAGE)-usage/plugins
- install -D usage/target/cloud-usage-$(VERSION)-SNAPSHOT.jar $(DESTDIR)/usr/share/$(PACKAGE)-usage/lib/$(PACKAGE)-usage.jar
+ install -D usage/target/cloud-usage-$(VERSION)$(MVNADD).jar $(DESTDIR)/usr/share/$(PACKAGE)-usage/lib/$(PACKAGE)-usage.jar
install -D usage/target/dependencies/* $(DESTDIR)/usr/share/$(PACKAGE)-usage/lib/
cp usage/target/transformed/db.properties $(DESTDIR)/$(SYSCONFDIR)/$(PACKAGE)/usage/
cp usage/target/transformed/log4j-cloud_usage.xml $(DESTDIR)/$(SYSCONFDIR)/$(PACKAGE)/usage/log4j-cloud.xml
@@ -156,7 +159,7 @@ install:
mkdir -p $(DESTDIR)/usr/share/$(PACKAGE)-bridge/webapps/awsapi
mkdir $(DESTDIR)/usr/share/$(PACKAGE)-bridge/setup
ln -s /usr/share/$(PACKAGE)-bridge/webapps/awsapi $(DESTDIR)/usr/share/$(PACKAGE)-management/webapps7080/awsapi
- cp -r awsapi/target/cloud-awsapi-$(VERSION)-SNAPSHOT/* $(DESTDIR)/usr/share/$(PACKAGE)-bridge/webapps/awsapi
+ cp -r awsapi/target/cloud-awsapi-$(VERSION)$(MVNADD)/* $(DESTDIR)/usr/share/$(PACKAGE)-bridge/webapps/awsapi
install -D awsapi-setup/setup/cloud-setup-bridge $(DESTDIR)/usr/bin/cloudstack-setup-bridge
install -D awsapi-setup/setup/cloudstack-aws-api-register $(DESTDIR)/usr/bin/cloudstack-aws-api-register
cp -r awsapi-setup/db/mysql/* $(DESTDIR)/usr/share/$(PACKAGE)-bridge/setup
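The `debian/rules` change above derives the Maven artifact suffix from the Debian changelog version: the epoch and Debian revision are stripped with `sed`, and `-SNAPSHOT` is appended only when the changelog version contains `snapshot` (so `4.3.0-snapshot` maps to artifacts named `...-4.3.0-SNAPSHOT`). A hedged Java sketch of that mapping — method and class names are illustrative only, not part of the build:

```java
// Mirrors the DEBVERS -> VERSION / MVNADD logic in debian/rules:
// strip any epoch ("1:") and anything from the first '~' or '-' on,
// then add -SNAPSHOT when the changelog version mentions "snapshot".
public class VersionSuffixSketch {
    static String mavenSuffix(String debVersion) {
        return debVersion.contains("snapshot") ? "-SNAPSHOT" : "";
    }

    public static void main(String[] args) {
        String debvers = "4.3.0-snapshot";
        String version = debvers.replaceAll("^[0-9]*:", "")
                                .replaceAll("[~-].*", "");
        // prints "4.3.0-SNAPSHOT"
        System.out.println(version + mavenSuffix(debvers));
    }
}
```

This is why a release changelog entry of plain `4.3.0` makes the rules file look for `cloud-agent-4.3.0.jar`, while the snapshot entry added in this patch keeps the install targets pointed at the `-SNAPSHOT` jars Maven actually builds.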
diff --git a/deps/install-non-oss.sh b/deps/install-non-oss.sh
index 0bf8e48d70c..940bd32ae59 100755
--- a/deps/install-non-oss.sh
+++ b/deps/install-non-oss.sh
@@ -16,7 +16,12 @@
# specific language governing permissions and limitations
# under the License.
+# From https://devcentral.f5.com
+# Version: unknown
mvn install:install-file -Dfile=cloud-iControl.jar -DgroupId=com.cloud.com.f5 -DartifactId=icontrol -Dversion=1.0 -Dpackaging=jar
+
+# From Citrix
+# Version: unknown
mvn install:install-file -Dfile=cloud-netscaler-sdx.jar -DgroupId=com.cloud.com.citrix -DartifactId=netscaler-sdx -Dversion=1.0 -Dpackaging=jar
# From http://support.netapp.com/ (not available online, contact your support representative)
diff --git a/developer/pom.xml b/developer/pom.xml
index be14494b047..0eb18bf2d3f 100644
--- a/developer/pom.xml
+++ b/developer/pom.xml
@@ -74,7 +74,6 @@
maven-antrun-plugin
- 1.7generate-resources
diff --git a/docs/README.txt b/docs/README.txt
deleted file mode 100644
index e327fb9101c..00000000000
--- a/docs/README.txt
+++ /dev/null
@@ -1,325 +0,0 @@
-Author: Jessica Tomechak
-
-Updated: August 8, 2012
-
-
--------------------------------------------
-
-WHAT'S IN THIS REPOSITORY: WORK IN PROGRESS
-
--------------------------------------------
-
-This repository contains the source files for CloudStack documentation. The files are currently incomplete as we are in the process of converting documentation from an outdated file format into XML files for this repo.
-The complete documentation can be seen at docs.cloudstack.org.
-
-
-
-----------------------------------
-
-DOCUMENTATION SUBDIRECTORIES
-
-----------------------------------
-
-United States English language source files are in the en-US subdirectory.
-Additional language subdirectories can be added.
-
-
-Each file in a language subdirectory contains one chunk of information that may be termed a section, module, or topic. The files are written in Docbook XML, using the Docbook version and tag supported by the Publican open-source documentation tool.
-
-
-
-----------------------------------
-
-VALID XML TAGS
-
-----------------------------------
-
-Certain tags are disallowed by Publican. Please consult their documentation for more details.
-http://jfearn.fedorapeople.org/en-US/Publican/2.7/html/Users_Guide/
-
-Your best bet is to copy an existing XML file and fill in your own content between the tags.
-
-At the bottom of this README, there is a fill-in-the-blanks XML template that you can go from. It shows the commonly used tags and explains a bit about how to use them.
-
-
-----------------------------------
-
-SECTIONS, CHAPTERS, AND BOOK FILES
-
-----------------------------------
-
-The files for every topic and audience are in a single directory. The content is not divided into separate subdirectories for each book, or separate repositories for each book. Therefore, the content can be flexibly and easily re-used. In most cases, a file contains a single section that can be assembled with other sections to build any desired set of information. These files contain ... tags.
-
-
-Some of the XML files contain only a series of include tags to pull in content from other files. Such an "include file" is either a major section, a chapter in a book, or the master book file. A chapter contains ... tags.
-
-
-The master book file contains ... tags. This file is referred to in the Publican configuration file, and is used as the controlling file when building the book.
-
-
-Document names are derived from the docname setting in the appropriate .cfg file.
-This should not have CloudStack in the name (which is redundant because of
-the CloudStack brand that the documentation is built with. The docname variable
-sets the name in the doc site table of contents. This name also needs to exist
-as .xml and .ent in the en-US directory. Examples of appropriate docnames:
-Admin_Guide
-API_Developers_Guide
-Installation_Guide
-
-
-
-
-A Publican book file must also have certain other tags that are expected by
-Publican when it builds the project. Copy an existing master book file to
-get these tags.
-
-
-----------------------------------
-
-CONFIG FILES
-
-----------------------------------
-
-For each book file, there must be a corresponding publican.cfg (or
-.cfg) file in order to build the book with Publican. The
-docname: attribute in the config file matches the name of the master book file;
-for example, docname: cloudstack corresponds to the master book file
-cloudstack.xml.
-
-
-The .cfg files reside in the main directory, docs. To build a different book,
-just use the Publican command line flag --config=.cfg. (We also
-need per-book entities, Book_Info, Author_Info, and other Publican files.
-The technique for pulling these in is TBD.)
-
-
-----------------------------------
-
-TO BUILD A BOOK
-
-----------------------------------
-
-We will set up an automatic Publican job that generates new output whenever we
-check in changes to this repository. You can also build a book locally as
-follows.
-
-
-First, install Publican, and get a local copy of the book source files.
-
-
-Put the desired publican.cfg in the docs directory. Go to the command line, cd
-to that directory, and run the publican build command. Specify what output
-format(s) and what language(s) you want to build. Always start with a test
-run. For example:
-
-
-publican build --formats test --langs en-US
-
-
-...followed by this command if the test is successful:
-
-
-publican build --formats html,pdf --langs en-US
-
-
-Output will be found in the tmp subdirectory of the docs directory.
-
-
-
-----------------------------------
-
-LOCALIZATION
-
-----------------------------------
-
-Localized versions of the documentation files can be stored in appropriately
-named subdirectories parallel to en-US. The language code names to use for
-these directories are listed in Publican documentation,
-http://jfearn.fedorapeople.org/en-US/Publican/2.7/html/Users_Guide/appe-Users_Guide-Language_codes.html.
-For example, Japanese XML files would be stored in the docs/ja-JP directory.
-
-Localization currently happens using Transifex and you can find the strings
-to be translated at this location:
-https://www.transifex.com/projects/p/ACS_DOCS/
-
-In preparation for l10n, authors and docs folks must take not of a number of
-things.
-All .xml files must contain a translatable string. tags are not enough.
-All new .xml files must have a corresponding entry in docs/.tx/config
-Filenames should be less than 50 characters long.
-
-To generate new POT files and upload source do the following:
-publican update_pot --config=./publican-all.cfg
-tx push -s
-
-To receive translated files from publican, run the following command:
-tx pull
-
-
-----------------------------------
-
-CONTRIBUTING
-
-----------------------------------
-
-Contributors can create new section, chapter, book, publican.cfg, or localized
-.xml files at any time. Submit them following the same patch approval procedure
-that is used for contributing to CloudStack code. More information for
-contributors is available at
-https://cwiki.apache.org/confluence/display/CLOUDSTACK/Documentation+Team.
-
-----------------------------------
-
-TAGS FOR A SECTION
-----------------------------------
-
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
-
-
- Text of the section title
- Here's the text of a paragraph in this section.
- Always use &PRODUCT; rather than typing CloudStack.
- Indent with 4 spaces, not with tab characters.
- To hyperlink to a URL outside this document: Display text of the link here
- To hyperlink to another section in this document:
- The publication tools will automatically insert the display text of the link for you.
- Use this for all tips and asides. Don't use other tags such as tip.
- Our publication tool (publican) prefers the note tag. The tool will
- automatically insert the text NOTE: for you, so please don't type it.
- Use this for anything that is vital to avoid runtime errors. Don't use
- other tags such as caution. Our publication tool (publican) prefers the warning tag. The tool will automatically insert the text WARNING: for you, so please don't type it.
- Here's how to do a bulleted list:
-
- Bulleted list item text.
-
- Here's how to do a numbered list. These are used for step by step instructions
- or to describe a sequence of events in time. For everything else, use a bulleted list.
-
- Text of the step
- You might also want a sub-list within one of the list items. Like this:
-
- Inner list item text.
-
-
-
- Here's how to insert an image. Put the graphic file in images/, a subdirectory of the directory where this XML file is.
- Refer to it using this tag. The tag is admittedly complex, but it's the one we need to use with publican:
-
-
-
-
- YOUR_FILENAME_HERE.png: Alt text describing this image, such as
- “structure of a zone.” Required for accessibility.
-
- A section can contain sub-sections. Please make each sub-section a separate file to enable reuse.
- Then include the sub-section like this:
-
-
-
-
-
-----------------------------------
-
-TAGS FOR A CHAPTER
-----------------------------------
-
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
-
-
- Text of the chapter title
-
-
-
-
-
-
-----------------------------------
-
-TAGS FOR A BOOK
-----------------------------------
-
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
-
-
-
-
-
-
-
-----------------------------------
-
-BASIC RULES FOR INCLUDE STATEMENTS
-----------------------------------
-
-A book file must include chapter files.
-A chapter file must include section files.
-A section file can include other section files, but it doesn't have to.
diff --git a/docs/en-US/Admin_Guide.ent b/docs/en-US/Admin_Guide.ent
deleted file mode 100644
index abb18851bcf..00000000000
--- a/docs/en-US/Admin_Guide.ent
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-
-
-
-
diff --git a/docs/en-US/Admin_Guide.xml b/docs/en-US/Admin_Guide.xml
deleted file mode 100644
index d3b9706f84e..00000000000
--- a/docs/en-US/Admin_Guide.xml
+++ /dev/null
@@ -1,74 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
- &PRODUCT; Administrator's Guide
- Apache CloudStack
- 4.2.0
- 1
-
-
-
- Administration Guide for &PRODUCT;.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/Author_Group.xml b/docs/en-US/Author_Group.xml
deleted file mode 100644
index ba9e651f876..00000000000
--- a/docs/en-US/Author_Group.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
-
- Apache
- CloudStack
-
-
-
diff --git a/docs/en-US/Book_Info.xml b/docs/en-US/Book_Info.xml
deleted file mode 100644
index 327668dfc9d..00000000000
--- a/docs/en-US/Book_Info.xml
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
- &PRODUCT; Guide
- Revised August 9, 2012 10:48 pm Pacific
- Apache CloudStack
- 4.2.0
- 1
-
-
-
- Complete technical documentation of &PRODUCT;.
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/Book_Info_Release_Notes_4.xml b/docs/en-US/Book_Info_Release_Notes_4.xml
deleted file mode 100644
index e1c270f3e14..00000000000
--- a/docs/en-US/Book_Info_Release_Notes_4.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Version 4.2.0 Release Notes
- Apache &PRODUCT;
-
-
-
- Release notes for the Apache &PRODUCT; 4.2.0 release.
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/CloudStack_GSoC_Guide.ent b/docs/en-US/CloudStack_GSoC_Guide.ent
deleted file mode 100644
index 17415873334..00000000000
--- a/docs/en-US/CloudStack_GSoC_Guide.ent
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-
-
-
-
diff --git a/docs/en-US/CloudStack_GSoC_Guide.xml b/docs/en-US/CloudStack_GSoC_Guide.xml
deleted file mode 100644
index 2f537d40cef..00000000000
--- a/docs/en-US/CloudStack_GSoC_Guide.xml
+++ /dev/null
@@ -1,52 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
-
-
-
- &PRODUCT; Guide for the 2013 Google Summer of Code
- Apache CloudStack
- 4.3.0
- 1
-
-
-
- Guide for 2013 Google Summer of Code Projects.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/CloudStack_Nicira_NVP_Guide.ent b/docs/en-US/CloudStack_Nicira_NVP_Guide.ent
deleted file mode 100644
index abb18851bcf..00000000000
--- a/docs/en-US/CloudStack_Nicira_NVP_Guide.ent
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-
-
-
-
diff --git a/docs/en-US/CloudStack_Nicira_NVP_Guide.xml b/docs/en-US/CloudStack_Nicira_NVP_Guide.xml
deleted file mode 100644
index 5431fc1cb43..00000000000
--- a/docs/en-US/CloudStack_Nicira_NVP_Guide.xml
+++ /dev/null
@@ -1,55 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
-
-
-
- &PRODUCT; Plugin Guide for the Nicira NVP Plugin
- Apache CloudStack
- 4.2.0
- 1
-
-
-
- Plugin Guide for the Nicira NVP Plugin.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/Common_Content/Legal_Notice.xml b/docs/en-US/Common_Content/Legal_Notice.xml
deleted file mode 100644
index 2a2e3a7b3e7..00000000000
--- a/docs/en-US/Common_Content/Legal_Notice.xml
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
-
-
- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
-
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
-
-
-
diff --git a/docs/en-US/Common_Content/feedback.xml b/docs/en-US/Common_Content/feedback.xml
deleted file mode 100644
index 4b06c9f3898..00000000000
--- a/docs/en-US/Common_Content/feedback.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Feedback
- to-do
-
diff --git a/docs/en-US/Developers_Guide.ent b/docs/en-US/Developers_Guide.ent
deleted file mode 100644
index 47a2b6757f8..00000000000
--- a/docs/en-US/Developers_Guide.ent
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
-
-
-
\ No newline at end of file
diff --git a/docs/en-US/Developers_Guide.xml b/docs/en-US/Developers_Guide.xml
deleted file mode 100644
index 7452e29ecf2..00000000000
--- a/docs/en-US/Developers_Guide.xml
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
- &PRODUCT; Developer's Guide
- Apache CloudStack
- 4.2.0
-
-
-
-
- This guide shows how to develop &PRODUCT;, use the API for operation and integration, access the usage data and use &PRODUCT; specific tools to ease development, testing and integration.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/Installation_Guide.ent b/docs/en-US/Installation_Guide.ent
deleted file mode 100644
index abb18851bcf..00000000000
--- a/docs/en-US/Installation_Guide.ent
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-
-
-
-
diff --git a/docs/en-US/Installation_Guide.xml b/docs/en-US/Installation_Guide.xml
deleted file mode 100644
index ea97f25c99c..00000000000
--- a/docs/en-US/Installation_Guide.xml
+++ /dev/null
@@ -1,62 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- &PRODUCT; Installation Guide
- Apache CloudStack
- 4.2.0
- 1
-
-
- Installation Guide for &PRODUCT;.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/LDAP-for-user-authentication.xml b/docs/en-US/LDAP-for-user-authentication.xml
deleted file mode 100644
index 772d1c5e3e2..00000000000
--- a/docs/en-US/LDAP-for-user-authentication.xml
+++ /dev/null
@@ -1,51 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Using an LDAP Server for User Authentication
- You can use an external LDAP server such as Microsoft Active Directory or OpenLDAP to authenticate &PRODUCT; end-users.
- In order to do this you must:
-
- Set your LDAP configuration within &PRODUCT;
- Create &PRODUCT; accounts for LDAP users
-
- To set up LDAP authentication in &PRODUCT;, open the global settings page and search for LDAP
- Set ldap.basedn to match your sever's base directory.
- Review the defaults for the following, ensure that they match your schema.
-
- ldap.email.attribute
- ldap.firstname.attribute
- ldap.lastname.attribute
- ldap.username.attribute
- ldap.user.object
-
- Optionally you can set the following:
-
- If you do not want to use anonymous binding you can set ldap.bind.principle and ldap.bind.password as credentials for your LDAP server that will grant &PRODUCT; permission to perform a search on the LDAP server.
- For SSL support set ldap.truststore to a path on the file system where your trusted store is located. Along with this set ldap.truststore.password as the password that unlocks the truststore.
- If you wish to filter down the user set that is granted access to &PRODUCT; via the LDAP attribute memberof you can do so using ldap.search.group.principle.
-
- Finally, you can add your LDAP server. To do so select LDAP Configuration from the views section within global settings. Click on "Configure LDAP" and fill in your server's hostname and port.
-
-
-
diff --git a/docs/en-US/MidoNet_Plugin_Guide.ent b/docs/en-US/MidoNet_Plugin_Guide.ent
deleted file mode 100644
index f31c40748c2..00000000000
--- a/docs/en-US/MidoNet_Plugin_Guide.ent
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-
-
-
-
diff --git a/docs/en-US/MidoNet_Plugin_Guide.xml b/docs/en-US/MidoNet_Plugin_Guide.xml
deleted file mode 100644
index 86182e60b71..00000000000
--- a/docs/en-US/MidoNet_Plugin_Guide.xml
+++ /dev/null
@@ -1,52 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
-
-
-
- &PRODUCT; Plugin Guide for the MidoNet Plugin
- Apache CloudStack
- 4.2.0
- 1
-
-
-
- Plugin Guide for the MidoNet Plugin.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/Preface.xml b/docs/en-US/Preface.xml
deleted file mode 100644
index e046410234d..00000000000
--- a/docs/en-US/Preface.xml
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Preface
-
-
-
-
-
diff --git a/docs/en-US/Release_Notes.ent b/docs/en-US/Release_Notes.ent
deleted file mode 100644
index 7858ad5f2e0..00000000000
--- a/docs/en-US/Release_Notes.ent
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-
-
-
-
diff --git a/docs/en-US/Release_Notes.xml b/docs/en-US/Release_Notes.xml
deleted file mode 100644
index d1def441685..00000000000
--- a/docs/en-US/Release_Notes.xml
+++ /dev/null
@@ -1,4582 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
- Welcome to &PRODUCT; 4.2
- Welcome to the 4.2.0 release of &PRODUCT;, the second major release from the Apache
- CloudStack project since its graduation from the Apache Incubator. &PRODUCT; 4.2 includes more
- than 70 new features and enhancements. The focus of the release is on three major
- areas:
-
-
- Improved support for both legacy-style and cloud-style workloads
-
-
- New third-party plug-in architecture
-
-
- Networking enhancements
-
-
- In addition to these major new areas of functionality, &PRODUCT; 4.2 provides many
- additional enhancements in a variety of product areas. All of the new features are summarized
- later in this Release Note.
- This document contains information specific to this release of &PRODUCT;, including
- upgrade instructions from prior releases, new features added to &PRODUCT;, API changes, and
- issues fixed in the release. For installation instructions, please see the Installation Guide. For usage and administration instructions, please see the
- &PRODUCT; Administrator's Guide. Developers and users who wish to work with the API
- will find instruction in the &PRODUCT; API Developer's Guide
- If you find any errors or problems in this guide, please see .
- We hope you enjoy working with &PRODUCT;!
-
-
- What's New in 4.2.0
- &PRODUCT; 4.2 includes the following new features.
-
- Features to Support Heterogeneous Workloads
- The following new features help &PRODUCT; 4.2 better support both legacy and cloud-era
- style zones.
-
- Regions
- To increase reliability of the cloud, you can optionally group resources into
- geographic regions. A region is the largest available organizational unit within a cloud
- deployment. A region is made up of several availability zones, where each zone is
- equivalent to a datacenter. Each region is controlled by its own cluster of Management
- Servers, running in one of the zones. The zones in a region are typically located in close
- geographical proximity. Regions are a useful technique for providing fault tolerance and
- disaster recovery.
- By grouping zones into regions, the cloud can achieve higher availability and
- scalability. User accounts can span regions, so that users can deploy VMs in multiple,
- widely-dispersed regions. Even if one of the regions becomes unavailable, the services are
- still available to the end-user through VMs deployed in another region. And by grouping
- communities of zones under their own nearby Management Servers, the latency of
- communications within the cloud is reduced compared to managing widely-dispersed zones
- from a single central Management Server.
- Usage records can also be consolidated and tracked at the region level, creating
- reports or invoices for each geographic region.
-
-
- Object Storage Plugin Architecture
- Artifacts such as templates, ISOs and snapshots are kept in storage which &PRODUCT;
- refers to as secondary storage. To improve scalability and performance, as when a number
- of hosts access secondary storage concurrently, object storage can be used for secondary
- storage. Object storage can also provide built-in high availability capability. When using
- object storage, access to secondary storage data can be made available across multiple
- zones in a region. This is a huge benefit, as it is no longer necessary to copy templates,
- snapshots etc. across zones as would be needed in an NFS-only environment.
- Object storage is provided through third-party software such as Amazon Simple Storage
- Service (S3) or any other object storage that supports the S3 interface. These third party
- object storages can be integrated with &PRODUCT; by writing plugin software that uses the
- object storage plugin capability introduced in &PRODUCT; 4.2. Several new pluggable
- service interfaces are available so that different storage providers can develop
- vendor-specific plugins based on the well-defined contracts that can be seamlessly managed
- by &PRODUCT;.
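As a rough illustration of the plugin idea described above, an object storage provider can be modeled as an interface that concrete backends implement. The class and method names below are purely illustrative assumptions, not the actual &PRODUCT; 4.2 plugin contracts:

```python
from abc import ABC, abstractmethod

class ObjectStoreProvider(ABC):
    """Hypothetical shape of an object storage plugin contract."""

    @abstractmethod
    def put_object(self, bucket: str, key: str, data: bytes) -> None:
        """Store an artifact (template, ISO, snapshot) under bucket/key."""

    @abstractmethod
    def get_object(self, bucket: str, key: str) -> bytes:
        """Retrieve a previously stored artifact."""

class InMemoryObjectStore(ObjectStoreProvider):
    """Toy backend used only to show how a provider plugs in."""

    def __init__(self):
        self._store = {}

    def put_object(self, bucket, key, data):
        self._store[(bucket, key)] = data

    def get_object(self, bucket, key):
        return self._store[(bucket, key)]
```

A real plugin would talk to an S3-compatible endpoint instead of a dictionary; the point is that the orchestrator only depends on the interface, so vendors can swap in their own backends.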
-
-
- Zone-Wide Primary Storage
- (Supported on KVM and VMware)
- In &PRODUCT; 4.2, you can provision primary storage on a per-zone basis. Data volumes
- in the primary storage can be attached to any VM on any host in the zone.
- In previous &PRODUCT; versions, each cluster had its own primary storage. Data in the
- primary storage was directly available only to VMs within that cluster. If a VM in a
- different cluster needed some of the data, it had to be copied from one cluster to another,
- using the zone's secondary storage as an intermediate step. This operation was
- unnecessarily time-consuming.
-
-
- VMware Datacenter Now Visible As a &PRODUCT; Zone
- In order to support zone-wide functions for VMware, changes have been made so that
- &PRODUCT; is now aware of VMware Datacenters and can map each Datacenter to a &PRODUCT;
- zone. Previously, &PRODUCT; was only aware of VMware Clusters, a smaller organizational
- unit than Datacenters. This implies that a single &PRODUCT; zone could possibly contain
- clusters from different VMware Datacenters. In order for zone-wide functions, such as
- zone-wide primary storage, to work for VMware hosts, &PRODUCT; has to make sure that a
- zone contains only a single VMware Datacenter. Therefore, when you are creating a new
- &PRODUCT; zone, you will now be able to select a VMware Datacenter for the zone. If you
- are provisioning multiple VMware Datacenters, each one will be set up as a single zone in
- &PRODUCT;.
-
- If you are upgrading from a previous &PRODUCT; version, and your existing deployment
- contains a zone with clusters from multiple VMware Datacenters, that zone will not be
- forcibly migrated to the new model. It will continue to function as before. However, any
- new zone-wide operations, such as zone-wide primary storage, will not be available in
- that zone.
-
-
- Third-Party UI Plugin Framework
- Using the new third-party plugin framework, you can write and install extensions to
- &PRODUCT;. The installed and enabled plugins will appear in the UI.
- The basic procedure for adding a UI plugin is explained in the Developer Guide. In
- summary, the plugin developer creates the plugin code itself (in Javascript), a thumbnail
- image, the plugin listing, and a CSS file. The &PRODUCT; administrator adds the folder
- containing the plugin code under the &PRODUCT; PLUGINS folder and adds the plugin name to a
- configuration file (plugins.js).
- The next time the user refreshes the UI in the browser, the plugin will appear under the
- Plugins button in the left navigation bar.
-
-
- Networking Enhancements
- The following new features provide additional networking functionality in &PRODUCT;
- 4.2.
-
- IPv6
- &PRODUCT; 4.2 introduces initial support for IPv6. This feature is provided as a
- technical preview only. Full support is planned for a future release.
-
-
- Portable IPs
- Portable IPs in &PRODUCT; are elastic IPs that can be transferred across
- geographically separated zones. As an administrator, you can provision a pool of portable
- IPs at the region level and make them available for user consumption. Users can then
- acquire portable IPs from the region they belong to, provided the administrator has
- provisioned them. These IPs can be used for any service within an Advanced zone, and also
- for the EIP service in Basic zones. Additionally, a portable IP can be transferred from
- one network to another.
-
-
- N-Tier Applications
- In &PRODUCT; 3.0.6, a functionality was added to allow users to create a multi-tier
- application connected to a single instance of a Virtual Router that supports inter-VLAN
- routing. Such a multi-tier application is called a virtual private cloud (VPC). Users were
- also able to connect their multi-tier applications to a private Gateway or a Site-to-Site
- VPN tunnel and route certain traffic to those gateways. For &PRODUCT; 4.2, additional
- features are implemented to enhance VPC applications.
-
-
- Support for KVM
- VPC is now supported on KVM hypervisors.
-
- Load Balancing Support for VPC
- In a VPC, you can configure two types of load balancing: external LB and
- internal LB. An external LB rule redirects traffic received at a public IP of the VPC
- virtual router, and the traffic is load balanced within a tier based on your
- configuration. Citrix NetScaler and the VPC virtual router are supported for external LB.
- With the internal LB service, traffic received at a tier is load balanced across the
- different VMs within that tier; for example, traffic that reaches the Web tier is
- redirected to another VM in that tier. External load balancing devices are not
- supported for internal LB; the service is provided by an internal LB VM configured on the
- target tier.
-
- Load Balancing Within a Tier (External LB)
- A &PRODUCT; user or administrator may create load balancing rules that balance
- traffic received at a public IP to one or more VMs that belong to a network tier that
- provides load balancing service in a VPC. A user creates a rule, specifies an
- algorithm, and assigns the rule to a set of VMs within a tier.
-
-
- Load Balancing Across Tiers
- &PRODUCT; supports sharing workload across different tiers within your VPC. Assume
- that multiple tiers are set up in your environment, such as Web tier and Application
- tier. Traffic to each tier is balanced on the VPC virtual router on the public side.
- If you want the traffic coming from the Web tier to the Application tier to be
- balanced, use the internal load balancing feature offered by &PRODUCT;.
-
-
- Netscaler Support for VPC
- Citrix NetScaler is supported for external LB. Certified version for this feature
- is NetScaler 10.0 Build 74.4006.e.
-
-
-
- Enhanced Access Control List
- Network Access Control Lists (ACLs) on the VPC virtual router have been enhanced. Network
- ACLs can be created for tiers only if the NetworkACL service is supported. In
- &PRODUCT; terminology, a Network ACL is a group of Network ACL items: numbered rules
- that are evaluated in order, starting with the lowest numbered rule. These rules
- determine whether traffic is allowed in or out of any tier associated with the network
- ACL. You add Network ACL items to a Network ACL, then associate the Network ACL with a
- tier. A Network ACL belongs to a VPC and can be assigned to multiple tiers within that
- VPC. A tier is always associated with a Network ACL, and each tier can be associated
- with only one ACL.
- The default Network ACL is used when no ACL is associated. Its default behavior is to
- block all incoming traffic to guest networks and to allow all outgoing traffic from
- guest networks. The default Network ACL cannot be removed or modified.
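The ordered evaluation of numbered ACL items can be sketched as follows. This is a minimal illustration of the first-match semantics described above; the field names and the helper function are assumptions, not &PRODUCT;'s actual data model:

```python
def evaluate_acl(items, protocol, port):
    """Return the action of the first matching rule, lowest number first.

    Mirrors the described semantics: rules are checked in ascending rule
    number; if nothing matches, the default ACL behavior (block) applies.
    """
    for item in sorted(items, key=lambda r: r["number"]):
        if item["protocol"] == protocol and item["start"] <= port <= item["end"]:
            return item["action"]
    return "Deny"  # default: incoming traffic to guest networks is blocked

# Illustrative ACL: allow web traffic, deny everything else on TCP.
acl = [
    {"number": 10, "protocol": "tcp", "start": 80, "end": 80, "action": "Allow"},
    {"number": 20, "protocol": "tcp", "start": 1, "end": 65535, "action": "Deny"},
]
```

Because rule 10 sorts before rule 20, port 80 traffic is allowed even though the broader rule 20 would deny it; that is exactly why the rule number matters.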
-
- ACL on Private Gateway
- Traffic on the VPC private gateway is controlled by creating both ingress and
- egress network ACL rules. The ACLs contain both allow and deny rules. By
- default, all ingress traffic to the private gateway interface and all egress
- traffic out of the private gateway interface are blocked. You can change this
- default behavior while creating a private gateway.
-
-
- Allow ACL on All Level 4 Protocols
- In addition to the existing support for ICMP, TCP, and UDP, support for all
- Level 4 protocols has been added. Protocol numbers from 0 to 255 are supported.
-
-
- Support for ACL Deny Rules
- In addition to the existing support for ACL Allow rules, support for ACL Deny
- rules has been added in &PRODUCT; 4.2. As part of this, two attributes are supported:
- Number and Action. Use Action to configure a rule as allow or deny, and Number
- to assign the rule number.
-
-
-
- Deploying VMs to a VPC Tier and Shared Networks
- &PRODUCT; allows you to deploy VMs on a VPC tier and one or more shared networks.
- With this feature, the VMs deployed in a multi-tier application can receive services
- offered by a service provider over the shared network. One example of such a service is
- monitoring service.
-
-
- Adding a Private Gateway to a VPC
- A private gateway can be added by the root admin only. The VPC private network has
- a 1:1 relationship with the NIC of the physical network. You can configure multiple
- private gateways for a single VPC, but no two gateways with the same VLAN and IP are
- allowed in the same data center.
-
- Source NAT on Private Gateway
- You might want to deploy multiple VPCs with the same super CIDR and guest tier
- CIDR. In that case, guest VMs from different VPCs can have the same IPs when reaching
- an enterprise data center through the private gateway, so a NAT service needs
- to be configured on the gateway. When Source NAT is enabled, guest VMs
- in the VPC reach the enterprise network via the private gateway IP address by using the
- NAT service.
- The Source NAT service on a private gateway can be enabled while adding the
- private gateway. When a private gateway is deleted, the source NAT rules specific to
- it are deleted as well.
-
-
- VPN Gateways
- Support for up to 8 VPN gateways has been added.
-
-
- Creating a Static Route
- &PRODUCT; enables you to specify routing for the VPN connection you create. You
- can enter one or more CIDR addresses to indicate which traffic is to be routed back to
- the gateway.
-
-
- Blacklisting Routes
- &PRODUCT; enables you to block a list of routes so that they are not assigned to
- any of the VPC private gateways. Specify the routes you want to blacklist
- in the blacklisted.routes global parameter. Note that updating the parameter
- affects only new static route creation. If you blacklist an existing static route,
- it remains intact and continues functioning. You cannot add a static route if the route
- is blacklisted for the zone.
-
-
-
-
- Assigning VLANs to Isolated Networks
- &PRODUCT; provides you the ability to control VLAN assignment to Isolated networks.
- You can assign a VLAN ID when a network is created, just the way it's done for Shared
- networks.
- The former behavior is also supported: a VLAN is randomly allocated to a network
- from the VNET range of the physical network when the network moves to the Implemented
- state. The VLAN is released back to the VNET pool when the network shuts down as part of
- Network Garbage Collection. The VLAN can be reused, either by the same network when it is
- implemented again or by any other network. On each subsequent implementation of a
- network, a new VLAN can be assigned.
-
- You cannot change a VLAN once it's assigned to the network. The VLAN remains with
- the network for its entire life cycle.
-
-
-
- Persistent Networks
- &PRODUCT; 4.2 supports Persistent Networks: networks that you can provision without
- having to deploy any VMs on them. A Persistent Network can be part of a VPC or a
- non-VPC environment. With this feature, you can create a network in &PRODUCT; and
- deploy physical devices on it without having to run any VMs. Another advantage is
- that you can create a VPC with a tier that consists only of physical devices. For
- example, you might create a VPC for a three-tier application, deploy VMs for the Web
- and Application tiers, and use physical machines for the Database tier. Another use
- case: if you provide services by using physical hardware, you can define the network
- as persistent, so that even if all its VMs are destroyed the services are not
- discontinued.
-
-
- Cisco VNMC Support
- Cisco Virtual Network Management Center (VNMC) provides centralized multi-device and
- policy management for Cisco Network Virtual Services. When Cisco VNMC is integrated with
- ASA 1000v Cloud Firewall and Cisco Nexus 1000v dvSwitch in &PRODUCT;, you will be able to:
-
-
- Configure Cisco ASA 1000v Firewalls
-
-
- Create and apply security profiles that contain ACL policy sets for both ingress
- and egress traffic, and NAT policy sets
-
-
- &PRODUCT; supports Cisco VNMC on Cisco Nexus 1000v dvSwitch-enabled VMware
- hypervisors.
-
-
- VMware vNetwork Distributed vSwitch
- &PRODUCT; supports VMware vSphere Distributed Switch (VDS) for virtual network
- configuration in a VMware vSphere environment. Each vCenter server instance can support up
- to 128 VDSs and each VDS can manage up to 500 VMware hosts. &PRODUCT; supports configuring
- virtual networks in a deployment with a mix of Virtual Distributed Switch, Standard
- Virtual Switch and Nexus 1000v Virtual Switch.
-
-
- IP Reservation in Isolated Guest Networks
- In Isolated guest networks in &PRODUCT; 4.2, a part of the guest IP address space can
- be reserved for non-&PRODUCT; VMs or physical servers. To do so, you configure a range of
- Reserved IP addresses by specifying the CIDR when a guest network is in Implemented state.
- The advantage of having this feature is that if your customers wish to have non-&PRODUCT;
- controlled VMs or physical servers on the same network, they can use a part of the IP
- address space that is primarily provided to the guest network. When IP reservation is
- configured, the administrator can add additional VMs or physical servers that are not part
- of &PRODUCT; to the same network and assign them the Reserved IP addresses. &PRODUCT;
- guest VMs cannot acquire IPs from the Reserved IP Range.
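The split described above, where the guest CIDR is a subset of the network CIDR and the remainder is reserved, can be sketched with the standard `ipaddress` module. The addresses below are illustrative, not drawn from the source:

```python
import ipaddress

# Illustrative example: the full guest network vs. the CloudStack-managed
# portion after IP reservation is configured.
network_cidr = ipaddress.ip_network("10.1.1.0/24")  # whole guest network
guest_cidr = ipaddress.ip_network("10.1.1.0/26")    # guest-VM portion (.0-.63)

def is_reserved(ip: str) -> bool:
    """True if the address is in the reserved range: inside the network
    CIDR but outside the guest CIDR, so guest VMs cannot acquire it."""
    addr = ipaddress.ip_address(ip)
    return addr in network_cidr and addr not in guest_cidr
```

An administrator could assign an address like 10.1.1.200 to a physical server on the same network, knowing guest VMs only draw from 10.1.1.0/26.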
-
-
- Dedicated Resources: Public IP Addresses and VLANs Per Account
- &PRODUCT; provides you the ability to reserve a set of public IP addresses and VLANs
- exclusively for an account. During zone creation, you can continue to define a set of
- VLANs and multiple public IP ranges. This feature extends the functionality to enable you
- to dedicate a fixed set of VLANs and guest IP addresses for a tenant.
- This feature provides you the following capabilities:
-
-
- Reserve a VLAN range and public IP address range from an Advanced zone and assign
- it to an account
-
-
- Disassociate a VLAN and public IP address range from an account
-
-
-
- Ensure that you check whether the required range is available and conforms to
- account limits. The maximum number of IPs per account cannot be exceeded.
-
-
-
- Enhanced Juniper SRX Support for Egress Firewall Rules
- Egress firewall rules were previously supported on virtual routers, and now they are
- also supported on Juniper SRX external networking devices.
- Egress traffic originates from a private network to a public network, such as the
- Internet. By default, the egress traffic is blocked, so no outgoing traffic is allowed
- from a guest network to the Internet. However, you can control the egress traffic in an
- Advanced zone by creating egress firewall rules. When an egress firewall rule is applied,
- the traffic specific to the rule is allowed and the remaining traffic is blocked. When all
- the firewall rules are removed, the default policy, Block, is applied.
-
- Egress firewall rules are not supported on Shared networks. They are supported only
- on Isolated guest networks.
-
-
-
- Configuring the Default Egress Policy
- The default egress policy for an Isolated guest network can be configured by using a
- network offering. Use the create network offering option to determine whether the default
- policy should be to block or to allow all traffic to the public network from a guest
- network. Use this network offering to create the network. If no policy is specified, by
- default all traffic is allowed from the guest network that you create by using this
- network offering.
- You have two options: Allow and Deny.
- If you select Allow for a network offering, egress traffic is allowed by default.
- However, when an egress rule is configured for a guest network, rules are applied to
- block the specified traffic and the rest is allowed. If no egress rules are configured
- for the network, egress traffic is accepted. If you select Deny for a network offering,
- egress traffic for the guest network is blocked by default. However, when an egress rule
- is configured for a guest network, rules are applied to allow the specified traffic.
- While implementing a guest network, &PRODUCT; adds the firewall egress rule specific to
- the default egress policy for the guest network.
- This feature is supported only on virtual router and Juniper SRX.
-
-
- Non-Contiguous VLAN Ranges
- &PRODUCT; provides you with the flexibility to add non-contiguous VLAN ranges to your
- network. The administrator can either update an existing VLAN range or add multiple
- non-contiguous VLAN ranges while creating a zone. You can also use the
- updatePhysicalNetwork API to extend the VLAN range.
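Calls such as updatePhysicalNetwork go through the signed query API. A minimal sketch of the documented CloudStack request-signing scheme (sort parameters, lowercase the query string, HMAC-SHA1 with the secret key, then base64- and URL-encode) is shown below; the parameter values and keys are placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params: dict, secret_key: str) -> str:
    """Produce the 'signature' parameter for a CloudStack query API call.

    Steps per the documented scheme: URL-encode values, sort by parameter
    name, lowercase the whole query string, HMAC-SHA1 it with the secret
    key, then base64- and URL-encode the digest.
    """
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    return urllib.parse.quote(base64.b64encode(digest).decode(), safe="")

# Placeholder parameters for an updatePhysicalNetwork call.
params = {
    "command": "updatePhysicalNetwork",
    "id": "200",
    "vlan": "500-500,600-700",  # non-contiguous VLAN ranges
    "apikey": "PLACEHOLDER_API_KEY",
    "response": "json",
}
```

The signature is appended as one more query parameter when the request is sent to the management server.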
-
-
- Isolation in Advanced Zone Using Private VLAN
- Isolation of guest traffic in shared networks can be achieved by using Private VLANs
- (PVLAN). PVLANs provide Layer 2 isolation between ports within the same VLAN. In a
- PVLAN-enabled shared network, a user VM cannot reach other user VMs, though it can
- reach the DHCP server and gateway. This allows users to control traffic within a
- network and deploy multiple applications without communication between them, as well
- as preventing communication with other users' VMs.
-
-
- Isolate VMs in shared networks by using Private VLANs.
-
-
- Supported on KVM, XenServer, and VMware hypervisors.
-
-
- A PVLAN-enabled shared network can be a part of multiple networks of a guest VM.
-
-
-
- For further reading:
-
-
- Understanding Private VLANs
-
-
- Cisco Systems' Private VLANs:
- Scalable Security in a Multi-Client Environment
-
-
- Private VLAN (PVLAN) on vNetwork Distributed
- Switch - Concept Overview (1010691)
-
-
-
-
- Configuring Multiple IP Addresses on a Single NIC
- (Supported on XenServer, KVM, and VMware hypervisors)
- &PRODUCT; now provides you the ability to associate multiple private IP addresses per
- guest VM NIC. This feature is supported on all the network configurations—Basic,
- Advanced, and VPC. Security Groups, Static NAT and Port forwarding services are supported
- on these additional IPs. In addition to the primary IP, you can assign additional IPs to
- the guest VM NIC. Up to 256 IP addresses are allowed per NIC.
- As always, you can specify an IP from the guest subnet; if not specified, an IP is
- automatically picked from the guest VM subnet. You can view the IPs associated with
- each guest VM NIC in the UI. You can apply NAT to these additional guest IPs through
- the firewall configuration in the &PRODUCT; UI. You must specify the NIC to which the
- IP should be associated.
-
-
- Adding Multiple IP Ranges
- (Supported on KVM, XenServer, and VMware hypervisors)
- &PRODUCT; 4.2 provides you with the flexibility to add guest IP ranges from different
- subnets in Basic zones and security group-enabled Advanced zones. For security
- group-enabled Advanced zones, this implies that multiple subnets can be added to the same
- VLAN. With this feature, you can add IP address ranges from the same subnet or from a
- different one when IP addresses are exhausted. This in turn allows you to employ a
- larger number of subnets and thus reduce the address management
- overhead.
- Ensure that you manually configure the gateway of the new subnet before adding the IP
- range. Note that &PRODUCT; supports only one gateway for a subnet; overlapping subnets are
- not currently supported.
- You can also delete IP ranges. This operation fails if an IP from the range being
- removed is in use. If the range contains the IP address on which the DHCP server is
- running, &PRODUCT; acquires a new IP from the same subnet. If no IP is available in the
- subnet, the remove operation fails.
-
- The feature can only be implemented on IPv4 addresses.
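The constraint that new ranges may come from different subnets but overlapping subnets are rejected can be expressed with the `ipaddress` module. A minimal sketch, with illustrative addresses:

```python
import ipaddress

def can_add_subnet(existing_subnets, candidate) -> bool:
    """True if the candidate subnet overlaps none of the existing ones.

    Models the documented restriction: multiple subnets may share a VLAN,
    but overlapping subnets are not supported.
    """
    new = ipaddress.ip_network(candidate)
    return not any(
        new.overlaps(ipaddress.ip_network(s)) for s in existing_subnets
    )
```

For example, adding 10.1.2.0/24 alongside 10.1.1.0/24 is fine, while 10.1.1.128/25 would be rejected because it sits inside 10.1.1.0/24.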
-
-
-
- Support for Multiple Networks in VMs
- (Supported on XenServer, VMware and KVM hypervisors)
- &PRODUCT; 4.2 provides you the ability to add and remove multiple networks to a VM.
- You can remove a network from a VM and add a new network. You can also change the default
- network of a VM. With this functionality, hybrid or traditional server loads can be
- accommodated with ease.
- For adding or removing a NIC to work on VMware, ensure that vm-tools are running on
- guest VMs.
-
-
- Global Server Load Balancing
- &PRODUCT; 4.2 supports Global Server Load Balancing (GSLB) functionality to provide
- business continuity by load balancing traffic to instances in active zones only, in case
- of zone failures. &PRODUCT; achieves this by extending its integration with the
- NetScaler Application Delivery Controller (ADC), which also provides various GSLB
- capabilities, such as disaster recovery and load balancing. The DNS redirection technique
- is used to achieve GSLB in &PRODUCT;. To support this functionality, region-level
- services and service providers are introduced: a new region-level service, GSLB, along
- with a GSLB service provider that provides the service. Currently, NetScaler is the
- supported GSLB provider in &PRODUCT;. GSLB functionality works in an Active-Active data
- center environment.
-
-
- Enhanced Load Balancing Services Using External Provider on Shared VLANs
- Network services such as Firewall, Load Balancing, and NAT are now supported in shared
- networks created in an Advanced zone. In effect, the following network services are
- available to a VM in a shared network: Source NAT, Static NAT, Port Forwarding,
- Firewall, and Load Balancing. A subset of these services can be chosen while creating a
- network offering for shared networks. The services available in a shared network are
- defined by the network offering and the services chosen in it. For example, if the
- network offering for a shared network has the Source NAT service enabled, a public IP is
- provisioned and source NAT is configured on the firewall device to provide public access
- to the VMs on the shared network. Static NAT, Port Forwarding, Load Balancing, and
- Firewall services are available only on the acquired public IPs associated with a
- shared network.
- Additionally, NetScaler and Juniper SRX firewall devices can be configured in inline or
- side-by-side mode.
-
-
- Health Checks for Load Balanced Instances
-
- This feature is supported only on NetScaler version 10.0 and beyond.
-
- (NetScaler load balancer only) A load balancer rule distributes requests among a pool
- of services (a service in this context means an application running on a virtual machine).
- When creating a load balancer rule, you can specify a health check which will ensure that
- the rule forwards requests only to services that are healthy (running and available). When
- a health check is in effect, the load balancer will stop forwarding requests to any
- resources that it has found to be unhealthy. If the resource later becomes available
- again, the periodic health check (periodicity is configurable) will discover it and the
- resource will once again be made available to the load balancer.
- To configure how often the health check is performed by default, use the global
- configuration setting healthcheck.update.interval. This default applies to all the health
- check policies in the cloud. You can override this value for an individual health check
- policy.
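A health-check pass like the one described, where unhealthy members are dropped from the pool and re-added once a later check finds them healthy, can be sketched as follows. The probe function stands in for NetScaler's actual checks and is an assumption:

```python
def run_health_check(members, probe):
    """One periodic health-check pass over a load balancer pool.

    Marks each member healthy or unhealthy via the probe and returns the
    addresses the balancer may still forward requests to. Running this
    again after a member recovers restores it to the pool.
    """
    for m in members:
        m["healthy"] = probe(m["address"])
    return [m["address"] for m in members if m["healthy"]]
```

The interval between passes corresponds to the healthcheck.update.interval global setting, which individual policies can override.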
-
-
-
- Host and Virtual Machine Enhancements
- The following new features expand the ways you can use hosts and virtual
- machines.
-
- VMware DRS Support
- The VMware vSphere Distributed Resource Scheduler (DRS) is supported.
-
-
- Windows 8 and Windows Server 2012 as VM Guest OS
- (Supported on XenServer, VMware, and KVM)
- Windows 8 and Windows Server 2012 can now be used as OS types on guest virtual
- machines. The OS would be made available the same as any other, by uploading an ISO or a
- template. The instructions for uploading ISOs and templates are given in the
- Administrator's Guide.
-
- Limitation: When used with VMware hosts, this
- feature works only for the following versions: vSphere ESXi 5.1 and ESXi 5.0 Patch
- 4.
-
-
-
-
- Change Account Ownership of Virtual Machines
- A root administrator can now change the ownership of any virtual machine from one
- account to any other account. A domain or sub-domain administrator can do the same for VMs
- within the domain from one account to any other account in the domain.
-
-
- Private Pod, Cluster, or Host
- Dedicating a pod, cluster, or host to a specific domain or account means that the
- domain or account has sole access to the dedicated resource, improving scalability,
- security, and manageability within that domain or account. The resources that belong
- to that tenant will be placed in the dedicated pod, cluster, or
- host.
-
-
- Resizing Volumes
- &PRODUCT; provides the ability to resize data disks; &PRODUCT; controls volume size by
- using disk offerings. This provides &PRODUCT; administrators with the flexibility to
- choose how much space they want to make available to the end users. Volumes within the
- disk offerings with the same storage tag can be resized. For example, if you only want to
- offer 10, 50, and 100 GB offerings, the allowed resize should stay within those limits.
- That implies that if you define 10 GB, 50 GB, and 100 GB disk offerings, a user can
- upgrade from 10 GB to 50 GB, or from 50 GB to 100 GB. If you create a custom-sized disk
- offering, then you have the option to resize the volume by specifying a new, larger size.
- Additionally, using the resizeVolume API, a data volume can be moved from a static disk
- offering to a custom disk offering with the size specified. This functionality allows
- those who might be billing by certain volume sizes or disk offerings to stick to that
- model, while providing the flexibility to migrate to whatever custom size is necessary.
- This feature is supported on KVM, XenServer, and VMware hosts. However, shrinking
- volumes is not supported on VMware hosts.
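The fixed-offering resize constraint described above can be sketched as a small validation: the target size must be one of the offered sizes and larger than the current size. This sketch assumes growth only (shrinking is unsupported on VMware and omitted here); the function name is illustrative:

```python
def allowed_resize(current_gb: int, target_gb: int, offerings_gb: set) -> bool:
    """True if a volume may be resized from current_gb to target_gb
    within a set of fixed disk offerings (growth only)."""
    return target_gb in offerings_gb and target_gb > current_gb
```

With offerings of 10, 50, and 100 GB, a 10 GB volume can grow to 50 GB, but not to an unoffered 30 GB, and not shrink back from 50 GB to 10 GB.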
-
-
- VMware Volume Snapshot Improved Performance
- When you take a snapshot of a data volume on VMware, &PRODUCT; will now use a more
- efficient storage technique to improve performance.
- Previously, every snapshot was immediately exported from vCenter to a mounted NFS
- share and packaged into an OVA file format. This operation consumed time and resources.
- Starting from 4.2, the original file formats (e.g., VMDK) provided by vCenter will be
- retained. An OVA file will only be created as needed, on demand.
- The new process applies only to newly created snapshots after upgrade to &PRODUCT;
- 4.2. Snapshots that have already been taken and stored in OVA format will continue to
- exist in that format, and will continue to work as expected.
-
-
- Storage Migration: XenMotion and vMotion
- (Supported on XenServer and VMware)
- Storage migration allows VMs to be moved from one host to another, where the VMs are
- not located on storage shared between the two hosts. It provides the option to live
- migrate a VM’s disks along with the VM itself. It is now possible to migrate a VM from one
- XenServer resource pool / VMware cluster to another, or to migrate a VM whose disks are on
- local storage, or even to migrate a VM’s disks from one storage repository to another, all
- while the VM is running.
-
-
- Configuring Usage of Linked Clones on VMware
- (For ESX hypervisor in conjunction with vCenter)
- In &PRODUCT; 4.2, the creation of VMs as full clones is allowed. In previous versions,
- only linked clones were possible.
- For a full description of clone types, refer to VMware documentation. In summary: A
- full clone is a copy of an existing virtual machine which, once created, does not depend
- in any way on the original virtual machine. A linked clone is also a copy of an existing
- virtual machine, but it has ongoing dependency on the original. A linked clone shares the
- virtual disk of the original VM, and retains access to all files that were present at the
- time the clone was created.
- A new global configuration setting has been added, vmware.create.full.clone. When the
- administrator sets this to true, end users can create guest VMs only as full clones. The
- default value is true for new installations. For customers upgrading from a previous
- version of &PRODUCT;, the default value of vmware.create.full.clone is false.
-
-
- VM Deployment Rules
- Rules can be set up to ensure that particular VMs are not placed on the same physical
- host. These "anti-affinity rules" can increase the reliability of applications by ensuring
- that the failure of a single host cannot take down the entire group of VMs supporting a
- given application. See Affinity Groups in the &PRODUCT; 4.2 Administration Guide.
-
-
- CPU and Memory Scaling for Running VMs
- (Supported on VMware and XenServer)
- You can now change the CPU and RAM values for a running virtual machine. In previous
- versions of &PRODUCT;, this could only be done on a stopped VM.
- It is not always possible to accurately predict the CPU and RAM requirements when you
- first deploy a VM. You might need to increase or decrease these resources at any time
- during the life of a VM. With the new ability to dynamically modify CPU and RAM levels,
- you can change these resources for a running VM without incurring any downtime.
- Dynamic CPU and RAM scaling can be used in the following cases:
-
-
- New VMs that are created after the installation of &PRODUCT; 4.2. If you are
- upgrading from a previous version of &PRODUCT;, your existing VMs created with
- previous versions will not have the dynamic scaling capability.
-
-
- User VMs on hosts running VMware and XenServer.
-
-
- System VMs on VMware.
-
-
- VM Tools or XenServer Tools must be installed on the virtual machine.
-
-
- The new requested CPU and RAM values must be within the constraints allowed by the
- hypervisor and the VM operating system.
-
-
- To configure this feature, use the following new global configuration
- variables:
-
-
- enable.dynamic.scale.vm: Set to True to enable the feature. By default, the
- feature is turned off.
-
-
- scale.retry: How many times to attempt the scaling operation. Default = 2.
-
-
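The role of the scale.retry setting can be sketched as a bounded retry loop around the scaling operation. The scaling call itself is a stand-in function, not a real &PRODUCT; API:

```python
def scale_vm_with_retry(do_scale, scale_retry: int = 2) -> int:
    """Attempt a scaling operation up to scale_retry times.

    Returns the 1-based attempt number on success, or 0 if every attempt
    failed. The default of 2 mirrors the scale.retry default.
    """
    for attempt in range(1, scale_retry + 1):
        if do_scale():  # stand-in for the actual hypervisor scaling call
            return attempt
    return 0
```

Raising scale.retry simply widens the loop; the feature itself stays off unless enable.dynamic.scale.vm is set to true.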
-
-
- CPU and Memory Over-Provisioning
- (Supported for XenServer, KVM, and VMware)
- In &PRODUCT; 4.2, CPU and memory (RAM) over-provisioning factors can be set for each
- cluster to change the number of VMs that can run on each host in the cluster. This helps
- optimize the use of resources. By increasing the over-provisioning ratio, more resource
- capacity will be used. If the ratio is set to 1, no over-provisioning is done.
- In previous releases, &PRODUCT; did not perform memory over-provisioning. It performed
- CPU over-provisioning based on a ratio configured by the administrator in the global
- configuration setting cpu.overprovisioning.factor. Starting in 4.2, the administrator can
- specify a memory over-provisioning ratio, and can specify both CPU and memory
- over-provisioning ratios on a per-cluster basis, rather than only on a global
- basis.
- In any given cloud, the optimum number of VMs for each host is affected by such things
- as the hypervisor, storage, and hardware configuration. These may be different for each
- cluster in the same cloud. A single global over-provisioning setting could not provide the
- best utilization for all the different clusters in the cloud. It had to be set for the
- lowest common denominator. The new per-cluster setting provides a finer granularity for
- better utilization of resources, no matter where the &PRODUCT; placement algorithm decides
- to place a VM.
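- As a sketch of the arithmetic involved, an over-provisioning factor simply multiplies
- the physical capacity that the allocator treats as available. The numbers below are
- illustrative, not defaults:

```shell
# illustrative values: a host with 16384 MB of physical RAM in a cluster
# whose mem.overprovisioning.factor is set to 2
physical_mb=16384
factor=2
# capacity the allocator treats as available on this host
effective_mb=$((physical_mb * factor))
echo "$effective_mb"
```

- With a factor of 1, effective capacity equals physical capacity and no
- over-provisioning takes place.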
-
-
- Kickstart Installation for Bare Metal Provisioning
- &PRODUCT; 4.2 supports the kickstart installation method for RPM-based Linux
- operating systems on baremetal hosts in basic zones. Users can provision a baremetal
- host managed by &PRODUCT; as long as they have the kickstart file and the corresponding
- OS installation ISO ready.
- Tested on CentOS 5.5, CentOS 6.2, CentOS 6.3, and Ubuntu 12.04.
- For more information, see the Baremetal Installation Guide.
-
-
- Enhanced Bare Metal Support on Cisco UCS
- You can now more easily provision new Cisco UCS server blades into &PRODUCT; for use
- as bare metal hosts. The goal is to enable easy expansion of the cloud by leveraging the
- programmability of the UCS converged infrastructure and &PRODUCT;’s knowledge of the cloud
- architecture and ability to orchestrate. With this new feature, &PRODUCT; can
- automatically understand the UCS environment, server profiles, etc. to make it easy to
- deploy a bare metal OS on a Cisco UCS.
-
-
- Changing a VM's Base Image
- Every VM is created from a base image, which is a template or ISO that has been
- created and stored in &PRODUCT;. Both cloud administrators and end users can create
- and modify templates, ISOs, and VMs.
- In &PRODUCT; 4.2, there is a new way to modify an existing VM. You can change an
- existing VM from one base image to another. For example, suppose there is a template based
- on a particular operating system, and the OS vendor releases a software patch. The
- administrator or user naturally wants to apply the patch and then make sure existing VMs
- start using it. Whether a software update is involved or not, it's also possible to simply
- switch a VM from its current template to any other desired template.
-
-
- Reset VM on Reboot
- In &PRODUCT; 4.2, you can specify that you want to discard the root disk and create a
- new one whenever a given VM is rebooted. This is useful for secure environments that need
- a fresh start on every boot and for desktops that should not retain state. The IP address
- of the VM will not change due to this operation.
-
-
- Virtual Machine Snapshots for VMware
- (VMware hosts only) In addition to the existing &PRODUCT; ability to snapshot
- individual VM volumes, you can now take a VM snapshot to preserve all the VM's data
- volumes as well as (optionally) its CPU/memory state. This is useful for quick restore of
- a VM. For example, you can snapshot a VM, then make changes such as software upgrades. If
- anything goes wrong, simply restore the VM to its previous state using the previously
- saved VM snapshot.
- The snapshot is created using the VMware native snapshot facility. The VM snapshot
- includes not only the data volumes, but optionally also whether the VM is running or
- turned off (CPU state) and the memory contents. The snapshot is stored in &PRODUCT;'s
- primary storage.
- VM snapshots can have a parent/child relationship. Each successive snapshot of the
- same VM is the child of the snapshot that came before it. Each time you take an additional
- snapshot of the same VM, it saves only the differences between the current state of the VM
- and the state stored in the most recent previous snapshot. The previous snapshot becomes a
- parent, and the new snapshot is its child. It is possible to create a long chain of these
- parent/child snapshots, which amount to a "redo" record leading from the current state of
- the VM back to the original.
-
-
- Increased Userdata Size When Deploying a VM
- You can now specify up to 32KB of userdata when deploying a virtual machine through
- the &PRODUCT; UI or the deployVirtualMachine API call.
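- The deployVirtualMachine call expects the userdata parameter to be base64-encoded.
- The sketch below prepares a sample payload and checks its encoded size against the
- limit; whether the 32KB limit applies before or after encoding should be confirmed
- against the API documentation for your version:

```shell
# build 1 KB of sample userdata and base64-encode it for the API call
userdata=$(printf 'x%.0s' $(seq 1 1024))
encoded=$(printf '%s' "$userdata" | base64 | tr -d '\n')
# verify the encoded payload stays within the 32 KB limit
if [ "${#encoded}" -le 32768 ]; then
  echo "within limit: ${#encoded} characters encoded"
fi
```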
-
-
- Set VMware Cluster Size Limit Depending on VMware Version
- The maximum number of hosts in a vSphere cluster is determined by the VMware
- hypervisor software. For vSphere versions 4.1, 5.0, and 5.1, the limit is 32
- hosts.
- For &PRODUCT; 4.2, the global configuration setting vmware.percluster.host.max has
- been removed. The maximum number of hosts in a VMware cluster is now determined by the
- underlying hypervisor software.
-
- Best Practice: It is advisable for VMware clusters in &PRODUCT; to be smaller than
- the VMware hypervisor's maximum size. A cluster size of up to 8 hosts has been found
- optimal for most real-world situations.
-
-
-
- Limiting Resource Usage
- Previously in &PRODUCT;, resource usage limits were imposed based on resource counts;
- that is, a user or domain was restricted by the number of VMs, volumes, or snapshots
- used. In &PRODUCT; 4.2, a new set of resource types has been added to the existing
- countable resources (VMs, Volumes, and Snapshots) to support need-based usage limits,
- such as distinguishing large VMs from small ones. The new resource types are broadly
- classified as CPU, RAM, Primary storage, and Secondary storage. &PRODUCT; 4.2 allows
- the root administrator to impose resource usage limits by the following resource
- types for domains, projects, and accounts.
-
-
- CPUs
-
-
- Memory (RAM)
-
-
- Primary Storage (Volumes)
-
-
- Secondary Storage (Snapshots, Templates, ISOs)
-
-
-
-
-
- Monitoring, Maintenance, and Operations Enhancements
-
- Deleting and Archiving Events and Alerts
- In addition to viewing a list of events and alerts in the UI, the administrator can
- now delete and archive them. In order to support deleting and archiving alerts, the
- following global parameters have been added:
-
-
- alert.purge.delay: Alerts older than the
- specified number of days are purged. Set the value to 0 to never purge alerts
- automatically.
-
-
- alert.purge.interval: The interval in seconds to
- wait before running the alert purge thread. The default is 86400 seconds (one
- day).
-
-
-
- Archived alerts and events cannot be viewed in the UI or by using the API. They are
- maintained in the database for auditing or compliance purposes.
-
-
-
- Increased Granularity for Configuration Parameters
- Some configuration parameters which were previously available only at the global level
- of the cloud can now be set for smaller components of the cloud, such as at the zone
- level. To set these parameters, look for the new Settings tab in the UI. You will find it
- on the detail page for an account, cluster, zone, or primary storage.
- The account level parameters are: remote.access.vpn.client.iprange,
- allow.public.user.templates, use.system.public.ips, and
- use.system.guest.vlans
- The cluster level parameters are
- cluster.storage.allocated.capacity.notificationthreshold,
- cluster.storage.capacity.notificationthreshold,
- cluster.cpu.allocated.capacity.notificationthreshold,
- cluster.memory.allocated.capacity.notificationthreshold,
- cluster.cpu.allocated.capacity.disablethreshold,
- cluster.memory.allocated.capacity.disablethreshold,
- cpu.overprovisioning.factor, mem.overprovisioning.factor,
- vmware.reserve.cpu, and vmware.reserve.mem.
- The zone level parameters are
- pool.storage.allocated.capacity.disablethreshold,
- pool.storage.capacity.disablethreshold,
- storage.overprovisioning.factor, network.throttling.rate,
- guest.domain.suffix, router.template.xen,
- router.template.kvm, router.template.vmware,
- router.template.hyperv, router.template.lxc,
- enable.dynamic.scale.vm, use.external.dns, and
- blacklisted.routes.
-
-
- API Request Throttling
- In &PRODUCT; 4.2, you can limit the rate at which API requests can be placed for each
- account. This is useful to avoid malicious attacks on the Management Server, prevent
- performance degradation, and provide fairness to all accounts.
- If the number of API calls exceeds the threshold, an error message is returned for any
- additional API calls. The caller will have to retry these API calls at another
- time.
- To control the API request throttling, use the following new global configuration
- settings:
-
-
- api.throttling.enabled - Enable/Disable API throttling. By default, this setting
- is false, so API throttling is not enabled.
-
-
- api.throttling.interval (in seconds) - Time interval during which the number of
- API requests is to be counted. When the interval has passed, the API count is reset to
- 0.
-
-
- api.throttling.max - Maximum number of API requests that can be placed within the
- api.throttling.interval period.
-
-
- api.throttling.cachesize - Cache size for storing API counters. Use a value higher
- than the total number of accounts managed by the cloud. One cache entry is needed for
- each account, to store the running API total for that account within the current time
- window.
-
-
-
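- The throttling scheme can be pictured as a per-account counter that is reset at the
- end of every api.throttling.interval. A toy illustration with hypothetical values
- (max = 25 requests, 30 requests arriving within one interval):

```shell
# toy model of per-interval API request counting; the values are hypothetical
max=25
accepted=0
rejected=0
for i in $(seq 1 30); do
  if [ "$accepted" -lt "$max" ]; then
    accepted=$((accepted + 1))   # request served
  else
    rejected=$((rejected + 1))   # over the limit: error returned, caller retries later
  fi
done
echo "accepted=$accepted rejected=$rejected"
```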
-
- Sending Alerts to External SNMP and Syslog Managers
- In addition to showing administrator alerts on the Dashboard in the &PRODUCT; UI and
- sending them in email, &PRODUCT; now can also send the same alerts to external SNMP or
- Syslog management software. This is useful if you prefer to use an SNMP or Syslog manager
- to monitor your cloud.
- The supported protocol is SNMP version 2.
-
-
- Changing the Default Password Encryption
- Passwords are encoded when creating or updating users. The new default encoder,
- replacing MD5, is salted SHA256, which is more secure than MD5 hashing. If you take no
- action to customize password encryption and authentication, salted SHA256 will be
- used.
- If you prefer a different authentication mechanism, &PRODUCT; 4.2 provides a way for
- you to determine the default encoding and authentication mechanism for admin and user
- logins. Two new configurable lists have been introduced: userPasswordEncoders and
- userAuthenticators. userPasswordEncoders allow you to configure the order of preference
- for encoding passwords, and userAuthenticator allows you to configure the order in which
- authentication schemes are invoked to validate user passwords.
- The plain text user authenticator has been modified so that it no longer converts
- supplied passwords to their MD5 sums before checking them against the database
- entries. Because clients no longer hash the password, it performs a simple string
- comparison between the supplied and stored login passwords instead of comparing
- their MD5 hashes.
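- The behavioral change can be illustrated with a small sketch. The hashing details of
- &PRODUCT;'s actual authenticators are more involved; this only contrasts the two
- comparison styles:

```shell
# pre-4.2 style: the client sent md5(password); the server compared hash strings
stored_hash=$(printf '%s' 'secret' | md5sum | awk '{print $1}')
client_hash=$(printf '%s' 'secret' | md5sum | awk '{print $1}')
[ "$stored_hash" = "$client_hash" ] && echo "hash comparison: match"

# 4.2 plain text authenticator: a direct string comparison of the supplied password
supplied='secret'
stored='secret'
[ "$supplied" = "$stored" ] && echo "string comparison: match"
```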
-
-
- Log Collection Utility cloud-bugtool
- &PRODUCT; provides a command-line utility called cloud-bugtool to make it easier to
- collect the logs and other diagnostic data required for troubleshooting. This is
- especially useful when interacting with Citrix Technical Support.
- You can use cloud-bugtool to collect the following:
-
-
- Basic system and environment information and network configuration including IP
- addresses, routing, and name resolver settings
-
-
- Information about running processes
-
-
- Management Server logs
-
-
- System logs in /var/log/
-
-
- Dump of the cloud database
-
-
-
- cloud-bugtool collects information which might be considered sensitive and
- confidential. Using the --nodb option to avoid the cloud database can
- reduce this concern, though it is not guaranteed to exclude all sensitive data.
-
-
-
-
- Snapshotting, Backups, Cloning, and System VMs for RBD Primary Storage
-
- These new RBD features require at least librbd 0.61.7 (Cuttlefish) and libvirt
- 0.9.14 on the KVM hypervisors.
-
- This release of &PRODUCT; leverages the features of RBD format 2, which allows
- snapshotting and backing up those snapshots.
- Backups of snapshots to Secondary Storage are full copies of the RBD snapshot, not
- RBD diffs. This is because a backup of a snapshot does not have to be restored to RBD
- again; it could also be restored to NFS Primary Storage.
- Another key feature of RBD format 2 is cloning. With this release templates will be
- copied to Primary Storage once and by using the cloning mechanism new disks will be cloned
- from this parent template. This saves space and decreases deployment time for instances
- dramatically.
- Before this release, an NFS Primary Storage was still required for running the System
- VMs. The reason was a so-called 'patch disk', generated by the hypervisor, which
- contained metadata for the System VM. The scripts generating this disk didn't
- support RBD, so System VMs had to be deployed from NFS. With 4.2, a VirtIO serial
- console is used instead of the patch disk to pass meta information to System VMs,
- which enables deploying System VMs on RBD Primary Storage.
-
-
-
- Issues Fixed in 4.2.0
- Apache CloudStack uses Jira to track its issues. All new features and bugs for 4.2.0 have been tracked
- in Jira, and have a standard naming convention of "CLOUDSTACK-NNNN" where "NNNN" is the
- issue number.
- For the list of issues fixed, see Issues Fixed in
- 4.2.
-
-
- Known Issues in 4.2.0
- This section includes a summary of known issues in 4.2.0. For the list of
- known issues, see Known
- Issues.
-
-
-
- Upgrade Instructions for 4.2
- This section contains upgrade instructions from prior versions of CloudStack to Apache
- CloudStack 4.2.0. We include instructions on upgrading to Apache CloudStack from pre-Apache
- versions of Citrix CloudStack (last version prior to Apache is 3.0.2) and from the releases
- made while CloudStack was in the Apache Incubator.
- If you run into any issues during upgrades, please feel free to ask questions on
- users@cloudstack.apache.org or dev@cloudstack.apache.org.
-
- Upgrade from 4.x.x to 4.2.0
- This section will guide you from &PRODUCT; 4.x.x versions to &PRODUCT; 4.2.0.
- Any steps that are hypervisor-specific will be called out with a note.
-
- Package Structure Changes
- The package structure for &PRODUCT; has changed significantly since the 4.0.x
- releases. If you've compiled your own packages, you'll notice that the package names and
- the number of packages has changed. This is not a bug.
- However, this does mean that the procedure is not as simple as an apt-get
- upgrade or yum update, so please follow this section
- carefully.
-
- We recommend reading through this section once or twice before beginning your upgrade
- procedure, and working through it on a test system before working on a production
- system.
-
-
- Most users of &PRODUCT; manage the installation and upgrades of &PRODUCT; with one
- of Linux's predominant package systems, RPM or APT. This guide assumes you'll be using
- RPM and Yum (for Red Hat Enterprise Linux or CentOS), or APT and Debian packages (for
- Ubuntu).
-
-
- Create RPM or Debian packages (as appropriate) and a repository from the 4.2.0
- source, or check the Apache CloudStack downloads page at http://cloudstack.apache.org/downloads.html for package repositories supplied
- by community members. You will need them for step
- or step .
- Instructions for creating packages from the &PRODUCT; source are in the Installation
- Guide.
-
-
- Stop your management server or servers. Run this on all management server
- hosts:
- # service cloud-management stop
-
-
- If you are running a usage server or usage servers, stop those as well:
- # service cloud-usage stop
-
-
- Make a backup of your MySQL database. If you run into any issues or need to roll
- back the upgrade, this will assist in debugging or restoring your existing environment.
- You'll be prompted for your password.
- # mysqldump -u root -p cloud > cloudstack-backup.sql
-
-
- If you have made changes to
- /etc/cloud/management/components.xml, you'll need to carry these
- over manually to the new file,
- /etc/cloudstack/management/componentContext.xml. This is not done
- automatically. (If you're unsure, we recommend making a backup of the original
- components.xml to be on the safe side.)
-
-
- After upgrading to 4.2, API clients are expected to send plain text passwords for
- login and user creation, instead of an MD5 hash. If API client changes are not
- acceptable, make the following changes for backward compatibility:
- Modify componentContext.xml to make PlainTextUserAuthenticator the default
- authenticator (the first entry in the userAuthenticators adapter list is the default):
-
-<!-- Security adapters -->
-<bean id="userAuthenticators" class="com.cloud.utils.component.AdapterList">
- <property name="Adapters">
- <list>
- <ref bean="PlainTextUserAuthenticator"/>
- <ref bean="MD5UserAuthenticator"/>
- <ref bean="LDAPUserAuthenticator"/>
- </list>
- </property>
-</bean>
-
- PlainTextUserAuthenticator works the same way MD5UserAuthenticator worked prior to
- 4.2.
-
-
- If you are using Ubuntu, follow this procedure to upgrade your packages. If not,
- skip to step .
-
- Community Packages
- This section assumes you're using the community supplied packages for &PRODUCT;.
- If you've created your own packages and APT repository, substitute your own URL for
- the ones used in these examples.
-
-
-
- The first order of business will be to change the sources list for each system
- with &PRODUCT; packages. This means all management servers, and any hosts that have
- the KVM agent. (No changes should be necessary for hosts that are running VMware or
- Xen.)
- Start by opening /etc/apt/sources.list.d/cloudstack.list on
- any systems that have &PRODUCT; packages installed.
- This file should have one line, which contains:
- deb http://cloudstack.apt-get.eu/ubuntu precise 4.0
- We'll change it to point to the new package repository:
- deb http://cloudstack.apt-get.eu/ubuntu precise 4.2
- If you're using your own package repository, change this line to read as
- appropriate for your 4.2.0 repository.
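- If you prefer to script the edit, a sed one-liner does it. The sketch below rehearses
- the change on a temporary copy rather than the real cloudstack.list:

```shell
# rehearse the repository switch on a temporary stand-in for cloudstack.list
f=$(mktemp)
echo 'deb http://cloudstack.apt-get.eu/ubuntu precise 4.0' > "$f"
sed -i 's/precise 4\.0/precise 4.2/' "$f"
line=$(cat "$f")
echo "$line"
rm -f "$f"
```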
-
-
- Now update your apt package list:
- $ sudo apt-get update
-
-
- Now that you have the repository configured, it's time to install the
- cloudstack-management package. This will pull in any other
- dependencies you need.
- $ sudo apt-get install cloudstack-management
-
-
- You will need to manually install the cloudstack-agent
- package:
- $ sudo apt-get install cloudstack-agent
- During the installation of cloudstack-agent, APT will copy
- your agent.properties, log4j-cloud.xml,
- and environment.properties from
- /etc/cloud/agent to
- /etc/cloudstack/agent.
- When prompted whether you wish to keep your configuration, say Yes.
-
-
- Verify that the file
- /etc/cloudstack/agent/environment.properties has a line that
- reads:
- paths.script=/usr/share/cloudstack-common
- If not, add the line.
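- The check-and-append can be made idempotent. The sketch below operates on a temporary
- file standing in for environment.properties:

```shell
# demonstrate the idempotent check on a temporary stand-in for environment.properties
f=$(mktemp)
echo 'workers=5' > "$f"
grep -qx 'paths.script=/usr/share/cloudstack-common' "$f" || \
  echo 'paths.script=/usr/share/cloudstack-common' >> "$f"
# running the same check again adds nothing
grep -qx 'paths.script=/usr/share/cloudstack-common' "$f" || \
  echo 'paths.script=/usr/share/cloudstack-common' >> "$f"
count=$(grep -c '^paths.script=' "$f")
echo "$count"
rm -f "$f"
```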
-
-
- Restart the agent:
-
-service cloud-agent stop
-killall jsvc
-service cloudstack-agent start
-
-
-
- During the upgrade, log4j-cloud.xml was simply copied over,
- so the logs will continue to be added to
- /var/log/cloud/agent/agent.log. There's nothing
- wrong with this, but if you prefer to be consistent, you can
- change this by copying over the sample configuration file:
-
-cd /etc/cloudstack/agent
-mv log4j-cloud.xml.dpkg-dist log4j-cloud.xml
-service cloudstack-agent restart
-
-
-
- Once the agent is running, you can uninstall the old cloud-* packages from your
- system:
- sudo dpkg --purge cloud-agent
-
-
-
-
- (VMware only) Additional steps are required for each VMware cluster. These steps
- will not affect running guests in the cloud. These steps are required only for clouds
- using VMware clusters:
-
-
- Stop the Management Server:
- service cloudstack-management stop
-
-
- Generate the encrypted equivalent of your vCenter password:
- java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh input="_your_vCenter_password_" password="`cat /etc/cloudstack/management/key`" verbose=false
- Store the output from this step; you will need to add it to the cluster_details
- and vmware_data_center tables in place of the plain text password.
-
-
- Find the ID of the row of cluster_details table that you have to update:
- mysql -u <username> -p<password>
- select * from cloud.cluster_details;
-
-
- Update the plain text password with the encrypted one:
- update cloud.cluster_details set value = '_ciphertext_from_step_1_' where id = _id_from_step_2_;
-
-
- Confirm that the table is updated:
- select * from cloud.cluster_details;
-
-
- Find the ID of the correct row of vmware_data_center that you want to
- update:
- select * from cloud.vmware_data_center;
-
-
- Update the plain text password with the encrypted one:
- update cloud.vmware_data_center set password = '_ciphertext_from_step_1_' where id = _id_from_step_5_;
-
-
- Confirm that the table is updated:
- select * from cloud.vmware_data_center;
-
-
- Start the &PRODUCT; Management Server:
- service cloudstack-management start
-
-
-
-
- (KVM only) Additional steps are required for each KVM host. These steps will not
- affect running guests in the cloud. They are required only for clouds that use KVM
- hosts, and apply only on the KVM hosts.
-
-
- Manually clean up /var/cache/cloudstack.
-
-
- Copy the 4.2 tar file to the host, untar it, and change directory to the
- resulting directory.
-
-
- Stop the running agent.
- # service cloud-agent stop
-
-
- Update the agent software.
- # ./install.sh
-
-
- Choose "U" to update the packages.
-
-
- Start the agent.
- # service cloudstack-agent start
-
-
-
-
- If you are using CentOS or RHEL, follow this procedure to upgrade your packages. If
- not, skip to step .
-
- Community Packages
- This section assumes you're using the community supplied packages for &PRODUCT;.
- If you've created your own packages and yum repository, substitute your own URL for
- the ones used in these examples.
-
-
-
- The first order of business will be to change the yum repository for each system
- with &PRODUCT; packages. This means all management servers, and any hosts that have
- the KVM agent.
- (No changes should be necessary for hosts that are running VMware or
- Xen.)
- Start by opening /etc/yum.repos.d/cloudstack.repo on any
- systems that have &PRODUCT; packages installed.
- This file should have content similar to the following:
-
-[apache-cloudstack]
-name=Apache CloudStack
-baseurl=http://cloudstack.apt-get.eu/rhel/4.0/
-enabled=1
-gpgcheck=0
-
- If you are using the community provided package repository, change the baseurl
- to http://cloudstack.apt-get.eu/rhel/4.2/
- If you're using your own package repository, change this line to read as
- appropriate for your 4.2.0 repository.
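- A quick way to confirm the repository file points at 4.2 after the edit, rehearsed
- here on a temporary copy of cloudstack.repo:

```shell
# verify the baseurl on a temporary stand-in for cloudstack.repo
f=$(mktemp)
cat > "$f" <<'EOF'
[apache-cloudstack]
name=Apache CloudStack
baseurl=http://cloudstack.apt-get.eu/rhel/4.2/
enabled=1
gpgcheck=0
EOF
result="missing"
if grep -q '^baseurl=http://cloudstack.apt-get.eu/rhel/4.2/$' "$f"; then
  result="ok"
fi
echo "$result"
rm -f "$f"
```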
-
-
- Now that you have the repository configured, it's time to install the
- cloudstack-management package by upgrading the older
- cloud-client package.
- $ sudo yum upgrade cloud-client
-
-
- For KVM hosts, you will need to upgrade the cloud-agent
- package, similarly installing the new version as
- cloudstack-agent.
- $ sudo yum upgrade cloud-agent
- During the installation of cloudstack-agent, the RPM will
- copy your agent.properties,
- log4j-cloud.xml, and
- environment.properties from
- /etc/cloud/agent to
- /etc/cloudstack/agent.
-
-
- For CentOS 5.5, perform the following:
-
-
- Run the following command:
- rpm -Uvh http://download.cloud.com/support/jsvc/jakarta-commons-daemon-jsvc-1.0.1-8.9.el6.x86_64.rpm
-
-
- Upgrade the Usage server.
- sudo yum upgrade cloud-usage
-
-
-
-
- Verify that the file
- /etc/cloudstack/agent/environment.properties has a line that
- reads:
- paths.script=/usr/share/cloudstack-common
- If not, add the line.
-
-
- Restart the agent:
-
-service cloud-agent stop
-killall jsvc
-service cloudstack-agent start
-
-
-
-
-
- Once you've upgraded the packages on your management servers, you'll need to restart
- the system VMs. Make sure port 8096 is open in your local host firewall to do
- this.
- There is a script that will do this for you; all you need to do is run the script
- and supply the IP address for your MySQL instance and your MySQL credentials:
- # nohup cloudstack-sysvmadm -d IP address -u cloud -p -a > sysvm.log 2>&1 &
- You can monitor the log for progress. The process of restarting the system VMs can
- take an hour or more.
- # tail -f sysvm.log
- The output to sysvm.log will look something like this:
-
-Stopping and starting 1 secondary storage vm(s)...
-Done stopping and starting secondary storage vm(s)
-Stopping and starting 1 console proxy vm(s)...
-Done stopping and starting console proxy vm(s).
-Stopping and starting 4 running routing vm(s)...
-Done restarting router(s).
-
-
-
-
- For Xen Hosts: Copy vhd-util
- This step is only for CloudStack installs that are using Xen hosts.
-
- Copy the file vhd-util to
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver.
-
-
-
-
- Upgrade from 3.0.2 to 4.2.0
- This section will guide you from Citrix CloudStack 3.0.2 to Apache CloudStack 4.2.0.
- Sections that are hypervisor-specific will be called out with a note.
-
-
-
- The following upgrade instructions apply only if you're using VMware hosts. If
- you're not using VMware hosts, skip this step and move on to .
-
- In each zone that includes VMware hosts, you need to add a new system VM template.
-
-
- While running the existing 3.0.2 system, log in to the UI as root
- administrator.
-
-
- In the left navigation bar, click Templates.
-
-
- In Select view, click Templates.
-
-
- Click Register template.
- The Register template dialog box is displayed.
-
-
- In the Register template dialog box, specify the following values (do not change
- these):
-
-
-
-
-
-
- Field
- Value
-
-
-
-
- Name
- systemvm-vmware-4.2
-
-
- Description
- systemvm-vmware-4.2
-
-
- URL
- http://download.cloud.com/templates/burbank/burbank-systemvm-08012012.ova
-
-
- Zone
- Choose the zone where this hypervisor is used
-
-
- Hypervisor
- VMware
-
-
- Format
- OVA
-
-
- OS Type
- Debian GNU/Linux 5.0 (32-bit)
-
-
- Extractable
- no
-
-
- Password Enabled
- no
-
-
- Public
- no
-
-
- Featured
- no
-
-
-
-
-
-
- Watch the screen to be sure that the template downloads successfully and enters
- the READY state. Do not proceed until this is successful.
-
-
-
-
- Stop all Usage Servers if running. Run this on all Usage Server hosts.
- # service cloud-usage stop
-
-
- Stop the Management Servers. Run this on all Management Server hosts.
- # service cloud-management stop
-
-
- On the MySQL master, take a backup of the MySQL databases. We recommend performing
- this step even in test upgrades. If there is an issue, this will assist with
- debugging.
- In the following commands, it is assumed that you have set the root password on the
- database, which is a CloudStack recommended best practice. Substitute your own MySQL
- root password.
- # mysqldump -u root -pmysql_password cloud > cloud-backup.dmp
- # mysqldump -u root -pmysql_password cloud_usage > cloud-usage-backup.dmp
-
-
- Either build RPM/DEB packages as detailed in the Installation Guide, or use one of
- the community provided yum/apt repositories to gain access to the &PRODUCT;
- binaries.
-
-
- If you are using Ubuntu, follow this procedure to upgrade your packages. If not,
- skip to step .
-
- Community Packages
- This section assumes you're using the community supplied packages for &PRODUCT;.
- If you've created your own packages and APT repository, substitute your own URL for
- the ones used in these examples.
-
-
-
- The first order of business will be to change the sources list for each system
- with &PRODUCT; packages. This means all management servers, and any hosts that have
- the KVM agent. (No changes should be necessary for hosts that are running VMware or
- Xen.)
- Start by opening /etc/apt/sources.list.d/cloudstack.list on
- any systems that have &PRODUCT; packages installed.
- This file should have one line, which contains:
- deb http://cloudstack.apt-get.eu/ubuntu precise 4.0
- We'll change it to point to the new package repository:
- deb http://cloudstack.apt-get.eu/ubuntu precise 4.2
- If you're using your own package repository, change this line to read as
- appropriate for your 4.2.0 repository.
-
-
- Now update your apt package list:
- $ sudo apt-get update
-
-
- Now that you have the repository configured, it's time to install the
- cloudstack-management package. This will pull in any other
- dependencies you need.
- $ sudo apt-get install cloudstack-management
-
-
- You will need to manually install the cloudstack-agent
- package:
- $ sudo apt-get install cloudstack-agent
- During the installation of cloudstack-agent, APT will copy
- your agent.properties, log4j-cloud.xml,
- and environment.properties from
- /etc/cloud/agent to
- /etc/cloudstack/agent.
- When prompted whether you wish to keep your configuration, say Yes.
-
-
- Verify that the file
- /etc/cloudstack/agent/environment.properties has a line that
- reads:
- paths.script=/usr/share/cloudstack-common
- If not, add the line.
-
-
- Restart the agent:
-
-service cloud-agent stop
-killall jsvc
-service cloudstack-agent start
-
-
-
- During the upgrade, log4j-cloud.xml was simply copied over,
- so the logs will continue to be added to
- /var/log/cloud/agent/agent.log. There's nothing
- wrong with this, but if you prefer to be consistent, you can
- change this by copying over the sample configuration file:
-
-cd /etc/cloudstack/agent
-mv log4j-cloud.xml.dpkg-dist log4j-cloud.xml
-service cloudstack-agent restart
-
-
-
- Once the agent is running, you can uninstall the old cloud-* packages from your
- system:
- sudo dpkg --purge cloud-agent
-
-
-
-
- (KVM only) Additional steps are required for each KVM host. These steps will not
- affect running guests in the cloud. They are required only for clouds that use KVM
- hosts, and apply only on the KVM hosts.
-
-
- Copy the &PRODUCT; 4.2 tar file to the host, untar it, and change directory
- to the resulting directory.
-
-
- Stop the running agent.
- # service cloud-agent stop
-
-
- Update the agent software.
- # ./install.sh
-
-
- Choose "U" to update the packages.
-
-
- Start the agent.
- # service cloudstack-agent start
-
-
-
-
- If you are using CentOS or RHEL, follow this procedure to upgrade your packages. If
- not, skip to step .
-
- Community Packages
- This section assumes you're using the community supplied packages for &PRODUCT;.
- If you've created your own packages and yum repository, substitute your own URL for
- the ones used in these examples.
-
-
-
- The first order of business will be to change the yum repository for each system
- with &PRODUCT; packages. This means all management servers, and any hosts that have
- the KVM agent. (No changes should be necessary for hosts that are running VMware or
- Xen.)
- Start by opening /etc/yum.repos.d/cloudstack.repo on any
- systems that have &PRODUCT; packages installed.
- This file should have content similar to the following:
-
-[apache-cloudstack]
-name=Apache CloudStack
-baseurl=http://cloudstack.apt-get.eu/rhel/4.0/
-enabled=1
-gpgcheck=0
-
- If you are using the community provided package repository, change the baseurl
- to http://cloudstack.apt-get.eu/rhel/4.2/
- If you're using your own package repository, change this line to read as
- appropriate for your 4.2.0 repository.
-
-
- Now that you have the repository configured, it's time to install the
- cloudstack-management package by upgrading the older
- cloud-client package.
- $ sudo yum upgrade cloud-client
-
-
- For KVM hosts, you will need to upgrade the cloud-agent
- package, similarly installing the new version as
- cloudstack-agent.
- $ sudo yum upgrade cloud-agent
- During the installation of cloudstack-agent, the RPM will
- copy your agent.properties,
- log4j-cloud.xml, and
- environment.properties from
- /etc/cloud/agent to
- /etc/cloudstack/agent.
-
-
- Verify that the file
- /etc/cloudstack/agent/environment.properties has a line that
- reads:
- paths.script=/usr/share/cloudstack-common
- If not, add the line.
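The check-and-append above can be done idempotently in one line. This is a sketch on a scratch file; on a real agent host, point PROPS at /etc/cloudstack/agent/environment.properties as named in the step above.

```shell
# Demonstration on a scratch file; on a real host set
# PROPS=/etc/cloudstack/agent/environment.properties instead.
PROPS=$(mktemp)
LINE='paths.script=/usr/share/cloudstack-common'
# Append the line only if an identical line is not already present (idempotent).
grep -qx "$LINE" "$PROPS" || echo "$LINE" >> "$PROPS"
grep -qx "$LINE" "$PROPS" && echo "paths.script is set"
```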
-
-
- Restart the agent:
-
-service cloud-agent stop
-killall jsvc
-service cloudstack-agent start
-
-
-
-
-
- If you have made changes to your copy of
- /etc/cloud/management/components.xml the changes will be
- preserved in the upgrade. However, you need to do the following steps to place these
- changes in a new version of the file which is compatible with version 4.2.0.
-
-
- Make a backup copy of /etc/cloud/management/components.xml.
- For example:
- # mv /etc/cloud/management/components.xml /etc/cloud/management/components.xml-backup
-
-
- Copy /etc/cloud/management/components.xml.rpmnew to create
- a new /etc/cloud/management/components.xml:
- # cp -ap /etc/cloud/management/components.xml.rpmnew /etc/cloud/management/components.xml
-
-
- Merge your changes from the backup file into the new
- components.xml.
- # vi /etc/cloudstack/management/components.xml
-
-
-
- If you have more than one management server node, repeat the upgrade steps on each
- node.
-
-
-
-          After upgrading to 4.2, API clients are expected to send plain text passwords for
-          login and user creation, instead of an MD5 hash. If API client changes are not
-          acceptable, make the following changes for backward compatibility:
-          Modify componentsContext.xml, and make PlainTextUserAuthenticator the default
-          authenticator (the first entry in the userAuthenticators adapter list is the default).
-
-<!-- Security adapters -->
-<bean id="userAuthenticators" class="com.cloud.utils.component.AdapterList">
- <property name="Adapters">
- <list>
- <ref bean="PlainTextUserAuthenticator"/>
- <ref bean="MD5UserAuthenticator"/>
- <ref bean="LDAPUserAuthenticator"/>
- </list>
- </property>
-</bean>
-
- PlainTextUserAuthenticator works the same way MD5UserAuthenticator worked prior to
- 4.2.
-
-
- Start the first Management Server. Do not start any other Management Server nodes
- yet.
- # service cloudstack-management start
- Wait until the databases are upgraded. Ensure that the database upgrade is complete.
- After confirmation, start the other Management Servers one at a time by running the same
- command on each node.
-
-            A Management Server that fails to restart indicates a problem with the upgrade.
-            If the Management Server restarts without any issues, the upgrade completed
-            successfully.
-
-
-
- Start all Usage Servers (if they were running on your previous version). Perform
- this on each Usage Server host.
- # service cloudstack-usage start
-
-
- Additional steps are required for each KVM host. These steps will not affect running
- guests in the cloud. These steps are required only for clouds using KVM as hosts and
- only on the KVM hosts.
-
-
- Configure a yum or apt repository containing the &PRODUCT; packages as outlined
- in the Installation Guide.
-
-
- Stop the running agent.
- # service cloud-agent stop
-
-
- Update the agent software with one of the following command sets as appropriate
- for your environment.
- # yum update cloud-*
- # apt-get update
- # apt-get upgrade cloud-*
-
-
- Edit /etc/cloudstack/agent/agent.properties to change the
- resource parameter from
- "com.cloud.agent.resource.computing.LibvirtComputingResource" to
- "com.cloud.hypervisor.kvm.resource.LibvirtComputingResource".
-
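The resource parameter edit above can be done with sed, the same approach the 2.2.14 upgrade section uses. This sketch operates on a scratch file; on a real KVM host, run the sed against /etc/cloudstack/agent/agent.properties instead.

```shell
# Demonstration on a scratch file; on a real KVM host run the sed against
# /etc/cloudstack/agent/agent.properties instead.
AGENT=$(mktemp)
echo 'resource=com.cloud.agent.resource.computing.LibvirtComputingResource' > "$AGENT"
# Rewrite the old resource class name to its new package location.
sed -i 's/com.cloud.agent.resource.computing.LibvirtComputingResource/com.cloud.hypervisor.kvm.resource.LibvirtComputingResource/g' "$AGENT"
cat "$AGENT"
```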
-
- Start the cloud agent and cloud management services.
- # service cloudstack-agent start
-
-
- When the Management Server is up and running, log in to the CloudStack UI and
- restart the virtual router for proper functioning of all the features.
-
-
-
-
-          Log in to the CloudStack UI as administrator, and check the status of the hosts. All
-          hosts should come to the Up state (except those that you know to be offline). You may
-          need to wait 20 or 30 minutes, depending on the number of hosts.
-
- Troubleshooting: If login fails, clear your browser cache and reload the
- page.
-
-          Do not proceed to the next step until the hosts show in the Up state.
-
-
- If you are upgrading from 3.0.2, perform the following:
-
-
- Ensure that the admin port is set to 8096 by using the "integration.api.port"
- global parameter.
- This port is used by the cloud-sysvmadm script at the end of the upgrade
- procedure. For information about how to set this parameter, see "Setting Global
- Configuration Parameters" in the Installation Guide.
-
-
- Restart the Management Server.
-
- If you don't want the admin port to remain open, you can set it to null after
- the upgrade is done and restart the management server.
-
-
-
-
-
- Run the cloud-sysvmadm script to stop, then start, all Secondary
- Storage VMs, Console Proxy VMs, and virtual routers. Run the script once on each
- management server. Substitute your own IP address of the MySQL instance, the MySQL user
- to connect as, and the password to use for that user. In addition to those parameters,
- provide the -c and -r arguments. For
- example:
-          # nohup cloud-sysvmadm -d 192.168.1.5 -u cloud -p password -c -r > sysvm.log 2>&1 &
- # tail -f sysvm.log
- This might take up to an hour or more to run, depending on the number of accounts in
- the system.
-
-
- If needed, upgrade all Citrix XenServer hypervisor hosts in your cloud to a version
- supported by CloudStack 4.2.0. The supported versions are XenServer 5.6 SP2 and 6.0.2.
- Instructions for upgrade can be found in the CloudStack 4.2.0 Installation Guide under
- "Upgrading XenServer Versions."
-
-
- Now apply the XenServer hotfix XS602E003 (and any other needed hotfixes) to
- XenServer v6.0.2 hypervisor hosts.
-
-
- Disconnect the XenServer cluster from CloudStack.
- In the left navigation bar of the CloudStack UI, select Infrastructure. Under
- Clusters, click View All. Select the XenServer cluster and click Actions -
- Unmanage.
- This may fail if there are hosts not in one of the states Up, Down,
- Disconnected, or Alert. You may need to fix that before unmanaging this
- cluster.
- Wait until the status of the cluster has reached Unmanaged. Use the CloudStack
- UI to check on the status. When the cluster is in the unmanaged state, there is no
- connection to the hosts in the cluster.
-
-
- To clean up the VLAN, log in to one XenServer host and run:
- /opt/xensource/bin/cloud-clean-vlan.sh
-
-
- Now prepare the upgrade by running the following on one XenServer host:
- /opt/xensource/bin/cloud-prepare-upgrade.sh
- If you see a message like "can't eject CD", log in to the VM and unmount the CD,
- then run this script again.
-
-
- Upload the hotfix to the XenServer hosts. Always start with the Xen pool master,
- then the slaves. Using your favorite file copy utility (e.g. WinSCP), copy the
- hotfixes to the host. Place them in a temporary folder such as /tmp.
- On the Xen pool master, upload the hotfix with this command:
- xe patch-upload file-name=XS602E003.xsupdate
- Make a note of the output from this command, which is a UUID for the hotfix
- file. You'll need it in another step later.
-
- (Optional) If you are applying other hotfixes as well, you can repeat the
- commands in this section with the appropriate hotfix number. For example,
- XS602E004.xsupdate.
-
-
-
- Manually live migrate all VMs on this host to another host. First, get a list of
- the VMs on this host:
- # xe vm-list
- Then use this command to migrate each VM. Replace the example host name and VM
- name with your own:
-          # xe vm-migrate live=true host=host-name vm=VM-name
-
- Troubleshooting
- If you see a message like "You attempted an operation on a VM which requires
- PV drivers to be installed but the drivers were not detected," run:
-            /opt/xensource/bin/make_migratable.sh b6cf79c8-02ee-050b-922f-49583d9f1a14
-
-
-
- Apply the hotfix. First, get the UUID of this host:
- # xe host-list
- Then use the following command to apply the hotfix. Replace the example host
- UUID with the current host ID, and replace the hotfix UUID with the output from the
- patch-upload command you ran on this machine earlier. You can also get the hotfix
- UUID by running xe patch-list.
- xe patch-apply host-uuid=host-uuid uuid=hotfix-uuid
-
-
- Copy the following files from the CloudStack Management Server to the
- host.
-
-
-
-
-
-
- Copy from here...
- ...to here
-
-
-
-
- /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py
- /opt/xensource/sm/NFSSR.py
-
-
- /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/setupxenserver.sh
- /opt/xensource/bin/setupxenserver.sh
-
-
- /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/make_migratable.sh
- /opt/xensource/bin/make_migratable.sh
-
-
-
-
-
-
- (Only for hotfixes XS602E005 and XS602E007) You need to apply a new Cloud
- Support Pack.
-
-
- Download the CSP software onto the XenServer host from one of the following
- links:
- For hotfix XS602E005: http://coltrane.eng.hq.xensource.com/release/XenServer-6.x/XS-6.0.2/hotfixes/XS602E005/56710/xe-phase-2/xenserver-cloud-supp.tgz
- For hotfix XS602E007: http://coltrane.eng.hq.xensource.com/release/XenServer-6.x/XS-6.0.2/hotfixes/XS602E007/57824/xe-phase-2/xenserver-cloud-supp.tgz
-
-
- Extract the file:
- # tar xf xenserver-cloud-supp.tgz
-
-
- Run the following script:
- # xe-install-supplemental-pack xenserver-cloud-supp.iso
-
-
- If the XenServer host is part of a zone that uses basic networking, disable
- Open vSwitch (OVS):
- # xe-switch-network-backend bridge
-
-
-
-
- Reboot this XenServer host.
-
-
- Run the following:
- /opt/xensource/bin/setupxenserver.sh
-
- If the message "mv: cannot stat `/etc/cron.daily/logrotate': No such file or
- directory" appears, you can safely ignore it.
-
-
-
- Run the following:
-for pbd in `xe pbd-list currently-attached=false | grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
-
-
- On each slave host in the Xen pool, repeat these steps, starting from "manually
- live migrate VMs."
-
-
-
-
-
- Troubleshooting Tip
- If passwords which you know to be valid appear not to work after upgrade, or other UI
- issues are seen, try clearing your browser cache and reloading the UI page.
-
-
-
- Upgrade from 2.2.14 to 4.2.0
-
-
-          Ensure that you query your IP address usage records and process them; for example,
- issue invoices for any usage that you have not yet billed users for.
- Starting in 3.0.2, the usage record format for IP addresses is the same as the rest
- of the usage types. Instead of a single record with the assignment and release dates,
- separate records are generated per aggregation period with start and end dates. After
- upgrading to 4.2.0, any existing IP address usage records in the old format will no
- longer be available.
-
-
- If you are using version 2.2.0 - 2.2.13, first upgrade to 2.2.14 by using the
- instructions in the 2.2.14
- Release Notes.
-
- KVM Hosts
- If KVM hypervisor is used in your cloud, be sure you completed the step to insert
- a valid username and password into the host_details table on each KVM node as
- described in the 2.2.14 Release Notes. This step is critical, as the database will be
- encrypted after the upgrade to 4.2.0.
-
-
-
- While running the 2.2.14 system, log in to the UI as root administrator.
-
-
- Using the UI, add a new System VM template for each hypervisor type that is used in
- your cloud. In each zone, add a system VM template for each hypervisor used in that
-          zone.
-
-
- In the left navigation bar, click Templates.
-
-
- In Select view, click Templates.
-
-
- Click Register template.
- The Register template dialog box is displayed.
-
-
- In the Register template dialog box, specify the following values depending on
- the hypervisor type (do not change these):
-
-
-
-
-
-
- Hypervisor
- Description
-
-
-
-
- XenServer
- Name: systemvm-xenserver-4.2.0
- Description: systemvm-xenserver-4.2.0
-            URL: http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2
- Zone: Choose the zone where this hypervisor is used
- Hypervisor: XenServer
- Format: VHD
- OS Type: Debian GNU/Linux 6.0 (32-bit)
- Extractable: no
- Password Enabled: no
- Public: no
- Featured: no
-
-
-
- KVM
- Name: systemvm-kvm-4.2.0
- Description: systemvm-kvm-4.2.0
- URL:
- http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
- Zone: Choose the zone where this hypervisor is used
- Hypervisor: KVM
- Format: QCOW2
- OS Type: Debian GNU/Linux 5.0 (32-bit)
- Extractable: no
- Password Enabled: no
- Public: no
- Featured: no
-
-
-
- VMware
- Name: systemvm-vmware-4.2.0
- Description: systemvm-vmware-4.2.0
- URL:
- http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ova
- Zone: Choose the zone where this hypervisor is used
- Hypervisor: VMware
- Format: OVA
- OS Type: Debian GNU/Linux 5.0 (32-bit)
- Extractable: no
- Password Enabled: no
- Public: no
- Featured: no
-
-
-
-
-
-
-
-
-
- Watch the screen to be sure that the template downloads successfully and enters the
-          READY state. Do not proceed until this is successful.
-
-
- WARNING: If you use more than one type of
- hypervisor in your cloud, be sure you have repeated these steps to download the system
- VM template for each hypervisor type. Otherwise, the upgrade will fail.
-
-
- Stop all Usage Servers if running. Run this on all Usage Server hosts.
- # service cloud-usage stop
-
-
- Stop the Management Servers. Run this on all Management Server hosts.
- # service cloud-management stop
-
-
- On the MySQL master, take a backup of the MySQL databases. We recommend performing
- this step even in test upgrades. If there is an issue, this will assist with
- debugging.
- In the following commands, it is assumed that you have set the root password on the
- database, which is a CloudStack recommended best practice. Substitute your own MySQL
- root password.
-          # mysqldump -u root -pmysql_password cloud > cloud-backup.dmp
-          # mysqldump -u root -pmysql_password cloud_usage > cloud-usage-backup.dmp
-
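Before proceeding, it is worth confirming the backups actually produced data. A minimal sketch, assuming the dump filenames from the step above:

```shell
# Verify that each dump file exists and is non-empty before continuing the
# upgrade. The filenames match the mysqldump step above; adjust if you used
# different names.
for dump in cloud-backup.dmp cloud-usage-backup.dmp; do
    if [ -s "$dump" ]; then
        echo "OK: $dump ($(wc -c < "$dump") bytes)"
    else
        echo "ERROR: $dump is missing or empty" >&2
    fi
done
```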
-
-
- Either build RPM/DEB packages as detailed in the Installation Guide, or use one of
- the community provided yum/apt repositories to gain access to the &PRODUCT; binaries.
-
-
-
- If you are using Ubuntu, follow this procedure to upgrade your packages. If not,
- skip to step .
-
- Community Packages
- This section assumes you're using the community supplied packages for &PRODUCT;.
- If you've created your own packages and APT repository, substitute your own URL for
- the ones used in these examples.
-
-
-
- The first order of business will be to change the sources list for each system
- with &PRODUCT; packages. This means all management servers, and any hosts that have
- the KVM agent. (No changes should be necessary for hosts that are running VMware or
- Xen.)
- Start by opening /etc/apt/sources.list.d/cloudstack.list on
- any systems that have &PRODUCT; packages installed.
- This file should have one line, which contains:
- deb http://cloudstack.apt-get.eu/ubuntu precise 4.0
- We'll change it to point to the new package repository:
- deb http://cloudstack.apt-get.eu/ubuntu precise 4.2
- If you're using your own package repository, change this line to read as
- appropriate for your 4.2.0 repository.
-
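Since the sources list has a single line, the edit above can also be scripted. This sketch works on a scratch copy; on a real system, run the sed against /etc/apt/sources.list.d/cloudstack.list instead.

```shell
# Demonstration on a scratch copy; on a real system point SRC at
# /etc/apt/sources.list.d/cloudstack.list instead.
SRC=$(mktemp)
echo 'deb http://cloudstack.apt-get.eu/ubuntu precise 4.0' > "$SRC"
# Point the repository component at 4.2 instead of 4.0.
sed -i 's/precise 4.0$/precise 4.2/' "$SRC"
cat "$SRC"
```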
-
- Now update your apt package list:
- $ sudo apt-get update
-
-
- Now that you have the repository configured, it's time to install the
- cloudstack-management package. This will pull in any other
- dependencies you need.
- $ sudo apt-get install cloudstack-management
-
-
- On KVM hosts, you will need to manually install the
- cloudstack-agent package:
- $ sudo apt-get install cloudstack-agent
- During the installation of cloudstack-agent, APT will copy
- your agent.properties, log4j-cloud.xml,
- and environment.properties from
- /etc/cloud/agent to
- /etc/cloudstack/agent.
- When prompted whether you wish to keep your configuration, say Yes.
-
-
- Verify that the file
- /etc/cloudstack/agent/environment.properties has a line that
- reads:
- paths.script=/usr/share/cloudstack-common
- If not, add the line.
-
-
- Restart the agent:
-
-service cloud-agent stop
-killall jsvc
-service cloudstack-agent start
-
-
-
- During the upgrade, log4j-cloud.xml was simply copied over,
- so the logs will continue to be added to
- /var/log/cloud/agent/agent.log. There's nothing
- wrong with this, but if you prefer to be consistent, you can
- change this by copying over the sample configuration file:
-
-cd /etc/cloudstack/agent
-mv log4j-cloud.xml.dpkg-dist log4j-cloud.xml
-service cloudstack-agent restart
-
-
-
- Once the agent is running, you can uninstall the old cloud-* packages from your
- system:
- sudo dpkg --purge cloud-agent
-
-
-
-
- If you are using CentOS or RHEL, follow this procedure to upgrade your packages. If
- not, skip to step .
-
- Community Packages
- This section assumes you're using the community supplied packages for &PRODUCT;.
- If you've created your own packages and yum repository, substitute your own URL for
- the ones used in these examples.
-
-
-
- The first order of business will be to change the yum repository for each system
- with &PRODUCT; packages. This means all management servers, and any hosts that have
- the KVM agent. (No changes should be necessary for hosts that are running VMware or
- Xen.)
- Start by opening /etc/yum.repos.d/cloudstack.repo on any
- systems that have &PRODUCT; packages installed.
- This file should have content similar to the following:
-
-[apache-cloudstack]
-name=Apache CloudStack
-baseurl=http://cloudstack.apt-get.eu/rhel/4.0/
-enabled=1
-gpgcheck=0
-
- If you are using the community provided package repository, change the baseurl
- to http://cloudstack.apt-get.eu/rhel/4.2/
- If you're using your own package repository, change this line to read as
- appropriate for your 4.2.0 repository.
-
-
- Now that you have the repository configured, it's time to install the
- cloudstack-management package by upgrading the older
- cloud-client package.
- $ sudo yum upgrade cloud-client
-
-
- For KVM hosts, you will need to upgrade the cloud-agent
- package, similarly installing the new version as
- cloudstack-agent.
- $ sudo yum upgrade cloud-agent
- During the installation of cloudstack-agent, the RPM will
- copy your agent.properties,
- log4j-cloud.xml, and
- environment.properties from
- /etc/cloud/agent to
- /etc/cloudstack/agent.
-
-
- Verify that the file
- /etc/cloudstack/agent/environment.properties has a line that
- reads:
- paths.script=/usr/share/cloudstack-common
- If not, add the line.
-
-
- Restart the agent:
-
-service cloud-agent stop
-killall jsvc
-service cloudstack-agent start
-
-
-
-
-
- If you have made changes to your existing copy of the file components.xml in your
- previous-version CloudStack installation, the changes will be preserved in the upgrade.
-          However, you need to do the following steps to place these changes in a new version of
-          the file which is compatible with version 4.2.0.
-
- How will you know whether you need to do this? If the upgrade output in the
- previous step included a message like the following, then some custom content was
- found in your old components.xml, and you need to merge the two files:
-
- warning: /etc/cloud/management/components.xml created as /etc/cloud/management/components.xml.rpmnew
-
-
- Make a backup copy of your
- /etc/cloud/management/components.xml file. For
- example:
-          # mv /etc/cloud/management/components.xml /etc/cloud/management/components.xml-backup
-
-
- Copy /etc/cloud/management/components.xml.rpmnew to create
- a new /etc/cloud/management/components.xml:
-          # cp -ap /etc/cloud/management/components.xml.rpmnew /etc/cloud/management/components.xml
-
-
- Merge your changes from the backup file into the new components.xml file.
-          # vi /etc/cloud/management/components.xml
-
-
-
-
-
-          After upgrading to 4.2, API clients are expected to send plain text passwords for
-          login and user creation, instead of an MD5 hash. If API client changes are not
-          acceptable, make the following changes for backward compatibility:
-          Modify componentsContext.xml, and make PlainTextUserAuthenticator the default
-          authenticator (the first entry in the userAuthenticators adapter list is the default).
-
-<!-- Security adapters -->
-<bean id="userAuthenticators" class="com.cloud.utils.component.AdapterList">
- <property name="Adapters">
- <list>
- <ref bean="PlainTextUserAuthenticator"/>
- <ref bean="MD5UserAuthenticator"/>
- <ref bean="LDAPUserAuthenticator"/>
- </list>
- </property>
-</bean>
-
- PlainTextUserAuthenticator works the same way MD5UserAuthenticator worked prior to
- 4.2.
-
-
- If you have made changes to your existing copy of the
- /etc/cloud/management/db.properties file in your previous-version
- CloudStack installation, the changes will be preserved in the upgrade. However, you need
-          to do the following steps to place these changes in a new version of the file which is
-          compatible with version 4.2.0.
-
-
- Make a backup copy of your file
- /etc/cloud/management/db.properties. For example:
-          # mv /etc/cloud/management/db.properties /etc/cloud/management/db.properties-backup
-
-
- Copy /etc/cloud/management/db.properties.rpmnew to create a
- new /etc/cloud/management/db.properties:
-          # cp -ap /etc/cloud/management/db.properties.rpmnew /etc/cloud/management/db.properties
-
-
- Merge your changes from the backup file into the new db.properties file.
-          # vi /etc/cloud/management/db.properties
-
-
-
-
- On the management server node, run the following command. It is recommended that you
- use the command-line flags to provide your own encryption keys. See Password and Key
- Encryption in the Installation Guide.
-          # cloudstack-setup-encryption -e encryption_type -m management_server_key -k database_key
- When used without arguments, as in the following example, the default encryption
- type and keys will be used:
-
-
- (Optional) For encryption_type, use file or web to indicate the technique used
- to pass in the database encryption password. Default: file.
-
-
- (Optional) For management_server_key, substitute the default key that is used to
- encrypt confidential parameters in the properties file. Default: password. It is
-            highly recommended that you replace this with a more secure value.
-
-
- (Optional) For database_key, substitute the default key that is used to encrypt
- confidential parameters in the CloudStack database. Default: password. It is highly
- recommended that you replace this with a more secure value.
-
-
-
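One way to choose stronger keys is to generate them randomly and then run the setup command with them. The sketch below only prints the resulting command rather than running it, since cloudstack-setup-encryption exists only on a management server, and, as noted above, the same keys must be reused on every node.

```shell
# Generate two random 24-character keys and print the resulting command.
# This only echoes the command; run it for real on each management server,
# reusing the same keys on every node.
MS_KEY=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
DB_KEY=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
echo "cloudstack-setup-encryption -e file -m $MS_KEY -k $DB_KEY"
```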
-
- Repeat steps 10 - 14 on every management server node. If you provided your own
- encryption key in step 14, use the same key on all other management servers.
-
-
- Start the first Management Server. Do not start any other Management Server nodes
- yet.
- # service cloudstack-management start
- Wait until the databases are upgraded. Ensure that the database upgrade is complete.
- You should see a message like "Complete! Done." After confirmation, start the other
- Management Servers one at a time by running the same command on each node.
-
-
- Start all Usage Servers (if they were running on your previous version). Perform
- this on each Usage Server host.
- # service cloudstack-usage start
-
-
- (KVM only) Additional steps are required for each KVM host. These steps will not
- affect running guests in the cloud. These steps are required only for clouds using KVM
- as hosts and only on the KVM hosts.
-
-
-            Copy the CloudStack 4.2 tar file to the host, untar it, and change directory
- to the resulting directory.
-
-
- Stop the running agent.
- # service cloud-agent stop
-
-
- Update the agent software.
- # ./install.sh
-
-
- Choose "U" to update the packages.
-
-
- Start the agent.
- # service cloudstack-agent start
-
-
-
-
- (KVM only) Perform the following additional steps on each KVM host.
- These steps will not affect running guests in the cloud. These steps are required
- only for clouds using KVM as hosts and only on the KVM hosts.
-
-
- Configure your CloudStack package repositories as outlined in the Installation
- Guide
-
-
- Stop the running agent.
- # service cloud-agent stop
-
-
- Update the agent software with one of the following command sets as
- appropriate.
-            # yum update cloud-*
-
-            # apt-get update
-            # apt-get upgrade cloud-*
-
-
-
- Start the agent.
- # service cloudstack-agent start
-
-
-            Update the resource parameter in the
-            agent.properties file by using the following command:
- sed -i 's/com.cloud.agent.resource.computing.LibvirtComputingResource/com.cloud.hypervisor.kvm.resource.LibvirtComputingResource/g' /etc/cloud/agent/agent.properties
-
-
- Start the cloud agent and cloud management services.
-
-
- When the Management Server is up and running, log in to the CloudStack UI and
- restart the virtual router for proper functioning of all the features.
-
-
-
-
- Log in to the CloudStack UI as admin, and check the status of the hosts. All hosts
- should come to Up state (except those that you know to be offline). You may need to wait
- 20 or 30 minutes, depending on the number of hosts.
- Do not proceed to the next step until the hosts show in the Up state. If the hosts
- do not come to the Up state, contact support.
-
-
- Run the following script to stop, then start, all Secondary Storage VMs, Console
- Proxy VMs, and virtual routers.
-
-
- Run the command once on one management server. Substitute your own IP address of
- the MySQL instance, the MySQL user to connect as, and the password to use for that
- user. In addition to those parameters, provide the "-c" and "-r" arguments. For
- example:
-            # nohup cloud-sysvmadm -d 192.168.1.5 -u cloud -p password -c -r > sysvm.log 2>&1 &
-            # tail -f sysvm.log
- This might take up to an hour or more to run, depending on the number of
- accounts in the system.
-
-
- After the script terminates, check the log to verify correct execution:
-            # tail -f sysvm.log
- The content should be like the following:
-
- Stopping and starting 1 secondary storage vm(s)...
- Done stopping and starting secondary storage vm(s)
- Stopping and starting 1 console proxy vm(s)...
- Done stopping and starting console proxy vm(s).
- Stopping and starting 4 running routing vm(s)...
- Done restarting router(s).
-
-
-
-
-
- If you would like additional confirmation that the new system VM templates were
- correctly applied when these system VMs were rebooted, SSH into the System VM and check
- the version.
- Use one of the following techniques, depending on the hypervisor.
-
- XenServer or KVM:
- SSH in by using the link local IP address of the system VM. For example, in the
- command below, substitute your own path to the private key used to log in to the
- system VM and your own link local IP.
-
- Run the following commands on the XenServer or KVM host on which the system VM is
- present:
-      # ssh -i private-key-path link-local-ip -p 3922
- # cat /etc/cloudstack-release
- The output should be like the following:
- Cloudstack Release 4.0.0-incubating Mon Oct 9 15:10:04 PST 2012
-
- ESXi
- SSH in using the private IP address of the system VM. For example, in the command
- below, substitute your own path to the private key used to log in to the system VM and
- your own private IP.
-
- Run the following commands on the Management Server:
-      # ssh -i private-key-path private-ip -p 3922
-      # cat /etc/cloudstack-release
-
- The output should be like the following:
- Cloudstack Release 4.0.0-incubating Mon Oct 9 15:10:04 PST 2012
-
-
-          If needed, upgrade all Citrix XenServer hypervisor hosts in your cloud to a version
-          supported by CloudStack 4.2.0. The supported versions are XenServer 5.6 SP2
-          and 6.0.2. Instructions for upgrade can be found in the CloudStack 4.2.0
-          Installation Guide.
-
-
- Apply the XenServer hotfix XS602E003 (and any other needed hotfixes) to XenServer
- v6.0.2 hypervisor hosts.
-
-
- Disconnect the XenServer cluster from CloudStack.
- In the left navigation bar of the CloudStack UI, select Infrastructure. Under
- Clusters, click View All. Select the XenServer cluster and click Actions -
- Unmanage.
- This may fail if there are hosts not in one of the states Up, Down,
- Disconnected, or Alert. You may need to fix that before unmanaging this
- cluster.
- Wait until the status of the cluster has reached Unmanaged. Use the CloudStack
- UI to check on the status. When the cluster is in the unmanaged state, there is no
- connection to the hosts in the cluster.
-
-
- To clean up the VLAN, log in to one XenServer host and run:
- /opt/xensource/bin/cloud-clean-vlan.sh
-
-
- Prepare the upgrade by running the following on one XenServer host:
- /opt/xensource/bin/cloud-prepare-upgrade.sh
- If you see a message like "can't eject CD", log in to the VM and umount the CD,
- then run this script again.
-
-
- Upload the hotfix to the XenServer hosts. Always start with the Xen pool master,
- then the slaves. Using your favorite file copy utility (e.g. WinSCP), copy the
- hotfixes to the host. Place them in a temporary folder such as /root or /tmp.
- On the Xen pool master, upload the hotfix with this command:
- xe patch-upload file-name=XS602E003.xsupdate
- Make a note of the output from this command, which is a UUID for the hotfix
- file. You'll need it in another step later.
-
- (Optional) If you are applying other hotfixes as well, you can repeat the
- commands in this section with the appropriate hotfix number. For example,
- XS602E004.xsupdate.
-
-
-
- Manually live migrate all VMs on this host to another host. First, get a list of
- the VMs on this host:
- # xe vm-list
- Then use this command to migrate each VM. Replace the example host name and VM
- name with your own:
-          # xe vm-migrate live=true host=host-name vm=VM-name
-
- Troubleshooting
- If you see a message like "You attempted an operation on a VM which requires
- PV drivers to be installed but the drivers were not detected," run:
-            /opt/xensource/bin/make_migratable.sh b6cf79c8-02ee-050b-922f-49583d9f1a14
-
-
-
- Apply the hotfix. First, get the UUID of this host:
- # xe host-list
- Then use the following command to apply the hotfix. Replace the example host
- UUID with the current host ID, and replace the hotfix UUID with the output from the
- patch-upload command you ran on this machine earlier. You can also get the hotfix
- UUID by running xe patch-list.
-          xe patch-apply host-uuid=host-uuid uuid=hotfix-uuid
-
-
- Copy the following files from the CloudStack Management Server to the
- host.
-
-
-
-
-
-
- Copy from here...
- ...to here
-
-
-
-
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py
- /opt/xensource/sm/NFSSR.py
-
-
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/setupxenserver.sh
- /opt/xensource/bin/setupxenserver.sh
-
-
-            /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/make_migratable.sh
- /opt/xensource/bin/make_migratable.sh
-
-
-
-
-
-
- (Only for hotfixes XS602E005 and XS602E007) You need to apply a new Cloud
- Support Pack.
-
-
- Download the CSP software onto the XenServer host from one of the following
- links:
- For hotfix XS602E005: http://coltrane.eng.hq.xensource.com/release/XenServer-6.x/XS-6.0.2/hotfixes/XS602E005/56710/xe-phase-2/xenserver-cloud-supp.tgz
- For hotfix XS602E007: http://coltrane.eng.hq.xensource.com/release/XenServer-6.x/XS-6.0.2/hotfixes/XS602E007/57824/xe-phase-2/xenserver-cloud-supp.tgz
-
-
- Extract the file:
- # tar xf xenserver-cloud-supp.tgz
-
-
- Run the following script:
-            # xe-install-supplemental-pack xenserver-cloud-supp.iso
-
-
- If the XenServer host is part of a zone that uses basic networking, disable
- Open vSwitch (OVS):
- # xe-switch-network-backend bridge
-
-
-
-
- Reboot this XenServer host.
-
-
- Run the following:
- /opt/xensource/bin/setupxenserver.sh
-
- If the message "mv: cannot stat `/etc/cron.daily/logrotate': No such file or
- directory" appears, you can safely ignore it.
-
-
-
- Run the following:
-            for pbd in `xe pbd-list currently-attached=false | grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
-
-
-
- On each slave host in the Xen pool, repeat these steps, starting from "manually
- live migrate VMs."
-
-
-
-
-
-
-
- API Changes in 4.2
-
- Added API Commands in 4.2
-
- Secondary Storage
-
-
- addImageStore (Adds all types of secondary storage providers, S3/Swift/NFS)
-
-
- createSecondaryStagingStore (Adds a staging secondary storage in each zone)
-
-
- listImageStores (Lists all secondary storages, S3/Swift/NFS)
-
-
- listSecondaryStagingStores (Lists all staging secondary storages)
-
-
- addS3 (Adds an Amazon Simple Storage Service instance.) It is recommended to use
- addImageStore instead.
-
-
- listS3s (Lists all the Amazon Simple Storage Service instances.) It is recommended
- to use listImageStores instead.
-
-
-
-
- VM Snapshot
-
-
- createVMSnapshot (Creates a virtual machine snapshot; see )
-
-
- deleteVMSnapshot (Deletes a virtual machine snapshot)
-
-
- listVMSnapshot (Shows a virtual machine snapshot)
-
-
- revertToVMSnapshot (Returns a virtual machine to the state and data saved in a
- given snapshot)
-
-
-
-
- Load Balancer Health Check
-
-
- createLBHealthCheckPolicy (Creates a new health check policy for a load balancer
- rule; see )
-
-
- deleteLBHealthCheckPolicy (Deletes an existing health check policy from a load
- balancer rule)
-
-
- listLBHealthCheckPolicies (Displays the health check policy for a load balancer
- rule)
-
-
-
-
- Egress Firewall Rules
-
-
- createEgressFirewallRules (Creates an egress firewall rule on the guest network;
- see )
-
-
- deleteEgressFirewallRules (Deletes an egress firewall rule on the guest
- network.)
-
-
- listEgressFirewallRules (Lists the egress firewall rules configured for a guest
- network.)
-
-
-
-
- SSH Key
-
-
- resetSSHKeyForVirtualMachine (Resets the SSH key for a virtual machine.)
-
-
-
-
- Bare Metal
-
-
- addBaremetalHost (Adds a new host. Technically, this API command was present in
- v3.0.6, but its functionality was disabled. See )
-
-
- addBaremetalDhcp (Adds a DHCP server for bare metal hosts)
-
-
- addBaremetalPxePingServer (Adds a PXE PING server for bare metal hosts)
-
-
- addBaremetalPxeKickStartServer (Adds a PXE server for bare metal hosts)
-
-
- listBaremetalDhcp (Shows the DHCP servers currently defined for bare metal
- hosts)
-
-
- listBaremetalPxePingServer (Shows the PXE PING servers currently defined for bare
- metal hosts)
-
-
-
-
- NIC
-
-
- addNicToVirtualMachine (Adds a new NIC to the specified VM on a selected network;
- see )
-
-
- removeNicFromVirtualMachine (Removes the specified NIC from a selected VM.)
-
-
- updateDefaultNicForVirtualMachine (Updates the specified NIC to be the default one
- for a selected VM.)
-
-
- addIpToNic (Assigns secondary IP to a NIC.)
-
-
- removeIpFromNic (Removes a secondary IP from a NIC.)
-
-
- listNics (Lists the NICs associated with a VM.)
-
-
-
-
- Regions
-
-
- addRegion (Registers a Region into another Region; see )
-
-
- updateRegion (Updates Region details: ID, Name, Endpoint, User API Key, and User
- Secret Key.)
-
-
- removeRegion (Removes a Region from current Region.)
-
-
- listRegions (Lists all Regions. They can be filtered by using the ID or
- Name.)
-
-
-
-
- User
-
-
- getUser (This API can only be used by the Admin. Get user account details by using
- the API Key.)
-
-
-
-
- API Throttling
-
-
- getApiLimit (Shows the number of remaining API calls for the invoking user in the
- current window)
-
-
- resetApiLimit (Resets the API count. For root admin, if the accountId parameter
- is passed, it resets the count for that particular account; otherwise it resets
- all counters.)
-
-
-
-
- Locking
-
-
- lockAccount (Locks an account)
-
-
- lockUser (Locks a user account)
-
-
-
-
- VM Scaling
-
-
- scaleVirtualMachine (Scales the virtual machine to a new service offering.)
-
-
-
-
- Migrate Volume
-
-
- migrateVirtualMachineWithVolume (Attempts to migrate a VM together with its
- volumes to a different host.)
-
-
- listStorageProviders (Lists storage providers.)
-
-
- findStoragePoolsForMigration (Lists storage pools available for migrating a
- volume.)
-
-
-
-
- Dedicated IP and VLAN
-
-
- dedicatePublicIpRange (Dedicates a Public IP range to an account.)
-
-
- releasePublicIpRange (Releases a Public IP range back to the system pool.)
-
-
- dedicateGuestVlanRange (Dedicates a guest VLAN range to an account.)
-
-
- releaseDedicatedGuestVlanRange (Releases a dedicated guest VLAN range to the
- system.)
-
-
- listDedicatedGuestVlanRanges (Lists dedicated guest VLAN ranges.)
-
-
-
-
- Port Forwarding
-
-
- updatePortForwardingRule (Updates a port forwarding rule. Only the private port
- and the VM can be updated.)
-
-
-
-
- Scale System VM
-
-
- scaleSystemVm (Scales the service offering for a system VM: console proxy or
- secondary storage VM.)
-
-
-
-
- Deployment Planner
-
-
- listDeploymentPlanners (Lists all the deployment planners available.)
-
-
-
-
- Archive and Delete Events and Alerts
-
-
- archiveEvents (Archive one or more events.)
-
-
- deleteEvents (Delete one or more events.)
-
-
- archiveAlerts (Archive one or more alerts.)
-
-
- deleteAlerts (Delete one or more alerts.)
-
-
-
-
- Host Reservation
-
-
- releaseHostReservation (Releases host reservation.)
-
-
-
-
- Resize Volume
-
-
- resizeVolume (Resizes a volume.)
-
-
- updateVolume (Updates the volume.)
-
-
-
-
- Egress Firewall Rules
-
-
- createEgressFirewallRule (Creates an egress firewall rule for a given network.)
-
-
-
- deleteEgressFirewallRule (Deletes an egress firewall rule.)
-
-
- listEgressFirewallRules (Lists all egress firewall rules for a network.)
-
-
-
-
- Network ACL
-
-
- updateNetworkACLItem (Updates ACL item with specified ID.)
-
-
- createNetworkACLList (Creates a Network ACL for the given VPC.)
-
-
- deleteNetworkACLList (Deletes a Network ACL.)
-
-
- replaceNetworkACLList (Replaces ACL associated with a Network or private gateway.)
-
-
-
- listNetworkACLLists (Lists all network ACLs.)
-
-
-
-
- Resource Detail
-
-
- addResourceDetail (Adds detail for the Resource.)
-
-
- removeResourceDetail (Removes detail for the Resource.)
-
-
- listResourceDetails (List resource details.)
-
-
-
-
- Nicira Integration
-
-
- addNiciraNvpDevice (Adds a Nicira NVP device.)
-
-
- deleteNiciraNvpDevice (Deletes a Nicira NVP device.)
-
-
- listNiciraNvpDevices (Lists Nicira NVP devices.)
-
-
- listNiciraNvpDeviceNetworks (Lists networks that are using a Nicira NVP device.)
-
-
-
-
-
- BigSwitch VNS
-
-
- addBigSwitchVnsDevice (Adds a BigSwitch VNS device.)
-
-
- deleteBigSwitchVnsDevice (Deletes a BigSwitch VNS device.)
-
-
- listBigSwitchVnsDevices (Lists BigSwitch VNS devices.)
-
-
-
-
- Simulator
-
-
- configureSimulator (Configures a simulator.)
-
-
-
-
- API Discovery
-
-
- listApis (Lists all the available APIs on the server, provided by the API
- Discovery plugin.)
-
-
-
-
- Global Load Balancer
-
-
- createGlobalLoadBalancerRule (Creates a global load balancer rule.)
-
-
- deleteGlobalLoadBalancerRule (Deletes a global load balancer rule.)
-
-
- updateGlobalLoadBalancerRule (Updates a global load balancer rule.)
-
-
- listGlobalLoadBalancerRules (Lists global load balancer rules.)
-
-
- assignToGlobalLoadBalancerRule (Assigns a load balancer rule or list of load
- balancer rules to a global load balancer rule.)
-
-
- removeFromGlobalLoadBalancerRule (Removes a load balancer rule association with
- a global load balancer rule.)
-
-
-
-
- Load Balancer
-
-
- createLoadBalancer (Creates a Load Balancer)
-
-
- listLoadBalancers (Lists Load Balancers)
-
-
- deleteLoadBalancer (Deletes a load balancer)
-
-
- configureInternalLoadBalancerElement (Configures an Internal Load Balancer
- element.)
-
-
- createInternalLoadBalancerElement (Creates an Internal Load Balancer element.)
-
-
-
- listInternalLoadBalancerElements (Lists all available Internal Load Balancer
- elements.)
-
-
-
-
- Affinity Group
-
-
- createAffinityGroup (Creates an affinity or anti-affinity group.)
-
-
- deleteAffinityGroup (Deletes an affinity group.)
-
-
- listAffinityGroups (Lists all the affinity groups.)
-
-
- updateVMAffinityGroup (Updates the affinity or anti-affinity group associations of
- a VM. The VM has to be stopped and restarted for the new properties to take effect.)
-
-
-
- listAffinityGroupTypes (Lists affinity group types available.)
-
-
-
-
- Portable IP
-
-
- createPortableIpRange (Adds a range of portable IPs to a Region.)
-
-
- deletePortableIpRange (Deletes a range of portable IPs associated with a
- Region.)
-
-
- listPortableIpRanges (Lists portable IP ranges.)
-
-
-
-
- Internal Load Balancer VM
-
-
- stopInternalLoadBalancerVM (Stops an Internal LB VM.)
-
-
- startInternalLoadBalancerVM (Starts an existing Internal LB VM.)
-
-
- listInternalLoadBalancerVMs (List internal LB VMs.)
-
-
-
-
- Network Isolation
-
-
- listNetworkIsolationMethods (Lists supported methods of network isolation.)
-
-
-
-
-
- Dedicated Resources
-
-
- dedicateZone (Dedicates a zone.)
-
-
- dedicatePod (Dedicates a pod.)
-
-
- dedicateCluster (Dedicate an existing cluster.)
-
-
- dedicateHost (Dedicates a host.)
-
-
- releaseDedicatedZone (Release dedication of zone.)
-
-
- releaseDedicatedPod (Release dedication for the pod.)
-
-
- releaseDedicatedCluster (Release dedication for cluster.)
-
-
- releaseDedicatedHost (Release dedication for host.)
-
-
- listDedicatedZones (List dedicated zones.)
-
-
- listDedicatedPods (Lists dedicated pods.)
-
-
- listDedicatedClusters (Lists dedicated clusters.)
-
-
- listDedicatedHosts (Lists dedicated hosts.)
-
-
-
-
-
- Changed API Commands in 4.2
-
-
-
-
-
-
-
- API Command and description of changes
-
-
-
-
-
-
- listNetworkACLs
-
-
- The following new request parameters are added: aclid (optional), action
- (optional), protocol (optional)
- The following new response parameters are added: aclid, action, number
-
-
-
-
- copyTemplate
-
-
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
-
-
- listRouters
-
-
- The following new response parameters are added: ip6dns1, ip6dns2, role
-
-
-
-
- updateConfiguration
-
-
- The following new request parameters are added: accountid (optional),
- clusterid (optional), storageid (optional), zoneid (optional)
- The following new response parameters are added: id, scope
-
-
-
-
- listVolumes
-
-
- The following request parameter is removed: details
- The following new response parameter is added: displayvolume
-
-
-
-
- suspendProject
-
-
- The following new response parameters are added: cpuavailable, cpulimit,
- cputotal, ipavailable, iplimit, iptotal, memoryavailable, memorylimit,
- memorytotal, networkavailable, networklimit, networktotal,
- primarystorageavailable, primarystoragelimit, primarystoragetotal,
- secondarystorageavailable, secondarystoragelimit, secondarystoragetotal,
- snapshotavailable, snapshotlimit, snapshottotal, templateavailable, templatelimit,
- templatetotal, vmavailable, vmlimit, vmrunning, vmstopped, vmtotal,
- volumeavailable, volumelimit, volumetotal, vpcavailable, vpclimit, vpctotal
-
-
-
-
-
- listRemoteAccessVpns
-
-
- The following new response parameter is added: id
-
-
-
-
- registerTemplate
-
-
- The following new request parameters are added: imagestoreuuid (optional),
- isdynamicallyscalable (optional), isrouting (optional)
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
-
-
- addTrafficMonitor
-
-
- The following response parameters are removed: privateinterface, privatezone,
- publicinterface, publiczone, usageinterface, username
-
-
-
-
- createTemplate
-
-
- The following response parameters are removed: clusterid, clustername,
- disksizeallocated, disksizetotal, disksizeused, ipaddress, path, podid, podname,
- state, tags, type
- The following new response parameters are added: account, accountid, bootable,
- checksum, crossZones, details, displaytext, domain, domainid, format, hostid,
- hostname, hypervisor, isdynamicallyscalable, isextractable, isfeatured, ispublic,
- isready, ostypeid, ostypename, passwordenabled, project, projectid, removed, size,
- sourcetemplateid, sshkeyenabled, status, templatetag, templatetype, tags
-
-
-
-
- listLoadBalancerRuleInstances
-
-
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- migrateVolume
-
-
- The following new request parameter is added: livemigrate (optional)
- The following new response parameter is added: displayvolume
-
-
-
-
- createAccount
-
-
- The following new request parameters are added: accountid (optional), userid
- (optional)
- The following new response parameters are added: accountdetails, cpuavailable,
- cpulimit, cputotal, defaultzoneid, ipavailable, iplimit, iptotal,
- iscleanuprequired, isdefault, memoryavailable, memorylimit, memorytotal, name,
- networkavailable, networkdomain, networklimit, networktotal,
- primarystorageavailable, primarystoragelimit, primarystoragetotal,
- projectavailable, projectlimit, projecttotal, receivedbytes,
- secondarystorageavailable, secondarystoragelimit, secondarystoragetotal,
- sentbytes, snapshotavailable, snapshotlimit, snapshottotal, templateavailable,
- templatelimit, templatetotal, vmavailable, vmlimit, vmrunning, vmstopped, vmtotal,
- volumeavailable, volumelimit, volumetotal, vpcavailable, vpclimit, vpctotal,
- user
- The following parameters are removed: account, accountid, apikey, created,
- email, firstname, lastname, secretkey, timezone, username
-
-
-
-
- updatePhysicalNetwork
-
-
- The following new request parameter is added: removevlan (optional)
-
-
-
-
- listTrafficMonitors
-
-
- The following response parameters are removed: privateinterface, privatezone,
- publicinterface, publiczone, usageinterface, username
-
-
-
-
- attachIso
-
-
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- listProjects
-
-
- The following new request parameters are added: cpuavailable, cpulimit,
- cputotal, ipavailable, iplimit, iptotal, memoryavailable, memorylimit,
- memorytotal, networkavailable, networklimit, networktotal,
- primarystorageavailable, primarystoragelimit, primarystoragetotal,
- secondarystorageavailable, secondarystoragelimit, secondarystoragetotal,
- snapshotavailable, snapshotlimit, snapshottotal, templateavailable, templatelimit,
- templatetotal, vmavailable, vmlimit, vmrunning, vmstopped, vmtotal,
- volumeavailable, volumelimit, volumetotal, vpcavailable, vpclimit, vpctotal
-
-
-
-
-
- enableAccount
-
-
- The following new response parameters are added: cpuavailable, cpulimit,
- cputotal, isdefault, memoryavailable, memorylimit, memorytotal,
- primarystorageavailable, primarystoragelimit, primarystoragetotal,
- secondarystorageavailable, secondarystoragelimit, secondarystoragetotal
-
-
-
-
- listPublicIpAddresses
-
-
- The following new response parameters are added: isportable, vmipaddress
-
-
-
-
-
- enableStorageMaintenance
-
-
- The following new response parameters are added: hypervisor, scope,
- suitableformigration
-
-
-
-
- listLoadBalancerRules
-
-
- The following new request parameter is added: networkid (optional)
- The following new response parameter is added: networkid
-
-
-
-
- stopRouter
-
-
- The following new response parameters are added: ip6dns1, ip6dns2, role
-
-
-
-
-
- listClusters
-
-
- The following new response parameters are added: cpuovercommitratio,
- memoryovercommitratio
-
-
-
-
- attachVolume
-
-
- The following new response parameter is added: displayvolume
-
-
-
-
- updateVPCOffering
-
-
- The following request parameter is made mandatory: id
-
-
-
-
- resetSSHKeyForVirtualMachine
-
-
- The following new request parameter is added: keypair (required)
- The following parameter is removed: name
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- updateCluster
-
-
- The following request parameters are removed: cpuovercommitratio,
- memoryovercommitratio (optional)
-
-
-
-
- listPrivateGateways
-
-
- The following new response parameters are added: aclid, sourcenatsupported
-
-
-
-
-
- ldapConfig
-
-
- The following new request parameter is added: listall (optional)
- The following parameters have been made optional: searchbase, hostname,
- queryfilter
- The following new response parameter is added: ssl
-
-
-
-
- listTemplates
-
-
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
-
-
- listNetworks
-
-
- The following new response parameters are added: aclid, displaynetwork,
- ip6cidr, ip6gateway, ispersistent, networkcidr, reservediprange
-
-
-
-
- restartNetwork
-
-
- The following new response parameters are added: isportable, vmipaddress
-
-
-
-
-
- prepareTemplate
-
-
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
-
-
- rebootVirtualMachine
-
-
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- changeServiceForRouter
-
-
- The following new request parameters are added: aclid (optional), action
- (optional), protocol (optional)
- The following new response parameters are added: id, scope
-
-
-
-
- updateZone
-
-
- The following new request parameters are added: ip6dns1 (optional), ip6dns2
- (optional)
- The following new response parameters are added: ip6dns1, ip6dns2
-
-
-
-
- ldapRemove
-
-
- The following new response parameter is added: ssl
-
-
-
-
- updateServiceOffering
-
-
- The following new response parameters are added: deploymentplanner, isvolatile
-
-
-
-
-
- updateStoragePool
-
-
- The following new response parameters are added: hypervisor, scope,
- suitableformigration
-
-
-
-
- listFirewallRules
-
-
- The following request parameter is removed: traffictype
- The following new response parameter is added: networkid
-
-
-
-
- updateUser
-
-
- The following new response parameters are added: iscallerchilddomain,
- isdefault
-
-
-
-
- updateProject
-
-
- The following new response parameters are added: cpuavailable, cpulimit,
- cputotal, ipavailable, iplimit, iptotal, memoryavailable, memorylimit,
- memorytotal, networkavailable, networklimit, networktotal,
- primarystorageavailable, primarystoragelimit, primarystoragetotal,
- secondarystorageavailable, secondarystoragelimit, secondarystoragetotal,
- snapshotavailable, snapshotlimit, snapshottotal, templateavailable, templatelimit,
- templatetotal, vmavailable, vmlimit, vmrunning, vmstopped, vmtotal,
- volumeavailable, volumelimit, volumetotal, vpcavailable, vpclimit, vpctotal
-
-
-
-
-
- updateTemplate
-
-
- The following new request parameters are added: isdynamicallyscalable
- (optional), isrouting (optional)
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
-
-
- disableUser
-
-
- The following new response parameters are added: iscallerchilddomain,
- isdefault
-
-
-
-
- activateProject
-
-
- The following new response parameters are added: cpuavailable, cpulimit,
- cputotal, ipavailable, iplimit, iptotal, memoryavailable, memorylimit,
- memorytotal, networkavailable, networklimit, networktotal,
- primarystorageavailable, primarystoragelimit, primarystoragetotal,
- secondarystorageavailable, secondarystoragelimit, secondarystoragetotal,
- snapshotavailable, snapshotlimit, snapshottotal, templateavailable, templatelimit,
- templatetotal, vmavailable, vmlimit, vmrunning, vmstopped, vmtotal,
- volumeavailable, volumelimit, volumetotal, vpcavailable, vpclimit, vpctotal
-
-
-
-
-
- createNetworkACL
-
-
- The following new request parameters are added: aclid (optional), action
- (optional), number (optional)
- The following request parameter is now optional: networkid
- The following new response parameters are added: aclid, action, number
-
-
-
-
- enableStaticNat
-
-
- The following new request parameter is added: vmguestip (optional)
-
-
-
-
- registerIso
-
-
- The following new request parameters are added: imagestoreuuid (optional),
- isdynamicallyscalable (optional)
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
-
-
- createIpForwardingRule
-
-
- The following new response parameter is added: vmguestip
-
-
-
-
- resetPasswordForVirtualMachine
-
-
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- createVolume
-
-
- The following new request parameter is added: displayvolume (optional)
- The following new response parameter is added: displayvolume
-
-
-
-
- startRouter
-
-
- The following new response parameters are added: ip6dns1, ip6dns2, role
-
-
-
-
-
- listCapabilities
-
-
- The following new response parameters are added: apilimitinterval and
- apilimitmax.
- See .
-
-
-
-
- createServiceOffering
-
-
- The following new request parameters are added: deploymentplanner (optional),
- isvolatile (optional), serviceofferingdetails (optional).
- isvolatile indicates whether the service offering includes the Volatile VM
- capability, which discards the VM's root disk and creates a new one on reboot.
- See .
- The following new response parameters are added: deploymentplanner, isvolatile
-
-
-
-
-
- restoreVirtualMachine
-
-
- The following request parameter is added: templateID (optional). This is used
- to point to the new template ID when the base image is updated. The parameter
- templateID can be an ISO ID when restoring a VM that was deployed from an ISO. See .
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- createNetwork
-
-
- The following new request parameters are added: aclid (optional),
- displaynetwork (optional), endipv6 (optional), ip6cidr (optional), ip6gateway
- (optional), isolatedpvlan (optional), startipv6 (optional)
- The following new response parameters are added: aclid, displaynetwork,
- ip6cidr, ip6gateway, ispersistent, networkcidr, reservediprange
-
-
-
-
- createVlanIpRange
-
-
- The following new request parameters are added: startipv6, endipv6,
- ip6gateway, ip6cidr
- Changed parameters: startip (is now optional)
- The following new response parameters are added: startipv6, endipv6,
- ip6gateway, ip6cidr
-
-
-
-
- CreateZone
-
-
- The following new request parameters are added: ip6dns1, ip6dns2
- The following new response parameters are added: ip6dns1, ip6dns2
-
-
-
-
- deployVirtualMachine
-
-
- The following request parameters are added: affinitygroupids (optional),
- affinitygroupnames (optional), displayvm (optional), ip6address (optional)
- The following request parameter is modified: iptonetworklist has a new
- possible value, ipv6
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- createNetworkOffering
-
-
- The following request parameters are added: details (optional),
- egressdefaultpolicy (optional), ispersistent (optional)
- ispersistent determines if the network or network offering created or listed
- by using this offering are persistent or not.
- The following response parameters are added: details, egressdefaultpolicy,
- ispersistent
-
-
-
-
- listNetworks
-
-
- The following request parameter is added: isPersistent.
- This parameter determines if the network or network offering created or listed
- by using this offering are persistent or not.
-
-
-
-
- listNetworkOfferings
-
-
- The following request parameter is added: isPersistent.
- This parameter determines if the network or network offering created or listed
- by using this offering are persistent or not.
- For listNetworkOfferings, the following response parameters have been added:
- details, egressdefaultpolicy, ispersistent
-
-
-
-
- addF5LoadBalancer
- configureNetscalerLoadBalancer
- addNetscalerLoadBalancer
- listF5LoadBalancers
- configureF5LoadBalancer
- listNetscalerLoadBalancers
-
-
- The following response parameter is removed: inline.
-
-
-
-
- listRouters
-
-
- For nic responses, the following fields have been added.
-
-
- ip6address
-
-
- ip6gateway
-
-
- ip6cidr
-
-
-
-
-
-
- listVirtualMachines
-
-
- The following request parameters are added: affinitygroupid (optional), vpcid
- (optional)
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- listRouters
- listZones
-
-
- For DomainRouter and DataCenter response, the following fields have been
- added.
-
-
- ip6dns1
-
-
- ip6dns2
-
-
- For listZones, the following optional request parameters are added: name,
- networktype
-
-
-
-
- listFirewallRules
- createFirewallRule
-
-
- The following request parameter is added: traffictype (optional).
- The following response parameter is added: networkid
-
-
-
-
- listUsageRecords
-
-
- The following response parameter is added: virtualsize.
-
-
-
-
- deleteIso
-
-
- The following request parameter is removed: forced
-
-
-
-
- addCluster
-
-
- The following request parameters are added: guestvswitchtype (optional),
- publicvswitchtype (optional)
- See .
- The following request parameters are removed: cpuovercommitratio,
- memoryovercommitratio
-
-
-
-
- updateCluster
-
-
- The following request parameters are added: cpuovercommitratio,
- ramovercommitratio
- See .
-
-
-
-
- createStoragePool
-
-
- The following request parameters are added: hypervisor (optional), provider
- (optional), scope (optional)
- The following request parameters have been made mandatory: podid,
- clusterid
- See .
- The following response parameters have been added: hypervisor, scope,
- suitableformigration
-
-
-
-
- listStoragePools
-
-
- The following request parameter is added: scope (optional)
- See .
- The following response parameters are added: hypervisor, scope,
- suitableformigration
-
-
-
-
- updateDiskOffering
-
-
- The following response parameter is added: displayoffering
-
-
-
-
- changeServiceForVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- recoverVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- listCapabilities
-
-
- The following response parameters are added: apilimitinterval, apilimitmax
-
-
-
-
-
- createRemoteAccessVpn
-
-
- The following response parameter is added: id
-
-
-
-
- startVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- detachIso
-
-
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- updateVPC
-
-
- The following request parameters have been made mandatory: id, name
-
-
-
-
- associateIpAddress
-
-
- The following request parameters are added: isportable (optional), regionid
- (optional)
- The following response parameters are added: isportable, vmipaddress
-
-
-
-
- listProjectAccounts
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- ipavailable, iplimit, iptotal, memoryavailable, memorylimit, memorytotal,
- networkavailable, networklimit, networktotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal, snapshotavailable, snapshotlimit,
- snapshottotal, templateavailable, templatelimit, templatetotal, vmavailable,
- vmlimit, vmrunning, vmstopped, vmtotal, volumeavailable, volumelimit, volumetotal,
- vpcavailable, vpclimit, vpctotal
-
-
-
-
- disableAccount
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- isdefault, memoryavailable, memorylimit, memorytotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal
-
-
-
-
- listPortForwardingRules
-
-
- The following response parameter is added: vmguestip
-
-
-
-
- migrateVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- cancelStorageMaintenance
-
-
- The following response parameters are added: hypervisor, scope,
- suitableformigration
-
-
-
-
- createPortForwardingRule
-
- The following request parameter is added: vmguestip (optional)
- The following response parameter is added: vmguestip
-
-
-
- addVpnUser
-
-
- The following response parameter is added: state
-
-
-
-
- createVPCOffering
-
-
- The following request parameter is added: serviceproviderlist (optional)
-
-
-
-
-
- assignVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- listConditions
-
-
- The following response parameters are added: account, counter, domain,
- domainid, project, projectid, relationaloperator, threshold
- Removed response parameters: name, source, value
-
-
-
-
- createPrivateGateway
-
-
- The following request parameters are added: aclid (optional),
- sourcenatsupported (optional)
- The following response parameters are added: aclid, sourcenatsupported
-
-
-
-
- updateVirtualMachine
-
-
- The following request parameters are added: displayvm (optional),
- isdynamicallyscalable (optional)
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- destroyRouter
-
-
- The following response parameters are added: ip6dns1, ip6dns2, role
-
-
-
-
- listServiceOfferings
-
-
- The following response parameters are added: deploymentplanner, isvolatile
-
-
-
-
-
- listUsageRecords
-
-
- The following response parameter is removed: virtualsize
-
-
-
-
- createProject
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- ipavailable, iplimit, iptotal, memoryavailable, memorylimit, memorytotal,
- networkavailable, networklimit, networktotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal, snapshotavailable, snapshotlimit,
- snapshottotal, templateavailable, templatelimit, templatetotal, vmavailable,
- vmlimit, vmrunning, vmstopped, vmtotal, volumeavailable, volumelimit, volumetotal,
- vpcavailable, vpclimit, vpctotal
-
-
-
-
- enableUser
-
-
- The following response parameters are added: iscallerchilddomain, isdefault
-
-
-
-
-
- createLoadBalancerRule
-
-
- The following response parameter is added: networkid
-
-
-
-
- updateAccount
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- isdefault, memoryavailable, memorylimit, memorytotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal
-
-
-
-
- copyIso
-
-
- The following response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
-
-
- uploadVolume
-
-
- The following request parameters are added: imagestoreuuid (optional),
- projectid (optional)
- The following response parameter is added: displayvolume
-
-
-
-
- createDomain
-
-
- The following request parameter is added: domainid (optional)
-
-
-
-
- stopVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- listAccounts
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- isdefault, memoryavailable, memorylimit, memorytotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal
-
-
-
-
- createSnapshot
-
-
- The following response parameter is added: zoneid
-
-
-
-
- updateIso
-
-
- The following request parameters are added: isdynamicallyscalable (optional),
- isrouting (optional)
- The following response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
-
-
- listIpForwardingRules
-
-
- The following response parameter is added: vmguestip
-
-
-
-
- updateNetwork
-
-
- The following request parameters are added: displaynetwork (optional),
- guestvmcidr (optional)
- The following response parameters are added: aclid, displaynetwork, ip6cidr,
- ip6gateway, ispersistent, networkcidr, reservediprange
-
-
-
-
- destroyVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- createDiskOffering
-
-
- The following request parameter is added: displayoffering (optional)
- The following response parameter is added: displayoffering
-
-
-
-
- rebootRouter
-
-
- The following response parameters are added: ip6dns1, ip6dns2, role
-
-
-
-
- listConfigurations
-
-
- The following request parameters are added: accountid (optional), clusterid
- (optional), storageid (optional), zoneid (optional)
- The following response parameters are added: id, scope
-
-
-
-
- createUser
-
-
- The following request parameter is added: userid (optional)
- The following response parameters are added: iscallerchilddomain,
- isdefault
-
-
-
-
- listDiskOfferings
-
-
- The following response parameter is added: displayoffering
-
-
-
-
- detachVolume
-
-
- The following response parameter is added: displayvolume
-
-
-
-
- deleteUser
-
-
- The following response parameters are added: displaytext, success
-   The following response parameters are removed: id, account, accountid,
-   accounttype, apikey, created, domain, domainid, email, firstname, lastname,
-   secretkey, state, timezone, username
-
-
-
-
-
- listSnapshots
-
-
- The following request parameter is added: zoneid (optional)
- The following response parameter is added: zoneid
-
-
-
-
- markDefaultZoneForAccount
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- isdefault, memoryavailable, memorylimit, memorytotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal
-
-
-
-
- restartVPC
-
-
-   The following request parameter is made mandatory: id
-
-
-
-
- updateHypervisorCapabilities
-
-
- The following response parameters are added: hypervisor, hypervisorversion,
- maxdatavolumeslimit, maxguestslimit, maxhostspercluster, securitygroupenabled,
- storagemotionenabled
-   The following response parameters are removed: cpunumber, cpuspeed, created,
-   defaultuse, displaytext, domain, domainid, hosttags, issystem, limitcpuuse,
-   memory, name, networkrate, offerha, storagetype, systemvmtype, tags
-
-
-
-
- updateLoadBalancerRule
-
-
- The following response parameter is added: networkid
-
-
-
-
- listVlanIpRanges
-
-
- The following response parameters are added: endipv6, ip6cidr, ip6gateway,
- startipv6
-
-
-
-
- listHypervisorCapabilities
-
-
- The following response parameters are added: maxdatavolumeslimit,
- maxhostspercluster, storagemotionenabled
-
-
-
-
- updateNetworkOffering
-
-
- The following response parameters are added: details, egressdefaultpolicy,
- ispersistent
-
-
-
-
- createVirtualRouterElement
-
-
-   The following request parameter is added: providertype (optional)
-
-
-
-
- listVpnUsers
-
-
- The following response parameter is added: state
-
-
-
-
- listUsers
-
-
- The following response parameters are added: iscallerchilddomain, isdefault
-
-
-
-
-
- listSupportedNetworkServices
-
-
- The following response parameter is added: provider
-
-
-
-
- listIsos
-
-
- The following response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
-
-
-
-
-
- Deprecated APIs
-
-
- addExternalLoadBalancer (Adds F5 external load balancer appliance.)
-
-
- deleteExternalLoadBalancer (Deletes a F5 external load balancer appliance added in a
- zone.)
-
-
- listExternalLoadBalancers (Lists F5 external load balancer appliances added in a
- zone.)
-
-
-
-
-
diff --git a/docs/en-US/Revision_History.xml b/docs/en-US/Revision_History.xml
deleted file mode 100644
index 55d741a64f2..00000000000
--- a/docs/en-US/Revision_History.xml
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
-
-
-
-
- Revision History
-
-
-
- 0-0
- Tue May 29 2012
-
- Jessica
- Tomechak
-
-
-
-
- Initial creation of book by publican
-
-
-
-
-
-
diff --git a/docs/en-US/Revision_History_Install_Guide.xml b/docs/en-US/Revision_History_Install_Guide.xml
deleted file mode 100644
index ee8dd31325a..00000000000
--- a/docs/en-US/Revision_History_Install_Guide.xml
+++ /dev/null
@@ -1,55 +0,0 @@
-
-
-
-
-
-
- Revision History
-
-
-
- 1-0
- October 5 2012
-
- Jessica
- Tomechak
-
-
-
- Radhika
- PC
-
-
-
- Wido
- den Hollander
-
-
-
-
- Initial publication
-
-
-
-
-
-
diff --git a/docs/en-US/SSL-keystore-path-and-password.xml b/docs/en-US/SSL-keystore-path-and-password.xml
deleted file mode 100644
index f7b7426874d..00000000000
--- a/docs/en-US/SSL-keystore-path-and-password.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-
-
-
-
- SSL Keystore Path and Password
- If the LDAP server requires SSL, you need to enable it in the ldapConfig command by setting the parameters ssl, truststore, and truststorepass. Before enabling SSL for ldapConfig, you need to get the certificate which the LDAP server is using and add it to a trusted keystore. You will need to know the path to the keystore and the password.
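The trusted keystore can be prepared with the JDK keytool utility. The following is a minimal sketch, assuming the LDAP server's certificate has already been saved to a hypothetical file ldap-server.cer; the keystore path and password shown are placeholders and become the truststore and truststorepass parameters of ldapConfig:

```shell
# Import the LDAP server's certificate into a trusted keystore.
# The -keystore path is passed to ldapConfig as "truststore",
# and the -storepass value as "truststorepass".
keytool -importcert -noprompt \
    -alias ldapserver \
    -file ldap-server.cer \
    -keystore /etc/cloudstack/ldap.truststore \
    -storepass <truststore-password>
```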
-
diff --git a/docs/en-US/VPN-user-usage-record-format.xml b/docs/en-US/VPN-user-usage-record-format.xml
deleted file mode 100644
index dd66fb4d0d4..00000000000
--- a/docs/en-US/VPN-user-usage-record-format.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-
-
-
-
- VPN User Usage Record Format
-
- account – name of the account
- accountid – ID of the account
- domainid – ID of the domain in which this account resides
- zoneid – Zone where the usage occurred
- description – A string describing what the usage record is tracking
- usage – String representation of the usage, including the units of usage (e.g. 'Hrs' for hours)
- usagetype – A number representing the usage type (see Usage Types)
- rawusage – A number representing the actual usage in hours
- usageid – VPN user ID
- startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record
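Put together, a single VPN-user entry in a listUsageRecords response might look like the following. All values, including the usage type number, are illustrative rather than taken from a real deployment; see Usage Types for the authoritative numbering.

```json
{
  "account": "acme",
  "accountid": "11111111-2222-3333-4444-555555555555",
  "domainid": "66666666-7777-8888-9999-aaaaaaaaaaaa",
  "zoneid": "bbbbbbbb-cccc-dddd-eeee-ffffffffffff",
  "description": "VPN User: alice",
  "usage": "24 Hrs",
  "usagetype": 14,
  "rawusage": "24",
  "usageid": "12345678-90ab-cdef-1234-567890abcdef",
  "startdate": "2013-03-01T00:00:00+00:00",
  "enddate": "2013-03-01T23:59:59+00:00"
}
```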
-
-
diff --git a/docs/en-US/about-clusters.xml b/docs/en-US/about-clusters.xml
deleted file mode 100644
index aa8604ccd52..00000000000
--- a/docs/en-US/about-clusters.xml
+++ /dev/null
@@ -1,63 +0,0 @@
-
-
-
-
-
-
- About Clusters
-
- A cluster provides a way to group hosts. To be precise, a cluster is a
- XenServer server pool, a set of KVM servers, or a
- VMware cluster preconfigured in vCenter. The hosts in a cluster all
- have identical hardware, run the same hypervisor, are on the same subnet,
- and access the same shared primary storage. Virtual machine instances
- (VMs) can be live-migrated from one host to another within the same
- cluster, without interrupting service to the user.
-
-
- A cluster is the third-largest organizational unit within a &PRODUCT;
- deployment. Clusters are contained within pods, and pods are contained
- within zones. The size of a cluster is limited by the underlying hypervisor,
- although &PRODUCT; recommends fewer hosts per cluster in most cases; see Best Practices.
-
-
- A cluster consists of one or more hosts and one or more primary storage
- servers.
-
-
-
-
-
- cluster-overview.png: Structure of a simple cluster
-
- &PRODUCT; allows multiple clusters in a cloud deployment.
-
- Even when local storage is used exclusively, clusters are still required
- organizationally, even if there is just one host per cluster.
-
-
- When VMware is used, every VMware cluster is managed by a vCenter server.
- The administrator must register the vCenter server with &PRODUCT;. There may
- be multiple vCenter servers per zone. Each vCenter server may manage
- multiple VMware clusters.
-
-
diff --git a/docs/en-US/about-hosts.xml b/docs/en-US/about-hosts.xml
deleted file mode 100644
index 87b6bab1ee1..00000000000
--- a/docs/en-US/about-hosts.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-
-
-
-
-
-
- About Hosts
- A host is a single computer. Hosts provide the computing resources that run the guest virtual machines. Each host has hypervisor software installed on it to manage the guest VMs. For example, a Linux KVM-enabled server, a Citrix XenServer server, and an ESXi server are hosts.
- The host is the smallest organizational unit within a &PRODUCT; deployment. Hosts are contained within clusters, clusters are contained within pods, and pods are contained within zones.
- Hosts in a &PRODUCT; deployment:
-
- Provide the CPU, memory, storage, and networking resources needed to host the virtual
- machines
- Interconnect using a high bandwidth TCP/IP network and connect to the Internet
- May reside in multiple data centers across different geographic locations
- May have different capacities (different CPU speeds, different amounts of RAM, etc.), although the hosts within a cluster must all be homogeneous
-
- Additional hosts can be added at any time to provide more capacity for guest VMs.
- &PRODUCT; automatically detects the amount of CPU and memory resources provided by the Hosts.
- Hosts are not visible to the end user. An end user cannot determine which host their guest has been assigned to.
- For a host to function in &PRODUCT;, you must do the following:
-
- Install hypervisor software on the host
- Assign an IP address to the host
- Ensure the host is connected to the &PRODUCT; Management Server
-
-
diff --git a/docs/en-US/about-password-encryption.xml b/docs/en-US/about-password-encryption.xml
deleted file mode 100644
index a13ff60fc95..00000000000
--- a/docs/en-US/about-password-encryption.xml
+++ /dev/null
@@ -1,65 +0,0 @@
-
-
-
-
- About Password and Key Encryption
- &PRODUCT; stores several sensitive passwords and secret keys that are used to provide
- security. These values are always automatically encrypted:
-
-
- Database secret key
-
-
- Database password
-
-
- SSH keys
-
-
- Compute node root password
-
-
- VPN password
-
-
- User API secret key
-
-
- VNC password
-
-
- &PRODUCT; uses the Java Simplified Encryption (JASYPT) library. The data values are
- encrypted and decrypted using a database secret key, which is stored in one of &PRODUCT;’s
- internal properties files along with the database password. The other encrypted values listed
- above, such as SSH keys, are in the &PRODUCT; internal database.
- Of course, the database secret key itself can not be stored in the open – it must be
- encrypted. How then does &PRODUCT; read it? A second secret key must be provided from an
- external source during Management Server startup. This key can be provided in one of two ways:
- loaded from a file or provided by the &PRODUCT; administrator. The &PRODUCT; database has a
- configuration setting that lets it know which of these methods will be used. If the encryption
- type is set to "file," the key must be in a file in a known location. If the encryption type is
- set to "web," the administrator runs the utility
- com.cloud.utils.crypt.EncryptionSecretKeySender, which relays the key to the Management Server
- over a known port.
- The encryption type, database secret key, and Management Server secret key are set during
- &PRODUCT; installation. They are all parameters to the &PRODUCT; database setup script
- (cloudstack-setup-databases). The default values are file, password, and password. It is, of course,
- highly recommended that you change these to more secure keys.
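As a sketch of how those parameters are supplied (the database connection string and key values below are hypothetical placeholders, not defaults from a real installation):

```shell
# -e : encryption type ("file" or "web")
# -m : Management Server secret key, provided at startup
# -k : database secret key used to encrypt the stored sensitive values
cloudstack-setup-databases cloud:<db-password>@localhost \
    -e file \
    -m <management-server-key> \
    -k <database-key>
```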
-
diff --git a/docs/en-US/about-physical-networks.xml b/docs/en-US/about-physical-networks.xml
deleted file mode 100644
index b22e48b7779..00000000000
--- a/docs/en-US/about-physical-networks.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-
-
-
- About Physical Networks
- Part of adding a zone is setting up the physical network. One or (in an advanced zone) more physical networks can be associated with each zone. The network corresponds to a NIC on the hypervisor host. Each physical network can carry one or more types of network traffic. The choices of traffic type for each network vary depending on whether you are creating a zone with basic networking or advanced networking.
- A physical network is the actual network hardware and wiring in a zone. A zone can have multiple physical networks. An administrator can:
-
- Add/Remove/Update physical networks in a zone
- Configure VLANs on the physical network
- Configure a name so the network can be recognized by hypervisors
- Configure the service providers (firewalls, load balancers, etc.) available on a physical network
- Configure the IP addresses trunked to a physical network
- Specify what type of traffic is carried on the physical network, as well as other properties like network speed
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/about-pods.xml b/docs/en-US/about-pods.xml
deleted file mode 100644
index 57ae1a319b3..00000000000
--- a/docs/en-US/about-pods.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-
-
-
-
- About Pods
- A pod often represents a single rack. Hosts in the same pod are in the same subnet.
- A pod is the second-largest organizational unit within a &PRODUCT; deployment. Pods are contained within zones. Each zone can contain one or more pods.
- A pod consists of one or more clusters of hosts and one or more primary storage servers.
- Pods are not visible to the end user.
-
-
-
-
-
- pod-overview.png: Nested structure of a simple pod
-
-
diff --git a/docs/en-US/about-primary-storage.xml b/docs/en-US/about-primary-storage.xml
deleted file mode 100644
index 9af9f2dae13..00000000000
--- a/docs/en-US/about-primary-storage.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-
-
-
-
- About Primary Storage
- Primary storage is associated with a cluster and/or a zone. It stores the disk volumes for all of the VMs running on hosts in that cluster. You can add multiple primary storage servers to a cluster or a zone (at least one is required at the cluster level). Primary storage is typically located close to the hosts for increased performance. &PRODUCT; manages the allocation of guest virtual disks to particular primary storage devices.
- Primary storage uses the concept of a storage tag. A storage tag is a label used to identify the primary storage. Each primary storage can be associated with zero, one, or more storage tags. When a VM is spun up or a data disk is attached to a VM for the first time, these tags, if supplied, are used to determine which primary storage can support the VM or data disk (e.g., when you need to guarantee a certain number of IOPS to a particular volume).
- Primary storage can be either static or dynamic. Static primary storage is what CloudStack has traditionally supported. In this model, the administrator must present CloudStack with a certain amount of preallocated storage (e.g., a volume from a SAN) and CloudStack can place many of its volumes on this storage. In the newer, dynamic model, the administrator can present CloudStack with a storage system itself (e.g., a SAN). CloudStack, working in concert with a plug-in developed for that storage system, can dynamically create volumes on the storage system. A valuable use for this ability is Quality of Service (QoS). If a volume created in CloudStack can be backed by a dedicated volume on a SAN (i.e., a one-to-one mapping between a SAN volume and a CloudStack volume) and the SAN provides QoS, then CloudStack can provide QoS.
- &PRODUCT; is designed to work with all standards-compliant iSCSI and NFS servers that are supported by the underlying hypervisor, including, for example:
-
- SolidFire for iSCSI
- Dell EqualLogic™ for iSCSI
- Network Appliances filers for NFS and iSCSI
- Scale Computing for NFS
-
- If you intend to use only local disk for your installation, you can skip to Add Secondary Storage.
-
diff --git a/docs/en-US/about-regions.xml b/docs/en-US/about-regions.xml
deleted file mode 100644
index a12c183abd3..00000000000
--- a/docs/en-US/about-regions.xml
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
-
-
-
- About Regions
- To increase reliability of the cloud, you can optionally group resources into multiple geographic regions.
- A region is the largest available organizational unit within a &PRODUCT; deployment.
- A region is made up of several availability zones, where each zone is roughly equivalent to a datacenter.
- Each region is controlled by its own cluster of Management Servers, running in one of the zones.
- The zones in a region are typically located in close geographical proximity.
- Regions are a useful technique for providing fault tolerance and disaster recovery.
- By grouping zones into regions, the cloud can achieve higher availability and scalability.
- User accounts can span regions, so that users can deploy VMs in multiple, widely-dispersed regions.
- Even if one of the regions becomes unavailable, the services are still available to the end-user through VMs deployed in another region.
- And by grouping communities of zones under their own nearby Management Servers, the latency of communications within the cloud is reduced
- compared to managing widely-dispersed zones from a single central Management Server.
-
-
- Usage records can also be consolidated and tracked at the region level, creating reports or invoices for each geographic region.
-
-
-
-
-
- region-overview.png: Nested structure of a region.
-
- Regions are visible to the end user. When a user starts a guest VM on a particular &PRODUCT; Management Server,
- the user is implicitly selecting that region for their guest.
- Users might also be required to copy their private templates to additional regions to enable creation of guest VMs using their templates in those regions.
-
\ No newline at end of file
diff --git a/docs/en-US/about-secondary-storage.xml b/docs/en-US/about-secondary-storage.xml
deleted file mode 100644
index 516ec0e6b78..00000000000
--- a/docs/en-US/about-secondary-storage.xml
+++ /dev/null
@@ -1,51 +0,0 @@
-
-
-
-
-
-
- About Secondary Storage
- Secondary storage stores the following:
-
- Templates — OS images that can be used to boot VMs and can include additional configuration information, such as installed applications
- ISO images — disc images containing data or bootable media for operating systems
- Disk volume snapshots — saved copies of VM data which can be used for data recovery or to create new templates
-
- The items in secondary storage are available to all hosts in the scope of
- the secondary storage, which may be defined as per zone or per region.
- To make items in secondary storage available to all hosts throughout the cloud, you can
- add object storage in addition to the
- zone-based NFS Secondary Staging Store.
- It is not necessary to
- copy templates and snapshots from one zone to another, as would be required when using zone
- NFS alone. Everything is available everywhere.
- &PRODUCT; provides plugins that enable both
- OpenStack Object Storage (Swift,
- swift.openstack.org)
- and Amazon Simple Storage Service (S3) object storage.
- When using one of these storage plugins, you configure Swift or S3 storage for
- the entire &PRODUCT;, then set up the NFS Secondary Staging Store for each zone. The NFS
- storage in each zone acts as a staging area through which all templates and other secondary
- storage data pass before being forwarded to Swift or S3.
- The backing object storage acts as a cloud-wide
- resource, making templates and other data available to any zone in the cloud.
-
diff --git a/docs/en-US/about-security-groups.xml b/docs/en-US/about-security-groups.xml
deleted file mode 100644
index 6a31b25ef48..00000000000
--- a/docs/en-US/about-security-groups.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-
-
-
-
- About Security Groups
- Security groups provide a way to isolate traffic to VMs. A security group is a group of
- VMs that filter their incoming and outgoing traffic according to a set of rules, called
- ingress and egress rules. These rules filter network traffic according to the IP address
- that is attempting to communicate with the VM. Security groups are particularly useful in
- zones that use basic networking, because there is a single guest network for all guest VMs.
- In advanced zones, security groups are supported only on the KVM hypervisor.
- In a zone that uses advanced networking, you can instead define multiple guest networks to isolate traffic to VMs.
-
-
- Each &PRODUCT; account comes with a default security group that denies all inbound traffic and allows all outbound traffic. The default security group can be modified so that all new VMs inherit some other desired set of rules.
- Any &PRODUCT; user can set up any number of additional security groups. When a new VM is launched, it is assigned to the default security group unless another user-defined security group is specified. A VM can be a member of any number of security groups. Once a VM is assigned to a security group, it remains in that group for its entire lifetime; you can not move a running VM from one security group to another.
- You can modify a security group by deleting or adding any number of ingress and egress rules. When you do, the new rules apply to all VMs in the group, whether running or stopped.
- If no ingress rules are specified, then no traffic will be allowed in, except for responses to any traffic that has been allowed out through an egress rule.
-
diff --git a/docs/en-US/about-virtual-networks.xml b/docs/en-US/about-virtual-networks.xml
deleted file mode 100644
index 4dbd2018b27..00000000000
--- a/docs/en-US/about-virtual-networks.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-
-
-
- About Virtual Networks
- A virtual network is a logical construct that enables multi-tenancy on a single physical network. In &PRODUCT; a virtual network can be shared or isolated.
-
-
-
-
diff --git a/docs/en-US/about-working-with-vms.xml b/docs/en-US/about-working-with-vms.xml
deleted file mode 100644
index 90e5abf07f8..00000000000
--- a/docs/en-US/about-working-with-vms.xml
+++ /dev/null
@@ -1,64 +0,0 @@
-
-
-
-
- About Working with Virtual Machines
- &PRODUCT; provides administrators with complete control over the lifecycle of all guest VMs
- executing in the cloud. &PRODUCT; provides several guest management operations for end users and
- administrators. VMs may be stopped, started, rebooted, and destroyed.
- Guest VMs have a name and group. VM names and groups are opaque to &PRODUCT; and are
- available for end users to organize their VMs. Each VM can have three names for use in different
- contexts. Only two of these names can be controlled by the user:
-
-
- Instance name – a unique, immutable ID that is generated by &PRODUCT; and can not
- be modified by the user. This name conforms to the requirements in IETF RFC 1123.
-
-
- Display name – the name displayed in the &PRODUCT; web UI. Can be set by the user.
- Defaults to instance name.
-
-
- Name – host name that the DHCP server assigns to the VM. Can be set by the user.
- Defaults to the instance name.
-
-
-
- You can append the display name of a guest VM to its internal name. For more information,
- see .
-
- Guest VMs can be configured to be Highly Available (HA). An HA-enabled VM is monitored by
- the system. If the system detects that the VM is down, it will attempt to restart the VM,
- possibly on a different host. For more information, see HA-Enabled Virtual Machines.
- Each new VM is allocated one public IP address. When the VM is started, &PRODUCT;
- automatically creates a static NAT between this public IP address and the private IP address of
- the VM.
- If elastic IP is in use (with the NetScaler load balancer), the IP address initially
- allocated to the new VM is not marked as elastic. The user must replace the automatically
- configured IP with a specifically acquired elastic IP, and set up the static NAT mapping between
- this new IP and the guest VM’s private IP. The VM’s original IP address is then released and
- returned to the pool of available public IPs. Optionally, you can also decide not to allocate a
- public IP to a VM in an EIP-enabled Basic zone. For more information on Elastic IP, see .
- &PRODUCT; cannot distinguish a guest VM that was shut down by the user (such as with the
- "shutdown" command in Linux) from a VM that shut down unexpectedly. If an HA-enabled VM is shut
- down from inside the VM, &PRODUCT; will restart it. To shut down an HA-enabled VM, you must go
- through the &PRODUCT; UI or API.
-
diff --git a/docs/en-US/about-zones.xml b/docs/en-US/about-zones.xml
deleted file mode 100644
index 2a4eeb4659f..00000000000
--- a/docs/en-US/about-zones.xml
+++ /dev/null
@@ -1,74 +0,0 @@
-
-
-
-
-
- About Zones
- A zone is the second largest organizational unit within a &PRODUCT; deployment. A zone
- typically corresponds to a single datacenter, although it is permissible to have multiple
- zones in a datacenter. The benefit of organizing infrastructure into zones is to provide
- physical isolation and redundancy. For example, each zone can have its own power supply and
- network uplink, and the zones can be widely separated geographically (though this is not
- required).
- A zone consists of:
-
- One or more pods. Each pod contains one or more clusters of hosts and one or more primary storage servers.
- A zone may contain one or more primary storage servers, which are shared by all the pods in the zone.
- Secondary storage, which is shared by all the pods in the zone.
-
-
-
-
-
- zone-overview.png: Nested structure of a simple zone.
-
- Zones are visible to the end user. When a user starts a guest VM, the user must select a zone for their guest. Users might also be required to copy their private templates to additional zones to enable creation of guest VMs using their templates in those zones.
- Zones can be public or private. Public zones are visible to all users. This means that any user may create a guest in that zone. Private zones are reserved for a specific domain. Only users in that domain or its subdomains may create guests in that zone.
- Hosts in the same zone are directly accessible to each other without having to go through a firewall. Hosts in different zones can access each other through statically configured VPN tunnels.
- For each zone, the administrator must decide the following.
-
- How many pods to place in each zone.
- How many clusters to place in each pod.
- How many hosts to place in each cluster.
- (Optional) How many primary storage servers to place in each zone and total capacity for these storage servers.
- How many primary storage servers to place in each cluster and total capacity for these storage servers.
- How much secondary storage to deploy in a zone.
-
- When you add a new zone using the &PRODUCT; UI, you will be prompted to configure the zone’s physical network
- and add the first pod, cluster, host, primary storage, and secondary storage.
- In order to support zone-wide functions for VMware, &PRODUCT; is aware of VMware Datacenters and can map each Datacenter to a
- &PRODUCT; zone. To enable features like storage live migration and zone-wide
- primary storage for VMware hosts, &PRODUCT; has to make sure that a zone
- contains only a single VMware Datacenter. Therefore, when you are creating a new
- &PRODUCT; zone, you can select a VMware Datacenter for the zone. If you
- are provisioning multiple VMware Datacenters, each one will be set up as a single zone
- in &PRODUCT;.
-
- If you are upgrading from a previous &PRODUCT; version, and your existing
- deployment contains a zone with clusters from multiple VMware Datacenters, that zone
- will not be forcibly migrated to the new model. It will continue to function as
- before. However, any new zone-wide operations, such as zone-wide primary storage
- and live storage migration, will
- not be available in that zone.
-
-
-
diff --git a/docs/en-US/accept-membership-invite.xml b/docs/en-US/accept-membership-invite.xml
deleted file mode 100644
index dc59d00af65..00000000000
--- a/docs/en-US/accept-membership-invite.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-
-
-
-
- Accepting a Membership Invitation
- If you have received an invitation to join a &PRODUCT; project, and you want to accept the invitation, follow these steps:
-
- Log in to the &PRODUCT; UI.
- In the left navigation, click Projects.
- In Select View, choose Invitations.
- If you see the invitation listed onscreen, click the Accept button. Invitations listed onscreen were sent to you using your &PRODUCT; account name.
- If you received an email invitation, click the Enter Token button, and provide the project ID and unique ID code (token) from the email.
-
-
-
diff --git a/docs/en-US/accessing-system-vms.xml b/docs/en-US/accessing-system-vms.xml
deleted file mode 100755
index e1b6090d7af..00000000000
--- a/docs/en-US/accessing-system-vms.xml
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
-
-
-
-
- Accessing System VMs
- It may sometimes be necessary to access System VMs to diagnose certain issues, for example if you are experiencing SSVM (Secondary Storage VM) connection issues. Use the steps below to connect to the SSH console of a running System VM.
-
- Accessing System VMs over the network requires the use of private keys and connecting to the System VM's SSH daemon on port 3922.
- XenServer/KVM Hypervisors store this key at /root/.ssh/id_rsa.cloud on each &PRODUCT; agent.
- To access System VMs running on ESXi, the key is stored on the management server at /var/lib/cloudstack/management/.ssh/id_rsa.
-
-
-
- Find the details of the System VM
-
- Log in with admin privileges to the &PRODUCT; UI.
- Click Infrastructure, then System VMs, and then click the name of a running VM.
- Take a note of the 'Host', 'Private IP Address' and 'Link Local IP Address' of the System VM you wish to access.
-
-
-
-
- XenServer/KVM Hypervisors
-
- Connect to the Host on which the System VM is running.
- SSH to the 'Link Local IP Address' of the System VM from the Host on which the VM is running.
- Format: ssh -i <path-to-private-key> <link-local-ip> -p 3922
- Example: root@faith:~# ssh -i /root/.ssh/id_rsa.cloud 169.254.3.93 -p 3922
-
-
-
- ESXi Hypervisors
-
- Connect to your &PRODUCT; Management Server.
- ESXi users should SSH to the private IP address of the System VM.
- Format: ssh -i <path-to-private-key> <vm-private-ip> -p 3922
- Example: root@management:~# ssh -i /var/lib/cloudstack/management/.ssh/id_rsa 172.16.0.250 -p 3922
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/accessing-vms.xml b/docs/en-US/accessing-vms.xml
deleted file mode 100644
index 67d9d774172..00000000000
--- a/docs/en-US/accessing-vms.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-
-
-
- Accessing VMs
- Any user can access their own virtual machines. The administrator can access all VMs running in the cloud.
- To access a VM through the &PRODUCT; UI:
-
- Log in to the &PRODUCT; UI as a user or admin.
- Click Instances, then click the name of a running VM.
- Click the View Console button .
-
- To access a VM directly over the network:
-
- The VM must have some port open to incoming traffic. For example, in a basic zone, a new VM might be assigned to a security group which allows incoming traffic. This depends on what security group you picked when creating the VM. In other cases, you can open a port by setting up a port forwarding policy. See .
- If a port is open but you cannot access the VM using SSH, it’s possible that SSH is not enabled on the VM. This will depend on whether SSH is enabled in the template you picked when creating the VM. Access the VM through the &PRODUCT; UI and enable SSH on the machine using the commands for the VM’s operating system.
- If the network has an external firewall device, you will need to create a firewall rule to allow access. See .
-
-
-
diff --git a/docs/en-US/accounts-users-domains.xml b/docs/en-US/accounts-users-domains.xml
deleted file mode 100644
index 3accbbe9b84..00000000000
--- a/docs/en-US/accounts-users-domains.xml
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Accounts, Users, and Domains
-
- Accounts
- An account typically represents a customer of the service provider or a department in a large organization. Multiple users can exist in an account.
-
-
- Domains
- Accounts are grouped by domains. Domains usually contain multiple accounts that have some logical relationship to each other and a set of delegated administrators with some authority over the domain and its subdomains. For example, a service provider with several resellers could create a domain for each reseller.
-
- &PRODUCT; supports three different types of user accounts: root administrator, domain administrator, and user.
-
- Users
- Users are like aliases in the account. Users in the same account are not isolated from each other, but they are isolated from users in other accounts. Most installations need not surface the notion of users; they just have one user per account. The same user cannot belong to multiple accounts.
-
- Username is unique in a domain across accounts in that domain. The same username can exist in other domains, including sub-domains. Domain name can repeat only if the full pathname from root is unique. For example, you can create root/d1, as well as root/foo/d1, and root/sales/d1.
- Administrators are accounts with special privileges in the system. There may be multiple administrators in the system. Administrators can create or delete other administrators, and change the password for any user in the system.
-
- Domain Administrators
- Domain administrators can perform administrative operations for users who belong to that domain. Domain administrators do not have visibility into physical servers or other domains.
-
-
- Root Administrator
- Root administrators have complete access to the system, including managing templates, service offerings, customer care administrators, and domains.
-
-
- Resource Ownership
- Resources belong to the account, not individual users in that account. For example,
- billing, resource limits, and so on are maintained by the account, not the users. A user
- can operate on any resource in the account provided the user has privileges for that
- operation. The privileges are determined by the role. A root administrator can change
- the ownership of any virtual machine from one account to any other account by using the
- assignVirtualMachine API. A domain or sub-domain administrator can do the same for VMs
- within the domain from one account to any other account in the domain or any of its
- sub-domains.
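The assignVirtualMachine call mentioned above is an ordinary &PRODUCT; API command, so like any other API request it must be signed: sort the parameters, URL-encode the values, lowercase the query string, and HMAC-SHA1 it with the user's secret key. A minimal sketch of that signing scheme (the IDs and keys are placeholders):

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key):
    # Sort parameters by name, URL-encode values, lowercase the whole
    # string, then HMAC-SHA1 with the secret key and base64 the digest.
    query = "&".join(
        "%s=%s" % (k, urllib.parse.quote(str(v), safe="*"))
        for k, v in sorted(params.items())
    )
    digest = hmac.new(secret_key.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

params = {
    "command": "assignVirtualMachine",
    "virtualmachineid": "vm-id-placeholder",   # placeholder VM id
    "account": "acct2",                        # destination account
    "domainid": "domain-id-placeholder",       # placeholder domain id
    "apikey": "my-api-key",
    "response": "json",
}
# The signature is computed over all other parameters, then appended.
params["signature"] = sign_request(params, "my-secret-key")
```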
-
-
- Dedicating Resources to Accounts and Domains
- The root administrator can dedicate resources to a specific domain or account
- that needs private infrastructure for additional security or performance guarantees.
- A zone, pod, cluster, or host can be reserved by the root administrator for a specific domain or account.
- Only users in that domain or its subdomain may use the infrastructure.
- For example, only users in a given domain can create guests in a zone dedicated to that domain.
- There are several types of dedication available:
-
-
- Explicit dedication. A zone, pod, cluster, or host is dedicated to an account or
- domain by the root administrator during initial deployment and
- configuration.
- Strict implicit dedication. A host will not be shared across multiple accounts. For example,
- strict implicit dedication is useful for deployment of certain types of
- applications, such as desktops, where no host can be shared
- between different accounts without violating the desktop software's terms of license.
- Preferred implicit dedication. The VM will be deployed in dedicated infrastructure if
- possible. Otherwise, the VM can be deployed in shared
- infrastructure.
-
-
- How to Dedicate a Zone, Cluster, Pod, or Host to an Account or Domain
- For explicit dedication: When deploying a new zone, pod, cluster, or host, the
- root administrator can click the Dedicated checkbox, then choose a domain or account
- to own the resource.
- To explicitly dedicate an existing zone, pod, cluster, or host: log in as the root admin,
- find the resource in the UI, and click the Dedicate button.
-
-
-
-
- dedicate-resource-button.png: button to dedicate a zone, pod, cluster, or host
-
-
- For implicit dedication: The administrator creates a compute service offering and
- in the Deployment Planner field, chooses ImplicitDedicationPlanner. Then in Planner
- Mode, the administrator specifies either Strict or Preferred, depending on whether
- it is permissible to allow some use of shared resources when dedicated resources are
- not available. Whenever a user creates a VM based on this service offering, it is
- allocated on one of the dedicated hosts.
-
-
- How to Use Dedicated Hosts
- To use an explicitly dedicated host, use the explicit-dedicated type of affinity
- group (see ). For example, when creating a new VM,
- an end user can choose to place it on dedicated infrastructure. This operation will
- succeed only if some infrastructure has already been assigned as dedicated to the
- user's account or domain.
-
-
- Behavior of Dedicated Hosts, Clusters, Pods, and Zones
- The administrator can live migrate VMs away from dedicated hosts if desired, whether the destination
- is a host reserved for a different account/domain or a host that is shared (not dedicated to any particular account or domain).
- &PRODUCT; will generate an alert, but the operation is allowed.
- Dedicated hosts can be used in conjunction with host tags. If both a host tag and dedication are requested,
- the VM will be placed only on a host that meets both requirements. If there is no dedicated resource available
- to that user that also has the host tag requested by the user, then the VM will not deploy.
- If you delete an account or domain, any hosts, clusters, pods, and zones that were
- dedicated to it are freed up. They will now be available to be shared by any account
- or domain, or the administrator may choose to re-dedicate them to a different
- account or domain.
- System VMs and virtual routers affect the behavior of host dedication.
- System VMs and virtual routers are owned by the &PRODUCT; system account,
- and they can be deployed on any host. They do not adhere to explicit dedication.
- The presence of System VMs and virtual routers on a host makes it unsuitable for strict implicit dedication.
- The host cannot be used for strict implicit dedication
- because it already has VMs of a specific account (the default system account).
- However, a host with system VMs or virtual routers can be used
- for preferred implicit dedication.
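The interaction of host tags with dedication described above is essentially a conjunction of placement constraints. A toy sketch (the host dictionary layout is invented for illustration, not &PRODUCT;'s data model):

```python
def host_eligible(host, account, required_tag=None):
    # A VM requesting both dedication and a host tag deploys only on a
    # host that satisfies both constraints; if no such host exists,
    # the VM does not deploy.
    dedicated_ok = host.get("dedicated_to") == account
    tag_ok = required_tag is None or required_tag in host.get("tags", [])
    return dedicated_ok and tag_ok

hosts = [
    {"name": "h1", "dedicated_to": "acct1", "tags": ["ssd"]},
    {"name": "h2", "dedicated_to": "acct1", "tags": []},
    {"name": "h3", "dedicated_to": "acct2", "tags": ["ssd"]},
]
candidates = [h["name"] for h in hosts if host_eligible(h, "acct1", "ssd")]
# only h1 is both dedicated to acct1 and tagged "ssd"
```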
-
-
-
-
diff --git a/docs/en-US/accounts.xml b/docs/en-US/accounts.xml
deleted file mode 100644
index 1c4454c6a3f..00000000000
--- a/docs/en-US/accounts.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Accounts
-
-
-
diff --git a/docs/en-US/acquire-new-ip-address.xml b/docs/en-US/acquire-new-ip-address.xml
deleted file mode 100644
index 3dbd79e3f2d..00000000000
--- a/docs/en-US/acquire-new-ip-address.xml
+++ /dev/null
@@ -1,52 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Acquiring a New IP Address
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- Click the name of the network you want to work with.
-
-
- Click View IP Addresses.
-
-
- Click Acquire New IP.
- The Acquire New IP window is displayed.
-
-
- Specify whether you want a cross-zone IP or not.
- If you want a Portable IP, click Yes in the confirmation dialog. If you want a normal
- Public IP, click No.
- For more information on Portable IP, see .
- Within a few moments, the new IP address should appear with the state Allocated. You can
- now use the IP address in port forwarding or static NAT rules.
-
-
-
diff --git a/docs/en-US/acquire-new-ip-for-vpc.xml b/docs/en-US/acquire-new-ip-for-vpc.xml
deleted file mode 100644
index c0cb876d483..00000000000
--- a/docs/en-US/acquire-new-ip-for-vpc.xml
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Acquiring a New IP Address for a VPC
- When you acquire an IP address, all IP addresses are allocated to the VPC, not to the guest
- networks within the VPC. The IPs are associated with the guest network only when the first
- port-forwarding, load balancing, or Static NAT rule is created for the IP or the network. An IP
- can't be associated with more than one network at a time.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
- All the VPCs that you have created for the account are listed on the page.
-
-
- Click the Configure button of the VPC to which you want to deploy the VMs.
- The VPC page is displayed where all the tiers you created are listed in a
- diagram.
- The following options are displayed.
-
-
- Internal LB
-
-
- Public LB IP
-
-
- Static NAT
-
-
- Virtual Machines
-
-
- CIDR
-
-
- The following router information is displayed:
-
-
- Private Gateways
-
-
- Public IP Addresses
-
-
- Site-to-Site VPNs
-
-
- Network ACL Lists
-
-
-
-
- Select IP Addresses.
- The Public IP Addresses page is displayed.
-
-
- Click Acquire New IP, and click Yes in the confirmation dialog.
- You are prompted for confirmation because, typically, IP addresses are a limited
- resource. Within a few moments, the new IP address should appear with the state Allocated.
- You can now use the IP address in port forwarding, load balancing, and static NAT
- rules.
-
-
-
diff --git a/docs/en-US/add-additional-guest-network.xml b/docs/en-US/add-additional-guest-network.xml
deleted file mode 100644
index c684da023da..00000000000
--- a/docs/en-US/add-additional-guest-network.xml
+++ /dev/null
@@ -1,65 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding an Additional Guest Network
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- Click Add guest network. Provide the following information:
-
-
- Name: The name of the network. This will be
- user-visible.
-
-
- Display Text: The description of the network. This
- will be user-visible.
-
-
- Zone. The name of the zone this network applies to.
- Each zone is a broadcast domain, and therefore each zone has a different IP range for
- the guest network. The administrator must configure the IP range for each zone.
-
-
- Network offering: If the administrator has
- configured multiple network offerings, select the one you want to use for this
- network.
-
-
- Guest Gateway: The gateway that the guests should
- use.
-
-
- Guest Netmask: The netmask in use on the subnet the
- guests will use.
-
-
-
-
- Click Create.
-
-
-
diff --git a/docs/en-US/add-clusters-kvm-xenserver.xml b/docs/en-US/add-clusters-kvm-xenserver.xml
deleted file mode 100644
index ad5737191fd..00000000000
--- a/docs/en-US/add-clusters-kvm-xenserver.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Add Cluster: KVM or XenServer
- These steps assume you have already installed the hypervisor on the hosts and logged in to
- the &PRODUCT; UI.
-
-
- In the left navigation, choose Infrastructure. In Zones, click View More, then click the
- zone in which you want to add the cluster.
-
-
- Click the Compute tab.
-
-
- In the Clusters node of the diagram, click View All.
-
-
- Click Add Cluster.
-
-
- Choose the hypervisor type for this cluster.
-
-
- Choose the pod in which you want to create the cluster.
-
-
- Enter a name for the cluster. This can be text of your choosing and is not used by
- &PRODUCT;.
-
-
- Click OK.
-
-
-
diff --git a/docs/en-US/add-clusters-ovm.xml b/docs/en-US/add-clusters-ovm.xml
deleted file mode 100644
index d0b0688e6a3..00000000000
--- a/docs/en-US/add-clusters-ovm.xml
+++ /dev/null
@@ -1,43 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Add Cluster: OVM
- To add a Cluster of hosts that run Oracle VM (OVM):
-
- Add a companion non-OVM cluster to the Pod. This cluster provides an environment where the &PRODUCT; System VMs can run. You should have already installed a non-OVM hypervisor on at least one Host to prepare for this step. Depending on which hypervisor you used:
-
- For VMWare, follow the steps in . When finished, return here and continue with the next step.
- For KVM or XenServer, follow the steps in . When finished, return here and continue with the next step.
-
-
- In the left navigation, choose Infrastructure. In Zones, click View All, then click the zone in which you want to add the cluster.
- Click the Compute and Storage tab. In the Pods node, click View All.
- Click View Clusters, then click Add Cluster.
- The Add Cluster dialog is displayed.
- In Hypervisor, choose OVM.
- In Cluster, enter a name for the cluster.
- Click Add.
-
-
diff --git a/docs/en-US/add-clusters-vsphere.xml b/docs/en-US/add-clusters-vsphere.xml
deleted file mode 100644
index c3a0902be8f..00000000000
--- a/docs/en-US/add-clusters-vsphere.xml
+++ /dev/null
@@ -1,178 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Add Cluster: vSphere
- Host management for vSphere is done through a combination of vCenter and the &PRODUCT; admin
- UI. &PRODUCT; requires that all hosts be in a &PRODUCT; cluster, but the cluster may consist of
- a single host. As an administrator you must decide if you would like to use clusters of one host
- or of multiple hosts. Clusters of multiple hosts allow for features like live migration.
- Clusters also require shared storage such as NFS or iSCSI.
- For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding
- the entire cluster to &PRODUCT;. Follow these requirements:
-
-
- Do not put more than 8 hosts in a vSphere cluster
-
-
- Make sure the hypervisor hosts do not have any VMs already running before you add them
- to &PRODUCT;.
-
-
- To add a vSphere cluster to &PRODUCT;:
-
-
- Create the cluster of hosts in vCenter. Follow the vCenter instructions to do this. You
- will create a cluster that looks something like this in vCenter.
-
-
-
-
-
- vsphereclient.png: vSphere client
-
-
-
-
- Log in to the UI.
-
-
- In the left navigation, choose Infrastructure. In Zones, click View More, then click the
- zone in which you want to add the cluster.
-
-
- Click the Compute tab, and click View All on Pods. Choose the pod to which you want to
- add the cluster.
-
-
- Click View Clusters.
-
-
- Click Add Cluster.
-
-
- In Hypervisor, choose VMware.
-
-
- Provide the following information in the dialog. The fields below make reference to the
- values from vCenter.
-
-
-
-
-
- addcluster.png: add a cluster
-
-
-
-
- Cluster Name: Enter the name of the cluster you
- created in vCenter. For example, "cloud.cluster.2.2.1"
-
-
- vCenter Username: Enter the username that &PRODUCT;
- should use to connect to vCenter. This user must have all the administrative
- privileges.
-
-
- CPU overcommit ratio: Enter the CPU overcommit
- ratio for the cluster. The value you enter determines the CPU consumption of each VM in
- the selected cluster. By increasing the over-provisioning ratio, more resource capacity
- will be used. If no value is specified, it defaults to 1, which implies no
- over-provisioning is done.
-
-
- RAM overcommit ratio: Enter the RAM overcommit
- ratio for the cluster. The value you enter determines the memory consumption of each VM
- in the selected cluster. By increasing the over-provisioning ratio, more resource
- capacity will be used. If no value is specified, it defaults to 1, which
- implies no over-provisioning is done.
-
-
- vCenter Host: Enter the hostname or IP address of
- the vCenter server.
-
-
- vCenter Password: Enter the password for the user
- named above.
-
-
- vCenter Datacenter: Enter the vCenter datacenter
- that the cluster is in. For example, "cloud.dc.VM".
-
-
- Override Public Traffic: Enable this option to
- override the zone-wide public traffic for the cluster you are creating.
-
-
- Public Traffic vSwitch Type: This option is
- displayed only if you enable the Override Public Traffic option. Select a desirable
- switch. If the vmware.use.dvswitch global parameter is true, the default option will be
- VMware vNetwork Distributed Virtual Switch.
- If you have enabled Nexus dvSwitch in the environment, the following parameters for
- dvSwitch configuration are displayed:
-
-
- Nexus dvSwitch IP Address: The IP address of the Nexus VSM appliance.
-
-
- Nexus dvSwitch Username: The username required to access the Nexus VSM
- appliance.
-
-
- Nexus dvSwitch Password: The password associated with the username specified
- above.
-
-
-
-
- Override Guest Traffic: Enable this option to
- override the zone-wide guest traffic for the cluster you are creating.
-
-
- Guest Traffic vSwitch Type: This option is
- displayed only if you enable the Override Guest Traffic option. Select a desirable
- switch.
- If the vmware.use.dvswitch global parameter is true, the default option will be
- VMware vNetwork Distributed Virtual Switch.
- If you have enabled Nexus dvSwitch in the environment, the following parameters for
- dvSwitch configuration are displayed:
-
-
- Nexus dvSwitch IP Address: The IP address of the Nexus VSM appliance.
-
-
- Nexus dvSwitch Username: The username required to access the Nexus VSM
- appliance.
-
-
- Nexus dvSwitch Password: The password associated with the username specified
- above.
-
-
-
-
- There might be a slight delay while the cluster is provisioned. It will
- automatically display in the UI.
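The CPU and RAM overcommit ratios above simply scale the capacity &PRODUCT; considers schedulable on the cluster. A toy illustration of the arithmetic:

```python
def schedulable_capacity(physical, overcommit_ratio=1.0):
    # Ratio 1 (the default) means no over-provisioning;
    # ratio 2 lets twice the physical capacity be allocated to VMs.
    return physical * overcommit_ratio

assert schedulable_capacity(64.0) == 64.0        # default: no over-provisioning
assert schedulable_capacity(64.0, 2.0) == 128.0  # 2x CPU overcommit
```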
-
-
-
-
-
diff --git a/docs/en-US/add-gateway-vpc.xml b/docs/en-US/add-gateway-vpc.xml
deleted file mode 100644
index 403302df532..00000000000
--- a/docs/en-US/add-gateway-vpc.xml
+++ /dev/null
@@ -1,227 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding a Private Gateway to a VPC
- A private gateway can be added by the root admin only. The VPC private network has a 1:1
- relationship with the NIC of the physical network. You can configure multiple private gateways
- to a single VPC. No gateways with duplicated VLAN and IP are allowed in the same data
- center.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
- All the VPCs that you have created for the account are listed on the page.
-
-
- Click the Configure button of the VPC to which you want to add a private
- gateway.
- The VPC page is displayed where all the tiers you created are listed in a
- diagram.
-
-
- Click the Settings icon.
- The following options are displayed.
-
-
- Internal LB
-
-
- Public LB IP
-
-
- Static NAT
-
-
- Virtual Machines
-
-
- CIDR
-
-
- The following router information is displayed:
-
-
- Private Gateways
-
-
- Public IP Addresses
-
-
- Site-to-Site VPNs
-
-
- Network ACL Lists
-
-
-
-
- Select Private Gateways.
- The Gateways page is displayed.
-
-
- Click Add new gateway:
-
-
-
-
-
- add-new-gateway-vpc.png: adding a private gateway for the VPC.
-
-
-
-
- Specify the following:
-
-
- Physical Network: The physical network you have
- created in the zone.
-
-
- IP Address: The IP address associated with the VPC
- gateway.
-
-
- Gateway: The gateway through which the traffic is
- routed to and from the VPC.
-
-
- Netmask: The netmask associated with the VPC
- gateway.
-
-
- VLAN: The VLAN associated with the VPC
- gateway.
-
-
- Source NAT: Select this option to enable the source
- NAT service on the VPC private gateway.
- See .
-
-
- ACL: Controls both ingress and egress traffic on a
- VPC private gateway. By default, all the traffic is blocked.
- See .
-
-
- The new gateway appears in the list. You can repeat these steps to add more gateways for
- this VPC.
-
-
-
- Source NAT on Private Gateway
- You might want to deploy multiple VPCs with the same super CIDR and guest tier CIDR.
- Therefore, multiple guest VMs from different VPCs can have the same IPs when reaching an enterprise
- data center through the private gateway. In such cases, a NAT service needs to be configured on
- the private gateway to avoid IP conflicts. If Source NAT is enabled, the guest VMs in a VPC
- reach the enterprise network via the private gateway IP address by using the NAT service.
- The Source NAT service on a private gateway can be enabled while adding the private
- gateway. On deletion of a private gateway, source NAT rules specific to the private gateway
- are deleted.
- To enable source NAT on existing private gateways, delete them and recreate them with
- source NAT enabled.
-
-
- ACL on Private Gateway
- The traffic on the VPC private gateway is controlled by creating both ingress and egress
- network ACL rules. The ACLs contain both allow and deny rules. By default, all the
- ingress traffic to the private gateway interface and all the egress traffic out from the
- private gateway interface are blocked.
- You can change this default behaviour while creating a private gateway. Alternatively, you
- can do the following:
-
-
- In a VPC, identify the Private Gateway you want to work with.
-
-
- In the Private Gateway page, do either of the following:
-
-
- Use the Quickview. See .
-
-
- Use the Details tab. See through .
-
-
-
-
- In the Quickview of the selected Private Gateway, click Replace ACL, select the ACL
- rule, then click OK.
-
-
- Click the IP address of the Private Gateway you want to work with.
-
-
- In the Detail tab, click the Replace ACL button.
-
-
-
-
- replace-acl-icon.png: button to replace the default ACL behaviour.
-
-
- The Replace ACL dialog is displayed.
-
-
- Select the ACL rule, then click OK.
- Wait a few seconds. You can see that the new ACL rule is displayed in the Details
- page.
-
-
-
-
- Creating a Static Route
- &PRODUCT; enables you to specify routing for the VPN connection you create. You can enter
- one or more CIDR addresses to indicate which traffic is to be routed back to the gateway.
-
-
- In a VPC, identify the Private Gateway you want to work with.
-
-
- In the Private Gateway page, click the IP address of the Private Gateway you want to
- work with.
-
-
- Select the Static Routes tab.
-
-
- Specify the CIDR of destination network.
-
-
- Click Add.
- Wait a few seconds until the new route is created.
-
-
-
-
- Blacklisting Routes
- &PRODUCT; enables you to block a list of routes so that they are not assigned to any of
- the VPC private gateways. Specify the list of routes that you want to blacklist in the
- blacklisted.routes global parameter. Note that the parameter update affects
- only new static route creations. If you block an existing static route, it remains intact and
- continues functioning. You cannot add a static route if the route is blacklisted for the zone.
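The effect of blacklisted.routes on new static route creation can be pictured as a simple validation step (exact-match semantics are assumed here for illustration; &PRODUCT;'s actual matching may differ):

```python
def route_allowed(new_route, blacklisted_routes):
    # A new static route is rejected if it appears in the zone's
    # blacklisted.routes list; existing routes are unaffected.
    return new_route not in set(blacklisted_routes)

route_allowed("10.1.0.0/16", ["10.1.0.0/16"])  # blacklisted: rejected
route_allowed("10.2.0.0/16", ["10.1.0.0/16"])  # not blacklisted: allowed
```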
-
-
-
diff --git a/docs/en-US/add-ingress-egress-rules.xml b/docs/en-US/add-ingress-egress-rules.xml
deleted file mode 100644
index 2490cec43cc..00000000000
--- a/docs/en-US/add-ingress-egress-rules.xml
+++ /dev/null
@@ -1,131 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding Ingress and Egress Rules to a Security Group
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network
-
-
- In Select view, choose Security Groups, then click the security group you want.
-
-
- To add an ingress rule, click the Ingress Rules tab and fill out the following fields to
- specify what network traffic is allowed into VM instances in this security group. If no
- ingress rules are specified, then no traffic will be allowed in, except for responses to any
- traffic that has been allowed out through an egress rule.
-
-
- Add by CIDR/Account. Indicate whether the source of
- the traffic will be defined by IP address (CIDR) or an existing security group in a
- &PRODUCT; account (Account). Choose Account if you want to allow incoming traffic from
- all VMs in another security group
-
-
- Protocol. The networking protocol that sources will
- use to send traffic to the security group. TCP and UDP are typically used for data
- exchange and end-user communications. ICMP is typically used to send error messages or
- network monitoring data.
-
-
- Start Port, End Port. (TCP, UDP only) A range of
- listening ports that are the destination for the incoming traffic. If you are opening a
- single port, use the same number in both fields.
-
-
- ICMP Type, ICMP Code. (ICMP only) The type of
- message and error code that will be accepted.
-
-
- CIDR. (Add by CIDR only) To accept only traffic
- from IP addresses within a particular address block, enter a CIDR or a comma-separated
- list of CIDRs. The CIDR is the base IP address of the incoming traffic. For example,
- 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
-
-
- Account, Security Group. (Add by Account only) To
- accept only traffic from another security group, enter the &PRODUCT; account and name of
- a security group that has already been defined in that account. To allow traffic between
- VMs within the security group you are editing now, enter the same name you used in step
- 7.
-
-
- The following example allows inbound HTTP access from anywhere:
-
-
-
-
-
- httpaccess.png: allows inbound HTTP access from anywhere
-
-
-
-
- To add an egress rule, click the Egress Rules tab and fill out the following fields to
- specify what type of traffic is allowed to be sent out of VM instances in this security
- group. If no egress rules are specified, then all traffic will be allowed out. Once egress
- rules are specified, the following types of traffic are allowed out: traffic specified in
- egress rules; queries to DNS and DHCP servers; and responses to any traffic that has been
- allowed in through an ingress rule
-
-
- Add by CIDR/Account. Indicate whether the
- destination of the traffic will be defined by IP address (CIDR) or an existing security
- group in a &PRODUCT; account (Account). Choose Account if you want to allow outgoing
- traffic to all VMs in another security group.
-
-
- Protocol. The networking protocol that VMs will use
- to send outgoing traffic. TCP and UDP are typically used for data exchange and end-user
- communications. ICMP is typically used to send error messages or network monitoring
- data.
-
-
- Start Port, End Port. (TCP, UDP only) A range of
- listening ports that are the destination for the outgoing traffic. If you are opening a
- single port, use the same number in both fields.
-
-
- ICMP Type, ICMP Code. (ICMP only) The type of
- message and error code that will be sent
-
-
- CIDR. (Add by CIDR only) To send traffic only to IP
- addresses within a particular address block, enter a CIDR or a comma-separated list of
- CIDRs. The CIDR is the base IP address of the destination. For example, 192.168.0.0/22.
- To allow all CIDRs, set to 0.0.0.0/0.
-
-
- Account, Security Group. (Add by Account only) To
- allow traffic to be sent to another security group, enter the &PRODUCT; account and name
- of a security group that has already been defined in that account. To allow traffic
- between VMs within the security group you are editing now, enter its name.
-
-
-
-
- Click Add.
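The CIDR fields above take a base address plus prefix length. Whether a given source or destination address falls inside a rule's CIDR list can be checked with Python's standard `ipaddress` module (a sketch, not &PRODUCT; code):

```python
from ipaddress import ip_address, ip_network

def cidr_allows(cidrs, address):
    # True if the address falls inside any CIDR in the rule.
    # "0.0.0.0/0" matches every IPv4 address.
    return any(ip_address(address) in ip_network(c) for c in cidrs)

# 192.168.0.0/22 spans 192.168.0.0 - 192.168.3.255:
cidr_allows(["192.168.0.0/22"], "192.168.3.10")   # inside the block
cidr_allows(["192.168.0.0/22"], "192.168.4.1")    # outside the block
```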
-
-
-
diff --git a/docs/en-US/add-ip-range.xml b/docs/en-US/add-ip-range.xml
deleted file mode 100644
index 6da0668ec2b..00000000000
--- a/docs/en-US/add-ip-range.xml
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Multiple Subnets in Shared Network
- &PRODUCT; provides you with the flexibility to add guest IP ranges from different subnets in
- Basic zones and security groups-enabled Advanced zones. For security groups-enabled Advanced
- zones, this means multiple subnets can be added to the same VLAN. With this
- feature, you can add IP address ranges from the same subnet or from a different one
- when IP addresses are exhausted. This in turn allows you to employ a higher number of subnets
- and thus reduce the address management overhead. You can delete the IP ranges you have
- added.
-
- Prerequisites and Guidelines
-
-
- This feature can only be implemented:
-
-
- on IPv4 addresses
-
-
- if virtual router is the DHCP provider
-
-
- on KVM, XenServer, and VMware hypervisors
-
-
-
-
- Manually configure the gateway of the new subnet before adding the IP range.
-
-
- &PRODUCT; supports only one gateway for a subnet; overlapping subnets are not
- currently supported.
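Since overlapping subnets are not supported, a new range must be checked against the existing ones; Python's standard `ipaddress` module expresses the check directly (a sketch, not &PRODUCT; code):

```python
from ipaddress import ip_network

def overlaps_existing(new_cidr, existing_cidrs):
    # True if the proposed subnet overlaps any subnet already on the network.
    new = ip_network(new_cidr)
    return any(new.overlaps(ip_network(c)) for c in existing_cidrs)

overlaps_existing("10.0.1.128/25", ["10.0.1.0/24"])  # nested, so it overlaps
overlaps_existing("10.0.2.0/24", ["10.0.1.0/24"])    # disjoint subnet
```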
-
-
-
-
- Adding Multiple Subnets to a Shared Network
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Infrastructure.
-
-
- On Zones, click View More, then click the zone you want to work with.
-
-
- Click Physical Network.
-
-
- In the Guest node of the diagram, click Configure.
-
-
- Click Networks.
-
-
- Select the networks you want to work with.
-
-
- Click View IP Ranges.
-
-
- Click Add IP Range.
- The Add IP Range dialog is displayed, as follows:
-
-
-
-
-
- add-ip-range.png: adding an IP range to a network.
-
-
-
-
- Specify the following:
- All the fields are mandatory.
-
-
- Gateway: The gateway for the subnet you are adding.
- Ensure that the gateway belongs to the new subnet and does not overlap
- with the IP range of any existing subnet on the
- network.
-
-
- Netmask: The netmask for the subnet you are adding.
- For example, if the subnet CIDR is
- 10.0.1.0/24, the gateway is 10.0.1.1, and the netmask is
- 255.255.255.0.
-
-
- Start IP/ End IP: A range of IP addresses that
- are accessible from the Internet and will be allocated to guest VMs. Enter the first
- and last IP addresses that define a range that &PRODUCT; can assign to guest
- VMs.
-
-
-
-
- Click OK.
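The gateway and netmask entered above are both derivable from the subnet CIDR; the 10.0.1.0/24 example works out as follows (a sketch using Python's `ipaddress` module; treating the first usable host as the gateway is a convention, not mandated by &PRODUCT;):

```python
from ipaddress import ip_network

def subnet_defaults(cidr):
    # Conventional gateway (first usable host) and dotted-quad netmask.
    net = ip_network(cidr)
    return str(net.network_address + 1), str(net.netmask)

subnet_defaults("10.0.1.0/24")  # gateway 10.0.1.1, netmask 255.255.255.0
```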
-
-
-
-
diff --git a/docs/en-US/add-iso.xml b/docs/en-US/add-iso.xml
deleted file mode 100644
index 25986e02e92..00000000000
--- a/docs/en-US/add-iso.xml
+++ /dev/null
@@ -1,151 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding an ISO
- To make additional operating system or other software available for use with guest VMs, you
- can add an ISO. The ISO is typically thought of as an operating system image, but you can also
- add ISOs for other types of software, such as desktop applications that you want to be installed
- as part of a template.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation bar, click Templates.
-
-
- In Select View, choose ISOs.
-
-
- Click Add ISO.
-
-
- In the Add ISO screen, provide the following:
-
-
- Name: Short name for the ISO image. For example,
- CentOS 6.2 64-bit.
-
-
- Description: Display text for the ISO image. For
- example, CentOS 6.2 64-bit.
-
-
- URL: The URL that hosts the ISO image. The
- Management Server must be able to access this location via HTTP. If needed, you can place
- the ISO image directly on the Management Server.
-
-
- Zone: Choose the zone where you want the ISO to be
- available, or All Zones to make it available throughout &PRODUCT;.
-
-
- Bootable: Whether or not a guest can boot from
- this ISO image. For example, a CentOS ISO is bootable, a Microsoft Office ISO is not
- bootable.
-
-
- OS Type: This helps &PRODUCT; and the hypervisor
- perform certain operations and make assumptions that improve the performance of the
- guest. Select one of the following.
-
-
- If the operating system of your desired ISO image is listed, choose it.
-
-
- If the OS Type of the ISO is not listed or if the ISO is not bootable, choose
- Other.
-
-
- (XenServer only) If you want to boot from this ISO in PV mode, choose Other PV
- (32-bit) or Other PV (64-bit)
-
-
- (KVM only) If you choose an OS that is PV-enabled, the VMs created from this ISO
- will have a SCSI (virtio) root disk. If the OS is not PV-enabled, the VMs will have
- an IDE root disk. The PV-enabled types are:
-
-
-
-
- Fedora 13
- Fedora 12
- Fedora 11
-
-
- Fedora 10
- Fedora 9
- Other PV
-
-
- Debian GNU/Linux
- CentOS 5.3
- CentOS 5.4
-
-
- CentOS 5.5
- Red Hat Enterprise Linux 5.3
- Red Hat Enterprise Linux 5.4
-
-
- Red Hat Enterprise Linux 5.5
- Red Hat Enterprise Linux 6
-
-
-
-
-
-
-
-
- It is not recommended to choose an older version of the OS than the version in the
- image. For example, choosing CentOS 5.4 to support a CentOS 6.2 image will usually not
- work. In these cases, choose Other.
-
-
-
- Extractable: Choose Yes if the ISO should be
- available for extraction.
-
-
- Public: Choose Yes if this ISO should be available
- to other users.
-
-
- Featured: Choose Yes if you would like this ISO to
- be more prominent for users to select. The ISO will appear in the Featured ISOs list.
- Only an administrator can make an ISO Featured.
-
-
-
-
- Click OK.
- The Management Server will download the ISO. Depending on the size of the ISO, this may
- take a long time. The ISO status column will display Ready once it has been successfully
- downloaded into secondary storage. Clicking Refresh updates the download percentage.
-
-
- Important: Wait for the ISO to finish downloading. If
- you move on to the next task and try to use the ISO right away, it will appear to fail. The
- entire ISO must be available before &PRODUCT; can work with it.
-
-
-
diff --git a/docs/en-US/add-load-balancer-rule.xml b/docs/en-US/add-load-balancer-rule.xml
deleted file mode 100644
index ef3305e98e8..00000000000
--- a/docs/en-US/add-load-balancer-rule.xml
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding a Load Balancer Rule
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- Click the name of the network where you want to load balance the traffic.
-
-
- Click View IP Addresses.
-
-
- Click the IP address for which you want to create the rule, then click the Configuration
- tab.
-
-
- In the Load Balancing node of the diagram, click View All.
- In a Basic zone, you can also create a load balancing rule without acquiring or
- selecting an IP address. &PRODUCT; internally assigns an IP when you create the load
- balancing rule, which is listed in the IP Addresses page when the rule is created.
- To do that, select the name of the network, then click the Add Load Balancer tab. Continue
- with .
-
-
- Fill in the following:
-
-
- Name: A name for the load balancer rule.
-
-
- Public Port: The port receiving incoming traffic to
- be balanced.
-
-
- Private Port: The port that the VMs will use to
- receive the traffic.
-
-
- Algorithm: Choose the load balancing algorithm you
- want &PRODUCT; to use. &PRODUCT; supports a variety of well-known algorithms. If you are
- not familiar with these choices, you will find plenty of information about them on the
- Internet.
-
-
- Stickiness: (Optional) Click Configure and choose
- the algorithm for the stickiness policy. See .
-
-
- AutoScale: Click Configure and complete the
- AutoScale configuration as explained in .
-
- Health Check: (Optional; NetScaler load balancers only) Click
- Configure and fill in the characteristics of the health check policy. See .
-
- Ping path (Optional): Sequence of destinations to which to send health check queries.
- Default: / (all).
- Response time (Optional): How long to wait for a response from the health check (2 - 60 seconds).
- Default: 5 seconds.
- Interval time (Optional): Amount of time between health checks (1 second - 5 minutes).
- Default value is set in the global configuration parameter lbrule_health_check_time_interval.
- Healthy threshold (Optional): Number of consecutive health check successes
- that are required before declaring an instance healthy.
- Default: 2.
- Unhealthy threshold (Optional): Number of consecutive health check failures that are required before declaring an instance unhealthy.
- Default: 10.
-
-
-
-
- Click Add VMs, then select two or more VMs that will divide the load of incoming
- traffic, and click Apply.
- The new load balancer rule appears in the list. You can repeat these steps to add more
- load balancer rules for this IP address.
-
-
-
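The healthy/unhealthy thresholds described above are consecutive-result counters: an instance changes state only after an unbroken run of opposite health-check results. A minimal sketch of that behavior (the class is illustrative, not &PRODUCT; code; defaults match the dialog, 2 and 10):

```python
class HealthCheckState:
    """Tracks one back-end instance. The instance flips state only after
    healthy_threshold consecutive successes or unhealthy_threshold
    consecutive failures."""

    def __init__(self, healthy_threshold=2, unhealthy_threshold=10):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy = True
        self._streak = 0  # consecutive results opposing current state

    def record(self, success):
        if success == self.healthy:
            self._streak = 0  # streak of opposite results is broken
            return self.healthy
        self._streak += 1
        needed = (self.healthy_threshold if not self.healthy
                  else self.unhealthy_threshold)
        if self._streak >= needed:
            self.healthy = not self.healthy
            self._streak = 0
        return self.healthy

s = HealthCheckState()
for _ in range(9):
    s.record(False)
print(s.healthy)        # True: only 9 consecutive failures so far
print(s.record(False))  # False: the 10th consecutive failure flips it
```

A single success in the middle of a failure run resets the counter, which is why intermittently flaky instances are not marked unhealthy.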
diff --git a/docs/en-US/add-loadbalancer-rule-vpc.xml b/docs/en-US/add-loadbalancer-rule-vpc.xml
deleted file mode 100644
index 90247b0a6f9..00000000000
--- a/docs/en-US/add-loadbalancer-rule-vpc.xml
+++ /dev/null
@@ -1,462 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Adding Load Balancing Rules on a VPC
- In a VPC, you can configure two types of load balancing—external LB and internal LB.
- External LB is simply an LB rule created to redirect the traffic received at a public IP of
- the VPC virtual router. The traffic is load balanced within a tier based on your configuration.
- Citrix NetScaler and the VPC virtual router are supported for external LB. When you use the
- internal LB service, traffic received at a tier is load balanced across different VMs within
- that tier. For example, traffic that reaches the Web tier is redirected to another VM in that
- tier. External load balancing devices are not supported for internal LB. The service is
- provided by an internal LB VM configured on the target tier.
-
- Load Balancing Within a Tier (External LB)
- A &PRODUCT; user or administrator may create load balancing rules that balance traffic
- received at a public IP to one or more VMs that belong to a network tier that provides load
- balancing service in a VPC. A user creates a rule, specifies an algorithm, and assigns the
- rule to a set of VMs within a tier.
-
- Enabling NetScaler as the LB Provider on a VPC Tier
-
-
- Add and enable Netscaler VPX in dedicated mode.
- Netscaler can be used in a VPC environment only if it is in dedicated mode.
-
-
- Create a network offering, as given in .
-
-
- Create a VPC with Netscaler as the Public LB provider.
- For more information, see .
-
-
- For the VPC, acquire an IP.
-
-
- Create an external load balancing rule and apply, as given in .
-
-
-
-
- Creating a Network Offering for External LB
- To have external LB support on VPC, create a network offering as follows:
-
-
- Log in to the &PRODUCT; UI as a user or admin.
-
-
- From the Select Offering drop-down, choose Network Offering.
-
-
- Click Add Network Offering.
-
-
- In the dialog, make the following choices:
-
-
- Name: Any desired name for the network
- offering.
-
-
- Description: A short description of the
- offering that can be displayed to users.
-
-
- Network Rate: Allowed data transfer rate in MB
- per second.
-
-
- Traffic Type: The type of network traffic that
- will be carried on the network.
-
-
- Guest Type: Choose whether the guest network is
- isolated or shared.
-
-
- Persistent: Indicate whether the guest network
- is persistent or not. A network that you can provision without having to deploy a
- VM on it is termed a persistent network.
-
-
- VPC: This option indicates whether the guest
- network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a
- private, isolated part of &PRODUCT;. A VPC can have its own virtual network topology
- that resembles a traditional physical network. For more information on VPCs, see
- .
-
-
- Specify VLAN: (Isolated guest networks only)
- Indicate whether a VLAN should be specified when this offering is used.
-
-
- Supported Services: Select Load Balancer. Use
- Netscaler or VpcVirtualRouter.
-
-
- Load Balancer Type: Select Public LB from the
- drop-down.
-
-
- LB Isolation: Select Dedicated if Netscaler is
- used as the external LB provider.
-
-
- System Offering: Choose the system service
- offering that you want virtual routers to use in this network.
-
-
- Conserve mode: Indicate whether to use conserve
- mode. In this mode, network resources are allocated only when the first virtual
- machine starts in the network.
-
-
-
-
- Click OK and the network offering is created.
-
-
-
-
- Creating an External LB Rule
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
- All the VPCs that you have created for the account are listed on the page.
-
-
- Click the Configure button of the VPC, for which you want to configure load
- balancing rules.
- The VPC page is displayed where all the tiers you created are listed in a
- diagram.
- For each tier, the following options are displayed:
-
-
- Internal LB
-
-
- Public LB IP
-
-
- Static NAT
-
-
- Virtual Machines
-
-
- CIDR
-
-
- The following router information is displayed:
-
-
- Private Gateways
-
-
- Public IP Addresses
-
-
- Site-to-Site VPNs
-
-
- Network ACL Lists
-
-
-
-
- In the Router node, select Public IP Addresses.
- The IP Addresses page is displayed.
-
-
- Click the IP address for which you want to create the rule, then click the
- Configuration tab.
-
-
- In the Load Balancing node of the diagram, click View All.
-
-
- Select the tier to which you want to apply the rule.
-
-
- Specify the following:
-
-
- Name: A name for the load balancer rule.
-
-
- Public Port: The port that receives the
- incoming traffic to be balanced.
-
-
- Private Port: The port that the VMs will use to
- receive the traffic.
-
-
- Algorithm. Choose the load balancing algorithm
- you want &PRODUCT; to use. &PRODUCT; supports the following well-known
- algorithms:
-
-
- Round-robin
-
-
- Least connections
-
-
- Source
-
-
-
-
- Stickiness. (Optional) Click Configure and
- choose the algorithm for the stickiness policy. See Sticky Session Policies for Load
- Balancer Rules.
-
-
- Add VMs: Click Add VMs, then select two or more
- VMs that will divide the load of incoming traffic, and click Apply.
-
-
-
-
- The new load balancing rule appears in the list. You can repeat these steps to add more
- load balancing rules for this IP address.
-
-
-
- Load Balancing Across Tiers
- &PRODUCT; supports sharing workload across different tiers within your VPC. Assume that
- multiple tiers are set up in your environment, such as Web tier and Application tier. Traffic
- to each tier is balanced on the VPC virtual router on the public side, as explained in . If you want the traffic coming from the Web tier to
- the Application tier to be balanced, use the internal load balancing feature offered by
- &PRODUCT;.
-
- How Does Internal LB Work in VPC?
- In this figure, a public LB rule is created for the public IP 72.52.125.10 with public
- port 80 and private port 81. The LB rule, created on the VPC virtual router, is applied on
- the traffic coming from the Internet to the VMs on the Web tier. On the Application tier two
- internal load balancing rules are created. An internal LB rule for the guest IP 10.10.10.4
- with load balancer port 23 and instance port 25 is configured on the VM, InternalLBVM1.
- Another internal LB rule for the guest IP 10.10.10.4 with load balancer port 45 and instance
- port 46 is configured on the VM, InternalLBVM1. Another internal LB rule for the guest IP
- 10.10.10.6, with load balancer port 23 and instance port 25 is configured on the VM,
- InternalLBVM2.
-
-
-
-
-
- vpc-lb.png: Configuring internal LB for VPC
-
-
-
-
- Guidelines
-
- Internal LB and Public LB are mutually exclusive on a tier. If the tier has LB on the public
- side, then it cannot have Internal LB.
- Internal LB is supported just on VPC networks in &PRODUCT; 4.2 release.
- Only Internal LB VM can act as the Internal LB provider in &PRODUCT; 4.2 release.
- Network upgrade is not supported from the network offering with Internal LB to the network
- offering with Public LB.
- Multiple tiers can have internal LB support in a VPC.
- Only one tier can have Public LB support in a VPC.
-
-
-
- Enabling Internal LB on a VPC Tier
-
-
- Create a network offering, as given in .
-
-
- Create an internal load balancing rule and apply, as given in .
-
-
-
-
- Creating a Network Offering for Internal LB
- To have internal LB support on VPC, either use the default offering,
- DefaultIsolatedNetworkOfferingForVpcNetworksWithInternalLB, or create a network offering as
- follows:
-
-
- Log in to the &PRODUCT; UI as a user or admin.
-
-
- From the Select Offering drop-down, choose Network Offering.
-
-
- Click Add Network Offering.
-
-
- In the dialog, make the following choices:
-
-
- Name: Any desired name for the network
- offering.
-
-
- Description: A short description of the
- offering that can be displayed to users.
-
-
- Network Rate: Allowed data transfer rate in MB
- per second.
-
-
- Traffic Type: The type of network traffic that
- will be carried on the network.
-
-
- Guest Type: Choose whether the guest network is
- isolated or shared.
-
-
- Persistent: Indicate whether the guest network
- is persistent or not. A network that you can provision without having to deploy a
- VM on it is termed a persistent network.
-
-
- VPC: This option indicates whether the guest
- network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a
- private, isolated part of &PRODUCT;. A VPC can have its own virtual network topology
- that resembles a traditional physical network. For more information on VPCs, see
- .
-
-
- Specify VLAN: (Isolated guest networks only)
- Indicate whether a VLAN should be specified when this offering is used.
-
-
- Supported Services: Select Load Balancer.
- Select InternalLbVM from the provider list.
-
-
- Load Balancer Type: Select Internal LB from the
- drop-down.
-
-
- System Offering: Choose the system service
- offering that you want virtual routers to use in this network.
-
-
- Conserve mode: Indicate whether to use conserve
- mode. In this mode, network resources are allocated only when the first virtual
- machine starts in the network.
-
-
-
-
- Click OK and the network offering is created.
-
-
-
-
- Creating an Internal LB Rule
- When you create an Internal LB rule and apply it to a VM, an Internal LB VM, which is
- responsible for load balancing, is created.
- You can view the created Internal LB VM in the Instances page if you navigate to
- Infrastructure > Zones >
- <zone_name> > <physical_network_name> > Network Service
- Providers > Internal LB VM. You can manage the
- Internal LB VMs from this location as needed.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
- All the VPCs that you have created for the account are listed on the page.
-
-
- Locate the VPC for which you want to configure internal LB, then click
- Configure.
- The VPC page is displayed where all the tiers you created are listed in a
- diagram.
-
-
- Locate the Tier for which you want to configure an internal LB rule, click Internal
- LB.
- In the Internal LB page, click Add Internal LB.
-
-
- In the dialog, specify the following:
-
-
- Name: A name for the load balancer rule.
-
-
- Description: A short description of the rule
- that can be displayed to users.
-
-
- Source IP Address: (Optional) The source IP
- from which traffic originates. The IP is acquired from the CIDR of that particular
- tier on which you want to create the Internal LB rule. If not specified, the IP
- address is automatically allocated from the network CIDR.
- For every Source IP, a new Internal LB VM is created for load balancing.
-
-
- Source Port: The port associated with the
- source IP. Traffic on this port is load balanced.
-
-
- Instance Port: The port of the internal LB
- VM.
-
-
- Algorithm. Choose the load balancing algorithm
- you want &PRODUCT; to use. &PRODUCT; supports the following well-known
- algorithms:
-
-
- Round-robin
-
-
- Least connections
-
-
- Source
-
-
-
-
-
-
-
-
-
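Of the three algorithms listed above, round-robin is the simplest to picture: back-end VMs are selected in strict rotation. A sketch of that selection order (illustrative only, not &PRODUCT;'s implementation):

```python
import itertools

def round_robin(vms):
    """Yield back-end VMs in rotation, as a round-robin LB rule would
    distribute incoming connections."""
    return itertools.cycle(vms)

picker = round_robin(["web-1", "web-2", "web-3"])
print([next(picker) for _ in range(4)])  # ['web-1', 'web-2', 'web-3', 'web-1']
```

Least-connections instead picks the VM with the fewest active connections, and source hashes the client address so each client sticks to one VM.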
diff --git a/docs/en-US/add-members-to-projects.xml b/docs/en-US/add-members-to-projects.xml
deleted file mode 100644
index 39c3edfb2c3..00000000000
--- a/docs/en-US/add-members-to-projects.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Adding Members to a Project
- New members can be added to a project by the project’s administrator, the domain administrator of the domain where the project resides or of any parent domain, or the &PRODUCT; root administrator. There are two ways to add members in &PRODUCT;, but only one way is enabled at a time:
-
- If invitations have been enabled, you can send invitations to new members.
- If invitations are not enabled, you can add members directly through the UI.
-
-
-
-
-
diff --git a/docs/en-US/add-more-clusters.xml b/docs/en-US/add-more-clusters.xml
deleted file mode 100644
index 894b4d80737..00000000000
--- a/docs/en-US/add-more-clusters.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Add More Clusters (Optional)
- You need to tell &PRODUCT; about the hosts that it will manage. Hosts exist inside clusters,
- so before you begin adding hosts to the cloud, you must add at least one cluster.
-
-
-
-
-
diff --git a/docs/en-US/add-password-management-to-templates.xml b/docs/en-US/add-password-management-to-templates.xml
deleted file mode 100644
index 60de951a1e5..00000000000
--- a/docs/en-US/add-password-management-to-templates.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Adding Password Management to Your Templates
- &PRODUCT; provides an optional password reset feature that allows users to set a temporary
- admin or root password as well as reset the existing admin or root password from the &PRODUCT;
- UI.
- To enable the Reset Password feature, you will need to download an additional script to
- patch your template. When you later upload the template into &PRODUCT;, you can specify whether
- reset admin/root password feature should be enabled for this template.
- The password management feature always resets the account password on instance boot.
- The script does an HTTP call to the virtual router to retrieve the account password that should
- be set. As long as the virtual router is accessible the guest will have access to the account
- password that should be used. When the user requests a password reset the management server
- generates and sends a new password to the virtual router for the account. Thus an instance
- reboot is necessary to effect any password changes.
- If the script is unable to contact the virtual router during instance boot it will not set
- the password but boot will continue normally.
-
-
-
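The boot-time flow described above can be sketched as follows. The fetch and set functions are hypothetical stand-ins; the real patch script ships with &PRODUCT; and its exact protocol for contacting the virtual router varies by release:

```python
def apply_password_on_boot(fetch_password, set_os_password):
    """On each boot, ask the virtual router for the account password.
    If the router is unreachable, leave the password untouched and let
    boot continue normally, as the text above describes."""
    try:
        password = fetch_password()  # e.g. an HTTP call to the router
    except OSError:
        return False                 # router unreachable: no change
    if password:
        set_os_password(password)
        return True
    return False

# Stubbed usage: a reachable router that returns a password.
changed = apply_password_on_boot(lambda: "s3cret", lambda p: None)
print(changed)  # True
```

Because the password is only applied at boot, a reset requested through the UI takes effect on the next instance reboot, matching the behavior noted above.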
diff --git a/docs/en-US/add-portforward-rule-vpc.xml b/docs/en-US/add-portforward-rule-vpc.xml
deleted file mode 100644
index 5b1bb49a0a3..00000000000
--- a/docs/en-US/add-portforward-rule-vpc.xml
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding a Port Forwarding Rule on a VPC
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
- All the VPCs that you have created for the account are listed on the page.
-
-
- Click the Configure button of the VPC to which you want to deploy the VMs.
- The VPC page is displayed where all the tiers you created are listed in a
- diagram.
- For each tier, the following options are displayed:
-
-
- Internal LB
-
-
- Public LB IP
-
-
- Static NAT
-
-
- Virtual Machines
-
-
- CIDR
-
-
- The following router information is displayed:
-
-
- Private Gateways
-
-
- Public IP Addresses
-
-
- Site-to-Site VPNs
-
-
- Network ACL Lists
-
-
-
-
- In the Router node, select Public IP Addresses.
- The IP Addresses page is displayed.
-
-
- Click the IP address for which you want to create the rule, then click the Configuration
- tab.
-
-
- In the Port Forwarding node of the diagram, click View All.
-
-
- Select the tier to which you want to apply the rule.
-
-
- Specify the following:
-
-
- Public Port: The port to which public traffic will
- be addressed on the IP address you acquired in the previous step.
-
-
- Private Port: The port on which the instance is
- listening for forwarded public traffic.
-
-
- Protocol: The communication protocol in use between
- the two ports.
-
-
- TCP
-
-
- UDP
-
-
-
-
- Add VM: Click Add VM. Select the name of the
- instance to which this rule applies, and click Apply.
- You can test the rule by opening an SSH session to the instance.
-
-
-
-
-
diff --git a/docs/en-US/add-primary-storage.xml b/docs/en-US/add-primary-storage.xml
deleted file mode 100644
index a43567f5562..00000000000
--- a/docs/en-US/add-primary-storage.xml
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding Primary Storage
-
- Ensure that nothing is stored on the server. Adding the server to CloudStack will destroy any
- existing data.
-
- When you create a new zone, the first primary storage is added as part of that procedure.
- You can add primary storage servers at any time, such as when adding a new cluster or adding
- more servers to an existing cluster.
-
-
- Log in to the &PRODUCT; UI.
-
-
- In the left navigation, choose Infrastructure. In Zones, click View More, then click the
- zone in which you want to add the primary storage.
-
-
- Click the Compute tab.
-
-
- In the Primary Storage node of the diagram, click View All.
-
-
- Click Add Primary Storage.
-
-
- Provide the following information in the dialog. The information required varies
- depending on your choice in Protocol.
-
-
- Pod. The pod for the storage device.
-
-
- Cluster. The cluster for the storage device.
-
-
- Name. The name of the storage device.
-
-
- Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS
- or SharedMountPoint. For vSphere, choose either VMFS (iSCSI or Fiber Channel) or
- NFS.
-
-
- Server (for NFS, iSCSI, or PreSetup). The IP address or DNS name of the storage
- device
-
-
- Server (for VMFS). The IP address or DNS name of the vCenter server.
-
-
- Path (for NFS). In NFS this is the exported path from the server.
-
-
- Path (for VMFS). In vSphere this is a combination of the datacenter name and the
- datastore name. The format is "/" datacenter name "/" datastore name. For example,
- "/cloud.dc.VM/cluster1datastore".
-
-
- Path (for SharedMountPoint). With KVM this is the path on each host that is where
- this primary storage is mounted. For example, "/mnt/primary".
-
-
- SR Name-Label (for PreSetup). Enter the name-label of the SR that has been set up
- outside &PRODUCT;.
-
-
- Target IQN (for iSCSI). In iSCSI this is the IQN of the target. For example,
- iqn.1986-03.com.sun:02:01ec9bb549-1271378984
-
-
- Lun # (for iSCSI). In iSCSI this is the LUN number. For example, 3.
-
-
- Tags (optional). The comma-separated list of tags for this storage device. It should
- be an equivalent set or superset of the tags on your disk offerings.
-
-
- The tag sets on primary storage across clusters in a Zone must be identical. For
- example, if cluster A provides primary storage that has tags T1 and T2, all other clusters
- in the Zone must also provide primary storage that has tags T1 and T2.
-
-
- Click OK.
-
-
-
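Two of the fields above are easy to get wrong: the VMFS path format and the rule that primary-storage tags must be an equivalent set or superset of disk-offering tags. Both can be captured as small checks (helper names are illustrative):

```python
def vmfs_path(datacenter, datastore):
    """Build the VMFS path in the '/<datacenter>/<datastore>' format
    described above."""
    return f"/{datacenter}/{datastore}"

def tags_cover_offering(storage_tags, offering_tags):
    """Primary-storage tags (comma-separated) must include every tag
    on the disk offering."""
    split = lambda s: {t.strip() for t in s.split(",") if t.strip()}
    return split(offering_tags) <= split(storage_tags)

print(vmfs_path("cloud.dc.VM", "cluster1datastore"))  # /cloud.dc.VM/cluster1datastore
print(tags_cover_offering("T1,T2", "T1"))             # True
print(tags_cover_offering("T1", "T1,T2"))             # False
```

The superset rule must also hold uniformly across clusters in a zone, as the note above states.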
diff --git a/docs/en-US/add-projects-members-from-ui.xml b/docs/en-US/add-projects-members-from-ui.xml
deleted file mode 100644
index 670a0ec75ab..00000000000
--- a/docs/en-US/add-projects-members-from-ui.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Adding Project Members From the UI
- The steps below tell how to add a new member to a project if the invitations feature is not enabled in the cloud. If the invitations feature is enabled in the cloud, as described in , use the procedure in .
-
- Log in to the &PRODUCT; UI.
- In the left navigation, click Projects.
- In Select View, choose Projects.
- Click the name of the project you want to work with.
- Click the Accounts tab. The current members of the project are listed.
- Type the account name of the new member you want to add, and click Add Account. You can add only people who have an account in this cloud and within the same domain as the project.
-
-
-
diff --git a/docs/en-US/add-remove-nic-ui.xml b/docs/en-US/add-remove-nic-ui.xml
deleted file mode 100644
index a671329eb00..00000000000
--- a/docs/en-US/add-remove-nic-ui.xml
+++ /dev/null
@@ -1,152 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Reconfiguring Networks in VMs
- &PRODUCT; provides you the ability to move VMs between networks and reconfigure a VM's
- network. You can remove a VM from a network and add it to a new network. You can also change the
- default network of a virtual machine. With this functionality, hybrid or traditional server
- loads can be accommodated with ease.
- This feature is supported on XenServer, VMware, and KVM hypervisors.
-
- Prerequisites
- Ensure that vm-tools are running on guest VMs for adding or removing networks to work on
- VMware hypervisor.
-
-
- Adding a Network
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, click Instances.
-
-
- Choose the VM that you want to work with.
-
-
- Click the NICs tab.
-
-
- Click Add network to VM.
- The Add network to VM dialog is displayed.
-
-
- In the drop-down list, select the network that you would like to add this VM
- to.
- A new NIC is added for this network. You can view the following details in the NICs
- page:
-
-
- ID
-
-
- Network Name
-
-
- Type
-
-
- IP Address
-
-
- Gateway
-
-
- Netmask
-
-
- Is default
-
-
- CIDR (for IPv6)
-
-
-
-
-
-
- Removing a Network
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, click Instances.
-
-
- Choose the VM that you want to work with.
-
-
- Click the NICs tab.
-
-
- Locate the NIC you want to remove.
-
-
- Click Remove NIC button.
-
-
-
-
- remove-nic.png: button to remove a NIC
-
-
-
-
- Click Yes to confirm.
-
-
-
-
- Selecting the Default Network
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, click Instances.
-
-
- Choose the VM that you want to work with.
-
-
- Click the NICs tab.
-
-
- Locate the NIC you want to work with.
-
-
- Click the Set default NIC button.
-
-
-
-
- set-default-nic.png: button to set a NIC as default one.
-
-
-
-
- Click Yes to confirm.
-
-
-
-
diff --git a/docs/en-US/add-remove-nic.xml b/docs/en-US/add-remove-nic.xml
deleted file mode 100644
index fb23390b31b..00000000000
--- a/docs/en-US/add-remove-nic.xml
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Reconfiguring Networks in VMs
- &PRODUCT; provides you the ability to move VMs between networks and reconfigure a VM's
- network. You can remove a VM from a network and add it to a new network. You can
- also change the default network of a virtual machine. With this functionality, hybrid
- or traditional server loads can be accommodated with ease.
- This feature is supported on XenServer and KVM hypervisors.
- The following APIs have been added to support this feature. These API calls can function
- only while the VM is in running or stopped state.
-
- Prerequisites
- Ensure that vm-tools are running on guest VMs for adding or removing networks to work on VMware hypervisor.
-
-
- addNicToVirtualMachine
- The addNicToVirtualMachine API adds a new NIC to the specified VM on a selected
- network.
-
-
-
-
- parameter
- description
- Value
-
-
-
-
- virtualmachineid
- The unique ID of the VM to which the NIC is to be added.
- true
-
-
- networkid
- The unique ID of the network the NIC that you add should apply
- to.
- true
-
-
- ipaddress
- The IP address of the VM on the network.
- false
-
-
-
-
- The network and VM must reside in the same zone. Two VMs with the same name cannot reside
- in the same network. Therefore, adding a second VM that duplicates a name on a network will
- fail.
-
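As a sketch, an addNicToVirtualMachine call can be issued against the HTTP API. CloudStack signs requests with HMAC-SHA1 over the alphabetically sorted, lower-cased query string; the endpoint, keys, and UUIDs below are placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

def signed_query(params, api_key, secret_key):
    """Build a signed CloudStack API query string: sort parameters,
    HMAC-SHA1 the lower-cased query with the secret key, then append
    the base64- and URL-encoded digest as 'signature'."""
    params = dict(params, apikey=api_key, response="json")
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items()))
    digest = hmac.new(secret_key.encode(),
                      query.lower().encode(), hashlib.sha1).digest()
    sig = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return f"{query}&signature={sig}"

qs = signed_query({"command": "addNicToVirtualMachine",
                   "virtualmachineid": "<vm-uuid>",
                   "networkid": "<net-uuid>"},
                  "<api-key>", "<secret-key>")
# GET http://<management-server>:8080/client/api?<qs>
```

The same helper works for any command; only the `command` value and its parameters change.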
-
- removeNicFromVirtualMachine
- The removeNicFromVirtualMachine API removes a NIC from the specified VM on a selected
- network.
-
-
-
-
- parameter
- description
- Value
-
-
-
-
- virtualmachineid
- The unique ID of the VM from which the NIC is to be removed.
-
- true
-
-
- nicid
- The unique ID of the NIC that you want to remove.
- true
-
-
-
-
- Removing the default NIC is not allowed.
-
-
- updateDefaultNicForVirtualMachine
- The updateDefaultNicForVirtualMachine API updates the specified NIC to be the default one
- for a selected VM.
- The NIC is updated only in the database. You must manually update the default NIC on the
- VM; an alert is generated to remind you.
-
-
-
-
- parameter
- description
- Value
-
-
-
-
- virtualmachineid
- The unique ID of the VM for which you want to specify the default NIC.
-
- true
-
-
- nicid
- The unique ID of the NIC that you want to set as the default
- one.
- true
-
-
-
-
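The required/optional columns in the three parameter tables above can be captured as a small validation map. This is a sketch covering only the parameters listed above:

```python
REQUIRED = {
    "addNicToVirtualMachine": {"virtualmachineid", "networkid"},
    "removeNicFromVirtualMachine": {"virtualmachineid", "nicid"},
    "updateDefaultNicForVirtualMachine": {"virtualmachineid", "nicid"},
}
OPTIONAL = {
    "addNicToVirtualMachine": {"ipaddress"},  # auto-assigned if omitted
}

def missing_params(command, params):
    """Return the required parameters absent from a call."""
    return REQUIRED[command] - set(params)

print(missing_params("addNicToVirtualMachine",
                     {"virtualmachineid": "vm-1"}))  # {'networkid'}
```

Validating calls this way client-side surfaces the same errors the API would return, before a request is sent.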
-
-
diff --git a/docs/en-US/add-secondary-storage.xml b/docs/en-US/add-secondary-storage.xml
deleted file mode 100644
index 318a6ea79b6..00000000000
--- a/docs/en-US/add-secondary-storage.xml
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding Secondary Storage
-
- Be sure there is nothing stored on the server. Adding the server to CloudStack will
- destroy any existing data.
-
- When you create a new zone, the first secondary storage is added as part of that procedure.
- You can add secondary storage servers at any time to add more servers to an existing
- zone.
-
-
- If you are going to use Swift for cloud-wide secondary storage, you must add the Swift
- storage to &PRODUCT; before you add the local zone secondary storage servers.
-
-
- To prepare for local zone secondary storage, you should have created and mounted an NFS
- share during Management Server installation.
-
-
- Make sure you prepared the system VM template during Management Server
- installation.
-
-
- Now that the secondary storage server for per-zone storage is prepared, add it to
- &PRODUCT;. Secondary storage is added as part of the procedure for adding a new zone.
-
-
-
diff --git a/docs/en-US/add-security-group.xml b/docs/en-US/add-security-group.xml
deleted file mode 100644
index 85a6ba0b38a..00000000000
--- a/docs/en-US/add-security-group.xml
+++ /dev/null
@@ -1,49 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding a Security Group
- A user or administrator can define a new security group.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network
-
-
- In Select view, choose Security Groups.
-
-
- Click Add Security Group.
-
-
- Provide a name and description.
-
-
- Click OK.
- The new security group appears in the Security Groups Details tab.
-
-
- To make the security group useful, continue to Adding Ingress and Egress Rules to a
- Security Group.
-
-
-
diff --git a/docs/en-US/add-tier.xml b/docs/en-US/add-tier.xml
deleted file mode 100644
index 94a8237c066..00000000000
--- a/docs/en-US/add-tier.xml
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding Tiers
- Tiers are distinct locations within a VPC that act as isolated networks, which do not have
- access to other tiers by default. Tiers are set up on different VLANs that can communicate with
- each other by using a virtual router. Tiers provide inexpensive, low latency network
- connectivity to other tiers within the VPC.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
- All the VPCs that you have created for the account are listed on the page.
-
- End users can see their own VPCs, while root and domain admins can see any VPC they
- are authorized to see.
-
-
-
- Click the Configure button of the VPC for which you want to set up tiers.
-
-
- Click Create network.
- The Add new tier dialog is displayed, as follows:
-
-
-
-
-
- add-tier.png: adding a tier to a vpc.
-
-
- If you have already created tiers, the VPC diagram is displayed. Click Create Tier to
- add a new tier.
-
-
- Specify the following:
- All the fields are mandatory.
-
-
- Name: A unique name for the tier you create.
-
-
- Network Offering: The following default network
- offerings are listed: Internal LB, DefaultIsolatedNetworkOfferingForVpcNetworksNoLB,
- DefaultIsolatedNetworkOfferingForVpcNetworks
-                            In a VPC, only one tier can be created by using an LB-enabled network offering.
-
-
- Gateway: The gateway for the tier you create.
- Ensure that the gateway is within the Super CIDR range that you specified while creating
- the VPC, and is not overlapped with the CIDR of any existing tier within the VPC.
-
-
- VLAN: The VLAN ID for the tier that the root admin
- creates.
- This option is only visible if the network offering you selected is
- VLAN-enabled.
- For more information, see the Assigning VLANs to Isolated
- Networks section in the &PRODUCT; Administration Guide.
- For more information, see .
-
-
- Netmask: The netmask for the tier you create.
- For example, if the VPC CIDR is 10.0.0.0/16 and the network tier CIDR is
- 10.0.1.0/24, the gateway of the tier is 10.0.1.1, and the netmask of the tier is
- 255.255.255.0.
-
-
-
-
- Click OK.
-
-
- Continue with configuring access control list for the tier.
-
-
-
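Creating a tier through the API corresponds to a createNetwork call carrying the VPC ID, gateway, and netmask described above. A hedged sketch that also enforces the super-CIDR rule from the Gateway field (all IDs and addresses here are hypothetical):

```python
import ipaddress

def tier_params(vpc_id, offering_id, name, gateway, netmask, super_cidr):
    """Check that the tier gateway lies inside the VPC super CIDR,
    then build the createNetwork parameter set."""
    if ipaddress.ip_address(gateway) not in ipaddress.ip_network(super_cidr):
        raise ValueError("tier gateway must lie within the VPC super CIDR")
    return {
        "command": "createNetwork",
        "vpcid": vpc_id,
        "networkofferingid": offering_id,
        "name": name,
        "displaytext": name,
        "gateway": gateway,
        "netmask": netmask,
    }

params = tier_params("vpc-1", "offering-1", "web-tier",
                     "10.0.1.1", "255.255.255.0", "10.0.0.0/16")
```

A gateway outside the super CIDR (say, 192.168.1.1 against 10.0.0.0/16) is rejected before the request is ever sent.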
diff --git a/docs/en-US/add-vm-tier-sharednw.xml b/docs/en-US/add-vm-tier-sharednw.xml
deleted file mode 100644
index a68860419eb..00000000000
--- a/docs/en-US/add-vm-tier-sharednw.xml
+++ /dev/null
@@ -1,62 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Deploying VMs to VPC Tier and Shared Networks
-    &PRODUCT; allows you to deploy VMs on a VPC tier and one or more shared networks. With this
- feature, VMs deployed in a multi-tier application can receive monitoring services via a shared
- network provided by a service provider.
-
-
- Log in to the &PRODUCT; UI as an administrator.
-
-
- In the left navigation, choose Instances.
-
-
- Click Add Instance.
-
-
- Select a zone.
-
-
- Select a template or ISO, then follow the steps in the wizard.
-
-
- Ensure that the hardware you have allows starting the selected service offering.
-
-
- Under Networks, select the desired networks for the VM you are launching.
- You can deploy a VM to a VPC tier and multiple shared networks.
-
-
-
-
-
- addvm-tier-sharednw.png: adding a VM to a VPC tier and shared network.
-
-
-
-
- Click Next, review the configuration and click Launch.
- Your VM will be deployed to the selected VPC tier and shared network.
-
-
-
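In API terms, selecting a VPC tier plus shared networks means passing several network IDs to deployVirtualMachine. A sketch with hypothetical IDs; the first entry in the list is typically treated as the VM's default network:

```python
def deploy_vm_params(zone_id, template_id, offering_id, network_ids):
    # networkids is a comma-separated list of network UUIDs;
    # here one VPC tier plus one shared monitoring network.
    return {
        "command": "deployVirtualMachine",
        "zoneid": zone_id,
        "templateid": template_id,
        "serviceofferingid": offering_id,
        "networkids": ",".join(network_ids),
    }

params = deploy_vm_params("zone-1", "tmpl-1", "offering-small",
                          ["tier-net-1", "shared-monitoring-net"])
```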
diff --git a/docs/en-US/add-vm-to-tier.xml b/docs/en-US/add-vm-to-tier.xml
deleted file mode 100644
index c7d769d9d11..00000000000
--- a/docs/en-US/add-vm-to-tier.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Deploying VMs to the Tier
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
-                All the VPCs that you have created for the account are listed on the page.
-
-
- Click the Configure button of the VPC to which you want to deploy the VMs.
- The VPC page is displayed where all the tiers you have created are listed.
-
-
-            Click the Virtual Machines tab of the tier to which you want to add a VM.
-
-
-
-
-
- add-vm-vpc.png: adding a VM to a vpc.
-
-
- The Add Instance page is displayed.
-            Follow the on-screen instructions to add an instance. For information on adding an
- instance, see the Installation Guide.
-
-
-
diff --git a/docs/en-US/add-vpc.xml b/docs/en-US/add-vpc.xml
deleted file mode 100644
index b8034c4b4c8..00000000000
--- a/docs/en-US/add-vpc.xml
+++ /dev/null
@@ -1,80 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Adding a Virtual Private Cloud
- When creating the VPC, you simply provide the zone and a set of IP addresses for the VPC
- network address space. You specify this set of addresses in the form of a Classless Inter-Domain
- Routing (CIDR) block.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
-
-
- Click Add VPC. The Add VPC page is displayed as follows:
-
-
-
-
-
- add-vpc.png: adding a vpc.
-
-
- Provide the following information:
-
-
- Name: A short name for the VPC that you are
- creating.
-
-
- Description: A brief description of the VPC.
-
-
- Zone: Choose the zone where you want the VPC to be
- available.
-
-
- Super CIDR for Guest Networks: Defines the CIDR
- range for all the tiers (guest networks) within a VPC. When you create a tier, ensure
- that its CIDR is within the Super CIDR value you enter. The CIDR must be RFC1918
- compliant.
-
-
- DNS domain for Guest Networks: If you want to
- assign a special domain name, specify the DNS suffix. This parameter is applied to all
-                        the tiers within the VPC. This implies that all the tiers you create in the VPC belong to
- the same DNS domain. If the parameter is not specified, a DNS domain name is generated
- automatically.
-
-
- Public Load Balancer Provider: You have two
- options: VPC Virtual Router and Netscaler.
-
-
-
- Click OK.
-
-
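The same dialog maps onto a createVPC API call. The RFC1918 requirement on the super CIDR can be sketched as a pre-flight check (names and IDs here are hypothetical):

```python
import ipaddress

# The three RFC1918 private address blocks.
RFC1918 = [ipaddress.ip_network(c)
           for c in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def vpc_params(name, description, zone_id, offering_id, super_cidr):
    net = ipaddress.ip_network(super_cidr)
    if not any(net.subnet_of(block) for block in RFC1918):
        raise ValueError("super CIDR must be RFC1918 compliant")
    return {
        "command": "createVPC",
        "name": name,
        "displaytext": description,
        "zoneid": zone_id,
        "vpcofferingid": offering_id,
        "cidr": super_cidr,
    }

params = vpc_params("prod-vpc", "Production VPC", "zone-1",
                    "vpc-offering-1", "10.0.0.0/16")
```

A public range such as 8.8.0.0/16 fails the check, matching the constraint stated above.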
diff --git a/docs/en-US/added-API-commands-4-0.xml b/docs/en-US/added-API-commands-4-0.xml
deleted file mode 100644
index 2d86ba4d6dc..00000000000
--- a/docs/en-US/added-API-commands-4-0.xml
+++ /dev/null
@@ -1,164 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Added API Commands in 4.0.0-incubating
-
-
- createCounter (Adds metric counter)
-
-
- deleteCounter (Deletes a counter)
-
-
- listCounters (List the counters)
-
-
- createCondition (Creates a condition)
-
-
- deleteCondition (Removes a condition)
-
-
- listConditions (List Conditions for the specific user)
-
-
- createTags. Add tags to one or more resources. Example:
- command=createTags
-&resourceIds=1,10,12
-&resourceType=userVm
-&tags[0].key=region
-&tags[0].value=canada
-&tags[1].key=city
-&tags[1].value=Toronto
-
-
- deleteTags. Remove tags from one or more resources. Example:
- command=deleteTags
-&resourceIds=1,12
-&resourceType=Snapshot
-&tags[0].key=city
-
-
- listTags (Show currently defined resource tags)
-
-
- createVPC (Creates a VPC)
-
-
- listVPCs (Lists VPCs)
-
-
- deleteVPC (Deletes a VPC)
-
-
- updateVPC (Updates a VPC)
-
-
- restartVPC (Restarts a VPC)
-
-
- createVPCOffering (Creates VPC offering)
-
-
- updateVPCOffering (Updates VPC offering)
-
-
- deleteVPCOffering (Deletes VPC offering)
-
-
- listVPCOfferings (Lists VPC offerings)
-
-
- createPrivateGateway (Creates a private gateway)
-
-
- listPrivateGateways (List private gateways)
-
-
- deletePrivateGateway (Deletes a Private gateway)
-
-
-            createNetworkACL (Creates an ACL rule for the given network (the network has to belong to
-            a VPC))
-
-
- deleteNetworkACL (Deletes a Network ACL)
-
-
- listNetworkACLs (Lists all network ACLs)
-
-
- createStaticRoute (Creates a static route)
-
-
- deleteStaticRoute (Deletes a static route)
-
-
- listStaticRoutes (Lists all static routes)
-
-
- createVpnCustomerGateway (Creates site to site vpn customer gateway)
-
-
- createVpnGateway (Creates site to site vpn local gateway)
-
-
- createVpnConnection (Create site to site vpn connection)
-
-
- deleteVpnCustomerGateway (Delete site to site vpn customer gateway)
-
-
- deleteVpnGateway (Delete site to site vpn gateway)
-
-
- deleteVpnConnection (Delete site to site vpn connection)
-
-
- updateVpnCustomerGateway (Update site to site vpn customer gateway)
-
-
- resetVpnConnection (Reset site to site vpn connection)
-
-
- listVpnCustomerGateways (Lists site to site vpn customer gateways)
-
-
-            listVpnGateways (Lists site to site vpn gateways)
-
-
- listVpnConnections (Lists site to site vpn connection gateways)
-
-
- enableCiscoNexusVSM (Enables Nexus 1000v dvSwitch in &PRODUCT;.)
-
-
- disableCiscoNexusVSM (Disables Nexus 1000v dvSwitch in &PRODUCT;.)
-
-
- deleteCiscoNexusVSM (Deletes Nexus 1000v dvSwitch in &PRODUCT;.)
-
-
- listCiscoNexusVSMs (Lists the control VLAN ID, packet VLAN ID, and data VLAN ID, as well
- as the IP address of the Nexus 1000v dvSwitch.)
-
-
-
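Requests such as the createTags example above are sent as signed HTTP queries. A minimal sketch of CloudStack's documented signing scheme (sort the parameters, lowercase the query, HMAC-SHA1 with the secret key, base64-encode); the endpoint, keys, and tag values here are hypothetical:

```python
import base64
import hashlib
import hmac
import urllib.parse

def build_signed_query(command, params, api_key, secret_key):
    """Assemble and sign a CloudStack API query string."""
    all_params = dict(params, command=command, apikey=api_key,
                      response="json")
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(all_params.items(), key=lambda kv: kv[0].lower()))
    digest = hmac.new(secret_key.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(),
                                   safe="")
    return f"{query}&signature={signature}"

# Hypothetical keys; a real deployment issues these per account.
query = build_signed_query(
    "createTags",
    {"resourceids": "1,10,12", "resourcetype": "userVm",
     "tags[0].key": "region", "tags[0].value": "canada"},
    api_key="API_KEY", secret_key="SECRET_KEY")
```

The signed string is appended to the management server's API endpoint URL; some details (encoding of `+` as `%20`, bracketed keys) vary by release, so treat this as an illustration rather than a reference implementation.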
diff --git a/docs/en-US/added-API-commands-4-1.xml b/docs/en-US/added-API-commands-4-1.xml
deleted file mode 100644
index 006c65a5616..00000000000
--- a/docs/en-US/added-API-commands-4-1.xml
+++ /dev/null
@@ -1,73 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Added API Commands in 4.1
-
-
- createEgressFirewallRules (creates an egress firewall rule on the guest network.)
-
-
-            deleteEgressFirewallRules (deletes an egress firewall rule on the guest network.)
-
-
- listEgressFirewallRules (lists the egress firewall rules configured for a guest
- network.)
-
-
-            resetSSHKeyForVirtualMachine (Resets the SSH key for a virtual machine.)
-
-
- addBaremetalHost (Adds a new host.)
-
-
- addNicToVirtualMachine (Adds a new NIC to the specified VM on a selected
- network.)
-
-
- removeNicFromVirtualMachine (Removes the specified NIC from a selected VM.)
-
-
- updateDefaultNicForVirtualMachine (Updates the specified NIC to be the default one for a
- selected VM.)
-
-
- addRegion (Registers a Region into another Region.)
-
-
- updateRegion (Updates Region details: ID, Name, Endpoint, User API Key, and User Secret
- Key.)
-
-
- removeRegion (Removes a Region from current Region.)
-
-
- listRegions (Get all the Regions. They can be filtered by using the ID or Name.)
-
-
- getUser (This API can only be used by the Admin. Get user details by using the API Key.)
-
-
- addRegion (Add a region)
- removeRegion (Delete a region)
- updateRegion (Modify attributes of a region)
- listRegions (List regions)
-
-
diff --git a/docs/en-US/added-API-commands-4.2.xml b/docs/en-US/added-API-commands-4.2.xml
deleted file mode 100644
index 14a5f64b8ee..00000000000
--- a/docs/en-US/added-API-commands-4.2.xml
+++ /dev/null
@@ -1,554 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Added API Commands in 4.2
-
-
- addImageStore
- Adds all types of secondary storage providers, S3/Swift/NFS.
-
-
- createSecondaryStagingStore
- Adds a staging secondary storage in each zone.
-
-
- listImageStores
- Lists all secondary storages, S3/Swift/NFS.
-
-
- listSecondaryStagingStores
- Lists all staging secondary storages.
-
-
- addIpToNic
- Adds an IP address to the NIC from the guest subnet. The request parameters are: nicid,
- ipaddress.
- The response parameters are: nicid, ipaddress, networkid
-
-
- removeIpFromNic
-            Removes the reserved IP for the NIC. The request parameter is: id.
- The response parameters are: true, false
-
-
- listNics
- Lists the NIC details of the user VM; the API response also contains the Secondary IP
- addresses of the NIC. The request parameters are: nicid, virtualmachineid.
- The response parameters are: id, ipaddress, secondaryips, gateway, netmask, macaddr,
-            broadcasturi, isolationuri, isdefault.
-
-
- deleteAlerts
- Deletes the specified alerts. The request parameters are: ids (allowed to pass one or
- more IDs separated by comma); type (string); olderthan (yyyy-mm-dd format).
- The response parameters are: true, false
-
-
- archiveAlerts
- Archives the specified alerts. The request parameters are: ids (allowed to pass one or
- more IDs separated by comma); type (string); olderthan (yyyy-mm-dd format).
- The response parameters are: true, false
-
-
- deleteEvents
- Deletes the specified events. The request parameters are: ids (allowed to pass one or
- more IDs separated by comma); type (string); olderthan (yyyy-mm-dd format).
- The response parameters are: true, false
-
-
- archiveEvents
- Archives the specified events. The request parameters are: ids (allowed to pass one or
- more IDs separated by comma); type (string); olderthan (yyyy-mm-dd format).
- The response parameters are: true, false
-
-
- createGlobalLoadBalancerRule
- Creates a GSLB rule. The request parameters are name (the name of the global load
- balancer rule); domain name ( the preferred domain name for the service); lb algorithm (the
- algorithm used to load balance the traffic across the zones); session persistence (source IP
- and HTTP cookie); account name; and domain Id.
-
-
- assignToGlobalLoadBalancerRule
- Assigns a load balancing rule or list of load balancing rules to GSLB. The request
- parameters are: id (the UUID of global load balancer rule); loadbalancerrulelist (the list
- load balancer rules that will be assigned to global load balancer rule. These are second
- tier load balancing rules created with createLoadBalancerRule API. Weight is optional, the
- default is 1).
-
-
- removeFromGlobalLoadBalancerRule
- Removes a load balancer rule association with global load balancer rule. The request
- parameters are id (the UUID of global load balancer rule); loadbalancerrulelist (the list
- load balancer rules that will be assigned to global load balancer rule).
-
-
- deleteGlobalLoadBalancerRule
- Deletes a global load balancer rule. The request parameters is: id (the unique ID of the
- global load balancer rule).
-
-
- listGlobalLoadBalancerRule
- Lists load balancer rules.
- The request parameters are: account (lists resources by account. Use with the domainid
- parameter); domainid (lists only resources belonging to the domain specified); id (the
- unique ID of the global load balancer rule); isrecursive (defaults to false; but if true,
- lists all the resources from the parent specified by the domainid); keyword (lists by
- keyword); listall (if set to false, lists only resources belonging to the command's caller;
- if set to true, lists resources that the caller is authorized to see. Default value is
- false); page; pagesize; projectid (lists objects by project); regionid ; tags (lists
- resources by tags: key/value pairs).
-
-
- updateGlobalLoadBalancerRule
- Updates global load balancer rules.
- The request parameters are: id (the unique ID of the global load balancer rule); account
- (lists resources by account. Use with the domainid parameter); description (the description
- of the load balancer rule); domainid (lists only resources belonging to the domain
-            specified); gslblbmethod (the load balancer algorithm that is used to distribute traffic
- across the zones participating in global server load balancing, if not specified defaults to
- round robin); gslbstickysessionmethodname (the session sticky method; if not specified
- defaults to sourceip); isrecursive (defaults to false, but if true, lists all resources from
- the parent specified by the domainid till leaves); keyword (lists by keyword); listall (if
- set to false, list only those resources belonging to the command's caller; if set to true,
- lists resources that the caller is authorized to see. Default value is false); page;
- pagesize; projectid (lists objects by project); regionid; tags (lists resources by tags:
- key/value pairs)
-
-
- createPortableIpRange
- Creates portable IP addresses in the portable public IP address pool.
- The request parameters are region id, start ip, end ip, netmask, gateway, and
- vlan.
-
-
- deletePortableIpRange
- Deletes portable IP addresses from the portable public IP address pool.
-            The request parameter is the portable IP address range ID.
-
-
- listPortableIpRange
- Lists portable IP addresses in the portable public IP address pool associated with a
- Region.
- The request parameters are elastic ip id and region id.
-
-
- createVMSnapshot
- Creates a virtual machine snapshot.
-
-
- deleteVMSnapshot
- Deletes a virtual machine snapshot.
-
-
- listVMSnapshot
- Shows a virtual machine snapshot.
-
-
- revertToVMSnapshot
- Returns a virtual machine to the state and data saved in a given snapshot.
-
-
- createLBHealthCheckPolicy
- Creates a new health check policy for a load balancer rule.
-
-
- deleteLBHealthCheckPolicy
- Deletes an existing health check policy from a load balancer rule.
-
-
- listLBHealthCheckPolicies
- Displays the health check policy for a load balancer rule.
-
-
- createEgressFirewallRules
- Creates an egress firewall rule on the guest network.
-
-
- deleteEgressFirewallRules
-            Deletes an egress firewall rule on the guest network.
-
-
- listEgressFirewallRules
- Lists the egress firewall rules configured for a guest network.
-
-
- resetSSHKeyForVirtualMachine
-            Resets the SSH key for a virtual machine.
-
-
- addBaremetalHost
- Adds a new host. Technically, this API command was present in v3.0.6, but its
- functionality was disabled.
-
-
- addBaremetalDhcp
- Adds a DHCP server for bare metal hosts.
-
-
- addBaremetalPxePingServer
- Adds a PXE PING server for bare metal hosts.
-
-
- addBaremetalPxeKickStartServer (Adds a PXE server for bare metal hosts)
-
-
- listBaremetalDhcp
- Shows the DHCP servers currently defined for bare metal hosts.
-
-
- listBaremetalPxePingServer
- Shows the PXE PING servers currently defined for bare metal hosts.
-
-
- addNicToVirtualMachine
- Adds a new NIC to the specified VM on a selected network.
-
-
- removeNicFromVirtualMachine
- Removes the specified NIC from a selected VM.
-
-
- updateDefaultNicForVirtualMachine
- Updates the specified NIC to be the default one for a selected VM.
-
-
- addRegion
- Registers a Region into another Region.
-
-
- updateRegion
- Updates Region details: ID, Name, Endpoint, User API Key, and User Secret Key.
-
-
- removeRegion
- Removes a Region from current Region.
-
-
- listRegions
- Get all the Regions. They can be filtered by using the ID or Name.
-
-
- getUser
- This API can only be used by the Admin. Get user account details by using the API
- Key.
-
-
- getApiLimit
-            Shows the number of remaining API calls for the invoking user in the current window.
-
-
- resetApiLimit
- For root admin, if account ID parameter is passed, it will reset count for that
- particular account, otherwise it will reset all counters.
-
-
- lockAccount
- Locks an account.
-
-
- lockUser
- Locks a user account.
-
-
- scaleVirtualMachine
- Scales the virtual machine to a new service offering.
-
-
- migrateVirtualMachineWithVolume
-            Attempts to migrate a VM with its volumes to a different host.
-
-
- dedicatePublicIpRange
- Dedicates a Public IP range to an account.
-
-
- releasePublicIpRange
- Releases a Public IP range back to the system pool.
-
-
- dedicateGuestVlanRange
- Dedicates a guest VLAN range to an account.
-
-
- releaseDedicatedGuestVlanRange
- Releases a dedicated guest VLAN range to the system.
-
-
- listDedicatedGuestVlanRanges
- Lists dedicated guest VLAN ranges.
-
-
- updatePortForwardingRule
- Updates a port forwarding rule. Only the private port and the VM can be updated.
-
-
- scaleSystemVm
- Scales the service offering for a systemVM, console proxy, or secondary storage.
-
-
- listDeploymentPlanners
- Lists all the deployment planners available.
-
-
- addS3
-            Adds an Amazon Simple Storage Service instance.
-
-
- listS3s
- Lists all the Amazon Simple Storage Service instances.
-
-
- findHostsForMigration
- Finds hosts suitable for migrating a VM to.
-
-
- releaseHostReservation
- Releases host reservation.
-
-
- resizeVolume
- Resizes a volume.
-
-
- updateVolume
- Updates the volume.
-
-
- listStorageProviders
- Lists storage providers.
-
-
- findStoragePoolsForMigration
- Lists storage pools available for migrating a volume.
-
-
- createEgressFirewallRule
-            Creates an egress firewall rule for a given network.
-
-
- deleteEgressFirewallRule
- Deletes an egress firewall rule.
-
-
- listEgressFirewallRules
-            Lists all egress firewall rules for a network.
-
-
- updateNetworkACLItem
- Updates ACL item with specified ID.
-
-
- createNetworkACLList
- Creates a Network ACL for the given VPC.
-
-
- deleteNetworkACLList
- Deletes a Network ACL.
-
-
- replaceNetworkACLList
- Replaces ACL associated with a Network or private gateway.
-
-
- listNetworkACLLists
- Lists all network ACLs.
-
-
- addResourceDetail
- Adds detail for the Resource.
-
-
- removeResourceDetail
- Removes details of the resource.
-
-
- listResourceDetails
- Lists resource details.
-
-
- addNiciraNvpDevice
- Adds a Nicira NVP device.
-
-
- deleteNiciraNvpDevice
- Deletes a Nicira NVP device.
-
-
- listNiciraNvpDevices
- Lists Nicira NVP devices.
-
-
- listNiciraNvpDeviceNetworks
-            Lists networks that are using a Nicira NVP device.
-
-
- addBigSwitchVnsDevice
- Adds a BigSwitch VNS device.
-
-
- deleteBigSwitchVnsDevice
- Deletes a BigSwitch VNS device.
-
-
- listBigSwitchVnsDevices
- Lists BigSwitch VNS devices.
-
-
- configureSimulator
- Configures a simulator.
-
-
- listApis
- Lists all the available APIs on the server, provided by the API Discovery plugin.
-
-
- getApiLimit
- Gets the API limit count for the caller.
-
-
- resetApiLimit
- Resets the API count.
-
-
- assignToGlobalLoadBalancerRule
-            Assigns a load balancer rule or list of load balancer rules to a global load balancer
-            rule.
-
-
- removeFromGlobalLoadBalancerRule
- Removes a load balancer rule association with global load balancer rule.
-
-
- listVMSnapshot
- Lists virtual machine snapshot by conditions.
-
-
- createLoadBalancer
- Creates a load balancer.
-
-
- listLoadBalancers
- Lists load balancers.
-
-
- deleteLoadBalancer
- Deletes a load balancer.
-
-
- configureInternalLoadBalancerElement
- Configures an Internal Load Balancer element.
-
-
- createInternalLoadBalancerElement
- Creates an Internal Load Balancer element.
-
-
- listInternalLoadBalancerElements
- Lists all available Internal Load Balancer elements.
-
-
- createAffinityGroup
- Creates an affinity or anti-affinity group.
-
-
- deleteAffinityGroup
- Deletes an affinity group.
-
-
- listAffinityGroups
- Lists all the affinity groups.
-
-
- updateVMAffinityGroup
- Updates the affinity or anti-affinity group associations of a VM. The VM has to be
- stopped and restarted for the new properties to take effect.
-
-
- listAffinityGroupTypes
- Lists affinity group types available.
-
-
- stopInternalLoadBalancerVM
- Stops an Internal LB VM.
-
-
- startInternalLoadBalancerVM
- Starts an existing Internal LB VM.
-
-
- listInternalLoadBalancerVMs
- Lists internal LB VMs.
-
-
- listNetworkIsolationMethods
- Lists supported methods of network isolation.
-
-
- dedicateZone
- Dedicates a zone.
-
-
- dedicatePod
- Dedicates a pod.
-
-
- dedicateCluster
- Dedicates an existing cluster.
-
-
- dedicateHost
- Dedicates a host.
-
-
- releaseDedicatedZone
- Releases dedication of zone.
-
-
- releaseDedicatedPod
- Releases dedication for the pod.
-
-
- releaseDedicatedCluster
- Releases dedication for cluster.
-
-
- releaseDedicatedHost
- Releases dedication for host.
-
-
- listDedicatedZones
- Lists dedicated zones.
-
-
- listDedicatedPods
- Lists dedicated pods.
-
-
- listDedicatedClusters
- Lists dedicated clusters.
-
-
- listDedicatedHosts
- Lists dedicated hosts.
-
-
-
diff --git a/docs/en-US/added-API-commands.xml b/docs/en-US/added-API-commands.xml
deleted file mode 100644
index 99635de4697..00000000000
--- a/docs/en-US/added-API-commands.xml
+++ /dev/null
@@ -1,195 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Added API commands in 3.0
-
- Added in 3.0.2
-
-
- changeServiceForSystemVm
- Changes the service offering for a system VM (console proxy or secondary storage). The
- system VM must be in a "Stopped" state for this command to take effect.
-
-
-
-
- Added in 3.0.1
-
-
- changeServiceForSystemVm
- Changes the service offering for a system VM (console proxy or secondary storage). The
- system VM must be in a "Stopped" state for this command to take effect.
-
-
-
-
- Added in 3.0.0
-
-
-
-
-
-
-
- assignVirtualMachine (Move a user VM to another user under same
- domain.)
- restoreVirtualMachine (Restore a VM to original template or specific
- snapshot)
- createLBStickinessPolicy (Creates a Load Balancer stickiness policy
- )
-
-
- deleteLBStickinessPolicy (Deletes a LB stickiness policy.)
- listLBStickinessPolicies (Lists LBStickiness policies.)
- ldapConfig (Configure the LDAP context for this site.)
-
-
- addSwift (Adds Swift.)
- listSwifts (List Swift.)
- migrateVolume (Migrate volume)
-
-
- updateStoragePool (Updates a storage pool.)
- authorizeSecurityGroupEgress (Authorizes a particular egress rule for this
- security group)
- revokeSecurityGroupEgress (Deletes a particular egress rule from this
- security group)
-
-
- createNetworkOffering (Creates a network offering.)
- deleteNetworkOffering (Deletes a network offering.)
- createProject (Creates a project)
-
-
- deleteProject (Deletes a project)
- updateProject (Updates a project)
- activateProject (Activates a project)
-
-
- suspendProject (Suspends a project)
- listProjects (Lists projects and provides detailed information for listed
- projects)
- addAccountToProject (Adds account to a project)
-
-
- deleteAccountFromProject (Deletes account from the project)
- listProjectAccounts (Lists project's accounts)
- listProjectInvitations (Lists an account's invitations to join
- projects)
-
-
- updateProjectInvitation (Accepts or declines project
- invitation)
- deleteProjectInvitation (Deletes a project invitation)
- updateHypervisorCapabilities (Updates a hypervisor
- capabilities.)
-
-
- listHypervisorCapabilities (Lists all hypervisor
- capabilities.)
- createPhysicalNetwork (Creates a physical network)
- deletePhysicalNetwork (Deletes a Physical Network.)
-
-
- listPhysicalNetworks (Lists physical networks)
- updatePhysicalNetwork (Updates a physical network)
- listSupportedNetworkServices (Lists all network services provided by
- &PRODUCT; or for the given Provider.)
-
-
- addNetworkServiceProvider (Adds a network serviceProvider to a physical
- network)
- deleteNetworkServiceProvider (Deletes a Network Service
- Provider.)
- listNetworkServiceProviders (Lists network serviceproviders for a given
- physical network.)
-
-
- updateNetworkServiceProvider (Updates a network serviceProvider of a physical
- network)
- addTrafficType (Adds traffic type to a physical network)
- deleteTrafficType (Deletes traffic type of a physical network)
-
-
- listTrafficTypes (Lists traffic types of a given physical
- network.)
- updateTrafficType (Updates traffic type of a physical network)
-                    listTrafficTypeImplementors (Lists implementors of a network
-                    traffic type or implementors of all network traffic types)
-
-
- createStorageNetworkIpRange (Creates a Storage network IP
- range.)
- deleteStorageNetworkIpRange (Deletes a storage network IP
- Range.)
- listStorageNetworkIpRange (List a storage network IP range.)
-
-
- updateStorageNetworkIpRange (Update a Storage network IP range, only allowed
- when no IPs in this range have been allocated.)
- listUsageTypes (List Usage Types)
-                    addF5LoadBalancer (Adds an F5 BigIP load balancer device)
-
-
-                    configureF5LoadBalancer (configures an F5 load balancer device)
-                    deleteF5LoadBalancer (deletes an F5 load balancer device)
- listF5LoadBalancers (lists F5 load balancer devices)
-
-
-                    listF5LoadBalancerNetworks (lists networks that are using an F5 load balancer
- device)
-                    addSrxFirewall (Adds an SRX firewall device)
-                    deleteSrxFirewall (deletes an SRX firewall device)
-
-
- listSrxFirewalls (lists SRX firewall devices in a physical
- network)
-                    listSrxFirewallNetworks (lists networks that are using an SRX firewall
- device)
- addNetscalerLoadBalancer (Adds a netscaler load balancer
- device)
-
-
-                    deleteNetscalerLoadBalancer (deletes a netscaler load balancer
- device)
- configureNetscalerLoadBalancer (configures a netscaler load balancer
- device)
- listNetscalerLoadBalancers (lists netscaler load balancer
- devices)
-
-
-                    listNetscalerLoadBalancerNetworks (lists networks that are using a netscaler
- load balancer device)
- createVirtualRouterElement (Create a virtual router element.)
- configureVirtualRouterElement (Configures a virtual router
- element.)
-
-
- listVirtualRouterElements (Lists all available virtual router
- elements.)
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/added-error-codes.xml b/docs/en-US/added-error-codes.xml
deleted file mode 100644
index ae7389122f9..00000000000
--- a/docs/en-US/added-error-codes.xml
+++ /dev/null
@@ -1,138 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Added &PRODUCT; Error Codes
- You can now find the &PRODUCT;-specific error code in the exception response for each type of exception. The following list of error codes is added to the new class named CSExceptionErrorCode.
-
-
-
-
-
-
-
- 4250 : "com.cloud.utils.exception.CloudRuntimeException"
- 4255 : "com.cloud.utils.exception.ExceptionUtil"
- 4260 : "com.cloud.utils.exception.ExecutionException"
-
-
- 4265 : "com.cloud.utils.exception.HypervisorVersionChangedException"
- 4270 : "com.cloud.utils.exception.RuntimeCloudException"
- 4275 : "com.cloud.exception.CloudException"
-
-
- 4280 : "com.cloud.exception.AccountLimitException"
- 4285 : "com.cloud.exception.AgentUnavailableException"
- 4290 : "com.cloud.exception.CloudAuthenticationException"
-
-
- 4295 : "com.cloud.exception.CloudExecutionException"
- 4300 : "com.cloud.exception.ConcurrentOperationException"
- 4305 : "com.cloud.exception.ConflictingNetworkSettingsException"
-
-
- 4310 : "com.cloud.exception.DiscoveredWithErrorException"
- 4315 : "com.cloud.exception.HAStateException"
- 4320 : "com.cloud.exception.InsufficientAddressCapacityException"
-
-
- 4325 : "com.cloud.exception.InsufficientCapacityException"
- 4330 : "com.cloud.exception.InsufficientNetworkCapacityException"
- 4335 : "com.cloud.exception.InsufficientServerCapacityException"
-
-
- 4340 : "com.cloud.exception.InsufficientStorageCapacityException"
- 4345 : "com.cloud.exception.InternalErrorException"
- 4350 : "com.cloud.exception.InvalidParameterValueException"
-
-
- 4355 : "com.cloud.exception.ManagementServerException"
- 4360 : "com.cloud.exception.NetworkRuleConflictException"
- 4365 : "com.cloud.exception.PermissionDeniedException"
-
-
- 4370 : "com.cloud.exception.ResourceAllocationException"
- 4375 : "com.cloud.exception.ResourceInUseException"
- 4380 : "com.cloud.exception.ResourceUnavailableException"
-
-
- 4385 : "com.cloud.exception.StorageUnavailableException"
- 4390 : "com.cloud.exception.UnsupportedServiceException"
- 4395 : "com.cloud.exception.VirtualMachineMigrationException"
-
-
- 4400 : "com.cloud.exception.AccountLimitException"
- 4405 : "com.cloud.exception.AgentUnavailableException"
- 4410 : "com.cloud.exception.CloudAuthenticationException"
-
-
- 4415 : "com.cloud.exception.CloudException"
- 4420 : "com.cloud.exception.CloudExecutionException"
- 4425 : "com.cloud.exception.ConcurrentOperationException"
-
-
- 4430 : "com.cloud.exception.ConflictingNetworkSettingsException"
- 4435 : "com.cloud.exception.ConnectionException"
- 4440 : "com.cloud.exception.DiscoveredWithErrorException"
-
-
- 4445 : "com.cloud.exception.DiscoveryException"
- 4450 : "com.cloud.exception.HAStateException"
- 4455 : "com.cloud.exception.InsufficientAddressCapacityException"
-
-
- 4460 : "com.cloud.exception.InsufficientCapacityException"
- 4465 : "com.cloud.exception.InsufficientNetworkCapacityException"
- 4470 : "com.cloud.exception.InsufficientServerCapacityException"
-
-
- 4475 : "com.cloud.exception.InsufficientStorageCapacityException"
- 4480 : "com.cloud.exception.InsufficientVirtualNetworkCapcityException"
- 4485 : "com.cloud.exception.InternalErrorException"
-
-
- 4490 : "com.cloud.exception.InvalidParameterValueException"
- 4495 : "com.cloud.exception.ManagementServerException"
- 4500 : "com.cloud.exception.NetworkRuleConflictException"
-
-
- 4505 : "com.cloud.exception.PermissionDeniedException"
- 4510 : "com.cloud.exception.ResourceAllocationException"
- 4515 : "com.cloud.exception.ResourceInUseException"
-
-
- 4520 : "com.cloud.exception.ResourceUnavailableException"
- 4525 : "com.cloud.exception.StorageUnavailableException"
- 4530 : "com.cloud.exception.UnsupportedServiceException"
-
-
- 4535 : "com.cloud.exception.VirtualMachineMigrationException"
- 9999 : "org.apache.cloudstack.api.ServerApiException"
-
-
-
-
-
-
-
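When handling API errors programmatically, the numeric code in the exception response can be mapped back to its exception class. A small sketch using a few of the codes listed above; the shape of the parsed response dict is an assumption:

```python
# A subset of the CSExceptionErrorCode mapping from the list above.
CS_ERROR_CODES = {
    4350: "com.cloud.exception.InvalidParameterValueException",
    4365: "com.cloud.exception.PermissionDeniedException",
    4380: "com.cloud.exception.ResourceUnavailableException",
    9999: "org.apache.cloudstack.api.ServerApiException",
}

def describe_error(response):
    """Resolve the error code from a (hypothetical) parsed JSON
    error response to its exception class name."""
    code = response.get("cserrorcode")
    return CS_ERROR_CODES.get(code, f"unknown error code {code}")

msg = describe_error({"cserrorcode": 4350,
                      "errortext": "Invalid parameter value"})
```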
diff --git a/docs/en-US/adding-IP-addresses-for-the-public-network.xml b/docs/en-US/adding-IP-addresses-for-the-public-network.xml
deleted file mode 100644
index abf4d0233cc..00000000000
--- a/docs/en-US/adding-IP-addresses-for-the-public-network.xml
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Adding IP Addresses for the Public Network
- These instructions assume you have already logged in to the &PRODUCT; UI.
-
- In the left navigation, choose Infrastructure. In Zones, click View More, then click the desired zone.
- Click the Network tab.
- In the Public node of the diagram, click Configure.
- Click the IP Ranges tab.
- Provide the following information:
-
- Gateway. The gateway in use for these IP addresses
- Netmask. The netmask associated with this IP range
- VLAN. The VLAN that will be used for public traffic
- Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest networks.
-
-
- Click Add.
-
-
-
-
diff --git a/docs/en-US/additional-installation-options.xml b/docs/en-US/additional-installation-options.xml
deleted file mode 100644
index 622ef03d07e..00000000000
--- a/docs/en-US/additional-installation-options.xml
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Additional Installation Options
- The next few sections describe &PRODUCT; features above and beyond the basic deployment options.
-
-
-
-
diff --git a/docs/en-US/admin-alerts.xml b/docs/en-US/admin-alerts.xml
deleted file mode 100644
index e98f79de06f..00000000000
--- a/docs/en-US/admin-alerts.xml
+++ /dev/null
@@ -1,128 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Administrator Alerts
- The system provides alerts and events to help with the management of the cloud. Alerts are notices to an administrator, generally delivered by e-mail, notifying the administrator that an error has occurred in the cloud. Alert behavior is configurable.
- Events track all of the user and administrator actions in the cloud. For example, every guest VM start creates an associated event. Events are stored in the Management Server’s database.
- Emails will be sent to administrators under the following circumstances:
-
- The Management Server cluster runs low on CPU, memory, or storage resources
- The Management Server loses heartbeat from a Host for more than 3 minutes
- The Host cluster runs low on CPU, memory, or storage resources
-
-
-
- Sending Alerts to External SNMP and Syslog Managers
- In addition to showing administrator alerts on the Dashboard in the &PRODUCT; UI and
- sending them in email, &PRODUCT; can also send the same alerts to external SNMP or
- Syslog management software. This is useful if you prefer to use an SNMP or Syslog
- manager to monitor your cloud.
- The alerts which can be sent are listed in . You can also
- display the most up-to-date list by calling the API command listAlerts.
-
- SNMP Alert Details
- The supported protocol is SNMP version 2.
- Each SNMP trap contains the following information: message, podId, dataCenterId, clusterId, and generationTime.
-
-
- Syslog Alert Details
- &PRODUCT; generates a syslog message for every alert. Each syslog message includes
- the fields alertType, message, podId, dataCenterId, and clusterId, in the following
- format. If any field does not have a valid value, it will not be included.
- Date severity_level Management_Server_IP_Address/Name alertType:: value dataCenterId:: value podId:: value clusterId:: value message:: value
- For example:
- Mar 4 10:13:47 WARN localhost alertType:: managementNode message:: Management server node 127.0.0.1 is up
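The fixed "key:: value" layout shown above is straightforward to post-process. The following is a small sketch in plain Python (the `parse_alert` helper is invented for illustration and is not part of CloudStack):

```python
import re

# Field names used in CloudStack syslog alert lines; fields without a
# valid value are simply absent from the message.
FIELDS = ("alertType", "dataCenterId", "podId", "clusterId", "message")

def parse_alert(line):
    """Extract the 'key:: value' pairs from one syslog alert line."""
    names = "|".join(FIELDS)
    # Each value runs until the next known field name or end of line.
    pattern = rf"({names}):: (.*?)(?=(?:{names}):: |$)"
    return {key: value.strip() for key, value in re.findall(pattern, line)}

alert = parse_alert(
    "Mar 4 10:13:47 WARN localhost alertType:: managementNode "
    "message:: Management server node 127.0.0.1 is up")
print(alert["alertType"])  # managementNode
```

A parser like this would typically feed the extracted pairs into whatever monitoring pipeline consumes the syslog stream.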
-
-
- Configuring SNMP and Syslog Managers
- To configure one or more SNMP managers or Syslog managers to receive alerts from
- &PRODUCT;:
-
- For an SNMP manager, install the &PRODUCT; MIB file on your SNMP manager system.
- This maps the SNMP OIDs to trap types that can be more easily read by users.
- The file must be publicly available.
- For more information on how to install this file, consult the documentation provided with the SNMP manager.
-
- Edit the file /etc/cloudstack/management/log4j-cloud.xml.
- # vi /etc/cloudstack/management/log4j-cloud.xml
-
-
- Add an entry using the syntax shown below. Follow the appropriate example
- depending on whether you are adding an SNMP manager or a Syslog manager. To specify
- multiple external managers, separate the IP addresses and other configuration values
- with commas (,).
-
- The recommended maximum number of SNMP or Syslog managers is 20 for
- each.
-
-
- The following example shows how to configure two SNMP managers at IP addresses
- 10.1.1.1 and 10.1.1.2. Substitute your own IP addresses, ports, and communities. Do
- not change the other values (name, threshold, class, and layout values).
- <appender name="SNMP" class="org.apache.cloudstack.alert.snmp.SnmpTrapAppender">
- <param name="Threshold" value="WARN"/> <!-- Do not edit. The alert feature assumes WARN. -->
- <param name="SnmpManagerIpAddresses" value="10.1.1.1,10.1.1.2"/>
- <param name="SnmpManagerPorts" value="162,162"/>
- <param name="SnmpManagerCommunities" value="public,public"/>
- <layout class="org.apache.cloudstack.alert.snmp.SnmpEnhancedPatternLayout"> <!-- Do not edit -->
- <param name="PairDelimeter" value="//"/>
- <param name="KeyValueDelimeter" value="::"/>
- </layout>
-</appender>
- The following example shows how to configure two Syslog managers at IP
- addresses 10.1.1.1 and 10.1.1.2. Substitute your own IP addresses. You can
- set Facility to any syslog-defined value, such as LOCAL0 - LOCAL7. Do not
- change the other values.
- <appender name="ALERTSYSLOG">
- <param name="Threshold" value="WARN"/>
- <param name="SyslogHosts" value="10.1.1.1,10.1.1.2"/>
- <param name="Facility" value="LOCAL6"/>
- <layout>
- <param name="ConversionPattern" value=""/>
- </layout>
-</appender>
-
-
- If your cloud has multiple Management Server nodes, repeat these steps to edit
- log4j-cloud.xml on every instance.
-
-
- If you have made these changes while the Management Server is running, wait a
- few minutes for the change to take effect.
-
-
- Troubleshooting: If no alerts appear at the
- configured SNMP or Syslog manager after a reasonable amount of time, it is likely that
- there is an error in the syntax of the <appender> entry in log4j-cloud.xml. Check
- to be sure that the format and settings are correct.
-
-
- Deleting an SNMP or Syslog Manager
- To remove an external SNMP manager or Syslog manager so that it no longer receives
- alerts from &PRODUCT;, remove the corresponding entry from the file
- /etc/cloudstack/management/log4j-cloud.xml.
-
-
-
diff --git a/docs/en-US/admin-guide.xml b/docs/en-US/admin-guide.xml
deleted file mode 100644
index f1b0327e9d1..00000000000
--- a/docs/en-US/admin-guide.xml
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Administrator Guide
-
-
-
diff --git a/docs/en-US/adv-zone-topology-req.xml b/docs/en-US/adv-zone-topology-req.xml
deleted file mode 100644
index 3764e926ebe..00000000000
--- a/docs/en-US/adv-zone-topology-req.xml
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Advanced Zone Topology Requirements
- With Advanced Networking, separate subnets must be used for private and public
- networks.
-
diff --git a/docs/en-US/advanced-zone-configuration.xml b/docs/en-US/advanced-zone-configuration.xml
deleted file mode 100644
index 451b5454eb2..00000000000
--- a/docs/en-US/advanced-zone-configuration.xml
+++ /dev/null
@@ -1,385 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Advanced Zone Configuration
-
-
- After you select Advanced in the Add Zone wizard and click Next, you will be asked to
- enter the following details. Then click Next.
-
-
- Name. A name for the zone.
-
-
- DNS 1 and 2. These are DNS servers for use by guest
- VMs in the zone. These DNS servers will be accessed via the public network you will add
- later. The public IP addresses for the zone must have a route to the DNS server named
- here.
-
-
- Internal DNS 1 and Internal DNS 2. These are DNS
- servers for use by system VMs in the zone(these are VMs used by &PRODUCT; itself, such
- as virtual routers, console proxies,and Secondary Storage VMs.) These DNS servers will
- be accessed via the management traffic network interface of the System VMs. The private
- IP address you provide for the pods must have a route to the internal DNS server named
- here.
-
-
- Network Domain. (Optional) If you want to assign a
- special domain name to the guest VM network, specify the DNS suffix.
-
-
- Guest CIDR. This is the CIDR that describes the IP
- addresses in use in the guest virtual networks in this zone. For example, 10.1.1.0/24.
- As a matter of good practice you should set different CIDRs for different zones. This
- will make it easier to set up VPNs between networks in different zones.
-
-
- Hypervisor. (Introduced in version 3.0.1) Choose
- the hypervisor for the first cluster in the zone. You can add clusters with different
- hypervisors later, after you finish adding the zone.
-
-
- Public. A public zone is available to all users. A
- zone that is not public will be assigned to a particular domain. Only users in that
- domain will be allowed to create guest VMs in this zone.
-
-
-
-
- Choose which traffic types will be carried by the physical network.
- The traffic types are management, public, guest, and storage traffic. For more
- information about the types, roll over the icons to display their tool tips, or see . This screen starts out with one network
- already configured. If you have multiple physical networks, you need to add more. Drag and
- drop traffic types onto a greyed-out network and it will become active. You can move the
- traffic icons from one network to another; for example, if the default traffic types shown
- for Network 1 do not match your actual setup, you can move them down. You can also change
- the network names if desired.
-
-
- (Introduced in version 3.0.1) Assign a network traffic label to each traffic type on
- each physical network. These labels must match the labels you have already defined on the
- hypervisor host. To assign each label, click the Edit button under the traffic type icon
- within each physical network. A popup dialog appears where you can type the label, then
- click OK.
- These traffic labels will be defined only for the hypervisor selected for the first
- cluster. For all other hypervisors, the labels can be configured after the zone is
- created.
- (VMware only) If you have enabled Nexus dvSwitch in the environment, you must specify
- the corresponding Ethernet port profile names as network traffic label for each traffic type
- on the physical network. For more information on Nexus dvSwitch, see Configuring a vSphere
- Cluster with Nexus 1000v Virtual Switch in the Installation Guide. If you have enabled
- VMware dvSwitch in the environment, you must specify the corresponding Switch name as
- network traffic label for each traffic type on the physical network. For more information,
- see Configuring a VMware Datacenter with VMware Distributed Virtual Switch in the
- Installation Guide.
-
-
- Click Next.
-
-
- Configure the IP range for public Internet traffic. Enter the following details, then
- click Add. If desired, you can repeat this step to add more public Internet IP ranges. When
- done, click Next.
-
-
- Gateway. The gateway in use for these IP
- addresses.
-
-
- Netmask. The netmask associated with this IP
- range.
-
-
- VLAN. The VLAN that will be used for public
- traffic.
-
-
- Start IP/End IP. A range of IP addresses that are
- assumed to be accessible from the Internet and will be allocated for access to guest
- networks.
-
-
-
-
- In a new zone, &PRODUCT; adds the first pod for you. You can always add more pods later.
- For an overview of what a pod is, see .
- To configure the first pod, enter the following, then click Next:
-
-
- Pod Name. A name for the pod.
-
-
- Reserved system gateway. The gateway for the hosts
- in that pod.
-
-
- Reserved system netmask. The network prefix that
- defines the pod's subnet. Use CIDR notation.
-
-
- Start/End Reserved System IP. The IP range in the
- management network that &PRODUCT; uses to manage various system VMs, such as Secondary
- Storage VMs, Console Proxy VMs, and DHCP. For more information, see .
-
-
-
-
- Specify a range of VLAN IDs to carry guest traffic for each physical network (see VLAN
- Allocation Example), then click Next.
-
-
- In a new pod, &PRODUCT; adds the first cluster for you. You can always add more clusters
- later. For an overview of what a cluster is, see .
- To configure the first cluster, enter the following, then click Next:
-
-
- Hypervisor. (Version 3.0.0 only; in 3.0.1, this
- field is read only) Choose the type of hypervisor software that all hosts in this
- cluster will run. If you choose VMware, additional fields appear so you can give
- information about a vSphere cluster. For vSphere servers, we recommend creating the
- cluster of hosts in vCenter and then adding the entire cluster to &PRODUCT;. See Add
- Cluster: vSphere.
-
-
- Cluster name. Enter a name for the cluster. This
- can be text of your choosing and is not used by &PRODUCT;.
-
-
-
-
- In a new cluster, &PRODUCT; adds the first host for you. You can always add more hosts
- later. For an overview of what a host is, see .
-
- When you deploy &PRODUCT;, the hypervisor host must not have any VMs already
- running.
-
- Before you can configure the host, you need to install the hypervisor software on the
- host. You will need to know which version of the hypervisor software is supported by
- &PRODUCT; and what additional configuration is required to ensure the host will work with
- &PRODUCT;. To find these installation details, see:
-
-
- Citrix XenServer Installation for &PRODUCT;
-
-
- VMware vSphere Installation and Configuration
-
-
- KVM Installation and Configuration
-
-
-
- To configure the first host, enter the following, then click Next:
-
-
- Host Name. The DNS name or IP address of the
- host.
-
-
- Username. Usually root.
-
-
- Password. This is the password for the user named
- above (from your XenServer or KVM install).
-
-
- Host Tags. (Optional) Any labels that you use to
- categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag
- (set in the ha.tag global configuration parameter) if you want this host to be used only
- for VMs with the "high availability" feature enabled. For more information, see
- HA-Enabled Virtual Machines as well as HA for Hosts, both in the Administration
- Guide.
-
-
-
-
- In a new cluster, &PRODUCT; adds the first primary storage server for you. You can
- always add more servers later. For an overview of what primary storage is, see .
- To configure the first primary storage server, enter the following, then click
- Next:
-
-
- Name. The name of the storage device.
-
-
- Protocol. For XenServer, choose either NFS, iSCSI,
- or PreSetup. For KVM, choose NFS, SharedMountPoint, CLVM, or RBD. For vSphere, choose
- either VMFS (iSCSI or FiberChannel) or NFS. The remaining fields in the screen vary
- depending on what you choose here.
-
-
-
-
-
-
- NFS
-
-
-
- Server. The IP address or DNS name of
- the storage device.
-
-
- Path. The exported path from the
- server.
-
-
- Tags (optional). The comma-separated
- list of tags for this storage device. It should be an equivalent set or
- superset of the tags on your disk offerings.
-
-
- The tag sets on primary storage across clusters in a Zone must be
- identical. For example, if cluster A provides primary storage that has tags T1
- and T2, all other clusters in the Zone must also provide primary storage that
- has tags T1 and T2.
-
-
-
- iSCSI
-
-
-
- Server. The IP address or DNS name of
- the storage device.
-
-
- Target IQN. The IQN of the target.
- For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984.
-
-
- Lun. The LUN number. For example,
- 3.
-
-
- Tags (optional). The comma-separated
- list of tags for this storage device. It should be an equivalent set or
- superset of the tags on your disk offerings.
-
-
- The tag sets on primary storage across clusters in a Zone must be
- identical. For example, if cluster A provides primary storage that has tags T1
- and T2, all other clusters in the Zone must also provide primary storage that
- has tags T1 and T2.
-
-
-
- preSetup
-
-
-
- Server. The IP address or DNS name of
- the storage device.
-
-
- SR Name-Label. Enter the name-label
- of the SR that has been set up outside &PRODUCT;.
-
-
- Tags (optional). The comma-separated
- list of tags for this storage device. It should be an equivalent set or
- superset of the tags on your disk offerings.
-
-
- The tag sets on primary storage across clusters in a Zone must be
- identical. For example, if cluster A provides primary storage that has tags T1
- and T2, all other clusters in the Zone must also provide primary storage that
- has tags T1 and T2.
-
-
-
- SharedMountPoint
-
-
-
- Path. The path on each host that is
- where this primary storage is mounted. For example, "/mnt/primary".
-
-
- Tags (optional). The comma-separated
- list of tags for this storage device. It should be an equivalent set or
- superset of the tags on your disk offerings.
-
-
- The tag sets on primary storage across clusters in a Zone must be
- identical. For example, if cluster A provides primary storage that has tags T1
- and T2, all other clusters in the Zone must also provide primary storage that
- has tags T1 and T2.
-
-
-
- VMFS
-
-
-
- Server. The IP address or DNS name of
- the vCenter server.
-
-
- Path. A combination of the datacenter
- name and the datastore name. The format is "/" datacenter name "/"
- datastore name. For example, "/cloud.dc.VM/cluster1datastore".
-
-
- Tags (optional). The comma-separated
- list of tags for this storage device. It should be an equivalent set or
- superset of the tags on your disk offerings.
-
-
- The tag sets on primary storage across clusters in a Zone must be
- identical. For example, if cluster A provides primary storage that has tags T1
- and T2, all other clusters in the Zone must also provide primary storage that
- has tags T1 and T2.
-
-
-
-
-
-
-
-
-
- In a new zone, &PRODUCT; adds the first secondary storage server for you. For an
- overview of what secondary storage is, see .
- Before you can fill out this screen, you need to prepare the secondary storage by
- setting up NFS shares and installing the latest &PRODUCT; System VM template. See Adding
- Secondary Storage:
-
-
- NFS Server. The IP address of the server or fully
- qualified domain name of the server.
-
-
- Path. The exported path from the server.
-
-
-
-
- Click Launch.
-
-
-
diff --git a/docs/en-US/advanced-zone-guest-ip-addresses.xml b/docs/en-US/advanced-zone-guest-ip-addresses.xml
deleted file mode 100644
index 66bc0826683..00000000000
--- a/docs/en-US/advanced-zone-guest-ip-addresses.xml
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Advanced Zone Guest IP Addresses
- When advanced networking is used, the administrator can create additional networks for use
- by the guests. These networks can span the zone and be available to all accounts, or they can be
- scoped to a single account, in which case only the named account may create guests that attach
- to these networks. The networks are defined by a VLAN ID, IP range, and gateway. The
- administrator may provision thousands of these networks if desired. Additionally, the
- administrator can reserve a part of the IP address space for non-&PRODUCT; VMs and
- servers.
-
diff --git a/docs/en-US/advanced-zone-network-traffic-types.xml b/docs/en-US/advanced-zone-network-traffic-types.xml
deleted file mode 100644
index 4d1f46592e0..00000000000
--- a/docs/en-US/advanced-zone-network-traffic-types.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Advanced Zone Network Traffic Types
- When advanced networking is used, there can be multiple physical networks in the zone. Each physical network can carry one or more traffic types, and you need to let &PRODUCT; know which type of network traffic you want each network to carry. The traffic types in an advanced zone are:
-
- Guest. When end users run VMs, they generate guest traffic. The guest VMs communicate with each other over a network that can be referred to as the guest network. This network can be isolated or shared. In an isolated guest network, the administrator needs to reserve VLAN ranges to provide isolation for each &PRODUCT; account’s network (potentially a large number of VLANs). In a shared guest network, all guest VMs share a single network.
- Management. When &PRODUCT;’s internal resources communicate with each other, they generate management traffic. This includes communication between hosts, system VMs (VMs used by &PRODUCT; to perform various tasks in the cloud), and any other component that communicates directly with the &PRODUCT; Management Server. You must configure the IP range for the system VMs to use.
- Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the &PRODUCT; UI to acquire these IPs to implement NAT between their guest network and the public network, as described in “Acquiring a New IP Address” in the Administration Guide.
- Storage. While labeled "storage," this is specifically about secondary storage, and doesn't affect traffic for primary storage. This includes traffic such as VM templates and snapshots, which is sent between the secondary storage VM and secondary storage servers. &PRODUCT; uses a separate Network Interface Controller (NIC) named storage NIC for storage network traffic. Use of a storage NIC that always operates on a high-bandwidth network allows fast template and snapshot copying. You must configure the IP range to use for the storage network.
-
- These traffic types can each be on a separate physical network, or they can be combined with certain restrictions. When you use the Add Zone wizard in the UI to create a new zone, you are guided into making only valid choices.
-
diff --git a/docs/en-US/advanced-zone-physical-network-configuration.xml b/docs/en-US/advanced-zone-physical-network-configuration.xml
deleted file mode 100644
index cfc6184c000..00000000000
--- a/docs/en-US/advanced-zone-physical-network-configuration.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Advanced Zone Physical Network Configuration
- Within a zone that uses advanced networking, you need to tell the Management Server how the
- physical network is set up to carry different kinds of traffic in isolation.
-
-
-
-
diff --git a/docs/en-US/advanced-zone-public-ip-addresses.xml b/docs/en-US/advanced-zone-public-ip-addresses.xml
deleted file mode 100644
index 82b71d1f23a..00000000000
--- a/docs/en-US/advanced-zone-public-ip-addresses.xml
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Advanced Zone Public IP Addresses
- When advanced networking is used, the administrator can create additional networks for use by the guests. These networks can span the zone and be available to all accounts, or they can be scoped to a single account, in which case only the named account may create guests that attach to these networks. The networks are defined by a VLAN ID, IP range, and gateway. The administrator may provision thousands of these networks if desired.
-
diff --git a/docs/en-US/alerts.xml b/docs/en-US/alerts.xml
deleted file mode 100644
index ebea4b808a4..00000000000
--- a/docs/en-US/alerts.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Alerts
- The following is the list of alert type numbers. The current alerts can be found by calling listAlerts.
- MEMORY = 0
- CPU = 1
- STORAGE = 2
- STORAGE_ALLOCATED = 3
- PUBLIC_IP = 4
- PRIVATE_IP = 5
- HOST = 6
- USERVM = 7
- DOMAIN_ROUTER = 8
- CONSOLE_PROXY = 9
- ROUTING = 10 // lost connection to default route (to the gateway)
- STORAGE_MISC = 11
- USAGE_SERVER = 12
- MANAGMENT_NODE = 13
- DOMAIN_ROUTER_MIGRATE = 14
- CONSOLE_PROXY_MIGRATE = 15
- USERVM_MIGRATE = 16
- VLAN = 17
- SSVM = 18
- USAGE_SERVER_RESULT = 19
- STORAGE_DELETE = 20;
- UPDATE_RESOURCE_COUNT = 21; //Generated when we fail to update the resource count
- USAGE_SANITY_RESULT = 22;
- DIRECT_ATTACHED_PUBLIC_IP = 23;
- LOCAL_STORAGE = 24;
- RESOURCE_LIMIT_EXCEEDED = 25; //Generated when the resource limit exceeds the limit. Currently used for recurring snapshots only
-
diff --git a/docs/en-US/allocators.xml b/docs/en-US/allocators.xml
deleted file mode 100644
index d8ce2b8612b..00000000000
--- a/docs/en-US/allocators.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Allocators
- &PRODUCT; enables administrators to write custom allocators that will choose the Host to place a new guest and the storage host from which to allocate guest virtual disk images.
-
diff --git a/docs/en-US/api-calls.xml b/docs/en-US/api-calls.xml
deleted file mode 100644
index af4073ac60b..00000000000
--- a/docs/en-US/api-calls.xml
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Calling the &PRODUCT; API
-
-
-
-
-
-
-
diff --git a/docs/en-US/api-overview.xml b/docs/en-US/api-overview.xml
deleted file mode 100644
index a541049e116..00000000000
--- a/docs/en-US/api-overview.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- &PRODUCT; API
- The &PRODUCT; API is a low level API that has been used to implement the &PRODUCT; web UIs.
- It is also a good basis for implementing other popular APIs such as EC2/S3 and emerging DMTF
- standards.
- Many &PRODUCT; API calls are asynchronous. These will return a Job ID immediately when
- called. This Job ID can be used to query the status of the job later. Also, status calls on
- impacted resources will provide some indication of their state.
- The API has a REST-like query basis and returns results in XML or JSON.
- See the
- Developer’s Guide and the API
- Reference.
-
-
-
-
diff --git a/docs/en-US/api-reference.xml b/docs/en-US/api-reference.xml
deleted file mode 100644
index 9a1acc145bd..00000000000
--- a/docs/en-US/api-reference.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
-
- API Reference Documentation
- You can find all the API reference documentation at the below site:
- http://cloudstack.apache.org/docs/api/
-
-
diff --git a/docs/en-US/api-throttling.xml b/docs/en-US/api-throttling.xml
deleted file mode 100644
index 908e22389a8..00000000000
--- a/docs/en-US/api-throttling.xml
+++ /dev/null
@@ -1,67 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Limiting the Rate of API Requests
- You can limit the rate at which API requests can be placed for each
- account. This is useful to avoid malicious attacks on the Management Server, prevent
- performance degradation, and provide fairness to all accounts.
- If the number of API calls exceeds the threshold, an error message is returned for any additional API calls.
- The caller will have to retry these API calls at another time.
-
- Configuring the API Request Rate
- To control the API request rate, use the following global configuration
- settings:
-
- api.throttling.enabled - Enable/Disable API throttling. By default, this setting is false, so
- API throttling is not enabled.
- api.throttling.interval (in seconds) - Time interval during which the number of API requests is to be counted.
- When the interval has passed, the API count is reset to 0.
- api.throttling.max - Maximum number of APIs that can be placed within the api.throttling.interval period.
- api.throttling.cachesize - Cache size for storing API counters.
- Use a value higher than the total number of accounts managed by the cloud.
- One cache entry is needed for each account, to store the running API total for that account.
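The way these settings interact can be sketched as a fixed-window counter per account. The class below is an illustration of the semantics only (the names and defaults are invented), not CloudStack's actual implementation:

```python
import time

# Illustrative sketch: a fixed-window counter mirroring the semantics of
# api.throttling.interval (window length) and api.throttling.max (cap).
class ApiRateLimiter:
    def __init__(self, interval_seconds=1, max_requests=25):
        self.interval = interval_seconds
        self.max_requests = max_requests
        self._counts = {}  # account -> (window_start, count)

    def allow(self, account, now=None):
        now = time.monotonic() if now is None else now
        start, count = self._counts.get(account, (now, 0))
        if now - start >= self.interval:
            start, count = now, 0          # interval elapsed: reset to 0
        if count >= self.max_requests:
            self._counts[account] = (start, count)
            return False                   # caller must retry later
        self._counts[account] = (start, count + 1)
        return True

limiter = ApiRateLimiter(interval_seconds=1, max_requests=3)
results = [limiter.allow("acct-1", now=100.0) for _ in range(4)]
print(results)  # [True, True, True, False]
```

Note that, as described under "Limitations on API Throttling" below in this file, each Management Server keeps its own counters, so a cluster of N servers may admit up to N times the configured maximum.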
-
-
-
-
- Limitations on API Throttling
- The following limitations exist in the current implementation of this feature.
- Even with these limitations, &PRODUCT; is still able to effectively use API throttling to
- avoid malicious attacks causing denial of service.
-
-
- In a deployment with multiple Management Servers,
- the cache is not synchronized across them.
- In this case, &PRODUCT; might not be able to
- ensure that only the exact desired number of API requests are allowed.
- In the worst case, the number of API calls that might be allowed is
- (number of Management Servers) * (api.throttling.max).
-
- The API commands resetApiLimit and getApiLimit are limited to the
- Management Server where the API is invoked.
-
-
-
-
\ No newline at end of file
diff --git a/docs/en-US/append-displayname-vms.xml b/docs/en-US/append-displayname-vms.xml
deleted file mode 100644
index 592a6e863e8..00000000000
--- a/docs/en-US/append-displayname-vms.xml
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Appending a Display Name to the Guest VM’s Internal Name
- Every guest VM has an internal name. The host uses the internal name to identify the guest
- VMs. &PRODUCT; gives you an option to provide a guest VM with a display name. You can set this
- display name as the internal name so that the vCenter can use it to identify the guest VM. A new
- global parameter, vm.instancename.flag, has now been added to achieve this functionality.
- The default format of the internal name is
- i-<user_id>-<vm_id>-<instance.name>, where instance.name is a global
- parameter. However, if vm.instancename.flag is set to true, and if a display name is provided
- during the creation of a guest VM, the display name is appended to the internal name of the
- guest VM on the host. This makes the internal name format as
- i-<user_id>-<vm_id>-<displayName>. The default value of vm.instancename.flag
- is set to false. This feature is intended to make the correlation between instance names and
- internal names easier in large data center deployments.
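The naming rule can be summarized in a few lines of code. This is an illustrative sketch only (the function is invented, and instance.name is assumed to be "VM"):

```python
def internal_name(user_id, vm_id, display_name=None,
                  instancename_flag=False, instance_name="VM"):
    # The display name replaces instance.name only when
    # vm.instancename.flag is true AND a display name was supplied;
    # otherwise the default i-<user_id>-<vm_id>-<instance.name> applies.
    suffix = display_name if (instancename_flag and display_name) else instance_name
    return f"i-{user_id}-{vm_id}-{suffix}"

print(internal_name(2, 450, "web01", instancename_flag=True))   # i-2-450-web01
print(internal_name(2, 450, "web01", instancename_flag=False))  # i-2-450-VM
```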
- The following table explains how a VM name is displayed in different scenarios.
-
-
-
-
-
-
-
-
-
- User-Provided Display Name
- vm.instancename.flag
- Hostname on the VM
- Name on vCenter
- Internal Name
-
-
-
-
- Yes
- True
- Display name
- i-<user_id>-<vm_id>-displayName
- i-<user_id>-<vm_id>-displayName
-
-
- No
- True
- UUID
- i-<user_id>-<vm_id>-<instance.name>
- i-<user_id>-<vm_id>-<instance.name>
-
-
- Yes
- False
- Display name
- i-<user_id>-<vm_id>-<instance.name>
- i-<user_id>-<vm_id>-<instance.name>
-
-
- No
- False
- UUID
- i-<user_id>-<vm_id>-<instance.name>
- i-<user_id>-<vm_id>-<instance.name>
-
-
-
-
-
diff --git a/docs/en-US/asynchronous-commands-example.xml b/docs/en-US/asynchronous-commands-example.xml
deleted file mode 100644
index 330f1255679..00000000000
--- a/docs/en-US/asynchronous-commands-example.xml
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
-
- Example
-
- The following shows an example of using an asynchronous command. Assume the API command:
- command=deployVirtualMachine&zoneId=1&serviceOfferingId=1&diskOfferingId=1&templateId=1
-
- CloudStack will immediately return a job ID and any other additional data.
-
- <deployvirtualmachineresponse>
- <jobid>1</jobid>
- <id>100</id>
- </deployvirtualmachineresponse>
-
- Using the job ID, you can periodically poll for the results by using the queryAsyncJobResult command.
- command=queryAsyncJobResult&jobId=1
- Three possible results could come from this query.
- Job is still pending:
-
- <queryasyncjobresult>
- <jobid>1</jobid>
- <jobstatus>0</jobstatus>
- <jobprocstatus>1</jobprocstatus>
- </queryasyncjobresult>
-
- Job has succeeded:
-
- <queryasyncjobresultresponse cloud-stack-version="3.0.1.6">
- <jobid>1</jobid>
- <jobstatus>1</jobstatus>
- <jobprocstatus>0</jobprocstatus>
- <jobresultcode>0</jobresultcode>
- <jobresulttype>object</jobresulttype>
- <jobresult>
- <virtualmachine>
- <id>450</id>
- <name>i-2-450-VM</name>
- <displayname>i-2-450-VM</displayname>
- <account>admin</account>
- <domainid>1</domainid>
- <domain>ROOT</domain>
- <created>2011-03-10T18:20:25-0800</created>
- <state>Running</state>
- <haenable>false</haenable>
- <zoneid>1</zoneid>
- <zonename>San Jose 1</zonename>
- <hostid>2</hostid>
- <hostname>905-13.sjc.lab.vmops.com</hostname>
- <templateid>1</templateid>
- <templatename>CentOS 5.3 64bit LAMP</templatename>
- <templatedisplaytext>CentOS 5.3 64bit LAMP</templatedisplaytext>
- <passwordenabled>false</passwordenabled>
- <serviceofferingid>1</serviceofferingid>
- <serviceofferingname>Small Instance</serviceofferingname>
- <cpunumber>1</cpunumber>
- <cpuspeed>500</cpuspeed>
- <memory>512</memory>
- <guestosid>12</guestosid>
- <rootdeviceid>0</rootdeviceid>
- <rootdevicetype>NetworkFilesystem</rootdevicetype>
- <nic>
- <id>561</id>
- <networkid>205</networkid>
- <netmask>255.255.255.0</netmask>
- <gateway>10.1.1.1</gateway>
- <ipaddress>10.1.1.225</ipaddress>
- <isolationuri>vlan://295</isolationuri>
- <broadcasturi>vlan://295</broadcasturi>
- <traffictype>Guest</traffictype>
- <type>Virtual</type>
- <isdefault>true</isdefault>
- </nic>
- <hypervisor>XenServer</hypervisor>
- </virtualmachine>
- </jobresult>
- </queryasyncjobresultresponse>
-
- Job has failed:
-
- <queryasyncjobresult>
- <jobid>1</jobid>
- <jobstatus>2</jobstatus>
- <jobprocstatus>0</jobprocstatus>
- <jobresultcode>551</jobresultcode>
- <jobresulttype>text</jobresulttype>
- <jobresult>Unable to deploy virtual machine id = 100 due to not enough capacity</jobresult>
- </queryasyncjobresult>
-
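The polling loop described above can be sketched in a few lines of Python. This is an illustration, not &PRODUCT; code: the HTTP fetch is abstracted behind a callable, since the endpoint and request signing vary by deployment.

```python
import time
import xml.etree.ElementTree as ET

# jobstatus values from the responses shown above
PENDING, SUCCEEDED, FAILED = 0, 1, 2

def poll_job(fetch_result, interval=2.0, max_attempts=30):
    """Poll queryAsyncJobResult until the job leaves the pending state.

    fetch_result is a callable returning the raw XML response for
    command=queryAsyncJobResult&jobId=...; it stands in for a real,
    signed HTTP call to the management server.
    """
    for _ in range(max_attempts):
        root = ET.fromstring(fetch_result())
        status = int(root.findtext("jobstatus"))
        if status != PENDING:
            # jobresult holds the object on success or the error text on failure
            return status, root.findtext("jobresult")
        time.sleep(interval)
    raise TimeoutError("job still pending after polling window")
```

A caller would typically pass a closure that performs the actual `queryAsyncJobResult` request and branch on `SUCCEEDED` versus `FAILED`.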
-
diff --git a/docs/en-US/asynchronous-commands.xml b/docs/en-US/asynchronous-commands.xml
deleted file mode 100644
index 4c9b59cbc43..00000000000
--- a/docs/en-US/asynchronous-commands.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Asynchronous Commands
- Asynchronous commands were introduced in &PRODUCT; 2.x. Commands are designated as asynchronous when they can potentially take a long period of time to complete such as creating a snapshot or disk volume. They differ from synchronous commands by the following:
-
-
- They are identified in the API Reference by an (A).
- They will immediately return a job ID to refer to the job that will be responsible for processing the command.
- If executed as a "create" resource command, it will return the resource ID as well as the job ID.
- You can periodically check the status of the job by making a simple API call to the command, queryAsyncJobResult and passing in the job ID.
-
-
-
-
-
diff --git a/docs/en-US/attach-iso-to-vm.xml b/docs/en-US/attach-iso-to-vm.xml
deleted file mode 100644
index 8e0d4247f9b..00000000000
--- a/docs/en-US/attach-iso-to-vm.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Attaching an ISO to a VM
-
- In the left navigation, click Instances.
- Choose the virtual machine you want to work with.
- Click the Attach ISO button.
-
-
-
-
- iso.png: depicts adding an iso image
-
-
- In the Attach ISO dialog box, select the desired ISO.
- Click OK.
-
-
diff --git a/docs/en-US/attaching-volume.xml b/docs/en-US/attaching-volume.xml
deleted file mode 100644
index bb9196a93bb..00000000000
--- a/docs/en-US/attaching-volume.xml
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Attaching a Volume
- You can attach a volume to a guest VM to provide extra disk storage. Attach a volume when
- you first create a new volume, when you are moving an existing volume from one VM to another, or
- after you have migrated a volume from one storage pool to another.
-
-
- Log in to the &PRODUCT; UI as a user or admin.
-
-
- In the left navigation, click Storage.
-
-
- In Select View, choose Volumes.
-
-
- Click the volume name in the Volumes list, then click the Attach Disk button
-
-
-
-
- AttachDiskButton.png: button to attach a volume
-
-
-
-
-
- In the Instance popup, choose the VM to which you want to attach the volume. You will
- only see instances to which you are allowed to attach volumes; for example, a user will see
- only instances created by that user, but the administrator will have more choices.
-
-
-
- When the volume has been attached, you should be able to see it by clicking Instances,
- the instance name, and View Volumes.
-
-
-
diff --git a/docs/en-US/automatic-snapshot-creation-retention.xml b/docs/en-US/automatic-snapshot-creation-retention.xml
deleted file mode 100644
index 54fbe68e5bb..00000000000
--- a/docs/en-US/automatic-snapshot-creation-retention.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Automatic Snapshot Creation and Retention
- (Supported for the following hypervisors: XenServer,
- VMware vSphere, and KVM)
- Users can set up a recurring snapshot policy to automatically create multiple snapshots of a
- disk at regular intervals. Snapshots can be created on an hourly, daily, weekly, or monthly
- interval. One snapshot policy can be set up per disk volume. For example, a user can set up a
- daily snapshot at 02:30.
- With each snapshot schedule, users can also specify the number of scheduled snapshots to be
- retained. Older snapshots that exceed the retention limit are automatically deleted. This
- user-defined limit must be equal to or lower than the global limit set by the &PRODUCT;
- administrator. See . The limit applies only to those
- snapshots that are taken as part of an automatic recurring snapshot policy. Additional manual
- snapshots can be created and retained.
-
\ No newline at end of file
diff --git a/docs/en-US/autoscale.xml b/docs/en-US/autoscale.xml
deleted file mode 100644
index 26e795b7bf5..00000000000
--- a/docs/en-US/autoscale.xml
+++ /dev/null
@@ -1,286 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Configuring AutoScale
- AutoScaling allows you to scale your back-end services or application VMs up or down
- seamlessly and automatically according to the conditions you define. With AutoScaling enabled,
- you can ensure that the number of VMs you are using scales up seamlessly when demand increases
- and scales down automatically when demand subsides. This helps you save compute costs by
- terminating underused VMs automatically and launching new VMs when you need them, without
- manual intervention.
- NetScaler AutoScaling is designed to seamlessly launch or terminate VMs based on
- user-defined conditions. Conditions for triggering a scaleup or scaledown action can vary from a
- simple use case like monitoring the CPU usage of a server to a complex use case of monitoring a
- combination of server's responsiveness and its CPU usage. For example, you can configure
- AutoScaling to launch an additional VM whenever CPU usage exceeds 80 percent for 15 minutes, or
- to remove a VM whenever CPU usage is less than 20 percent for 30 minutes.
- &PRODUCT; uses the NetScaler load balancer to monitor all aspects of a system's health and
- work in unison with &PRODUCT; to initiate scale-up or scale-down actions.
-
- AutoScale is supported on NetScaler Release 10 Build 73.e and beyond.
-
-
- Prerequisites
- Before you configure an AutoScale rule, consider the following:
-
-
-
- Ensure that the necessary template is prepared before configuring AutoScale. When a VM
- is deployed by using a template and when it comes up, the application should be up and
- running.
-
- If the application is not running, the NetScaler device considers the VM as
- ineffective and continues provisioning the VMs unconditionally until the resource limit is
- exhausted.
-
-
-
- Deploy the templates you prepared. Ensure that the applications come up on first
- boot and are ready to take traffic. Observe the time required to deploy the template.
- Consider this time when you specify the quiet time while configuring AutoScale.
-
-
- The AutoScale feature supports the SNMP counters that can be used to define conditions
- for taking scale up or scale down actions. To monitor the SNMP-based counter, ensure that
- the SNMP agent is installed in the template used for creating the AutoScale VMs, and the
- SNMP operations work with the configured SNMP community and port by using standard SNMP
- managers. For example, see to configure SNMP on a RHEL
- machine.
-
-
- Ensure that the endpointe.url parameter present in the Global Settings is set to the
- Management Server API URL. For example, http://10.102.102.22:8080/client/api. In a
- multi-node Management Server deployment, use the virtual IP address configured in the load
- balancer for the management server’s cluster. Additionally, ensure that the NetScaler device
- has access to this IP address to provide AutoScale support.
- If you update the endpointe.url, disable the AutoScale functionality of the load
- balancer rules in the system, then enable them back to reflect the changes. For more
- information see
-
-
- If the API Key and Secret Key are regenerated for an AutoScale user, ensure that the
- AutoScale functionality of the load balancers that the user participates in are disabled and
- then enabled to reflect the configuration changes in the NetScaler.
-
-
- In an advanced Zone, ensure that at least one VM is present before configuring a
- load balancer rule with AutoScale. Having one VM in the network ensures that the network is
- in the implemented state for configuring AutoScale.
-
-
-
- Configuration
- Specify the following:
-
-
-
-
-
-
- autoscaleateconfig.png: Configuring AutoScale
-
-
-
-
- Template: A template consists of a base OS image and
- application. A template is used to provision the new instance of an application on a scaleup
- action. When a VM is deployed from a template, the VM can start taking the traffic from the
- load balancer without any admin intervention. For example, if the VM is deployed for a Web
- service, it should have the Web server running, the database connected, and so on.
-
-
- Compute offering: A predefined set of virtual hardware
- attributes, including CPU speed, number of CPUs, and RAM size, that the user can select when
- creating a new virtual machine instance. Choose one of the compute offerings to be used
- while provisioning a VM instance as part of scaleup action.
-
-
- Min Instance: The minimum number of active VM instances
- that are assigned to a load balancing rule. The active VM instances are the application
- instances that are up, serving traffic, and being load balanced. This parameter
- ensures that at least the configured number of active VM instances
- are available to serve the traffic.
-
- If an application, such as SAP, running on a VM instance is down for some reason, the
- VM is then not counted as part of Min Instance parameter, and the AutoScale feature
- initiates a scaleup action if the number of active VM instances is below the configured
- value. Similarly, when an application instance comes up from its earlier down state, this
- application instance is counted as part of the active instance count and the AutoScale
- process initiates a scaledown action when the active instance count breaches the Max
- instance value.
-
-
-
- Max Instance: Maximum number of active VM instances
- that should be assigned to a load balancing rule. This
- parameter defines the upper limit of active VM instances that can be assigned to a load
- balancing rule.
- Specifying a large value for the maximum instance parameter might result in provisioning
- a large number of VM instances, which in turn can lead to a single load balancing rule exhausting
- the VM instance limit specified at the account or domain level.
-
- If an application, such as SAP, running on a VM instance is down for some reason, the
- VM is not counted as part of Max Instance parameter. So there may be scenarios where the
- number of VMs provisioned for a scaleup action might be more than the configured Max
- Instance value. Once the application instances in the VMs are up from an earlier down
- state, the AutoScale feature starts aligning to the configured Max Instance value.
-
-
-
- Specify the following scale-up and scale-down policies:
-
-
- Duration: The duration, in seconds, for which the
- conditions you specify must be true to trigger a scaleup action. The conditions defined
- should hold true for the entire duration you specify for an AutoScale action to be invoked.
-
-
-
- Counter: The performance counters expose the state of
- the monitored instances. By default, &PRODUCT; offers four performance counters: Three SNMP
- counters and one NetScaler counter. The SNMP counters are Linux User CPU, Linux System CPU,
- and Linux CPU Idle. The NetScaler counter is ResponseTime. The root administrator can add
- additional counters into &PRODUCT; by using the &PRODUCT; API.
-
-
- Operator: The following five relational operators are
- supported in AutoScale feature: Greater than, Less than, Less than or equal to, Greater than
- or equal to, and Equal to.
-
-
- Threshold: Threshold value to be used for the counter.
- Once the counter defined above breaches the threshold value, the AutoScale feature initiates
- a scaleup or scaledown action.
-
-
- Add: Click Add to add the condition.
-
-
- Additionally, if you want to configure the advanced settings, click Show advanced settings,
- and specify the following:
-
-
- Polling interval: The frequency at which the conditions,
- a combination of counter, operator, and threshold, are evaluated before taking a scale-up
- or scale-down action. The default polling interval is 30 seconds.
-
-
- Quiet Time: This is the cool down period after an
- AutoScale action is initiated. The time includes the time taken to complete provisioning a
- VM instance from its template and the time taken by an application to be ready to serve
- traffic. This quiet time allows the fleet to come up to a stable state before any action can
- take place. The default is 300 seconds.
-
-
- Destroy VM Grace Period: The duration in seconds, after
- a scaledown action is initiated, to wait before the VM is destroyed as part of scaledown
- action. This is to ensure graceful close of any pending sessions or transactions being
- served by the VM marked for destroy. The default is 120 seconds.
-
-
- Security Groups: Security groups provide a way to
- isolate traffic to the VM instances. A security group is a group of VMs that filter their
- incoming and outgoing traffic according to a set of rules, called ingress and egress rules.
- These rules filter network traffic according to the IP address that is attempting to
- communicate with the VM.
-
-
- Disk Offerings: A predefined set of disk size for
- primary data storage.
-
-
- SNMP Community: The SNMP community string to be used by
- the NetScaler device to query the configured counter value from the provisioned VM
- instances. Default is public.
-
-
- SNMP Port: The port number on which the SNMP agent that
- runs on the provisioned VMs is listening. Default port is 161.
-
-
- User: This is the user that the NetScaler device uses to
- invoke scaleup and scaledown API calls to the cloud. If no option is specified, the user who
- configures AutoScaling is used. Specify another user name to override.
-
-
- Apply: Click Apply to create the AutoScale
- configuration.
-
-
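The interaction of duration, polling interval, counter, operator, and threshold described above can be sketched as follows. This is an illustration of the semantics only, not &PRODUCT; or NetScaler code, and the operator names are illustrative:

```python
import operator

# Relational operators offered by the AutoScale UI (keys are illustrative)
OPERATORS = {
    "greater than": operator.gt,
    "less than": operator.lt,
    "less than or equal to": operator.le,
    "greater than or equal to": operator.ge,
    "equal to": operator.eq,
}

def condition_holds(samples, op_name, threshold, duration, polling_interval):
    """True when the condition held for every sample in the trailing
    `duration` window, i.e. for the entire duration, as AutoScale
    requires before triggering a scale-up or scale-down action."""
    window = max(1, duration // polling_interval)
    if len(samples) < window:
        return False  # not enough history collected yet
    op = OPERATORS[op_name]
    return all(op(s, threshold) for s in samples[-window:])
```

For example, with a 90-second duration and the default 30-second polling interval, the last three counter samples must all breach the threshold before an action fires.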
-
- Disabling and Enabling an AutoScale Configuration
- If you want to perform any maintenance operation on the AutoScale VM instances, disable
- the AutoScale configuration. When the AutoScale configuration is disabled, no scaleup or
- scaledown action is performed. You can use this downtime for the maintenance activities. To
- disable the AutoScale configuration, click the Disable AutoScale
-
-
-
-
- EnableDisable.png: button to enable or disable AutoScale.
-
- button.
-
- The button toggles between enable and disable, depending on whether AutoScale is currently
- enabled or not. After the maintenance operations are done, you can enable the AutoScale
- configuration back. To enable, open the AutoScale configuration page again, then click the
- Enable AutoScale
-
-
-
-
- EnableDisable.png: button to enable or disable AutoScale.
-
- button.
-
- Updating an AutoScale Configuration
- You can update the various parameters and add or delete the conditions in a scaleup or
- scaledown rule. Before you update an AutoScale configuration, ensure that you disable the
- AutoScale load balancer rule by clicking the Disable AutoScale button.
-
- After you modify the required AutoScale parameters, click Apply. To apply the new AutoScale
- policies, open the AutoScale configuration page again, then click the Enable AutoScale
- button.
-
- Runtime Considerations
-
-
-
-
- An administrator should not assign a VM to a load balancing rule which is configured for
- AutoScale.
-
-
- If the NetScaler device is shut down or restarted before VM provisioning completes, the
- provisioned VM cannot become part of the load balancing rule even though the intent was to
- assign it to one. To work around this, rename the AutoScale-provisioned VMs based on
- the rule name or ID so that at any point in time the VMs can be reconciled to their load
- balancing rule.
-
-
- Making API calls outside the context of AutoScale, such as destroyVM, on an autoscaled
- VM leaves the load balancing configuration in an inconsistent state. Though the VM is destroyed
- and removed from the load balancer rule, NetScaler continues to show the VM as a service assigned
- to a rule.
-
-
-
diff --git a/docs/en-US/aws-api-examples.xml b/docs/en-US/aws-api-examples.xml
deleted file mode 100644
index ee3b44a5bde..00000000000
--- a/docs/en-US/aws-api-examples.xml
+++ /dev/null
@@ -1,145 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Examples
- There are many tools available to interface with an AWS-compatible API. In this section we provide
- a few examples that users of &PRODUCT; can build upon.
-
-
- Boto Examples
- Boto is one of them. It is a Python package available at https://github.com/boto/boto.
- In this section we provide two examples of Python scripts that use Boto and have been tested with the
- &PRODUCT; AWS API Interface.
- First is an EC2 example. Replace the Access and Secret Keys with your own and
- update the endpoint.
-
-
- An EC2 Boto example
- #!/usr/bin/env python
-
-import sys
-import os
-import boto
-import boto.ec2
-
-region = boto.ec2.regioninfo.RegionInfo(name="ROOT",endpoint="localhost")
-apikey='GwNnpUPrO6KgIdZu01z_ZhhZnKjtSdRwuYd4DvpzvFpyxGMvrzno2q05MB0ViBoFYtdqKd'
-secretkey='t4eXLEYWw7chBhDlaKf38adCMSHx_wlds6JfSx3z9fSpSOm0AbP9Moj0oGIzy2LSC8iw'
-
-def main():
- '''Establish connection to EC2 cloud'''
- conn =boto.connect_ec2(aws_access_key_id=apikey,
- aws_secret_access_key=secretkey,
- is_secure=False,
- region=region,
- port=7080,
- path="/awsapi",
- api_version="2010-11-15")
-
- '''Get list of images that I own'''
- images = conn.get_all_images()
- print images
- myimage = images[0]
- '''Pick an instance type'''
- vm_type='m1.small'
- reservation = myimage.run(instance_type=vm_type,security_groups=['default'])
-
-if __name__ == '__main__':
- main()
-
-
-
- Second is an S3 example. Replace the Access and Secret keys with your own,
- as well as the endpoint of the service. Be sure to also update the file paths to something
- that exists on your machine.
-
-
- An S3 Boto Example
- #!/usr/bin/env python
-
-import sys
-import os
-from boto.s3.key import Key
-from boto.s3.connection import S3Connection
-from boto.s3.connection import OrdinaryCallingFormat
-
-apikey='ChOw-pwdcCFy6fpeyv6kUaR0NnhzmG3tE7HLN2z3OB_s-ogF5HjZtN4rnzKnq2UjtnHeg_yLA5gOw'
-secretkey='IMY8R7CJQiSGFk4cHwfXXN3DUFXz07cCiU80eM3MCmfLs7kusgyOfm0g9qzXRXhoAPCH-IRxXc3w'
-
-cf=OrdinaryCallingFormat()
-
-def main():
- '''Establish connection to S3 service'''
- conn =S3Connection(aws_access_key_id=apikey,aws_secret_access_key=secretkey, \
- is_secure=False, \
- host='localhost', \
- port=7080, \
- calling_format=cf, \
- path="/awsapi/rest/AmazonS3")
-
- try:
- bucket=conn.create_bucket('cloudstack')
- k = Key(bucket)
- k.key = 'test'
- try:
- k.set_contents_from_filename('/Users/runseb/Desktop/s3cs.py')
- except:
- print 'could not write file'
- pass
- except:
- bucket = conn.get_bucket('cloudstack')
- k = Key(bucket)
- k.key = 'test'
- try:
- k.get_contents_to_filename('/Users/runseb/Desktop/foobar')
- except:
- print 'Could not get file'
- pass
-
- try:
- bucket1=conn.create_bucket('teststring')
- k=Key(bucket1)
- k.key = 'foobar'
- k.set_contents_from_string('This is my silly test')
- except:
- bucket1=conn.get_bucket('teststring')
- k = Key(bucket1)
- k.key='foobar'
- k.get_contents_as_string()
-
-if __name__ == '__main__':
- main()
-
-
-
-
-
-
-
- JClouds Examples
-
-
-
-
diff --git a/docs/en-US/aws-ec2-configuration.xml b/docs/en-US/aws-ec2-configuration.xml
deleted file mode 100644
index f0f2d0f6cdc..00000000000
--- a/docs/en-US/aws-ec2-configuration.xml
+++ /dev/null
@@ -1,109 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Enabling the EC2 and S3 Compatible Interface
-
- The software that provides AWS API compatibility is installed along with &PRODUCT;. You must enable the services and perform some setup steps prior to using it.
-
-
- Set the global configuration parameters for each service to true.
- See .
- Create a set of &PRODUCT; service offerings with names that match the Amazon service offerings.
- You can do this through the &PRODUCT; UI as described in the Administration Guide.
- Be sure you have included the Amazon default service offering, m1.small, as well as any EC2 instance types that you will use.
-
- If you did not already do so when you set the configuration parameter in step , restart the Management Server.
- # service cloudstack-management restart
-
-
- The following sections provide details on performing these steps.
-
-
- Enabling the Services
- To enable the EC2 and S3 compatible services you need to set the configuration variables enable.ec2.api
- and enable.s3.api to true. You do not have to enable both at the same time. Enable the ones you need.
- This can be done via the &PRODUCT; GUI under Global Settings, or via the API.
- The screenshot below shows how to use the GUI to enable these services.
-
-
-
-
-
-
-
- Use the GUI to set the configuration variable to true
-
-
-
-
- Using the &PRODUCT; API, the easiest approach is to use the so-called integration port, on which you can make
- unauthenticated calls. In Global Settings set the integration port to 8096, then call the updateConfiguration method.
- The following URLs show you how:
-
-
-
- http://localhost:8096/client/api?command=updateConfiguration&name=enable.ec2.api&value=true
- http://localhost:8096/client/api?command=updateConfiguration&name=enable.s3.api&value=true
-
-
-
- Once you have enabled the services, restart the server.
-
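As a small sketch, the unauthenticated call against the integration port can also be built programmatically. The host and port here are assumptions matching the example above:

```python
from urllib.parse import urlencode

def integration_api_url(host, name, value, port=8096):
    """Build an unauthenticated updateConfiguration call for the
    integration port (no request signing is required on this port)."""
    query = urlencode({"command": "updateConfiguration",
                       "name": name, "value": value})
    return "http://%s:%d/client/api?%s" % (host, port, query)

# e.g. urllib.request.urlopen(integration_api_url("localhost", "enable.ec2.api", "true"))
```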
-
-
- Creating EC2 Compatible Service Offerings
- You will also need to define compute service offerings with names compatible with the
- Amazon EC2 instance type API names (e.g., m1.small, m1.large). This can be done via the &PRODUCT; GUI.
- Go to Service Offerings, select Compute offering, and either create
- a new compute offering or modify an existing one, ensuring that the name matches an EC2 instance type API name. The screenshot below shows how:
-
-
-
-
-
-
- Use the GUI to set the name of a compute service offering to an EC2 instance
- type API name.
-
-
-
-
-
- Modifying the AWS API Port
-
- (Optional) The AWS API listens for requests on port 7080. If you prefer the AWS API to listen on another port, you can change it as follows:
-
- Edit the files /etc/cloudstack/management/server.xml, /etc/cloudstack/management/server-nonssl.xml,
- and /etc/cloudstack/management/server-ssl.xml.
- In each file, find the tag <Service name="Catalina7080">. Under this tag,
- locate <Connector executor="tomcatThreadPool-internal" port=...>.
- Change the port to whatever port you want to use, then save the files.
- Restart the Management Server.
-
- If you re-install &PRODUCT;, you will have to re-enable the services and if need be update the port.
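As a rough sketch, the element to edit looks like the fragment below. Everything except the `Service` name and the `executor` attribute quoted above is an assumption, and the elided attributes must be left as they are in your files:

```xml
<Service name="Catalina7080">
  <!-- change port="7080" to the desired port in all three files -->
  <Connector executor="tomcatThreadPool-internal" port="7080" ... />
</Service>
```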
-
-
-
-
diff --git a/docs/en-US/aws-ec2-introduction.xml b/docs/en-US/aws-ec2-introduction.xml
deleted file mode 100644
index 4cf071bcbb2..00000000000
--- a/docs/en-US/aws-ec2-introduction.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Amazon Web Services Compatible Interface
- &PRODUCT; can translate Amazon Web Services (AWS) API calls to native &PRODUCT; API calls
- so that users can continue using existing AWS-compatible tools. This translation service runs as
- a separate web application in the same tomcat server as the management server of &PRODUCT;,
- listening on a different port. The Amazon Web Services (AWS) compatible interface provides the
- EC2 SOAP and Query APIs as well as the S3 REST API.
-
- This service was previously enabled by separate software called CloudBridge. It is now
- fully integrated with the &PRODUCT; management server.
-
-
- The compatible interfaces for the EC2 Query API and the S3 API are works in progress. The S3-compatible API offers a way to store data on the management server file system; it is not an implementation of the S3 backend.
-
- Limitations
-
-
- Supported only in zones that use basic networking.
-
-
- Available in fresh installations of &PRODUCT;. Not available through upgrade of previous versions.
-
-
- Features such as Elastic IP (EIP) and Elastic Load Balancing (ELB) are only available in an infrastructure
- with a Citrix NetScaler device. Users accessing a Zone with a NetScaler device will need to use a
- NetScaler-enabled network offering (DefaultSharedNetscalerEIP and ELBNetworkOffering).
-
-
-
diff --git a/docs/en-US/aws-ec2-requirements.xml b/docs/en-US/aws-ec2-requirements.xml
deleted file mode 100644
index 62e94b1ac9f..00000000000
--- a/docs/en-US/aws-ec2-requirements.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Supported API Version
-
- The EC2 interface complies with Amazon's WSDL version dated November 15, 2010, available at
- http://ec2.amazonaws.com/doc/2010-11-15/.
- The interface is compatible with the EC2 command-line
- tools EC2 tools v. 1.3.6230, which can be downloaded at http://s3.amazonaws.com/ec2-downloads/ec2-api-tools-1.3-62308.zip.
-
-
- Work is underway to support a more recent version of the EC2 API
-
diff --git a/docs/en-US/aws-ec2-supported-commands.xml b/docs/en-US/aws-ec2-supported-commands.xml
deleted file mode 100644
index 7cdbcad8095..00000000000
--- a/docs/en-US/aws-ec2-supported-commands.xml
+++ /dev/null
@@ -1,396 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Supported AWS API Calls
- The following Amazon EC2 commands are supported by &PRODUCT; when the AWS API compatible interface is enabled.
- For a few commands, there are differences between the &PRODUCT; and Amazon EC2 versions, and these differences are noted. The underlying SOAP call for each command is also given, for those who have built tools using those calls.
-
-
-
diff --git a/docs/en-US/aws-ec2-timeouts.xml b/docs/en-US/aws-ec2-timeouts.xml
deleted file mode 100644
index 73d0c16c4df..00000000000
--- a/docs/en-US/aws-ec2-timeouts.xml
+++ /dev/null
@@ -1,51 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Using Timeouts to Ensure AWS API Command Completion
- The Amazon EC2 command-line tools have a default connection timeout. When used with &PRODUCT;, a longer timeout might be needed for some commands. If you find that commands are not completing due to timeouts, you can specify custom timeouts. You can add the following optional command-line parameters to any &PRODUCT;-supported EC2 command:
-
-
-
-
-
-
- --connection-timeout TIMEOUT
- Specifies a connection timeout (in seconds).
- Example: --connection-timeout 30
-
-
-
- --request-timeout TIMEOUT
- Specifies a request timeout (in seconds).
- Example: --request-timeout 45
-
-
-
-
-
- Example:
- ec2-run-instances 2 -z us-test1 -n 1-3 --connection-timeout 120 --request-timeout 120
- These optional timeout arguments are not specific to &PRODUCT;.
-
diff --git a/docs/en-US/aws-ec2-user-setup.xml b/docs/en-US/aws-ec2-user-setup.xml
deleted file mode 100644
index a2d89187feb..00000000000
--- a/docs/en-US/aws-ec2-user-setup.xml
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- AWS API User Setup
- In general, users need not be aware that they are using a translation service provided by &PRODUCT;.
- They only need to send AWS API calls to &PRODUCT;'s endpoint, and it will translate the calls to the native &PRODUCT; API. Users of the Amazon EC2 compatible interface will be able to keep their existing EC2 tools
- and scripts and use them with their &PRODUCT; deployment, by specifying the endpoint of the
- management server and using the proper user credentials. In order to do this, each user must
- perform the following configuration steps:
-
-
-
- Generate user credentials.
-
-
- Register with the service.
-
-
- For convenience, set up environment variables for the EC2 SOAP command-line tools.
-
-
-
-
- AWS API User Registration
- Each user must perform a one-time registration. The user follows these steps:
-
-
- Obtain the following by looking in the &PRODUCT; UI, using the API, or asking the cloud administrator:
-
-
- The &PRODUCT; server's publicly available DNS name or IP address
- The user account's Access key and Secret key
-
-
-
- Generate a private key and a self-signed X.509 certificate. The user substitutes their own desired storage location for /path/to/… below.
-
-
- $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /path/to/private_key.pem -out /path/to/cert.pem
-
-
-
- Register the user X.509 certificate and Access/Secret keys with the AWS compatible service.
- If you have the source code of &PRODUCT;, go to the awsapi-setup/setup directory and use the Python script
- cloudstack-aws-api-register. If you do not have the source, download the script using the following command.
-
-
- wget -O cloudstack-aws-api-register "https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=awsapi-setup/setup/cloudstack-aws-api-register;hb=4.1"
-
-
- Then execute it, using the access and secret keys that were obtained in step . An example is shown below.
-
- $ cloudstack-aws-api-register --apikey=User’s &PRODUCT; API key --secretkey=User’s &PRODUCT; Secret key --cert=/path/to/cert.pem --url=http://&PRODUCT;.server:7080/awsapi
-
-
-
-
-
- A user with an existing AWS certificate could choose to use the same certificate with &PRODUCT;, but note that the certificate would be uploaded to the &PRODUCT; management server database.
-
-
-
-
- AWS API Command-Line Tools Setup
- To use the EC2 command-line tools, the user must perform these steps:
-
-
- Be sure you have the right version of EC2 Tools.
- The supported version is available at http://s3.amazonaws.com/ec2-downloads/ec2-api-tools-1.3-62308.zip.
-
-
-
- Set up the EC2 environment variables. This can be done every time you use the service, or you can set them up in the proper shell profile. Replace the endpoint (i.e., EC2_URL) with the proper address of your &PRODUCT; management server and port. In a bash shell do the following.
-
-
- $ export EC2_CERT=/path/to/cert.pem
- $ export EC2_PRIVATE_KEY=/path/to/private_key.pem
- $ export EC2_URL=http://localhost:7080/awsapi
- $ export EC2_HOME=/path/to/EC2_tools_directory
-
-
-
-
-
diff --git a/docs/en-US/aws-interface-compatibility.xml b/docs/en-US/aws-interface-compatibility.xml
deleted file mode 100644
index 2c85c24b36a..00000000000
--- a/docs/en-US/aws-interface-compatibility.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Amazon Web Services Compatible Interface
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/basic-adv-networking.xml b/docs/en-US/basic-adv-networking.xml
deleted file mode 100644
index 46f0650e69f..00000000000
--- a/docs/en-US/basic-adv-networking.xml
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Basic and Advanced Networking
- &PRODUCT; provides two styles of networking:
-
- Basic
- For AWS-style networking. Provides a single network where guest isolation can be provided
- through layer-3 means such as security groups (IP address source filtering).
-
-
- Advanced
- For more sophisticated network topologies. This network model provides the most
- flexibility in defining guest networks, but requires more configuration steps than basic
- networking.
-
- Each zone has either basic or advanced networking. Once the choice of networking model for a
- zone has been made and configured in &PRODUCT;, it can not be changed. A zone is either
- basic or advanced for its entire lifetime.
- The following table compares the networking features in the two networking models.
-
-
-
-
- Networking Feature
- Basic Network
- Advanced Network
-
-
-
-
- Number of networks
- Single network
- Multiple networks
-
-
- Firewall type
- Physical
- Physical and Virtual
-
-
- Load balancer
- Physical
- Physical and Virtual
-
-
- Isolation type
- Layer 3
- Layer 2 and Layer 3
-
-
- VPN support
- No
- Yes
-
-
- Port forwarding
- Physical
- Physical and Virtual
-
-
- 1:1 NAT
- Physical
- Physical and Virtual
-
-
- Source NAT
- No
- Physical and Virtual
-
-
- Userdata
- Yes
- Yes
-
-
- Network usage monitoring
- sFlow / netFlow at physical router
- Hypervisor and Virtual Router
-
-
- DNS and DHCP
- Yes
- Yes
-
-
-
-
- The two types of networking may be in use in the same cloud. However, a given zone must use
- either Basic Networking or Advanced Networking.
- Different types of network traffic can be segmented on the same physical network. Guest
- traffic can also be segmented by account. To isolate traffic, you can use separate VLANs. If you
- are using separate VLANs on a single physical network, make sure the VLAN tags are in separate
- numerical ranges.
-
diff --git a/docs/en-US/basic-zone-configuration.xml b/docs/en-US/basic-zone-configuration.xml
deleted file mode 100644
index 79d4ab8ce1b..00000000000
--- a/docs/en-US/basic-zone-configuration.xml
+++ /dev/null
@@ -1,319 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Basic Zone Configuration
-
-
- After you select Basic in the Add Zone wizard and click Next, you will be asked to enter
- the following details. Then click Next.
-
-
- Name. A name for the zone.
-
-
- DNS 1 and 2. These are DNS servers for use by guest
- VMs in the zone. These DNS servers will be accessed via the public network you will add
- later. The public IP addresses for the zone must have a route to the DNS server named
- here.
-
-
- Internal DNS 1 and Internal DNS 2. These are DNS
- servers for use by system VMs in the zone (these are VMs used by &PRODUCT; itself, such
- as virtual routers, console proxies, and Secondary Storage VMs.) These DNS servers will
- be accessed via the management traffic network interface of the System VMs. The private
- IP address you provide for the pods must have a route to the internal DNS server named
- here.
-
-
- Hypervisor. (Introduced in version 3.0.1) Choose
- the hypervisor for the first cluster in the zone. You can add clusters with different
- hypervisors later, after you finish adding the zone.
-
-
- Network Offering. Your choice here determines what
- network services will be available on the network for guest VMs.
-
-
-
-
-
-
- Network Offering
- Description
-
-
-
-
- DefaultSharedNetworkOfferingWithSGService
- If you want to enable security groups for guest traffic isolation,
- choose this. (See Using Security Groups to Control Traffic to
- VMs.)
-
-
- DefaultSharedNetworkOffering
- If you do not need security groups, choose this.
-
-
- DefaultSharedNetscalerEIPandELBNetworkOffering
- If you have installed a Citrix NetScaler appliance as part of your
- zone network, and you will be using its Elastic IP and Elastic Load Balancing
- features, choose this. With the EIP and ELB features, a basic zone with
- security groups enabled can offer 1:1 static NAT and load
- balancing.
-
-
-
-
-
-
- Network Domain. (Optional) If you want to assign a
- special domain name to the guest VM network, specify the DNS suffix.
-
-
- Public. A public zone is available to all users. A
- zone that is not public will be assigned to a particular domain. Only users in that
- domain will be allowed to create guest VMs in this zone.
-
-
-
-
- Choose which traffic types will be carried by the physical network.
- The traffic types are management, public, guest, and storage traffic. For more
- information about the types, roll over the icons to display their tool tips, or see Basic
- Zone Network Traffic Types. This screen starts out with some traffic types already assigned.
- To add more, drag and drop traffic types onto the network. You can also change the network
- name if desired.
-
-
- Assign a network traffic label to each traffic type on the physical network. These
- labels must match the labels you have already defined on the hypervisor host. To assign each
- label, click the Edit button under the traffic type icon. A popup dialog appears where you
- can type the label, then click OK.
- These traffic labels will be defined only for the hypervisor selected for the first
- cluster. For all other hypervisors, the labels can be configured after the zone is
- created.
-
-
- Click Next.
-
-
- (NetScaler only) If you chose the network offering for NetScaler, you have an additional
- screen to fill out. Provide the requested details to set up the NetScaler, then click
- Next.
-
-
- IP address. The NSIP (NetScaler IP) address of the
- NetScaler device.
-
-
- Username/Password. The authentication credentials
- to access the device. &PRODUCT; uses these credentials to access the device.
-
-
- Type. NetScaler device type that is being added. It
- could be NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the types,
- see About Using a NetScaler Load Balancer.
-
-
- Public interface. Interface of NetScaler that is
- configured to be part of the public network.
-
-
- Private interface. Interface of NetScaler that is
- configured to be part of the private network.
-
-
- Number of retries. Number of times to attempt a
- command on the device before considering the operation failed. Default is 2.
-
-
- Capacity. Number of guest networks/accounts that
- will share this NetScaler device.
-
-
- Dedicated. When marked as dedicated, this device
- will be dedicated to a single account. When Dedicated is checked, the value in the
- Capacity field has no significance – implicitly, its value is 1.
-
-
-
-
- (NetScaler only) Configure the IP range for public traffic. The IPs in this range will
- be used for the static NAT capability which you enabled by selecting the network offering
- for NetScaler with EIP and ELB. Enter the following details, then click Add. If desired, you
- can repeat this step to add more IP ranges. When done, click Next.
-
-
- Gateway. The gateway in use for these IP
- addresses.
-
-
- Netmask. The netmask associated with this IP
- range.
-
-
- VLAN. The VLAN that will be used for public
- traffic.
-
-
- Start IP/End IP. A range of IP addresses that are
- assumed to be accessible from the Internet and will be allocated for access to guest
- VMs.
-
-
-
-
- In a new zone, &PRODUCT; adds the first pod for you. You can always add more pods later.
- For an overview of what a pod is, see .
- To configure the first pod, enter the following, then click Next:
-
-
- Pod Name. A name for the pod.
-
-
- Reserved system gateway. The gateway for the hosts
- in that pod.
-
-
- Reserved system netmask. The network prefix that
- defines the pod's subnet. Use CIDR notation.
-
-
- Start/End Reserved System IP. The IP range in the
- management network that &PRODUCT; uses to manage various system VMs, such as Secondary
- Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP
- Addresses.
-
-
-
-
- Configure the network for guest traffic. Provide the following, then click Next:
-
-
- Guest gateway. The gateway that the guests should
- use.
-
-
- Guest netmask. The netmask in use on the subnet the
- guests will use.
-
-
- Guest start IP/End IP. Enter the first and last IP
- addresses that define a range that &PRODUCT; can assign to guests.
-
-
- We strongly recommend the use of multiple NICs. If multiple NICs are used, they
- may be in a different subnet.
-
-
- If one NIC is used, these IPs should be in the same CIDR as the pod CIDR.
-
-
-
-
-
-
- In a new pod, &PRODUCT; adds the first cluster for you. You can always add more clusters
- later. For an overview of what a cluster is, see About Clusters.
- To configure the first cluster, enter the following, then click Next:
-
-
- Hypervisor. (Version 3.0.0 only; in 3.0.1, this
- field is read only) Choose the type of hypervisor software that all hosts in this
- cluster will run. If you choose VMware, additional fields appear so you can give
- information about a vSphere cluster. For vSphere servers, we recommend creating the
- cluster of hosts in vCenter and then adding the entire cluster to &PRODUCT;. See Add
- Cluster: vSphere.
-
-
- Cluster name. Enter a name for the cluster. This
- can be text of your choosing and is not used by &PRODUCT;.
-
-
-
-
- In a new cluster, &PRODUCT; adds the first host for you. You can always add more hosts
- later. For an overview of what a host is, see About Hosts.
-
- When you add a hypervisor host to &PRODUCT;, the host must not have any VMs already
- running.
-
- Before you can configure the host, you need to install the hypervisor software on the
- host. You will need to know which version of the hypervisor software is supported by
- &PRODUCT; and what additional configuration is required to ensure the host will work with
- &PRODUCT;. To find these installation details, see:
-
-
- Citrix XenServer Installation and Configuration
-
-
- VMware vSphere Installation and Configuration
-
-
- KVM Installation and Configuration
-
-
-
- To configure the first host, enter the following, then click Next:
-
-
- Host Name. The DNS name or IP address of the
- host.
-
-
- Username. The username is root.
-
-
- Password. This is the password for the user named
- above (from your XenServer or KVM install).
-
-
- Host Tags. (Optional) Any labels that you use to
- categorize hosts for ease of maintenance. For example, you can set this to the cloud's
- HA tag (set in the ha.tag global configuration parameter) if you want this host to be
- used only for VMs with the "high availability" feature enabled. For more information,
- see HA-Enabled Virtual Machines as well as HA for Hosts.
-
-
-
-
- In a new cluster, &PRODUCT; adds the first primary storage server for you. You can
- always add more servers later. For an overview of what primary storage is, see About Primary
- Storage.
- To configure the first primary storage server, enter the following, then click
- Next:
-
-
- Name. The name of the storage device.
-
-
- Protocol. For XenServer, choose either NFS, iSCSI,
- or PreSetup. For KVM, choose NFS, SharedMountPoint, CLVM, or RBD. For vSphere, choose
- either VMFS (iSCSI or FiberChannel) or NFS. The remaining fields in the screen vary
- depending on what you choose here.
-
-
-
-
-
diff --git a/docs/en-US/basic-zone-guest-ip-addresses.xml b/docs/en-US/basic-zone-guest-ip-addresses.xml
deleted file mode 100644
index 5143f71f17e..00000000000
--- a/docs/en-US/basic-zone-guest-ip-addresses.xml
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Basic Zone Guest IP Addresses
- When basic networking is used, &PRODUCT; will assign IP addresses in the CIDR of the pod to the guests in that pod. The administrator must add a Direct IP range on the pod for this purpose. These IPs are in the same VLAN as the hosts.
-
diff --git a/docs/en-US/basic-zone-network-traffic-types.xml b/docs/en-US/basic-zone-network-traffic-types.xml
deleted file mode 100644
index 850373658b4..00000000000
--- a/docs/en-US/basic-zone-network-traffic-types.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Basic Zone Network Traffic Types
- When basic networking is used, there can be only one physical network in the zone. That physical network carries the following traffic types:
-
- Guest. When end users run VMs, they generate guest traffic. The guest VMs communicate with each other over a network that can be referred to as the guest network. Each pod in a basic zone is a broadcast domain, and therefore each pod has a different IP range for the guest network. The administrator must configure the IP range for each pod.
- Management. When &PRODUCT;'s internal resources communicate with each other, they generate management traffic. This includes communication between hosts, system VMs (VMs used by &PRODUCT; to perform various tasks in the cloud), and any other component that communicates directly with the &PRODUCT; Management Server. You must configure the IP range for the system VMs to use.
- We strongly recommend the use of separate NICs for management traffic and guest traffic.
- Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the &PRODUCT; UI to acquire these IPs to implement NAT between their guest network and the public network, as described in Acquiring a New IP Address.
- Storage. While labeled "storage", this is specifically about secondary storage and doesn't affect traffic for primary storage. This includes traffic such as VM templates and snapshots, which is sent between the secondary storage VM and secondary storage servers. &PRODUCT; uses a separate Network Interface Controller (NIC) named storage NIC for storage network traffic. Use of a storage NIC that always operates on a high bandwidth network allows fast template and snapshot copying. You must configure the IP range to use for the storage network.
-
- In a basic network, configuring the physical network is fairly straightforward. In most cases, you only need to configure one guest network to carry traffic that is generated by guest VMs. If you use a NetScaler load balancer and enable its elastic IP and elastic load balancing (EIP and ELB) features, you must also configure a network to carry public traffic. &PRODUCT; takes care of presenting the necessary network configuration steps to you in the UI when you add a new zone.
-
diff --git a/docs/en-US/basic-zone-physical-network-configuration.xml b/docs/en-US/basic-zone-physical-network-configuration.xml
deleted file mode 100644
index 4b1d24f2657..00000000000
--- a/docs/en-US/basic-zone-physical-network-configuration.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Basic Zone Physical Network Configuration
- In a basic network, configuring the physical network is fairly straightforward. You only need to configure one guest network to carry traffic that is generated by guest VMs. When you first add a zone to &PRODUCT;, you set up the guest network through the Add Zone screens.
-
-
diff --git a/docs/en-US/best-practices-for-vms.xml b/docs/en-US/best-practices-for-vms.xml
deleted file mode 100644
index 164932ac79a..00000000000
--- a/docs/en-US/best-practices-for-vms.xml
+++ /dev/null
@@ -1,67 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Best Practices for Virtual Machines
- For VMs to work as expected and provide excellent service, follow these guidelines.
-
- Monitor VMs for Max Capacity
- The &PRODUCT; administrator should monitor the total number of VM instances in each
- cluster, and disable allocation to the cluster if the total is approaching the maximum that
- the hypervisor can handle. Be sure to leave a safety margin to allow for the possibility of
- one or more hosts failing, which would increase the VM load on the other hosts as the VMs
- are automatically redeployed. Consult the documentation for your chosen hypervisor to find
- the maximum permitted number of VMs per host, then use &PRODUCT; global configuration
- settings to set this as the default limit. Monitor the VM activity in each cluster at all
- times. Keep the total number of VMs below a safe level that allows for the occasional host
- failure. For example, if there are N hosts in the cluster, and you want to allow for one
- host in the cluster to be down at any given time, the total number of VM instances you can
- permit in the cluster is at most (N-1) * (per-host-limit). Once a cluster reaches this
- number of VMs, use the &PRODUCT; UI to disable allocation of more VMs to the
- cluster.
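The (N-1) * (per-host-limit) rule above can be checked with simple shell arithmetic; the host count and per-host limit below are hypothetical examples, not recommendations:

```shell
# Allow for one failed host: safe capacity is (N-1) * per-host limit.
# Example numbers only; look up your hypervisor's real per-host VM limit.
N=8                # hosts in the cluster (example)
PER_HOST_LIMIT=50  # maximum VMs per host (example)
echo $(( (N - 1) * PER_HOST_LIMIT ))   # prints 350
```

Once the cluster approaches this number, disable further allocation to it in the UI.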
-
-
- Install Required Tools and Drivers
- Be sure the following are installed on each VM:
-
- For XenServer, install PV drivers and Xen tools on each VM.
- This will enable live migration and clean guest shutdown.
- Xen tools are required in order for dynamic CPU and RAM scaling to work.
- For vSphere, install VMware Tools on each VM.
- This will enable console view to work properly.
- VMware Tools are required in order for dynamic CPU and RAM scaling to work.
-
- To be sure that Xen tools or VMware Tools is installed, use one of the following techniques:
-
- Create each VM from a template that already has the tools installed; or,
- When registering a new template, the administrator or user can indicate whether tools are
- installed on the template. This can be done through the UI
- or using the updateTemplate API; or,
- If a user deploys a virtual machine with a template that does not have
- Xen tools or VMware Tools, and later installs the tools on the VM,
- then the user can inform &PRODUCT; using the updateVirtualMachine API.
- After installing the tools and updating the virtual machine, stop
- and start the VM.
-
-
-
diff --git a/docs/en-US/best-practices-primary-storage.xml b/docs/en-US/best-practices-primary-storage.xml
deleted file mode 100644
index 279b95c0de1..00000000000
--- a/docs/en-US/best-practices-primary-storage.xml
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Best Practices for Primary Storage
-
- The speed of primary storage will impact guest performance. If possible, choose smaller, higher RPM drives or SSDs for primary storage.
- There are two ways CloudStack can leverage primary storage:
- Static: This is CloudStack's traditional way of handling storage. In this model, a preallocated amount of storage (ex. a volume from a SAN) is given to CloudStack. CloudStack then permits many of its volumes to be created on this storage (can be root and/or data disks). If using this technique, ensure that nothing is stored on the storage. Adding the storage to &PRODUCT; will destroy any existing data.
- Dynamic: This is a newer way for CloudStack to manage storage. In this model, a storage system (rather than a preallocated amount of storage) is given to CloudStack. CloudStack, working in concert with a storage plug-in, dynamically creates volumes on the storage system and each volume on the storage system maps to a single CloudStack volume. This is highly useful for features such as storage Quality of Service. Currently this feature is supported for data disks (Disk Offerings).
-
-
diff --git a/docs/en-US/best-practices-secondary-storage.xml b/docs/en-US/best-practices-secondary-storage.xml
deleted file mode 100644
index 3d535c326e9..00000000000
--- a/docs/en-US/best-practices-secondary-storage.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Best Practices for Secondary Storage
-
- Each Zone can have one or more secondary storage servers. Multiple secondary storage servers provide increased scalability to the system.
- Secondary storage has a high read:write ratio and is expected to consist of larger drives with lower IOPS than primary storage.
- Ensure that nothing is stored on the server. Adding the server to &PRODUCT; will destroy any existing data.
-
-
diff --git a/docs/en-US/best-practices-templates.xml b/docs/en-US/best-practices-templates.xml
deleted file mode 100644
index 4e2992c021d..00000000000
--- a/docs/en-US/best-practices-templates.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Best Practices for Templates
- If you plan to use large templates (100 GB or larger), be sure you have a 10-gigabit network to support the large templates. A slower network can lead to timeouts and other errors when large templates are used.
-
diff --git a/docs/en-US/best-practices-virtual-router.xml b/docs/en-US/best-practices-virtual-router.xml
deleted file mode 100644
index 060d8680992..00000000000
--- a/docs/en-US/best-practices-virtual-router.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Best Practices for Virtual Routers
-
- WARNING: Restarting a virtual router from a hypervisor console deletes all the iptables rules. To work around this issue, stop the virtual router and start it from the &PRODUCT; UI.
- WARNING: Do not use the destroyRouter API when only one router is available in the network, because restartNetwork API with the cleanup=false parameter can't recreate it later. If you want to destroy and recreate the single router available in the network, use the restartNetwork API with the cleanup=true parameter.
-
-
-
-
-
diff --git a/docs/en-US/best-practices.xml b/docs/en-US/best-practices.xml
deleted file mode 100644
index 41d7cde9036..00000000000
--- a/docs/en-US/best-practices.xml
+++ /dev/null
@@ -1,82 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Best Practices
- Deploying a cloud is challenging. There are many different technology choices to make, and &PRODUCT; is flexible enough in its configuration that there are many possible ways to combine and configure the chosen technology. This section contains suggestions and requirements about cloud deployments.
- These should be treated as suggestions and not absolutes. However, we do encourage anyone planning to build a cloud outside of these guidelines to seek guidance and advice on the project mailing lists.
-
- Process Best Practices
-
-
- A staging system that models the production environment is strongly advised. It is critical if customizations have been applied to &PRODUCT;.
-
-
- Allow adequate time for installation, a beta, and learning the system. Installs with basic networking can be done in hours. Installs with advanced networking usually take several days for the first attempt, with complicated installations taking longer. For a full production system, allow at least 4-8 weeks for a beta to work through all of the integration issues. You can get help from fellow users on the cloudstack-users mailing list.
-
-
-
-
- Setup Best Practices
-
-
- Each host should be configured to accept connections only from well-known entities such as the &PRODUCT; Management Server or your network monitoring software.
-
-
- Use multiple clusters per pod if you need to achieve a certain switch density.
-
-
- Primary storage mountpoints or LUNs should not exceed 6 TB in size. It is better to have multiple smaller primary storage elements per cluster than one large one.
-
-
- When exporting shares on primary storage, avoid data loss by restricting the range of IP addresses that can access the storage. See "Linux NFS on Local Disks and DAS" or "Linux NFS on iSCSI".
-
-
- NIC bonding is straightforward to implement and provides increased reliability.
-
-
- 10G networks are generally recommended for storage access when using larger servers that can support relatively more VMs.
-
-
- Host capacity should generally be modeled in terms of RAM for the guests. Storage and CPU may be overprovisioned. RAM may not. RAM is usually the limiting factor in capacity designs.
-
-
- (XenServer) Configure the XenServer dom0 settings to allocate more memory to dom0. This can enable XenServer to handle larger numbers of virtual machines. We recommend 2940 MB of RAM for XenServer dom0. For instructions on how to do this, see http://support.citrix.com/article/CTX126531. The article refers to XenServer 5.6, but the same information applies to XenServer 6.0.
-
-
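The advice above about restricting which IP addresses can reach exported primary storage can be sketched as a single /etc/exports entry; the export path and subnet here are examples only:

```shell
# Hypothetical NFS export line limiting access to one management subnet.
# Append a line like this to /etc/exports (as root) and run exportfs -a.
echo '/export/primary 192.168.10.0/24(rw,async,no_root_squash)'
```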
-
-
- Maintenance Best Practices
-
-
- Monitor host disk space. Many host failures occur because the host's root disk fills up from logs that were not rotated adequately.
-
-
- Monitor the total number of VM instances in each cluster, and disable allocation to the cluster if the total is approaching the maximum that the hypervisor can handle. Be sure to leave a safety margin to allow for the possibility of one or more hosts failing, which would increase the VM load on the other hosts as the VMs are redeployed. Consult the documentation for your chosen hypervisor to find the maximum permitted number of VMs per host, then use &PRODUCT; global configuration settings to set this as the default limit. Monitor the VM activity in each cluster and keep the total number of VMs below a safe level that allows for the occasional host failure. For example, if there are N hosts in the cluster, and you want to allow for one host in the cluster to be down at any given time, the total number of VM instances you can permit in the cluster is at most (N-1) * (per-host-limit). Once a cluster reaches this number of VMs, use the &PRODUCT; UI to disable allocation to the cluster.
-
-
- The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
- Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor’s support channel, and apply patches as soon as possible after they are released. &PRODUCT; will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.
-
-
diff --git a/docs/en-US/build-deb.xml b/docs/en-US/build-deb.xml
deleted file mode 100644
index dca31d23a28..00000000000
--- a/docs/en-US/build-deb.xml
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Building DEB packages
-
- In addition to the bootstrap dependencies, you'll also need to install
- several other dependencies. Note that we recommend using Maven 3, which
- is not currently available in 12.04.1 LTS. So, you'll also need to add a
- PPA repository that includes Maven 3. After running the command
- add-apt-repository, you will be prompted to continue and
- a GPG key will be added.
-
-
-$ sudo apt-get update
-$ sudo apt-get install python-software-properties
-$ sudo add-apt-repository ppa:natecarlson/maven3
-$ sudo apt-get update
-$ sudo apt-get install ant debhelper openjdk-6-jdk tomcat6 libws-commons-util-java genisoimage python-mysqldb libcommons-codec-java libcommons-httpclient-java liblog4j1.2-java maven3
-
-
- While we have defined, and you have presumably already installed the
- bootstrap prerequisites, there are a number of build time prerequisites
- that need to be resolved. &PRODUCT; uses maven for dependency resolution.
- You can resolve the build-time dependencies for CloudStack by running:
-
-$ mvn3 -P deps
-
- Now that we have resolved the dependencies we can move on to building &PRODUCT;
- and packaging them into DEBs by issuing the following command.
-
-
-$ dpkg-buildpackage -uc -us
-
-
-
- This command will build 16 Debian packages. You should have all of the following:
-
-
-cloud-agent_4.0.0-incubating_amd64.deb
-cloud-agent-deps_4.0.0-incubating_amd64.deb
-cloud-agent-libs_4.0.0-incubating_amd64.deb
-cloud-awsapi_4.0.0-incubating_amd64.deb
-cloud-cli_4.0.0-incubating_amd64.deb
-cloud-client_4.0.0-incubating_amd64.deb
-cloud-client-ui_4.0.0-incubating_amd64.deb
-cloud-core_4.0.0-incubating_amd64.deb
-cloud-deps_4.0.0-incubating_amd64.deb
-cloud-python_4.0.0-incubating_amd64.deb
-cloud-scripts_4.0.0-incubating_amd64.deb
-cloud-server_4.0.0-incubating_amd64.deb
-cloud-setup_4.0.0-incubating_amd64.deb
-cloud-system-iso_4.0.0-incubating_amd64.deb
-cloud-usage_4.0.0-incubating_amd64.deb
-cloud-utils_4.0.0-incubating_amd64.deb
-
-
-
- Setting up an APT repo
-
- After you've created the packages, you'll want to copy them to a system where you can serve the packages over HTTP. You'll create a directory for the packages and then use dpkg-scanpackages to create Packages.gz, which holds information about the archive structure. Finally, you'll add the repository to your system(s) so you can install the packages using APT.
-
- The first step is to make sure that you have the dpkg-dev package installed. This should have been installed when you pulled in the debhelper application previously, but if you're generating Packages.gz on a different system, be sure that it's installed there as well.
-
-$ sudo apt-get install dpkg-dev
-
-The next step is to copy the DEBs to the directory where they can be served over HTTP. We'll use /var/www/cloudstack/repo in the examples, but change the directory to whatever works for you.
-
-
-sudo mkdir -p /var/www/cloudstack/repo/binary
-sudo cp *.deb /var/www/cloudstack/repo/binary
-cd /var/www/cloudstack/repo/binary
-sudo dpkg-scanpackages . /dev/null | tee Packages | gzip -9 > Packages.gz
-
-
-Note: Override Files
- You can safely ignore the warning about a missing override file.
-
-
-Now you should have all of the DEB packages and Packages.gz in the binary directory and available over HTTP. (You may want to use wget or curl to test this before moving on to the next step.)
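You can also sanity-check the generated index locally before serving it; this sketch, run from the binary directory, verifies the compressed index and counts the package stanzas it describes:

```shell
# Verify Packages.gz is intact, then count "Package:" entries in it.
gzip -t Packages.gz && zcat Packages.gz | grep -c '^Package:'
```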
-
-
-
- Configuring your machines to use the APT repository
-
- Now that we have created the repository, you need to configure your machine
- to make use of the APT repository. You can do this by adding a repository file
- under /etc/apt/sources.list.d. Use your preferred editor to
- create /etc/apt/sources.list.d/cloudstack.list with this
- line:
-
- deb http://server.url/cloudstack/repo binary ./
-
- Now that you have the repository info in place, you'll want to run another
- update so that APT knows where to find the &PRODUCT; packages.
-
-$ sudo apt-get update
-
-You can now move on to the instructions under Install on Ubuntu.
-
-
-
diff --git a/docs/en-US/build-nonoss.xml b/docs/en-US/build-nonoss.xml
deleted file mode 100644
index dbcab99e9bb..00000000000
--- a/docs/en-US/build-nonoss.xml
+++ /dev/null
@@ -1,49 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Building Non-OSS
- If you need support for the VMware, NetApp, F5, NetScaler, SRX, or any other non-Open Source Software (nonoss) plugins, you'll need to download a few components on your own and follow a slightly different procedure to build from source.
- Why Non-OSS?
- Some of the plugins supported by &PRODUCT; cannot be distributed with &PRODUCT; for licensing reasons. In some cases, some of the required libraries/JARs are under a proprietary license. In other cases, the required libraries may be under a license that's not compatible with Apache's licensing guidelines for third-party products.
-
-
-
- To build the Non-OSS plugins, you'll need to have the requisite JARs installed under the deps directory.
- Because these modules require dependencies that can't be distributed with &PRODUCT; you'll need to download them yourself. Links to the most recent dependencies are listed on the How to build CloudStack page on the wiki.
-
- You may also need to download vhd-util when using XenServer hypervisors, which was removed due to licensing issues. You'll copy vhd-util to the scripts/vm/hypervisor/xenserver/ directory.
-
-
- Once you have all the dependencies copied over, you'll be able to build &PRODUCT; with the nonoss option:
-
- $ mvn clean
- $ mvn install -Dnonoss
-
-
-
- Once you've built &PRODUCT; with the nonoss profile, you can package it using the or instructions.
-
-
-
diff --git a/docs/en-US/build-rpm.xml b/docs/en-US/build-rpm.xml
deleted file mode 100644
index c15074293a6..00000000000
--- a/docs/en-US/build-rpm.xml
+++ /dev/null
@@ -1,96 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Building RPMs from Source
- As mentioned previously in , you will need to install several prerequisites before you can build packages for &PRODUCT;. Here we'll assume you're working with a 64-bit build of CentOS or Red Hat Enterprise Linux.
- # yum groupinstall "Development Tools"
- # yum install java-1.6.0-openjdk-devel.x86_64 genisoimage mysql mysql-server ws-commons-util MySQL-python tomcat6 createrepo
- Next, you'll need to install build-time dependencies for CloudStack with
- Maven. We're using Maven 3, so you'll want to
- grab a Maven 3 tarball
- and uncompress it in your home directory (or whatever location you prefer):
- $ tar zxvf apache-maven-3.0.4-bin.tar.gz
- $ export PATH=/usr/local/apache-maven-3.0.4/bin:$PATH
- Maven also needs to know where Java is, and expects the JAVA_HOME environment
- variable to be set:
- $ export JAVA_HOME=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/
- Verify that Maven is installed correctly:
- $ mvn --version
- You probably want to ensure that your environment variables will survive a logout/reboot.
- Be sure to update ~/.bashrc with the PATH and JAVA_HOME variables.
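Persisting the two exports above can be sketched as follows; the Maven and JDK paths are the examples used earlier and should be adjusted to your actual install locations:

```shell
# Append the build environment to ~/.bashrc so it survives logout/reboot.
# Paths match the examples above; adjust them to your installation.
cat >> ~/.bashrc <<'EOF'
export JAVA_HOME=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/
export PATH=/usr/local/apache-maven-3.0.4/bin:$PATH
EOF
```

Start a new shell (or source ~/.bashrc) for the variables to take effect in the current session.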
-
- Building RPMs for &PRODUCT; is fairly simple. Assuming you already have the source downloaded and have uncompressed the tarball into a local directory, you're going to be able to generate packages in just a few minutes.
- Packaging has Changed
- If you've created packages for &PRODUCT; previously, you should be aware that the process has changed considerably since the project has moved to using Apache Maven. Please be sure to follow the steps in this section closely.
-
-
- Generating RPMS
- Now that we have the prerequisites and source, you will cd to the packaging/centos63/ directory.
- $ cd packaging/centos63
- Generating RPMs is done using the package.sh script:
- $ ./package.sh
-
- That will run for a bit and then place the finished packages in dist/rpmbuild/RPMS/x86_64/.
- You should see seven RPMs in that directory:
-
- cloudstack-agent-4.1.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-awsapi-4.1.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-cli-4.1.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-common-4.1.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-docs-4.1.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-management-4.1.0-SNAPSHOT.el6.x86_64.rpm
- cloudstack-usage-4.1.0-SNAPSHOT.el6.x86_64.rpm
-
-
- Creating a yum repo
-
- While RPM is a useful packaging format, packages are most easily consumed from yum repositories over a network. The next step is to create a yum repository with the finished packages:
- $ mkdir -p ~/tmp/repo
- $ cp dist/rpmbuild/RPMS/x86_64/*rpm ~/tmp/repo/
- $ createrepo ~/tmp/repo
-
-
- The files and directories within ~/tmp/repo can now be uploaded to a web server and serve as a yum repository.
-
-
-
- Configuring your systems to use your new yum repository
-
- Now that your yum repository is populated with RPMs and metadata,
- we need to configure the machines that will install &PRODUCT;.
- Create a file named /etc/yum.repos.d/cloudstack.repo with this information:
-
- [apache-cloudstack]
- name=Apache CloudStack
- baseurl=http://webserver.tld/path/to/repo
- enabled=1
- gpgcheck=0
-
-
- Completing this step will allow you to easily install &PRODUCT; on a number of machines across the network.
-
-
-
-
diff --git a/docs/en-US/building-devcloud.xml b/docs/en-US/building-devcloud.xml
deleted file mode 100644
index f3c4d19a5d9..00000000000
--- a/docs/en-US/building-devcloud.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Building DevCloud
- The DevCloud appliance can be downloaded from the wiki at . It can also be built from scratch. Code is being developed to provide this alternative build. It is based on veewee, Vagrant and Puppet.
- The goal is to automate the DevCloud build and make this automation capability available to all within the source release of &PRODUCT;.
- This is under heavy development. The code is located in the source tree under tools/devcloud.
- A preliminary wiki page describes the build at https://cwiki.apache.org/confluence/display/CLOUDSTACK/Building+DevCloud
-
-
diff --git a/docs/en-US/building-documentation.xml b/docs/en-US/building-documentation.xml
deleted file mode 100644
index 8ee63b06ec0..00000000000
--- a/docs/en-US/building-documentation.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Building &PRODUCT; Documentation
- To build a specific guide, go to the source tree of the documentation in /docs and identify the guide you want to build.
- Currently there are four guides plus the release notes, all defined in publican configuration files:
-
- publican-adminguide.cfg
- publican-devguide.cfg
- publican-installation.cfg
- publican-plugin-niciranvp.cfg
- publican-release-notes.cfg
-
- To build the Developer guide for example, do the following:
- publican build --config=publican-devguide.cfg --formats=pdf --langs=en-US
- A pdf file will be created in tmp/en-US/pdf. You may choose to build the guide in a different format, such as html; in that case, just replace the format value.
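The per-guide command generalizes to a loop over all five configuration files listed above; a sketch, assuming publican is installed and you are in the /docs directory:

```shell
# Build every guide in both pdf and html for en-US.
for cfg in publican-adminguide.cfg publican-devguide.cfg \
           publican-installation.cfg publican-plugin-niciranvp.cfg \
           publican-release-notes.cfg; do
    publican build --config="$cfg" --formats=pdf,html --langs=en-US
done
```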
-
-
diff --git a/docs/en-US/building-marvin.xml b/docs/en-US/building-marvin.xml
deleted file mode 100644
index e33c4cb2248..00000000000
--- a/docs/en-US/building-marvin.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Building and Installing Marvin
- Marvin is built with Maven and is dependent on APIdoc. To build it do the following in the root tree of &PRODUCT;:
- mvn -P developer -pl :cloud-apidoc
- mvn -P developer -pl :cloud-marvin
- If successful, the build will have created the cloudstackAPI Python package under tools/marvin/marvin/cloudstackAPI as well as a gzipped Marvin package under tools/marvin/dist. To install the Python Marvin module, do the following in tools/marvin:
- sudo python ./setup.py install
- The dependencies will be downloaded, the Python module installed, and you should then be able to use Marvin in Python. Check that you can import the module before starting to use it.
- $ python
-Python 2.7.3 (default, Nov 17 2012, 19:54:34)
-[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
-Type "help", "copyright", "credits" or "license" for more information.
->>> import marvin
->>> from marvin.cloudstackAPI import *
->>>
-
- You could also install it with pip or easy_install, using the local distribution package in tools/marvin/dist:
- pip install tools/marvin/dist/Marvin-0.1.0.tar.gz
- Or:
- easy_install tools/marvin/dist/Marvin-0.1.0.tar.gz
-
-
diff --git a/docs/en-US/building-prerequisites.xml b/docs/en-US/building-prerequisites.xml
deleted file mode 100644
index d97ca40f2a3..00000000000
--- a/docs/en-US/building-prerequisites.xml
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
- Build Procedure Prerequisites
- In this section we will assume that you are using the Ubuntu Linux distribution with the Advanced Packaging Tool (APT). If you are using a different distribution or OS and a different packaging tool, adapt the following instructions to your environment. To build &PRODUCT; you will need:
-
-
- git, http://git-scm.com
- sudo apt-get install git-core
-
-
- maven, http://maven.apache.org
- sudo apt-get install maven
- Make sure that you have installed Maven 3:
- $ mvn --version
-Apache Maven 3.0.4
-Maven home: /usr/share/maven
-Java version: 1.6.0_24, vendor: Sun Microsystems Inc.
-Java home: /usr/lib/jvm/java-6-openjdk-amd64/jre
-Default locale: en_US, platform encoding: UTF-8
-OS name: "linux", version: "3.2.0-33-generic", arch: "amd64", family: "unix"
-
-
- java
- set the JAVA_HOME environment variable
- $ export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
-
-
-
- In addition, to deploy and run &PRODUCT; in a development environment you will need:
-
-
- Mysql
- sudo apt-get install mysql-server-5.5
- Start the mysqld service and create a cloud user with cloud as the password.
-
-
- Tomcat 6
- sudo apt-get install tomcat6
-
-
-
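The MySQL step above says to create a cloud user but gives no command. A minimal sketch, assuming a local server and the cloud/cloud credentials mentioned above; the blanket grant is only appropriate for a development setup:

```shell
# Create the "cloud" user with password "cloud" (development setup only;
# restrict the grant for anything production-like).
mysql -u root -p <<'EOF'
CREATE USER 'cloud'@'localhost' IDENTIFIED BY 'cloud';
GRANT ALL PRIVILEGES ON *.* TO 'cloud'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EOF
```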
-
diff --git a/docs/en-US/building-translation.xml b/docs/en-US/building-translation.xml
deleted file mode 100644
index dd66365cd9d..00000000000
--- a/docs/en-US/building-translation.xml
+++ /dev/null
@@ -1,75 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Translating &PRODUCT; Documentation
- Now that you know how to build the documentation with Publican, let's move on to building it in different languages. Publican helps us
- build the documentation in various languages by using Portable Object Template (POT) files and Portable Object (PO) files for each language.
-
- The POT files are generated by parsing all the DocBook files in the language of origin, en-US for us, and creating a long list of strings
- for each file that needs to be translated. The translation can be done by hand directly in the PO files of each target language or via the
- transifex service.
-
-
- Transifex is a free service to help translate documents and organize distributed teams
- of translators. Anyone interested in helping with the translation should get an account on Transifex
-
-
- Three &PRODUCT; projects exist on Transifex. It is recommended to tour those projects to become familiar with Transifex:
-
- https://www.transifex.com/projects/p/ACS_DOCS/
- https://www.transifex.com/projects/p/ACS_Runbook/
- https://www.transifex.com/projects/p/CloudStackUI/
-
-
-
-
- The pot directory should already exist in the source tree. If you want to build an up-to-date translation, you might have to update it to include any pot file that was not previously generated.
- To register new resources on transifex, you will need to be an admin of the transifex &PRODUCT; site. Send an email to the developer list if you want access.
-
- First we need to generate the .pot files for all the DocBook xml files needed for a particular guide. This is well explained at the publican website in a section on
- how to prepare a document for translation.
- The basic command to execute to build the pot files for the developer guide is:
- publican update_pot --config=publican-devguide.cfg
- This will create a pot directory with pot files in it, one for each corresponding xml file needed to build the guide. Once generated, all pot files need to be configured for translation using Transifex. This is best done with the Transifex client, which you can install with the following command (for RHEL and its derivatives):
- yum install transifex-client
- The Transifex client is also available via PyPI, and you can install it like this:
- easy_install transifex-client
- Once you have installed the transifex client you can run the settx.sh script in the docs directory. This will create the .tx/config file used by transifex to push and pull all translation strings.
- All the resource files need to be uploaded to Transifex. This is done with the Transifex client like so:
- tx push -s
- Once the translators have completed translation of the documentation, the translated strings can be pulled from transifex like so:
- tx pull -a
- If you wish to push specific resource files or pull a specific language's translation strings, you can do so with the Transifex client. Complete documentation of
- the client is available on the client website.
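The client's per-resource and per-language flags can be sketched as follows; the resource slug below is hypothetical, and you can list the real ones with "tx status":

```shell
# Push a single source resource, then pull a single language's strings.
tx push -s -r ACS_DOCS.Developers_Guide
tx pull -l fr_FR
```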
- When you pull new translation strings, a directory will be created corresponding to the language of the translation. This directory will contain PO files that will be used by Publican to create the documentation in that specific language. For example, assuming that you pull the French translation whose language code is fr-FR, you will build the documentation with publican:
- publican build --config=publican-devguide.cfg --formats=html --langs=fr-FR
-
-
- Some languages, such as Chinese or Japanese, will not render well in pdf format; html should be used instead.
-
-
-
-
-
diff --git a/docs/en-US/building-with-maven-deploy.xml b/docs/en-US/building-with-maven-deploy.xml
deleted file mode 100644
index e4b9801aa30..00000000000
--- a/docs/en-US/building-with-maven-deploy.xml
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Deployment and Testing Steps
- Deploying the &PRODUCT; code that you compiled is a two-step process:
-
- If you have not yet configured the database, or have modified its properties, do:
- mvn -P developer -pl developer -Ddeploydb
-
- Then you need to run the &PRODUCT; management server. To attach a debugger to it, do:
- export MAVEN_OPTS="-Xmx1024m -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"
- mvn -pl :cloud-client-ui jetty:run
-
-
- When dealing with the database, remember that you may wipe it entirely and lose any data center configuration that you may have set previously.
-
-
diff --git a/docs/en-US/building-with-maven-steps.xml b/docs/en-US/building-with-maven-steps.xml
deleted file mode 100644
index 1c15bfa96e1..00000000000
--- a/docs/en-US/building-with-maven-steps.xml
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Building Steps
- &PRODUCT; uses git for source version control. First, make sure you have the source code by cloning it:
- git clone https://git-wip-us.apache.org/repos/asf/cloudstack.git
- Several Project Object Models (POM) are defined to deal with the various build targets of &PRODUCT;. Certain features require some packages that are not compatible with the Apache license and therefore need to be downloaded on your own. Check the wiki for additional information https://cwiki.apache.org/CLOUDSTACK/building-with-maven.html. In order to build all the open source targets of &PRODUCT; do:
- mvn clean install
- The resulting jar files will be in the target subdirectory of each compiled module.
-
-
diff --git a/docs/en-US/building-with-maven.xml b/docs/en-US/building-with-maven.xml
deleted file mode 100644
index 5363b1d754a..00000000000
--- a/docs/en-US/building-with-maven.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Using Maven to Build &PRODUCT;
-
-
-
-
-
-
diff --git a/docs/en-US/castor-with-cs.xml b/docs/en-US/castor-with-cs.xml
deleted file mode 100644
index 7bf676b9c62..00000000000
--- a/docs/en-US/castor-with-cs.xml
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Using the CAStor Back-end Storage with &PRODUCT;
- This section describes how to use a CAStor cluster as the back-end storage system for a
- &PRODUCT; S3 front-end. The CAStor back-end storage for &PRODUCT; extends the existing storage
- classes and allows the storage configuration attribute to point to a CAStor cluster.
- This feature makes use of the &PRODUCT; server's local disk to spool files before writing
- them to CAStor when handling the PUT operations. However, a file must be successfully written
- into the CAStor cluster prior to the return of a success code to the S3 client to ensure that
- the transaction outcome is correctly reported.
-
- The S3 multipart file upload is not supported in this release. You are prompted with
- a proper error message if a multipart upload is attempted.
-
- To configure CAStor:
-
-
- Install &PRODUCT; by following the instructions given in the INSTALL.txt file.
-
- You can use the S3 storage system in &PRODUCT; without setting up and installing the
- compute components.
-
-
-
- Enable the S3 API by setting "enable.s3.api = true" in the Global parameter section in
- the UI and register a user.
- For more information, see S3 API in
- &PRODUCT;.
-
-
- Edit the cloud-bridge.properties file and modify the "storage.root" parameter.
-
-
- Set "storage.root" to the key word "castor".
-
-
- Specify a CAStor tenant domain to which content is written. If the domain is not
- specified, the CAStor default domain, specified by the "cluster" parameter in CAStor's
- node.cfg file, will be used.
-
-
- Specify a list of node IP addresses, or set "zeroconf" and the cluster
- name. When using a static IP list with a large cluster, it is not necessary to include
- every node; only a few are required to initialize the client software.
- For example:
- storage.root=castor domain=cloudstack 10.1.1.51 10.1.1.52 10.1.1.53
- In this example, the configuration file directs &PRODUCT; to write the S3 files to
- CAStor instead of to a file system, where the CAStor domain name is cloudstack, and the
- CAStor node IP addresses are those listed.
-
-
- (Optional) The last value is a port number on which to communicate with the CAStor
- cluster. If not specified, the default is 80.
- #Static IP list with optional port
-storage.root=castor domain=cloudstack 10.1.1.51 10.1.1.52 10.1.1.53 80
-#Zeroconf locator for cluster named "castor.example.com"
-storage.root=castor domain=cloudstack zeroconf=castor.example.com
-
-
-
-
- Create the tenant domain within the CAStor storage cluster. If you omit this step before
- attempting to store content, you will get HTTP 412 errors in the awsapi.log.
-
-
-
diff --git a/docs/en-US/change-console-proxy-ssl-certificate-domain.xml b/docs/en-US/change-console-proxy-ssl-certificate-domain.xml
deleted file mode 100644
index 3fd05018e99..00000000000
--- a/docs/en-US/change-console-proxy-ssl-certificate-domain.xml
+++ /dev/null
@@ -1,49 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Changing the Console Proxy SSL Certificate and Domain
- If the administrator prefers, it is possible for the URL of the customer's console session to show a domain other than realhostip.com. The administrator can customize the displayed domain by selecting a different domain and uploading a new SSL certificate and private key. The domain must run a DNS service that is capable of resolving queries for addresses of the form aaa-bbb-ccc-ddd.your.domain to an IPv4 IP address in the form aaa.bbb.ccc.ddd, for example, 202.8.44.1. To change the console proxy domain, SSL certificate, and private key:
-
- Set up dynamic name resolution or populate all possible DNS names in your public IP range into your existing DNS server with the format aaa-bbb-ccc-ddd.company.com -> aaa.bbb.ccc.ddd.
- Generate the private key and certificate signing request (CSR). When you are using openssl to generate private/public key pairs and CSRs, for the private key that you are going to paste into the &PRODUCT; UI, be sure to convert it into PKCS#8 format.
-
- Generate a new 2048-bit private key: openssl genrsa -des3 -out yourprivate.key 2048
- Generate a new certificate CSR: openssl req -new -key yourprivate.key -out yourcertificate.csr
- Head to the website of your favorite trusted Certificate Authority, purchase an SSL certificate, and submit the CSR. You should receive a valid certificate in return.
- Convert your private key format into PKCS#8 encrypted format: openssl pkcs8 -topk8 -in yourprivate.key -out yourprivate.pkcs8.encrypted.key
- Convert your PKCS#8 encrypted private key into the PKCS#8 format that is compliant with &PRODUCT;: openssl pkcs8 -in yourprivate.pkcs8.encrypted.key -out yourprivate.pkcs8.key
-
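As an optional sanity check before submitting the CSR, you can confirm that the key and the CSR share the same RSA modulus. A sketch using a throwaway unencrypted key so it runs non-interactively; your real key is the -des3 one generated above, which will prompt for its passphrase:

```shell
# Generate a throwaway key and CSR, then compare their moduli.
openssl genrsa -out /tmp/demo.key 2048
openssl req -new -key /tmp/demo.key -subj "/CN=company.com" -out /tmp/demo.csr
key_mod=$(openssl rsa -noout -modulus -in /tmp/demo.key | openssl md5)
csr_mod=$(openssl req -noout -modulus -in /tmp/demo.csr | openssl md5)
[ "$key_mod" = "$csr_mod" ] && echo "key and CSR match"
```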
-
- In the Update SSL Certificate screen of the &PRODUCT; UI, paste the following
-
- The Certificate you generated in the previous steps.
- The Private key you generated in the previous steps.
- The desired new domain name; for example, company.com
-
-
- This stops all currently running console proxy VMs, then restarts them with the new certificate and key. Users might notice a brief interruption in console availability.
-
- The Management Server will generate URLs of the form "aaa-bbb-ccc-ddd.company.com" after this change is made. New console requests will be served with the new DNS domain name, certificate, and key.
-
diff --git a/docs/en-US/change-database-config.xml b/docs/en-US/change-database-config.xml
deleted file mode 100644
index 567b9e41d04..00000000000
--- a/docs/en-US/change-database-config.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Changing the Database Configuration
- The &PRODUCT; Management Server stores database configuration information (e.g., hostname, port, credentials) in the file /etc/cloudstack/management/db.properties. To effect a change, edit this file on each Management Server, then restart the Management Server.
-
diff --git a/docs/en-US/change-database-password.xml b/docs/en-US/change-database-password.xml
deleted file mode 100644
index 863984e269c..00000000000
--- a/docs/en-US/change-database-password.xml
+++ /dev/null
@@ -1,76 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Changing the Database Password
- You may need to change the password for the MySQL account used by CloudStack. If so, you'll need to change the password in MySQL, and then add the encrypted password to /etc/cloudstack/management/db.properties.
-
-
- Before changing the password, you'll need to stop CloudStack's management server and the usage engine if you've deployed that component.
-
-# service cloudstack-management stop
-# service cloudstack-usage stop
-
-
-
- Next, you'll update the password for the CloudStack user on the MySQL server.
-
-# mysql -u root -p
-
- At the MySQL shell, you'll change the password and flush privileges:
-
-update mysql.user set password=PASSWORD("newpassword123") where User='cloud';
-flush privileges;
-quit;
-
-
-
- The next step is to encrypt the password and copy the encrypted password to CloudStack's database configuration (/etc/cloudstack/management/db.properties).
-
- # java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar \
-org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh \
-input="newpassword123" password="`cat /etc/cloudstack/management/key`" \
-verbose=false
-
-
-File encryption type
- Note that this is for the file encryption type. If you're using the web encryption type then you'll use password="management_server_secret_key"
-
-
-
- Now, you'll update /etc/cloudstack/management/db.properties with the new ciphertext. Open /etc/cloudstack/management/db.properties in a text editor, and update these parameters:
-
-db.cloud.password=ENC(encrypted_password_from_above)
-db.usage.password=ENC(encrypted_password_from_above)
-
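The edit can also be scripted. A sketch against a scratch copy; the ciphertext value is a placeholder for the jasypt output above, and the sed commands should be pointed at the real /etc/cloudstack/management/db.properties only once you have verified the result:

```shell
CIPHERTEXT='aBcD1234=='                  # placeholder, not a real ciphertext
props=/tmp/db.properties.example         # scratch copy for illustration
printf 'db.cloud.password=ENC(old)\ndb.usage.password=ENC(old)\n' > "$props"
# Replace both password lines with the new ENC(...) value.
sed -i "s#^db.cloud.password=.*#db.cloud.password=ENC(${CIPHERTEXT})#" "$props"
sed -i "s#^db.usage.password=.*#db.usage.password=ENC(${CIPHERTEXT})#" "$props"
cat "$props"
```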
-
-
- After copying the new password over, you can now start CloudStack (and the usage engine, if necessary).
-
- # service cloudstack-management start
- # service cloudstack-usage start
-
-
-
-
diff --git a/docs/en-US/change-host-password.xml b/docs/en-US/change-host-password.xml
deleted file mode 100644
index 7221fe62417..00000000000
--- a/docs/en-US/change-host-password.xml
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Changing Host Password
- The password for a XenServer Node, KVM Node, or vSphere Node may be changed in the database. Note that all Nodes in a Cluster must have the same password.
- To change a Node's password:
-
- Identify all hosts in the cluster.
- Change the password on all hosts in the cluster. Now the password for the host and the password known to &PRODUCT; will not match. Operations on the cluster will fail until the two passwords match.
-
- Get the list of host IDs for the hosts in the cluster where you are changing the password. You will need to access the database to determine these host IDs. For each hostname "h" (or vSphere cluster) that you are changing the password for, execute:
- mysql> select id from cloud.host where name like '%h%';
- This should return a single ID. Record the set of such IDs for these hosts.
- Update the passwords for the host in the database. In this example, we change the passwords for hosts with IDs 5, 10, and 12 to "password".
- mysql> update cloud.host set password='password' where id=5 or id=10 or id=12;
-
-
diff --git a/docs/en-US/change-network-offering-on-guest-network.xml b/docs/en-US/change-network-offering-on-guest-network.xml
deleted file mode 100644
index de3a80ecddc..00000000000
--- a/docs/en-US/change-network-offering-on-guest-network.xml
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Changing the Network Offering on a Guest Network
- A user or administrator can change the network offering that is associated with an existing
- guest network.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- If you are changing from a network offering that uses the &PRODUCT; virtual router to
- one that uses external devices as network service providers, you must first stop all the VMs
- on the network.
-
-
- In the left navigation, choose Network.
-
-
- Click the name of the network you want to modify.
-
-
- In the Details tab, click Edit.
-
-
-
-
- EditButton.png: button to edit a network
-
-
-
-
- In Network Offering, choose the new network offering, then click Apply.
- A prompt is displayed asking whether you want to keep the existing CIDR. This is to let
- you know that if you change the network offering, the CIDR will be affected.
- If you upgrade between the virtual router as provider and an external network device as
- provider, acknowledge the change of CIDR by choosing Yes to continue.
-
-
- Wait for the update to complete. Don’t try to restart VMs until the network change is
- complete.
-
-
- If you stopped any VMs, restart them.
-
-
-
diff --git a/docs/en-US/change-to-behavior-of-list-commands.xml b/docs/en-US/change-to-behavior-of-list-commands.xml
deleted file mode 100644
index 69b9e4d2beb..00000000000
--- a/docs/en-US/change-to-behavior-of-list-commands.xml
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Change to Behavior of List Commands
- There was a major change in how our List* API commands work in CloudStack 3.0 compared to
- 2.2.x. The rules below apply only for managed resources – those that belong to an account,
- domain, or project. They are irrelevant for the List* commands displaying unmanaged (system)
- resources, such as hosts, clusters, and external network resources.
- When no parameters are passed in to the call, the caller sees only resources owned by the
- caller (even when the caller is the administrator). Previously, the administrator saw everyone
- else's resources by default.
- When accountName and domainId are passed in:
-
-
- The caller sees the resources dedicated to the account specified.
-
-
- If the call is executed by a regular user, the user is authorized to specify only the
- user's own account and domainId.
-
-
- If the caller is a domain administrator, CloudStack performs an authorization check to
- see whether the caller is permitted to view resources for the given account and
- domainId.
-
-
- When projectId is passed in, only resources belonging to that project are listed.
- When domainId is passed in, the call returns only resources belonging to the domain
- specified. To see the resources of subdomains, use the parameter isRecursive=true. Again, the
- regular user can see only resources owned by that user, the root administrator can list
- anything, and a domain administrator is authorized to see only resources of the administrator's
- own domain and subdomains.
- To see all resources the caller is authorized to see, except for Project resources, use the
- parameter listAll=true.
- To see all Project resources the caller is authorized to see, use the parameter
- projectId=-1.
- There is one API command that doesn't fall under the rules above completely: the
- listTemplates command. This command has its own flags defining the list rules:
-
-
-
-
-
-
- listTemplates Flag
- Description
-
-
-
-
- featured
- Returns templates that have been marked as featured and
- public.
-
-
- self
- Returns templates that have been registered or created by the calling
- user.
-
-
- selfexecutable
- Same as self, but only returns templates that are ready to be deployed
- with.
-
-
- sharedexecutable
- Ready templates that have been granted to the calling user by another
- user.
-
-
- executable
- Templates that are owned by the calling user, or public templates, that can
- be used to deploy a new VM.
-
-
- community
- Returns templates that have been marked as public but not
- featured.
-
-
- all
- Returns all templates (only usable by admins).
-
-
-
-
- The &PRODUCT; UI on a general view will display all resources that the logged-in user is
- authorized to see, except for project resources. To see the project resources, select the
- project view.
-
diff --git a/docs/en-US/changed-API-commands-4.2.xml b/docs/en-US/changed-API-commands-4.2.xml
deleted file mode 100644
index 8fda9cc13bd..00000000000
--- a/docs/en-US/changed-API-commands-4.2.xml
+++ /dev/null
@@ -1,1129 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Changed API Commands in 4.2
-
-
-
-
-
-
- API Commands
- Description
-
-
-
-
- listNetworkACLs
- The following new request parameters are added: aclid (optional), action
- (optional), protocol (optional)
- The following new response parameters are added: aclid, action,
- number
-
-
- copyTemplate
-
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
- listRouters
-
- The following new response parameters are added: ip6dns1, ip6dns2,
- role
-
-
- updateConfiguration
- The following new request parameters are added: accountid (optional),
- clusterid (optional), storageid (optional), zoneid (optional)
- The following new response parameters are added: id, scope
-
-
- listVolumes
- The following request parameter is removed: details
- The following new response parameter is added: displayvolume
-
-
- suspendProject
-
- The following new response parameters are added: cpuavailable, cpulimit, cputotal,
- ipavailable, iplimit, iptotal, memoryavailable, memorylimit, memorytotal,
- networkavailable, networklimit, networktotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal, snapshotavailable, snapshotlimit,
- snapshottotal, templateavailable, templatelimit, templatetotal, vmavailable, vmlimit,
- vmrunning, vmstopped, vmtotal, volumeavailable, volumelimit, volumetotal,
- vpcavailable, vpclimit, vpctotal
-
-
- listRemoteAccessVpns
-
- The following new response parameter is added: id
-
-
- registerTemplate
- The following new request parameters are added: imagestoreuuid (optional),
- isdynamicallyscalable (optional), isrouting (optional)
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
- addTrafficMonitor
-
- The following response parameters are removed: privateinterface, privatezone,
- publicinterface, publiczone, usageinterface, username
-
-
- createTemplate
- The following response parameters are removed: clusterid, clustername,
- disksizeallocated, disksizetotal, disksizeused, ipaddress, path, podid, podname,
- state, tags, type
- The following new response parameters are added: account, accountid, bootable,
- checksum, crossZones, details, displaytext, domain, domainid, format, hostid,
- hostname, hypervisor, isdynamicallyscalable, isextractable, isfeatured, ispublic,
- isready, ostypeid, ostypename, passwordenabled, project, projectid, removed, size,
- sourcetemplateid, sshkeyenabled, status, templatetag, templatetype,
- tags
-
-
- listLoadBalancerRuleInstances
-
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable,
- affinitygroup
-
-
- migrateVolume
- The following new request parameter is added: livemigrate (optional)
- The following new response parameter is added: displayvolume
-
-
- createAccount
- The following new request parameters are added: accountid (optional), userid
- (optional)
- The following new response parameters are added: accountdetails, cpuavailable,
- cpulimit, cputotal, defaultzoneid, ipavailable, iplimit, iptotal, iscleanuprequired,
- isdefault, memoryavailable, memorylimit, memorytotal, name, networkavailable,
- networkdomain, networklimit, networktotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, projectavailable, projectlimit,
- projecttotal, receivedbytes, secondarystorageavailable, secondarystoragelimit,
- secondarystoragetotal, sentbytes, snapshotavailable, snapshotlimit, snapshottotal,
- templateavailable, templatelimit, templatetotal, vmavailable, vmlimit, vmrunning,
- vmstopped, vmtotal, volumeavailable, volumelimit, volumetotal, vpcavailable, vpclimit,
- vpctotal, user
- The following parameters are removed: account, accountid, apikey, created, email,
- firstname, lastname, secretkey, timezone, username
-
-
- updatePhysicalNetwork
- The following new request parameter is added: removevlan (optional)
-
-
-
- listTrafficMonitors
-
- The following response parameters are removed: privateinterface, privatezone,
- publicinterface, publiczone, usageinterface, username
-
-
- attachIso
-
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable,
- affinitygroup
-
-
- listProjects
- The following new request parameters are added: cpuavailable, cpulimit,
- cputotal, ipavailable, iplimit, iptotal, memoryavailable, memorylimit, memorytotal,
- networkavailable, networklimit, networktotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal, snapshotavailable, snapshotlimit,
- snapshottotal, templateavailable, templatelimit, templatetotal, vmavailable, vmlimit,
- vmrunning, vmstopped, vmtotal, volumeavailable, volumelimit, volumetotal,
- vpcavailable, vpclimit, vpctotal
-
-
- enableAccount
-
- The following new response parameters are added: cpuavailable, cpulimit, cputotal,
- isdefault, memoryavailable, memorylimit, memorytotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal
-
-
- listPublicIpAddresses
-
- The following new response parameters are added: isportable, vmipaddress
-
-
-
- enableStorageMaintenance
-
- The following new response parameters are added: hypervisor, scope,
- suitableformigration
-
-
- listLoadBalancerRules
- The following new request parameter is added: networkid (optional)
- The following new response parameter is added: networkid
-
-
- stopRouter
-
- The following new response parameters are added: ip6dns1, ip6dns2, role
-
-
-
- listClusters
-
- The following new response parameters are added: cpuovercommitratio,
- memoryovercommitratio
-
-
- attachVolume
-
- The following new response parameter is added: displayvolume
-
-
- updateVPCOffering
- The following request parameter is made mandatory: id
-
-
- resetSSHKeyForVirtualMachine
- The following new request parameter is added: keypair (required)
- The following parameter is removed: name
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable,
- affinitygroup
-
-
- updateCluster
- The following request parameters are removed: cpuovercommitratio,
- memoryovercommitratio
- The following response parameters are removed: cpuovercommitratio,
- memoryovercommitratio
-
-
- listPrivateGateways
- The following new response parameters are added: aclid, sourcenatsupported
-
-
-
- ldapConfig
- The following new request parameter is added: listall (optional)
- The following parameters have been made optional: searchbase, hostname,
- queryfilter
- The following new response parameter is added: ssl
-
-
- listTemplates
-
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
- listNetworks
-
- The following new response parameters are added: aclid, displaynetwork, ip6cidr,
- ip6gateway, ispersistent, networkcidr, reservediprange
-
-
- restartNetwork
-
- The following new response parameters are added: isportable, vmipaddress
-
-
-
- prepareTemplate
-
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
- rebootVirtualMachine
-
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable,
- affinitygroup
-
-
- changeServiceForRouter
- The following new request parameters are added: aclid (optional), action
- (optional), protocol (optional)
- The following new response parameters are added: id, scope
-
-
- updateZone
- The following new request parameters are added: ip6dns1 (optional), ip6dns2
- (optional)
- The following new response parameters are added: ip6dns1, ip6dns2
-
-
- ldapRemove
-
- The following new response parameter is added: ssl
-
-
- updateServiceOffering
-
- The following new response parameters are added: deploymentplanner, isvolatile
-
-
-
- updateStoragePool
-
- The following new response parameters are added: hypervisor, scope,
- suitableformigration
-
-
- listFirewallRules
- The following request parameter is removed: traffictype
- The following new response parameter is added: networkid
-
-
- updateUser
-
- The following new response parameters are added: iscallerchilddomain, isdefault
-
-
-
- updateProject
-
- The following new response parameters are added: cpuavailable, cpulimit, cputotal,
- ipavailable, iplimit, iptotal, memoryavailable, memorylimit, memorytotal,
- networkavailable, networklimit, networktotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal, snapshotavailable, snapshotlimit,
- snapshottotal, templateavailable, templatelimit, templatetotal, vmavailable, vmlimit,
- vmrunning, vmstopped, vmtotal, volumeavailable, volumelimit, volumetotal,
- vpcavailable, vpclimit, vpctotal
-
-
- updateTemplate
- The following new request parameters are added: isdynamicallyscalable
- (optional), isrouting (optional)
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
- disableUser
-
- The following new response parameters are added: iscallerchilddomain, isdefault
-
-
-
- activateProject
-
- The following new response parameters are added: cpuavailable, cpulimit, cputotal,
- ipavailable, iplimit, iptotal, memoryavailable, memorylimit, memorytotal,
- networkavailable, networklimit, networktotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal, snapshotavailable, snapshotlimit,
- snapshottotal, templateavailable, templatelimit, templatetotal, vmavailable, vmlimit,
- vmrunning, vmstopped, vmtotal, volumeavailable, volumelimit, volumetotal,
- vpcavailable, vpclimit, vpctotal
-
-
- createNetworkACL
- The following new request parameters are added: aclid (optional), action
- (optional), number (optional)
- The following request parameter is now optional: networkid
- The following new response parameters are added: aclid, action, number
-
-
-
- enableStaticNat
- The following new request parameter is added: vmguestip (optional)
-
-
-
- registerIso
- The following new request parameters are added: imagestoreuuid (optional),
- isdynamicallyscalable (optional)
- The following new response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
- createIpForwardingRule
-
- The following new response parameter is added: vmguestip
-
-
- resetPasswordForVirtualMachine
-
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable,
- affinitygroup
-
-
- createVolume
- The following new request parameter is added: displayvolume (optional)
- The following new response parameter is added: displayvolume
-
-
- startRouter
-
- The following new response parameters are added: ip6dns1, ip6dns2, role
-
-
-
- listCapabilities
- The following new response parameters are added: apilimitinterval and
- apilimitmax.
-
-
- createServiceOffering
- The following new request parameters are added: deploymentplanner (optional),
- isvolatile (optional), serviceofferingdetails (optional).
- isvolatile indicates whether the service offering includes Volatile VM capability,
- which will discard the VM's root disk and create a new one on reboot.
- The following new response parameters are added: deploymentplanner, isvolatile
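The new isvolatile flag is passed like any other parameter on CloudStack's signed query API. As a hedged sketch (the management-server hostname, API key, secret key, and offering values below are placeholders, not real credentials), a request can be built and signed with CloudStack's documented HMAC-SHA1 scheme:

```python
# Sketch: building a signed createServiceOffering request that uses the new
# isvolatile parameter. Endpoint and keys are placeholder values.
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secretkey):
    # CloudStack signs the lowercased, sorted query string with HMAC-SHA1
    # and base64-encodes the digest.
    query = "&".join(
        "%s=%s" % (k, urllib.parse.quote(str(params[k]), safe="*"))
        for k in sorted(params)
    )
    digest = hmac.new(
        secretkey.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    return base64.b64encode(digest).decode()

params = {
    "command": "createServiceOffering",
    "name": "volatile-small",
    "displaytext": "1 CPU volatile offering",
    "cpunumber": 1,
    "cpuspeed": 500,
    "memory": 512,
    "isvolatile": "true",   # new in this release: root disk recreated on reboot
    "apikey": "PLACEHOLDER-API-KEY",
    "response": "json",
}
params["signature"] = sign_request(params, "PLACEHOLDER-SECRET-KEY")
url = "http://mgmt-server:8080/client/api?" + urllib.parse.urlencode(params)
```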
-
-
-
- restoreVirtualMachine
- The following request parameter is added: templateid (optional). This points to the
- new template ID when the base image is updated. The parameter templateid can be an
- ISO ID when restoring a VM that was deployed from an ISO.
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
- createNetwork
- The following new request parameters are added: aclid (optional),
- displaynetwork (optional), endipv6 (optional), ip6cidr (optional), ip6gateway
- (optional), isolatedpvlan (optional), startipv6 (optional)
- The following new response parameters are added: aclid, displaynetwork, ip6cidr,
- ip6gateway, ispersistent, networkcidr, reservediprange
-
-
- createVlanIpRange
- The following new request parameters are added: startipv6, endipv6,
- ip6gateway, ip6cidr
- Changed parameters: startip (is now optional)
- The following new response parameters are added: startipv6, endipv6, ip6gateway,
- ip6cidr
-
-
- CreateZone
- The following new request parameters are added: ip6dns1, ip6dns2
- The following new response parameters are added: ip6dns1, ip6dns2
-
-
- deployVirtualMachine
- The following request parameters are added: affinitygroupids (optional),
- affinitygroupnames (optional), displayvm (optional), ip6address (optional)
- The following request parameter is modified: iptonetworklist has a new possible
- value, ipv6
- The following new response parameters are added: diskioread, diskiowrite,
- diskkbsread, diskkbswrite, displayvm, isdynamicallyscalable,
- affinitygroup
-
-
-
- createNetworkOffering
-
-
- The following request parameters are added: details (optional),
- egressdefaultpolicy (optional), ispersistent (optional)
- ispersistent determines whether the network or network offering created or listed
- by using this offering is persistent.
- The following response parameters are added: details, egressdefaultpolicy,
- ispersistent
-
-
-
-
- listNetworks
-
-
- The following request parameter is added: isPersistent.
- This parameter determines whether the network or network offering created or listed
- by using this offering is persistent.
-
-
-
-
- listNetworkOfferings
-
-
- The following request parameter is added: isPersistent.
- This parameter determines whether the network or network offering created or listed
- by using this offering is persistent.
- For listNetworkOfferings, the following response parameters have been added:
- details, egressdefaultpolicy, ispersistent
-
-
-
-
- addF5LoadBalancer
- configureNetscalerLoadBalancer
- addNetscalerLoadBalancer
- listF5LoadBalancers
- configureF5LoadBalancer
- listNetscalerLoadBalancers
-
-
- The following response parameter is removed: inline.
-
-
-
-
- listRouters
-
-
- For nic responses, the following fields have been added.
-
-
- ip6address
-
-
- ip6gateway
-
-
- ip6cidr
-
-
-
-
-
-
- listVirtualMachines
-
-
- The following request parameters are added: affinitygroupid (optional), vpcid
- (optional)
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- listRouters
- listZones
-
-
- For DomainRouter and DataCenter response, the following fields have been
- added.
-
-
- ip6dns1
-
-
- ip6dns2
-
-
- For listZones, the following optional request parameters are added: name,
- networktype
-
-
-
- listFirewallRules
- createFirewallRule
-
- The following request parameter is added: traffictype (optional).
- The following response parameter is added: networkid
-
-
-
- listUsageRecords
- The following response parameter is added: virtualsize.
-
-
-
-
- deleteIso
-
-
- The following request parameter is removed: forced
-
-
-
- addCluster
- The following request parameters are added: guestvswitchtype (optional),
- publicvswitchtype (optional)
- The following request parameters are removed: cpuovercommitratio,
- memoryovercommitratio
-
-
-
- updateCluster
- The following request parameters are added: cpuovercommitratio,
- ramovercommitratio
-
-
-
-
- createStoragePool
-
-
- The following request parameters are added: hypervisor (optional), provider
- (optional), scope (optional)
- The following request parameters have been made mandatory: podid, clusterid
- The following response parameters have been added: hypervisor, scope,
- suitableformigration
-
-
-
- listStoragePools
- The following request parameter is added: scope (optional)
- The following response parameters are added: hypervisor, scope,
- suitableformigration
-
-
-
- updateDiskOffering
-
-
- The following response parameter is added: displayoffering
-
-
-
-
- changeServiceForVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- recoverVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- listCapabilities
-
-
- The following response parameters are added: apilimitinterval, apilimitmax
-
-
-
-
- createRemoteAccessVpn
-
-
- The following response parameter is added: id
-
-
-
-
- startVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- detachIso
-
-
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- updateVPC
-
-
- The following request parameters have been made mandatory: id, name
-
-
-
-
- associateIpAddress
-
-
- The following request parameters are added: isportable (optional), regionid
- (optional)
- The following response parameters are added: isportable, vmipaddress
-
-
-
-
- listProjectAccounts
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- ipavailable, iplimit, iptotal, memoryavailable, memorylimit, memorytotal,
- networkavailable, networklimit, networktotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal, snapshotavailable, snapshotlimit,
- snapshottotal, templateavailable, templatelimit, templatetotal, vmavailable, vmlimit,
- vmrunning, vmstopped, vmtotal, volumeavailable, volumelimit, volumetotal,
- vpcavailable, vpclimit, vpctotal
-
-
-
-
- disableAccount
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- isdefault, memoryavailable, memorylimit, memorytotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal
-
-
-
-
- listPortForwardingRules
-
-
- The following response parameter is added: vmguestip
-
-
-
-
- migrateVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- cancelStorageMaintenance
-
-
- The following response parameters are added: hypervisor, scope,
- suitableformigration
-
-
-
-
- createPortForwardingRule
-
- The following request parameter is added: vmguestip (optional)
- The following response parameter is added: vmguestip
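A minimal sketch of where the new vmguestip parameter fits in a createPortForwardingRule request (the IDs and the guest IP below are placeholder values, and request signing is omitted for brevity):

```python
# Sketch: query parameters for createPortForwardingRule using the new
# optional vmguestip field. IDs and addresses are placeholders.
from urllib.parse import urlencode

params = {
    "command": "createPortForwardingRule",
    "ipaddressid": "PLACEHOLDER-PUBLIC-IP-ID",
    "virtualmachineid": "PLACEHOLDER-VM-ID",
    "protocol": "tcp",
    "privateport": 22,
    "publicport": 2222,
    "vmguestip": "10.1.1.25",  # new optional parameter in this release
}
query = urlencode(params)
```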
-
-
-
- addVpnUser
-
-
- The following response parameter is added: state
-
-
-
-
- createVPCOffering
-
-
- The following request parameter is added: serviceproviderlist (optional)
-
-
-
-
- assignVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- listConditions
-
-
- The following response parameters are added: account, counter, domain, domainid,
- project, projectid, relationaloperator, threshold
- Removed response parameters: name, source, value
-
-
-
-
- createPrivateGateway
-
-
- The following request parameters are added: aclid (optional), sourcenatsupported
- (optional)
- The following response parameters are added: aclid, sourcenatsupported
-
-
-
-
- updateVirtualMachine
-
-
- The following request parameters are added: displayvm (optional),
- isdynamicallyscalable (optional)
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- destroyRouter
-
-
- The following response parameters are added: ip6dns1, ip6dns2, role
-
-
-
-
- listServiceOfferings
-
-
- The following response parameters are added: deploymentplanner, isvolatile
-
-
-
-
- listUsageRecords
-
-
- The following response parameters are removed: virtualsize
-
-
-
-
- createProject
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- ipavailable, iplimit, iptotal, memoryavailable, memorylimit, memorytotal,
- networkavailable, networklimit, networktotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal, snapshotavailable, snapshotlimit,
- snapshottotal, templateavailable, templatelimit, templatetotal, vmavailable, vmlimit,
- vmrunning, vmstopped, vmtotal, volumeavailable, volumelimit, volumetotal,
- vpcavailable, vpclimit, vpctotal
-
-
-
-
- enableUser
-
-
- The following response parameters are added: iscallerchilddomain, isdefault
-
-
-
-
-
- createLoadBalancerRule
-
-
- The following response parameter is added: networkid
-
-
-
-
- updateAccount
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- isdefault, memoryavailable, memorylimit, memorytotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal
-
-
-
-
- copyIso
-
-
- The following response parameters are added: isdynamicallyscalable, sshkeyenabled
-
-
-
-
-
- uploadVolume
-
-
- The following request parameters are added: imagestoreuuid (optional), projectid
- (optional)
- The following response parameter is added: displayvolume
-
-
-
-
- createDomain
-
-
- The following request parameter is added: domainid (optional)
-
-
-
-
- stopVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- listAccounts
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- isdefault, memoryavailable, memorylimit, memorytotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal
-
-
-
-
- createSnapshot
-
-
- The following response parameter is added: zoneid
-
-
-
-
- updateIso
-
-
- The following request parameters are added: isdynamicallyscalable (optional),
- isrouting (optional)
- The following response parameters are added: isdynamicallyscalable,
- sshkeyenabled
-
-
-
-
- listIpForwardingRules
-
-
- The following response parameter is added: vmguestip
-
-
-
-
- updateNetwork
-
-
- The following request parameters are added: displaynetwork (optional), guestvmcidr
- (optional)
- The following response parameters are added: aclid, displaynetwork, ip6cidr,
- ip6gateway, ispersistent, networkcidr, reservediprange
-
-
-
-
- destroyVirtualMachine
-
-
- The following response parameters are added: diskioread, diskiowrite, diskkbsread,
- diskkbswrite, displayvm, isdynamicallyscalable, affinitygroup
-
-
-
-
- createDiskOffering
-
-
- The following request parameter is added: displayoffering (optional)
- The following response parameter is added: displayoffering
-
-
-
-
- rebootRouter
-
-
- The following response parameters are added: ip6dns1, ip6dns2, role
-
-
-
-
- listConfigurations
-
-
- The following request parameters are added: accountid (optional), clusterid
- (optional), storageid (optional), zoneid (optional)
- The following response parameters are added: id, scope
-
-
-
-
- createUser
-
-
- The following request parameter is added: userid (optional)
- The following response parameters are added: iscallerchilddomain, isdefault
-
-
-
-
- listDiskOfferings
-
-
- The following response parameter is added: displayoffering
-
-
-
-
- detachVolume
-
-
- The following response parameter is added: displayvolume
-
-
-
-
- deleteUser
-
-
- The following response parameters are added: displaytext, success
- Removed parameters: id, account, accountid, accounttype, apikey, created, domain,
- domainid, email, firstname, lastname, secretkey, state, timezone, username
-
-
-
-
- listSnapshots
-
-
- The following request parameter is added: zoneid (optional)
- The following response parameter is added: zoneid
-
-
-
-
- markDefaultZoneForAccount
-
-
- The following response parameters are added: cpuavailable, cpulimit, cputotal,
- isdefault, memoryavailable, memorylimit, memorytotal, primarystorageavailable,
- primarystoragelimit, primarystoragetotal, secondarystorageavailable,
- secondarystoragelimit, secondarystoragetotal
-
-
-
-
- restartVPC
-
-
- The following request parameter is made mandatory: id
-
-
-
-
- updateHypervisorCapabilities
-
-
- The following response parameters are added: hypervisor, hypervisorversion,
- maxdatavolumeslimit, maxguestslimit, maxhostspercluster, securitygroupenabled,
- storagemotionenabled
- Removed parameters: cpunumber, cpuspeed, created, defaultuse, displaytext, domain,
- domainid, hosttags, issystem, limitcpuuse, memory, name, networkrate, offerha,
- storagetype, systemvmtype, tags
-
-
-
-
- updateLoadBalancerRule
-
-
- The following response parameter is added: networkid
-
-
-
-
- listVlanIpRanges
-
-
- The following response parameters are added: endipv6, ip6cidr, ip6gateway,
- startipv6
-
-
-
-
- listHypervisorCapabilities
-
-
- The following response parameters are added: maxdatavolumeslimit,
- maxhostspercluster, storagemotionenabled
-
-
-
-
- updateNetworkOffering
-
-
- The following response parameters are added: details, egressdefaultpolicy,
- ispersistent
-
-
-
-
- createVirtualRouterElement
-
-
- The following request parameter is added: providertype (optional)
-
-
-
-
- listVpnUsers
-
-
- The following response parameter is added: state
-
-
-
-
- listUsers
-
-
- The following response parameters are added: iscallerchilddomain, isdefault
-
-
-
-
-
- listSupportedNetworkServices
-
-
- The following response parameter is added: provider
-
-
-
-
- listIsos
-
-
- The following response parameters are added: isdynamicallyscalable, sshkeyenabled
-
-
-
-
-
-
-
diff --git a/docs/en-US/changed-apicommands-4-0.xml b/docs/en-US/changed-apicommands-4-0.xml
deleted file mode 100644
index 042d5e2611e..00000000000
--- a/docs/en-US/changed-apicommands-4-0.xml
+++ /dev/null
@@ -1,268 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Changed API Commands in 4.0.0-incubating
-
-
-
-
-
-
- API Commands
- Description
-
-
-
-
-
- copyTemplate
- prepareTemplate
- registerTemplate
- updateTemplate
- createProject
- activateProject
- suspendProject
- updateProject
- listProjectAccounts
- createVolume
- migrateVolume
- attachVolume
- detachVolume
- uploadVolume
- createSecurityGroup
- registerIso
- copyIso
- updateIso
- createIpForwardingRule
- listIpForwardingRules
- createLoadBalancerRule
- updateLoadBalancerRule
- createSnapshot
-
-
- The commands in this list have a single new response parameter, and no other
- changes.
- New response parameter: tags(*)
-
- Many other commands also have the new tags(*) parameter in addition to other
- changes; those commands are listed separately.
-
-
-
-
- rebootVirtualMachine
- attachIso
- detachIso
- listLoadBalancerRuleInstances
- resetPasswordForVirtualMachine
- changeServiceForVirtualMachine
- recoverVirtualMachine
- startVirtualMachine
- migrateVirtualMachine
- deployVirtualMachine
- assignVirtualMachine
- updateVirtualMachine
- restoreVirtualMachine
- stopVirtualMachine
- destroyVirtualMachine
-
-
- The commands in this list have two new response parameters, and no other
- changes.
- New response parameters: keypair, tags(*)
-
-
-
-
- listSecurityGroups
- listFirewallRules
- listPortForwardingRules
- listSnapshots
- listIsos
- listProjects
- listTemplates
- listLoadBalancerRules
-
- The commands in this list have the following new parameters, and no other
- changes.
- New request parameter: tags (optional)
- New response parameter: tags(*)
-
-
-
-
- listF5LoadBalancerNetworks
- listNetscalerLoadBalancerNetworks
- listSrxFirewallNetworks
- updateNetwork
-
-
- The commands in this list have three new response parameters, and no other
- changes.
- New response parameters: canusefordeploy, vpcid, tags(*)
-
-
-
-
- createZone
- updateZone
-
- The commands in this list have the following new parameters, and no other
- changes.
- New request parameter: localstorageenabled (optional)
- New response parameter: localstorageenabled
-
-
-
- listZones
- New response parameter: localstorageenabled
-
-
-
- rebootRouter
- changeServiceForRouter
- startRouter
- destroyRouter
- stopRouter
-
- The commands in this list have two new response parameters, and no other
- changes.
- New response parameters: vpcid, nic(*)
-
-
-
- updateAccount
- disableAccount
- listAccounts
- markDefaultZoneForAccount
- enableAccount
-
- The commands in this list have three new response parameters, and no other
- changes.
- New response parameters: vpcavailable, vpclimit, vpctotal
-
-
- listRouters
-
- New request parameters: forvpc (optional), vpcid (optional)
- New response parameters: vpcid, nic(*)
-
-
-
- listNetworkOfferings
-
- New request parameters: forvpc (optional)
- New response parameters: forvpc
-
-
-
- listVolumes
-
- New request parameters: details (optional), tags (optional)
- New response parameters: tags(*)
-
-
-
- addTrafficMonitor
-
- New request parameters: excludezones (optional), includezones (optional)
-
-
-
- createNetwork
-
- New request parameters: vpcid (optional)
- New response parameters: canusefordeploy, vpcid, tags(*)
-
-
-
- listPublicIpAddresses
-
- New request parameters: tags (optional), vpcid (optional)
- New response parameters: vpcid, tags(*)
-
-
-
- listNetworks
-
- New request parameters: canusefordeploy (optional), forvpc (optional), tags
- (optional), vpcid (optional)
- New response parameters: canusefordeploy, vpcid, tags(*)
-
-
-
- restartNetwork
-
- New response parameters: vpcid, tags(*)
-
-
-
- enableStaticNat
-
- New request parameter: networkid (optional)
-
-
-
- createDiskOffering
-
- New request parameter: storagetype (optional)
- New response parameter: storagetype
-
-
-
- listDiskOfferings
-
- New response parameter: storagetype
-
-
-
- updateDiskOffering
-
- New response parameter: storagetype
-
-
-
- createFirewallRule
-
- Changed request parameters: ipaddressid (old version - optional, new version -
- required)
- New response parameter: tags(*)
-
-
-
- listVirtualMachines
-
- New request parameters: isoid (optional), tags (optional), templateid
- (optional)
- New response parameters: keypair, tags(*)
-
-
-
- updateStorageNetworkIpRange
-
- New response parameters: id, endip, gateway, netmask, networkid, podid, startip,
- vlan, zoneid
-
-
-
-
-
-
diff --git a/docs/en-US/changed-apicommands-4.1.xml b/docs/en-US/changed-apicommands-4.1.xml
deleted file mode 100644
index 1667aafaa22..00000000000
--- a/docs/en-US/changed-apicommands-4.1.xml
+++ /dev/null
@@ -1,253 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Changed API Commands in 4.1
-
-
-
-
-
-
- API Commands
- Description
-
-
-
-
-
- createNetworkOffering
-
-
- The following request parameters have been added:
-
-
- isPersistent
-
-
- startipv6
-
-
- endipv6
-
-
- ip6gateway
-
-
- ip6cidr
-
-
-
-
-
-
- listNetworkOfferings
- listNetworks
-
-
- The following request parameters have been added:
-
-
- isPersistent
- This parameter determines whether the network or network offering listed is
- persistent.
-
-
- ip6gateway
-
-
- ip6cidr
-
-
-
-
-
-
- createVlanIpRange
-
-
- The following request parameters have been added:
-
-
- startipv6
-
-
- endipv6
-
-
- ip6gateway
-
-
- ip6cidr
-
-
-
-
-
-
- deployVirtualMachine
-
-
- The following parameter has been added: ip6Address.
- The following parameter is updated to accept the IPv6 address:
- iptonetworklist.
-
-
-
-
- CreateZoneCmd
-
-
- The following parameters have been added: ip6dns1, ip6dns2.
-
-
-
-
- listRouters
- listVirtualMachines
-
-
- For nic responses, the following fields have been added.
-
-
- ip6address
-
-
- ip6gateway
-
-
- ip6cidr
-
-
-
-
-
-
- listVlanIpRanges
-
-
- For nic responses, the following fields have been added.
-
-
- startipv6
-
-
- endipv6
-
-
- ip6gateway
-
-
- ip6cidr
-
-
-
-
-
-
- listRouters
- listZones
-
-
- For DomainRouter and DataCenter response, the following fields have been
- added.
-
-
- ip6dns1
-
-
- ip6dns2
-
-
-
-
-
-
- addF5LoadBalancer
- configureNetscalerLoadBalancer
- addNetscalerLoadBalancer
- listF5LoadBalancers
- configureF5LoadBalancer
- listNetscalerLoadBalancers
-
-
- The following response parameter is removed: inline.
-
-
-
- listFirewallRules
- createFirewallRule
-
- The following request parameter is added: traffictype (optional).
-
-
-
- listUsageRecords
- The following response parameter is added: virtualsize.
-
-
-
-
- deleteIso
-
-
- The following request parameter is added: forced (optional).
-
-
-
-
- createStoragePool
-
-
- The following request parameters are made mandatory:
-
-
- podid
-
-
- clusterid
-
-
-
-
-
-
- listZones
-
-
- The following request parameter is added: securitygroupenabled
-
-
-
- createAccount
- The following new request parameters are added: accountid, userid
-
-
- createUser
- The following new request parameter is added: userid
-
-
- createDomain
- The following new request parameter is added: domainid
-
-
-
-
-
diff --git a/docs/en-US/changing-root-password.xml b/docs/en-US/changing-root-password.xml
deleted file mode 100644
index 880f50fcf22..00000000000
--- a/docs/en-US/changing-root-password.xml
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Changing the Root Password
- During installation and ongoing cloud administration, you will need to log in to the UI as the root administrator.
- The root administrator account manages the &PRODUCT; deployment, including physical infrastructure.
- The root administrator can modify configuration settings to change basic functionality, create or delete user accounts, and take many actions that should be performed only by an authorized person.
- When first installing &PRODUCT;, be sure to change the default password to a new, unique value.
-
- Open your favorite Web browser and go to this URL. Substitute the IP address of your own Management Server:
- http://<management-server-ip-address>:8080/client
-
- Log in to the UI using the current root user ID and password. The default is admin, password.
- Click Accounts.
- Click the admin account name.
- Click View Users.
- Click the admin user name.
-
- Click the Change Password button.
-
-
-
-
- change-password.png: button to change a user's password
-
-
- Type the new password, and click OK.
-
-
diff --git a/docs/en-US/changing-secondary-storage-ip.xml b/docs/en-US/changing-secondary-storage-ip.xml
deleted file mode 100644
index 34f93e32c61..00000000000
--- a/docs/en-US/changing-secondary-storage-ip.xml
+++ /dev/null
@@ -1,44 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Changing the Secondary Storage IP Address
- You can change the secondary storage IP address after it has been provisioned. After changing the IP address on the host, log in to your management server and execute the following commands. Replace HOSTID below with your own value, and change the URL to use the appropriate IP address and path for your server:
-
- # mysql -p
- mysql> use cloud;
- mysql> select id from host where type = 'SecondaryStorage';
- mysql> update host_details set value = 'nfs://192.168.160.20/export/mike-ss1'
- where host_id = HOSTID and name = 'orig.url';
- mysql> update host set name = 'nfs://192.168.160.20/export/mike-ss1' where type
- = 'SecondaryStorage' and id = #;
- mysql> update host set url = 'nfs://192.168.160.20/export/mike-ss1' where type
- = 'SecondaryStorage' and id = #;
- mysql> update host set guid = 'nfs://192.168.160.20/export/mike-ss1' where type
- = 'SecondaryStorage' and id = #;
-
- When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
- Then log in to the cloud console UI and stop and start (not reboot) the Secondary Storage VM for that Zone.
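The UPDATE statements above can be generated from the host id and the new URL before they are run, which makes the substitution less error-prone. A small sketch (the host id and NFS path below are example values, matching the ones in the procedure):

```python
# Sketch: generate the four UPDATE statements from the procedure above for a
# given secondary storage host id and new NFS URL, so they can be reviewed
# before execution. Inputs are example values, not real hosts.
def secondary_storage_updates(host_id, new_url):
    return [
        "update host_details set value = '%s' "
        "where host_id = %d and name = 'orig.url';" % (new_url, host_id),
        "update host set name = '%s' where type = 'SecondaryStorage' "
        "and id = %d;" % (new_url, host_id),
        "update host set url = '%s' where type = 'SecondaryStorage' "
        "and id = %d;" % (new_url, host_id),
        "update host set guid = '%s' where type = 'SecondaryStorage' "
        "and id = %d;" % (new_url, host_id),
    ]

for stmt in secondary_storage_updates(4, "nfs://192.168.160.20/export/mike-ss1"):
    print(stmt)
```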
-
-
-
diff --git a/docs/en-US/changing-secondary-storage-servers.xml b/docs/en-US/changing-secondary-storage-servers.xml
deleted file mode 100644
index a628eec9b39..00000000000
--- a/docs/en-US/changing-secondary-storage-servers.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Changing Secondary Storage Servers
- You can change the secondary storage NFS mount. Perform the following steps to do so:
-
- Stop all running Management Servers.
- Wait 30 minutes. This allows any writes to secondary storage to complete.
- Copy all files from the old secondary storage mount to the new.
- Use the procedure above to change the IP address for secondary storage if required.
- Start the Management Server.
-
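The copy step above is commonly done with `cp -a` (or rsync) so that ownership, permissions, and timestamps survive the move. A minimal dry-run sketch, with throwaway `/tmp` paths standing in for the real NFS mount points (in production the Management Servers would already be stopped):

```shell
#!/bin/sh
# Sketch of the copy step. OLD_MOUNT/NEW_MOUNT are stand-in paths;
# point them at the old and new secondary storage mounts for real use.
set -e
OLD_MOUNT=${OLD_MOUNT:-/tmp/secondary-old}
NEW_MOUNT=${NEW_MOUNT:-/tmp/secondary-new}

# Fabricate a file so the dry run has something to copy.
mkdir -p "$OLD_MOUNT/template" "$NEW_MOUNT"
echo demo > "$OLD_MOUNT/template/t1.vhd"

# -a preserves ownership, permissions, and timestamps across the move.
cp -a "$OLD_MOUNT/." "$NEW_MOUNT/"
```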
-
-
diff --git a/docs/en-US/changing-service-offering-for-vm.xml b/docs/en-US/changing-service-offering-for-vm.xml
deleted file mode 100644
index f4e2ceb309f..00000000000
--- a/docs/en-US/changing-service-offering-for-vm.xml
+++ /dev/null
@@ -1,190 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Changing the Service Offering for a VM
- To upgrade or downgrade the level of compute resources available to a virtual machine, you
- can change the VM's compute offering.
-
-
- Log in to the &PRODUCT; UI as a user or admin.
-
-
- In the left navigation, click Instances.
-
-
- Choose the VM that you want to work with.
-
-
- (Skip this step if you have enabled dynamic VM scaling; see .)
- Click the Stop button to stop the VM.
-
-
-
-
- StopButton.png: button to stop a VM
-
-
-
-
-
- Click the Change Service button.
-
-
-
-
- ChangeServiceButton.png: button to change the service of a VM
-
-
- The Change service dialog box is displayed.
-
-
- Select the offering you want to apply to the selected VM.
-
-
- Click OK.
-
-
-
-
- CPU and Memory Scaling for Running VMs
- (Supported on VMware and XenServer)
- It is not always possible to accurately predict the CPU and RAM requirements when you
- first deploy a VM. You might need to increase these resources at any time during the life of a
- VM. You can dynamically modify CPU and RAM levels to scale up these resources for a running VM
- without incurring any downtime.
- Dynamic CPU and RAM scaling can be used in the following cases:
-
-
- User VMs on hosts running VMware and XenServer.
-
-
- System VMs on VMware.
-
-
- VMware Tools or XenServer Tools must be installed on the virtual machine.
-
-
- The new requested CPU and RAM values must be within the constraints allowed by the
- hypervisor and the VM operating system.
-
-
- New VMs that are created after the installation of &PRODUCT; 4.2 can use the dynamic
- scaling feature. If you are upgrading from a previous version of &PRODUCT;, your existing
- VMs created with previous versions will not have the dynamic scaling capability unless you
- update them using the following procedure.
-
-
-
-
- Updating Existing VMs
- If you are upgrading from a previous version of &PRODUCT;, and you want your existing VMs
- created with previous versions to have the dynamic scaling capability, update the VMs using
- the following steps:
-
-
- Make sure the zone-level setting enable.dynamic.scale.vm is set to true. In the left
- navigation bar of the &PRODUCT; UI, click Infrastructure, then click Zones, click the zone
- you want, and click the Settings tab.
-
-
- Install Xen tools (for XenServer hosts) or VMware Tools (for VMware hosts) on each VM
- if they are not already installed.
-
-
- Stop the VM.
-
-
- Click the Edit button.
-
-
- Click the Dynamically Scalable checkbox.
-
-
- Click Apply.
-
-
- Restart the VM.
-
-
-
-
- Configuring Dynamic CPU and RAM Scaling
- To configure this feature, use the following new global configuration variables:
-
-
- enable.dynamic.scale.vm: Set to True to enable the feature. By default, the feature is
- turned off.
-
-
- scale.retry: How many times to attempt the scaling operation. Default = 2.
-
-
-
-
- How to Dynamically Scale CPU and RAM
- To modify the CPU and/or RAM capacity of a virtual machine, you need to change the compute
- offering of the VM to a new compute offering that has the desired CPU and RAM values. You can
- use the same steps described above in , but
- skip the step where you stop the virtual machine. Of course, you might have to create a new
- compute offering first.
- When you submit a dynamic scaling request, the resources will be scaled up on the current
- host if possible. If the host does not have enough resources, the VM will be live migrated to
- another host in the same cluster. If there is no host in the cluster that can fulfill the
- requested level of CPU and RAM, the scaling operation will fail. The VM will continue to run
- as it was before.
-
-
- Limitations
-
-
- You cannot do dynamic scaling for system VMs on XenServer.
-
-
- &PRODUCT; will not check to be sure that the new CPU and RAM levels are compatible
- with the OS running on the VM.
-
-
- When scaling memory or CPU for a Linux VM on VMware, you might need to run scripts in
- addition to the other steps mentioned above. For more information, see Hot adding memory in Linux (1012764) in the VMware Knowledge Base.
-
-
- (VMware) If resources are not available on the current host, scaling up will fail on
- VMware because of a known issue where &PRODUCT; and vCenter calculate the available
- capacity differently. For more information, see https://issues.apache.org/jira/browse/CLOUDSTACK-1809.
-
-
- On VMs running Linux 64-bit and Windows 7 32-bit operating systems, if the VM is
- initially assigned a RAM of less than 3 GB, it can be dynamically scaled up to 3 GB, but
- not more. This is due to a known issue with these operating systems, which will freeze if
- an attempt is made to dynamically scale from less than 3 GB to more than 3 GB.
-
-
-
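The last limitation can be expressed as a simple pre-flight check before submitting a scaling request. This helper is our own sketch (values in MB), not a &PRODUCT; API:

```shell
#!/bin/sh
# Sketch of the 3 GB rule above: refuse a dynamic-scale request that would
# cross from below 3 GB to above it (applies to 64-bit Linux and 32-bit
# Windows 7 guests, which freeze when crossing that boundary).
can_scale_ram() {
    cur_mb=$1
    new_mb=$2
    if [ "$cur_mb" -lt 3072 ] && [ "$new_mb" -gt 3072 ]; then
        return 1   # crossing the 3 GB boundary: disallow
    fi
    return 0       # scaling up to exactly 3 GB, or starting above it, is fine
}
```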
-
-
diff --git a/docs/en-US/changing-vm-name-os-group.xml b/docs/en-US/changing-vm-name-os-group.xml
deleted file mode 100644
index daf78bca107..00000000000
--- a/docs/en-US/changing-vm-name-os-group.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Changing the VM Name, OS, or Group
- After a VM is created, you can modify the display name, operating system, and the group it belongs to.
- To access a VM through the &PRODUCT; UI:
-
- Log in to the &PRODUCT; UI as a user or admin.
- In the left navigation, click Instances.
- Select the VM that you want to modify.
- Click the Stop button to stop the VM.
-
-
-
-
- StopButton.png: button to stop a VM
-
-
-
- Click Edit.
-
-
-
-
- EditButton.png: button to edit the properties of a VM
-
-
- Make the desired changes to the following:
-
- Display name: Enter a new display name if you want to change
- the name of the VM.
- OS Type: Select the desired operating system.
- Group: Enter the group name for the VM.
-
- Click Apply.
-
-
-
diff --git a/docs/en-US/choosing-a-deployment-architecture.xml b/docs/en-US/choosing-a-deployment-architecture.xml
deleted file mode 100644
index 0503d8c7597..00000000000
--- a/docs/en-US/choosing-a-deployment-architecture.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Choosing a Deployment Architecture
- The architecture used in a deployment will vary depending on the size and purpose of the deployment. This section contains examples of deployment architecture, including a small-scale deployment useful for test and trial deployments and a fully-redundant large-scale setup for production deployments.
-
-
-
-
-
-
diff --git a/docs/en-US/choosing-a-hypervisor.xml b/docs/en-US/choosing-a-hypervisor.xml
deleted file mode 100644
index bf83fe3d17f..00000000000
--- a/docs/en-US/choosing-a-hypervisor.xml
+++ /dev/null
@@ -1,136 +0,0 @@
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Choosing a Hypervisor: Supported Features
- &PRODUCT; supports many popular hypervisors. Your cloud can consist entirely of hosts running a single hypervisor, or you can use multiple hypervisors. Each cluster of hosts must run the same hypervisor.
- You might already have an installed base of nodes running a particular hypervisor, in which case, your choice of hypervisor has already been made. If you are starting from scratch, you need to decide what hypervisor software best suits your needs. A discussion of the relative advantages of each hypervisor is outside the scope of our documentation. However, it will help you to know which features of each hypervisor are supported by &PRODUCT;. The following table provides this information.
-
-
-
-
-
-
-
-
-
-
-
- Feature
- XenServer 6.0.2
- vSphere 4.1/5.0
- KVM - RHEL 6.2
- OVM 2.3
- Bare Metal
-
-
-
-
- Network Throttling
- Yes
- Yes
- No
- No
- N/A
-
-
- Security groups in zones that use basic networking
- Yes
- No
- Yes
- No
- No
-
-
- iSCSI
- Yes
- Yes
- Yes
- Yes
- N/A
-
-
- FibreChannel
- Yes
- Yes
- Yes
- No
- N/A
-
-
- Local Disk
- Yes
- Yes
- Yes
- No
- Yes
-
-
- HA
- Yes
- Yes (Native)
- Yes
- Yes
- N/A
-
-
- Snapshots of local disk
- Yes
- Yes
- Yes
- No
- N/A
-
-
- Local disk as data disk
- No
- No
- No
- No
- N/A
-
-
- Work load balancing
- No
- DRS
- No
- No
- N/A
-
-
- Manual live migration of VMs from host to host
- Yes
- Yes
- Yes
- Yes
- N/A
-
-
- Conserve management traffic IP address by using link local network to communicate with virtual router
- Yes
- No
- Yes
- Yes
- N/A
-
-
-
-
-
diff --git a/docs/en-US/cisco3750-hardware.xml b/docs/en-US/cisco3750-hardware.xml
deleted file mode 100644
index b5266105074..00000000000
--- a/docs/en-US/cisco3750-hardware.xml
+++ /dev/null
@@ -1,52 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Cisco 3750
- The following steps show how a Cisco 3750 is configured for zone-level layer-3 switching.
- These steps assume VLAN 201 is used to route untagged private IPs for pod 1, and pod 1’s layer-2
- switch is connected to GigabitEthernet1/0/1.
-
-
- Setting VTP mode to transparent allows us to utilize VLAN IDs above 1000. Since we only
- use VLANs up to 999, vtp transparent mode is not strictly required.
- vtp mode transparent
-vlan 200-999
-exit
-
-
- Configure GigabitEthernet1/0/1.
- interface GigabitEthernet1/0/1
-switchport trunk encapsulation dot1q
-switchport mode trunk
-switchport trunk native vlan 201
-exit
-
-
- The statements configure GigabitEthernet1/0/1 as follows:
-
-
- VLAN 201 is the native untagged VLAN for port GigabitEthernet1/0/1.
-
-
- Cisco passes all VLANs by default. As a result, all VLANs (300-999) are passed to all the pod-level layer-2 switches.
-
-
-
diff --git a/docs/en-US/cisco3750-layer2.xml b/docs/en-US/cisco3750-layer2.xml
deleted file mode 100644
index e4fe1422056..00000000000
--- a/docs/en-US/cisco3750-layer2.xml
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Cisco 3750
- The following steps show how a Cisco 3750 is configured for pod-level layer-2
- switching.
-
-
- Setting VTP mode to transparent allows us to utilize VLAN IDs above 1000. Since we only
- use VLANs up to 999, vtp transparent mode is not strictly required.
- vtp mode transparent
-vlan 300-999
-exit
-
-
- Configure all ports to dot1q and set 201 as the native VLAN.
- interface range GigabitEthernet 1/0/1-24
-switchport trunk encapsulation dot1q
-switchport mode trunk
-switchport trunk native vlan 201
-exit
-
-
- By default, Cisco passes all VLANs. Cisco switches complain if the native VLAN IDs are
- different when two ports are connected together. That is why you must specify VLAN 201 as the
- native VLAN on the layer-2 switch.
-
diff --git a/docs/en-US/citrix-xenserver-installation.xml b/docs/en-US/citrix-xenserver-installation.xml
deleted file mode 100644
index 09d07aa2a90..00000000000
--- a/docs/en-US/citrix-xenserver-installation.xml
+++ /dev/null
@@ -1,757 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Citrix XenServer Installation for &PRODUCT;
- If you want to use the Citrix XenServer hypervisor to run guest virtual machines, install
- XenServer 6.0 or XenServer 6.0.2 on the host(s) in your cloud. For an initial installation,
- follow the steps below. If you have previously installed XenServer and want to upgrade to
- another version, see .
-
- System Requirements for XenServer Hosts
-
-
- The host must be certified as compatible with one of the following. See the Citrix
- Hardware Compatibility Guide: http://hcl.xensource.com
-
-
- XenServer 5.6 SP2
-
-
- XenServer 6.0
-
-
- XenServer 6.0.2
-
-
-
-
- You must re-install Citrix XenServer if you are going to re-use a host from a previous
- install.
-
-
- Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the
- release of hypervisor patches through your hypervisor vendor’s support channel, and apply
- patches as soon as possible after they are released. &PRODUCT; will not track or notify
- you of required hypervisor patches. It is essential that your hosts are completely up to
- date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to
- support any system that is not up to date with patches.
-
-
- All hosts within a cluster must be homogeneous. The CPUs must be of the same type,
- count, and feature flags.
-
-
- Must support HVM (Intel-VT or AMD-V enabled in BIOS)
-
-
- 64-bit x86 CPU (more cores results in better performance)
-
-
- Hardware virtualization support required
-
-
- 4 GB of memory
-
-
- 36 GB of local disk
-
-
- At least 1 NIC
-
-
- Statically allocated IP Address
-
-
- When you deploy &PRODUCT;, the hypervisor host must not have any VMs already
- running
-
-
-
- The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
-
-
-
- XenServer Installation Steps
-
-
- From https://www.citrix.com/English/ss/downloads/, download the appropriate version
- of XenServer for your &PRODUCT; version (see ). Install it using the Citrix XenServer
- Installation Guide.
- Older Versions of XenServer
- Note that you can download the most recent release of XenServer without having a Citrix account. If you wish to download older versions, you will need to create an account and look through the download archives.
-
-
-
- After installation, perform the following configuration steps, which are described in
- the next few sections:
-
-
-
-
-
-
- Required
- Optional
-
-
-
-
-
-
-
-
-
- Set up SR if not using NFS, iSCSI, or local disk; see
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Configure XenServer dom0 Memory
- Configure the XenServer dom0 settings to allocate more memory to dom0. This can enable
- XenServer to handle larger numbers of virtual machines. We recommend 2940 MB of RAM for
- XenServer dom0. For instructions on how to do this, see http://support.citrix.com/article/CTX126531. The article refers to XenServer 5.6,
- but the same information applies to XenServer 6.0.
-
-
- Username and Password
- All XenServers in a cluster must have the same username and password as configured in
- &PRODUCT;.
-
-
- Time Synchronization
- The host must be set to use NTP. All hosts in a pod must have the same time.
-
-
- Install NTP.
- # yum install ntp
-
-
- Edit the NTP configuration file to point to your NTP server.
- # vi /etc/ntp.conf
- Add one or more server lines in this file with the names of the NTP servers you want
- to use. For example:
- server 0.xenserver.pool.ntp.org
-server 1.xenserver.pool.ntp.org
-server 2.xenserver.pool.ntp.org
-server 3.xenserver.pool.ntp.org
-
-
-
- Restart the NTP client.
- # service ntpd restart
-
-
- Make sure NTP will start again upon reboot.
- # chkconfig ntpd on
-
-
-
-
- Licensing
- Citrix XenServer Free version provides 30 days usage without a license. Following the 30
- day trial, XenServer requires a free activation and license. You can choose to install a
- license now or skip this step. If you skip this step, you will need to activate and license
- XenServer after the 30 day trial ends.
-
- Getting and Deploying a License
- If you choose to install a license now you will need to use the XenCenter to activate
- and get a license.
-
-
- In XenCenter, click Tools > License manager.
-
-
- Select your XenServer and select Activate Free XenServer.
-
-
- Request a license.
-
-
- You can install the license with XenCenter or using the xe command line tool.
-
-
-
- Install &PRODUCT; XenServer Support Package (CSP)
- (Optional)
- To enable security groups, elastic load balancing, and elastic IP on XenServer, download
- and install the &PRODUCT; XenServer Support Package (CSP). After installing XenServer, perform
- the following additional steps on each XenServer host.
-
-
- Download the CSP software onto the XenServer host from one of the following
- links:
- For XenServer 6.0.2:
- http://download.cloud.com/releases/3.0.1/XS-6.0.2/xenserver-cloud-supp.tgz
- For XenServer 5.6 SP2:
- http://download.cloud.com/releases/2.2.0/xenserver-cloud-supp.tgz
- For XenServer 6.0:
- http://download.cloud.com/releases/3.0/xenserver-cloud-supp.tgz
-
-
- Extract the file:
- # tar xf xenserver-cloud-supp.tgz
-
-
- Run the following script:
- # xe-install-supplemental-pack xenserver-cloud-supp.iso
-
-
- If the XenServer host is part of a zone that uses basic networking, disable Open
- vSwitch (OVS):
- # xe-switch-network-backend bridge
- Restart the host machine when prompted.
-
-
- The XenServer host is now ready to be added to &PRODUCT;.
-
-
- Primary Storage Setup for XenServer
- &PRODUCT; natively supports NFS, iSCSI and local storage. If you are using one of these
- storage types, there is no need to create the XenServer Storage Repository ("SR").
- If, however, you would like to use storage connected via some other technology, such as
- FiberChannel, you must set up the SR yourself. To do so, perform the following steps. If you
- have your hosts in a XenServer pool, perform the steps on the master node. If you are working
- with a single XenServer which is not part of a cluster, perform the steps on that
- XenServer.
-
-
- Connect FiberChannel cable to all hosts in the cluster and to the FiberChannel storage
- host.
-
-
- Rescan the SCSI bus. Either use the following command or use XenCenter to perform an
- HBA rescan.
- # scsi-rescan
-
-
- Repeat step on every host.
-
-
- Check to be sure you see the new SCSI disk.
- # ls /dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -l
- The output should look like this, although the specific file name will be different
- (scsi-<scsiID>):
- lrwxrwxrwx 1 root root 9 Mar 16 13:47
-/dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -> ../../sdc
-
-
-
- Repeat step on every host.
-
-
- On the storage server, run this command to get a unique ID for the new SR.
- # uuidgen
- The output should look like this, although the specific ID will be different:
- e6849e96-86c3-4f2c-8fcc-350cc711be3d
-
-
- Create the FiberChannel SR. In name-label, use the unique ID you just
- generated.
-
-# xe sr-create type=lvmohba shared=true
-device-config:SCSIid=360a98000503365344e6f6177615a516b
-name-label="e6849e96-86c3-4f2c-8fcc-350cc711be3d"
-
- This command returns a unique ID for the SR, like the following example (your ID will
- be different):
- 7a143820-e893-6c6a-236e-472da6ee66bf
-
-
- To create a human-readable description for the SR, use the following command. In uuid,
- use the SR ID returned by the previous command. In name-description, set whatever friendly
- text you prefer.
- # xe sr-param-set uuid=7a143820-e893-6c6a-236e-472da6ee66bf name-description="Fiber Channel storage repository"
- Make note of the values you will need when you add this storage to &PRODUCT; later
- (see ). In the Add Primary Storage dialog, in
- Protocol, you will choose PreSetup. In SR Name-Label, you will enter the name-label you
- set earlier (in this example, e6849e96-86c3-4f2c-8fcc-350cc711be3d).
-
-
- (Optional) If you want to enable multipath I/O on a FiberChannel SAN, refer to the
- documentation provided by the SAN vendor.
-
-
-
-
- iSCSI Multipath Setup for XenServer (Optional)
- When setting up the storage repository on a Citrix XenServer, you can enable multipath
- I/O, which uses redundant physical components to provide greater reliability in the connection
- between the server and the SAN. To enable multipathing, use a SAN solution that is supported
- for Citrix servers and follow the procedures in Citrix documentation. The following links
- provide a starting point:
-
-
- http://support.citrix.com/article/CTX118791
-
-
- http://support.citrix.com/article/CTX125403
-
-
- You can also ask your SAN vendor for advice about setting up your Citrix repository for
- multipathing.
- Make note of the values you will need when you add this storage to the &PRODUCT; later
- (see ). In the Add Primary Storage dialog, in Protocol,
- you will choose PreSetup. In SR Name-Label, you will enter the same name used to create the
- SR.
- If you encounter difficulty, contact the support team for the SAN provided by your vendor.
- If they are not able to solve your issue, see Contacting Support.
-
-
- Physical Networking Setup for XenServer
- Once XenServer has been installed, you may need to do some additional network
- configuration. At this point in the installation, you should have a plan for what NICs the
- host will have and what traffic each NIC will carry. The NICs should be cabled as necessary to
- implement your plan.
- If you plan on using NIC bonding, the NICs on all hosts in the cluster must be cabled
- exactly the same. For example, if eth0 is in the private bond on one host in a cluster, then
- eth0 must be in the private bond on all hosts in the cluster.
- The IP address assigned for the management network interface must be static. It can be set
- on the host itself or obtained via static DHCP.
- &PRODUCT; configures network traffic of various types to use different NICs or bonds on
- the XenServer host. You can control this process and provide input to the Management Server
- through the use of XenServer network name labels. The name labels are placed on physical
- interfaces or bonds and configured in &PRODUCT;. In some simple cases the name labels are not
- required.
- When configuring networks in a XenServer environment, network traffic labels must be
- properly configured to ensure that the virtual interfaces created by &PRODUCT; are bound
- to the correct physical device. The name-label of the XenServer network must match the
- XenServer traffic label specified while creating the &PRODUCT; network. This is set by running
- the following command:
- xe network-param-set uuid=<network id> name-label=<CloudStack traffic label>
-
- Configuring Public Network with a Dedicated NIC for XenServer (Optional)
- &PRODUCT; supports the use of a second NIC (or bonded pair of NICs, described in ) for the public network. If bonding is not used, the
- public network can be on any NIC and can be on different NICs on the hosts in a cluster. For
- example, the public network can be on eth0 on node A and eth1 on node B. However, the
- XenServer name-label for the public network must be identical across all hosts. The
- following examples set the network label to "cloud-public". After the management
- server is installed and running you must configure it with the name of the chosen network
- label (e.g. "cloud-public"); this is discussed in .
- If you are using two NICs bonded together to create a public network, see .
- If you are using a single dedicated NIC to provide public network access, follow this
- procedure on each new host that is added to &PRODUCT; before adding the host.
-
-
- Run xe network-list and find the public network. This is usually attached to the NIC
- that is public. Once you find the network make note of its UUID. Call this
- <UUID-Public>.
-
-
- Run the following command.
- # xe network-param-set name-label=cloud-public uuid=<UUID-Public>
-
-
-
-
- Configuring Multiple Guest Networks for XenServer (Optional)
- &PRODUCT; supports the use of multiple guest networks with the XenServer hypervisor.
- Each network is assigned a name-label in XenServer. For example, you might have two networks
- with the labels "cloud-guest" and "cloud-guest2". After the management
- server is installed and running, you must add the networks and use these labels so that
- &PRODUCT; is aware of the networks.
- Follow this procedure on each new host before adding the host to &PRODUCT;:
-
-
- Run xe network-list and find one of the guest networks. Once you find the network
- make note of its UUID. Call this <UUID-Guest>.
-
-
- Run the following command, substituting your own name-label and uuid values.
- # xe network-param-set name-label=<cloud-guestN> uuid=<UUID-Guest>
-
-
- Repeat these steps for each additional guest network, using a different name-label
- and uuid each time.
-
-
-
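When several guest networks need labels, the repeated step above lends itself to a loop. A sketch that prints the commands for review; the UUIDs here are made up, and the real ones come from `xe network-list`:

```shell
#!/bin/sh
# Sketch: emit the labelling command for each guest network in one pass.
# The UUIDs below are hypothetical placeholders.
gen_label_cmds() {
    # pairs of <name-label> <network-uuid>
    set -- cloud-guest  aaaaaaaa-1111-2222-3333-444444444444 \
           cloud-guest2 bbbbbbbb-1111-2222-3333-444444444444
    while [ $# -ge 2 ]; do
        echo "xe network-param-set name-label=$1 uuid=$2"
        shift 2
    done
}
gen_label_cmds
```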
-
- Separate Storage Network for XenServer (Optional)
- You can optionally set up a separate storage network. This should be done first on the
- host, before implementing the bonding steps below. This can be done using one or two
- available NICs. With two NICs, bonding may be done as described above. It is the administrator's
- responsibility to set up a separate storage network.
- Give the storage network a different name-label than what will be given for other
- networks.
- For the separate storage network to work correctly, it must be the only interface that
- can ping the primary storage device's IP address. For example, if eth0 is the
- management network NIC, ping -I eth0 <primary storage device IP> must fail. In all
- deployments, secondary storage devices must be pingable from the management network NIC or
- bond. If a secondary storage device has been placed on the storage network, it must also be
- pingable via the storage network NIC or bond on the hosts as well.
- You can set up two separate storage networks as well. For example, if you intend to
- implement iSCSI multipath, dedicate two non-bonded NICs to multipath. Each of the two
- networks needs a unique name-label.
- If no bonding is done, the administrator must set up and name-label the separate storage
- network on all hosts (masters and slaves).
- Here is an example to set up eth5 to access a storage network on 172.16.0.0/24.
-
-# xe pif-list host-name-label='hostname' device=eth5
-uuid(RO): ab0d3dd4-5744-8fae-9693-a022c7a3471d
-device ( RO): eth5
-#xe pif-reconfigure-ip DNS=172.16.3.3 gateway=172.16.0.1 IP=172.16.0.55 mode=static netmask=255.255.255.0 uuid=ab0d3dd4-5744-8fae-9693-a022c7a3471d
-
-
- NIC Bonding for XenServer (Optional)
- XenServer supports Source Level Balancing (SLB) NIC bonding. Two NICs can be bonded
- together to carry public, private, and guest traffic, or some combination of these. Separate
- storage networks are also possible. Here are some example supported configurations:
-
-
- 2 NICs on private, 2 NICs on public, 2 NICs on storage
-
-
- 2 NICs on private, 1 NIC on public, storage uses management network
-
-
- 2 NICs on private, 2 NICs on public, storage uses management network
-
-
- 1 NIC for private, public, and storage
-
-
- All NIC bonding is optional.
- XenServer expects that all nodes in a cluster will have the same network cabling and the same
- bonds implemented. In an installation the master will be the first host that was added to
- the cluster and the slave hosts will be all subsequent hosts added to the cluster. The bonds
- present on the master set the expectation for hosts added to the cluster later. The
- procedure to set up bonds on the master and slaves are different, and are described below.
- There are several important implications of this:
-
-
- You must set bonds on the first host added to a cluster. Then you must use xe
- commands as below to establish the same bonds in the second and subsequent hosts added
- to a cluster.
-
-
- Slave hosts in a cluster must be cabled exactly the same as the master. For example,
- if eth0 is in the private bond on the master, it must be in the private bond on every
- slave host added to the cluster.
-
-
-
- Management Network Bonding
- The administrator must bond the management network NICs prior to adding the host to
- &PRODUCT;.
-
-
- Creating a Private Bond on the First Host in the Cluster
- Use the following steps to create a bond in XenServer. These steps should be run on
- only the first host in a cluster. This example creates the cloud-private network with two
- physical NICs (eth0 and eth1) bonded into it.
-
-
- Find the physical NICs that you want to bond together.
- # xe pif-list host-name-label='hostname' device=eth0
-# xe pif-list host-name-label='hostname' device=eth1
- These commands show the eth0 and eth1 NICs and their UUIDs. Substitute the ethX
- devices of your choice. Call the UUID's returned by the above command slave1-UUID
- and slave2-UUID.
-
-
- Create a new network for the bond. For example, a new network with name
- "cloud-private".
- This label is important. &PRODUCT; looks for a network by a
- name you configure. You must use the same name-label for all hosts in the cloud for
- the management network.
- # xe network-create name-label=cloud-private
-# xe bond-create network-uuid=[uuid of cloud-private created above]
-pif-uuids=[slave1-uuid],[slave2-uuid]
-
-
- Now you have a bonded pair that can be recognized by &PRODUCT; as the management
- network.
-
-
- Public Network Bonding
- Bonding can be implemented on a separate, public network. The administrator is
- responsible for creating a bond for the public network if that network will be bonded and
- will be separate from the management network.
-
-
- Creating a Public Bond on the First Host in the Cluster
- These steps should be run on only the first host in a cluster. This example creates
- the cloud-public network with two physical NICs (eth2 and eth3) bonded into it.
-
-
- Find the physical NICs that you want to bond together.
- #xe pif-list host-name-label='hostname' device=eth2
-# xe pif-list host-name-label='hostname' device=eth3
- These commands show the eth2 and eth3 NICs and their UUIDs. Substitute the ethX
- devices of your choice. Call the UUID's returned by the above command slave1-UUID
- and slave2-UUID.
-
-
- Create a new network for the bond. For example, a new network with name
- "cloud-public".
- This label is important. &PRODUCT; looks for a network by a
- name you configure. You must use the same name-label for all hosts in the cloud for
- the public network.
- # xe network-create name-label=cloud-public
-# xe bond-create network-uuid=[uuid of cloud-public created above]
-pif-uuids=[slave1-uuid],[slave2-uuid]
-
-
- Now you have a bonded pair that can be recognized by &PRODUCT; as the public
- network.
-
-
- Adding More Hosts to the Cluster
- With the bonds (if any) established on the master, you should add additional, slave
- hosts. Run the following command for all additional hosts to be added to the cluster. This
- will cause the host to join the master in a single XenServer pool.
- # xe pool-join master-address=[master IP] master-username=root
-master-password=[your password]
-
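When several slave hosts join at once, it can help to print the join command per host before running it on each. A sketch; the addresses are hypothetical and the password is deliberately left as a placeholder:

```shell
#!/bin/sh
# Sketch: print the pool-join command to run on each slave host.
# MASTER_IP and the slave addresses are hypothetical examples.
MASTER_IP=192.168.10.2

join_cmd() {
    echo "xe pool-join master-address=$MASTER_IP master-username=root master-password=\$PASSWORD"
}

for slave in 192.168.10.3 192.168.10.4; do
    echo "on $slave: $(join_cmd)"
done
```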
-
- Complete the Bonding Setup Across the Cluster
- With all hosts added to the pool, run the cloud-setup-bond script. This script will
- complete the configuration and set up of the bonds across all hosts in the cluster.
-
-
- Copy the script from the Management Server in
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the
- master host and ensure it is executable.
-
-
- Run the script:
- # ./cloud-setup-bonding.sh
-
-
- Now the bonds are set up and configured properly across the cluster.
-
-
-
-
- Upgrading XenServer Versions
- This section tells how to upgrade XenServer software on &PRODUCT; hosts. The actual
- upgrade is described in XenServer documentation, but there are some additional steps you must
- perform before and after the upgrade.
-
- Be sure the hardware is certified compatible with the new version of XenServer.
-
- To upgrade XenServer:
-
-
- Upgrade the database. On the Management Server node:
-
-
- Back up the database:
- # mysqldump --user=root --databases cloud > cloud.backup.sql
-# mysqldump --user=root --databases cloud_usage > cloud_usage.backup.sql
-
-
- You might need to change the OS type settings for VMs running on the upgraded
- hosts.
-
-
- If you upgraded from XenServer 5.6 GA to XenServer 5.6 SP2, change any VMs
- that have the OS type CentOS 5.5 (32-bit), Oracle Enterprise Linux 5.5 (32-bit),
- or Red Hat Enterprise Linux 5.5 (32-bit) to Other Linux (32-bit). Change any VMs
- that have the 64-bit versions of these same OS types to Other Linux
- (64-bit).
-
-
- If you upgraded from XenServer 5.6 SP2 to XenServer 6.0.2, change any VMs that
- have the OS type CentOS 5.6 (32-bit), CentOS 5.7 (32-bit), Oracle Enterprise Linux
- 5.6 (32-bit), Oracle Enterprise Linux 5.7 (32-bit), Red Hat Enterprise Linux 5.6
- (32-bit) , or Red Hat Enterprise Linux 5.7 (32-bit) to Other Linux (32-bit).
- Change any VMs that have the 64-bit versions of these same OS types to Other Linux
- (64-bit).
-
-
- If you upgraded from XenServer 5.6 to XenServer 6.0.2, do all of the
- above.
-
-
-
-
- Restart the Management Server and Usage Server. You only need to do this once for
- all clusters.
- # service cloudstack-management start
-# service cloudstack-usage start
-
-
-
-
- Disconnect the XenServer cluster from &PRODUCT;.
-
-
- Log in to the &PRODUCT; UI as root.
-
-
- Navigate to the XenServer cluster, and click Actions – Unmanage.
-
-
- Watch the cluster status until it shows Unmanaged.
-
-
-
-
- Log in to one of the hosts in the cluster, and run this command to clean up the
- VLAN:
- # . /opt/xensource/bin/cloud-clean-vlan.sh
-
-
- Still logged in to the host, run the upgrade preparation script:
- # /opt/xensource/bin/cloud-prepare-upgrade.sh
- Troubleshooting: If you see the error "can't eject CD," log in to the
- VM and umount the CD, then run the script again.
-
-
- Upgrade the XenServer software on all hosts in the cluster. Upgrade the master
- first.
-
-
- Live migrate all VMs on this host to other hosts. See the instructions for live
- migration in the Administrator's Guide.
- Troubleshooting: You might see the following error when you migrate a VM:
- [root@xenserver-qa-2-49-4 ~]# xe vm-migrate live=true host=xenserver-qa-2-49-5 vm=i-2-8-VM
-You attempted an operation on a VM which requires PV drivers to be installed but the drivers were not detected.
-vm: b6cf79c8-02ee-050b-922f-49583d9f1a14 (i-2-8-VM)
- To solve this issue, run the following:
- # /opt/xensource/bin/make_migratable.sh b6cf79c8-02ee-050b-922f-49583d9f1a14
-
-
- Reboot the host.
-
-
- Upgrade to the newer version of XenServer. Use the steps in XenServer
- documentation.
-
-
- After the upgrade is complete, copy the following files from the management server
- to this host, in the directory locations shown below:
-
-
-
-
-
-
- Copy this Management Server file...
- ...to this location on the XenServer host
-
-
-
-
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py
- /opt/xensource/sm/NFSSR.py
-
-
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/setupxenserver.sh
- /opt/xensource/bin/setupxenserver.sh
-
-
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/make_migratable.sh
- /opt/xensource/bin/make_migratable.sh
-
-
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-clean-vlan.sh
- /opt/xensource/bin/cloud-clean-vlan.sh
-
-
-
-
-
-
- Run the following script:
- # /opt/xensource/bin/setupxenserver.sh
- Troubleshooting: If you see the following error message, you can safely ignore
- it.
- mv: cannot stat `/etc/cron.daily/logrotate': No such file or directory
-
-
- Plug in the storage repositories (physical block devices) to the XenServer
- host:
- # for pbd in `xe pbd-list currently-attached=false| grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
-          Note: If you add a host to this XenServer pool, you need to migrate all VMs on
-          this host to other hosts, and eject this host from the XenServer pool.
-
-
-
-
- Repeat these steps to upgrade every host in the cluster to the same version of
- XenServer.
-
-
- Run the following command on one host in the XenServer cluster to clean up the host
- tags:
- # for host in $(xe host-list | grep ^uuid | awk '{print $NF}') ; do xe host-param-clear uuid=$host param-name=tags; done;
-
- When copying and pasting a command, be sure the command has pasted as a single line
- before executing. Some document viewers may introduce unwanted line breaks in copied
- text.
-
-
-
- Reconnect the XenServer cluster to &PRODUCT;.
-
-
- Log in to the &PRODUCT; UI as root.
-
-
- Navigate to the XenServer cluster, and click Actions – Manage.
-
-
- Watch the status to see that all the hosts come up.
-
-
-
-
- After all hosts are up, run the following on one host in the cluster:
- # /opt/xensource/bin/cloud-clean-vlan.sh
-
-
-
-
diff --git a/docs/en-US/cloud-infrastructure-concepts.xml b/docs/en-US/cloud-infrastructure-concepts.xml
deleted file mode 100644
index 2ba228aa4dd..00000000000
--- a/docs/en-US/cloud-infrastructure-concepts.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Cloud Infrastructure Concepts
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/cloud-infrastructure-overview.xml b/docs/en-US/cloud-infrastructure-overview.xml
deleted file mode 100644
index 49a413871a5..00000000000
--- a/docs/en-US/cloud-infrastructure-overview.xml
+++ /dev/null
@@ -1,79 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Cloud Infrastructure Overview
-
- The Management Server manages one or more zones (typically,
- datacenters) containing host computers where guest virtual
- machines will run. The cloud infrastructure is organized as follows:
-
-
-
-
- Zone: Typically, a zone is equivalent to a single
- datacenter. A zone consists of one or more pods and secondary
- storage.
-
-
-
-
- Pod: A pod is usually one rack of hardware that includes a
- layer-2 switch and one or more clusters.
-
-
-
-
- Cluster: A cluster consists of one or more hosts and primary
- storage.
-
-
-
-
- Host: A single compute node within a cluster. The hosts are
- where the actual cloud services run in the form of guest
- virtual machines.
-
-
-
-
- Primary storage is associated with a cluster, and it stores
- the disk volumes for all the VMs running on hosts in that cluster.
-
-
-
- Secondary storage is associated with a zone, and it stores
- templates, ISO images, and disk volume snapshots.
-
-
-
-
-
-
-
- infrastructure_overview.png: Nested organization of a zone
-
- More Information
- For more information, see documentation on cloud infrastructure concepts.
-
diff --git a/docs/en-US/cloudmonkey.xml b/docs/en-US/cloudmonkey.xml
deleted file mode 100644
index be4d17c3aa1..00000000000
--- a/docs/en-US/cloudmonkey.xml
+++ /dev/null
@@ -1,264 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- CloudMonkey
- CloudMonkey is the &PRODUCT; Command Line Interface (CLI). It is written in Python. CloudMonkey can be used both as an interactive shell and as a command line tool which simplifies &PRODUCT; configuration and management. It can be used with &PRODUCT; releases since the 4.0.x branch.
-
-     CloudMonkey is still under development and should be considered a Work In Progress (WIP); the wiki is the most up-to-date documentation:
- https://cwiki.apache.org/CLOUDSTACK/cloudstack-cloudmonkey-cli.html
-
-
-
- Installing CloudMonkey
-     CloudMonkey depends on readline, pygments, and prettytable. When installing from source you will need to resolve those dependencies yourself; when installing from the cheese shop, the dependencies are installed automatically.
-     There are three ways to get CloudMonkey: via the official &PRODUCT; source releases, via a community-maintained distribution at the cheese shop, or, for developers, directly from the git repository in tools/cli/.
-
-
-
- Via the official Apache &PRODUCT; releases as well as the git repository.
-
-
-
-
-
- Via a community maintained package on Cheese Shop
- pip install cloudmonkey
-
-
-
-
-
-
- Configuration
-     To configure CloudMonkey you can edit the ~/.cloudmonkey/config file in the user's home directory as shown below. The values can also be set interactively at the cloudmonkey prompt. Logs are kept in ~/.cloudmonkey/log, and history is stored in ~/.cloudmonkey/history. Discovered apis are listed in ~/.cloudmonkey/cache. Only the log and history file locations can be customized, by setting the appropriate file paths in ~/.cloudmonkey/config.
-
-$ cat ~/.cloudmonkey/config
-[core]
-log_file = /Users/sebastiengoasguen/.cloudmonkey/log
-asyncblock = true
-paramcompletion = false
-history_file = /Users/sebastiengoasguen/.cloudmonkey/history
-
-[ui]
-color = true
-prompt = >
-tabularize = false
-
-[user]
-secretkey =VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ
-apikey = plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdMkAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg
-
-[server]
-path = /client/api
-host = localhost
-protocol = http
-port = 8080
-timeout = 3600
-
- The values can also be set at the CloudMonkey prompt. The API and secret keys are obtained via the &PRODUCT; UI or via a raw api call.
-
- set prompt myprompt>
-myprompt> set host localhost
-myprompt> set port 8080
-myprompt> set apikey
-myprompt> set secretkey
-]]>
-
- You can use CloudMonkey to interact with a local cloud, and even with a remote public cloud. You just need to set the host value properly and obtain the keys from the cloud administrator.
-
-
-
- API Discovery
-
-    In &PRODUCT; 4.0.x releases the list of available api calls is pre-cached, while starting with &PRODUCT; 4.1 an API discovery service is enabled and CloudMonkey automatically discovers the api calls available on the management server. The sync command in CloudMonkey pulls the list of apis accessible to your user role, along with their help docs, and stores them in ~/.cloudmonkey/cache. This lets CloudMonkey adapt to changes on the management server: if the sysadmin enables a plugin such as Nicira NVP for your user role, you pick up its api calls on the next sync. New verbs and grammar (DSL) rules are created on the fly.
-
- To discover the APIs available do:
-
- > sync
-324 APIs discovered and cached
-
-
-
-
- Tabular Output
-     The number of key/value pairs returned by api calls can be large, resulting in very long output. To make the output easier to read, tabular formatting can be set up. You may enable tabular listing and choose a set of column fields using the filter param, which takes a comma-separated list of fields; put any field containing a space under double quotes. The resulting table will have the same sequence of fields as the filter provided.
- To enable it, use the set function and create filters like so:
-
-> set tabularize true
-> list users filter=id,domain,account
-count = 1
-user:
-+--------------------------------------+--------+---------+
-| id | domain | account |
-+--------------------------------------+--------+---------+
-| 7ed6d5da-93b2-4545-a502-23d20b48ef2a | ROOT | admin |
-+--------------------------------------+--------+---------+
-
-
-
-
- Interactive Shell Usage
-     To start learning CloudMonkey, the best way is to use the interactive shell. Simply type cloudmonkey at the prompt and you should get the interactive shell.
-     At the CloudMonkey prompt press the tab key twice; you will see all potential verbs available. Pick one, enter a space and then press tab twice. You will see all actions available for that verb.
-
-
-EOF assign cancel create detach extract ldap prepare reconnect restart shell update
-activate associate change delete disable generate list query register restore start upload
-add attach configure deploy enable get mark quit remove revoke stop
-api authorize copy destroy exit help migrate reboot reset set suspend
-cloudmonkey>create
-account diskoffering loadbalancerrule portforwardingrule snapshot tags vpc
-autoscalepolicy domain network privategateway snapshotpolicy template vpcoffering
-autoscalevmgroup firewallrule networkacl project sshkeypair user vpnconnection
-autoscalevmprofile instancegroup networkoffering remoteaccessvpn staticroute virtualrouterelement vpncustomergateway
-condition ipforwardingrule physicalnetwork securitygroup storagenetworkiprange vlaniprange vpngateway
-counter lbstickinesspolicy pod serviceoffering storagepool volume zone
-]]>
-
-     Pick one action, enter a space, and press the tab key to obtain the list of parameters for that specific api call.
-
-create network
-account= domainid= isAsync= networkdomain= projectid= vlan=
-acltype= endip= name= networkofferingid= startip= vpcid=
-displaytext= gateway= netmask= physicalnetworkid= subdomainaccess= zoneid=
-]]>
-
- To get additional help on that specific api call you can use the following:
-
-create network -h
-Creates a network
-Required args: displaytext name networkofferingid zoneid
-Args: account acltype displaytext domainid endip gateway isAsync name netmask networkdomain networkofferingid physicalnetworkid projectid startip subdomainaccess vlan vpcid zoneid
-
-cloudmonkey>create network -help
-Creates a network
-Required args: displaytext name networkofferingid zoneid
-Args: account acltype displaytext domainid endip gateway isAsync name netmask networkdomain networkofferingid physicalnetworkid projectid startip subdomainaccess vlan vpcid zoneid
-
-cloudmonkey>create network --help
-Creates a network
-Required args: displaytext name networkofferingid zoneid
-Args: account acltype displaytext domainid endip gateway isAsync name netmask networkdomain networkofferingid physicalnetworkid projectid startip subdomainaccess vlan vpcid zoneid
-cloudmonkey>
-]]>
-
- Note the required arguments necessary for the calls.
-     To find out the required parameter values, a debugger console on the &PRODUCT; UI can be very useful. For instance, using Firebug on Firefox, you can navigate the UI and check the parameter values for each call the UI makes as you go.
-
-
-
- Starting a Virtual Machine instance with CloudMonkey
- To start a virtual machine instance we will use the deploy virtualmachine call.
-
-deploy virtualmachine -h
-Creates and automatically starts a virtual machine based on a service offering, disk offering, and template.
-Required args: serviceofferingid templateid zoneid
-Args: account diskofferingid displayname domainid group hostid hypervisor ipaddress iptonetworklist isAsync keyboard keypair name networkids projectid securitygroupids securitygroupnames serviceofferingid size startvm templateid userdata zoneid
-]]>
-
- The required arguments are serviceofferingid, templateid and zoneid
- In order to specify the template that we want to use, we can list all available templates with the following call:
-
-list templates templatefilter=all
-count = 2
-template:
-========
-domain = ROOT
-domainid = 8a111e58-e155-4482-93ce-84efff3c7c77
-zoneid = e1bfdfaf-3d9b-43d4-9aea-2c9f173a1ae7
-displaytext = SystemVM Template (XenServer)
-ostypeid = 849d7d0a-9fbe-452a-85aa-70e0a0cbc688
-passwordenabled = False
-id = 6d360f79-4de9-468c-82f8-a348135d298e
-size = 2101252608
-isready = True
-templatetype = SYSTEM
-zonename = devcloud
-...
-]]>
-
-     In this snippet, DevCloud was used, and only the beginning of the output for the first template, the SystemVM template, is shown.
- Similarly to get the serviceofferingid you would do:
-
-list serviceofferings | grep id
-id = ef2537ad-c70f-11e1-821b-0800277e749c
-id = c66c2557-12a7-4b32-94f4-48837da3fa84
-id = 3d8b82e5-d8e7-48d5-a554-cf853111bc50
-]]>
-
-     Note that we can use Unix pipes as well as standard Linux commands within the interactive shell. Finally, we would start an instance with the following call:
-
-deploy virtualmachine templateid=13ccff62-132b-4caf-b456-e8ef20cbff0e zoneid=e1bfdfaf-3d9b-43d4-9aea-2c9f173a1ae7 serviceofferingid=ef2537ad-c70f-11e1-821b-0800277e749c
-jobprocstatus = 0
-created = 2013-03-05T13:04:51-0800
-cmd = com.cloud.api.commands.DeployVMCmd
-userid = 7ed6d5da-93b2-4545-a502-23d20b48ef2a
-jobstatus = 1
-jobid = c441d894-e116-402d-aa36-fdb45adb16b7
-jobresultcode = 0
-jobresulttype = object
-jobresult:
-=========
-virtualmachine:
-==============
-domain = ROOT
-domainid = 8a111e58-e155-4482-93ce-84efff3c7c77
-haenable = False
-templatename = tiny Linux
-...
-]]>
-
- The instance would be stopped with:
-
-cloudmonkey>stop virtualmachine id=7efe0377-4102-4193-bff8-c706909cc2d2
-
-     The ids that you use will differ from this example. Make sure you use the ones that correspond to your &PRODUCT; cloud.
-
-
-
- Scripting with CloudMonkey
-     All previous examples use CloudMonkey via the interactive shell; however, it can also be used as a straightforward CLI, passing commands to the cloudmonkey command as shown below.
- $cloudmonkey list users
-     As such it can be used in shell scripts; it can receive commands via stdin, and its output can be parsed like that of any other Unix command.
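As a sketch of such a script (assuming cloudmonkey is on the PATH and ~/.cloudmonkey/config already holds valid keys for your cloud), the ids of all users could be collected like this:

```shell
# Non-interactive use: each invocation runs one command and exits.
# The "id = <uuid>" lines of the output are filtered with standard tools.
cloudmonkey list users | grep "^id" | awk '{print $3}'

# Commands can also be fed via stdin, e.g. from a here-document:
cloudmonkey <<EOF
set tabularize true
list zones filter=id,name
EOF
```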
-
-
-
diff --git a/docs/en-US/cloudstack-api.xml b/docs/en-US/cloudstack-api.xml
deleted file mode 100644
index 891b19f580b..00000000000
--- a/docs/en-US/cloudstack-api.xml
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- &PRODUCT; API
- The &PRODUCT; API is a low level API that has been used to implement the &PRODUCT; web UIs. It is also a good basis for implementing other popular APIs such as EC2/S3 and emerging DMTF standards.
- Many &PRODUCT; API calls are asynchronous. These will return a Job ID immediately when called. This Job ID can be used to query the status of the job later. Also, status calls on impacted resources will provide some indication of their state.
- The API has a REST-like query basis and returns results in XML or JSON.
- See the Developer’s Guide and the API Reference.
-
diff --git a/docs/en-US/cloudstack.ent b/docs/en-US/cloudstack.ent
deleted file mode 100644
index abb18851bcf..00000000000
--- a/docs/en-US/cloudstack.ent
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-
-
-
-
diff --git a/docs/en-US/cloudstack.xml b/docs/en-US/cloudstack.xml
deleted file mode 100644
index 0b762a2da1f..00000000000
--- a/docs/en-US/cloudstack.xml
+++ /dev/null
@@ -1,80 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
- &PRODUCT; Complete Documentation
- Apache CloudStack
- 4.0.0-incubating
- 1
-
-
-
- Complete documentation for &PRODUCT;.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/cluster-add.xml b/docs/en-US/cluster-add.xml
deleted file mode 100644
index 3046c5e0dfd..00000000000
--- a/docs/en-US/cluster-add.xml
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Adding a Cluster
- You need to tell &PRODUCT; about the hosts that it will manage. Hosts exist inside clusters, so before you begin adding hosts to the cloud, you must add at least one cluster.
-
-
-
-
diff --git a/docs/en-US/compatibility-matrix.xml b/docs/en-US/compatibility-matrix.xml
deleted file mode 100644
index 8576f71e781..00000000000
--- a/docs/en-US/compatibility-matrix.xml
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Compatibility Matrix
-
-
-
-
- Hypervisor
- CloudStack 2.1.x
- CloudStack 2.2.x
- CloudStack 3.0.0
- CloudStack 3.0.1
- CloudStack 3.0.2
- CloudStack 3.0.3
-
-
-
-
- XenServer 5.6
- Yes
- Yes
- No
- No
- No
- No
-
-
- XenServer 5.6 FP1
- Yes
- Yes
- No
- No
- No
- No
-
-
- XenServer 5.6 SP2
- Yes
- Yes
- No
- No
- Yes
- Yes
-
-
- XenServer 6.0.0
- No
- No
- No
- No
- No
- Yes
-
-
- XenServer 6.0.2
- No
- No
- Yes
- Yes
- Yes
- Yes
-
-
- XenServer 6.1
- No
- No
- No
- No
- No
- No
-
-
- KVM (RHEL 6.0 or 6.1)
- Yes
- Yes
- Yes
- Yes
- Yes
- Yes
-
-
- VMware (vSphere and vCenter, both version 4.1)
- Yes
- Yes
- Yes
- Yes
- Yes
- Yes
-
-
-
-
-
diff --git a/docs/en-US/compute-disk-service-offerings.xml b/docs/en-US/compute-disk-service-offerings.xml
deleted file mode 100644
index 1fd2a91a38b..00000000000
--- a/docs/en-US/compute-disk-service-offerings.xml
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Compute and Disk Service Offerings
- A service offering is a set of virtual hardware features such as CPU core count and speed, memory, and disk size. The &PRODUCT; administrator can set up various offerings, and then end users choose from the available offerings when they create a new VM. A service offering includes the following elements:
-
- CPU, memory, and network resource guarantees
- How resources are metered
- How the resource usage is charged
- How often the charges are generated
-
- For example, one service offering might allow users to create a virtual machine instance that is equivalent to a 1 GHz Intel® Core™ 2 CPU, with 1 GB memory at $0.20/hour, with network traffic metered at $0.10/GB. Based on the user’s selected offering, &PRODUCT; emits usage records that can be integrated with billing systems. &PRODUCT; separates service offerings into compute offerings and disk offerings. The computing service offering specifies:
-
- Guest CPU
- Guest RAM
- Guest Networking type (virtual or direct)
- Tags on the root disk
-
- The disk offering specifies:
-
- Disk size (optional). An offering without a disk size will allow users to pick their own
- Tags on the data disk
-
-
-
-
-
-
-
diff --git a/docs/en-US/concepts.xml b/docs/en-US/concepts.xml
deleted file mode 100644
index e20f442a935..00000000000
--- a/docs/en-US/concepts.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Concepts
-
-
-
-
diff --git a/docs/en-US/configure-acl.xml b/docs/en-US/configure-acl.xml
deleted file mode 100644
index 3ac2b7462c4..00000000000
--- a/docs/en-US/configure-acl.xml
+++ /dev/null
@@ -1,287 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Configuring Network Access Control List
- Define Network Access Control List (ACL) on the VPC virtual router to control incoming
- (ingress) and outgoing (egress) traffic between the VPC tiers, and the tiers and Internet. By
-    default, all incoming traffic to the guest networks is blocked and all outgoing traffic from
-    guest networks is allowed. Once you add an ACL rule for outgoing traffic, only the outgoing
-    traffic specified in that rule is allowed; the rest is blocked. To open the ports, you must
- create a new network ACL. The network ACLs can be created for the tiers only if the NetworkACL
- service is supported.
-
- About Network ACL Lists
- In &PRODUCT; terminology, Network ACL is a group of Network ACL items. Network ACL items
- are nothing but numbered rules that are evaluated in order, starting with the lowest numbered
- rule. These rules determine whether traffic is allowed in or out of any tier associated with
- the network ACL. You need to add the Network ACL items to the Network ACL, then associate the
- Network ACL with a tier. Network ACL is associated with a VPC and can be assigned to multiple
-      VPC tiers within a VPC. A tier is associated with a Network ACL at all times. Each tier
-      can be associated with only one ACL.
-      The default Network ACL is used when no ACL is associated. The default behavior is that all
-      incoming traffic to the tiers is blocked and all outgoing traffic from the tiers is allowed. The default network
-      ACL cannot be removed or modified. The contents of the default Network ACL are:
-
-
-
-
-
-
-
-
-
- Rule
- Protocol
- Traffic type
- Action
- CIDR
-
-
-
-
- 1
- All
- Ingress
- Deny
- 0.0.0.0/0
-
-
- 2
- All
- Egress
- Deny
- 0.0.0.0/0
-
-
-
-
-
-
- Creating ACL Lists
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
-          All the VPCs that you have created for the account are listed on the page.
-
-
- Click the Configure button of the VPC.
- For each tier, the following options are displayed:
-
-
- Internal LB
-
-
- Public LB IP
-
-
- Static NAT
-
-
- Virtual Machines
-
-
- CIDR
-
-
- The following router information is displayed:
-
-
- Private Gateways
-
-
- Public IP Addresses
-
-
- Site-to-Site VPNs
-
-
- Network ACL Lists
-
-
-
-
- Select Network ACL Lists.
- The following default rules are displayed in the Network ACLs page: default_allow,
- default_deny.
-
-
- Click Add ACL Lists, and specify the following:
-
-
- ACL List Name: A name for the ACL list.
-
-
- Description: A short description of the ACL list
- that can be displayed to users.
-
-
-
-
-
-
- Creating an ACL Rule
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
-          All the VPCs that you have created for the account are listed on the page.
-
-
- Click the Configure button of the VPC.
-
-
- Select Network ACL Lists.
- In addition to the custom ACL lists you have created, the following default rules are
- displayed in the Network ACLs page: default_allow, default_deny.
-
-
- Select the desired ACL list.
-
-
- Select the ACL List Rules tab.
- To add an ACL rule, fill in the following fields to specify what kind of network
- traffic is allowed in the VPC.
-
-
- Rule Number: The order in which the rules are
- evaluated.
-
-
- CIDR: The CIDR acts as the Source CIDR for the
- Ingress rules, and Destination CIDR for the Egress rules. To accept traffic only from
- or to the IP addresses within a particular address block, enter a CIDR or a
- comma-separated list of CIDRs. The CIDR is the base IP address of the incoming
- traffic. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
-
-
-              Action: The action to be taken: allow or block the
-              traffic.
-
-
- Protocol: The networking protocol that sources
- use to send traffic to the tier. The TCP and UDP protocols are typically used for data
- exchange and end-user communications. The ICMP protocol is typically used to send
-              error messages or network monitoring data. The All option matches all traffic; the other option
-              is Protocol Number.
-
-
- Start Port, End
- Port (TCP, UDP only): A range of listening ports that are the destination
- for the incoming traffic. If you are opening a single port, use the same number in
- both fields.
-
-
- Protocol Number: The protocol number associated
- with IPv4 or IPv6. For more information, see Protocol
- Numbers.
-
-
- ICMP Type, ICMP
- Code (ICMP only): The type of message and error code that will be
- sent.
-
-
- Traffic Type: The type of traffic: Incoming or
- outgoing.
-
-
-
-
- Click Add. The ACL rule is added.
- You can edit the tags assigned to the ACL rules and delete the ACL rules you have
- created. Click the appropriate button in the Details tab.
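For scripted setups, the equivalent rule can be created via the createNetworkACL API call, here invoked through CloudMonkey (a sketch; parameter names may vary between releases, and the aclid value is a placeholder for your own ACL list id):

```shell
# Open TCP port 22 for ingress traffic on an existing ACL list.
# <your-acl-list-id> is a placeholder, not a real id.
cloudmonkey create networkacl protocol=TCP startport=22 endport=22 \
  traffictype=Ingress action=Allow cidrlist=0.0.0.0/0 \
  aclid=<your-acl-list-id>
```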
-
-
-
-
- Creating a Tier with Custom ACL List
-
-
- Create a VPC.
-
-
- Create a custom ACL list.
-
-
- Add ACL rules to the ACL list.
-
-
- Create a tier in the VPC.
- Select the desired ACL list while creating a tier.
-
-
- Click OK.
-
-
-
-
- Assigning a Custom ACL List to a Tier
-
-
- Create a VPC.
-
-
- Create a tier in the VPC.
-
-
- Associate the tier with the default ACL rule.
-
-
- Create a custom ACL list.
-
-
- Add ACL rules to the ACL list.
-
-
- Select the tier for which you want to assign the custom ACL.
-
-
- Click the Replace ACL List icon.
-
-
-
-
- replace-acl-icon.png: button to replace an ACL list
-
-
- The Replace ACL List dialog is displayed.
-
-
- Select the desired ACL list.
-
-
- Click OK.
-
-
-
-
diff --git a/docs/en-US/configure-guest-traffic-in-advanced-zone.xml b/docs/en-US/configure-guest-traffic-in-advanced-zone.xml
deleted file mode 100644
index fb6685091a5..00000000000
--- a/docs/en-US/configure-guest-traffic-in-advanced-zone.xml
+++ /dev/null
@@ -1,79 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Configure Guest Traffic in an Advanced Zone
- These steps assume you have already logged in to the &PRODUCT; UI. To configure the base
- guest network:
-
-
- In the left navigation, choose Infrastructure. On Zones, click View More, then click the
- zone to which you want to add a network.
-
-
- Click the Network tab.
-
-
- Click Add guest network.
- The Add guest network window is displayed:
-
-
-
-
-
- networksetupzone.png: Depicts network setup in a single zone
-
-
-
-
- Provide the following information:
-
-
-              Name: The name of the network. This will be
- user-visible
-
-
- Display Text: The description of the network. This
- will be user-visible
-
-
- Zone: The zone in which you are configuring the
- guest network.
-
-
- Network offering: If the administrator has
- configured multiple network offerings, select the one you want to use for this
- network
-
-
- Guest Gateway: The gateway that the guests should
- use
-
-
- Guest Netmask: The netmask in use on the subnet the
- guests will use
-
-
-
-
- Click OK.
-
-
-
\ No newline at end of file
diff --git a/docs/en-US/configure-package-repository.xml b/docs/en-US/configure-package-repository.xml
deleted file mode 100644
index cda46773f53..00000000000
--- a/docs/en-US/configure-package-repository.xml
+++ /dev/null
@@ -1,69 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Configure package repository
- &PRODUCT; is only distributed from source from the official mirrors.
- However, members of the CloudStack community may build convenience binaries
- so that users can install Apache CloudStack without needing to build from
- source.
-
-
- If you didn't follow the steps to build your own packages from source
- in the sections for or
- you may find pre-built
- DEB and RPM packages for your convenience linked from the
- downloads
- page.
-
-
- These repositories contain both the Management Server and KVM Hypervisor packages.
-
-
- DEB package repository
- You can add a DEB package repository to your apt sources with the following commands. Please note that only packages for Ubuntu 12.04 LTS (precise) are being built at this time.
- Use your preferred editor and open (or create) /etc/apt/sources.list.d/cloudstack.list. Add the community provided repository to the file:
-deb http://cloudstack.apt-get.eu/ubuntu precise 4.1
- We now have to add the public key to the trusted keys.
-       $ wget -O - http://cloudstack.apt-get.eu/release.asc | apt-key add -
- Now update your local apt cache.
-       $ apt-get update
- Your DEB package repository should now be configured and ready for use.
-
-
- RPM package repository
-       There is an RPM package repository for &PRODUCT; so you can easily install on RHEL-based platforms.
- If you're using an RPM-based system, you'll want to add the Yum repository so that you can install &PRODUCT; with Yum.
- Yum repository information is found under /etc/yum.repos.d. You'll see several .repo files in this directory, each one denoting a specific repository.
- To add the &PRODUCT; repository, create /etc/yum.repos.d/cloudstack.repo and insert the following information.
-
-[cloudstack]
-name=cloudstack
-baseurl=http://cloudstack.apt-get.eu/rhel/4.1/
-enabled=1
-gpgcheck=0
-
- Now you should be able to install CloudStack using Yum.
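With the repository in place, installation is a single Yum transaction. The package name below is assumed from the 4.1-era packaging and may differ between releases:

```shell
# Install the Management Server packages from the configured repository.
yum install cloudstack-management
```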
-
-
diff --git a/docs/en-US/configure-public-traffic-in-an-advanced-zone.xml b/docs/en-US/configure-public-traffic-in-an-advanced-zone.xml
deleted file mode 100644
index 7a61cd380af..00000000000
--- a/docs/en-US/configure-public-traffic-in-an-advanced-zone.xml
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Configure Public Traffic in an Advanced Zone
- In a zone that uses advanced networking, you need to configure at least one range of IP
- addresses for Internet traffic.
-
\ No newline at end of file
diff --git a/docs/en-US/configure-snmp-rhel.xml b/docs/en-US/configure-snmp-rhel.xml
deleted file mode 100644
index bd227ff8ed5..00000000000
--- a/docs/en-US/configure-snmp-rhel.xml
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Configuring SNMP Community String on a RHEL Server
- The SNMP Community string is similar to a user id or password that provides access to a
-    network device, such as a router. This string is sent along with all SNMP requests. If the
- community string is correct, the device responds with the requested information. If the
- community string is incorrect, the device discards the request and does not respond.
- The NetScaler device uses SNMP to communicate with the VMs. You must install SNMP and
-    configure the SNMP community string for secure communication between the NetScaler device and the
- RHEL machine.
-
-
-          Ensure that SNMP is installed on the RHEL machine. If not, run the following command:
- yum install net-snmp-utils
-
-
- Edit the /etc/snmp/snmpd.conf file to allow the SNMP polling from the NetScaler
- device.
-
-
- Map the community name into a security name (local and mynetwork, depending on where
- the request is coming from):
-
- Use a strong password instead of public when you edit the following table.
-
- # sec.name source community
-com2sec local localhost public
-com2sec mynetwork 0.0.0.0 public
-
- Setting to 0.0.0.0 allows all IPs to poll the NetScaler server.
-
-
-
- Map the security names into group names:
- # group.name sec.model sec.name
-group MyRWGroup v1 local
-group MyRWGroup v2c local
-group MyROGroup v1 mynetwork
-group MyROGroup v2c mynetwork
-
-
- Create a view that the two groups will be granted access to:
- incl/excl subtree mask view all included .1
-
-
- Grant the two groups access to the view you created, with different write
- permissions.
- # context sec.model sec.level prefix read write notif
- access MyROGroup "" any noauth exact all none none
- access MyRWGroup "" any noauth exact all all all
-
-
-
-
- Unblock SNMP in iptables.
- iptables -A INPUT -p udp --dport 161 -j ACCEPT
-
-
- Start the SNMP service:
- service snmpd start
-
-
- Ensure that the SNMP service is started automatically during the system startup:
- chkconfig snmpd on
-
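Collected together, the snmpd.conf changes from the steps above look like the following fragment (the community string public is a placeholder; substitute a strong value as noted above):

```
# /etc/snmp/snmpd.conf -- fragment assembled from the steps above
#        sec.name   source      community
com2sec  local      localhost   public
com2sec  mynetwork  0.0.0.0     public

#      group.name  sec.model  sec.name
group  MyRWGroup   v1         local
group  MyRWGroup   v2c        local
group  MyROGroup   v1         mynetwork
group  MyROGroup   v2c        mynetwork

#     name  incl/excl  subtree
view  all   included   .1

#       group      context sec.model sec.level prefix read write notif
access  MyROGroup  ""      any       noauth    exact  all  none  none
access  MyRWGroup  ""      any       noauth    exact  all  all   all
```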
-
-
diff --git a/docs/en-US/configure-usage-server.xml b/docs/en-US/configure-usage-server.xml
deleted file mode 100644
index 83bed07b349..00000000000
--- a/docs/en-US/configure-usage-server.xml
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Configuring the Usage Server
- To configure the usage server:
-
- Be sure the Usage Server has been installed. This requires extra steps beyond just installing the &PRODUCT; software. See Installing the Usage Server (Optional) in the Advanced Installation Guide.
- Log in to the &PRODUCT; UI as administrator.
- Click Global Settings.
- In Search, type usage. Find the configuration parameter that controls the behavior you want to set. See the table below for a description of the available parameters.
- In Actions, click the Edit icon.
- Type the desired value and click the Save icon.
- Restart the Management Server (as usual with any global configuration change) and also the Usage Server:
- # service cloudstack-management restart
-# service cloudstack-usage restart
-
-
- The following table shows the global configuration settings that control the behavior of the Usage Server.
-
-
-
-
- Parameter Name
- Description
-
-
-
-
- enable.usage.server
- Whether the Usage Server is active.
-
-
- usage.aggregation.timezone
- Time zone of usage records. Set this if the usage records and daily job execution are in different time zones. For example, with the following settings, the usage job will run at PST 00:15 and generate usage records for the 24 hours from 00:00:00 GMT to 23:59:59 GMT:
- usage.stats.job.exec.time = 00:15
-usage.execution.timezone = PST
-usage.aggregation.timezone = GMT
-
- Valid values for the time zone are specified in
- Default: GMT
-
-
-
- usage.execution.timezone
- The time zone of usage.stats.job.exec.time. Valid values for the time zone are specified in
- Default: The time zone of the management server.
-
-
-
- usage.sanity.check.interval
- The number of days between sanity checks. Set this in order to periodically search for records with erroneous data before issuing customer invoices. For example, this checks for VM usage records created after the VM was destroyed, and similar checks for templates, volumes, and so on. It also checks for usage times longer than the aggregation range. If any issue is found, the alert ALERT_TYPE_USAGE_SANITY_RESULT = 21 is sent.
-
-
- usage.stats.job.aggregation.range
- The time period in minutes between Usage Server processing jobs. For example, if you set it to 1440, the Usage Server will run once per day. If you set it to 600, it will run every ten hours. In general, when a Usage Server job runs, it processes all events generated since usage was last run.
- There is special handling for the case of 1440 (once per day). In this case the Usage Server does not necessarily process all records since Usage was last run. &PRODUCT; assumes that you require processing once per day for the previous, complete day's records. For example, if the current day is October 7, then it is assumed you would like to process records for October 6, from midnight to midnight. &PRODUCT; assumes this "midnight to midnight" is relative to the usage.execution.timezone.
- Default: 1440
-
-
-
- usage.stats.job.exec.time
- The time when the Usage Server processing will start. It is specified in 24-hour format (HH:MM) in the time zone of the server, which should be GMT. For example, to start the Usage job at 10:30 GMT, enter "10:30".
- If usage.stats.job.aggregation.range is also set, and its value is not 1440, then its value will be added to usage.stats.job.exec.time to get the time to run the Usage Server job again. This is repeated until 24 hours have elapsed, and the next day's processing begins again at usage.stats.job.exec.time.
- Default: 00:15.
-
-
-
-
-
- For example, suppose that your server is in GMT, your user population is predominantly in the East Coast of the United States, and you would like to process usage records every night at 2 AM local (EST) time. Choose these settings:
-
- enable.usage.server = true
- usage.execution.timezone = America/New_York
- usage.stats.job.exec.time = 07:00. This will run the Usage job at 2:00 AM EST. Note that this will shift by an hour as the East Coast of the U.S. enters and exits Daylight Saving Time.
- usage.stats.job.aggregation.range = 1440
-
- With this configuration, the Usage job will run every night at 2 AM EST and will process records for the previous day’s midnight-midnight as defined by the EST (America/New_York) time zone.
- Because the special value 1440 has been used for usage.stats.job.aggregation.range, the Usage
- Server will ignore the data between midnight and 2 AM. That data will be included in the
- next day's run.
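The rescheduling rule for non-1440 aggregation ranges can be sketched as a small shell helper (illustrative only, not CloudStack code):

```shell
# run_times EXEC_MIN RANGE_MIN: print the HH:MM start times of Usage jobs
# within one day, per the rule above -- first run at the exec time, then
# every RANGE_MIN minutes until 24 hours have elapsed.
run_times() {
  t=$1
  end=$(( $1 + 1440 ))
  while [ "$t" -lt "$end" ]; do
    printf '%02d:%02d\n' $(( (t / 60) % 24 )) $(( t % 60 ))
    t=$(( t + $2 ))
  done
}

run_times 15 600   # exec time 00:15, range 600 minutes -> 00:15, 10:15, 20:15
```

With a range of 1440 the helper prints a single daily run, matching the once-per-day special case described above.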
-
-
-
diff --git a/docs/en-US/configure-virtual-router.xml b/docs/en-US/configure-virtual-router.xml
deleted file mode 100644
index 8740c0cef8b..00000000000
--- a/docs/en-US/configure-virtual-router.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Configuring the Virtual Router
- You can set the following:
-
- IP range
- Supported network services
- Default domain name for the network serviced by the virtual router
- Gateway IP address
- How often &PRODUCT; fetches network usage statistics from &PRODUCT; virtual routers. If you want to collect traffic metering data from the virtual router, set the global configuration parameter router.stats.interval. If you are not using the virtual router to gather network usage statistics, set it to 0.
-
-
-
diff --git a/docs/en-US/configure-vpc.xml b/docs/en-US/configure-vpc.xml
deleted file mode 100644
index e0e2ee93f19..00000000000
--- a/docs/en-US/configure-vpc.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Configuring a Virtual Private Cloud
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/configure-vpn.xml b/docs/en-US/configure-vpn.xml
deleted file mode 100644
index f389f30efc3..00000000000
--- a/docs/en-US/configure-vpn.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Configuring Remote Access VPN
- To set up VPN for the cloud:
-
- Log in to the &PRODUCT; UI as an administrator or end user.
- In the left navigation, click Global Settings.
- Set the following global configuration parameters.
-
- remote.access.vpn.client.ip.range – The range of IP addresses to be allocated to remote access VPN clients. The first IP in the range is used by the VPN server.
- remote.access.vpn.psk.length – Length of the IPSec key.
- remote.access.vpn.user.limit – Maximum number of VPN users per account.
-
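For illustration, a typical set of values for those three parameters might look like this (the values are examples, not recommendations):

```
remote.access.vpn.client.ip.range = 10.1.2.1-10.1.2.8   # first IP is used by the VPN server
remote.access.vpn.psk.length      = 24
remote.access.vpn.user.limit      = 8
```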
- To enable VPN for a particular network:
-
- Log in as a user or administrator to the &PRODUCT; UI.
- In the left navigation, click Network.
- Click the name of the network you want to work with.
- Click View IP Addresses.
- Click one of the displayed IP address names.
- Click the Enable VPN button.
-
-
-
-
- AttachDiskButton.png: button to attach a volume
-
-
- The IPsec key is displayed in a popup window.
-
-
diff --git a/docs/en-US/configure-xenserver-dom0-memory.xml b/docs/en-US/configure-xenserver-dom0-memory.xml
deleted file mode 100644
index 0a02d3e3818..00000000000
--- a/docs/en-US/configure-xenserver-dom0-memory.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Configure XenServer dom0 Memory
- Configure the XenServer dom0 settings to allocate more memory to dom0. This can enable XenServer to handle larger numbers of virtual machines. We recommend 2940 MB of RAM for XenServer dom0. For instructions on how to do this, see the Citrix Knowledgebase Article. The article refers to XenServer 5.6, but the same information applies to XenServer 6.
-
-
diff --git a/docs/en-US/configuring-projects.xml b/docs/en-US/configuring-projects.xml
deleted file mode 100644
index af1fc5323e3..00000000000
--- a/docs/en-US/configuring-projects.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-
-%BOOK_ENTITIES;
-]>
-
- Configuring Projects
- Before &PRODUCT; users start using projects, the &PRODUCT; administrator must set
- up various systems to support them, including membership invitations, limits on project
- resources, and controls on who can create projects.
-
-
-
-
-
diff --git a/docs/en-US/console-proxy.xml b/docs/en-US/console-proxy.xml
deleted file mode 100644
index 5f9a82027d2..00000000000
--- a/docs/en-US/console-proxy.xml
+++ /dev/null
@@ -1,140 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Console Proxy
- The Console Proxy is a type of System Virtual Machine that has a role in presenting a
- console view via the web UI. It connects the user’s browser to the VNC port made available via
- the hypervisor for the console of the guest. Both the administrator and end user web UIs offer a
- console connection.
- Clicking a console icon brings up a new window. The AJAX code downloaded into that window
- refers to the public IP address of a console proxy VM. There is exactly one public IP address
- allocated per console proxy VM. The AJAX application connects to this IP. The console proxy then
- proxies the connection to the VNC port for the requested VM on the Host hosting the
- guest.
-
- The hypervisors will have many ports assigned to VNC usage so that multiple VNC sessions
- can occur simultaneously.
-
- There is never any traffic to the guest virtual IP, and there is no need to enable VNC
- within the guest.
- The console proxy VM will periodically report its active session count to the Management
- Server. The default reporting interval is five seconds. This can be changed through standard
- Management Server configuration with the parameter consoleproxy.loadscan.interval.
- Assignment of a guest VM to a console proxy is determined by first checking whether the
- guest VM has a previous session associated with a console proxy. If it does, the Management
- Server assigns the guest VM to that Console Proxy VM regardless of the load on the proxy VM.
- Failing that, the first available running Console Proxy VM that has the capacity to handle new
- sessions is used.
- Console proxies can be restarted by administrators but this will interrupt existing console
- sessions for users.
-
- Using an SSL Certificate for the Console Proxy
- The console viewing functionality uses a dynamic DNS service under the domain name
- realhostip.com to assist in providing SSL security to console sessions. The console proxy is
- assigned a public IP address. In order to avoid browser warnings for mismatched SSL
- certificates, the URL for the new console window is set to the form of
- https://aaa-bbb-ccc-ddd.realhostip.com. You will see this URL during console session creation.
- &PRODUCT; includes the realhostip.com SSL certificate in the console proxy VM. Of course,
- &PRODUCT; cannot know about the DNS A records for our customers' public IPs prior to shipping
- the software. &PRODUCT; therefore runs a dynamic DNS server that is authoritative for the
- realhostip.com domain. It maps the aaa-bbb-ccc-ddd part of the DNS name to the IP address
- aaa.bbb.ccc.ddd on lookups. This allows the browser to correctly connect to the console
- proxy's public IP, where it then expects and receives an SSL certificate for realhostip.com,
- and SSL is set up without browser warnings.
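The name-to-address mapping is purely mechanical; a shell sketch of the convention (hypothetical helper, not part of CloudStack):

```shell
# Convert a realhostip-style name, e.g. 202-8-44-1.realhostip.com,
# to the dotted-quad IP address it encodes (202.8.44.1): take the
# first DNS label and replace hyphens with dots.
realhostip_to_ip() {
  echo "${1%%.*}" | tr '-' '.'
}

realhostip_to_ip "202-8-44-1.realhostip.com"   # prints 202.8.44.1
```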
-
-
- Changing the Console Proxy SSL Certificate and Domain
- If the administrator prefers, it is possible for the URL of the customer's console session
- to show a domain other than realhostip.com. The administrator can customize the displayed
- domain by selecting a different domain and uploading a new SSL certificate and private key.
- The domain must run a DNS service that is capable of resolving queries for addresses of the
- form aaa-bbb-ccc-ddd.your.domain to an IPv4 IP address in the form aaa.bbb.ccc.ddd, for
- example, 202.8.44.1. To change the console proxy domain, SSL certificate, and private
- key:
-
-
- Set up dynamic name resolution or populate all possible DNS names in your public IP
- range into your existing DNS server with the format aaa-bbb-ccc-ddd.company.com ->
- aaa.bbb.ccc.ddd.
-
-
- Generate the private key and certificate signing request (CSR). When you are using
- openssl to generate private/public key pairs and CSRs, for the private key that you are
- going to paste into the &PRODUCT; UI, be sure to convert it into PKCS#8 format.
-
-
- Generate a new 2048-bit private key
- openssl genrsa -des3 -out yourprivate.key 2048
-
-
- Generate a new certificate CSR
- openssl req -new -key yourprivate.key -out yourcertificate.csr
-
-
- Head to the website of your favorite trusted Certificate Authority, purchase an
- SSL certificate, and submit the CSR. You should receive a valid certificate in
- return.
-
-
- Convert your private key into encrypted PKCS#8 format.
- openssl pkcs8 -topk8 -in yourprivate.key -out yourprivate.pkcs8.encrypted.key
-
-
- Convert your PKCS#8 encrypted private key into the PKCS#8 format that is compliant
- with &PRODUCT;
- openssl pkcs8 -in yourprivate.pkcs8.encrypted.key -out yourprivate.pkcs8.key
-
-
-
-
- In the Update SSL Certificate screen of the &PRODUCT; UI, paste the following:
-
-
- The certificate you've just generated.
-
-
- The private key you've just generated.
-
-
- The desired new domain name; for example, company.com
-
-
-
-
-
-
-
- updatessl.png: Updating Console Proxy SSL Certificate
-
-
-
-
- This stops all currently running console proxy VMs, then restarts them with the new
- certificate and key. Users might notice a brief interruption in console
- availability.
-
-
- The Management Server generates URLs of the form "aaa-bbb-ccc-ddd.company.com" after this
- change is made. The new console requests will be served with the new DNS domain name,
- certificate, and key.
-
-
diff --git a/docs/en-US/convert-hyperv-vm-to-template.xml b/docs/en-US/convert-hyperv-vm-to-template.xml
deleted file mode 100644
index df388234d1f..00000000000
--- a/docs/en-US/convert-hyperv-vm-to-template.xml
+++ /dev/null
@@ -1,69 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Converting a Hyper-V VM to a Template
- To convert a Hyper-V VM to a XenServer-compatible &PRODUCT; template, you will need a standalone XenServer host with an attached NFS VHD SR. Use whatever XenServer version you are using with &PRODUCT;, but use XenCenter 5.6 FP1 or SP2 (it is backwards compatible to 5.6). Additionally, it may help to have an attached NFS ISO SR.
- For Linux VMs, you may need to do some preparation in Hyper-V before trying to get the VM to work in XenServer. Clone the VM and work on the clone if you still want to use the VM in Hyper-V. Uninstall Hyper-V Integration Components and check for any references to device names in /etc/fstab:
-
- From the linux_ic/drivers/dist directory, run make uninstall (where "linux_ic" is the path to the copied Hyper-V Integration Components files).
- Restore the original initrd from backup in /boot/ (the backup is named *.backup0).
- Remove the "hdX=noprobe" entries from /boot/grub/menu.lst.
- Check /etc/fstab for any partitions mounted by device name. Change those entries (if any) to
- mount by LABEL or UUID. You can get that information with the blkid command.
-
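For the last step, a device-name mount and its UUID-based replacement might look like this (the device and UUID are illustrative):

```
# /etc/fstab -- before: mounted by device name, which may change under XenServer
/dev/hda1   /boot   ext3   defaults   1 2

# after: mounted by the UUID reported by `blkid /dev/hda1`
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6   /boot   ext3   defaults   1 2
```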
- The next step is to make sure the VM is not running in Hyper-V, then get the VHD into XenServer. There are two options for doing this.
- Option one:
-
- Import the VHD using XenCenter. In XenCenter, go to Tools>Virtual Appliance Tools>Disk Image Import.
- Choose the VHD, then click Next.
- Name the VM, choose the NFS VHD SR under Storage, enable "Run Operating System Fixups" and choose the NFS ISO SR.
- Click Next, then Finish. A VM should be created.
-
- Option two:
-
- Run XenConvert. Under From, choose VHD; under To, choose XenServer. Click Next.
- Choose the VHD, then click Next.
- Input the XenServer host info, then click Next.
- Name the VM, then click Next, then Convert. A VM should be created.
-
- Once you have a VM created from the Hyper-V VHD, prepare it using the following steps:
-
- Boot the VM, uninstall Hyper-V Integration Services, and reboot.
- Install XenServer Tools, then reboot.
- Prepare the VM as desired. For example, run sysprep on Windows VMs. See .
-
- Either option above will create a VM in HVM mode. This is fine for Windows VMs, but Linux VMs may not perform optimally. Converting a Linux VM to PV mode will require additional steps and will vary by distribution.
-
- Shut down the VM and copy the VHD from the NFS storage to a web server; for example, mount the NFS share on the web server and copy it, or from the XenServer host use sftp or scp to upload it to the web server.
- In &PRODUCT;, create a new template using the following values:
-
- URL. Give the URL for the VHD
- OS Type. Use the appropriate OS. For PV mode on CentOS, choose Other PV (32-bit) or Other PV (64-bit). This choice is available only for XenServer.
- Hypervisor. XenServer
- Format. VHD
-
-
-
- The template will be created, and you can create instances from it.
-
diff --git a/docs/en-US/create-bare-metal-template.xml b/docs/en-US/create-bare-metal-template.xml
deleted file mode 100644
index 0ee4c11fead..00000000000
--- a/docs/en-US/create-bare-metal-template.xml
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Creating a Bare Metal Template
- Before you can create a bare metal template, you must have performed several other installation and setup steps to create a bare metal cluster and environment. See Bare Metal Installation in the Installation Guide. It is assumed you already have a directory named "win7_64bit" on your CIFS server, containing the image for the bare metal instance. This directory and image are set up as part of the Bare Metal Installation procedure.
-
- Log in to the &PRODUCT; UI as an administrator or end user.
- In the left navigation bar, click Templates.
- Click Create Template.
- In the dialog box, enter the following values.
-
- Name. Short name for the template.
- Display Text. Description of the template.
- URL. The directory name that contains the image file on your CIFS server. For example, win7_64bit.
- Zone. All Zones.
- OS Type. Select the OS type of the ISO image. Choose other if the OS Type of the ISO is not listed or if the ISO is not bootable.
- Hypervisor. BareMetal.
- Format. BareMetal.
- Password Enabled. No.
- Public. No.
- Featured. Choose Yes if you would like this template to be more prominent for users to select. Only administrators may make templates featured.
-
-
diff --git a/docs/en-US/create-linux-template.xml b/docs/en-US/create-linux-template.xml
deleted file mode 100755
index 156a0acf613..00000000000
--- a/docs/en-US/create-linux-template.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
-
-
- Creating a Linux Template
- Linux templates should be prepared using this procedure to ready your Linux VMs for template deployment. For ease of documentation, the VM on which you are configuring the template will be referred to as the "Template Master". This guide currently covers legacy setups which do not take advantage of UserData and cloud-init, and assumes openssh-server is installed during installation.
-
-
- An overview of the procedure is as follows:
-
- Upload your Linux ISO. For more information, see .
- Create a VM Instance with this ISO. For more information, see .
- Prepare the Linux VM
- Create a template from the VM. For more information, see .
-
-
-
-
-
diff --git a/docs/en-US/create-new-projects.xml b/docs/en-US/create-new-projects.xml
deleted file mode 100644
index 7696c9ee00f..00000000000
--- a/docs/en-US/create-new-projects.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Creating a New Project
- &PRODUCT; administrators and domain administrators can create projects. If the global configuration parameter allow.user.create.projects is set to true, end users can also create projects.
-
- Log in as administrator to the &PRODUCT; UI.
- In the left navigation, click Projects.
- In Select view, click Projects.
- Click New Project.
- Give the project a name and description for display to users, then click Create Project.
- A screen appears where you can immediately add more members to the project. This is optional. Click Next when you are ready to move on.
- Click Save.
-
-
diff --git a/docs/en-US/create-template-from-existing-vm.xml b/docs/en-US/create-template-from-existing-vm.xml
deleted file mode 100644
index 35788fdfcc1..00000000000
--- a/docs/en-US/create-template-from-existing-vm.xml
+++ /dev/null
@@ -1,56 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Creating a Template from an Existing Virtual Machine
- Once you have at least one VM set up in the way you want, you can use it as the prototype for other VMs.
-
- Create and start a virtual machine using any of the techniques given in .
- Make any desired configuration changes on the running VM, then click Stop.
- Wait for the VM to stop. When the status shows Stopped, go to the next step.
- Click Create Template and provide the following:
-
- Name and Display Text. These will be shown in the UI, so
- choose something descriptive.
- OS Type. This helps &PRODUCT; and the hypervisor perform
- certain operations and make assumptions that improve the performance of the
- guest. Select one of the following.
-
- If the operating system of the stopped VM is listed, choose it.
- If the OS type of the stopped VM is not listed, choose Other.
- If you want to boot from this template in PV mode, choose Other PV (32-bit) or Other PV (64-bit). This choice is available only for XenServer.
- Note: Generally you should not choose an older version of the OS than the version in the image. For example, choosing CentOS 5.4 to support a CentOS 6.2 image will in general not work. In those cases you should choose Other.
-
-
- Public. Choose Yes to make this template accessible to all
- users of this &PRODUCT; installation. The template will appear in the
- Community Templates list. See .
- Password Enabled. Choose Yes if your template has the
- &PRODUCT; password change script installed. See .
-
- Click Add.
-
- The new template will be visible in the Templates section when the template creation process
- has been completed. The template is then available when creating a new VM.
-
diff --git a/docs/en-US/create-template-from-snapshot.xml b/docs/en-US/create-template-from-snapshot.xml
deleted file mode 100644
index d9684226671..00000000000
--- a/docs/en-US/create-template-from-snapshot.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Creating a Template from a Snapshot
-
- If you do not want to stop the VM in order to use the Create Template menu item (as described in ), you can create a template directly from any snapshot through the &PRODUCT; UI.
-
diff --git a/docs/en-US/create-templates-overview.xml b/docs/en-US/create-templates-overview.xml
deleted file mode 100644
index 900165f482f..00000000000
--- a/docs/en-US/create-templates-overview.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Creating Templates: Overview
- &PRODUCT; ships with a default template for the CentOS operating system. There are a variety of ways to add more templates. Administrators and end users can add templates. The typical sequence of events is:
-
- Launch a VM instance that has the operating system you want. Make any other desired configuration changes to the VM.
- Stop the VM.
- Convert the volume into a template.
-
- There are other ways to add templates to &PRODUCT;. For example, you can take a snapshot
- of the VM's volume and create a template from the snapshot, or import a VHD from another
- system into &PRODUCT;.
- The various techniques for creating templates are described in the next few sections.
-
-
diff --git a/docs/en-US/create-vpn-connection-vpc.xml b/docs/en-US/create-vpn-connection-vpc.xml
deleted file mode 100644
index 88a058c9f89..00000000000
--- a/docs/en-US/create-vpn-connection-vpc.xml
+++ /dev/null
@@ -1,122 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Creating a VPN Connection
- &PRODUCT; supports creating up to 8 VPN connections.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
- All the VPCs that you create for the account are listed in the page.
-
-
- Click the Configure button of the VPC to which you want to deploy the VMs.
- The VPC page is displayed where all the tiers you created are listed in a
- diagram.
-
-
- Click the Settings icon.
- For each tier, the following options are displayed:
-
-
- Internal LB
-
-
- Public LB IP
-
-
- Static NAT
-
-
- Virtual Machines
-
-
- CIDR
-
-
- The following router information is displayed:
-
-
- Private Gateways
-
-
- Public IP Addresses
-
-
- Site-to-Site VPNs
-
-
- Network ACL Lists
-
-
-
-
- Select Site-to-Site VPN.
- The Site-to-Site VPN page is displayed.
-
-
- From the Select View drop-down, ensure that VPN Connection is selected.
-
-
- Click Create VPN Connection.
- The Create VPN Connection dialog is displayed:
-
-
-
-
-
- createvpnconnection.png: creating a vpn connection to the customer
- gateway.
-
-
-
-
- Select the desired customer gateway, then click OK to confirm.
- Within a few moments, the VPN Connection is displayed.
- The following information on the VPN connection is displayed:
-
-
- IP Address
-
-
- Gateway
-
-
- State
-
-
- IPSec Preshared Key
-
-
- IKE Policy
-
-
- ESP Policy
-
-
-
-
-
diff --git a/docs/en-US/create-vpn-customer-gateway.xml b/docs/en-US/create-vpn-customer-gateway.xml
deleted file mode 100644
index 8bcd488160c..00000000000
--- a/docs/en-US/create-vpn-customer-gateway.xml
+++ /dev/null
@@ -1,191 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Creating and Updating a VPN Customer Gateway
-
- A VPN customer gateway can be connected to only one VPN gateway at a time.
-
- To add a VPN Customer Gateway:
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPN Customer Gateway.
-
-
- Click Add site-to-site VPN.
-
-
-
-
-
- addvpncustomergateway.png: adding a customer gateway.
-
-
- Provide the following information:
-
-
- Name: A unique name for the VPN customer gateway
- you create.
-
-
- Gateway: The IP address for the remote
- gateway.
-
-
- CIDR list: The guest CIDR list of the remote
- subnets. Enter a CIDR or a comma-separated list of CIDRs. Ensure that a guest CIDR list
- does not overlap with the VPC’s CIDR or another guest CIDR. The CIDR must be
- RFC1918-compliant.
-
-
- IPsec Preshared Key: Preshared keying is a method
- where the endpoints of the VPN share a secret key. This key value is used to
- authenticate the customer gateway and the VPC VPN gateway to each other.
-
- The IKE peers (VPN end points) authenticate each other by computing and sending a
- keyed hash of data that includes the Preshared key. If the receiving peer is able to
- create the same hash independently by using its Preshared key, it knows that both
- peers must share the same secret, thus authenticating the customer gateway.
-
-
-
- IKE Encryption: The Internet Key Exchange (IKE)
- policy for phase-1. The supported encryption algorithms are AES128, AES192, AES256, and
- 3DES. Authentication is accomplished through the Preshared Keys.
-
- The phase-1 is the first phase in the IKE process. In this initial negotiation
- phase, the two VPN endpoints agree on the methods to be used to provide security for
- the underlying IP traffic. The phase-1 authenticates the two VPN gateways to each
- other, by confirming that the remote gateway has a matching Preshared Key.
-
-
-
- IKE Hash: The IKE hash for phase-1. The supported
- hash algorithms are SHA1 and MD5.
-
-
- IKE DH: A public-key cryptography protocol which
- allows two parties to establish a shared secret over an insecure communications channel.
- The 1536-bit Diffie-Hellman group is used within IKE to establish session keys. The
- supported options are None, Group-5 (1536-bit) and Group-2 (1024-bit).
-
-
- ESP Encryption: Encapsulating Security Payload
- (ESP) algorithm within phase-2. The supported encryption algorithms are AES128, AES192,
- AES256, and 3DES.
-
- The phase-2 is the second phase in the IKE process. The purpose of IKE phase-2 is
- to negotiate IPSec security associations (SA) to set up the IPSec tunnel. In phase-2,
- new keying material is extracted from the Diffie-Hellman key exchange in phase-1, to
- provide session keys to use in protecting the VPN data flow.
-
-
-
- ESP Hash: Encapsulating Security Payload (ESP) hash
- for phase-2. Supported hash algorithms are SHA1 and MD5.
-
-
- Perfect Forward Secrecy: Perfect Forward Secrecy
- (or PFS) is the property that ensures that a session key derived from a set of long-term
- public and private keys will not be compromised. This property enforces a new
- Diffie-Hellman key exchange. It provides the keying material that has greater key
- material life and thereby greater resistance to cryptographic attacks. The available
- options are None, Group-5 (1536-bit) and Group-2 (1024-bit). The security of the key
- exchanges increases as the DH groups grow larger, as does the time of the
- exchanges.
-
- When PFS is turned on, for every negotiation of a new phase-2 SA the two gateways
- must generate a new set of phase-1 keys. This adds an extra layer of protection,
- ensuring that once the phase-2 SAs have expired, the keys used for new phase-2
- SAs have not been generated from the current phase-1 keying material.
-
-
-
- IKE Lifetime (seconds): The phase-1 lifetime of the
- security association in seconds. Default is 86400 seconds (1 day). Whenever the time
- expires, a new phase-1 exchange is performed.
-
-
- ESP Lifetime (seconds): The phase-2 lifetime of the
- security association in seconds. Default is 3600 seconds (1 hour). Whenever the value is
- exceeded, a re-key is initiated to provide new IPsec encryption and authentication
- session keys.
-
-
- Dead Peer Detection: A method to detect an
- unavailable Internet Key Exchange (IKE) peer. Select this option if you want the virtual
- router to query the liveliness of its IKE peer at regular intervals. It is recommended
- to use the same DPD configuration on both sides of the VPN connection.
-
-
-
-
- Click OK.
-
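As an illustration of how the dialog fields map onto the remote endpoint, a matching strongSwan ipsec.conf on the customer side might look like the following. The tool choice, addresses, and subnets are assumptions; CloudStack itself only needs the dialog values:

```
conn to-cloudstack-vpc
    left=203.0.113.10           # this customer gateway's public IP ("Gateway" field)
    leftsubnet=192.168.10.0/24  # local guest subnet (an entry in "CIDR list")
    right=198.51.100.20         # the VPC VPN gateway's public IP
    rightsubnet=10.1.0.0/16     # the VPC CIDR
    authby=secret               # authenticate with the IPsec Preshared Key
    ike=aes128-sha1-modp1536    # IKE Encryption / IKE Hash / IKE DH Group-5
    esp=aes128-sha1             # ESP Encryption / ESP Hash
    ikelifetime=86400s          # IKE Lifetime
    lifetime=3600s              # ESP Lifetime
    dpdaction=restart           # Dead Peer Detection enabled
    auto=start
```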
-
-
- Updating and Removing a VPN Customer Gateway
- You can update a customer gateway only when it has no VPN connection, or when its related
- VPN connection is in the Error state.
-
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPN Customer Gateway.
-
-
- Select the VPN customer gateway you want to work with.
-
-
- To modify the required parameters, click the Edit VPN Customer Gateway button
-
-
-
-
- edit.png: button to edit a VPN customer gateway
-
-
-
-
- To remove the VPN customer gateway, click the Delete VPN Customer Gateway button
-
-
-
-
- delete.png: button to remove a VPN customer gateway
-
-
-
-
- Click OK.
-
-
-
diff --git a/docs/en-US/create-vpn-gateway-for-vpc.xml b/docs/en-US/create-vpn-gateway-for-vpc.xml
deleted file mode 100644
index 0f8a0dcc03b..00000000000
--- a/docs/en-US/create-vpn-gateway-for-vpc.xml
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Creating a VPN gateway for the VPC
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
- All the VPCs that you have created for the account are listed on the page.
-
-
- Click the Configure button of the VPC to which you want to deploy the VMs.
- The VPC page is displayed where all the tiers you created are listed in a
- diagram.
-
-
- Click the Settings icon.
- For each tier, the following options are displayed:
-
-
- Internal LB
-
-
- Public LB IP
-
-
- Static NAT
-
-
- Virtual Machines
-
-
- CIDR
-
-
- The following router information is displayed:
-
-
- Private Gateways
-
-
- Public IP Addresses
-
-
- Site-to-Site VPNs
-
-
- Network ACL Lists
-
-
-
-
- Select Site-to-Site VPN.
- If you are creating the VPN gateway for the first time, selecting Site-to-Site VPN
- prompts you to create a VPN gateway.
-
-
- In the confirmation dialog, click Yes to confirm.
- Within a few moments, the VPN gateway is created. You will be prompted to view the
- details of the VPN gateway you have created. Click Yes to confirm.
- The following details are displayed in the VPN Gateway page:
-
-
- IP Address
-
-
- Account
-
-
- Domain
-
-
-
-
-
diff --git a/docs/en-US/create-vr-network-offering.xml b/docs/en-US/create-vr-network-offering.xml
deleted file mode 100644
index 317e3c200a1..00000000000
--- a/docs/en-US/create-vr-network-offering.xml
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Creating and Changing a Virtual Router Network Offering
- To create the network offering in association with a virtual router system service
- offering:
-
-
- Log in to the &PRODUCT; UI as a user or admin.
-
-
- First, create a system service offering, for example: VRsystemofferingHA.
- For more information on creating a system service offering, see .
-
-
- From the Select Offering drop-down, choose Network Offering.
-
-
- Click Add Network Offering.
-
-
- In the dialog, make the following choices:
-
-
- Name. Any desired name for the network
- offering.
-
-
- Description. A short description of the offering
- that can be displayed to users.
-
-
- Network Rate. Allowed data transfer rate in MB per
- second.
-
-
- Traffic Type. The type of network traffic that will
- be carried on the network.
-
-
- Guest Type. Choose whether the guest network is
- isolated or shared. For a description of these terms, see .
-
-
- Specify VLAN. (Isolated guest networks only)
- Indicate whether a VLAN should be specified when this offering is used.
-
-
- Supported Services. Select one or more of the
- possible network services. For some services, you must also choose the service provider;
- for example, if you select Load Balancer, you can choose the &PRODUCT; virtual router or
- any other load balancers that have been configured in the cloud. Depending on which
- services you choose, additional fields may appear in the rest of the dialog box. For
- more information, see
-
-
- System Offering. Choose the system service offering
- that you want virtual routers to use in this network. In this case, the default "System
- Offering For Software Router" and the custom "VRsystemofferingHA" are available and
- displayed.
-
-
-
-
- Click OK and the network offering is created.
-
-
- To change the network offering of a guest network to the virtual router service
- offering:
-
-
- Select Network from the left navigation pane.
-
-
- Select the guest network that you want to offer this network service to.
-
-
- Click the Edit button.
-
-
- From the Network Offering drop-down, select the virtual router network offering you have
- just created.
-
-
- Click OK.
-
-
-
diff --git a/docs/en-US/create-windows-template.xml b/docs/en-US/create-windows-template.xml
deleted file mode 100644
index d02f0678444..00000000000
--- a/docs/en-US/create-windows-template.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Creating a Windows Template
- Windows templates must be prepared with Sysprep before they can be provisioned on multiple machines. Sysprep allows you to create a generic Windows template and avoid any possible SID conflicts.
- (XenServer) Windows VMs running on XenServer require PV drivers, which may be provided in the template or added after the VM is created. The PV drivers are necessary for essential management functions such as mounting additional volumes and ISO images, live migration, and graceful shutdown.
-
-
- An overview of the procedure is as follows:
-
- Upload your Windows ISO. For more information, see .
- Create a VM Instance with this ISO. For more information, see .
- Follow the steps in Sysprep for Windows Server 2008 R2 (below) or Sysprep for Windows Server 2003 R2, depending on your version of Windows Server.
- The preparation steps are complete. Now you can actually create the template as described in Creating the Windows Template.
-
-
-
-
diff --git a/docs/en-US/creating-a-plugin.xml b/docs/en-US/creating-a-plugin.xml
deleted file mode 100644
index 448d4e6ea69..00000000000
--- a/docs/en-US/creating-a-plugin.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Plugin Development
- This chapter will detail different elements related to the development of plugins within CloudStack.
-
-
diff --git a/docs/en-US/creating-compute-offerings.xml b/docs/en-US/creating-compute-offerings.xml
deleted file mode 100644
index 5c5033afabb..00000000000
--- a/docs/en-US/creating-compute-offerings.xml
+++ /dev/null
@@ -1,70 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Creating a New Compute Offering
- To create a new compute offering:
-
- Log in with admin privileges to the &PRODUCT; UI.
- In the left navigation bar, click Service Offerings.
- In Select Offering, choose Compute Offering.
- Click Add Compute Offering.
- In the dialog, make the following choices:
-
- Name: Any desired name for the service offering.
- Description: A short description of the offering that can be
- displayed to users
- Storage type: The type of disk that should be allocated.
- Local allocates from storage attached directly to the host where the system
- VM is running. Shared allocates from storage accessible via NFS.
- # of CPU cores: The number of cores which should be allocated
- to a system VM with this offering
- CPU (in MHz): The CPU speed of the cores that the system VM
- is allocated. For example, "2000" would provide for a 2 GHz clock.
- Memory (in MB): The amount of memory in megabytes that the
- system VM should be allocated. For example, "2048" would provide for a 2 GB
- RAM allocation.
- Network Rate: Allowed data transfer rate in MB per
- second.
- Offer HA: If yes, the administrator can choose to have the
- system VM be monitored and as highly available as possible.
- Storage Tags: The tags that should be associated with the
- primary storage used by the system VM.
- Host Tags: (Optional) Any tags that you use to organize your
- hosts
- CPU cap: Whether to limit the level of CPU usage even if
- spare capacity is available.
- isVolatile: If checked, VMs created from this service
- offering will have their root disks reset upon reboot. This is useful for
- secure environments that need a fresh start on every boot and for desktops
- that should not retain state.
- Public: Indicate whether the service offering should be
- available to all domains or only some domains. Choose Yes to make it available
- to all domains. Choose No to limit the scope to a subdomain; &PRODUCT;
- will then prompt for the subdomain's name.
-
- Click Add.
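The dialog fields above correspond to parameters of the createServiceOffering API call; below is a minimal sketch of building such a request against the management server's unauthenticated integration port (the port number and all parameter values here are illustrative assumptions, not a definitive invocation):

```python
from urllib.parse import urlencode

# Values mirror the dialog fields above; assumes the unauthenticated
# integration port (8096) is enabled on the management server.
params = {
    "command": "createServiceOffering",
    "name": "Medium Instance",
    "displaytext": "2 x 2.0 GHz cores, 2 GB RAM",
    "cpunumber": 2,       # "# of CPU cores"
    "cpuspeed": 2000,     # "CPU (in MHz)"
    "memory": 2048,       # "Memory (in MB)"
    "offerha": "true",    # "Offer HA"
    "response": "json",
}
url = "http://localhost:8096/client/api?" + urlencode(params)
print(url)
```

Fetching this URL with any HTTP client would then create the offering, provided the integration port is open.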
-
-
-
-
diff --git a/docs/en-US/creating-disk-offerings.xml b/docs/en-US/creating-disk-offerings.xml
deleted file mode 100644
index 627311e4418..00000000000
--- a/docs/en-US/creating-disk-offerings.xml
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Creating a New Disk Offering
- To create a new disk offering:
-
- Log in with admin privileges to the &PRODUCT; UI.
- In the left navigation bar, click Service Offerings.
- In Select Offering, choose Disk Offering.
- Click Add Disk Offering.
- In the dialog, make the following choices:
-
- Name. Any desired name for the disk offering.
- Description. A short description of the offering that can be displayed to users
- Custom Disk Size. If checked, the user can set their own disk size. If not checked, the root administrator must define a value in Disk Size.
- Disk Size. Appears only if Custom Disk Size is not selected. Define the volume size in GB.
- QoS Type. Three options: Empty (no Quality of Service), hypervisor (rate limiting enforced on the hypervisor side), and storage (guaranteed minimum and maximum IOPS enforced on the storage side). If leveraging QoS, make sure that the hypervisor or storage system supports this feature.
- Custom IOPS. If checked, the user can set their own IOPS. If not checked, the root administrator can define values. If the root admin does not set values when using storage QoS, default values are used (the defaults can be overridden if the proper parameters are passed into &PRODUCT; when creating the primary storage in question).
- Min IOPS. Appears only if storage QoS is to be used. Set a guaranteed minimum number of IOPS to be enforced on the storage side.
- Max IOPS. Appears only if storage QoS is to be used. Set a maximum number of IOPS to be enforced on the storage side (the system may go above this limit in certain circumstances for short intervals).
- (Optional) Storage Tags. The tags that should be associated with the primary storage for this disk. Tags are a comma-separated list of attributes of the storage. For example "ssd,blue". Tags are also added on Primary Storage. &PRODUCT; matches tags on a disk offering to tags on the storage. If a tag is present on a disk offering, that tag (or tags) must also be present on Primary Storage for the volume to be provisioned. If no such primary storage exists, allocation from the disk offering will fail.
- Public. Indicate whether the service offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; &PRODUCT; will then prompt for the subdomain's name.
-
- Click Add.
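The tag-matching rule described above amounts to a subset check: every tag on the disk offering must also appear on the primary storage. A minimal sketch (the helper name is hypothetical, not CloudStack code):

```python
def storage_satisfies_offering(offering_tags: str, storage_tags: str) -> bool:
    """Every tag on the disk offering must also be present on the storage."""
    required = {t for t in offering_tags.split(",") if t}
    available = {t for t in storage_tags.split(",") if t}
    return required <= available  # subset check

# "ssd,blue" on the offering requires both tags on the storage:
assert storage_satisfies_offering("ssd,blue", "ssd,blue,fast")
assert not storage_satisfies_offering("ssd,blue", "ssd")
# An offering with no tags can be provisioned on any storage:
assert storage_satisfies_offering("", "ssd")
```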
-
-
diff --git a/docs/en-US/creating-my-first-plugin.xml b/docs/en-US/creating-my-first-plugin.xml
deleted file mode 100644
index 3809fd30335..00000000000
--- a/docs/en-US/creating-my-first-plugin.xml
+++ /dev/null
@@ -1,216 +0,0 @@
-
-
-
- Creating my first plugin
- This is a brief walkthrough of creating a simple plugin that adds an additional command to the API to return the message "Hello World".
-
- Letting CloudStack know about the plugin
- Before we can begin, we need to tell CloudStack about the existence of our plugin. To do this we need to edit some files related to the cloud-client-ui module.
-
-
- Navigate to the folder called client
-
-
- Open pom.xml and add a dependency; it will look something like the following:
-
- client/pom.xml
- <dependency>
- <groupId>org.apache.cloudstack</groupId>
- <artifactId>cloud-plugin-api-helloworld</artifactId>
- <version>${project.version}</version>
-</dependency>
-
-
-
- Continuing with client as your working directory, open up tomcatconf/applicationContext.xml.in
-
-
- Within this file we must insert a bean to load our class:
-
- client/tomcatconf/applicationContext.xml.in
- <bean id="helloWorldImpl" class="org.apache.cloudstack.helloworld.HelloWorldImpl" />
-
-
-
- Finally we need to register the additional API commands we add. Again, with client as your working directory, this is done by modifying tomcatconf/commands.properties.in
-
-
- Within the file we simply add the names of the API commands we want to create, followed by a permission number: 1 = admin, 2 = resource domain admin, 4 = domain admin, 8 = user.
-
- tomcatconf/commands.properties.in
- helloWorld=8
-
-
-
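The permission numbers above are bit flags, so they can be combined to expose a command to several roles. A sketch of how such a mask can be interpreted (the combination logic shown is an illustration, not CloudStack's actual ACL code):

```python
# Role bit flags as listed in commands.properties above.
ADMIN, RESOURCE_DOMAIN_ADMIN, DOMAIN_ADMIN, USER = 1, 2, 4, 8

def allowed(permission_mask: int, role_flag: int) -> bool:
    """Check whether a role's bit is set in a command's permission mask."""
    return bool(permission_mask & role_flag)

# helloWorld=8 exposes the command to regular users only:
assert allowed(8, USER)
assert not allowed(8, ADMIN)

# OR the flags together to expose a command to every role:
everyone = ADMIN | RESOURCE_DOMAIN_ADMIN | DOMAIN_ADMIN | USER  # = 15
assert all(allowed(everyone, r)
           for r in (ADMIN, RESOURCE_DOMAIN_ADMIN, DOMAIN_ADMIN, USER))
```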
-
-
- Creating the plugin
- Within the CloudStack file structure all plugins live under the plugins folder. Since the sample plugin for this document is going to be API related it will live in plugins/api/helloworld. Along with this we will need a standard maven package layout, so let's create all the required folders:
- $ mkdir -p plugins/api/helloworld/{src,target,test}
-$ mkdir -p plugins/api/helloworld/src/org/apache/cloudstack/{api,helloworld}
-$ mkdir -p plugins/api/helloworld/src/org/apache/cloudstack/api/{command,response}
-$ mkdir -p plugins/api/helloworld/src/org/apache/cloudstack/api/command/user/helloworld
- With helloworld as our working directory we should have a tree layout like the following:
- $ cd plugins/api/helloworld
-$ tree
-.
-|-- src
-| `-- org
-| `-- apache
-| `-- cloudstack
-| |-- api
-| | |-- command
-| | | `-- user
-| | | `-- helloworld
-| | |-- response
-| `-- helloworld
-|-- target
-`-- test
-
-12 directories, 0 files
- First we will create a pom.xml for our plugin:
-
- plugins/api/helloworld/pom.xml
- <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <artifactId>cloud-plugin-api-helloworld</artifactId>
- <name>Apache CloudStack Plugin - Hello World Plugin</name>
- <parent>
- <groupId>org.apache.cloudstack</groupId>
- <artifactId>cloudstack-plugins</artifactId>
- <version>4.2.0-SNAPSHOT</version>
- <relativePath>../../pom.xml</relativePath>
- </parent>
- <dependencies>
- <dependency>
- <groupId>org.apache.cloudstack</groupId>
- <artifactId>cloud-api</artifactId>
- <version>${project.version}</version>
- </dependency>
- <dependency>
- <groupId>org.apache.cloudstack</groupId>
- <artifactId>cloud-utils</artifactId>
- <version>${project.version}</version>
- </dependency>
- </dependencies>
- <build>
- <defaultGoal>install</defaultGoal>
- <sourceDirectory>src</sourceDirectory>
- <testSourceDirectory>test</testSourceDirectory>
- </build>
-</project>
-
- Next we need to make the root plugin pom aware of our plugin. To do this, edit plugins/pom.xml, inserting a line like the following:
- ......
-<module>api/helloworld</module>
-......
- Finally we will begin to create code for your plugin. Create an interface called HelloWorld that will extend PluggableService within src/org/apache/cloudstack/helloworld
- package org.apache.cloudstack.helloworld;
-
-import com.cloud.utils.component.PluggableService;
-
-public interface HelloWorld extends PluggableService { }
- Create an implementation of HelloWorld called HelloWorldImpl:
- package org.apache.cloudstack.helloworld;
-
-import org.apache.cloudstack.api.command.user.helloworld.HelloWorldCmd;
-import org.apache.log4j.Logger;
-import org.springframework.stereotype.Component;
-
-import javax.ejb.Local;
-import java.util.*;
-
-@Component
-@Local(value = HelloWorld.class)
-public class HelloWorldImpl implements HelloWorld {
- private static final Logger s_logger = Logger.getLogger(HelloWorldImpl.class);
-
- public HelloWorldImpl() {
- super();
- }
- /**
- * This informs cloudstack of the API commands you are creating.
- */
- @Override
- public List<Class<?>> getCommands() {
- List<Class<?>> cmdList = new ArrayList<Class<?>>();
- cmdList.add(HelloWorldCmd.class);
- return cmdList;
- }
-}
- Next we will create our API command. Navigate to src/org/apache/cloudstack/api/command/user/helloworld and create HelloWorldCmd.java as follows:
- package org.apache.cloudstack.api.command.user.helloworld;
-
-import org.apache.cloudstack.api.APICommand;
-import org.apache.cloudstack.api.BaseCmd;
-import org.apache.cloudstack.api.response.HelloWorldResponse;
-import org.apache.log4j.Logger;
-
-// Note this name matches the name you inserted into client/tomcatconf/commands.properties.in
-@APICommand(name = "helloWorld", responseObject = HelloWorldResponse.class, description = "Returns a hello world message", since = "4.2.0")
-public class HelloWorldCmd extends BaseCmd {
- public static final Logger s_logger = Logger.getLogger(HelloWorldCmd.class.getName());
- private static final String s_name = "helloworldresponse";
-
- @Override
- public void execute()
- {
- HelloWorldResponse response = new HelloWorldResponse();
- response.setObjectName("helloworld");
- response.setResponseName(getCommandName());
- this.setResponseObject(response);
- }
-
- @Override
- public String getCommandName() {
- return s_name;
- }
-
- @Override
- public long getEntityOwnerId() {
- return 0;
- }
-}
- Finally we need to create our HelloWorldResponse class. This will exist within src/org/apache/cloudstack/api/response/
- package org.apache.cloudstack.api.response;
-
-import com.google.gson.annotations.SerializedName;
-import org.apache.cloudstack.api.BaseResponse;
-import com.cloud.serializer.Param;
-
-@SuppressWarnings("unused")
-public class HelloWorldResponse extends BaseResponse {
- @SerializedName("HelloWorld") @Param(description="HelloWorld Response")
- private String HelloWorld;
-
- public HelloWorldResponse(){
- this.HelloWorld = "Hello World";
- }
-}
-
-
- Compiling your plugin:
- Within the directory of your plugin, i.e. plugins/api/helloworld, run mvn clean install.
- After this we need to recompile cloud-client-ui. To do this, return to the CloudStack base directory and execute mvn -pl client clean install
-
-
- Starting Cloudstack and Testing:
- Start up CloudStack with the usual mvn -pl :cloud-client-ui jetty:run, wait a few moments for it to start up, then head over to localhost:8096/client/api?command=helloWorld and you should see your HelloWorld message.
-
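When the command responds, the JSON envelope should reflect the response name and object name set in HelloWorldCmd and HelloWorldResponse. A sketch parsing a canned response (the exact envelope nesting is an assumption based on getCommandName() and setObjectName() above):

```python
import json

# Canned response illustrating the envelope we expect (an assumption:
# the outer keys come from getCommandName() and setObjectName() above).
raw = '{"helloworldresponse": {"helloworld": {"HelloWorld": "Hello World"}}}'

body = json.loads(raw)
message = body["helloworldresponse"]["helloworld"]["HelloWorld"]
assert message == "Hello World"
```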
-
diff --git a/docs/en-US/creating-network-offerings.xml b/docs/en-US/creating-network-offerings.xml
deleted file mode 100644
index 4f75781c3cb..00000000000
--- a/docs/en-US/creating-network-offerings.xml
+++ /dev/null
@@ -1,285 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Creating a New Network Offering
- To create a network offering:
-
-
- Log in with admin privileges to the &PRODUCT; UI.
-
-
- In the left navigation bar, click Service Offerings.
-
-
- In Select Offering, choose Network Offering.
-
-
- Click Add Network Offering.
-
-
- In the dialog, make the following choices:
-
-
- Name. Any desired name for the network
- offering.
-
-
- Description. A short description of the offering
- that can be displayed to users.
-
-
- Network Rate. Allowed data transfer rate in MB per
- second.
-
-
- Guest Type. Choose whether the guest network is
- isolated or shared.
- For a description of this term, see .
- For a description of this term, see the Administration Guide.
-
-
-
- Persistent. Indicate whether the guest network is
- persistent or not. The network that you can provision without having to deploy a VM on
- it is termed a persistent network. For more information, see .
-
-
- Specify VLAN. (Isolated guest networks only)
- Indicate whether a VLAN could be specified when this offering is used. If you select
- this option and later use this network offering while creating a VPC tier or an isolated
- network, you will be able to specify a VLAN ID for the network you create.
-
-
- VPC. This option indicates whether the guest network
- is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a private, isolated
- part of &PRODUCT;. A VPC can have its own virtual network topology that resembles a
- traditional physical network. For more information on VPCs, see .
-
-
- Supported Services. Select one or more of the
- possible network services. For some services, you must also choose the service provider;
- for example, if you select Load Balancer, you can choose the &PRODUCT; virtual router or
- any other load balancers that have been configured in the cloud. Depending on which
- services you choose, additional fields may appear in the rest of the dialog box.
- Based on the guest network type selected, you can see the following supported
- services:
-
-
-
-
- Supported Services
- Description
- Isolated
- Shared
-
-
-
-
- DHCP
- For more information, see .
- Supported
- Supported
-
-
- DNS
- For more information, see .
- Supported
- Supported
-
-
- Load Balancer
- If you select Load Balancer, you can choose the &PRODUCT; virtual
- router or any other load balancers that have been configured in the
- cloud.
- Supported
- Supported
-
-
- Firewall
- For more information, see .
- For more information, see the Administration
- Guide.
- Supported
- Supported
-
-
- Source NAT
- If you select Source NAT, you can choose the &PRODUCT; virtual router
- or any other Source NAT providers that have been configured in the
- cloud.
- Supported
- Supported
-
-
- Static NAT
- If you select Static NAT, you can choose the &PRODUCT; virtual router
- or any other Static NAT providers that have been configured in the
- cloud.
- Supported
- Supported
-
-
- Port Forwarding
- If you select Port Forwarding, you can choose the &PRODUCT; virtual
- router or any other Port Forwarding providers that have been configured in the
- cloud.
- Supported
- Not Supported
-
-
- VPN
- For more information, see .
- Supported
- Not Supported
-
-
- User Data
- For more information, see .
- For more information, see the Administration
- Guide.
- Not Supported
- Supported
-
-
- Network ACL
- For more information, see .
- Supported
- Not Supported
-
-
- Security Groups
- For more information, see .
- Not Supported
- Supported
-
-
-
-
-
-
- System Offering. If the service provider for any of
- the services selected in Supported Services is a virtual router, the System Offering
- field appears. Choose the system service offering that you want virtual routers to use
- in this network. For example, if you selected Load Balancer in Supported Services and
- selected a virtual router to provide load balancing, the System Offering field appears
- so you can choose between the &PRODUCT; default system service offering and any custom
- system service offerings that have been defined by the &PRODUCT; root
- administrator.
- For more information, see .
- For more information, see the Administration Guide.
-
-
- LB Isolation: Specify what type of load balancer
- isolation you want for the network: Shared or Dedicated.
- Dedicated: If you select dedicated LB isolation, a
- dedicated load balancer device is assigned for the network from the pool of dedicated
- load balancer devices provisioned in the zone. If no sufficient dedicated load balancer
- devices are available in the zone, network creation fails. Dedicated device is a good
- choice for the high-traffic networks that make full use of the device's
- resources.
- Shared: If you select shared LB isolation, a shared
- load balancer device is assigned for the network from the pool of shared load balancer
- devices provisioned in the zone. While provisioning, &PRODUCT; picks the shared load
- balancer device that is used by the least number of accounts. Once the device reaches
- its maximum capacity, the device will not be allocated to a new account.
-
-
- Mode: You can select either Inline mode or Side by
- Side mode:
- Inline mode: Supported only for Juniper SRX
- firewall and F5 BIG-IP load balancer devices. In inline mode, a firewall device is placed in
- front of a load balancing device. The firewall acts as the gateway for all the incoming
- traffic, then redirects the load balancing traffic to the load balancer behind it. The
- load balancer in this case will not have direct access to the public network.
- Side by Side: In side by side mode, a firewall
- device is deployed in parallel with the load balancer device. So the traffic to the load
- balancer public IP is not routed through the firewall, and therefore, is exposed to the
- public network.
-
-
- Associate Public IP: Select this option if you want
- to assign a public IP address to the VMs deployed in the guest network. This option is
- available only if
-
-
- Guest network is shared.
-
-
- StaticNAT is enabled.
-
-
- Elastic IP is enabled.
-
-
- For information on Elastic IP, see .
-
-
- Redundant router capability: Available only when
- Virtual Router is selected as the Source NAT provider. Select this option if you want to
- use two virtual routers in the network for uninterrupted connection: one operating as
- the master virtual router and the other as the backup. The master virtual router
- receives requests from and sends responses to the user's VM. The backup virtual router
- is activated only when the master is down. After the failover, the backup becomes the
- master virtual router. &PRODUCT; deploys the routers on different hosts to ensure
- reliability if one host is down.
-
-
- Conserve mode: Indicate whether to use conserve
- mode. In this mode, network resources are allocated only when the first virtual machine
- starts in the network. When conserve mode is off, the public IP can only be used for
- a single service. For example, a public IP used for a port forwarding rule cannot be
- used for defining other services, such as StaticNAT or load balancing. When the conserve
- mode is on, you can define more than one service on the same public IP.
-
- If StaticNAT is enabled, irrespective of the status of the conserve mode, no port
- forwarding or load balancing rule can be created for the IP. However, you can add the
- firewall rules by using the createFirewallRule command.
-
-
-
- Tags: Network tag to specify which physical network
- to use.
-
-
- Default egress policy: Configure the default policy
- for firewall egress rules. Options are Allow and Deny. Default is Allow if no egress
- policy is specified, which indicates that all the egress traffic is accepted when a
- guest network is created from this offering.
- To block the egress traffic for a guest network, select Deny. In this case, when you
- configure an egress rule for an isolated guest network, rules are added to allow the
- specified traffic.
-
-
-
-
- Click Add.
-
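The default egress policy semantics above can be sketched as a tiny rule check (the helper is hypothetical, not CloudStack code): with default Allow all egress traffic passes, while with default Deny only traffic matching a configured egress rule is allowed:

```python
def egress_allowed(default_policy: str, allow_rules: list, destination: str) -> bool:
    """With 'Allow' everything passes; with 'Deny' only listed destinations pass."""
    if default_policy == "Allow":
        return True
    return destination in allow_rules

# Default Allow: all egress traffic is accepted.
assert egress_allowed("Allow", [], "203.0.113.9")
# Default Deny: only traffic matching a configured rule is allowed.
assert egress_allowed("Deny", ["203.0.113.9"], "203.0.113.9")
assert not egress_allowed("Deny", [], "203.0.113.9")
```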
-
-
diff --git a/docs/en-US/creating-new-volumes.xml b/docs/en-US/creating-new-volumes.xml
deleted file mode 100644
index 5440dc5a016..00000000000
--- a/docs/en-US/creating-new-volumes.xml
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Creating a New Volume
- You can add more data disk volumes to a guest VM at any time, up to the limits of your
- storage capacity. Both &PRODUCT; administrators and users can add volumes to VM instances. When
- you create a new volume, it is stored as an entity in &PRODUCT;, but the actual storage
- resources are not allocated on the physical storage device until you attach the volume. This
- optimization allows &PRODUCT; to provision the volume nearest to the guest that will use it
- when the first attachment is made.
-
- Using Local Storage for Data Volumes
- You can create data volumes on local storage (supported with XenServer, KVM, and VMware).
- The data volume is placed on the same host as the VM instance that is attached to the data
- volume. These local data volumes can be attached to virtual machines, detached, re-attached,
- and deleted just as with the other types of data volume.
- Local storage is ideal for scenarios where persistence of data volumes and HA is not
- required. Some of the benefits include reduced disk I/O latency and cost reduction from using
- inexpensive local disks.
- In order for local volumes to be used, the feature must be enabled for the zone.
- You can create a data disk offering for local storage. When a user creates a new VM, they
- can select this disk offering in order to cause the data disk volume to be placed in local
- storage.
- You cannot migrate a VM that has a volume in local storage to a different host, nor
- migrate the volume itself away to a different host. If you want to put a host into maintenance
- mode, you must first stop any VMs with local data volumes on that host.
-
-
- To Create a New Volume
-
-
- Log in to the &PRODUCT; UI as a user or admin.
-
-
- In the left navigation bar, click Storage.
-
-
- In Select View, choose Volumes.
-
-
- To create a new volume, click Add Volume, provide the following details, and click
- OK.
-
-
- Name. Give the volume a unique name so you can find it later.
-
-
- Availability Zone. Where do you want the storage to reside? This should be close
- to the VM that will use the volume.
-
-
- Disk Offering. Choose the characteristics of the storage.
-
-
- The new volume appears in the list of volumes with the state "Allocated." The volume
- data is stored in &PRODUCT;, but the volume is not yet ready for use.
-
-
- To start using the volume, continue to Attaching a Volume.
-
-
-
-
diff --git a/docs/en-US/creating-shared-network.xml b/docs/en-US/creating-shared-network.xml
deleted file mode 100644
index e6a018f39d5..00000000000
--- a/docs/en-US/creating-shared-network.xml
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Configuring a Shared Guest Network
-
-
- Log in to the &PRODUCT; UI as administrator.
-
-
- In the left navigation, choose Infrastructure.
-
-
- On Zones, click View More.
-
-
- Click the zone to which you want to add a guest network.
-
-
- Click the Physical Network tab.
-
-
- Click the physical network you want to work with.
-
-
- On the Guest node of the diagram, click Configure.
-
-
- Click the Network tab.
-
-
- Click Add guest network.
- The Add guest network window is displayed.
-
-
- Specify the following:
-
-
- Name: The name of the network. This will be visible
- to the user.
-
-
- Description: The short description of the network
- that can be displayed to users.
-
-
- VLAN ID: The unique ID of the VLAN.
-
-
- Isolated VLAN ID: The unique ID of the Secondary
- Isolated VLAN.
-
-
- Scope: The available scopes are Domain, Account,
- Project, and All.
-
-
- Domain: Selecting Domain limits the scope of
- this guest network to the domain you specify. The network will not be available for
- other domains. If you select Subdomain Access, the guest network is available to all
- the sub domains within the selected domain.
-
-
- Account: The account for which the guest
- network is being created. You must specify the domain the account belongs
- to.
-
-
- Project: The project for which the guest
- network is being created. You must specify the domain the project belongs
- to.
-
-
- All: The guest network is available for all the
- domains, accounts, and projects within the selected zone.
-
-
-
-
- Network Offering: If the administrator has
- configured multiple network offerings, select the one you want to use for this
- network.
-
-
- Gateway: The gateway that the guests should
- use.
-
-
- Netmask: The netmask in use on the subnet the
- guests will use.
-
-
- IP Range: A range of IP addresses that are
- accessible from the Internet and are assigned to the guest VMs.
- If one NIC is used, these IPs should be in the same CIDR in the case of IPv6.
-
-
- IPv6 CIDR: The network prefix that defines the
- guest network subnet. This is the CIDR that describes the IPv6 addresses in use in the
- guest networks in this zone. To allot IP addresses from within a particular address
- block, enter a CIDR.
-
-
- Network Domain: A custom DNS suffix at the level of
- a network. If you want to assign a special domain name to the guest VM network, specify
- a DNS suffix.
-
-
-
-
- Click OK to confirm.
-
-
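The IPv6 CIDR membership behavior described above (addresses are allotted only from within the configured prefix) can be sketched with Python's stdlib ipaddress module (the prefix and addresses are hypothetical documentation values):

```python
import ipaddress

# Hypothetical values for illustration; use your zone's actual prefix.
cidr = ipaddress.ip_network("2001:db8:100::/64")
guest_ip = ipaddress.ip_address("2001:db8:100::25")
outside = ipaddress.ip_address("2001:db8:200::25")

assert guest_ip in cidr       # allotted from within the block
assert outside not in cidr    # would fall outside the guest subnet
print(cidr.num_addresses)     # a /64 holds 2**64 addresses
```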
-
diff --git a/docs/en-US/creating-system-service-offerings.xml b/docs/en-US/creating-system-service-offerings.xml
deleted file mode 100644
index e33d9d07767..00000000000
--- a/docs/en-US/creating-system-service-offerings.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Creating a New System Service Offering
- To create a system service offering:
-
- Log in with admin privileges to the &PRODUCT; UI.
- In the left navigation bar, click Service Offerings.
- In Select Offering, choose System Offering.
- Click Add System Service Offering.
- In the dialog, make the following choices:
-
- Name. Any desired name for the system offering.
- Description. A short description of the offering that can be displayed to users
- System VM Type. Select the type of system virtual machine that this offering is intended to support.
- Storage type. The type of disk that should be allocated. Local allocates from storage attached directly to the host where the system VM is running. Shared allocates from storage accessible via NFS.
- # of CPU cores. The number of cores which should be allocated to a system VM with this offering.
- CPU (in MHz). The CPU speed of the cores that the system VM is allocated. For example, "2000" would provide for a 2 GHz clock.
- Memory (in MB). The amount of memory in megabytes that the system VM should be allocated. For example, "2048" would provide for a 2 GB RAM allocation.
- Network Rate. Allowed data transfer rate in MB per second.
- Offer HA. If yes, the administrator can choose to have the system VM be monitored and as highly available as possible.
- Storage Tags. The tags that should be associated with the primary storage used by the system VM.
- Host Tags. (Optional) Any tags that you use to organize your hosts.
- CPU cap. Whether to limit the level of CPU usage even if spare capacity is available.
- Public. Indicate whether the service offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; &PRODUCT; will then prompt for the subdomain's name.
-
- Click Add.
-
-
-
-
diff --git a/docs/en-US/creating-vms.xml b/docs/en-US/creating-vms.xml
deleted file mode 100644
index df4d88ed548..00000000000
--- a/docs/en-US/creating-vms.xml
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Creating VMs
- Virtual machines are usually created from a template. Users can also create blank virtual
- machines. A blank virtual machine is a virtual machine without an OS template. Users can attach
- an ISO file and install the OS from the CD/DVD-ROM.
-
- You can create a VM without starting it. You can determine whether the VM needs to be
- started as part of the VM deployment. A request parameter, startVM, in the deployVm API
- provides this feature. For more information, see the Developer's Guide
-
-
-
- Creating a VM from a template
-
-
- Log in to the &PRODUCT; UI as an administrator or user.
-
-
- In the left navigation bar, click Instances.
-
-
- Click Add Instance.
-
-
- Select a zone.
-
-
- Select a template, then follow the steps in the wizard. For more information about how
- the templates came to be in this list, see .
-
-
- Be sure that the hardware you have allows starting the selected service offering.
-
-
- Click Submit and your VM will be created and started.
-
- For security reasons, the internal name of the VM is visible only to the root
- admin.
-
-
-
-
-
- Creating a VM from an ISO
-
- (XenServer) Windows VMs running on XenServer require PV drivers, which may be provided in
- the template or added after the VM is created. The PV drivers are necessary for essential
- management functions such as mounting additional volumes and ISO images, live migration, and
- graceful shutdown.
-
-
-
-
- Log in to the &PRODUCT; UI as an administrator or user.
-
-
- In the left navigation bar, click Instances.
-
-
- Click Add Instance.
-
-
- Select a zone.
-
-
- Select ISO Boot, and follow the steps in the wizard.
-
-
- Click Submit and your VM will be created and started.
-
-
-
-
-
-
- Configuring Usage of Linked Clones on VMware
- (For ESX hypervisor in conjunction with vCenter)
- VMs can be created as either linked clones or full clones on VMware.
- For a full description of clone types, refer to VMware documentation. In summary: A
- full clone is a copy of an existing virtual machine which, once created, does not depend
- in any way on the original virtual machine. A linked clone is also a copy of an existing
- virtual machine, but it has ongoing dependency on the original. A linked clone shares the
- virtual disk of the original VM, and retains access to all files that were present at the
- time the clone was created.
- The use of these different clone types involves some side effects and tradeoffs, so it
- is to the administrator's advantage to be able to choose which of the two types will be
- used in a &PRODUCT; deployment.
- A new global configuration setting has been added, vmware.create.full.clone. When the
- administrator sets this to true, end users can create guest VMs only as full clones. The
- default value is false.
- It is not recommended to change the value of vmware.create.full.clone in a cloud with
- running VMs. However, if the value is changed, existing VMs are not affected. Only VMs
- created after the setting is put into effect are subject to the restriction.
-
-
diff --git a/docs/en-US/customizing-dns.xml b/docs/en-US/customizing-dns.xml
deleted file mode 100644
index c24bad895f4..00000000000
--- a/docs/en-US/customizing-dns.xml
+++ /dev/null
@@ -1,44 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Customizing the Network Domain Name
- The root administrator can optionally assign a custom DNS suffix at the level of a network, account, domain, zone, or entire &PRODUCT; installation, and a domain administrator can do so within their own domain. To specify a custom domain name and put it into effect, follow these steps.
-
- Set the DNS suffix at the desired scope
-
- At the network level, the DNS suffix can be assigned through the UI when creating a new network, as described in or with the updateNetwork command in the &PRODUCT; API.
- At the account, domain, or zone level, the DNS suffix can be assigned with the appropriate &PRODUCT; API commands: createAccount, editAccount, createDomain, editDomain, createZone, or editZone.
- At the global level, use the configuration parameter guest.domain.suffix. You can also use the &PRODUCT; API command updateConfiguration. After modifying this global configuration, restart the Management Server to put the new setting into effect.
-
- To make the new DNS suffix take effect for an existing network, call the &PRODUCT; API command updateNetwork. This step is not necessary when the DNS suffix was specified while creating a new network.
-
- The source of the network domain that is used depends on the following rules.
-
- For all networks, if a network domain is specified as part of a network's own configuration, that value is used.
- For an account-specific network, the network domain specified for the account is used. If none is specified, the system looks for a value in the domain, zone, and global configuration, in that order.
- For a domain-specific network, the network domain specified for the domain is used. If none is specified, the system looks for a value in the zone and global configuration, in that order.
- For a zone-specific network, the network domain specified for the zone is used. If none is specified, the system looks for a value in the global configuration.
-
-
diff --git a/docs/en-US/database-replication.xml b/docs/en-US/database-replication.xml
deleted file mode 100644
index 8ca80713732..00000000000
--- a/docs/en-US/database-replication.xml
+++ /dev/null
@@ -1,144 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Database Replication (Optional)
- &PRODUCT; supports database replication from one MySQL node to another. This is achieved using standard MySQL replication. You may want to do this as insurance against MySQL server or storage loss. MySQL replication is implemented using a master/slave model. The master is the node that the Management Servers are configured to use. The slave is a standby node that receives all write operations from the master and applies them to a local, redundant copy of the database. The following steps are a guide to implementing MySQL replication.
- Creating a replica is not a backup solution. You should develop a backup procedure for the MySQL data that is distinct from replication.
-
- Ensure that this is a fresh install with no data in the master.
-
- Edit my.cnf on the master and add the following in the [mysqld] section below datadir.
-
-log_bin=mysql-bin
-server_id=1
-
- The server_id must be unique with respect to other servers. The recommended way to achieve this is to give the master an ID of 1 and each slave a sequential number greater than 1, so that the servers are numbered 1, 2, 3, etc.
-
-
- Restart the MySQL service. On RHEL/CentOS systems, use:
-
-# service mysqld restart
-
- On Debian/Ubuntu systems, use:
-
-# service mysql restart
-
-
-
- Create a replication account on the master and give it privileges. We will use the "cloud-repl" user with the password "password". This assumes that master and slave run on the 172.16.1.0/24 network.
-
-# mysql -u root
-mysql> create user 'cloud-repl'@'172.16.1.%' identified by 'password';
-mysql> grant replication slave on *.* TO 'cloud-repl'@'172.16.1.%';
-mysql> flush privileges;
-mysql> flush tables with read lock;
-
-
- Leave the current MySQL session running.
- In a new shell start a second MySQL session.
-
- Retrieve the current position of the database.
-
-# mysql -u root
-mysql> show master status;
-+------------------+----------+--------------+------------------+
-| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
-+------------------+----------+--------------+------------------+
-| mysql-bin.000001 | 412 | | |
-+------------------+----------+--------------+------------------+
-
-
- Note the file and the position that are returned by your instance.
- Exit from this session.
-
- Complete the master setup. Returning to your first session on the master, release the locks and exit MySQL.
-
-mysql> unlock tables;
-
-
-
- Install and configure the slave. On the slave server, run the following commands.
-
-# yum install mysql-server
-# chkconfig mysqld on
-
-
-
- Edit my.cnf and add the following lines in the [mysqld] section below datadir.
-
-server_id=2
-innodb_rollback_on_timeout=1
-innodb_lock_wait_timeout=600
-
-
-
- Restart MySQL. Use "mysqld" on RHEL/CentOS systems:
-
-# service mysqld restart
-
- On Ubuntu/Debian systems use "mysql."
-
-# service mysql restart
-
-
-
- Instruct the slave to connect to and replicate from the master. Replace the IP address, password, log file, and position with the values you have used in the previous steps.
-
-mysql> change master to
- -> master_host='172.16.1.217',
- -> master_user='cloud-repl',
- -> master_password='password',
- -> master_log_file='mysql-bin.000001',
- -> master_log_pos=412;
-
-
-
- Then start replication on the slave.
-
-mysql> start slave;
-
-
-
- Optionally, open port 3306 on the slave as was done on the master earlier.
- This is not required for replication to work. But if you choose not to do this, you will need to do it when failover to the replica occurs.
-
-
-
- Failover
- This will provide for a replicated database that can be used to implement manual failover for the Management Servers. &PRODUCT; failover from one MySQL instance to another is performed by the administrator. In the event of a database failure you should:
-
- Stop the Management Servers (via service cloudstack-management stop).
- Change the replica's configuration to be a master and restart it.
- Ensure that the replica's port 3306 is open to the Management Servers.
- Make a change so that the Management Server uses the new database. The simplest process here is to put the IP address of the new database server into each Management Server's /etc/cloudstack/management/db.properties.
-
- Restart the Management Servers:
-
-# service cloudstack-management start
-
-
-
-
-
diff --git a/docs/en-US/dates-in-usage-record.xml b/docs/en-US/dates-in-usage-record.xml
deleted file mode 100644
index dc2f07221be..00000000000
--- a/docs/en-US/dates-in-usage-record.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
-
- Dates in the Usage Record
- Usage records include a start date and an end date. These dates define the period of time for which the raw usage number was calculated. If daily aggregation is used, the start date is midnight on the day in question and the end date is 23:59:59 on the day in question (with one exception; see below). A virtual machine could have been deployed at noon on that day, stopped at 6pm on that day, then started up again at 11pm. When usage is calculated on that day, there will be 7 hours of running VM usage (usage type 1) and 12 hours of allocated VM usage (usage type 2). If the same virtual machine runs for the entire next day, there will be 24 hours of both running VM usage (type 1) and allocated VM usage (type 2).
- Note: The start date is not the time a virtual machine was started, and the end date is not the time when a virtual machine was stopped. The start and end dates give the time range within which usage was calculated.
- For network usage, the start date and end date again define the range in which the number of bytes transferred was calculated. If a user downloads 10 MB and uploads 1 MB in one day, there will be two records, one showing the 10 megabytes received and one showing the 1 megabyte sent.
- There is one case where the start date and end date do not correspond to midnight and 11:59:59pm when daily aggregation is used. This occurs only for network usage records. When the usage server has more than one day's worth of unprocessed data, the old data will be included in the aggregation period. The start date in the usage record will show the date and time of the earliest event. For other types of usage, such as IP addresses and VMs, the old unprocessed data is not included in daily aggregation.
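The arithmetic behind the example above (7 running hours versus 12 allocated hours) is just interval summing; a small sketch using a placeholder date:

```python
from datetime import datetime

def hours(intervals):
    """Total whole hours covered by a list of (start, end) datetime pairs."""
    return sum(int((end - start).total_seconds()) // 3600
               for start, end in intervals)

day = lambda h: datetime(2013, 5, 1, h)       # placeholder aggregation day
midnight_next = datetime(2013, 5, 2)
# VM deployed at noon, stopped at 6pm, started again at 11pm.
running = [(day(12), day(18)), (day(23), midnight_next)]
allocated = [(day(12), midnight_next)]
print(hours(running), hours(allocated))  # 7 12
```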
-
-
diff --git a/docs/en-US/dedicated-ha-hosts.xml b/docs/en-US/dedicated-ha-hosts.xml
deleted file mode 100644
index 89c721f080a..00000000000
--- a/docs/en-US/dedicated-ha-hosts.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Dedicated HA Hosts
- One or more hosts can be designated for use only by HA-enabled VMs that are restarting due to a host failure. Setting up a pool of such dedicated HA hosts as the recovery destination for all HA-enabled VMs is useful to:
-
- Make it easier to determine which VMs have been restarted as part of the &PRODUCT; high-availability function. If a VM is running on a dedicated HA host, then it must be an HA-enabled VM whose original host failed. (With one exception: It is possible for an administrator to manually migrate any VM to a dedicated HA host.).
- Keep HA-enabled VMs from restarting on hosts which may be reserved for other purposes.
-
- The dedicated HA option is set through a special host tag when the host is created. To allow the administrator to dedicate hosts to only HA-enabled VMs, set the global configuration variable ha.tag to the desired tag (for example, "ha_host"), and restart the Management Server. Enter the value in the Host Tags field when adding the host(s) that you want to dedicate to HA-enabled VMs.
- If you set ha.tag, be sure to actually use that tag on at least one host in your cloud. If the tag specified in ha.tag is not set for any host in the cloud, the HA-enabled VMs will fail to restart after a crash.
-
diff --git a/docs/en-US/default-account-resource-limit.xml b/docs/en-US/default-account-resource-limit.xml
deleted file mode 100644
index 5134e508c11..00000000000
--- a/docs/en-US/default-account-resource-limit.xml
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Default Account Resource Limits
- You can limit resource use by accounts. The default limits are set by using global
- configuration parameters, and they affect all accounts within a cloud. The relevant
- parameters are those beginning with max.account, for example: max.account.snapshots.
- To override a default limit for a particular account, set a per-account resource limit.
-
- Log in to the &PRODUCT; UI.
- In the left navigation tree, click Accounts.
- Select the account you want to modify. The current limits are displayed. A value of -1 shows
- that there is no limit in place.
- Click the Edit button.
-
-
-
-
- editbutton.png: edits the settings
-
-
-
-
-
diff --git a/docs/en-US/default-template.xml b/docs/en-US/default-template.xml
deleted file mode 100644
index 16442c38f47..00000000000
--- a/docs/en-US/default-template.xml
+++ /dev/null
@@ -1,56 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- The Default Template
- &PRODUCT; includes a CentOS template. This template is downloaded by the Secondary Storage VM after the primary and secondary storage are configured. You can use this template in your production deployment or you can delete it and use custom templates.
- The root password for the default template is "password".
- A default template is provided for each of XenServer, KVM, and vSphere. The templates that are downloaded depend on the hypervisor type that is available in your cloud. Each template is approximately 2.5 GB in physical size.
- The default template includes the standard iptables rules, which block most network access to the template except SSH.
- # iptables --list
-Chain INPUT (policy ACCEPT)
-target prot opt source destination
-RH-Firewall-1-INPUT all -- anywhere anywhere
-
-Chain FORWARD (policy ACCEPT)
-target prot opt source destination
-RH-Firewall-1-INPUT all -- anywhere anywhere
-
-Chain OUTPUT (policy ACCEPT)
-target prot opt source destination
-
-Chain RH-Firewall-1-INPUT (2 references)
-target prot opt source destination
-ACCEPT all -- anywhere anywhere
-ACCEPT icmp -- anywhere anywhere icmp any
-ACCEPT esp -- anywhere anywhere
-ACCEPT ah -- anywhere anywhere
-ACCEPT udp -- anywhere 224.0.0.251 udp dpt:mdns
-ACCEPT udp -- anywhere anywhere udp dpt:ipp
-ACCEPT tcp -- anywhere anywhere tcp dpt:ipp
-ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
-ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
-REJECT all -- anywhere anywhere reject-with icmp-host-
-
-
diff --git a/docs/en-US/delete-event-alerts.xml b/docs/en-US/delete-event-alerts.xml
deleted file mode 100644
index 392b37f151f..00000000000
--- a/docs/en-US/delete-event-alerts.xml
+++ /dev/null
@@ -1,89 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Deleting and Archiving Events and Alerts
- &PRODUCT; provides the ability to delete or archive existing alerts and events that
- you no longer need. You can regularly delete or archive any alerts or events that
- you cannot, or do not want to, resolve from the database.
- You can delete or archive individual alerts or events either directly by using the Quickview
- or by using the Details page. If you want to delete multiple alerts or events at the same time,
- you can use the respective context menu. You can delete alerts or events by category for a time
- period. For example, you can select categories such as USER.LOGOUT, VM.DESTROY, VM.AG.UPDATE, CONFIGURATION.VALUE.EDIT, and so on.
- You can also view the number of events or alerts archived or deleted.
- To support deleting and archiving alerts, the following global parameters have been
- added:
-
-
- alert.purge.delay: Alerts older than the specified
- number of days are purged. Set the value to 0 to never purge alerts automatically.
-
-
- alert.purge.interval: The interval in seconds to wait
- before running the alert purge thread. The default is 86400 seconds (one day).
-
-
-
- Archived alerts or events cannot be viewed in the UI or by using the API. They are
- maintained in the database for auditing or compliance purposes.
-
-
- Permissions
- Consider the following:
-
-
- The root admin can delete or archive one or multiple alerts or events.
-
-
- The domain admin or end user can delete or archive one or multiple events.
-
-
-
-
- Procedure
-
-
- Log in as administrator to the &PRODUCT; UI.
-
-
- In the left navigation, click Events.
-
-
- Perform either of the following:
-
-
- To archive events, click Archive Events, and specify the event type and time
- period.
-
-
- To delete events, click Delete Events, and specify the event type and time
- period.
-
-
-
-
- Click OK.
-
-
-
-
diff --git a/docs/en-US/delete-reset-vpn.xml b/docs/en-US/delete-reset-vpn.xml
deleted file mode 100644
index 2fe85d279b6..00000000000
--- a/docs/en-US/delete-reset-vpn.xml
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Restarting and Removing a VPN Connection
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
- All the VPCs that you have created for the account are listed on the page.
-
-
- Click the Configure button of the VPC you want to work with.
- The VPC page is displayed where all the tiers you created are listed in a
- diagram.
-
-
- Click the Settings icon.
- For each tier, the following options are displayed:
-
-
- Internal LB
-
-
- Public LB IP
-
-
- Static NAT
-
-
- Virtual Machines
-
-
- CIDR
-
-
- The following router information is displayed:
-
-
- Private Gateways
-
-
- Public IP Addresses
-
-
- Site-to-Site VPNs
-
-
- Network ACL Lists
-
-
-
-
- Select Site-to-Site VPN.
- The Site-to-Site VPN page is displayed.
-
-
- From the Select View drop-down, ensure that VPN Connection is selected.
- All the VPN connections you created are displayed.
-
-
- Select the VPN connection you want to work with.
- The Details tab is displayed.
-
-
- To remove a VPN connection, click the Delete VPN connection button.
-
-
-
-
- remove-vpn.png: button to remove a VPN connection
-
-
- To restart a VPN connection, click the Reset VPN connection button present in the
- Details tab.
-
-
-
-
- reset-vpn.png: button to reset a VPN connection
-
-
-
-
-
diff --git a/docs/en-US/delete-templates.xml b/docs/en-US/delete-templates.xml
deleted file mode 100644
index f9351da844f..00000000000
--- a/docs/en-US/delete-templates.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Deleting Templates
- Templates may be deleted. In general, when a template spans multiple Zones, only the copy that is selected for deletion will be deleted; the same template in other Zones will not be deleted. The provided CentOS template is an exception to this. If the provided CentOS template is deleted, it will be deleted from all Zones.
- When templates are deleted, the VMs instantiated from them will continue to run. However, new VMs cannot be created based on the deleted template.
-
diff --git a/docs/en-US/deleting-vms.xml b/docs/en-US/deleting-vms.xml
deleted file mode 100644
index 97245c81ef4..00000000000
--- a/docs/en-US/deleting-vms.xml
+++ /dev/null
@@ -1,43 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Deleting VMs
- Users can delete their own virtual machines. A running virtual machine will be abruptly stopped before it is deleted. Administrators can delete any virtual machine.
- To delete a virtual machine:
-
- Log in to the &PRODUCT; UI as a user or admin.
- In the left navigation, click Instances.
- Choose the VM that you want to delete.
- Click the Destroy Instance button.
-
-
-
-
- Destroyinstance.png: button to destroy an instance
-
-
-
-
-
-
diff --git a/docs/en-US/dell62xx-hardware.xml b/docs/en-US/dell62xx-hardware.xml
deleted file mode 100644
index 8bc7770ce86..00000000000
--- a/docs/en-US/dell62xx-hardware.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Dell 62xx
- The following steps show how a Dell 62xx is configured for zone-level layer-3 switching.
- These steps assume VLAN 201 is used to route untagged private IPs for pod 1, and pod 1’s layer-2
- switch is connected to Ethernet port 1/g1.
- The Dell 62xx Series switch supports up to 1024 VLANs.
-
-
- Configure all the VLANs in the database.
- vlan database
-vlan 200-999
-exit
-
-
- Configure Ethernet port 1/g1.
- interface ethernet 1/g1
-switchport mode general
-switchport general pvid 201
-switchport general allowed vlan add 201 untagged
-switchport general allowed vlan add 300-999 tagged
-exit
-
-
- The statements configure Ethernet port 1/g1 as follows:
-
-
- VLAN 201 is the native untagged VLAN for port 1/g1.
-
-
- All VLANs (300-999) are passed to all the pod-level layer-2 switches.
-
-
-
diff --git a/docs/en-US/dell62xx-layer2.xml b/docs/en-US/dell62xx-layer2.xml
deleted file mode 100644
index 1c0eea07203..00000000000
--- a/docs/en-US/dell62xx-layer2.xml
+++ /dev/null
@@ -1,49 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Dell 62xx
- The following steps show how a Dell 62xx is configured for pod-level layer-2
- switching.
-
-
- Configure all the VLANs in the database.
- vlan database
-vlan 300-999
-exit
-
-
- VLAN 201 is used to route untagged private IP addresses for pod 1, and pod 1 is connected to this layer-2 switch.
- interface range ethernet all
-switchport mode general
-switchport general allowed vlan add 300-999 tagged
-exit
-
-
- The statements configure all Ethernet ports to function as follows:
-
-
- All ports are configured the same way.
-
-
- All VLANs (300-999) are passed through all the ports of the layer-2 switch.
-
-
-
diff --git a/docs/en-US/deployment-architecture-overview.xml b/docs/en-US/deployment-architecture-overview.xml
deleted file mode 100644
index 835898ced7f..00000000000
--- a/docs/en-US/deployment-architecture-overview.xml
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Deployment Architecture Overview
-
- A &PRODUCT; installation consists of two parts: the Management Server
- and the cloud infrastructure that it manages. When you set up and
- manage a &PRODUCT; cloud, you provision resources such as hosts,
- storage devices, and IP addresses into the Management Server, and
- the Management Server manages those resources.
-
-
- The minimum production installation consists of one machine running
- the &PRODUCT; Management Server and another machine to act as the
- cloud infrastructure (in this case, a very simple infrastructure
- consisting of one host running hypervisor software). In its smallest
- deployment, a single machine can act as both the Management Server
- and the hypervisor host (using the KVM hypervisor).
-
-
-
-
-
- basic-deployment.png: Basic two-machine deployment
-
-
- A more full-featured installation consists of a highly-available
- multi-node Management Server installation and up to tens of thousands of
- hosts using any of several advanced networking setups. For
- information about deployment options, see the "Choosing a Deployment Architecture"
- section of the &PRODUCT; Installation Guide.
-
-
-
-
-
diff --git a/docs/en-US/detach-move-volumes.xml b/docs/en-US/detach-move-volumes.xml
deleted file mode 100644
index 8922db12161..00000000000
--- a/docs/en-US/detach-move-volumes.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Detaching and Moving Volumes
-
- This procedure is different from moving volumes from one storage pool to another as described in .
-
- A volume can be detached from a guest VM and attached to another guest. Both &PRODUCT;
- administrators and users can detach volumes from VMs and move them to other VMs.
- If the two VMs are in different clusters, and the volume is large, it may take several
- minutes for the volume to be moved to the new VM.
-
-
-
- Log in to the &PRODUCT; UI as a user or admin.
-
-
- In the left navigation bar, click Storage, and choose Volumes in Select View.
- Alternatively, if you know which VM the volume is attached to, you can click Instances,
- click the VM name, and click View Volumes.
-
-
- Click the name of the volume you want to detach, then click the Detach Disk button.
-
-
-
-
- DetachDiskButton.png: button to detach a volume
-
-
-
-
-
- To move the volume to another VM, follow the steps in .
-
-
-
diff --git a/docs/en-US/devcloud-usage-mode.xml b/docs/en-US/devcloud-usage-mode.xml
deleted file mode 100644
index bc211ce1436..00000000000
--- a/docs/en-US/devcloud-usage-mode.xml
+++ /dev/null
@@ -1,60 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- DevCloud Usage Mode
- DevCloud can be used in several different ways:
-
-
- Full sandbox. Where &PRODUCT; is run within the DevCloud instance started in Virtual Box.
- In this mode, the &PRODUCT; management server runs within the instance and nested virtualization allows instantiation of tiny VMs within DevCloud itself. &PRODUCT; code modifications are done within DevCloud.
- The following diagram shows the architecture of the SandBox mode.
-
-
-
-
-
- DevCloud.png: Schematic of the DevCloud SandBox architecture
-
-
-
-
- A deployment environment. Where &PRODUCT; code is developed in the localhost of the developer and the resulting build is deployed within DevCloud
- This mode was used in the testing procedure of &PRODUCT; 4.0.0 incubating release. See the following screencast to see how: http://vimeo.com/54621457
-
-
- A host-only mode. Where DevCloud is used only as a host. &PRODUCT; management server is run in the localhost of the developer
- This mode makes use of a host-only interface defined in the Virtual Box preferences. Check the following screencast to see how: http://vimeo.com/54610161
- The following schematic shows the architecture of the Host-Only mode.
-
-
-
-
-
- DevCloud-hostonly.png: Schematic of the DevCloud host-only architecture
-
-
-
-
-
diff --git a/docs/en-US/devcloud.xml b/docs/en-US/devcloud.xml
deleted file mode 100644
index 677818700ae..00000000000
--- a/docs/en-US/devcloud.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- DevCloud
- DevCloud is the &PRODUCT; sandbox. It is provided as a Virtual Box appliance. It is meant to be used as a development environment to easily test new &PRODUCT; development. It has also been used for training and &PRODUCT; demos since it provides a Cloud in a box.
-
- DevCloud is provided as a convenience by community members. It is not an official &PRODUCT; release artifact.
- The &PRODUCT; source code, however, contains tools to build your own DevCloud.
-
-
- DevCloud is under development and should be considered a Work In Progress (WIP); the wiki is the most up-to-date documentation:
-
-
-
-
-
diff --git a/docs/en-US/developer-getting-started.xml b/docs/en-US/developer-getting-started.xml
deleted file mode 100644
index 14560280909..00000000000
--- a/docs/en-US/developer-getting-started.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
-
- Getting Started
-
- To get started using the &PRODUCT; API, you should have the following:
-
- URL of the &PRODUCT; server you wish to integrate with.
- Both the API Key and Secret Key for an account. This should have been generated by the administrator of the cloud instance and given to you.
- Familiarity with HTTP GET/POST and query strings.
- Knowledge of either XML or JSON.
- Knowledge of a programming language that can generate HTTP requests; for example, Java or PHP.
-
-
-
diff --git a/docs/en-US/developer-introduction.xml b/docs/en-US/developer-introduction.xml
deleted file mode 100644
index 9d54f31dae9..00000000000
--- a/docs/en-US/developer-introduction.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Introduction to the &PRODUCT; API
-
-
-
-
diff --git a/docs/en-US/disable-enable-zones-pods-clusters.xml b/docs/en-US/disable-enable-zones-pods-clusters.xml
deleted file mode 100644
index 7d52ae7c7a9..00000000000
--- a/docs/en-US/disable-enable-zones-pods-clusters.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Disabling and Enabling Zones, Pods, and Clusters
- You can enable or disable a zone, pod, or cluster without permanently removing it from the cloud. This is useful for maintenance or when there are problems that make a portion of the cloud infrastructure unreliable. No new allocations will be made to a disabled zone, pod, or cluster until its state is returned to Enabled. When a zone, pod, or cluster is first added to the cloud, it is Disabled by default.
- To disable and enable a zone, pod, or cluster:
-
- Log in to the &PRODUCT; UI as administrator
- In the left navigation bar, click Infrastructure.
-
- In Zones, click View More.
-
- If you are disabling or enabling a zone, find the name of the zone in the list, and click the Enable/Disable button.
-
-
-
- enable-disable.png: button to enable or disable zone, pod, or cluster.
-
-
- If you are disabling or enabling a pod or cluster, click the name of the zone that contains the pod or cluster.
- Click the Compute tab.
-
- In the Pods or Clusters node of the diagram, click View All.
-
- Click the pod or cluster name in the list.
- Click the Enable/Disable button.
-
-
-
-
-
diff --git a/docs/en-US/disk-volume-usage-record-format.xml b/docs/en-US/disk-volume-usage-record-format.xml
deleted file mode 100644
index c15d979e113..00000000000
--- a/docs/en-US/disk-volume-usage-record-format.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-
- Disk Volume Usage Record Format
- For disk volumes, the following fields exist in a usage record.
-
- account – name of the account
- accountid – ID of the account
- domainid – ID of the domain in which this account resides
- zoneid – Zone where the usage occurred
- description – A string describing what the usage record is tracking
- usage – String representation of the usage, including the units of usage (e.g. 'Hrs' for hours)
- usagetype – A number representing the usage type (see Usage Types)
- rawusage – A number representing the actual usage in hours
- usageid – The volume ID
- offeringid – The ID of the disk offering
- type – Hypervisor
- templateid – ROOT template ID
- size – The amount of storage allocated
- startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record
-
-
diff --git a/docs/en-US/dns-dhcp.xml b/docs/en-US/dns-dhcp.xml
deleted file mode 100644
index 2359e8380cd..00000000000
--- a/docs/en-US/dns-dhcp.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- DNS and DHCP
- The Virtual Router provides DNS and DHCP services to the guests. It proxies DNS requests to the DNS server configured for the Availability Zone.
-
diff --git a/docs/en-US/domains.xml b/docs/en-US/domains.xml
deleted file mode 100644
index f348fe88998..00000000000
--- a/docs/en-US/domains.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Domains
- If the LDAP server requires SSL, you need to enable it in the ldapConfig command by setting the parameters ssl, truststore, and truststorepass. Before enabling SSL for ldapConfig, you need to get the certificate which the LDAP server is using and add it to a trusted keystore. You will need to know the path to the keystore and the password.
-
diff --git a/docs/en-US/egress-firewall-rule.xml b/docs/en-US/egress-firewall-rule.xml
deleted file mode 100644
index 93d5a814547..00000000000
--- a/docs/en-US/egress-firewall-rule.xml
+++ /dev/null
@@ -1,168 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Egress Firewall Rules in an Advanced Zone
- Egress traffic originates on a private network and is destined for a public network, such as the
- Internet. By default, egress traffic is blocked in the default network offerings, so no outgoing
- traffic is allowed from a guest network to the Internet. However, you can control the egress
- traffic in an Advanced zone by creating egress firewall rules. When an egress firewall rule is
- applied, the traffic matching the rule is allowed and the remaining traffic is blocked. When
- all the firewall rules are removed, the default policy, Block, is applied.
-
- Prerequisites and Guidelines
- Consider the following scenarios to apply egress firewall rules:
-
-
- Egress firewall rules are supported on Juniper SRX and virtual router.
-
-
- The egress firewall rules are not supported on shared networks.
-
-
- Allow the egress traffic from specified source CIDR. The Source CIDR is part of guest
- network CIDR.
-
-
- Allow the egress traffic with protocol TCP, UDP, ICMP, or ALL.
-
-
- Allow the egress traffic with protocol and destination port range. The port range is
- specified for TCP, UDP or for ICMP type and code.
-
-
- The default policy is Allow for new network offerings, whereas on upgrade, existing
- network offerings with firewall service providers will have the default egress policy
- Deny.
-
-
-
-
- Configuring an Egress Firewall Rule
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In Select view, choose Guest networks, then click the Guest network you want.
-
-
- To add an egress rule, click the Egress rules tab and fill out the following fields to
- specify what type of traffic is allowed to be sent out of VM instances in this guest
- network:
-
-
-
-
-
- egress-firewall-rule.png: adding an egress firewall rule
-
-
-
-
- CIDR: (Add by CIDR only) To send traffic only to
- the IP addresses within a particular address block, enter a CIDR or a comma-separated
- list of CIDRs. The CIDR is the base IP address of the destination. For example,
- 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
-
-
- Protocol: The networking protocol that VMs use
- to send outgoing traffic. The TCP and UDP protocols are typically used for data
- exchange and end-user communications. The ICMP protocol is typically used to send
- error messages or network monitoring data.
-
-
- Start Port, End Port: (TCP, UDP only) A range of
- listening ports that are the destination for the outgoing traffic. If you are opening
- a single port, use the same number in both fields.
-
-
- ICMP Type, ICMP Code: (ICMP only) The type of
- message and error code that are sent.
-
-
-
-
- Click Add.
-
-
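
The UI fields above correspond to the parameters of the createEgressFirewallRule API call. The following is a minimal sketch of assembling that request; the network ID, CIDR, and port values are placeholders to adapt to your deployment:

```python
import urllib.parse

def egress_rule_params(network_id, protocol, cidr="0.0.0.0/0",
                       start_port=None, end_port=None):
    """Build query parameters for createEgressFirewallRule.

    protocol is one of tcp, udp, icmp, or all; the port range applies
    to TCP and UDP only, mirroring the UI fields described above.
    """
    params = {"command": "createEgressFirewallRule",
              "networkid": network_id,
              "protocol": protocol,
              "cidrlist": cidr}
    if start_port is not None:
        params["startport"] = start_port
        # Opening a single port uses the same number in both fields.
        params["endport"] = end_port if end_port is not None else start_port
    return urllib.parse.urlencode(params)

# Example: allow outbound TCP port 80 to the 192.168.0.0/22 block.
query = egress_rule_params("net-uuid", "tcp", "192.168.0.0/22", 80)
```

The resulting query string still needs to be signed and submitted like any other API request.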
-
-
- Configuring the Default Egress Policy
- The default egress policy for an Isolated guest network is configured by using a network
- offering. Use the create network offering option to determine whether the default policy
- should block or allow all traffic to the public network from a guest network. Use this
- network offering to create the network. If no policy is specified, by default all the traffic
- is allowed from the guest network that you create by using this network offering.
- You have two options: Allow and Deny.
-
- Allow
- If you select Allow for a network offering, by default egress traffic is allowed.
- However, when an egress rule is configured for a guest network, rules are applied to block
- the specified traffic and the rest is allowed. If no egress rules are configured for the
- network, egress traffic is accepted.
-
-
- Deny
- If you select Deny for a network offering, by default egress traffic for the guest
- network is blocked. However, when an egress rule is configured for a guest network, rules
- are applied to allow the specified traffic. While implementing a guest network, &PRODUCT;
- adds the firewall egress rule specific to the default egress policy for the guest
- network.
-
- This feature is supported only on virtual router and Juniper SRX.
-
-
- Create a network offering with your desirable default egress policy:
-
-
- Log in with admin privileges to the &PRODUCT; UI.
-
-
- In the left navigation bar, click Service Offerings.
-
-
- In Select Offering, choose Network Offering.
-
-
- Click Add Network Offering.
-
-
- In the dialog, make necessary choices, including firewall provider.
-
-
- In the Default egress policy field, specify the behaviour.
-
-
- Click OK.
-
-
-
-
- Create an isolated network by using this network offering.
- Based on your selection, the network will have the egress public traffic blocked or
- allowed.
-
-
-
-
diff --git a/docs/en-US/elastic-ip.xml b/docs/en-US/elastic-ip.xml
deleted file mode 100644
index 8ecbd75be70..00000000000
--- a/docs/en-US/elastic-ip.xml
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- About Elastic IP
- Elastic IP (EIP) addresses are the IP addresses that are associated with an account, and act
- as static IP addresses. The account owner has the complete control over the Elastic IP addresses
- that belong to the account. As an account owner, you can allocate an Elastic IP to a VM of your
- choice from the EIP pool of your account. Later if required you can reassign the IP address to a
- different VM. This feature is extremely helpful during VM failure. Instead of replacing the VM
- which is down, the IP address can be reassigned to a new VM in your account.
- Similar to the public IP address, Elastic IP addresses are mapped to their associated
- private IP addresses by using StaticNAT. The EIP service is equipped with StaticNAT (1:1)
- service in an EIP-enabled basic zone. The default network offering,
- DefaultSharedNetscalerEIPandELBNetworkOffering, provides your network with EIP and ELB network
- services if a NetScaler device is deployed in your zone. Consider the following illustration for
- more details.
-
-
-
-
-
- eip-ns-basiczone.png: Elastic IP in a NetScaler-enabled Basic Zone.
-
-
- In the illustration, a NetScaler appliance is the default entry or exit point for the
- &PRODUCT; instances, and a firewall is the default entry or exit point for the rest of the data
- center. NetScaler provides LB and static NAT services to the guest networks. The guest
- traffic in the pods and the Management Server are on different subnets / VLANs. The policy-based
- routing in the data center core switch sends the public traffic through the NetScaler, whereas
- the rest of the data center goes through the firewall.
- The EIP work flow is as follows:
-
-
- When a user VM is deployed, a public IP is automatically acquired from the pool of
- public IPs configured in the zone. This IP is owned by the VM's account.
-
-
- Each VM will have its own private IP. When the user VM starts, Static NAT is provisioned
- on the NetScaler device by using the Inbound Network Address Translation (INAT) and Reverse
- NAT (RNAT) rules between the public IP and the private IP.
-
- Inbound NAT (INAT) is a type of NAT supported by NetScaler, in which the destination
- IP address is replaced in the packets from the public network, such as the Internet, with
- the private IP address of a VM in the private network. Reverse NAT (RNAT) is a type of NAT
- supported by NetScaler, in which the source IP address is replaced in the packets
- generated by a VM in the private network with the public IP address.
-
-
-
- This default public IP will be released in two cases:
-
-
- When the VM is stopped. When the VM starts, it again receives a new public IP, not
- necessarily the same one allocated initially, from the pool of Public IPs.
-
-
- The user acquires a public IP (Elastic IP). This public IP is associated with the
- account, but will not be mapped to any private IP. However, the user can enable Static
- NAT to associate this IP to the private IP of a VM in the account. The Static NAT rule
- for the public IP can be disabled at any time. When Static NAT is disabled, a new public
- IP is allocated from the pool, which is not necessarily the same one allocated
- initially.
-
-
-
-
- For the deployments where public IPs are limited resources, you have the flexibility to
- choose not to allocate a public IP by default. You can use the Associate Public IP option to
- turn on or off the automatic public IP assignment in the EIP-enabled Basic zones. If you turn
- off the automatic public IP assignment while creating a network offering, only a private IP is
- assigned to a VM when the VM is deployed with that network offering. Later, the user can acquire
- an IP for the VM and enable static NAT.
- For more information on the Associate Public IP option, see the
- Administration Guide.
-
- The Associate Public IP feature is designed only for use with user VMs. The System VMs
- continue to get both a public IP and a private IP by default, irrespective of the network offering
- configuration.
-
- New deployments which use the default shared network offering with EIP and ELB services to
- create a shared network in the Basic zone will continue allocating public IPs to each user
- VM.
-
diff --git a/docs/en-US/enable-disable-static-nat-vpc.xml b/docs/en-US/enable-disable-static-nat-vpc.xml
deleted file mode 100644
index 467a304915d..00000000000
--- a/docs/en-US/enable-disable-static-nat-vpc.xml
+++ /dev/null
@@ -1,112 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Enabling or Disabling Static NAT on a VPC
- A static NAT rule maps a public IP address to the private IP address of a VM in a VPC to
- allow Internet traffic to it. This section tells how to enable or disable static NAT for a
- particular IP address in a VPC.
- If port forwarding rules are already in effect for an IP address, you cannot enable static
- NAT to that IP.
- If a guest VM is part of more than one network, static NAT rules will function only if they
- are defined on the default network.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- In the Select view, select VPC.
- All the VPCs that you have created for the account are listed on the page.
-
-
- Click the Configure button of the VPC to which you want to deploy the VMs.
- The VPC page is displayed where all the tiers you created are listed in a
- diagram.
- For each tier, the following options are displayed.
-
-
- Internal LB
-
-
- Public LB IP
-
-
- Static NAT
-
-
- Virtual Machines
-
-
- CIDR
-
-
- The following router information is displayed:
-
-
- Private Gateways
-
-
- Public IP Addresses
-
-
- Site-to-Site VPNs
-
-
- Network ACL Lists
-
-
-
-
- In the Router node, select Public IP Addresses.
- The IP Addresses page is displayed.
-
-
- Click the IP you want to work with.
-
-
- In the Details tab, click the Static NAT button.
-
-
-
-
- enable-disable.png: button to enable Static NAT.
-
- The button toggles between Enable and Disable, depending on whether
- static NAT is currently enabled for the IP address.
-
-
- If you are enabling static NAT, a dialog appears as follows:
-
-
-
-
-
- select-vmstatic-nat.png: selecting a tier to apply staticNAT.
-
-
-
-
- Select the tier and the destination VM, then click Apply.
-
-
-
diff --git a/docs/en-US/enable-disable-static-nat.xml b/docs/en-US/enable-disable-static-nat.xml
deleted file mode 100644
index 0154dca2732..00000000000
--- a/docs/en-US/enable-disable-static-nat.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Enabling or Disabling Static NAT
- If port forwarding rules are already in effect for an IP address, you cannot enable static NAT to that IP.
- If a guest VM is part of more than one network, static NAT rules will function only if they are defined on the default network.
-
- Log in to the &PRODUCT; UI as an administrator or end user.
- In the left navigation, choose Network.
- Click the name of the network you want to work with.
- Click View IP Addresses.
- Click the IP address you want to work with.
-
- Click the Static NAT
-
-
-
-
- ReleaseIPButton.png: button to release an IP
-
- button. The button toggles between Enable and Disable, depending on whether static NAT is currently enabled for the IP address.
- If you are enabling static NAT, a dialog appears where you can choose the destination VM and
- click Apply.
-
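
The Static NAT button corresponds to the enableStaticNat and disableStaticNat API calls. Below is a hedged sketch of building those requests; the IP and VM identifiers are placeholders:

```python
import urllib.parse

def static_nat_params(enable, ip_address_id, vm_id=None):
    """Query parameters for enabling or disabling static NAT.

    enableStaticNat needs the destination VM; disableStaticNat only
    needs the public IP address ID.
    """
    if enable:
        params = {"command": "enableStaticNat",
                  "ipaddressid": ip_address_id,
                  "virtualmachineid": vm_id}
    else:
        params = {"command": "disableStaticNat",
                  "ipaddressid": ip_address_id}
    return urllib.parse.urlencode(params)

on = static_nat_params(True, "ip-1234", "vm-5678")
off = static_nat_params(False, "ip-1234")
```

As with the UI, disabling the rule leaves the public IP associated with the account until it is released.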
-
diff --git a/docs/en-US/enable-security-groups.xml b/docs/en-US/enable-security-groups.xml
deleted file mode 100644
index c957310f9d6..00000000000
--- a/docs/en-US/enable-security-groups.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Enabling Security Groups
- In order for security groups to function in a zone, the security groups feature must first be
- enabled for the zone. The administrator can do this when creating a new zone, by selecting a
- network offering that includes security groups. The procedure is described in Basic Zone
- Configuration in the Advanced Installation Guide. The administrator cannot enable security
- groups for an existing zone; security groups can be enabled only when creating a new zone.
-
-
diff --git a/docs/en-US/enabling-api-call-expiration.xml b/docs/en-US/enabling-api-call-expiration.xml
deleted file mode 100644
index cd82d3d1141..00000000000
--- a/docs/en-US/enabling-api-call-expiration.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Enabling API Call Expiration
-
- You can set an expiry timestamp on API calls to prevent replay attacks over non-secure channels, such as HTTP. The server tracks the expiry timestamp you have specified and rejects all the subsequent API requests that come in after this validity period.
-
- To enable this feature, add the following parameters to the API request:
-
- signatureVersion=3: If the signatureVersion parameter is missing or is not equal to 3, the expires parameter is ignored in the API request.
- expires=YYYY-MM-DDThh:mm:ssZ: Specifies the date and time at which the signature included in the request is expired. The timestamp is expressed in the YYYY-MM-DDThh:mm:ssZ format, as specified in the ISO 8601 standard.
-
- For example:
- expires=2011-10-10T12:00:00+0530
- A sample API request with expiration is given below:
- http://<IPAddress>:8080/client/api?command=listZones&signatureVersion=3&expires=2011-10-10T12:00:00+0530&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
-
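
A request like the one above can be produced programmatically. The sketch below follows the standard &PRODUCT; signing scheme (sort the parameters, URL-encode and lowercase the query string, HMAC-SHA1 it, Base64-encode the digest); the endpoint and keys are placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

API_KEY = "your-api-key"          # placeholder
SECRET_KEY = "your-secret-key"    # placeholder
ENDPOINT = "http://localhost:8080/client/api"

def signed_url(command, expires=None, **params):
    """Build a signed API URL, optionally with an expiry timestamp.

    Passing expires also adds signatureVersion=3, so the server
    enforces the validity period instead of ignoring expires.
    """
    query = {"command": command, "apiKey": API_KEY, **params}
    if expires:
        query["signatureVersion"] = "3"
        query["expires"] = expires
    # Sign the sorted, URL-encoded, lowercased query string.
    to_sign = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(query.items())
    ).lower()
    digest = hmac.new(SECRET_KEY.encode(), to_sign.encode(),
                      hashlib.sha1).digest()
    query["signature"] = base64.b64encode(digest).decode()
    return ENDPOINT + "?" + urllib.parse.urlencode(query)

url = signed_url("listZones", expires="2011-10-10T12:00:00+0530")
```

Note that the timezone offset in the expires value must survive URL encoding (the + becomes %2B), otherwise the signature check fails.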
-
diff --git a/docs/en-US/enabling-port-8096.xml b/docs/en-US/enabling-port-8096.xml
deleted file mode 100644
index 57c492edcd5..00000000000
--- a/docs/en-US/enabling-port-8096.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Enabling Port 8096
-
- Port 8096, which allows API calls without authentication, is closed and disabled by default on any fresh 3.0.1 installations. You can enable 8096 (or another port) for this purpose as follows:
-
-
- Ensure that the first Management Server is installed and running.
- Set the global configuration parameter integration.api.port to the desired port.
- Restart the Management Server.
- On the Management Server host machine, create an iptables rule allowing access to that port.
-
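
Once integration.api.port is set, requests on that port need no API key or signature. A minimal sketch of building such a request follows; the host name is a placeholder and the port assumes the value discussed above:

```python
import urllib.parse

def integration_url(host, command, port=8096, **params):
    """Build an unauthenticated API URL for the integration port.

    No apiKey or signature is attached, which is exactly why access
    to this port must be restricted with an iptables rule.
    """
    query = urllib.parse.urlencode({"command": command,
                                    "response": "xml", **params})
    return f"http://{host}:{port}/client/api?{query}"

url = integration_url("mgmt-server.example.com", "listZones")
```
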
-
-
diff --git a/docs/en-US/end-user-ui-overview.xml b/docs/en-US/end-user-ui-overview.xml
deleted file mode 100644
index 6ec1a25fc55..00000000000
--- a/docs/en-US/end-user-ui-overview.xml
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- End User's UI Overview
- The &PRODUCT; UI helps users of cloud infrastructure to view and use their cloud resources, including virtual machines, templates and ISOs, data volumes and snapshots, guest networks, and IP addresses. If the user is a member or administrator of one or more &PRODUCT; projects, the UI can provide a project-oriented view.
-
diff --git a/docs/en-US/error-handling.xml b/docs/en-US/error-handling.xml
deleted file mode 100644
index 3f119bf4d93..00000000000
--- a/docs/en-US/error-handling.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Error Handling
- If an error occurs while processing an API request, the appropriate response in the format specified is returned. Each error response consists of an error code and an error text describing what went wrong. For an example error response, see page 12.
- An HTTP error code of 401 is always returned if an API request was rejected due to bad signatures, missing API Keys, or the user simply did not have the permissions to execute the command.
-
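
An error response can be parsed mechanically before deciding how to react. The sample body below is illustrative (the errorcode and errortext element names match the XML response format; the exact text varies):

```python
import xml.etree.ElementTree as ET

# Hypothetical error body as returned for a rejected listZones call.
sample = """<listzonesresponse>
  <errorcode>401</errorcode>
  <errortext>unable to verify user credentials</errortext>
</listzonesresponse>"""

def parse_error(body):
    """Return (errorcode, errortext) from an API error response,
    or (None, None) if the body carries no error fields."""
    root = ET.fromstring(body)
    code = root.findtext("errorcode")
    return (int(code) if code else None, root.findtext("errortext"))

code, text = parse_error(sample)
```

A 401 here means a bad signature, a missing API key, or insufficient permissions, so retrying the same request unchanged is pointless.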
diff --git a/docs/en-US/event-framework.xml b/docs/en-US/event-framework.xml
deleted file mode 100644
index 0f62fac1407..00000000000
--- a/docs/en-US/event-framework.xml
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Event Notification
- The event notification framework provides a means for the Management Server components to
- publish and subscribe to &PRODUCT; events. Event notification is achieved by implementing the
- concept of event bus abstraction in the Management Server. An event bus is introduced in the
- Management Server that allows the &PRODUCT; components and extension plug-ins to subscribe to the
- events by using the Advanced Message Queuing Protocol (AMQP) client. In &PRODUCT;, a default
- implementation of event bus is provided as a plug-in that uses the RabbitMQ AMQP client. The
- AMQP client pushes the published events to a compatible AMQP server. Therefore all the &PRODUCT;
- events are published to an exchange in the AMQP server.
- A new event for state change, resource state change, is introduced as part of Event
- notification framework. Every resource, such as user VM, volume, NIC, network, public IP,
- snapshot, and template, is associated with a state machine and generates events as part of the
- state change. That implies that a change in the state of a resource results in a state change
- event, and the event is published in the corresponding state machine on the event bus. All the
- &PRODUCT; events (alerts, action events, usage events) and the additional category of resource
- state change events, are published on the event bus.
-
- Use Cases
- The following are some of the use cases:
-
-
-
- Usage or Billing Engines: A third-party cloud usage solution can implement a plug-in
- that connects to &PRODUCT; to subscribe to &PRODUCT; events and generate usage data. The
- usage data is consumed by their usage software.
-
-
- The AMQP plug-in can place all the events on a message queue, then an AMQP message broker
- can provide topic-based notification to the subscribers.
-
-
- Publish and Subscribe notification service can be implemented as a pluggable service in
- &PRODUCT; that can provide rich set of APIs for event notification, such as topics-based
- subscription and notification. Additionally, the pluggable service can deal with
- multi-tenancy, authentication, and authorization issues.
-
-
-
- Configuration
- As a &PRODUCT; administrator, perform the following one-time configuration to enable the event
- notification framework. The behaviour cannot be changed at run time.
-
-
-
- Open componentContext.xml.
-
-
- Define a bean named eventNotificationBus as follows:
-
-
- name : Specify a name for the bean.
-
-
- server : The name or the IP address of the RabbitMQ AMQP server.
-
-
- port : The port on which RabbitMQ server is running.
-
-
- username : The username associated with the account to access the RabbitMQ
- server.
-
-
- password : The password associated with the username of the account to access the
- RabbitMQ server.
-
-
- exchange : The exchange name on the RabbitMQ server where &PRODUCT; events are
- published.
- A sample bean is given below:
- <bean id="eventNotificationBus" class="org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus">
- <property name="name" value="eventNotificationBus"/>
- <property name="server" value="127.0.0.1"/>
- <property name="port" value="5672"/>
- <property name="username" value="guest"/>
- <property name="password" value="guest"/>
- <property name="exchange" value="cloudstack-events"/>
- </bean>
- The eventNotificationBus bean represents the
- org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus class.
-
-
-
-
- Restart the Management Server.
-
-
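
A subscriber binds a queue to the cloudstack-events exchange and decodes the messages it receives. The routing-key layout assumed below (source.category.type.resourceType.resourceUUID) is an assumption that may vary between releases; the helper is a stdlib-only sketch of the decoding step, independent of the AMQP client used:

```python
import json

def decode_event(routing_key, body):
    """Split an event routing key and merge in the JSON payload.

    Assumed key layout: source.category.type.resourceType.resourceUUID.
    Verify the field positions against the exchange traffic of your
    release before relying on them.
    """
    source, category, etype, rtype, rid = routing_key.split(".", 4)
    event = {"source": source, "category": category, "type": etype,
             "resourceType": rtype, "resourceId": rid}
    if body:
        event.update(json.loads(body))
    return event

event = decode_event(
    "management-server.ActionEvent.VM-CREATE.VirtualMachine.1234",
    '{"status": "Completed"}')
```
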
-
diff --git a/docs/en-US/event-log-queries.xml b/docs/en-US/event-log-queries.xml
deleted file mode 100644
index a0dcaa607fb..00000000000
--- a/docs/en-US/event-log-queries.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Event Log Queries
- Database logs can be queried from the user interface. The list of events captured by the system includes:
-
- Virtual machine creation, deletion, and on-going management operations
- Virtual router creation, deletion, and on-going management operations
-
- Template creation and deletion
- Network/load balancer rules creation and deletion
- Storage volume creation and deletion
- User login and logout
-
-
diff --git a/docs/en-US/event-types.xml b/docs/en-US/event-types.xml
deleted file mode 100644
index 5ce585763de..00000000000
--- a/docs/en-US/event-types.xml
+++ /dev/null
@@ -1,220 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Event Types
-
-
-
-
-
-
- VM.CREATE
- TEMPLATE.EXTRACT
- SG.REVOKE.INGRESS
-
-
- VM.DESTROY
- TEMPLATE.UPLOAD
- HOST.RECONNECT
-
-
- VM.START
- TEMPLATE.CLEANUP
- MAINT.CANCEL
-
-
- VM.STOP
- VOLUME.CREATE
- MAINT.CANCEL.PS
-
-
- VM.REBOOT
- VOLUME.DELETE
- MAINT.PREPARE
-
-
- VM.UPGRADE
- VOLUME.ATTACH
- MAINT.PREPARE.PS
-
-
- VM.RESETPASSWORD
- VOLUME.DETACH
- VPN.REMOTE.ACCESS.CREATE
-
-
- ROUTER.CREATE
- VOLUME.UPLOAD
- VPN.USER.ADD
-
-
- ROUTER.DESTROY
- SERVICEOFFERING.CREATE
- VPN.USER.REMOVE
-
-
- ROUTER.START
- SERVICEOFFERING.UPDATE
- NETWORK.RESTART
-
-
- ROUTER.STOP
- SERVICEOFFERING.DELETE
- UPLOAD.CUSTOM.CERTIFICATE
-
-
- ROUTER.REBOOT
- DOMAIN.CREATE
- UPLOAD.CUSTOM.CERTIFICATE
-
-
- ROUTER.HA
- DOMAIN.DELETE
- STATICNAT.DISABLE
-
-
- PROXY.CREATE
- DOMAIN.UPDATE
- SSVM.CREATE
-
-
- PROXY.DESTROY
- SNAPSHOT.CREATE
- SSVM.DESTROY
-
-
- PROXY.START
- SNAPSHOT.DELETE
- SSVM.START
-
-
- PROXY.STOP
- SNAPSHOTPOLICY.CREATE
- SSVM.STOP
-
-
- PROXY.REBOOT
- SNAPSHOTPOLICY.UPDATE
- SSVM.REBOOT
-
-
- PROXY.HA
- SNAPSHOTPOLICY.DELETE
- SSVM.HA
-
-
- VNC.CONNECT
- VNC.DISCONNECT
- NET.IPASSIGN
-
-
- NET.IPRELEASE
- NET.RULEADD
- NET.RULEDELETE
-
-
- NET.RULEMODIFY
- NETWORK.CREATE
- NETWORK.DELETE
-
-
- LB.ASSIGN.TO.RULE
- LB.REMOVE.FROM.RULE
- LB.CREATE
-
-
- LB.DELETE
- LB.UPDATE
- USER.LOGIN
-
-
- USER.LOGOUT
- USER.CREATE
- USER.DELETE
-
-
- USER.UPDATE
- USER.DISABLE
- TEMPLATE.CREATE
-
-
- TEMPLATE.DELETE
- TEMPLATE.UPDATE
- TEMPLATE.COPY
-
-
- TEMPLATE.DOWNLOAD.START
- TEMPLATE.DOWNLOAD.SUCCESS
- TEMPLATE.DOWNLOAD.FAILED
-
-
- ISO.CREATE
- ISO.DELETE
- ISO.COPY
-
-
- ISO.ATTACH
- ISO.DETACH
- ISO.EXTRACT
-
-
- ISO.UPLOAD
- SERVICE.OFFERING.CREATE
- SERVICE.OFFERING.EDIT
-
-
- SERVICE.OFFERING.DELETE
- DISK.OFFERING.CREATE
- DISK.OFFERING.EDIT
-
-
- DISK.OFFERING.DELETE
- NETWORK.OFFERING.CREATE
- NETWORK.OFFERING.EDIT
-
-
- NETWORK.OFFERING.DELETE
- POD.CREATE
- POD.EDIT
-
-
- POD.DELETE
- ZONE.CREATE
- ZONE.EDIT
-
-
- ZONE.DELETE
- VLAN.IP.RANGE.CREATE
- VLAN.IP.RANGE.DELETE
-
-
- CONFIGURATION.VALUE.EDIT
- SG.AUTH.INGRESS
-
-
-
-
-
-
diff --git a/docs/en-US/events-log.xml b/docs/en-US/events-log.xml
deleted file mode 100644
index fa97db45959..00000000000
--- a/docs/en-US/events-log.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Event Logs
- There are two types of events logged in the &PRODUCT; Event Log. Standard events log
- the success or failure of an event and can be used to identify jobs or processes that have
- failed. There are also long running job events. Events for asynchronous jobs log when a job
- is scheduled, when it starts, and when it completes. Other long running synchronous jobs log
- when a job starts, and when it completes. Long running synchronous and asynchronous event
- logs can be used to gain more information on the status of a pending job or can be used to
- identify a job that is hanging or has not started. The following sections provide more
- information on these events.
-
-
diff --git a/docs/en-US/events.xml b/docs/en-US/events.xml
deleted file mode 100644
index 3b93ee0451e..00000000000
--- a/docs/en-US/events.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Events
- An event is essentially a significant or meaningful change in the state of both virtual and
- physical resources associated with a cloud environment. Events are used by monitoring systems,
- usage and billing systems, or any other event-driven workflow systems to discern a pattern and
- make the right business decision. In &PRODUCT; an event could be a state change of virtual or
- physical resources, an action performed by a user (action events), or policy-based events
- (alerts).
-
-
-
-
-
-
-
diff --git a/docs/en-US/example-activedirectory-configuration.xml b/docs/en-US/example-activedirectory-configuration.xml
deleted file mode 100644
index 5a8178d5843..00000000000
--- a/docs/en-US/example-activedirectory-configuration.xml
+++ /dev/null
@@ -1,43 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Example LDAP Configuration for Active Directory
- This shows the configuration settings required for using ActiveDirectory.
-
- samAccountName - Logon name
- mail - Email Address
- cn - Real name
-
- Along with this, the ldap.user.object name needs to be modified; by default, Active Directory uses the value "user" for it.
- Map the following attributes accordingly as shown below:
-
-
-
-
-
- add-ldap-configuration-ad.png: example configuration for active directory.
-
-
-
diff --git a/docs/en-US/example-openldap-configuration.xml b/docs/en-US/example-openldap-configuration.xml
deleted file mode 100644
index aa57a00cf18..00000000000
--- a/docs/en-US/example-openldap-configuration.xml
+++ /dev/null
@@ -1,44 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Example LDAP Configuration for OpenLdap
- This shows the configuration settings required for using OpenLDAP.
- The default values supplied are suited for OpenLDAP.
-
- uid - Logon name
- mail - Email Address
- cn - Real name
-
- Along with this, the ldap.user.object name needs to be modified; by default, OpenLDAP uses the value "inetOrgPerson" for it.
- Map the following attributes accordingly as shown below within the cloudstack ldap configuration:
-
-
-
-
-
- add-ldap-configuration-openldap.png: example configuration for OpenLdap.
-
-
-
diff --git a/docs/en-US/example-response-from-listUsageRecords.xml b/docs/en-US/example-response-from-listUsageRecords.xml
deleted file mode 100644
index e0d79240e09..00000000000
--- a/docs/en-US/example-response-from-listUsageRecords.xml
+++ /dev/null
@@ -1,56 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Example response from listUsageRecords
-
- All &PRODUCT; API requests are submitted in the form of an HTTP GET/POST with an associated command and any parameters. A sample response from listUsageRecords is shown below:
-
-
- <listusagerecordsresponse>
- <count>1816</count>
- <usagerecord>
- <account>user5</account>
- <accountid>10004</accountid>
- <domainid>1</domainid>
- <zoneid>1</zoneid>
- <description>i-3-4-WC running time (ServiceOffering: 1) (Template: 3)</description>
- <usage>2.95288 Hrs</usage>
- <usagetype>1</usagetype>
- <rawusage>2.95288</rawusage>
- <virtualmachineid>4</virtualmachineid>
- <name>i-3-4-WC</name>
- <offeringid>1</offeringid>
- <templateid>3</templateid>
- <usageid>245554</usageid>
- <type>XenServer</type>
- <startdate>2009-09-15T00:00:00-0700</startdate>
- <enddate>2009-09-18T16:14:26-0700</enddate>
- </usagerecord>
-
- … (1,815 more usage records)
- </listusagerecordsresponse>
-
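
A response of this shape can be aggregated with the standard library alone. The sketch below sums rawusage for running-time records (usage type 1, as in the record shown); the truncated sample keeps only the fields the function touches:

```python
import xml.etree.ElementTree as ET

# Abridged version of the listUsageRecords response shown above.
sample = """<listusagerecordsresponse>
  <count>1816</count>
  <usagerecord>
    <account>user5</account>
    <usage>2.95288 Hrs</usage>
    <usagetype>1</usagetype>
    <rawusage>2.95288</rawusage>
  </usagerecord>
</listusagerecordsresponse>"""

def total_raw_usage(xml_body, usage_type="1"):
    """Sum rawusage hours across records of the given usage type."""
    root = ET.fromstring(xml_body)
    return sum(float(rec.findtext("rawusage"))
               for rec in root.iter("usagerecord")
               if rec.findtext("usagetype") == usage_type)

hours = total_raw_usage(sample)
```
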
-
-
diff --git a/docs/en-US/export-template.xml b/docs/en-US/export-template.xml
deleted file mode 100644
index c225e360344..00000000000
--- a/docs/en-US/export-template.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Exporting Templates
- End users and Administrators may export templates from &PRODUCT;. Navigate to the template in the UI and choose the Download function from the Actions menu.
-
-
diff --git a/docs/en-US/external-firewalls-and-load-balancers.xml b/docs/en-US/external-firewalls-and-load-balancers.xml
deleted file mode 100644
index 42ecacf9f75..00000000000
--- a/docs/en-US/external-firewalls-and-load-balancers.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- External Firewalls and Load Balancers
- &PRODUCT; is capable of replacing its Virtual Router with an external Juniper SRX device and
- an optional external NetScaler or F5 load balancer for gateway and load balancing services. In
- this case, the VMs use the SRX as their gateway.
-
-
-
-
-
-
-
diff --git a/docs/en-US/external-fw-topology-req.xml b/docs/en-US/external-fw-topology-req.xml
deleted file mode 100644
index ab81496a30a..00000000000
--- a/docs/en-US/external-fw-topology-req.xml
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- External Firewall Topology Requirements
- When external firewall integration is in place, the public IP VLAN must still be trunked to
- the Hosts. This is required to support the Secondary Storage VM and Console Proxy VM.
-
diff --git a/docs/en-US/external-guest-firewall-integration.xml b/docs/en-US/external-guest-firewall-integration.xml
deleted file mode 100644
index 0b34dca1065..00000000000
--- a/docs/en-US/external-guest-firewall-integration.xml
+++ /dev/null
@@ -1,201 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- External Guest Firewall Integration for Juniper SRX (Optional)
-
- Available only for guests using advanced networking.
-
- &PRODUCT; provides for direct management of the Juniper SRX series of firewalls. This
- enables &PRODUCT; to establish static NAT mappings from public IPs to guest VMs, and to use
- the Juniper device in place of the virtual router for firewall services. You can have one or
- more Juniper SRX per zone. This feature is optional. If Juniper integration is not provisioned,
- &PRODUCT; will use the virtual router for these services.
- The Juniper SRX can optionally be used in conjunction with an external load balancer.
- External Network elements can be deployed in a side-by-side or inline configuration.
-
-
-
-
-
- parallel-mode.png: adding a firewall and load balancer in parallel mode.
-
-
- &PRODUCT; requires the Juniper to be configured as follows:
-
- Supported SRX software version is 10.3 or higher.
-
-
-
- Install your SRX appliance according to the vendor's instructions.
-
-
- Connect one interface to the management network and one interface to the public network.
- Alternatively, you can connect the same interface to both networks and use a VLAN for the
- public network.
-
-
- Make sure "vlan-tagging" is enabled on the private interface.
-
-
- Record the public and private interface names. If you used a VLAN for the public
- interface, add a ".[VLAN TAG]" after the interface name. For example, if you are using
- ge-0/0/3 for your public interface and VLAN tag 301, your public interface name would be
- "ge-0/0/3.301". Your private interface name should always be untagged because the
- &PRODUCT; software automatically creates tagged logical interfaces.
-
-
- Create a public security zone and a private security zone. By default, these will
- already exist and will be called "untrust" and "trust". Add the public interface to the
- public zone and the private interface to the private zone. Note down the security zone
- names.
-
-
- Make sure there is a security policy from the private zone to the public zone that
- allows all traffic.
-
-
- Note the username and password of the account you want the &PRODUCT; software to log
- in to when it is programming rules.
-
-
- Make sure the "ssh" and "xnm-clear-text" system services are enabled.
-
-
- If traffic metering is desired:
-
-
- a. Create an incoming firewall filter and an outgoing firewall filter. These filters
- should be the same names as your public security zone name and private security zone
- name respectively. The filters should be set to be "interface-specific". For example,
- here is the configuration where the public zone is "untrust" and the private zone is
- "trust":
- root@cloud-srx# show firewall
-filter trust {
- interface-specific;
-}
-filter untrust {
- interface-specific;
-}
-
-
- Add the firewall filters to your public interface. For example, a sample
- configuration output (for public interface ge-0/0/3.0, public security zone untrust, and
- private security zone trust) is:
- ge-0/0/3 {
- unit 0 {
- family inet {
- filter {
- input untrust;
- output trust;
- }
- address 172.25.0.252/16;
- }
- }
-}
-
-
-
-
- Make sure all VLANs are brought to the private interface of the SRX.
-
-
- After the &PRODUCT; Management Server is installed, log in to the &PRODUCT; UI as
- administrator.
-
-
- In the left navigation bar, click Infrastructure.
-
-
- In Zones, click View More.
-
-
- Choose the zone you want to work with.
-
-
- Click the Network tab.
-
-
- In the Network Service Providers node of the diagram, click Configure. (You might have
- to scroll down to see this.)
-
-
- Click SRX.
-
-
- Click the Add New SRX button (+) and provide the following:
-
-
- IP Address: The IP address of the SRX.
-
-
- Username: The user name of the account on the SRX that &PRODUCT; should use.
-
-
- Password: The password of the account.
-
-
- Public Interface. The name of the public interface on the SRX. For example,
- ge-0/0/2. A ".x" at the end of the interface indicates the VLAN that is in use.
-
-
- Private Interface: The name of the private interface on the SRX. For example,
- ge-0/0/1.
-
-
- Usage Interface: (Optional) Typically, the public interface is used to meter
- traffic. If you want to use a different interface, specify its name here.
-
-
- Number of Retries: The number of times to attempt a command on the SRX before
- failing. The default value is 2.
-
-
- Timeout (seconds): The time to wait for a command on the SRX before considering it
- failed. Default is 300 seconds.
-
-
- Public Network: The name of the public network on the SRX. For example,
- untrust.
-
-
- Private Network: The name of the private network on the SRX. For example,
- trust.
-
-
- Capacity: The number of networks the device can handle
-
-
- Dedicated: When marked as dedicated, this device will be dedicated to a single
- account. When Dedicated is checked, the value in the Capacity field has no significance;
- implicitly, its value is 1.
-
-
-
-
- Click OK.
-
-
- Click Global Settings. Set the parameter external.network.stats.interval to indicate how
- often you want &PRODUCT; to fetch network usage statistics from the Juniper SRX. If you
- are not using the SRX to gather network usage statistics, set to 0.
-
-
-
diff --git a/docs/en-US/external-guest-lb-integration.xml b/docs/en-US/external-guest-lb-integration.xml
deleted file mode 100644
index 5760f9559e6..00000000000
--- a/docs/en-US/external-guest-lb-integration.xml
+++ /dev/null
@@ -1,109 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- External Guest Load Balancer Integration (Optional)
- &PRODUCT; can optionally use a Citrix NetScaler or BigIP F5 load balancer to provide load
- balancing services to guests. If this is not enabled, &PRODUCT; will use the software load
- balancer in the virtual router.
- To install and enable an external load balancer for &PRODUCT; management:
-
-
- Set up the appliance according to the vendor's directions.
-
-
- Connect it to the networks carrying public traffic and management traffic (these could
- be the same network).
-
-
- Record the IP address, username, password, public interface name, and private interface
- name. The interface names will be something like "1.1" or "1.2".
-
-
- Make sure that the VLANs are trunked to the management network interface.
-
-
- After the &PRODUCT; Management Server is installed, log in as administrator to the
- &PRODUCT; UI.
-
-
- In the left navigation bar, click Infrastructure.
-
-
- In Zones, click View More.
-
-
- Choose the zone you want to work with.
-
-
- Click the Network tab.
-
-
- In the Network Service Providers node of the diagram, click Configure. (You might have
- to scroll down to see this.)
-
-
- Click NetScaler or F5.
-
-
- Click the Add button (+) and provide the following:
- For NetScaler:
-
-
- IP Address: The IP address of the device.
-
-
- Username/Password: The authentication credentials to access the device. &PRODUCT;
- uses these credentials to access the device.
-
-
- Type: The type of device that is being added. It could be F5 Big Ip Load Balancer,
- NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the NetScaler types,
- see the &PRODUCT; Administration Guide.
-
-
- Public interface: Interface of device that is configured to be part of the public
- network.
-
-
- Private interface: Interface of device that is configured to be part of the private
- network.
-
-
- Number of retries. Number of times to attempt a command on the device before
- considering the operation failed. Default is 2.
-
-
- Capacity: The number of networks the device can handle.
-
-
- Dedicated: When marked as dedicated, this device will be dedicated to a single
- account. When Dedicated is checked, the value in the Capacity field has no significance;
- implicitly, its value is 1.
-
-
-
-
- Click OK.
-
-
- The installation and provisioning of the external load balancer is finished. You can proceed
- to add VMs and NAT or load balancing rules.
-
diff --git a/docs/en-US/extracting-source.xml b/docs/en-US/extracting-source.xml
deleted file mode 100644
index d1690401229..00000000000
--- a/docs/en-US/extracting-source.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Extracting source
-
- Extracting the &PRODUCT; release is relatively simple and can be done
- with a single command as follows:
- $ tar -jxvf apache-cloudstack-4.1.0.src.tar.bz2
-
-
- You can now move into the directory:
- $ cd ./apache-cloudstack-4.1.0-src
-
-
diff --git a/docs/en-US/feature-overview.xml b/docs/en-US/feature-overview.xml
deleted file mode 100644
index 57b6d84973d..00000000000
--- a/docs/en-US/feature-overview.xml
+++ /dev/null
@@ -1,81 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- What Can &PRODUCT; Do?
-
- Multiple Hypervisor Support
-
-
- &PRODUCT; works with a variety of hypervisors, and a single cloud deployment can contain multiple hypervisor implementations. The current release of &PRODUCT; supports pre-packaged enterprise solutions like Citrix XenServer and VMware vSphere, as well as KVM or Xen running on Ubuntu or CentOS.
-
-
- Massively Scalable Infrastructure Management
-
-
- &PRODUCT; can manage tens of thousands of servers installed in multiple geographically distributed datacenters. The centralized management server scales linearly, eliminating the need for intermediate cluster-level management servers. No single component failure can cause cloud-wide outage. Periodic maintenance of the management server can be performed without affecting the functioning of virtual machines running in the cloud.
-
-
- Automatic Configuration Management
-
- &PRODUCT; automatically configures each guest virtual machine’s networking and storage settings.
-
- &PRODUCT; internally manages a pool of virtual appliances to support the cloud itself. These appliances offer services such as firewalling, routing, DHCP, VPN access, console proxy, storage access, and storage replication. The extensive use of virtual appliances simplifies the installation, configuration, and ongoing management of a cloud deployment.
-
-
- Graphical User Interface
-
- &PRODUCT; offers an administrator's Web interface, used for provisioning and managing the cloud, as well as an end-user's Web interface, used for running VMs and managing VM templates. The UI can be customized to reflect the desired service provider or enterprise look and feel.
-
-
- API and Extensibility
-
-
- &PRODUCT; provides an API that gives programmatic access to all the
- management features available in the UI. The API is maintained and
- documented. This API enables the creation of command line tools and
- new user interfaces to suit particular needs. See the Developer’s
- Guide and API Reference, both available at
- Apache CloudStack Guides
- and
- Apache CloudStack API Reference
- respectively.
-
-
- The &PRODUCT; pluggable allocation architecture allows the creation
- of new types of allocators for the selection of storage and Hosts.
- See the Allocator Implementation Guide
- (http://docs.cloudstack.org/CloudStack_Documentation/Allocator_Implementation_Guide).
-
-
- High Availability
-
-
- &PRODUCT; has a number of features to increase the availability of the
- system. The Management Server itself may be deployed in a multi-node
- installation where the servers are load balanced. MySQL may be configured
- to use replication to provide for a manual failover in the event of
- database loss. For the hosts, &PRODUCT; supports NIC bonding and the use
- of separate networks for storage as well as iSCSI Multipath.
-
-
diff --git a/docs/en-US/feedback.xml b/docs/en-US/feedback.xml
deleted file mode 100644
index 4b06c9f3898..00000000000
--- a/docs/en-US/feedback.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Feedback
- to-do
-
diff --git a/docs/en-US/firewall-rules.xml b/docs/en-US/firewall-rules.xml
deleted file mode 100644
index 837a4c6f9d0..00000000000
--- a/docs/en-US/firewall-rules.xml
+++ /dev/null
@@ -1,82 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Firewall Rules
- By default, all incoming traffic to the public IP address is rejected by the firewall. To
- allow external traffic, you can open firewall ports by specifying firewall rules. You can
- optionally specify one or more CIDRs to filter the source IPs. This is useful when you want to
- allow only incoming requests from certain IP addresses.
- You cannot use firewall rules to open ports for an elastic IP address. When elastic IP is
- used, outside access is instead controlled through the use of security groups. See .
- In an advanced zone, you can also create egress firewall rules by using the virtual router.
- For more information, see .
- Firewall rules can be created using the Firewall tab in the Management Server UI. This tab
- is not displayed by default when &PRODUCT; is installed. To display the Firewall tab, the
- &PRODUCT; administrator must set the global configuration parameter firewall.rule.ui.enabled to
- "true."
- To create a firewall rule:
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- In the left navigation, choose Network.
-
-
- Click the name of the network you want to work with.
-
-
- Click View IP Addresses.
-
-
- Click the IP address you want to work with.
-
-
- Click the Configuration tab and fill in the following values.
-
-
- Source CIDR. (Optional) To accept only traffic from
- IP addresses within a particular address block, enter a CIDR or a comma-separated list
- of CIDRs. Example: 192.168.0.0/22. Leave empty to allow all CIDRs.
-
-
- Protocol. The communication protocol in use on the
- opened port(s).
-
-
- Start Port and End Port. The port(s) you want to
- open on the firewall. If you are opening a single port, use the same number in both
- fields.
-
-
- ICMP Type and ICMP Code. Used only if Protocol is
- set to ICMP. Provide the type and code required by the ICMP protocol to fill out the
- ICMP header. Refer to ICMP documentation for more details if you are not sure what to
- enter.
-
-
-
-
- Click Add.
-
-
-
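The same rule can also be created programmatically via the createFirewallRule API command. CloudStack API requests are authenticated with an HMAC-SHA1 signature computed over the sorted, lowercased query string. The sketch below illustrates that signing scheme; the API key, secret key, and IP address UUID are hypothetical placeholders:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign(params: dict, secret_key: str) -> str:
    # Sort parameters by name, URL-encode the values, lowercase the whole
    # query string, then HMAC-SHA1 it with the account's secret key.
    query = "&".join(
        f"{k}={quote(str(v), safe='')}" for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode("utf-8"), query.lower().encode("utf-8"), hashlib.sha1
    ).digest()
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical values, for illustration only.
params = {
    "command": "createFirewallRule",
    "ipaddressid": "f1b6b0c7-0000-0000-0000-000000000000",
    "protocol": "tcp",
    "startport": "80",
    "endport": "80",
    "cidrlist": "192.168.0.0/22",
    "response": "json",
    "apikey": "EXAMPLE-API-KEY",
}
signature = sign(params, "EXAMPLE-SECRET-KEY")
# The request is then sent as:
#   GET <endpoint>?<query>&signature=<url-encoded signature>
```

See the Developer's Guide for the full request format and parameter list.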
diff --git a/docs/en-US/first_ms_node_install.xml b/docs/en-US/first_ms_node_install.xml
deleted file mode 100644
index af6b35b2c53..00000000000
--- a/docs/en-US/first_ms_node_install.xml
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Install the First Management Server
-
-
-
- Ensure you have configured your machine according to
-
- or
-
- as appropriate for your platform.
-
-
-
-
- Install the &PRODUCT; management server packages by
- issuing one of the following commands as appropriate:
- # yum install cloudstack-management
- # apt-get install cloudstack-management
-
-
-
-
- (RPM-based distributions) When the installation is
- finished, run the following commands to start essential
- services:
- # service rpcbind start
-# service nfs start
-# chkconfig nfs on
-# chkconfig rpcbind on
-
-
-
-
diff --git a/docs/en-US/generic-firewall-provisions.xml b/docs/en-US/generic-firewall-provisions.xml
deleted file mode 100644
index 53ae45a09e0..00000000000
--- a/docs/en-US/generic-firewall-provisions.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Generic Firewall Provisions
- The hardware firewall is required to serve two purposes:
-
-
- Protect the Management Servers. NAT and port forwarding should be configured to direct
- traffic from the public Internet to the Management Servers.
-
-
- Route management network traffic between multiple zones. Site-to-site VPN should be
- configured between multiple zones.
-
-
- To achieve the above purposes you must set up fixed configurations for the firewall.
- Firewall rules and policies need not change as users are provisioned into the cloud. Any brand
- of hardware firewall that supports NAT and site-to-site VPN can be used.
-
diff --git a/docs/en-US/getting-release.xml b/docs/en-US/getting-release.xml
deleted file mode 100644
index 33c246f08c5..00000000000
--- a/docs/en-US/getting-release.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Getting the release
-
- You can download the latest &PRODUCT; release from the
-
- Apache CloudStack project download page.
-
- Prior releases are available via archive.apache.org as well. See the downloads page for more information on archived releases.
- You'll notice several links under the 'Latest release' section: a link to a file ending in tar.bz2, as well as PGP/GPG signature, MD5, and SHA512 files.
-
- The tar.bz2 file contains the Bzip2-compressed tarball with the source code.
- The .asc file is a detached cryptographic signature that can be used to help verify the authenticity of the release.
- The .md5 file is an MD5 hash of the release to aid in verifying the validity of the release download.
- The .sha file is a SHA512 hash of the release to aid in verifying the validity of the release download.
-
-
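The hash files can be checked with standard tools such as `gpg --verify`, `md5sum -c`, and `sha512sum -c`. As a sketch, the SHA512 digest of a downloaded tarball can also be computed in a few lines of Python (the filenames below are illustrative):

```python
import hashlib

def sha512_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA512 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the digest published in the .sha file, e.g.:
# expected = open("apache-cloudstack-4.1.0.src.tar.bz2.sha").read().split()[0]
# assert sha512_of("apache-cloudstack-4.1.0.src.tar.bz2") == expected.lower()
```

Note that the checksum files only guard against corrupted downloads; the detached `.asc` signature is what establishes authenticity.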
diff --git a/docs/en-US/global-config.xml b/docs/en-US/global-config.xml
deleted file mode 100644
index 237614d3f85..00000000000
--- a/docs/en-US/global-config.xml
+++ /dev/null
@@ -1,342 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Setting Configuration Parameters
-
- About Configuration Parameters
- &PRODUCT; provides a variety of settings you can use to set limits, configure features,
- and enable or disable features in the cloud. Once your Management Server is running, you might
- need to set some of these configuration parameters, depending on what optional features you
- are setting up. You can set default values at the global level, which will be in effect
- throughout the cloud unless you override them at a lower level. You can make local settings,
- which will override the global configuration parameter values, at the level of an account,
- zone, cluster, or primary storage.
- The documentation for each &PRODUCT; feature should direct you to the names of the
- applicable parameters. The following table shows a few of the more useful parameters.
-
-
-
-
-
-
- Field
- Value
-
-
-
-
- management.network.cidr
- A CIDR that describes the network that the management CIDRs reside on. This
- variable must be set for deployments that use vSphere. It is recommended to be set
- for other deployments as well. Example: 192.168.3.0/24.
-
-
- xen.setup.multipath
- For XenServer nodes, this is a true/false variable that instructs
- CloudStack to enable iSCSI multipath on the XenServer Hosts when they are added.
- This defaults to false. Set it to true if you would like CloudStack to enable
- multipath.
- If this is true for an NFS-based deployment, multipath will still be enabled on
- the XenServer host. However, this does not impact NFS operation and is
- harmless.
-
-
- secstorage.allowed.internal.sites
- This is used to protect your internal network from rogue attempts to
- download arbitrary files using the template download feature. This is a
- comma-separated list of CIDRs. If a requested URL matches any of these CIDRs the
- Secondary Storage VM will use the private network interface to fetch the URL. Other
- URLs will go through the public interface. We suggest you set this to 1 or 2
- hardened internal machines where you keep your templates. For example, set it to
- 192.168.1.66/32.
-
-
- use.local.storage
- Determines whether CloudStack will use storage that is local to the Host
- for data disks, templates, and snapshots. By default CloudStack will not use this
- storage. You should change this to true if you want to use local storage and you
- understand the reliability and feature drawbacks to choosing local
- storage.
-
-
- host
- This is the IP address of the Management Server. If you are using multiple
- Management Servers you should enter a load balanced IP address that is reachable via
- the private network.
-
-
- default.page.size
- Maximum number of items per page that can be returned by a CloudStack API
- command. The limit applies at the cloud level and can vary from cloud to cloud. You
- can override this with a lower value on a particular API call by using the page and
- page size API command parameters. For more information, see the Developer's Guide.
- Default: 500.
-
-
- ha.tag
- The label you want to use throughout the cloud to designate certain hosts
- as dedicated HA hosts. These hosts will be used only for HA-enabled VMs that are
- restarting due to the failure of another host. For example, you could set this to
- ha_host. Specify the ha.tag value as a host tag when you add a new host to the
- cloud.
-
-
-
-
-
-
- Setting Global Configuration Parameters
- Use the following steps to set global configuration parameters. These values will be the
- defaults in effect throughout your &PRODUCT; deployment.
-
-
- Log in to the UI as administrator.
-
-
- In the left navigation bar, click Global Settings.
-
-
- In Select View, choose one of the following:
-
-
- Global Settings. This displays a list of the parameters with brief descriptions
- and current values.
-
-
- Hypervisor Capabilities. This displays a list of hypervisor versions with the
- maximum number of guests supported for each.
-
-
-
-
- Use the search box to narrow down the list to those you are interested in.
-
-
- In the Actions column, click the Edit icon to modify a value. If you are viewing
- Hypervisor Capabilities, you must click the name of the hypervisor first to display the
- editing screen.
-
-
-
-
- Setting Local Configuration Parameters
- Use the following steps to set local configuration parameters for an account, zone,
- cluster, or primary storage. These values will override the global configuration
- settings.
-
-
- Log in to the UI as administrator.
-
-
- In the left navigation bar, click Infrastructure or Accounts, depending on where you
- want to set a value.
-
-
- Find the name of the particular resource that you want to work with. For example, if
- you are in Infrastructure, click View All on the Zones, Clusters, or Primary Storage
- area.
-
-
- Click the name of the resource where you want to set a limit.
-
-
- Click the Settings tab.
-
-
- Use the search box to narrow down the list to those you are interested in.
-
-
- In the Actions column, click the Edit icon to modify a value.
-
-
-
-
- Granular Global Configuration Parameters
- The following global configuration parameters have been made more granular. The parameters
- are listed under three different scopes: account, cluster, and zone.
-
-
-
-
-
-
-
- Scope
- Field
- Value
-
-
-
-
- account
- remote.access.vpn.client.iprange
- The range of IPs to be allocated to remote access VPN clients. The
- first IP in the range is used by the VPN server.
-
-
- account
- allow.public.user.templates
- If false, users will not be able to create public templates.
-
-
- account
- use.system.public.ips
- If true and if an account has one or more dedicated public IP ranges, IPs
- are acquired from the system pool after all the IPs dedicated to the account have
- been consumed.
-
-
- account
- use.system.guest.vlans
- If true and if an account has one or more dedicated guest VLAN ranges,
- VLANs are allocated from the system pool after all the VLANs dedicated to the
- account have been consumed.
-
-
- cluster
- cluster.storage.allocated.capacity.notificationthreshold
- The percentage, as a value between 0 and 1, of allocated storage utilization above which
- alerts are sent that the storage is below the threshold.
-
-
- cluster
- cluster.storage.capacity.notificationthreshold
- The percentage, as a value between 0 and 1, of storage utilization above which alerts are sent
- that the available storage is below the threshold.
-
-
- cluster
- cluster.cpu.allocated.capacity.notificationthreshold
- The percentage, as a value between 0 and 1, of cpu utilization above which alerts are sent
- that the available CPU is below the threshold.
-
-
- cluster
- cluster.memory.allocated.capacity.notificationthreshold
- The percentage, as a value between 0 and 1, of memory utilization above which alerts are sent
- that the available memory is below the threshold.
-
-
- cluster
- cluster.cpu.allocated.capacity.disablethreshold
- The percentage, as a value between 0 and 1, of CPU utilization above which allocators will
- disable that cluster from further usage. Keep the corresponding notification
- threshold lower than this value to be notified beforehand.
-
-
- cluster
- cluster.memory.allocated.capacity.disablethreshold
- The percentage, as a value between 0 and 1, of memory utilization above which allocators will
- disable that cluster from further usage. Keep the corresponding notification
- threshold lower than this value to be notified beforehand.
-
-
- cluster
- cpu.overprovisioning.factor
- Used for CPU over-provisioning calculation; the available CPU will be the mathematical product
- of actualCpuCapacity and cpu.overprovisioning.factor.
-
-
- cluster
- mem.overprovisioning.factor
- Used for memory over-provisioning calculation.
-
-
- cluster
- vmware.reserve.cpu
- Specify whether or not to reserve CPU when not over-provisioning; in case of CPU
- over-provisioning, CPU is always reserved.
-
-
- cluster
- vmware.reserve.mem
- Specify whether or not to reserve memory when not over-provisioning; in case of memory
- over-provisioning, memory is always reserved.
-
-
- zone
- pool.storage.allocated.capacity.disablethreshold
- The percentage, as a value between 0 and 1, of allocated storage utilization above which
- allocators will disable that pool because the available allocated storage is below
- the threshold.
-
-
- zone
- pool.storage.capacity.disablethreshold
- The percentage, as a value between 0 and 1, of storage utilization above which allocators will
- disable the pool because the available storage capacity is below the
- threshold.
-
-
- zone
- storage.overprovisioning.factor
- Used for storage over-provisioning calculation; available storage will be the mathematical
- product of actualStorageSize and storage.overprovisioning.factor.
-
-
- zone
- network.throttling.rate
- Default data transfer rate in megabits per second allowed in a network.
-
-
- zone
- guest.domain.suffix
- Default domain name for VMs inside a virtual network with a router.
-
-
- zone
- router.template.xen
- Name of the default router template on Xenserver.
-
-
- zone
- router.template.kvm
- Name of the default router template on KVM.
-
-
- zone
- router.template.vmware
- Name of the default router template on VMware.
-
-
- zone
- enable.dynamic.scale.vm
- Enable or disable dynamic scaling of a VM.
-
-
- zone
- use.external.dns
- Bypass the internal DNS, and use the external DNS1 and DNS2.
-
-
- zone
- blacklisted.routes
- Routes that are blacklisted cannot be used for creating static routes for a VPC Private
- Gateway.
-
-
-
-
-
-
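The over-provisioning factors and disable thresholds in the table above combine with simple arithmetic. A small worked sketch, using hypothetical capacity numbers:

```python
# With cpu.overprovisioning.factor = 4, a cluster whose hosts total
# 16 GHz of physical CPU is treated as having 64 GHz available.
actual_cpu_ghz = 16.0
cpu_overprovisioning_factor = 4.0
available_cpu_ghz = actual_cpu_ghz * cpu_overprovisioning_factor

# cluster.cpu.allocated.capacity.disablethreshold = 0.85 means allocators
# stop placing new VMs once 85% of that figure has been allocated.
disable_threshold = 0.85
allocated_ghz = 56.0
cluster_usable = (allocated_ghz / available_cpu_ghz) < disable_threshold
```

Here 56 GHz allocated out of 64 GHz available is 87.5% utilization, above the 85% threshold, so the cluster would be skipped for further placement.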
diff --git a/docs/en-US/globally-configured-limits.xml b/docs/en-US/globally-configured-limits.xml
deleted file mode 100644
index ac71112b310..00000000000
--- a/docs/en-US/globally-configured-limits.xml
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Globally Configured Limits
- In a zone, the guest virtual network has a /24 CIDR by default. This limits the guest virtual network to 254 running instances. The CIDR can be adjusted as needed, but this must be done before any instances are created in the zone. For example, 10.1.0.0/22 would provide for ~1000 addresses.
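The address arithmetic above can be checked with Python's ipaddress module (example prefixes only; in practice a few additional addresses are consumed by system appliances such as the virtual router):

```python
import ipaddress

# A /24 guest network leaves 254 host addresses once the network
# and broadcast addresses are excluded.
net24 = ipaddress.ip_network("10.1.1.0/24")
usable24 = net24.num_addresses - 2  # 254

# Widening the prefix to /22 raises the ceiling to ~1000 addresses.
net22 = ipaddress.ip_network("10.1.0.0/22")
usable22 = net22.num_addresses - 2  # 1022
```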
- The following table lists limits set in the Global Configuration:
-
-
-
-
- Parameter Name
- Definition
-
-
-
-
-
- max.account.public.ips
- Number of public IP addresses that can be owned by an account
-
-
-
- max.account.snapshots
- Number of snapshots that can exist for an account
-
-
-
-
- max.account.templates
- Number of templates that can exist for an account
-
-
-
- max.account.user.vms
- Number of virtual machine instances that can exist for an account
-
-
-
- max.account.volumes
- Number of disk volumes that can exist for an account
-
-
-
- max.template.iso.size
- Maximum size for a downloaded template or ISO in GB
-
-
-
- max.volume.size.gb
- Maximum size for a volume in GB
-
-
- network.throttling.rate
- Default data transfer rate in megabits per second allowed per user (supported on XenServer)
-
-
- snapshot.max.hourly
- Maximum recurring hourly snapshots to be retained for a volume. If the limit is reached, early snapshots from the start of the hour are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring hourly snapshots can not be scheduled
-
-
-
- snapshot.max.daily
- Maximum recurring daily snapshots to be retained for a volume. If the limit is reached, snapshots from the start of the day are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring daily snapshots can not be scheduled
-
-
- snapshot.max.weekly
- Maximum recurring weekly snapshots to be retained for a volume. If the limit is reached, snapshots from the beginning of the week are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring weekly snapshots can not be scheduled
-
-
-
- snapshot.max.monthly
- Maximum recurring monthly snapshots to be retained for a volume. If the limit is reached, snapshots from the beginning of the month are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring monthly snapshots can not be scheduled.
-
-
-
-
- To modify global configuration parameters, use the global configuration screen in the &PRODUCT; UI. See Setting Global Configuration Parameters.
-
diff --git a/docs/en-US/gslb.xml b/docs/en-US/gslb.xml
deleted file mode 100644
index 968e8e2cefa..00000000000
--- a/docs/en-US/gslb.xml
+++ /dev/null
@@ -1,487 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Global Server Load Balancing Support
- &PRODUCT; supports Global Server Load Balancing (GSLB) to provide business
- continuity and enable seamless resource movement within a &PRODUCT; environment. &PRODUCT;
- achieves this by extending its integration with the NetScaler Application Delivery
- Controller (ADC), which provides various GSLB capabilities, such as disaster recovery and
- load balancing. The DNS redirection technique is used to achieve GSLB in &PRODUCT;.
- To support this functionality, region-level services and a region-level service
- provider are introduced. A new service, 'GSLB', is introduced as a region-level service,
- and a GSLB service provider is introduced to provide it. Currently, NetScaler is the
- supported GSLB provider in &PRODUCT;. GSLB functionality works in an Active-Active data center
- environment.
-
- About Global Server Load Balancing
- Global Server Load Balancing (GSLB) is an extension of load balancing functionality, which
- is highly efficient in avoiding downtime. Based on the nature of deployment, GSLB represents a
- set of technologies that is used for various purposes, such as load sharing, disaster
- recovery, performance, and legal obligations. With GSLB, workloads can be distributed across
- multiple data centers situated at geographically separated locations. GSLB can also provide an
- alternate location for accessing a resource in the event of a failure, or to provide a means
- of shifting traffic easily to simplify maintenance, or both.
-
- Components of GSLB
- A typical GSLB environment is comprised of the following components:
-
-
- GSLB Site: In &PRODUCT; terminology, GSLB sites are
- represented by zones that are mapped to data centers, each of which has various network
- appliances. Each GSLB site is managed by a NetScaler appliance that is local to that
- site. Each of these appliances treats its own site as the local site and all other
- sites, managed by other appliances, as remote sites. It is the central entity in a GSLB
- deployment, and is represented by a name and an IP address.
-
-
- GSLB Services: A GSLB service is typically
- represented by a load balancing or content switching virtual server. In a GSLB
- environment, you can have local as well as remote GSLB services. A local GSLB service
- represents a local load balancing or content switching virtual server. A remote GSLB
- service is the one configured at one of the other sites in the GSLB setup. At each site
- in the GSLB setup, you can create one local GSLB service and any number of remote GSLB
- services.
-
-
- GSLB Virtual Servers: A GSLB virtual server refers
- to one or more GSLB services and balances traffic across the VMs in
- multiple zones by using the &PRODUCT; functionality. It evaluates the configured GSLB
- methods or algorithms to select a GSLB service to which to send the client requests. One
- or more virtual servers from different zones are bound to the GSLB virtual server. A GSLB
- virtual server does not have a public IP associated with it; instead, it has a DNS FQDN.
-
-
- Load Balancing or Content Switching Virtual
- Servers: According to Citrix NetScaler terminology, a load balancing or
- content switching virtual server represents one or many servers on the local network.
- Clients send their requests to the load balancing or content switching virtual server’s
- virtual IP (VIP) address, and the virtual server balances the load across the local
- servers. After a GSLB virtual server selects a GSLB service representing either a local
- or a remote load balancing or content switching virtual server, the client sends the
- request to that virtual server’s VIP address.
-
-
- DNS VIPs: DNS virtual IP represents a load
- balancing DNS virtual server on the GSLB service provider. The DNS requests for domains
- for which the GSLB service provider is authoritative can be sent to a DNS VIP.
-
-
- Authoritative DNS: ADNS (Authoritative Domain Name
- Server) is a service that provides the actual answer to DNS queries, such as a web
- site's IP address. In a GSLB environment, an ADNS service responds only to DNS requests
- for domains for which the GSLB service provider is authoritative. When an ADNS service is
- configured, the service provider owns that IP address and advertises it. When you create
- an ADNS service, the NetScaler responds to DNS queries on the configured ADNS service IP
- and port.
-
-
-
-
- How Does GSLB Work in &PRODUCT;?
- Global server load balancing is used to manage the traffic flow to a web site hosted on
- two separate zones that ideally are in different geographic locations. The following is an
- illustration of how GSLB functionality is provided in &PRODUCT;: An organization, xyztelco,
- has set up a public cloud that spans two zones, Zone-1 and Zone-2, across geographically
- separated data centers that are managed by &PRODUCT;. Tenant-A of the cloud launches a
- highly available solution by using xyztelco cloud. For that purpose, they launch two
- instances each in both the zones: VM1 and VM2 in Zone-1 and VM5 and VM6 in Zone-2. Tenant-A
- acquires a public IP, IP-1 in Zone-1, and configures a load balancer rule to load balance
- the traffic between VM1 and VM2 instances. &PRODUCT; orchestrates setting up a virtual
- server on the LB service provider in Zone-1. Virtual server 1 that is set up on the LB
- service provider in Zone-1 represents a publicly accessible virtual server that client
- reaches at IP-1. The client traffic to virtual server 1 at IP-1 will be load balanced across
- VM1 and VM2 instances.
- Tenant-A acquires another public IP, IP-2 in Zone-2 and sets up a load balancer rule to
- load balance the traffic between VM5 and VM6 instances. Similarly in Zone-2, &PRODUCT;
- orchestrates setting up a virtual server on the LB service provider. Virtual server 2 that
- is setup on the LB service provider in Zone-2 represents a publicly accessible virtual
- server that client reaches at IP-2. The client traffic that reaches virtual server 2 at IP-2
- is load balanced across VM5 and VM6 instances. At this point Tenant-A has the service
- enabled in both the zones, but has no means to set up a disaster recovery plan if one of the
- zones fails. Additionally, there is no way for Tenant-A to load balance the traffic
- intelligently to one of the zones based on load, proximity and so on. The cloud
- administrator of xyztelco provisions a GSLB service provider to both the zones. A GSLB
- provider is typically an ADC that has the ability to act as an ADNS (Authoritative Domain
- Name Server) and has the mechanism to monitor health of virtual servers both at local and
- remote sites. The cloud admin enables GSLB as a service to the tenants that use zones 1 and
- 2.
-
-
-
-
-
- gslb.png: GSLB architecture
-
-
- Tenant-A wishes to leverage the GSLB service provided by the xyztelco cloud. Tenant-A
- configures a GSLB rule to load balance traffic across virtual server 1 at Zone-1 and virtual
- server 2 at Zone-2. The domain name is provided as A.xyztelco.com. &PRODUCT; orchestrates
- setting up GSLB virtual server 1 on the GSLB service provider at Zone-1. &PRODUCT; binds
- virtual server 1 of Zone-1 and virtual server 2 of Zone-2 to GSLB virtual server 1. GSLB
- virtual server 1 is configured to start monitoring the health of virtual server 1 and 2 in
- Zone-1. &PRODUCT; will also orchestrate setting up GSLB virtual server 2 on GSLB service
- provider at Zone-2. &PRODUCT; will bind virtual server 1 of Zone-1 and virtual server 2 of
- Zone-2 to GSLB virtual server 2. GSLB virtual server 2 is configured to start monitoring the
- health of virtual server 1 and 2. &PRODUCT; will bind the domain A.xyztelco.com to both the
- GSLB virtual server 1 and 2. At this point, Tenant-A service will be globally reachable at
- A.xyztelco.com. The private DNS server for the domain xyztelco.com is configured by the
- admin out-of-band to resolve the domain A.xyztelco.com to the GSLB providers at both the
- zones, which are configured as ADNS for the domain A.xyztelco.com. When a client sends a DNS
- request to resolve A.xyztelco.com, it will eventually get a DNS delegation to the addresses
- of the GSLB providers at zones 1 and 2. A client DNS request will be received by the GSLB provider.
- The GSLB provider, depending on the domain for which it needs to resolve, will pick up the
- GSLB virtual server associated with the domain. Depending on the health of the virtual
- servers being load balanced, DNS request for the domain will be resolved to the public IP
- associated with the selected virtual server.
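
The resolution flow above can be sketched as follows. This is an illustrative model only, assuming the provider keeps a per-domain table of zone-level virtual servers and their monitored health, with round robin as the GSLB method; the class and function names are hypothetical, not CloudStack or NetScaler code.

```python
# Illustrative model of GSLB DNS resolution (hypothetical names, not
# CloudStack/NetScaler code): pick a healthy zone-level virtual server
# for a domain using round robin and answer with its public IP.
from dataclasses import dataclass

@dataclass
class VirtualServer:
    zone: str
    public_ip: str
    healthy: bool = True

def resolve(domain, table, counters):
    """Answer a DNS query for a GSLB domain with a healthy server's IP."""
    servers = [s for s in table.get(domain, []) if s.healthy]
    if not servers:
        return None  # no zone can currently serve traffic
    i = counters.get(domain, 0)
    counters[domain] = i + 1
    return servers[i % len(servers)].public_ip
```

If the virtual server in Zone-1 is marked unhealthy by monitoring, subsequent answers fall back to Zone-2's public IP, which is the disaster recovery behaviour described above.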
-
-
-
- Configuring GSLB
- To configure a GSLB deployment, you must first configure a standard load balancing setup
- for each zone. This enables you to balance load across the different servers in each zone in
- the region. Then on the NetScaler side, configure both NetScaler appliances that you plan to
- add to each zone as authoritative DNS (ADNS) servers. Next, create a GSLB site for each zone,
- configure GSLB virtual servers for each site, create GSLB services, and bind the GSLB services
- to the GSLB virtual servers. Finally, bind the domain to the GSLB virtual servers. The GSLB
- configurations on the two appliances at the two different zones are identical, although each
- site's load-balancing configuration is specific to that site.
- Perform the following as a cloud administrator. As per the example given above, the
- administrator of xyztelco is the one who sets up GSLB:
-
-
- In the cloud.dns.name global parameter, specify the DNS name of your tenant's cloud
- that makes use of the GSLB service.
-
-
- On the NetScaler side, configure GSLB as given in Configuring Global Server Load Balancing (GSLB):
-
-
- Configuring a standard load balancing setup.
-
-
- Configure Authoritative DNS, as explained in Configuring an Authoritative DNS Service.
-
-
- Configure a GSLB site with the site name formed from the domain name details.
- As per the example given above, the site names are A.xyztelco.com and
- B.xyztelco.com.
- For more information, see Configuring a Basic GSLB Site.
-
-
- Configure a GSLB virtual server.
- For more information, see Configuring a GSLB Virtual Server.
-
-
- Configure a GSLB service for each virtual server.
- For more information, see Configuring a GSLB Service.
-
-
- Bind the GSLB services to the GSLB virtual server.
- For more information, see Binding GSLB Services to a GSLB Virtual Server.
-
-
- Bind the domain name to the GSLB virtual server. The domain name is obtained from the
- domain details.
- For more information, see Binding a Domain to a GSLB Virtual Server.
-
-
-
-
- In each zone that participates in GSLB, add a GSLB-enabled NetScaler device.
- For more information, see .
-
-
- As a domain administrator or user, perform the following:
-
-
- Add a GSLB rule on both the sites.
- See .
-
-
- Assign load balancer rules.
- See .
-
-
-
- Prerequisites and Guidelines
-
-
- The GSLB functionality is supported in both Basic and Advanced zones.
-
-
- GSLB is added as a new network service.
-
-
- GSLB service provider can be added to a physical network in a zone.
-
-
- The admin is allowed to enable or disable GSLB functionality at region level.
-
-
- The admin is allowed to configure a zone as GSLB capable or enabled.
- A zone is considered GSLB capable only if a GSLB service provider is
- provisioned in the zone.
-
-
- When users have VMs deployed in multiple availability zones which are GSLB enabled,
- they can use the GSLB functionality to load balance traffic across the VMs in multiple
- zones.
-
-
- The users can use GSLB to load balance traffic across VMs in the zones of a region only
- if the admin has enabled GSLB in that region.
-
-
- The users can load balance traffic across the availability zones in the same region
- or different regions.
-
-
- The admin can configure DNS name for the entire cloud.
-
-
- The users can specify a unique name across the cloud for a globally load balanced
- service. The provided name is used as the domain name under the DNS name associated with
- the cloud.
- The user-provided name along with the admin-provided DNS name is used to produce a
- globally resolvable FQDN for the globally load balanced service of the user. For
- example, if the admin has configured xyztelco.com as the DNS name for the cloud, and
- user specifies 'foo' for the GSLB virtual service, then the FQDN name of the GSLB
- virtual service is foo.xyztelco.com.
-
-
- While setting up GSLB, users can select a load balancing method, such as round
- robin, to use across the zones that are part of GSLB.
-
-
- The user shall be able to set a weight on each zone-level virtual server. The weight shall be
- considered by the load balancing method for distributing the traffic.
-
-
- The GSLB functionality shall support session persistence, where a series of client
- requests for a particular domain name is sent to a virtual server in the same zone.
- Statistics are collected from each GSLB virtual server.
-
-
-
-
- Enabling GSLB in NetScaler
- In each zone, add a GSLB-enabled NetScaler device for load balancing.
-
-
- Log in as administrator to the &PRODUCT; UI.
-
-
- In the left navigation bar, click Infrastructure.
-
-
- In Zones, click View More.
-
-
- Choose the zone you want to work with.
-
-
- Click the Physical Network tab, then click the name of the physical network.
-
-
- In the Network Service Providers node of the diagram, click Configure.
- You might have to scroll down to see this.
-
-
- Click NetScaler.
-
-
- Click Add NetScaler device and provide the following:
- For NetScaler:
-
-
- IP Address: The IP address of the NetScaler device.
-
-
- Username/Password: The authentication
- credentials to access the device. &PRODUCT; uses these credentials to access the
- device.
-
-
- Type: The type of device that is being added.
- It could be NetScaler VPX, NetScaler MPX, or NetScaler SDX.
- For a comparison of the NetScaler types, see the &PRODUCT; Administration
- Guide.
-
-
- Public interface: Interface of device that is
- configured to be part of the public network.
-
-
- Private interface: Interface of device that is
- configured to be part of the private network.
-
-
- GSLB service: Select this option.
-
-
- GSLB service Public IP: The public IP address
- of the NAT translator for a GSLB service that is on a private network.
-
-
- GSLB service Private IP: The private IP of the
- GSLB service.
-
-
- Number of Retries. Number of times to attempt a
- command on the device before considering the operation failed. Default is 2.
-
-
- Capacity: The number of networks the device can
- handle.
-
-
- Dedicated: When marked as dedicated, this
- device will be dedicated to a single account. When Dedicated is checked, the value
- in the Capacity field has no significance; implicitly, its value is 1.
-
-
-
-
- Click OK.
-
-
-
-
- Adding a GSLB Rule
-
-
- Log in to the &PRODUCT; UI as a domain administrator or user.
-
-
- In the left navigation pane, click Region.
-
-
- Select the region for which you want to create a GSLB rule.
-
-
- In the Details tab, click View GSLB.
-
-
- Click Add GSLB.
- The Add GSLB page is displayed as follows:
-
-
-
-
-
- gslb-add.png: adding a gslb rule
-
-
-
-
- Specify the following:
-
-
- Name: Name for the GSLB rule.
-
-
- Description: (Optional) A short description of
- the GSLB rule that can be displayed to users.
-
-
- GSLB Domain Name: A preferred domain name for
- the service.
-
-
- Algorithm: (Optional) The algorithm to use to
- load balance the traffic across the zones. The options are Round Robin, Least
- Connection, and Proximity.
-
-
- Service Type: The transport protocol to use for
- GSLB. The options are TCP and UDP.
-
-
- Domain: (Optional) The domain for which you
- want to create the GSLB rule.
-
-
- Account: (Optional) The account on which you
- want to apply the GSLB rule.
-
-
-
-
- Click OK to confirm.
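
The same rule can be created through the API. The sketch below only builds the request URL for the createGlobalLoadBalancerRule command; the endpoint and values are placeholders, real requests must also carry your API key and signature, and parameter names may vary between &PRODUCT; releases.

```python
# Sketch: build (but do not sign) a createGlobalLoadBalancerRule request.
# Endpoint and values are placeholders; a real call needs apiKey/signature.
from urllib.parse import urlencode

def build_add_gslb_url(endpoint, name, region_id, gslb_domain,
                       service_type="tcp", algorithm="roundrobin"):
    params = {
        "command": "createGlobalLoadBalancerRule",
        "name": name,                     # Name field in the dialog
        "regionid": region_id,            # region selected in the UI
        "gslbdomainname": gslb_domain,    # GSLB Domain Name field
        "gslbservicetype": service_type,  # tcp or udp
        "gslbmethod": algorithm,          # roundrobin/leastconn/proximity
    }
    return f"{endpoint}?{urlencode(params)}"
```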
-
-
-
-
- Assigning Load Balancing Rules to GSLB
-
-
- Log in to the &PRODUCT; UI as a domain administrator or user.
-
-
- In the left navigation pane, click Region.
-
-
- Select the region for which you want to create a GSLB rule.
-
-
- In the Details tab, click View GSLB.
-
-
- Select the desired GSLB.
-
-
- Click view assigned load balancing.
-
-
- Click assign more load balancing.
-
-
- Select the load balancing rule you have created for the zone.
-
-
- Click OK to confirm.
-
-
-
-
-
- Known Limitation
- Currently, &PRODUCT; does not support orchestration of services across the zones. The
- notion of services and service providers at the region level is yet to be introduced.
-
-
diff --git a/docs/en-US/gsoc-dharmesh.xml b/docs/en-US/gsoc-dharmesh.xml
deleted file mode 100644
index 01a77c70ab0..00000000000
--- a/docs/en-US/gsoc-dharmesh.xml
+++ /dev/null
@@ -1,149 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Dharmesh's 2013 GSoC Proposal
- This chapter describes Dharmesh's 2013 Google Summer of Code project within the &PRODUCT; ASF project. It is a copy-paste of the submitted proposal.
-
- Abstract
-
- The project aims to bring a CloudFormation-like service to cloudstack. One of the prime use-cases is cluster computing frameworks on cloudstack. A cloudformation service will give users and administrators of cloudstack the ability to manage and control a set of resources easily. The cloudformation service will allow booting and configuring a set of VMs that form a cluster. A simple example would be a LAMP stack. More complex clusters such as mesos or hadoop clusters require a little more advanced configuration. There is already some work done by Chiradeep Vittal on this front [5]. In this project, I will implement a server-side cloudformation service for cloudstack and demonstrate how to run a mesos cluster using it.
-
-
-
-
- Mesos
-
- Mesos is a resource management platform for clusters. It aims to increase resource utilization of clusters by sharing cluster resources among multiple processing frameworks (like MapReduce, MPI, graph processing) or multiple instances of the same framework. It provides efficient resource isolation through the use of containers, and uses ZooKeeper for state maintenance and fault tolerance.
-
-
-
-
- What can run on mesos?
-
- Spark: A cluster computing framework based on the Resilient Distributed Datasets (RDDs) abstraction. RDD is more generalized than MapReduce and can support iterative and interactive computation while retaining fault tolerance, scalability, data locality, etc.
-
- Hadoop: A fault-tolerant and scalable distributed computing framework based on the MapReduce abstraction.
-
- Begel: A graph processing framework based on Pregel.
-
- and other frameworks like MPI and Hypertable.
-
-
-
- How to deploy mesos?
-
- Mesos provides cluster installation scripts for cluster deployment. There are also scripts available to deploy a cluster on Amazon EC2. It would be interesting to see if these scripts can be leveraged in any way.
-
-
-
- Deliverables
-
-
- Deploy CloudStack and understand instance configuration/contextualization
-
-
- Test and deploy Mesos on a set of CloudStack based VM, manually. Design/propose an automation framework
-
-
- Test stackmate and engage chiradeep (report bugs, make suggestion, make pull request)
-
-
- Create cloudformation template to provision a Mesos Cluster
-
-
- Compare with Apache Whirr or other cluster provisioning tools for server side implementation of cloudformation service.
-
-
-
-
-
- Architecture and Tools
-
- The high level architecture is as follows:
-
-
-
-
-
-
-
-
-
-
- It includes the following components:
-
-
-
- CloudFormation Query API server:
- This acts as the point of contact for, and exposes, CloudFormation functionality as a Query API. It can be accessed directly or through existing tools from Amazon AWS for their cloudformation service. It will be easy to start as a module which resides outside cloudstack at first, and I plan to use dropwizard [3] to start with. Later, the API server may be merged with the cloudstack core. I plan to use mysql for storing details of clusters.
-
-
-
- Provisioning:
-
- The provisioning module is responsible for handling the booting process of the VMs through cloudstack. It uses the cloudstack APIs for launching VMs. I plan to use preconfigured templates/images with the required dependencies installed, which will make the cluster creation process much faster, even for large clusters. Error handling is a very important part of this module. For example, what do you do if a few VMs fail to boot in the cluster?
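
As a thought experiment for that error handling question, a provisioning loop might retry failed launches a bounded number of times and report what could not be booted. The sketch below is hypothetical: launch_vm stands in for a cloudstack deployVirtualMachine call and is not a real API.

```python
# Hypothetical provisioning sketch: launch_vm stands in for a cloudstack
# deployVirtualMachine call and returns None on failure. Each VM gets a
# bounded number of retries; unrecoverable indices are reported back.
def provision_cluster(launch_vm, n_vms, max_retries=2):
    vms, failed = [], []
    for i in range(n_vms):
        for _attempt in range(max_retries + 1):
            vm = launch_vm(i)
            if vm is not None:
                vms.append(vm)
                break
        else:  # all retries exhausted for this VM
            failed.append(i)
    return vms, failed
```

The caller can then decide whether a partially booted cluster is usable or must be torn down.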
-
-
-
- Configuration:
-
- This module deals with configuring the VMs to form a cluster. This can be done via manual scripts/code or via configuration management tools like chef/ironfan/knife. Potentially, workflow automation tools like rundeck [4] can also be used. Apache Whirr and Provisionr are options as well. I plan to explore these tools and select suitable ones.
-
-
-
-
-
-
- API
-
- The Query API will be based on the Amazon AWS cloudformation service. This will allow leveraging existing tools for AWS.
-
-
-
- Timeline
- 1-1.5 weeks : project design. Architecture, tools selection, API design
- 1-1.5 weeks : getting familiar with cloudstack and stackmate codebase and architecture details
- 1-1.5 weeks : getting familiar with mesos internals
- 1-1.5 weeks : setting up the dev environment and creating mesos templates
- 2-3 weeks : build provisioning and configuration modules
- Midterm evaluation: provisioning module, configuration module
- 2-3 weeks : develop cloudformation server side implementation
- 2-3 weeks : test and integrate
-
-
-
- Future Work
-
-
- Auto Scaling:
- Automatically adding or removing VMs from the mesos cluster based on various conditions, like utilization going above/below a static threshold. There can be more sophisticated strategies based on prediction, or on fine-grained metric collection with tight integration with the mesos framework.
-
-
- Cluster Simulator:
- Integrating with the existing simulator to simulate mesos clusters. This can be useful in various scenarios, for example while developing a new scheduling algorithm, or when testing autoscaling.
-
-
-
-
diff --git a/docs/en-US/gsoc-imduffy15.xml b/docs/en-US/gsoc-imduffy15.xml
deleted file mode 100644
index f78cb540704..00000000000
--- a/docs/en-US/gsoc-imduffy15.xml
+++ /dev/null
@@ -1,395 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Ian's 2013 GSoC Proposal
- This chapter describes Ian's 2013 Google Summer of Code project within the &PRODUCT; ASF project. It is a copy-paste of the submitted proposal.
-
- LDAP user provisioning
-
- "Need to automate the way the LDAP users are provisioned into cloud stack. This will mean better
- integration with a LDAP server, ability to import users and a way to define how the LDAP user
- maps to the cloudstack users."
-
-
-
- Abstract
-
- The aim of this project is to provide a more effective mechanism to provision users from LDAP
- into cloudstack. Currently, cloudstack enables LDAP authentication. In this authentication, users
- must first be set up in cloudstack. Once a user is set up in cloudstack, they can authenticate
- using their LDAP username and password. This project will improve cloudstack LDAP integration
- by enabling users to be set up automatically using their LDAP credentials
-
-
-
- Deliverables
-
-
- Service that retrieves a list of LDAP users from a configured group
-
-
- Extension of the cloudstack UI "Add User" screen to offer a user list from LDAP
-
-
- A service for saving a new user with their details from LDAP
-
-
- BDD unit and acceptance automated testing
-
-
- Document change details
-
-
-
-
- Quantifiable Results
-
-
-
-
- Given
- An administrator wants to add a new user to cloudstack and LDAP is set up in cloudstack
-
-
- When
- The administrator opens the "Add User" screen
-
-
- Then
- A table appears listing the current users (not already created on cloudstack) from the LDAP group, displaying their usernames, given names and email addresses. The timezone dropdown will still be available beside each user
-
-
-
-
-
-
-
-
-
- Given
- An administrator wants to add a new user to cloudstack and LDAP is not set up in cloudstack
-
-
- When
- The administrator opens the "Add User" screen
-
-
- Then
- The current add user screen and functionality is provided
-
-
-
-
-
-
-
-
-
- Given
- An administrator wants to add a new user to cloudstack and LDAP is set up in cloudstack
-
-
- When
- The administrator opens the "Add User" screen and mandatory information is missing
-
-
- Then
- These fields will be editable to enable you to populate the name or email address
-
-
-
-
-
-
-
-
-
- Given
- An administrator wants to add a new user to cloudstack, LDAP is set up and the user being created is in the LDAP query group
-
-
- When
- The administrator opens the "Add User" screen
-
-
- Then
- A list of LDAP users is displayed, and the user is present in the list
-
-
-
-
-
-
-
-
-
- Given
- An administrator wants to add a new user to cloudstack, LDAP is set up and the user is not in the query group
-
-
- When
- The administrator opens the "Add User" screen
-
-
- Then
- A list of LDAP users is displayed, and the user is not in the list
-
-
-
-
-
-
-
-
-
- Given
- An administrator wants to add a group of new users to cloudstack
-
-
- When
- The administrator opens the "Add User" screen, selects the users and hits save
-
-
- Then
- The list of new users is saved to the database
-
-
-
-
-
-
-
-
-
- Given
- An administrator has created a new LDAP user on cloudstack
-
-
- When
- The user authenticates against cloudstack with the right credentials
-
-
- Then
- They are authorised in cloudstack
-
-
-
-
-
-
-
-
-
- Given
- A user wants to edit an LDAP user
-
-
- When
- They open the "Edit User" screen
-
-
- Then
- The password fields are disabled and cannot be changed
-
-
-
-
-
-
-
- The Design Document
-
-
- LDAP user list service
-
-
-
- name: ldapUserList
-
-
- responseObject: LDAPUserResponse {username,email,name}
-
-
- parameter: listType: enum {NEW, EXISTING, ALL} (defaults to ALL if no option provided)
-
-
- Create a new API service call for retrieving the list of users from LDAP. This will call a new
- ConfigurationService which will retrieve the list of users using the configured search base and the query
- filter. The list may be filtered in the ConfigurationService based on the listType parameter
-
-
-
- LDAP Available Service
-
-
-
- name: ldapAvailable
-
-
- responseObject: LDAPAvailableResponse {available: boolean}
-
-
- Create a new API service call verifying LDAP is set up correctly, by checking that the following configuration elements are all set:
-
-
- ldap.hostname
-
-
- ldap.port
-
-
- ldap.usessl
-
-
- ldap.queryfilter
-
-
- ldap.searchbase
-
-
- ldap.dn
-
-
- ldap.password
-
-
-
-
-
- LDAP Save Users Service
-
-
-
- name: ldapSaveUsers
-
-
- responseObject: LDAPSaveUsersResponse {list&lt;UserResponse&gt;}
-
-
- parameter: list of users
-
-
- Saves the given list of users. Following the functionality in CreateUserCmd, it will
-
-
- Create the user via the account service
-
-
- Handle the response
-
-
- It will be decided whether a transaction should span the whole save or only individual users. A list of UserResponse objects will be returned.
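
A minimal sketch of the listType filtering described for the ldapUserList service, assuming the service compares LDAP usernames against accounts already present in cloudstack; the class and function names below are illustrative, not actual cloudstack code.

```python
# Illustrative listType filtering for the proposed ldapUserList call.
# LdapUser and filter_users are made-up names, not cloudstack classes.
from dataclasses import dataclass
from enum import Enum

class ListType(Enum):
    NEW = "new"            # in LDAP but not yet in cloudstack
    EXISTING = "existing"  # in LDAP and already in cloudstack
    ALL = "all"            # default when no option is provided

@dataclass(frozen=True)
class LdapUser:
    username: str
    email: str
    name: str

def filter_users(ldap_users, existing_usernames, list_type=ListType.ALL):
    """Filter LDAP users against accounts already present in cloudstack."""
    if list_type is ListType.NEW:
        return [u for u in ldap_users if u.username not in existing_usernames]
    if list_type is ListType.EXISTING:
        return [u for u in ldap_users if u.username in existing_usernames]
    return list(ldap_users)
```

The "Add User" screen described above would show the NEW subset, since those are the LDAP users not already created on cloudstack.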
-
-
-
- Extension of cloudstack UI "Add User" screen
-
-
-
- Extend account.js to enable adding a list of users, with editable fields where required. The new "add user" screen for an LDAP setup will:
-
-
- Make an ajax call to the ldapAvailable, ldapUserList and ldapSaveUsers services
-
-
- Validate on username, email, firstname and lastname
-
-
-
-
-
- Extension of cloudstack UI "Edit User" screen
-
-
-
- Extend account.js to disable the password fields on the edit user screen if LDAP available, specifically:
-
-
- Make an ajax call to the ldapAvailable, ldapUserList and ldapSaveUsers services
-
-
- Validate on username, email, firstname and lastname. Additional server-side validation will ensure the password has not changed
-
-
-
-
-
- Approach
-
- To get started, a development cloudstack environment will be created, with DevCloud used to verify changes. Once the schedule is agreed with the mentor, the deliverables will be broken into small user stories with expected delivery dates set. The development cycle will focus on BDD, ensuring all unit and acceptance tests are written first.
-
-
- A build pipeline for a continuous delivery environment around cloudstack will be implemented; the following stages will be adopted:
-
-
-
-
-
- Stage
- Action
-
-
-
-
- Commit
- Run unit tests
-
-
- Sonar
- Runs code quality metrics
-
-
- Acceptance
- Deploys the devcloud and runs all acceptance tests
-
-
- Deployment
- Deploy a new management server using Chef
-
-
-
-
-
-
- About me
-
- I am a Computer Science Student at Dublin City University in Ireland. I have interests in virtualization,
-automation, information systems, networking and web development.
-
-
- I was involved with a project in a K-12 (educational) environment, moving their server systems over
-to a virtualized environment on ESXi. I have good knowledge of programming in Java, PHP and
-scripting languages. During the configuration of an automation system for OS deployment I gained
-some exposure to scripting in powershell, batch, vbs and bash, and to configuring PXE images based
-on WinPE and Debian.
-Additionally, I am also a mentor in an open-source teaching movement called CoderDojo; we teach kids
-from the age of 8 everything from web pages to HTML5 games and raspberry pi development. It's really
-cool.
-
-
- I’m excited at the opportunity and learning experience that cloudstack is offering with this project.
-
-
-
diff --git a/docs/en-US/gsoc-meng.xml b/docs/en-US/gsoc-meng.xml
deleted file mode 100644
index 8ea2b4cfda7..00000000000
--- a/docs/en-US/gsoc-meng.xml
+++ /dev/null
@@ -1,235 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Meng's 2013 GSoC Proposal
- This chapter describes Meng's 2013 Google Summer of Code project within the &PRODUCT; ASF project. It is a copy paste of the submitted proposal.
-
- Project Description
-
- Getting a hadoop cluster going can be challenging and painful due to the tedious configuration phase and the diverse idiosyncrasies of each cloud provider. Apache Whirr [1] and Provisionr are sets of libraries for running cloud services in an automatic or semi-automatic fashion. They take advantage of a cloud-neutral library called jclouds [2] to create one-click, auto-configuring hadoop clusters on multiple clouds. Since jclouds supports the CloudStack API, most of the services provided by Whirr and Provisionr should work out of the box on CloudStack. My first task is to test that assumption, make sure everything is well documented, and correct all issues with the latest versions of CloudStack (4.0 and 4.1).
-
-
-
-The biggest challenge for hadoop provisioning is automatically configuring each instance at launch time based on what it is supposed to do, a process known as contextualization[3][4]. It causes last minute changes inside an instance to adapt to a cluster environment. Many automated cloud services are enabled by contextualization. For example in one-click hadoop clusters, contextualization basically amounts to generating and distributing ssh key pairs among instances, telling an instance where the master node is and what other slave nodes it should be aware of, etc. On EC2 contextualization is done via passing information through the EC2_USER_DATA entry[5][6]. Whirr and Provisionr embrace this feature to provision hadoop instances on EC2. My second task is to test and extend Whirr and Provisionr’s one-click solution on EC2 to CloudStack and also improve CloudStack’s support for Whirr and Provisionr to enable hadoop provisioning on CloudStack based clouds.
-
-
-My third task is to add a Query API that is compatible with Amazon Elastic MapReduce (EMR) to CloudStack. Through this API, all hadoop provisioning functionality will be exposed and users can reuse cloud clients that are written for EMR to create and manage hadoop clusters on CloudStack based clouds.
-
-
-
-
- Project Details
-
- Whirr defines four roles for the hadoop provisioning service: NameNode, JobTracker, DataNode and TaskTracker. With the help of CloudInit [7] (a popular package for cloud instance initialization), each VM instance is configured based on its role and a compressed file that is passed in the EC2_USER_DATA entry. Since CloudStack also supports EC2_USER_DATA, I think the most feasible way to have hadoop provisioning on CloudStack is to extend Whirr’s solution on EC2 to the CloudStack platform and to make the necessary adjustments based on CloudStack’s specifics.
-
-
-
- Whirr and Provisionr deal with two critical issues in their role configuration scripts (configure-hadoop-role_list): SSH key authentication and hostname configuration.
-
-
-
- SSH Key Authentication. SSH key based authentication is required so that the master node can log in to slave nodes to start/stop hadoop daemons. Also, each node needs to log in to itself to start its own hadoop daemons. Traditionally this is done by generating a key pair on the master node and distributing the public key to all slave nodes; this can only be done with human intervention. Whirr works around this problem on EC2 by having a common key pair for all nodes in a hadoop cluster, so every node is able to log in to one another. The key pair is provided by users and obtained by CloudInit inside an instance from the metadata service. As far as I know, CloudStack does not support user-provided ssh key authentication. Although CloudStack has the createSSHKeyPair API [8] to generate SSH keys, and users can create an instance template that supports SSH keys, there is no easy way to have a unified SSH key on all cluster instances. Besides, Whirr prefers minimal image management, so having a customized template doesn’t seem to fit here.
-
-
- Hostname configuration. The hostname of each instance has to be properly set and injected into the set of hadoop config files (core-site.xml, hdfs-site.xml, mapred-site.xml). For an EC2 instance, its hostname is derived from a combination of its public IP and an EC2-specific prefix/suffix (e.g. an instance with IP 54.224.206.71 will have its hostname set to ec2-54-224-206-71.compute-1.amazonaws.com). This hostname amounts to the Fully Qualified Domain Name that uniquely identifies this node on the network. As for CloudStack, if users do not specify a name, the hostname that identifies a VM on a network will be a unique UUID generated by CloudStack [9].
-
-
-
-
-
-
- These two are the main issues that need support improvement on the CloudStack side. Other things like preparing disks, installing hadoop tarballs and starting hadoop daemons can be easily done as they are relatively role/instance-independent and static. Runurl can be used to simplify user-data scripts.
-
-
-
-
-
- After we achieve hadoop provisioning on CloudStack using Whirr, we can go further and add a Query API to CloudStack to expose this functionality. I will write an API that is compatible with the Amazon Elastic MapReduce Service (EMR)[10], so that users can reuse clients written for EMR to submit jobs to existing hadoop clusters, poll job status, terminate a hadoop instance and perform other operations on CloudStack based clouds. The EMR API currently supports eight actions[11]; I will try to implement as many as I can during the period of GSoC. The following statements give some examples of the API that I will write.
-
-
-
-This will launch a new hadoop cluster with four instances using specified instance types and add a job flow to it.
-
-
-
-This will add a step to the existing job flow with ID j-3UN6WX5RRO2AG. This step will run the specified jar file.
-
-
-
-This will return the status of the given job flow.
-
-
-
-
- Roadmap
-
- Jun. 17 ∼ Jun. 30
-
-
- Learn CloudStack and Apache Whirr/Provisionr APIs; Deploy a CloudStack cluster.
-
-
-
- Identify how EC2_USER_DATA is passed and executed on each CloudStack instance.
-
-
- Figure out how the files passed in EC2_USER_DATA are acted upon by CloudInit.
-
-
- Identify files in /etc/init/ that are used or modified by Whirr and Provisionr for hadoop related configuration.
-
-
- Deploy a hadoop cluster on CloudStack via Whirr/Provisionr. This is to test what is missing in CloudStack or Whirr/Provisionr in terms of their support for each other.
-
-
- Jul. 1 ∼ Aug. 1
-
-
- Write scripts to configure VM hostname on CloudStack with the help of CloudInit;
-
-
- Write scripts to distribute SSH keys among CloudStack instances. Add the capability of using user-provided ssh key for authentication to CloudStack.
-
-
- Take care of the other things left for hadoop provisioning, such as mounting disks, installing hadoop tarballs, etc.
-
-
- Compose the files that need to be passed in EC2_USER_DATA to each CloudStack instance. Test these files and write patches to make sure that Whirr/Provisionr can successfully deploy one-click hadoop clusters on CloudStack.
-
-
- Aug. 3 ∼ Sep. 8
-
-
- Design and build an Elastic MapReduce API for CloudStack that takes control of hadoop cluster creation and management.
-
-
- Implement the eight actions defined in EMR API. This task might take a while.
-
-
-
- Sep. 10 ∼ Sep. 23
-
-
-
- Code cleaning and documentation wrap up.
-
-
-
-
-
-
-
-
-
- Deliverables
-
-
-
- Whirr has limited support for CloudStack. Check what’s missing and make sure all steps are properly documented on the Whirr and CloudStack websites.
-
-
- Contribute code to CloudStack and send patches to Whirr/Provisionr if necessary to enable hadoop provisioning on CloudStack via Whirr/Provisionr.
-
-
- Build an EMR-compatible API for CloudStack.
-
-
-
-
- Nice to have
- In addition to the required deliverables, it’s nice to have the following:
-
-
-
- The capability to add and remove hadoop nodes dynamically to enable elastic hadoop clusters on CloudStack.
-
-
-
- A review of the existing tools that offer one-click provisioning, making sure that they support CloudStack based clouds.
-
-
-
-
-
- References
-
-
-
-
- http://whirr.apache.org/
-
-
- http://www.jclouds.org/documentation/gettingstarted/what-is-jclouds/
-
-
- Katarzyna Keahey, Tim Freeman, Contextualization: Providing One-Click Virtual Clusters
-
-
- http://www.nimbusproject.org/docs/current/clouds/clusters2.html
-
-
- http://aws.amazon.com/amazon-linux-ami/
-
-
- https://svn.apache.org/repos/asf/whirr/branches/contrib-python/src/py/hadoop/cloud/data/hadoop-ec2-init-remote.sh
-
-
- https://help.ubuntu.com/community/CloudInit
-
-
- http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/using-sshkeys.html
-
-
- https://cwiki.apache.org/CLOUDSTACK/allow-user-provided-hostname-internal-vm-name-on-hypervisor-instead-of-cloud-platform-auto-generated-name-for-guest-vms.html
-
-
-http://docs.aws.amazon.com/ElasticMapReduce/latest/API/Welcome.html
-
-
- http://docs.aws.amazon.com/ElasticMapReduce/latest/API/API_Operations.html
-
-
- http://buildacloud.org/blog/235-puppet-and-cloudstack.html
-
-
-http://chriskleban-internet.blogspot.com/2012/03/build-cloud-cloudstack-instance.html
-
-
- http://gehrcke.de/2009/06/aws-about-api/
-
-
- Apache_CloudStack-4.0.0-incubating-API_Developers_Guide-en-US.pdf
-
-
-
-
-
-
diff --git a/docs/en-US/gsoc-midsummer-dharmesh.xml b/docs/en-US/gsoc-midsummer-dharmesh.xml
deleted file mode 100644
index 9e0fdcfec07..00000000000
--- a/docs/en-US/gsoc-midsummer-dharmesh.xml
+++ /dev/null
@@ -1,193 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Dharmesh's Mid-Summer Progress Updates
- This section describes Dharmesh's progress on project "Integration project to deploy and use Mesos on a CloudStack based cloud"
-
-
- Introduction
-
- I am lagging a little behind the timeline of my project. After the community bonding period, I explored several things. My mentor Sebastien, along with several others from the community, has been really helpful. Alongside my GSoC project I took up the task of resolving CLOUDSTACK-212, and it has been a wonderful experience. I am putting my best effort into completing the Mesos integration as described in my proposal.
-
-
-
-
- CLOUDSTACK-212 "Switch java package structure from com.cloud to org.apache"
-
- CLOUDSTACK-212 (https://issues.apache.org/jira/browse/CLOUDSTACK-212) is about migrating the old com.cloud package structure to the new org.apache one, to reflect the project's move to the Apache Software Foundation.
-
-
- Rohit had taken the initiative and had already refactored the cloud-api project to the new package. When I looked at this bug, I thought it was a pretty straightforward task. I was not quite correct.
-
-
- I used Eclipse's refactoring capabilities for most of the refactoring: context-menu -> Refactor -> Rename, with the "update references", "variable/method names" and "textual references" check-boxes checked. I disabled the autobuild option as suggested, and also disabled the CVS plugins, since, as the Eclipse community warns, their indexing interferes with long refactorings and leaves garbled code. Even with these precautions, I noticed that Eclipse was messing up some of the imports, and especially bean names in XML files. After correcting them manually, I got many test case failures. Upon investigation, I found that the errors were caused by the resource folders of the test cases. In short, I learned a lot.
-
-
- Because of active development on the master branch, new merges kept landing in the window between creating a master-rebased patch, testing and submitting it, and a committer checking its applicability, so the patch kept failing. After several such attempt cycles, it became clear that this was not a good approach.
- So after discussion with senior members of the community, a separate branch "namespacechanges" was created and I applied all the code refactoring there. One of the committers, Dave, will then cherry-pick the changes to master while other merges are frozen. I submitted the patch as planned on the 19th and it is currently being reviewed.
-
-
- One of the great advantages of working on this bug was that I got a much better understanding of the CloudStack codebase. My understanding of unit testing with maven has also become much clearer.
-
-
-
-
- Mesos integration with cloudstack
- There are multiple ways of implementing the project. I have explored the following options, with their specific pros and cons.
-
-
-
- Shell script to boot and configure mesos
- The idea is to write a shell script to automate all the steps involved in running Mesos over CloudStack. This is a very flexible option, as we have the full power of the shell.
-
-
- create security groups for master, slave and zookeeper.
-
-
- get latest AMI number and get the image.
-
-
- create device mapping
-
-
- launch slave
-
-
- launch master
-
-
- launch zookeeper
-
-
- wait for instances to come up
-
-
- ssh-copy-ids
-
-
- rsync
-
-
- run mesos setup script
-
-
-
- Since a shell script already exists within the Mesos codebase to create and configure a Mesos cluster on AWS, the idea is to use the same script and make use of the cloudstack-aws API. I am currently testing this script.
- Following are the steps:
-
-
- enable aws-api on cloudstack.
-
-
- create AMI or template with required dependencies.
-
-
- download mesos.
-
-
- configure boto environment to use with cloudstack
-
-
- run mesos-aws script.
-
-
-
- Pros:
-
- Since the script is part of the Mesos codebase, it will keep being updated to work in the future as well.
-
-
-
-
-
-
- WHIRR-121 "Creating Whirr service for mesos"
- Whirr provides a common API to deploy services to various clouds. Currently, it is highly hadoop centric. Tom White had done some work in the Whirr community, but it has not been updated for quite a long time.
-
- Pros:
-
- Leverage Whirr API and tools.
-
-
-
- Cons:
-
- Dependence on yet another tool.
-
-
-
-
-
- Creating a cloudformation template for mesos
- The idea is to use the AWS CloudFormation APIs/functions, so that it can be used with any CloudFormation tools. Within CloudStack, the StackMate project is implementing a CloudFormation service.
-
- Pros:
-
- Leverage all the available tools for AWS CloudFormation and StackMate
-
-
- Potentially can be used on multiple clouds.
-
-
-
- Cons:
-
- We have to stay within the limits of the AWS CloudFormation API, and otherwise have to use user-data to pass "shell commands", which will not be a maintainable solution in the long term.
-
-
-
-
-
-
-
- Conclusion
-
- I am very happy with the kind of things I have learned so far with the project. This includes:
-
-
-
- Advanced git commands
-
-
- Exposure to a very large code base
-
-
- Hidden features, methods and bugs of Eclipse that are useful when refactoring large projects
-
-
- How unit testing works, especially with mvn
-
-
- How to evaluate the pros and cons of multiple options for achieving the same functionality
-
-
- Writing a blog
-
-
-
- The experience gained from this project is invaluable and it is great that the Google Summer Of Code program exists.
-
-
-
diff --git a/docs/en-US/gsoc-midsummer-ian.xml b/docs/en-US/gsoc-midsummer-ian.xml
deleted file mode 100644
index 1f65e2d309c..00000000000
--- a/docs/en-US/gsoc-midsummer-ian.xml
+++ /dev/null
@@ -1,344 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Mid-Summer Progress Updates for Ian Duffy - "Ldap User Provisioning"
- This section describes my progress with the project titled "LDAP User Provisioning".
-
- Introduction
-
- Progress on my project is moving along smoothly. The CloudStack community, along with my mentor Abhi, has been very accommodating. Since the community bonding period, communication has been consistent and the expectations have been clear. Sebastien, the head mentor, has given us great guidance, and I have enjoyed their teaching style: a nice gradual build-up, starting with a simple documentation patch and eventually leading to submitting a new CloudStack plugin.
-
-
- I am pleased with my progress on the project to date. I feel that the goals set out in my proposal are very achievable.
-
-
-
- Continuous Integration with Jenkins
-
- In order to deliver working solutions of good quality, I felt it would be a good idea to set up a continuous integration environment using Jenkins to automatically build and test my code. This was welcomed, and community members helped greatly.
-
-
-
-
-
-
- jenkins-pipeline.png: Screenshot of the build pipeline.
-
-
-
- The key stages of the pipeline are as follows:
-
-
-
-
- Acquire Code Base - This pulls down the latest CloudStack codebase and builds it, executing all unit tests.
-
-
-
-
- Static Analysis - This runs checks on my code to ensure quality and good practice. This is achieved with Sonar.
-
-
-
-
- Integration Tests - This deploys the CloudStack database, brings up the CloudStack management server with jetty and the simulator, runs all checkin/integration tests, and then shuts the jetty server down.
-
-
-
-
- Package (only exists on my local Jenkins) - The codebase is packaged into an RPM and placed in a local yum repo. If time allows, this will be used for future automated acceptance testing.
-
-
-
-
- If you are interested in this, I have created a screencast on YouTube which walks through it: Continuous testing environment
-
-
-
- Ldap Plugin implementation
-
- At the start of the coding stage I began by reviewing the current LDAP implementation. This included:
-
-
-
-
- The user authenticator - Enables LDAP users to login to Cloudstack once the user exists within the internal Cloudstack database.
-
-
-
-
- LDAPConfig - Adds the LDAP configuration; this is detailed in the ldapConfig API reference. It did not allow multiple configurations.
-
-
-
-
- LDAPRemove - Removes the LDAP configuration
-
-
-
-
- UI features. Global settings -> LDAP configuration allowed for the addition of a single LDAP server using the LDAPConfig command and the removal of an LDAP server using the LDAPRemove command.
-
-
-
-
- After reviewing this code and implementation for some time, I discovered that it was not very maintainable. I realised I could extend it if required, but doing so would create more unmaintainable, messy code, which goes against my goal of delivering quality. I therefore decided, justifiably I think, to completely redo the LDAP implementation within CloudStack. By doing this I expanded the scope of the project.
-
-
- I began to research the most appropriate way of structuring this, and started off by redoing the implementation. This meant creating the following classes (excluding DAOs):
-
-
-
-
- LdapManager - Manages all LDAP connections.
-
-
-
-
- LdapConfiguration - Supplies all configuration from within the Cloudstack database or defaults where required.
-
-
-
-
- LdapUserManager - Handles any interaction with LDAP user information.
-
-
-
-
- LdapUtils - Supplies static helpers, e.g. escape search queries, get attributes from search queries.
-
-
-
-
- LdapContextFactory - Manages the creation of contexts.
-
-
-
-
- LdapAuthenticator - Supplies an authenticator to Cloudstack using the LdapManager.
-
-
-
-
- From this I felt I had a solid foundation for creating API commands to allow the user to interact with an LDAP server. I went on to create the following commands:
-
-
-
-
- LdapAddConfiguration - This allows for adding multiple LDAP configurations. Each configuration is just seen as a hostname and port.
-
-
-
-
-
-
- add-ldap-configuration.png: Screenshot of API response.
-
-
-
-
-
-
-
- add-ldap-configuration-failure.png: Screenshot of API response.
-
-
-
-
-
- LdapDeleteConfiguration - This allows for the deletion of an LDAP configuration based on its hostname.
-
-
-
-
-
-
- delete-ldap-configuration.png: Screenshot of API response.
-
-
-
-
-
-
-
- delete-ldap-configuration-failure.png: Screenshot of API response.
-
-
-
-
-
- LdapListConfiguration - This lists all of the LDAP configurations that exist within the database.
-
-
-
-
-
-
- list-ldap-configuration.png: Screenshot of API response.
-
-
-
-
-
- LdapListAllUsers - This lists all the users within LDAP.
-
-
-
-
-
-
- ldap-list-users.png: Screenshot of API response.
-
-
-
-
-
- Along with this, several global settings were added:
-
-
-
-
- LDAP basedn - Sets the basedn for the LDAP configuration
-
-
-
-
- LDAP bind password - Sets the password to use when binding to LDAP to create the system context. If this and the bind principal are left blank, anonymous binding is used.
-
-
-
-
- LDAP bind principal - Sets the principal to use when binding to LDAP to create the system context. If this and the bind password are left blank, anonymous binding is used.
-
-
-
-
- LDAP email attribute - Sets the attribute to use for getting the user's email address. Within both OpenLDAP and ActiveDirectory this is mail, so it is set to mail by default.
-
-
-
-
- LDAP firstname attribute - Sets the attribute to use for getting the user's first name. Within both OpenLDAP and ActiveDirectory this is givenname, so it is set to givenname by default.
-
-
-
-
- LDAP lastname attribute - Sets the attribute to use for getting the user's last name. Within both OpenLDAP and ActiveDirectory this is sn, so it is set to sn by default.
-
-
-
-
- LDAP username attribute - Sets the attribute to use for getting the user's username. Within OpenLDAP this is uid, and within ActiveDirectory it is samAccountName. In order to comply with POSIX standards, this is set to uid by default.
-
-
-
-
- LDAP user object - Sets the object type of user accounts within LDAP. Within OpenLDAP this is inetOrgPerson and within ActiveDirectory it is user. Again, to comply with POSIX standards, this is set to inetOrgPerson by default.
-
-
-
-
- I believe this implementation allows for a much more extensible and flexible approach. The whole implementation is abstracted from the CloudStack codebase using the "plugin" model, which keeps all of the LDAP features in one place and supplies a good foundation. A side effect of redoing the implementation was that I could add support for multiple LDAP servers, which means failover is supported: for example, with a standard ActiveDirectory setup with primary and secondary domain controllers, both can be added to CloudStack, which can then fail over to either one should the other go down.
-
-
- The API changes required me to update the UI within CloudStack. With the improved API implementation this was easier. The Global Settings -> Ldap Configuration page supports multiple LDAP servers but only requires a hostname and port; all "global" LDAP settings are set on the global settings page.
-
-
-
-
-
-
- ldap-global-settings.png: Screenshot the LDAP related settings within global settings.
-
-
-
-
-
-
-
- ldap-configuration.png: Screenshot of the LDAP configuration page.
-
-
-
-
- Add accounts UI
-
- Extending the UI to allow for easy provisioning of LDAP users is currently a work in progress. At the moment I have a 'working' implementation, see the screenshot below. I need some assistance with it and am waiting for my review request to be looked at.
-
-
-
-
-
-
- ldap-account-addition.png: Screenshot of add user screen when LDAP is enabled.
-
-
-
-
- Testing
-
- Unit tests have 92% code coverage within the LDAP plugin. The unit tests were written in Groovy using the Spock framework, which allowed me to adopt a BDD style of testing.
-
-
- Integration tests have been written in Python using the Marvin test framework for CloudStack. The test configures an LDAP server and attempts to log in as an LDAP user. The plugin comes with an embedded LDAP server for testing purposes.
-
- Execute integration tests:
- nosetests --with-marvin --marvin-config=setup/dev/local.cfg test/integration/component/test_ldap.py --loa
- Start embedded LDAP server:
- mvn -pl :cloud-plugin-user-authenticator-ldap ldap:run
-
-
- Conclusion
-
- I am very pleased with the learning outcomes of this project so far. I have been exposed to many things that my college's computer science curriculum does not cover. This includes:
-
-
-
- Usage of source control management tools (git) and dealing with code collaboration
-
-
- Usage of a dependency manager and build tool (maven)
-
-
- Usage of continuous testing environments (Jenkins)
-
-
- Usage of an IDE (Eclipse)
-
-
- Exposure to testing, both unit and integration tests
-
-
- Exposure to a dynamic JVM language (Groovy)
-
-
- Exposure to web development libraries (jQuery)
-
-
-
- The experience gained from this project is invaluable and it is great that the Google Summer Of Code program exists.
-
-
-
diff --git a/docs/en-US/gsoc-midsummer-meng.xml b/docs/en-US/gsoc-midsummer-meng.xml
deleted file mode 100644
index ee24cf4a990..00000000000
--- a/docs/en-US/gsoc-midsummer-meng.xml
+++ /dev/null
@@ -1,216 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Mid-Summer Progress Updates for Meng - "Hadoop Provisioning on Cloudstack Via Whirr"
-
- In this section I describe my progress with the project titled "Hadoop Provisioning on CloudStack Via Whirr"
-
- Introduction
-
- It has been five weeks since GSoC 2013 kicked off. During these five weeks I have been constantly learning from the CloudStack community, in terms of both knowledge and personal growth. The whole community is very accommodating and willing to help newbies, and I am making steady progress with its help. This is my first experience working with such a large and cool code base, definitely a challenging and wonderful experience for me. Though I have slipped a little behind my schedule, I am making my best effort and hope to complete what I set out in my proposal by the end of this summer.
-
-
-
-
-
-
- CloudStack Installation
-
- I spent two weeks or so on the CloudStack installation. In the beginning I used Ubuntu, but given that I was not familiar with maven and a little scared by the various errors and exceptions during deployment, I failed to deploy CloudStack by building from source. On Ian's advice, I switched to CentOS and installed from RPM packages, and things went much more smoothly. By the end of the second week, I submitted my first patch -- CloudStack_4.1_Quick_Install_Guide.
-
-
-
-
- Deploying a Hadoop Cluster on CloudStack via Whirr
-
- With CloudStack in place, and able to register templates and add instances, I went ahead and used Whirr to deploy a hadoop cluster on CloudStack. The cluster definition file is as follows:
-
-
-
-
-
-
-
-whirr.cluster-name: the name of your hadoop cluster.
-whirr.store-cluster-in-etc-hosts: store all cluster IPs and hostnames in /etc/hosts on each node.
- whirr.instance-templates: this specifies your cluster layout. One node acts as the jobtracker and namenode (the hadoop master); another two slave nodes act as both datanode and tasktracker.
- image-id: this tells CloudStack which template to use to start the cluster.
- hardware-id: this is the type of hardware to use for the cluster instances.
-
- private/public-key-file: the key pair used to log in to each instance. Only RSA SSH keys are supported at the moment. Jclouds will push this key pair to the set of instances on startup.
- whirr.cluster-user: the name of the cluster admin user.
- whirr.bootstrap-user: this tells Jclouds which user name and password to use to log in to each instance for bootstrapping and customization. You must specify this property if the image you choose has a hardwired username/password (e.g. the default template CentOS 5.5 (64-bit) no GUI (KVM) that comes with CloudStack has the hardcoded credential root:password); otherwise you do not need to set it.
- whirr.env.repo: this tells Whirr which repository to use to download packages.
- whirr.hadoop.install-function/whirr.hadoop.configure-function: self-explanatory.
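-Putting the properties above together, a cluster definition file might look like the following sketch (all IDs, paths and function names are placeholders to be replaced with values from your own cloud; the layout string follows the description above):

```properties
# Hypothetical hadoop.properties for Whirr on CloudStack
whirr.cluster-name=myhadoopcluster
whirr.store-cluster-in-etc-hosts=true
# 1 master (jobtracker+namenode), 2 slaves (datanode+tasktracker)
whirr.instance-templates=1 hadoop-jobtracker+hadoop-namenode,2 hadoop-datanode+hadoop-tasktracker
whirr.image-id=<template-uuid>
whirr.hardware-id=<service-offering-uuid>
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
whirr.public-key-file=${sys:user.home}/.ssh/id_rsa.pub
whirr.cluster-user=whirr
# needed only if the template has a hardwired credential
whirr.bootstrap-user=root:password
whirr.env.repo=<package-repo>
whirr.hadoop.install-function=<install-function>
whirr.hadoop.configure-function=<configure-function>
```

-The exact property values vary between Whirr releases, so the ones above should be checked against the Whirr documentation for the version in use.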
-
-
-
-
- Output of this deployment is as follows:
-
-
-
-
-
-
-
-
-
- Other details can be found at this post in my blog. In addition I have a Whirr trouble shooting post there if you are interested.
-
-
-
- Elastic Map Reduce(EMR) Plugin Implementation
-
- Having completed the deployment of a hadoop cluster on CloudStack using Whirr through the above steps, I began to dive into the EMR plugin development. My first API is launchHadoopCluster; its implementation is quite straightforward: it invokes an external Whirr command on the management server's command line and piggybacks the Whirr output in the response. The API has the structure shown below:
-
-
-
-
-
-The following is the source code of launchHadoopClusterCmd.java.
-
-
-
-
-
-
- You can invoke this api through the following command in CloudMonkey:
- > launchHadoopCluster config=myhadoop.properties
-
-This is sort of launchHadoopCluster version 0.0; other details can be found in this post.
-
-My ongoing work is modifying this API so that it calls the Whirr libraries instead of invoking Whirr externally on the command line.
-First, add Whirr as a dependency of this plugin so that maven will download Whirr automatically when the plugin is compiled.
-
-
-
-
-
-
-
-I am planning to replace the Runtime.getRuntime().exec() call above with the following code snippet.
-
- LaunchClusterCommand command = new LaunchClusterCommand();
- command.run(System.in, System.out, System.err, Arrays.asList(args));
-
-
-Eventually, when a hadoop cluster is launched, we can use Yarn to submit hadoop jobs.
-Yarn exposes the following API for job submission:
-ApplicationId submitApplication(ApplicationSubmissionContext appContext) throws org.apache.hadoop.yarn.exceptions.YarnRemoteException
-In Yarn, an application is either a single job in the classical Map-Reduce sense or a DAG of jobs; in other words, an application can contain many jobs. This fits well with the concepts in the EMR design: the term job flow in EMR is equivalent to the application concept in Yarn, and correspondingly a job flow step in EMR is equal to a job in Yarn. In addition, Yarn exposes the following API to query the state of an application:
-ApplicationReport getApplicationReport(ApplicationId appId) throws org.apache.hadoop.yarn.exceptions.YarnRemoteException
-The above API can be used to implement the DescribeJobFlows API in EMR.
-
-
-
-
-
-
- Learning Jclouds
-As Whirr relies on Jclouds for cloud provisioning, it is important for me to understand which Jclouds features support Whirr and how Whirr interacts with Jclouds. I worked through the following questions:
-
-How does Whirr create user credentials on each node?
-
-Using the runScript feature provided by Jclouds, Whirr can execute a script at node boot-up; one of the options for the script is to override the login credentials with the ones provided in the cluster properties file. The following line from Whirr demonstrates this idea:
-final RunScriptOptions options = overrideLoginCredentials(LoginCredentials.builder().user(clusterSpec.getClusterUser()).privateKey(clusterSpec.getPrivateKey()).build());
-
-
-
-How does Whirr start up instances in the beginning?
-The computeService APIs provided by jclouds allow Whirr to create a set of nodes in a group (specified by the cluster name) and operate on them as a logical unit, without worrying about the implementation details of the cloud.
-Set<NodeMetadata> nodes = (Set<NodeMetadata>)computeService.createNodesInGroup(clusterName, num, template);
-The above command returns all the nodes the API was able to launch, in a running state with port 22 open.
-How does Whirr differentiate nodes by roles and configure them separately?
-Jclouds commands ending in Matching are called predicate commands. They allow Whirr to decide which subset of nodes a command will affect. For example, the following command in Whirr will run a script with the specified options on the nodes that match the given condition.
-
-Predicate<NodeMetadata> condition;
-condition = Predicates.and(runningInGroup(spec.getClusterName()), condition);
-ComputeServiceContext context = getCompute().apply(spec);
-context.getComputeService().runScriptOnNodesMatching(condition,statement, options);
-
-The following is an example of how a node playing the jobtracker role in a hadoop cluster is configured to open certain ports using the predicate commands.
-
- Instance jobtracker = cluster.getInstanceMatching(role(ROLE)); // ROLE="hadoop-jobtracker"
- event.getFirewallManager().addRules(
- Rule.create()
- .destination(jobtracker)
- .ports(HadoopCluster.JOBTRACKER_WEB_UI_PORT),
- Rule.create()
- .source(HadoopCluster.getNamenodePublicAddress(cluster).getHostAddress())
- .destination(jobtracker)
- .ports(HadoopCluster.JOBTRACKER_PORT)
- );
-
-
-
-With the help of such predicate commands, Whirr can run different bootstrap and init scripts on nodes with distinct roles.
-
-
-
-
-
-
-
-
- Great Lessons Learned
-
- I greatly appreciate the opportunity to work with CloudStack and learn from this lovable community. I can see myself constantly growing through this invaluable experience, both technically and psychologically. There were hard times when I was stuck on certain problems for days, and good times that made me want to scream with joy when a problem was cleared. This project is a great challenge for me; I am making progress steadily, though not always smoothly. Along the way I learned the following great lessons:
-
-
-
-
-
- When you work in an open source community, do things the open source way. There was a time when I locked myself away because I was stuck on problems and not confident enough to ask about them on the mailing list. The more I cut myself off from the community, the less progress I made; the lack of communication also prevented me from learning from other people and getting guidance from my mentor.
-
-
- CloudStack is evolving at a fast pace: many APIs are being added and many patches submitted every day. That is why the community uses the word "SNAPSHOT" for each development version. At the moment I am learning to deal with fast-changing code and upgrades; a large portion of my time is devoted to system installation and deployment, and I am getting used to treating system exceptions and errors as the common case. That is another reason why communication with the community is critical.
-
-
-
-
- In addition to the project itself, I am strengthening my technical skill set at the same time.
-
-
-I learned to use some useful software tools: maven, git, publican, etc.
-
-
-Reading the source code of Whirr taught me more advanced Java programming techniques, e.g. generics, wildcards, the service loader, the Executor model, Future objects, etc.
-
-
- I was exposed to Jclouds, a useful cloud-neutral library for manipulating different cloud infrastructures.
-
- I gained a deeper understanding of cloud web services and learned the usage of several cloud clients, e.g. the Jclouds CLI, CloudMonkey, etc.
-
-
-
-
-
-
-
-
- I am grateful that Google Summer Of Code exists. It gives us students a sense of how fast real-world software development works and provides hands-on experience of coding in large open source projects. More importantly, it is a self-challenging process that strengthens our minds along the way.
-
-
diff --git a/docs/en-US/gsoc-midsummer-nguyen.xml b/docs/en-US/gsoc-midsummer-nguyen.xml
deleted file mode 100644
index b4f4f5ab495..00000000000
--- a/docs/en-US/gsoc-midsummer-nguyen.xml
+++ /dev/null
@@ -1,480 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Mid-Summer Progress Updates for Nguyen Anh Tu - "Add Xen/XCP support for GRE SDN controller"
- This section describes my progress with the project titled "Add Xen/XCP support for GRE SDN controller"
-
- Introduction
- We are halfway through the GSoC 2013 journey, and I am getting more familiar with its activities. Personally, the previous one and a half months have passed surprisingly quickly, with lots of pressure. Joining GSoC for the first time, I have found its working methods and challenges totally new and interesting. Along with the stressful moments, I appreciate all the wonderful experience and knowledge that I have gained from this commitment. It is time to review it all, in time order.
-
- My project is named “Add Xen/XCP support for GRE SDN controller”; the proposal can be found here: Proposal
-
- Specifically, I need to improve the current GRE SDN controller to work with XCP, the free version of XenServer. Then, as discussed with my two mentors, Sebastien Goasguen and Hugo, I will continue with the following tasks:
-
-
- re-factor the GRE source code following the NiciraNVP plugin design.
- add GRE support for the KVM hypervisor.
- develop a new ODL plugin using the OpenDaylight controller to control and manage network services via the OpenFlow protocol.
-
- At the beginning, I explored the frameworks and tools that CloudStack uses, such as the Spring framework, maven, git and ReviewBoard. In my country developers are more familiar with svn than git; however, these tools are easy enough to use that I will not write more about them here. I do want to note how CloudStack uses Spring and what happens during Management Server startup.
-
-
-
- Spring in CloudStack
- Spring provides a container that holds the pre-loaded components CloudStack uses. At startup, these components are loaded into the container in two ways:
-
-
-
- components are declared as beans in componentcontext.xml and applicationcontext.xml
-
- <bean id="accountDaoImpl" class="com.cloud.user.dao.AccountDaoImpl" />
- <bean id="accountDetailsDaoImpl" class="com.cloud.user.AccountDetailsDaoImpl" />
- <bean id="accountJoinDaoImpl" class="com.cloud.api.query.dao.AccountJoinDaoImpl" />
- <bean id="accountGuestVlanMapDaoImpl" class="com.cloud.network.dao.AccountGuestVlanMapDaoImpl" />
- <bean id="accountVlanMapDaoImpl" class="com.cloud.dc.dao.AccountVlanMapDaoImpl" />
- ...
-
-
-
- components are marked with @Component annotation
-
- @Component
- @Local(value = { NetworkManager.class})
- public class NetworkManagerImpl extends ManagerBase implements NetworkManager, Listener {
- static final Logger s_logger = Logger.getLogger(NetworkManagerImpl.class);
-
-
-
- Note that, as far as I know, using @Component is no longer recommended.
- The fundamental functionality provided by the Spring container is Dependency Injection (DI). To decouple Java components from one another, the dependency on a certain other class is injected into them rather than the class itself creating or finding the object. The general concept behind dependency injection is called Inversion of Control: a class should not configure itself but should be configured from outside. A design based on independent classes and components increases re-usability and makes the software easier to test. An example of DI in CloudStack is shown below:
-
- public class NetworkManagerImpl extends ManagerBase implements NetworkManager, Listener {
- static final Logger s_logger = Logger.getLogger(NetworkManagerImpl.class);
-
- @Inject
- DataCenterDao _dcDao = null;
- @Inject
- VlanDao _vlanDao = null;
- @Inject
- IPAddressDao _ipAddressDao = null;
- @Inject
- AccountDao _accountDao = null;
-
-
-
- Management Server Startup
- The MS startup process is defined in cloud-client-ui/WEB-INF/web.xml. The following items will be loaded sequentially:
-
- Log4jConfigListener.
- ContextLoaderListener.
- CloudStartupServlet.
- ConsoleServlet.
- ApiServlet.
-
- Of these, CloudStartupServlet calls into ComponentContext to drive the life cycle of all pre-defined components, including the configure() and start() phases. The components are divided into seven run levels that start up consecutively, and each component must override the configure() and start() methods.
-
- public interface ComponentLifecycle {
- public static final int RUN_LEVEL_SYSTEM_BOOTSTRAP = 0; // for system level bootstrap components
- public static final int RUN_LEVEL_SYSTEM = 1; // for system level service components (i.e., DAOs)
- public static final int RUN_LEVEL_FRAMEWORK_BOOTSTRAP = 2; // for framework startup checkers (i.e., DB migration check)
- public static final int RUN_LEVEL_FRAMEWORK = 3; // for framework bootstrap components(i.e., clustering management components)
- public static final int RUN_LEVEL_COMPONENT_BOOTSTRAP = 4; // general manager components
- public static final int RUN_LEVEL_COMPONENT = 5; // regular adapters, plugin components
- public static final int RUN_LEVEL_APPLICATION_MAINLOOP = 6;
- public static final int MAX_RUN_LEVELS = 7;
-
-
- // configuration phase
- Map<String, String> avoidMap = new HashMap<String, String>();
- for(int i = 0; i < ComponentLifecycle.MAX_RUN_LEVELS; i++) {
- for(Map.Entry<String, ComponentLifecycle> entry : ((Map<String, ComponentLifecycle>)classifiedComponents[i]).entrySet()) {
- ComponentLifecycle component = entry.getValue();
- String implClassName = ComponentContext.getTargetClass(component).getName();
- s_logger.info("Configuring " + implClassName);
-
- if(avoidMap.containsKey(implClassName)) {
- s_logger.info("Skip configuration of " + implClassName + " as it is already configured");
- continue;
- }
-
- try {
- component.configure(component.getName(), component.getConfigParams());
- } catch (ConfigurationException e) {
- s_logger.error("Unhandled exception", e);
- throw new RuntimeException("Unable to configure " + implClassName, e);
- }
-
- avoidMap.put(implClassName, implClassName);
- }
- }
-
-
- // starting phase
- avoidMap.clear();
- for(int i = 0; i < ComponentLifecycle.MAX_RUN_LEVELS; i++) {
- for(Map.Entry<String, ComponentLifecycle> entry : ((Map<String, ComponentLifecycle>)classifiedComponents[i]).entrySet()) {
- ComponentLifecycle component = entry.getValue();
- String implClassName = ComponentContext.getTargetClass(component).getName();
- s_logger.info("Starting " + implClassName);
-
- if(avoidMap.containsKey(implClassName)) {
- s_logger.info("Skip configuration of " + implClassName + " as it is already configured");
- continue;
- }
-
- try {
- component.start();
-
- if(getTargetObject(component) instanceof ManagementBean)
- registerMBean((ManagementBean)getTargetObject(component));
- } catch (Exception e) {
- s_logger.error("Unhandled exception", e);
- throw new RuntimeException("Unable to start " + implClassName, e);
- }
-
- avoidMap.put(implClassName, implClassName);
- }
- }
-
-
-
- Network Architecture
- Networking is the most important component of CloudStack, providing network services from layer 2 to layer 7. During GSoC I fortunately have the chance to learn about CloudStack's network architecture, and it is really amazing. CloudStack's networking is divided into three parts:
- NetworkGuru
- NetworkGurus are responsible for:
-
- Design and implementation of virtual networks.
- IP address management.
-
- See full description about Network Guru on my wiki post: Add Xen/XCP support for GRE SDN controller
- NetworkElement
- NetworkElement is, in my opinion, the most important part of CloudStack's networking. It represents components that are present in the network. Such components can provide any kind of network service or support the virtual networking infrastructure, and their interface is defined by com.cloud.network.element.NetworkElement. There are two things to pay attention to in NetworkElement: services and elements.
- CloudStack currently supports the network services below:
-
- Dhcp service.
- Connectivity service.
- Firewall service.
- Load Balancing service.
- Network ACL service.
- Port Forwarding service.
- SourceNat service.
- StaticNat service.
- UserData service.
- Vpc service.
-
- Many elements implement the services above. They are:
-
- MidonetElement.
- BigSwitchVnsElement.
- NiciraNvpElement.
- BaremetalElement.
- VirtualRouterElement.
- VpcVirtualRouterElement.
- CiscoVnmcElement.
- JuniperSrxExternalFirewallElement.
- ElasticLbElement.
- F5ExternalLbElement.
- CloudZoneNetworkElement.
- BaremetalPxeElement.
- BaremetalUserdataElement.
- DnsNotifier.
- OvsElement.
- SecurityGroupElement.
-
- See full description about Network Element on my wiki post: Add Xen/XCP support for GRE SDN controller
- In addition, elements that wish to support network services have to implement the corresponding methods from the ServiceProvider interfaces. For example, NiciraNvpElement wants to support static NAT rules, so it has to override the applyStaticNats method.
- NetworkManager
- Network managers handle the resources managed by the network elements. They are implemented like many other "resource" managers in CloudStack.
- For instance, the manager for setting up L2-in-L3 networks with Open vSwitch is OvsTunnelManagerImpl, whereas the Virtual Router lifecycle is managed by VirtualApplianceManagerImpl.
- In this project I am going to implement L3 services for the SDN controller, so I need to understand how network services are implemented.
-
-
- Network Services
- As I said in the previous section, network services are represented by ServiceProvider interfaces. There are currently 12 service providers: Dhcp, Firewall, IpDeployer, LoadBalancing, NetworkACL, PortForwarding, RemoteAccessVpn, Site2siteVpn, SourceNat, StaticNat, UserData and Vpc. In this section I'll focus on the L3 services implemented in CloudStack, such as FirewallRule, PortForwardingRule and StaticNatRule. All services are implemented at the NetworkElement level, and every element that wishes to support them, including network plugins (Nicira NVP, BigSwitch VNS, ...), must override the corresponding NetworkElement methods. For a clear explanation, I'll take the StaticNat service implemented in the Nicira NVP plugin; the source code can be found in NiciraNvpElement.java.
- NiciraNvpElement first has to check whether it can handle the StaticNat service via the canHandle() method:
-
- if (!canHandle(network, Service.StaticNat)) {
- return false;
- }
-
-
- protected boolean canHandle(Network network, Service service) {
- s_logger.debug("Checking if NiciraNvpElement can handle service "
- + service.getName() + " on network " + network.getDisplayText());
-
- //Check if network has right broadcast domain type
- if (network.getBroadcastDomainType() != BroadcastDomainType.Lswitch) {
- return false;
- }
-
- //Check if NiciraNVP is the provider of the network
- if (!_networkModel.isProviderForNetwork(getProvider(),
- network.getId())) {
- s_logger.debug("NiciraNvpElement is not a provider for network "
- + network.getDisplayText());
- return false;
- }
-
- //Check if NiciraNVP support StaticNat service
- if (!_ntwkSrvcDao.canProviderSupportServiceInNetwork(network.getId(),
- service, Network.Provider.NiciraNvp)) {
- s_logger.debug("NiciraNvpElement can't provide the "
- + service.getName() + " service on network "
- + network.getDisplayText());
- return false;
- }
-
- return true;
- }
-
- NiciraNvp checks whether it is the provider of the network and whether it can support the StaticNat service. After these checks pass, it applies the static NAT on its own Logical Router, which I won't detail here.
- The sequence diagram for applying a L3 service is described below:
-
-
-
-
- network_service.png: Network services implementation sequence diagram.
-
- After understanding the network architecture and how services are implemented, I decided to improve the Ovs plugin to support L3 services. Because it is the native SDN controller, I want to use the Virtual Router for L3 service deployment. This work is done by delegating L3 service execution from OvsElement to the Virtual Router manager. On Xen hosts, VirtualRouterElement executes L3 services via xapi plugin calls. The flow below describes the process in more detail:
-
-
-
-
- l3_services.png: Layer 3 services implementation in Ovs plugin.
-
- In Xen, all L3 services are executed via a Xapi plugin named "vmops". By default, Virtual Routers (VRs) control and manage network services. In this case, "vmops" forwards requests to network-related shell scripts such as call_firewall.sh or call_loadbalancer.sh. These then parse the parameters and invoke shell scripts placed inside the VR via ssh. For example, if we define a static NAT rule, the process occurs as follows:
- The VR manager (VirtualNetworkApplianceManager) sends the staticNat command to the AgentManager:
-
- try {
- answers = _agentMgr.send(router.getHostId(), cmds);
- } catch (OperationTimedoutException e) {
- s_logger.warn("Timed Out", e);
- throw new AgentUnavailableException("Unable to send commands to virtual router ", router.getHostId(), e);
- }
-
- The AgentManager makes a xapi plugin call to the host containing the VR
-
- String result = callHostPlugin(conn, "vmops", "setFirewallRule", "args", args.toString());
-
- "vmops" forwards the request to the "call_firewall" shell script
-
- @echo
- def setFirewallRule(session, args):
- sargs = args['args']
- cmd = sargs.split(' ')
- cmd.insert(0, "/usr/lib/xcp/bin/call_firewall.sh")
- cmd.insert(0, "/bin/bash")
- try:
- txt = util.pread2(cmd)
- txt = 'success'
- except:
- util.SMlog(" set firewall rule failed " )
- txt = ''
-
- return txt
-
- "call_firewall" parses the parameters and directly invokes a shell script placed inside the VR via an ssh command
-
- ssh -p 3922 -q -o StrictHostKeyChecking=no -i $cert root@$domRIp "/root/firewall.sh $*"
-
- Finally, the "firewall" script sets some iptables rules to implement the static NAT rule.
-
-
- Opendaylight Controller
- The project needs an open source OpenFlow controller, and I decided to choose Opendaylight.
- Opendaylight (ODL) is an interesting experience I have had during GSoC. Before starting the project, I was torn between many open source OpenFlow controllers such as POX, NOX, Beacon, Floodlight and Opendaylight. Honestly, at the beginning of the project I did not have much knowledge of the OpenFlow protocol or of open source SDN controllers. Once the project was in progress, I first chose Floodlight, a safe solution because of its rich functionality and good documentation. However, Sebastien Goasguen, the CloudStack GSoC manager, recommended that I try Opendaylight. From the information I collected, I found that Opendaylight is getting a lot of attention from the community.
- At the moment, ODL has three main projects:
-
- Opendaylight Controller.
- Opendaylight Network Virtualization Platform.
- Opendaylight Virtual Tenant Network.
-
- It also has six incubating projects:
-
- YANG Tools.
- LISP Flow Mapping.
- OVSDB Integration.
- Openflow Protocol Library.
- BGP-LS/PCEP.
- Defense4All.
-
- To integrate Opendaylight for controlling and managing network services, I chose the ODL Controller project, which is developed by Cisco programmers. The ODL controller is pure software and, since it runs in a JVM, it can run on any OS that supports Java. The structure of the ODL controller is shown below:
-
-
-
-
- odl_structure.jpg: Opendaylight Controller architecture.
-
- The structure is separated to three layers:
-
- Network Apps and Orchestration: the top layer consists of applications that utilize the network for normal network communications. Also included in this layer are business and network logic applications that control and monitor network behavior.
- Controller Platform: the middle layer is the framework in which the SDN abstractions can manifest; providing a set of common APIs to the application layer (commonly referred to as the northbound interface), while implementing one or more protocols for command and control of the physical hardware within the network (typically referred to as the southbound interface).
- Physical and Virtual Network Devices: The bottom layer consists of the physical and virtual devices, switches, routers, etc., that make up the connective fabric between all endpoints within the network.
-
- This controller is implemented strictly in software and is contained within its own Java Virtual Machine (JVM).
- Source code can be cloned from git:
-
- git clone https://git.opendaylight.org/gerrit/p/controller.git
-
- Applications make requests to the ODL Northbound API via HTTP. Currently, ODL does not support many services. The full REST API can be found here: ODL Controller REST API
- For example, we can query the list of existing flows configured on a node in a given container.
-
- GET http://controller-ip/controller/nb/v2/flow/{containerName}/{nodeType}/{nodeId}
- {containerName}: name of the container. The container name for the base controller is "default"
- {nodeType}: type of the node being programmed
- {nodeId}: node identifier
-
- Or we can add a new flow
-
- POST http://controller-ip/controller/nb/v2/flow/{containerName}/{nodeType}/{nodeId}/{name}
-
- with request body in XML or JSON format
-
- { "actions" : [ "...", ... ],
- "nwDst" : "...",
- "hardTimeout" : "...",
- "installInHw" : "...",
- "tosBits" : "...",
- "cookie" : "...",
- "node" : { "id" : "...", "type" : "..." },
- "dlDst" : "...",
- "name" : "...",
- "nwSrc" : "...",
- "vlanPriority" : "...",
- "protocol" : "...",
- "priority" : "...",
- "vlanId" : "...",
- "tpDst" : "...",
- "etherType" : "...",
- "tpSrc" : "...",
- "ingressPort" : "...",
- "idleTimeout" : "...",
- "dlSrc" : "..." }
-
- The following Python client written by Dwcarder illustrates the use of the REST API in more detail: https://github.com/dwcarder/python-OpenDaylight/blob/master/OpenDaylight.py
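As a concrete illustration of the GET and POST calls above, here is a hedged Python sketch that builds the flow URLs and request body. The controller-ip placeholder is kept from the text; the helper names and the example flow values are my own illustrative assumptions, and an HTTP library such as python-requests would be used to actually send the requests.

```python
import json

# Base of the ODL Northbound flow API, as given in the text
# ("controller-ip" is a placeholder for the controller address).
BASE = "http://controller-ip/controller/nb/v2/flow"

def flow_list_url(container, node_type, node_id):
    """URL to GET the flows configured on a node in a given container."""
    return "%s/%s/%s/%s" % (BASE, container, node_type, node_id)

def flow_add_url(container, node_type, node_id, name):
    """URL to POST a new named flow on a node."""
    return "%s/%s/%s/%s/%s" % (BASE, container, node_type, node_id, name)

def flow_body(node_type, node_id, name, **fields):
    """JSON body for the flow-add request (a subset of the fields above)."""
    body = {"name": name, "node": {"id": node_id, "type": node_type}}
    body.update(fields)
    return json.dumps(body)

if __name__ == "__main__":
    # Hypothetical usage with python-requests (authentication omitted):
    # requests.post(flow_add_url("default", "OF", node, "flow1"),
    #               data=flow_body("OF", node, "flow1",
    #                              ingressPort="1", actions=["DROP"]),
    #               headers={"Content-Type": "application/json"})
    print(flow_list_url("default", "OF", "00:00:00:00:00:00:00:01"))
```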
- During the project I learnt how to make HTTP requests from CloudStack to ODL for controlling and managing network services. However, there is a problem: ODL currently does not support L2 configuration, while integrating ODL with CloudStack requires it. I found an incubating project, led by Brent Salisbury and Evan Zeller from the University of Kentucky, that is currently trying to integrate the Open vSwitch database management protocol into ODL, which will allow ODL to view, modify and delete Open vSwitch objects such as bridges and ports by way of the Open vSwitch database. In short, this project mainly creates a module that acts like an OVSDB client and uses JSON-RPC for remote management. I talked to them and jumped into the project. Thus, I'll do extra work in the ODL community to help the ODL Controller support L2 configuration, while still integrating ODL with CloudStack by making a new ODL plugin with the same behavior as NiciraNvp and Ovs.
- Full information about the incubating project can be found here:https://wiki.opendaylight.org/view/Project_Proposals:OVSDB-Integration
- In the next section I will give a short description of XenAPI (also called Xapi), which applications use to interact with virtualization resources on Xen hosts.
-
-
- Xen API
- There are many tool stacks we can use to manage Xen hosts, such as XL, Xapi, libvirt or Xend. Of these, Xapi is the default. Xapi (the Xen API) is called from applications to control and manage virtualization resources on Xen hosts via XML-RPC. Xapi is the core component of XCP and XenServer and is written in the OCaml language.
- It's possible to talk directly to Xapi using XML-RPC. This is a way to make remote procedure calls using http requests. In fact, it's possible to send and receive messages using telnet but this is not recommended. The XML-RPC calls are the fixed standard, but we also have bindings to that XML-RPC for Python, C and Java.
- As an example of using XML-RPC calls, I make a simple request, written in Python, to list all VMs on a Xen host.
- First we need to import the XenAPI lib:
-
- >>> import XenAPI
-
- Then we have to authenticate to the XenServer or XCP host at the given URL with a user name and password:
-
- >>> session = XenAPI.Session('https://url')
- >>> session.login_with_password('user','password')
-
- If this works, we've done the hard bit and established communications with our server. The function below will list all VMs on this server.
-
- >>> session.xenapi.VM.get_all()
-
- The answer should be something like:
-
- ['OpaqueRef:7b737e4f-58d8-b493-ea31-31324a2de528', 'OpaqueRef:7237b8af-b80c-c021-fbdc-68146d98d7f5', ........., 'OpaqueRef:c3b752b9-1926-9ceb-f36a-408497c3478b']
-
- This is a list of strings, each of which represents a unique identifier for a particular 'object' on the server. In this case each 'OpaqueRef' represents a virtual machine. For each VM we can get the name (name_label):
-
- >>> [session.xenapi.VM.get_name_label(x) for x in session.xenapi.VM.get_all()]
-
- There are a lot of machines in this list. Some of them, however, are 'template VMs': frozen copies which can't actually run, but which can be cloned in order to make real virtual machines. We can find out which VMs are templates by calling the VM.get_is_a_template() function. So let's combine the two in order to produce a list of all the real VMs on my server:
-
- >>> [session.xenapi.VM.get_name_label(x) for x in session.xenapi.VM.get_all() if not session.xenapi.VM.get_is_a_template(x)]
-
- The answer should be something like:
-
- ['Debian Etch 4.0 (2)', 'Debian Etch 4.0 (1)', 'test9', 'test4', 'Control domain on host: ebony', 'Control domain on host: localhost.localdomain', 'test3', 'Debian Sarge 3.1 (1)', 'test2', 'Debian Etch 4.0 (3)', 'test1', 'test3', 'test7', 'test5']
-
- Finally it's only polite to log out of the server. This allows it to garbage collect the no-longer active session.
-
- >>> session.logout()
-
- Full python script can be found here: Xapi python client
- We can find Xapi source code from: https://github.com/xen-org/xen-api
- Xapi comes with several main classes, each of which refers to a virtual resource object in Xen, such as:
-
- VM: refers to a virtual machine.
- VIF: refers to a virtual NIC.
- VDI: refers to a virtual volume or hard disk.
- ...
-
- Full information about the Xapi classes can be found here: http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/api/ Click on each item to see more detail.
- Xapi plugin
- Xapi has an extension mechanism that allows one to install a script (usually Python, but it can be any executable) on the Xen host and then call it through Xapi. Writing a Xapi plugin in Python is simplified by using the XenAPIPlugin module, which is installed by default in dom0 on XCP. In my GSoC project I have to call some plugin scripts to control and manage virtual switches. For example, I inserted a new function into the vmops script to get a network name-label.
- Then, we can call it directly from XE command line or via XML-RPC. Here is a simple call from XE:
-
- $xe host-call-plugin host-uuid=host-uuid plugin=vmops fn=getLabel
-
- If the plugin takes arguments, they should be passed with the "args:" keyword.
- In ACS, almost all plugins are called from CitrixResourceBase.java. For my function above, I inserted a new method into CitrixResourceBase.java and called the plugin as below:
-
- private String getLabel() {
- Connection conn = getConnection();
- String result = callHostPlugin(conn, "ovstunnel", "getLabel");
- return result;
- }
-
- Here, the Connection class initializes a session to the Xen host, and the callHostPlugin method executes an XML-RPC call to the plugin.
- Note that every Xapi plugin script must be placed in /etc/xapi.d/plugins.
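To make the plugin mechanism concrete, here is a minimal hedged sketch of what such a plugin script could look like, using the XenAPIPlugin dispatch style described above. The getLabel handler name matches the example in the text, but its body and the networkLabel argument are illustrative assumptions only.

```python
# Hedged sketch of a minimal Xapi plugin. A real plugin would query the
# host's network records; this handler body is illustrative only.

def getLabel(session, args):
    # args is the dict of "args:..." key/value pairs passed from the xe
    # CLI or via XML-RPC; a plugin returns a string result to the caller.
    return args.get("networkLabel", "cloud-public")

if __name__ == "__main__":
    try:
        import XenAPIPlugin  # available in dom0 on XCP/XenServer
        # Register the functions this plugin exposes. The script itself
        # must be placed in /etc/xapi.d/plugins and made executable.
        XenAPIPlugin.dispatch({"getLabel": getLabel})
    except ImportError:
        pass  # not running inside dom0
```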
-
-
- What I've done
- In one and a half months, I have absorbed all of the knowledge above and finished two things:
-
- improve gre controller to support XCP.
- re-factor GRE source code by following NiciraNVP plugin design.
-
- improve gre controller to support XCP
- Starting from an understanding of how the native SDN controller works, I made a small patch to make it work with Xen Cloud Platform (XCP) version 1.6. Without the patch, this controller can serve only XenServer, the commercial counterpart of XCP. I tried the SDN controller with XCP and debugged it to find out what the errors were and why they occurred. After some effort, I identified the following problems:
-
- The SDN controller has to know on which interface it will deploy GRE tunnels. To do this check, it looks into the network to find the PIF's interface, which has a network name-label that the user defined in the zone deployment phase (or a default label if none was given). However, an XCP network has neither a user-defined nor a default name-label, so at this step I used a workaround: I take whatever name-label is found on the XCP host to pass the check.
- When creating an OVS bridge, the controller creates a new dom0 vif, plugs it into the bridge and immediately unplugs it. This asks XenServer to create the bridge without running the ovs-vsctl or brctl scripts. I found that this step is not important on XCP hosts and even generates an error from the xenopsd daemon, so I skipped it.
- The script that interacts directly with Open vSwitch is ovstunnel. It requires a lib named cloudstack_pluginlib, which does not exist on XCP. Thus, I added this file to the set of files copied from CloudStack to XCP during the add-host phase.
- The "setup_ovs_bridge" function in ovstunnel inspects the XenServer version in order to block IPv6. However, the product_version parameter does not exist on XCP, which uses the platform_version parameter instead. So I decided to skip this step.
-
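The name-label workaround in the first point above can be sketched as a small selection function: prefer the user-defined label, otherwise accept any label found on the XCP host. The helper name and its inputs are hypothetical illustrations, not the actual patch.

```python
# Hedged sketch of the name-label workaround: prefer the label the user
# defined at zone-deployment time, otherwise fall back to whatever
# name-label exists on the XCP host so the PIF check still passes.

def pick_network_label(user_label, host_labels):
    """Return the network name-label the tunnel setup should use."""
    if user_label:
        return user_label
    # XCP networks may carry no user-defined or default name-label,
    # so accept any non-empty label found on the host.
    for label in host_labels:
        if label:
            return label
    return None
```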
- The patch is already committed to the sdnextensions branch, which is also the primary branch I have been working on during this GSoC period.
- re-factor GRE source code by following NiciraNVP plugin design
- The GRE source code was re-factored with the following changes:
-
- add Connectivity service checking: all L2 configuration methods now check whether the Ovs plugin can handle the Connectivity service.
- move commands / answers to a new package: com.cloud.agent.api.
- add new NetworkProvider: Ovs.
- add L3 services to the Ovs capabilities: the Ovs capability map now enables the L3 services SourceNat, StaticNat, PortForwarding, RedundantRouter and Gateway. The L2 Connectivity service is also enabled.
- add L3 services prototype code to OvsElement.java
-
- With the knowledge about CloudStack's network architecture I have learned and presented above, I made a patch which permits guest networks to reach each other via private IP addresses without using VPC mode. The proposal can be found here: Routing between guest networks
- In the coming days, I will do the following:
-
- implement L3 services with Virtual Router.
- improve Ovs to support KVM hypervisor.
- add a new ODL plugin using the ODL controller to control and manage network services.
-
-
-
diff --git a/docs/en-US/gsoc-midsummer-shiva.xml b/docs/en-US/gsoc-midsummer-shiva.xml
deleted file mode 100644
index c26c5a808a5..00000000000
--- a/docs/en-US/gsoc-midsummer-shiva.xml
+++ /dev/null
@@ -1,283 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Mid-Summer Progress Updates
- This section describes Mid-Summer Progress of Shiva Teja - "Create A New Modular UI for Apache CloudStack"
-
- Introduction
-
- The progress on my project has been very smooth so far and I got to learn a lot. I started by learning git and backbone.js, then went on to learn angular.js and eventually made a basic usable UI with angular.js. Sebastien has been guiding and helping me throughout this period. Both the CloudStack and angular.js communities have been helpful along the way.
-
- I am happy with the progress so far, and it should be possible to reach the goals at a slightly faster pace.
-
-
- Progress and Experience So Far
-
- I made a basic UI from which a user can list a bunch of collections, launch VMs (and perform similar actions), edit configurations, add accounts and search through some of the fields. I've also added a very basic notification service, and work is in progress on a dropdown notification similar to the current UI's.
-
-
- I started by learning backbone.js and improving the prototype that I had made for my proposal. Then I looked into the current UI's code and tried to make plugins. There was a lot of repeated DOM manipulation and many repeated AJAX calls throughout the UI. I then spent almost a week looking into angular.js and experimenting with it. I finally chose angular.js because it does a lot more than backbone and lets you do the same things in less and more elegant code, which is therefore easier to maintain. It was obvious that most of the repetitive DOM manipulation could be removed with angular's directives, and the AJAX calls with models. This is one of the important reasons I feel CloudStack should move from plain jQuery to an MVC framework like angular. Apart from code reusability for custom UIs, angular yields much leaner, more structured and elegant code. Rolling out new features becomes a much easier task: implementing features like the Quick View or the UI tooltips present in the current UI is just a matter of writing another directive.
-
-
- Learning the framework and developing the app while following best practices was not easy at the beginning. I had difficulties in deciding things like structure of the app. Looking into existing apps like angular-app and famous threads on the mailing list helped.
-
-
- Another slightly challenging task was to design the angular.js models for CloudStack. The angular.js documentation says to just use Plain Old JavaScript Objects. Given that statement, there are many possible ways of doing it, so deciding on the best one was frustrating at the beginning, but it turned out to be simple. A rule of thumb that I think should be followed throughout the app is to return promises whenever possible. Promises remove unnecessary callbacks and offer a much more elegant structuring of the code. All the models and collections in the current UI return promises, which allows us to take action after the specified operations on models and collections take place.
-
-
- Making complex directives can also be frustrating at the beginning. The videos from egghead.io came in handy for understanding directives in depth. I feel these are the next most powerful thing angular offers after the ability to use POJOs for models: all DOM manipulation can be put into directives and reused easily.
-
-
-
- Screenshots
- I'll try to explain what you can do with the UI developed so far with some screenshots and a bit of the associated code.
-
- Instances tab
-
-
-
-
-
-
-
- instances-screen.png: Instances tab
-
-
-
-
- Simple confirmation modal when you click start vm button
-
-
-
-
-
- start-vm-screen.png: Start vm screen
-
-
- This is a simple directive which launches such a modal on click and can perform actions for 'yes' and 'no' clicks (it can be found at static/js/common/directives/confirm.js). In this case it calls model.start(), which calls the requester service to start the VM.
-
-
- And the vm is running!
-
-
-
-
-
- vm-running.png: Running vm
-
-
- Labels automatically get updated by watching model changes
-
-
- Async calls
-
-
-
-
-
- async-calls.png: Example Async Calls
-
-
- Async calls are taken care of by a service named requester, which returns a promise. It resolves the promise when the query-async-job request returns with a result.
-
-
-
-
- Example Modal Forms
-
-
- The Modal Form for adding an account
-
-
-
-
-
- add-account-screen.png: Add Account
-
-
- modal-form is a directive that I wrote which can be used for modal forms across the UI. Example usage can be seen in accounts or volumes in static/js/app.
-
-
- Create Account request sent on submitting that form
-
-
-
-
-
- create-account-request.png: Create Account Request
-
-
-
-
-
-
- Edit Configurations
-
-
-
-
-
-
-
- configurations-screen.png: Configuration Screen
-
-
- I've moved the description of the configurations from a column in the current UI to a tooltip. These tooltips appear when you hover over the configurations.
-
-
- An input text box like this appears when you click edit
-
-
-
-
-
- edit-configuration.png: Configurations edit screen
-
-
- This is handled by the edit-in-place directive that I wrote
-
-
- This shows that the configuration has been updated and the basic notification service that pops up
-
-
-
-
-
- configuration-edit-success.png: Configurations edit success screen
-
-
- It is as simple as calling model.update when the save button is clicked. As it returns a promise, it can be used to call the notification service whenever the model changes.
-
-
- I have tried my best to give an overview of the code along with the screenshots. For more on the code, I'd recommend going through it thoroughly, as I'd love to have someone look at my code and point out mistakes at this early stage.
-
-
-
- RESTful API
- I worked on the RESTful API for a while. I read a lot about REST, but I could not find an elegant way of designing the API for the non-RESTful verbs like start, stop, etc. I have finished working on the verbs that are RESTful (like list, update, delete, etc.). The API can also handle sub-entities, like listing the virtual machines in a domain.
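To illustrate the RESTful verb handling just described, here is a hedged sketch of how GET paths, including sub-entity paths like the virtual machines of a domain, might be mapped onto CloudStack list commands. The mapping tables and the function are my own illustrative assumptions, not the actual implementation.

```python
def to_list_command(path):
    """Map a GET path to a CloudStack list* command plus its parameters.

    Illustrative mapping only, e.g.:
        /virtualmachines            -> listVirtualMachines
        /domains/<id>/virtualmachines -> listVirtualMachines with domainid
    """
    ENTITY_CMD = {"virtualmachines": "listVirtualMachines",
                  "accounts": "listAccounts",
                  "domains": "listDomains"}
    PARENT_PARAM = {"domains": "domainid"}

    parts = [p for p in path.strip("/").split("/") if p]
    params = {}
    if len(parts) >= 3:               # sub-entity: /parent/<id>/child
        params[PARENT_PARAM[parts[0]]] = parts[1]
        entity = parts[2]
    elif len(parts) == 2:             # specific resource: /entity/<id>
        params["id"] = parts[1]
        entity = parts[0]
    else:                             # whole collection: /entity
        entity = parts[0]
    return ENTITY_CMD[entity], params
```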
- Here are some screenshots:
-
-
- List all virtual machines. Anything similar should work
-
-
-
-
-
- list-virtualmachines.png: List All Virtual Machines
-
-
-
-
- List the properties of a specific vm
-
-
-
-
-
- list-specific-vm.png: List Properties of a specific vm
-
-
-
-
- List virtual machines of a domain. Anything similar should work
-
-
-
-
-
- list-domain-vms.png: List virtual machines of a domain
-
-
-
-
- Create an account with a POST request. You can also do update, delete etc.
-
-
-
-
-
- create-account-post.png: Create Account with POST request
-
-
-
-
-
-
- Miscellaneous
- There are a lot of other things that I've experimented with along the way which are not shown in the screenshots. Although my initial timeline was designed with backbone.js in mind, I've been following a similar version of it so far. Progress was a bit slow at first, as I had to learn and implement at the same time, but I've been rolling things out very fast for the past couple of weeks now that I am comfortable with most of the angular.js concepts. The project can be finished comfortably if I continue at the same pace. Here's a list of the important things that will be implemented next, in order (I have already experimented with most of them):
-
-
- Authentication handling: This is a slightly tough task. I looked into existing apps and made a basic security service which can be used for this purpose.
-
-
- Infinite scroll directive: I am currently loading all the data at once in the UI. This does not work well with huge production clouds. It also changes the structure of collections slightly, so it is important to take care of this before further development.
-
-
- A modal wizard directive required for adding instances.
-
-
- After finishing those three I'll be equipped with enough UI pieces to let me concentrate on my models. I'll try to add as much functionality to the models as can easily be used throughout this UI, and that is also reusable in custom UIs. After finishing these, I'll implement a better notification system.
-
-
- Tests: Although I should have written these in parallel while developing the UI, given my lack of experience it took me some time to realize how important tests are. I have set up a test environment with Karma and will soon write tests for whatever I've written so far.
-
-
-
-
- Experience gained working on OSS and CloudStack
- Working on OSS has been very different from a university project, and offered much more to learn than one could. Asking and answering questions really helped me, and I feel it was the most important part of the development so far. Although I was a bit shy about asking questions at the beginning, I really loved the way the angular.js community helped even with silly questions. Soon I realized the same happens on the CloudStack mailing list, or any OSS mailing list for that matter. Solving other people's problems also helps a lot in building up knowledge, so answering questions is another important part of working on Open Source Software. Being nice and polite in public discussions like these builds character. I am really glad to be a part of it now, and very thankful to Google for such a wonderful program, which introduces students to real-world software problems at a very early stage.
- I did not know much about CloudStack itself when I started working on the project. Following the discussions on the mailing list, I googled the different terms used and watched a few videos on cloud computing, and I'm really interested in learning more. I hope to join the real CloudStack development soon.
-
-
- Conclusion
- You can find a demo of the UI here live in action.
- I am really happy with the progress and experience so far. The goals of the project look easily reachable with the experience I have now. I still have the RESTful API to handle at the end, so I'll have to finish most of the project by the end of August. Each of the tasks in the todo list above should not take much time if things go well, and the models required for the UI should be ready by the last week of August so that I can take care of any UI-specific things and the RESTful work.
-
- Here's a small list of things that I've learned so far:
-
-
- Git concepts, along with using JIRA and Review Board.
-
-
- Some advanced JS concepts and JS frameworks like jQuery, backbone.js, angular.js. Using Twitter Bootstrap for faster UI development.
-
-
- Basics of designing and structuring RESTful APIs
-
-
- Cloudmonkey's code and usage. I had to look into its code when I was designing the RESTful API.
-
-
- A more in-depth understanding of the Flask web framework
-
-
- Exposure to testing environments like Karma, and to testing the UI in different browsers
-
-
- Code written so far is available here and here
- I thank Google and CloudStack for giving me this opportunity, and Sebastien and Kelcey for helping me along the way.
-
-
diff --git a/docs/en-US/gsoc-midsummer.xml b/docs/en-US/gsoc-midsummer.xml
deleted file mode 100644
index 74ca62a107e..00000000000
--- a/docs/en-US/gsoc-midsummer.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Mid-Summer Progress Updates
- This chapter describes the progress of each &PRODUCT; Google Summer of Code project.
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/gsoc-proposals.xml b/docs/en-US/gsoc-proposals.xml
deleted file mode 100644
index 7c4b50c6511..00000000000
--- a/docs/en-US/gsoc-proposals.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Google Summer of Code Proposals
- This chapter contains the five proposals awarded to &PRODUCT; for the 2013 Google Summer of Code project.
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/gsoc-shiva.xml b/docs/en-US/gsoc-shiva.xml
deleted file mode 100644
index fe36d8ef050..00000000000
--- a/docs/en-US/gsoc-shiva.xml
+++ /dev/null
@@ -1,70 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Shiva Teja's 2013 GSoC Proposal
- This chapter describes Shiva Teja's 2013 Google Summer of Code project within the &PRODUCT; ASF project. It is a verbatim copy of the submitted proposal.
-
- Abstract
-
- The aim of this project is to create a new modular UI for Apache CloudStack using Bootstrap by Twitter and Backbone.js. To achieve this easily, I'll be creating a RESTful wrapper API on top of the current CloudStack API. I hope this project will make custom UIs for CloudStack very easy.
-
- Why does CloudStack need a new UI?
-
- The current UI cannot be reused easily to make a custom UI. The UI I will be making with backbone.js can be reused very easily to build custom UIs. The models, views, routers, etc. can remain the same in all the UIs; the user interface can be changed just by changing the templates. See the implementation details below for more.
-
- Why does it need a RESTful wrapper API ?
-
- Backbone.js heavily depends on a RESTful architecture. Making a new UI with backbone.js using a query-based API might not be easy.
-
-
- List of deliverables
-
- A new UI for CloudStack (with almost all features of the current UI, and new ones, if any).
- A RESTful wrapper API on top of the CloudStack API
- Some documentation about using this UI to make a custom UI.
-
-
-
- Approach
- Wrapper API: Backbone.js, by default, uses four HTTP methods (GET, PUT, POST, DELETE) for communicating with the server: GET to fetch a resource, POST to create one, PUT to update one and DELETE to delete one. A query-based API could probably be used to build the UI by overriding backbone's default sync function, but it makes more sense to have an API which supports the above-mentioned methods and is resource based. This RESTful API works on top of the CloudStack API. The main task is to map the combinations of these HTTP methods and resources to the appropriate CloudStack API commands. The other task is to decide what the URLs should look like. Take starting a virtual machine: for it to be RESTful, we have to use POST, as we are creating a resource, or PUT, as we are changing the state of a virtual machine. So the possible options for the URL could be a POST to /runningvirtualmachines responding with a 201 Created code, or a PUT on /virtualmachines/id responding with 200 OK. Once these are decided, the wrapper can be generated or written manually, using defined patterns to map to the appropriate CloudStack API commands (similar to what cloudmonkey does; see this prototype). I can use cloudmonkey's code to generate the required API entity-verb relationships. Each verb will have a set of rules saying which method should be used in the RESTful API and how it should appear in the URL. Another possible way would be to group entities manually first and write the wrapper by hand (something like zone/pods/cluster). Some possibilities have been discussed in this thread.
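The verb/resource mapping described above can be sketched as a small routing table. This is a hypothetical illustration, not the project's actual wrapper code; the CloudStack command names (listVirtualMachines, deployVirtualMachine, etc.) are real API commands, but the table and helper are assumptions.

```python
# Hypothetical sketch of mapping (HTTP method, resource) pairs from the
# RESTful wrapper onto CloudStack's query-API commands.
RULES = {
    ("GET", "virtualmachines"): "listVirtualMachines",
    ("POST", "virtualmachines"): "deployVirtualMachine",
    ("PUT", "virtualmachines"): "updateVirtualMachine",
    ("DELETE", "virtualmachines"): "destroyVirtualMachine",
}

def to_cloudstack(method, path):
    """Translate an HTTP method and REST path into a CloudStack command plus params."""
    parts = [p for p in path.strip("/").split("/") if p]
    resource = parts[0]
    params = {}
    if len(parts) > 1:          # e.g. /virtualmachines/<id>
        params["id"] = parts[1]
    return RULES[(method, resource)], params

print(to_cloudstack("GET", "/virtualmachines"))    # → ('listVirtualMachines', {})
print(to_cloudstack("PUT", "/virtualmachines/42")) # → ('updateVirtualMachine', {'id': '42'})
```

A generated wrapper would fill a table like RULES automatically from the entity-verb relationships extracted from cloudmonkey.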
-
- UI: It will be a single-page app using client-side templating for rendering. This makes it very easy to build a custom UI, because that can be achieved just by changing the templates. Backbone views will use these templates to render the appropriate models/collections; a completely new interface can be written just by changing the templates, while the JavaScript code remains completely the same. The views will handle the appropriate DOM events, and such events will trigger the appropriate model/collection changes, causing the appropriate API calls.
-
-
- Approximate Schedule
- Till June 17 - Decide what the RESTful API should look like and design algorithms to generate the wrapper.
- July 5 (soft deadline), July 10 (hard deadline): Wrapper API will be ready.
- July 12 (soft) - July 15 (hard): Make basic wireframes and designs for the website and get them approved.
- July 29 (mid-term evaluation): All the basic models, views and routes of the UI should be ready, along with a few templates.
- August 15 (hard deadline, shouldn't take much time actually) - A basic usable UI where users can list all the entities which are present in the current UI's main navigation (like Instances, Templates, Accounts, etc.)
- September 1 (hard) - From this UI, users should be able to launch instances and edit the settings of most of the entities.
- September 16 (pencils down!) - Fix some design tweaks and finish a completely usable interface with functions similar to the current UI.
- September 23 - Finish the documentation on how to use this UI to make custom UIs.
-
-
- About Me
- I am a 2nd-year computer science undergrad studying at IIT Mandi, India. I've been using Python for a year and a half now, and have used Django, Flask and Tornado for my small projects. Along with Python, I use C++ for competitive programming. Recently, I fell in love with Haskell. I've always been fascinated by web technologies.
-
-
diff --git a/docs/en-US/gsoc-tuna.xml b/docs/en-US/gsoc-tuna.xml
deleted file mode 100644
index aa9726f095c..00000000000
--- a/docs/en-US/gsoc-tuna.xml
+++ /dev/null
@@ -1,231 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Nguyen's 2013 GSoC Proposal
- This chapter describes Nguyen's 2013 Google Summer of Code project within the &PRODUCT; ASF project. It is a verbatim copy of the submitted proposal.
-
- Add Xen/XCP support for GRE SDN controller
-
- "This project aims to enhance the current native SDN controller in supporting Xen/XCP and integrate successfully the open source SDN controller (FloodLight) driving Open vSwitch through its interfaces."
-
-
-
- Abstract
-
- SDN, which stands for Software-Defined Networking, is an approach to building data-networking equipment and software. It was pioneered at ONRC, Stanford University. SDN basically decouples control from the physical networking boxes and gives it to a software application called a controller. SDN has three parts: the controller, the protocols and the switch, where OpenFlow is an open standard for deploying innovative protocols. Nowadays, more and more datacenters use SDN instead of traditional physical networking boxes. For example, Google announced that it completely built its own switches and SDN controllers for use in its internal backbone network.
-
-
- Open vSwitch, an open-source software switch, is widely used as a virtual switch in virtualized server environments. It can currently run on any Linux-based virtualization platform, such as KVM, Xen (XenServer, XCP, the Xen hypervisor) and VirtualBox. It has also been ported to a number of different operating systems and hardware platforms: Linux, FreeBSD, Windows and even non-POSIX embedded systems. In IaaS cloud computing, using Open vSwitch instead of the Linux bridge on compute nodes has become an inevitable trend because of its powerful features and its OpenFlow integration.
-
-
- In CloudStack, we already have a native SDN controller. With the KVM hypervisor, developers can easily install the Open vSwitch module, whereas Xen has a built-in one. The combination of the SDN controller and Open vSwitch enables many advanced capabilities. For example, creating GRE tunnels as an isolation method instead of VLANs is a good start. In this project, we plan to support GRE tunnels on the Xen/XCP hypervisor with the native SDN controller. When that is done, substituting an open-source SDN controller (floodlight, beacon, pox, nox) for the current one is an exciting next step.
-
-
-
- Design description
-
- CloudStack currently has a native SDN controller that is used to build meshes of GRE tunnels between Xen hosts. It consists of four parts: the OVS tunnel manager, the OVS Dao/VO classes, the Command/Answer classes and the OVS tunnel plugin. The details are as follows:
-
-
- OVS tunnel manager: Consists of OvsElement and OvsTunnelManager.
-
-
- OvsElement is used for controlling the Ovs tunnel lifecycle (prepare, release)
-
-
-
- prepare(network, nic, vm, dest): create tunnel for vm on network to dest
-
-
- release(network, nic, vm): destroy tunnel for vm on network
-
-
-
- OvsTunnelManager drives bridge configuration and tunnel creation by sending the respective commands to the Agent.
-
-
-
- destroyTunnel(vm, network): call OvsDestroyTunnelCommand to destroy tunnel for vm on network
-
-
- createTunnel(vm, network, dest): call OvsCreateTunnelCommand to create tunnel for vm on network to dest
-
-
-
- OVS tunnel plugin: These are the ovstunnel and ovs-vif-flows.py scripts, written as XAPI plugins. The OVS tunnel manager calls them via XML-RPC.
-
-
- The ovstunnel plugin calls the corresponding ovs-vsctl commands for setting up the OVS bridge and creating or destroying GRE tunnels.
-
-
-
- setup_ovs_bridge()
-
-
- destroy_ovs_bridge()
-
-
- create_tunnel()
-
-
- destroy_tunnel()
-
-
-
- ovs-vif-flows.py clears or applies rules for VIFs every time one is plugged into or unplugged from an OVS bridge.
-
-
-
- clear_flow()
-
-
- apply_flow()
-
-
-
- OVS command/answer: These are designed as requests and answers between the Manager and the Plugin. The commands correspond exactly to the operations described above.
-
-
-
- OvsSetupBridgeCommand
-
-
- OvsSetupBridgeAnswer
-
-
- OvsDestroyBridgeCommand
-
-
- OvsDestroyBridgeAnswer
-
-
- OvsCreateTunnelCommand
-
-
- OvsCreateTunnelAnswer
-
-
- OvsDestroyTunnelCommand
-
-
- OvsDestroyTunnelAnswer
-
-
- OvsFetchInterfaceCommand
-
-
- OvsFetchInterfaceAnswer
-
-
-
- OVS Dao/VO
-
-
-
- OvsTunnelInterfaceDao
-
-
- OvsTunnelInterfaceVO
-
-
- OvsTunnelNetworkDao
-
-
- OvsTunnelNetworkVO
-
-
-
-
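To make the create_tunnel()/destroy_tunnel() plugin calls concrete, here is an illustrative sketch of the ovs-vsctl invocations they boil down to. This is not the actual XAPI plugin code; the bridge and port names are hypothetical, and only the ovs-vsctl syntax (add-port/del-port with a GRE interface) reflects the real tool.

```python
# Sketch (assumed, not the plugin's real code) of the ovs-vsctl argument
# lists for creating and tearing down a GRE tunnel port on an OVS bridge.
def create_tunnel_cmd(bridge, port, remote_ip, key):
    """Build the ovs-vsctl command that adds a GRE tunnel port keyed by `key`."""
    return ["ovs-vsctl", "add-port", bridge, port,
            "--", "set", "interface", port, "type=gre",
            "options:remote_ip=%s" % remote_ip,
            "options:key=%s" % key]

def destroy_tunnel_cmd(bridge, port):
    """Build the ovs-vsctl command that removes the tunnel port."""
    return ["ovs-vsctl", "del-port", bridge, port]

# Hypothetical names: bridge "xapi1", tunnel port "gre-100", GRE key 100.
cmd = create_tunnel_cmd("xapi1", "gre-100", "10.0.0.2", "100")
print(" ".join(cmd))
```

In the real plugin these argument lists would be handed to something like subprocess.check_call on the host; the GRE key gives each isolated network its own tunnel mesh.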
- Integrate FloodLight as SDN controller
-
- We could deploy the FloodLight server as a new SystemVM that acts like the current SystemVMs: one FloodLight SystemVM per zone, so it can manage the virtual switches in that zone.
-
-
-
- Deliverables
-
- GRE can be used as the isolation method in CloudStack when deploying with Xen/XCP hosts.
-
-
-
- The user sets the sdn.ovs.controller parameter in Global Settings to true, deploys Advanced Networking and chooses GRE as the isolation method.
-
-
- Make use of FloodLight instead of the native SDN controller.
-
-
-
-
- About me
-
- My name is Nguyen Anh Tu, a young and enthusiastic researcher at the Cloud Computing Center - Viettel Research and Development Institute, Vietnam. Since last year, we have built a cloud platform based on CloudStack, starting with version 3.0.2. As a result, some advanced modules were successfully developed, consisting of:
-
-
-
- Encrypt Data Volume for VMs.
-
-
- Dynamic memory allocation for VMs by changing the policy on the Squeeze daemon.
-
-
- AutoScale without using NetScaler.
-
-
- Deploy a new SystemVM type for an Intrusion Detection System.
-
-
-
- Given this working experience and my recent research, I have gained the specific knowledge needed to carry out this project, as follows:
-
-
-
- CloudStack's Java source code: design patterns, the Spring framework.
-
-
- Bash, Python programming.
-
-
- XAPI plugin.
-
-
- XML-RPC.
-
-
- OpenVSwitch on Xen.
-
-
-
- Other knowledge:
-
-
-
- XAPI RRD, XenStore.
-
-
- Ocaml Programming (XAPI functions).
-
-
-
-
diff --git a/docs/en-US/guest-ip-ranges.xml b/docs/en-US/guest-ip-ranges.xml
deleted file mode 100644
index c49dc6a76f8..00000000000
--- a/docs/en-US/guest-ip-ranges.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Guest IP Ranges
- The IP ranges for guest network traffic are set on a per-account basis by the user. This
- allows the users to configure their network in a fashion that will enable VPN linking between
- their guest network and their clients.
- In shared networks in Basic zone and Security Group-enabled Advanced networks, you will have
- the flexibility to add multiple guest IP ranges from different subnets. You can add or remove
- one IP range at a time. For more information, see .
-
diff --git a/docs/en-US/guest-network.xml b/docs/en-US/guest-network.xml
deleted file mode 100644
index 692eb29f525..00000000000
--- a/docs/en-US/guest-network.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Guest Network
- In a &PRODUCT; cloud, guest VMs can communicate with each other using shared infrastructure with the security and user perception that the guests have a private LAN.
- The &PRODUCT; virtual router is the main component providing networking features for guest traffic.
-
diff --git a/docs/en-US/guest-nw-usage-with-traffic-sentinel.xml b/docs/en-US/guest-nw-usage-with-traffic-sentinel.xml
deleted file mode 100644
index d6fc10bca52..00000000000
--- a/docs/en-US/guest-nw-usage-with-traffic-sentinel.xml
+++ /dev/null
@@ -1,72 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Guest Network Usage Integration for Traffic Sentinel
- To collect usage data for a guest network, &PRODUCT; needs to pull the data from an external
- network statistics collector installed on the network. Metering statistics for guest networks
- are available through &PRODUCT;’s integration with inMon Traffic Sentinel.
- Traffic Sentinel is a network traffic usage data collection package. &PRODUCT; can feed
- statistics from Traffic Sentinel into its own usage records, providing a basis for billing users
- of cloud infrastructure. Traffic Sentinel uses the traffic monitoring protocol sFlow. Routers
- and switches generate sFlow records and provide them for collection by Traffic Sentinel, then
- &PRODUCT; queries the Traffic Sentinel database to obtain this information.
- To construct the query, &PRODUCT; determines what guest IPs were in use during the current
- query interval. This includes both newly assigned IPs and IPs that were assigned in a previous
- time period and continued to be in use. &PRODUCT; queries Traffic Sentinel for network
- statistics that apply to these IPs during the time period they remained allocated in &PRODUCT;.
- The returned data is correlated with the customer account that owned each IP and the timestamps
- when IPs were assigned and released in order to create billable metering records in &PRODUCT;.
- When the Usage Server runs, it collects this data.
- To set up the integration between &PRODUCT; and Traffic Sentinel:
-
-
- On your network infrastructure, install Traffic Sentinel and configure it to gather
- traffic data. For installation and configuration steps, see inMon documentation at Traffic Sentinel Documentation.
-
-
- In the Traffic Sentinel UI, configure Traffic Sentinel to accept script querying from
- guest users. &PRODUCT; will be the guest user performing the remote queries to gather
- network usage for one or more IP addresses.
- Click File > Users > Access Control > Reports Query, then select Guest from the
- drop-down list.
-
-
- On &PRODUCT;, add the Traffic Sentinel host by calling the &PRODUCT; API command
- addTrafficMonitor. Pass in the URL of the Traffic Sentinel as protocol + host + port
- (optional); for example, http://10.147.28.100:8080. For the addTrafficMonitor command
- syntax, see the API Reference at API
- Documentation.
- For information about how to call the &PRODUCT; API, see the Developer’s Guide at
-
- &PRODUCT; API Developer's Guide.
-
-
- Log in to the &PRODUCT; UI as administrator.
-
-
- Select Configuration from the Global Settings page, and set the following:
- direct.network.stats.interval: How often you want &PRODUCT; to query Traffic
- Sentinel.
-
-
-
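The addTrafficMonitor step above goes through CloudStack's signed query API. The sketch below illustrates the documented signing scheme (sort the parameters, lowercase, HMAC-SHA1 with the secret key, base64); the management-server hostname and the APIKEY/SECRETKEY values are placeholders, while the Traffic Sentinel URL is the example from the steps above.

```python
# Hedged sketch of a signed addTrafficMonitor call; endpoint and keys are
# placeholders, the signature construction follows CloudStack's query API.
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode

def sign(params, secret):
    """Sort params, lowercase the query string, HMAC-SHA1 it, base64-encode."""
    ordered = sorted((k.lower(), quote(str(v), safe="*")) for k, v in params.items())
    payload = "&".join("%s=%s" % kv for kv in ordered).lower()
    digest = hmac.new(secret.encode(), payload.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

params = {
    "command": "addTrafficMonitor",
    "zoneid": "1",                          # assumed zone id
    "url": "http://10.147.28.100:8080",     # Traffic Sentinel, as in the example
    "apikey": "APIKEY",                     # placeholder credentials
    "response": "json",
}
params["signature"] = sign(params, "SECRETKEY")
request_url = "http://management-server:8080/client/api?" + urlencode(params)
```

An HTTP GET on request_url would then register the Traffic Sentinel host; see the API Reference for the exact addTrafficMonitor parameters.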
diff --git a/docs/en-US/guest-traffic.xml b/docs/en-US/guest-traffic.xml
deleted file mode 100644
index 943073ebc97..00000000000
--- a/docs/en-US/guest-traffic.xml
+++ /dev/null
@@ -1,43 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Guest Traffic
- A network can carry guest traffic only between VMs within one zone. Virtual machines in different zones cannot communicate with each other using their IP addresses; they must communicate with each other by routing through a public IP address.
- See a typical guest traffic setup given below:
-
-
-
-
- guest-traffic-setup.png: Depicts a guest traffic setup
-
- Typically, the Management Server automatically creates a virtual router for each network. A
- virtual router is a special virtual machine that runs on the hosts. Each virtual router in an
- isolated network has three network interfaces. If multiple public VLANs are used, the router
- will have multiple public interfaces. Its eth0 interface serves as the gateway for the guest
- traffic and has the IP address 10.1.1.1. Its eth1 interface is used by the system to configure
- the virtual router. Its eth2 interface is assigned a public IP address for public traffic.
- The virtual router provides DHCP and will automatically assign an IP address for each guest VM within the IP range assigned for the network. The user can manually reconfigure guest VMs to assume different IP addresses.
- Source NAT is automatically configured in the virtual router to forward outbound traffic for all guest VMs.
-
diff --git a/docs/en-US/ha-enabled-vm.xml b/docs/en-US/ha-enabled-vm.xml
deleted file mode 100644
index 19666a4db27..00000000000
--- a/docs/en-US/ha-enabled-vm.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- HA-Enabled Virtual Machines
- The user can specify a virtual machine as HA-enabled. By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. When an HA-enabled VM crashes, &PRODUCT; detects the crash and restarts the VM automatically within the same Availability Zone. HA is never performed across different Availability Zones. &PRODUCT; has a conservative policy towards restarting VMs and ensures that there will never be two instances of the same VM running at the same time. The Management Server attempts to start the VM on another Host in the same cluster.
- HA features work with iSCSI or NFS primary storage. HA with local storage is not supported.
-
diff --git a/docs/en-US/ha-for-hosts.xml b/docs/en-US/ha-for-hosts.xml
deleted file mode 100644
index 15b5fa73f0b..00000000000
--- a/docs/en-US/ha-for-hosts.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- HA for Hosts
- The user can specify a virtual machine as HA-enabled. By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. When an HA-enabled VM crashes, &PRODUCT; detects the crash and restarts the VM automatically within the same Availability Zone. HA is never performed across different Availability Zones. &PRODUCT; has a conservative policy towards restarting VMs and ensures that there will never be two instances of the same VM running at the same time. The Management Server attempts to start the VM on another Host in the same cluster.
- HA features work with iSCSI or NFS primary storage. HA with local storage is not supported.
-
-
diff --git a/docs/en-US/ha-management-server.xml b/docs/en-US/ha-management-server.xml
deleted file mode 100644
index 1afebce3bf3..00000000000
--- a/docs/en-US/ha-management-server.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- HA for Management Server
- The &PRODUCT; Management Server should be deployed in a multi-node configuration such that it is not susceptible to individual server failures. The Management Server itself (as distinct from the MySQL database) is stateless and may be placed behind a load balancer.
- Normal operation of Hosts is not impacted by an outage of all Management Servers. All guest VMs will continue to work.
- When the Management Server is down, no new VMs can be created, and the end user and admin UI, API, dynamic load distribution, and HA will cease to work.
-
diff --git a/docs/en-US/hardware-config-eg.xml b/docs/en-US/hardware-config-eg.xml
deleted file mode 100644
index 3174bfa8576..00000000000
--- a/docs/en-US/hardware-config-eg.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Example Hardware Configuration
- This section contains an example configuration of specific switch models for zone-level
- layer-3 switching. It assumes VLAN management protocols, such as VTP or GVRP, have been
- disabled. The example scripts must be changed appropriately if you choose to use VTP or
- GVRP.
-
-
-
diff --git a/docs/en-US/hardware-firewall.xml b/docs/en-US/hardware-firewall.xml
deleted file mode 100644
index efab3c73806..00000000000
--- a/docs/en-US/hardware-firewall.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Hardware Firewall
- All deployments should have a firewall protecting the management server; see Generic
- Firewall Provisions. Optionally, some deployments may also have a Juniper SRX firewall that will
- be the default gateway for the guest networks; see .
-
-
-
-
-
diff --git a/docs/en-US/health-checks-for-lb-rules.xml b/docs/en-US/health-checks-for-lb-rules.xml
deleted file mode 100644
index 4c7e091c1ce..00000000000
--- a/docs/en-US/health-checks-for-lb-rules.xml
+++ /dev/null
@@ -1,51 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Health Checks for Load Balancer Rules
- (NetScaler load balancer only; requires NetScaler version 10.0)
-
- Health checks are used in load-balanced applications to ensure that requests are forwarded
- only to running, available services.
- When creating a load balancer rule, you can specify a health check policy.
- This is in addition to specifying the
- stickiness policy, algorithm, and other load balancer rule options.
- You can configure one health check policy per load balancer rule.
- Any load balancer rule defined on a NetScaler load balancer in &PRODUCT; can have a health check policy.
- The policy consists of a ping path, thresholds to define "healthy" and "unhealthy" states,
- health check frequency, and timeout wait interval.
- When a health check policy is in effect,
- the load balancer will stop forwarding requests to any resources that are found to be unhealthy.
- If the resource later becomes available again, the periodic health check
- will discover it, and the resource will once again be added to the pool of resources that can
- receive requests from the load balancer.
- At any given time, the most recent result of the health check is displayed in the UI.
- For any VM that is attached to a load balancer rule with a health check configured,
- the state will be shown as UP or DOWN in the UI depending on the result of the most recent health check.
- You can delete or modify existing health check policies.
- To configure how often the health check is performed by default, use the global
- configuration setting healthcheck.update.interval (default value is 600 seconds).
- You can override this value for an individual health check policy.
- For details on how to set a health check policy using the UI, see .
-
diff --git a/docs/en-US/host-add-vsphere.xml b/docs/en-US/host-add-vsphere.xml
deleted file mode 100644
index b47846448d7..00000000000
--- a/docs/en-US/host-add-vsphere.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Adding a Host (vSphere)
- For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to &PRODUCT;. See Add Cluster: vSphere.
-
diff --git a/docs/en-US/host-add-xenserver-kvm-ovm.xml b/docs/en-US/host-add-xenserver-kvm-ovm.xml
deleted file mode 100644
index 91c36aba7f6..00000000000
--- a/docs/en-US/host-add-xenserver-kvm-ovm.xml
+++ /dev/null
@@ -1,157 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Adding a Host (XenServer or KVM)
- XenServer and KVM hosts can be added to a cluster at any time.
-
- Requirements for XenServer and KVM Hosts
-
- Make sure the hypervisor host does not have any VMs already running before you add it to
- &PRODUCT;.
-
- Configuration requirements:
-
-
- Each cluster must contain only hosts with the identical hypervisor.
-
-
- For XenServer, do not put more than 8 hosts in a cluster.
-
-
- For KVM, do not put more than 16 hosts in a cluster.
-
-
- For hardware requirements, see the installation section for your hypervisor in the
- &PRODUCT; Installation Guide.
-
- XenServer Host Additional Requirements
- If network bonding is in use, the administrator must cable the new host identically to
- other hosts in the cluster.
- For all additional hosts to be added to the cluster, run the following command. This
- will cause the host to join the master in a XenServer pool.
- # xe pool-join master-address=[master IP] master-username=root master-password=[your password]
-
- When copying and pasting a command, be sure the command has pasted as a single line
- before executing. Some document viewers may introduce unwanted line breaks in copied
- text.
-
- With all hosts added to the XenServer pool, run the cloud-setup-bond script. This script
- will complete the configuration and setup of the bonds on the new hosts in the
- cluster.
-
-
- Copy the script from the Management Server in
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the
- master host and ensure it is executable.
-
-
- Run the script:
- # ./cloud-setup-bonding.sh
-
-
-
-
- KVM Host Additional Requirements
-
-
- If shared mountpoint storage is in use, the administrator should ensure that the new
- host has all the same mountpoints (with storage mounted) as the other hosts in the
- cluster.
-
-
- Make sure the new host has the same network configuration (guest, private, and
- public network) as other hosts in the cluster.
-
-
- If you are using OpenVswitch bridges edit the file agent.properties on the KVM host
- and set the parameter network.bridge.type to
- openvswitch before adding the host to &PRODUCT;
-
-
-
-
-
-
- Adding a XenServer or KVM Host
-
-
- If you have not already done so, install the hypervisor software on the host. You will
- need to know which version of the hypervisor software is supported by &PRODUCT;
- and what additional configuration is required to ensure the host will work with &PRODUCT;.
- To find these installation details, see the appropriate section for your hypervisor in the
- &PRODUCT; Installation Guide.
-
-
- Log in to the &PRODUCT; UI as administrator.
-
-
- In the left navigation, choose Infrastructure. In Zones, click View More, then click
- the zone in which you want to add the host.
-
-
- Click the Compute tab. In the Clusters node, click View All.
-
-
- Click the cluster where you want to add the host.
-
-
- Click View Hosts.
-
-
- Click Add Host.
-
-
- Provide the following information.
-
-
- Host Name. The DNS name or IP address of the host.
-
-
- Username. Usually root.
-
-
- Password. This is the password for the user from your XenServer or KVM
- install.
-
-
- Host Tags (Optional). Any labels that you use to categorize hosts for ease of
- maintenance. For example, you can set to the cloud's HA tag (set in the ha.tag global
- configuration parameter) if you want this host to be used only for VMs with the "high
- availability" feature enabled. For more information, see HA-Enabled Virtual Machines
- as well as HA for Hosts.
-
-
- There may be a slight delay while the host is provisioned. It should automatically
- display in the UI.
-
-
- Repeat for additional hosts.
-
-
-
-
diff --git a/docs/en-US/host-add.xml b/docs/en-US/host-add.xml
deleted file mode 100644
index 74509d69be7..00000000000
--- a/docs/en-US/host-add.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Adding a Host
-
- Before adding a host to the &PRODUCT; configuration, you must first install your chosen hypervisor on the host. &PRODUCT; can manage hosts running VMs under a variety of hypervisors.
- The &PRODUCT; Installation Guide provides instructions on how to install each supported hypervisor
- and configure it for use with &PRODUCT;. See the appropriate section in the Installation Guide for information about which version of your chosen hypervisor is supported, as well as crucial additional steps to configure the hypervisor hosts for use with &PRODUCT;.
- Be sure you have performed the additional &PRODUCT;-specific configuration steps described in the hypervisor installation section for your particular hypervisor.
-
- Now add the hypervisor host to &PRODUCT;. The technique to use varies depending on the hypervisor.
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/host-allocation.xml b/docs/en-US/host-allocation.xml
deleted file mode 100644
index dddffd553ac..00000000000
--- a/docs/en-US/host-allocation.xml
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Assigning VMs to Hosts
- At any point in time, each virtual machine instance is running on a single host.
- How does &PRODUCT; determine which host to place a VM on? There are several ways:
-
- Automatic default host allocation. &PRODUCT; can automatically pick
- the most appropriate host to run each virtual machine.
- Instance type preferences. &PRODUCT; administrators can specify that certain hosts should have a preference for particular types of guest instances.
- For example, an administrator could state that a host should have a preference to run Windows guests.
- The default host allocator will attempt to place guests of that OS type on such hosts first.
- If no such host is available, the allocator will place the instance wherever there is sufficient physical capacity.
- Vertical and horizontal allocation.
- Vertical allocation consumes all the resources of a given host before allocating any guests on a second host.
- This reduces power consumption in the cloud. Horizontal allocation places a guest on each host in a round-robin fashion.
- This may yield better performance to the guests in some cases.
- End user preferences.
- Users cannot control exactly which host will run a given VM instance,
- but they can specify a zone for the VM.
- &PRODUCT; is then restricted to allocating the VM only to one of the hosts in that zone.
- Host tags. The administrator can assign tags to hosts. These tags can be used to
- specify which host a VM should use.
- The &PRODUCT; administrator decides whether to define host tags, then creates a service offering using those tags and offers it to the user.
-
- Affinity groups.
- By defining affinity groups and assigning VMs to them, the user or administrator can
- influence (but not dictate) which VMs should run on separate hosts.
- This feature is to let users specify that certain VMs won't be on the same host.
- &PRODUCT; also provides a pluggable interface for adding new allocators.
- These custom allocators can provide any policy the administrator desires.
-
-
- Affinity Groups
- By defining affinity groups and assigning VMs to them, the user or administrator can
- influence (but not dictate) which VMs should run on separate hosts.
- This feature is to let users specify that VMs with the same "host anti-affinity" type won't be on the same host.
- This serves to increase fault tolerance.
- If a host fails, another VM offering the same service (for example, hosting the user's website) is still up and running on another host.
- The scope of an affinity group is per user account.
- Creating a New Affinity Group
- To add an affinity group:
-
- Log in to the &PRODUCT; UI as an administrator or user.
- In the left navigation bar, click Affinity Groups.
- Click Add affinity group. In the dialog box, fill in the following fields:
-
- Name. Give the group a name.
- Description. Any desired text to tell more about the purpose of the group.
- Type. The only supported type shipped with &PRODUCT; is Host Anti-Affinity.
- This indicates that the VMs in this group should avoid being placed on the same host as each other.
- If you see other types in this list, it means that your installation of &PRODUCT; has been extended
- with customized affinity group plugins.
-
-
-
- Assign a New VM to an Affinity Group
- To assign a new VM to an affinity group:
-
- Create the VM as usual, as described in .
- In the Add Instance wizard, there is a new Affinity tab where you can select the affinity group.
-
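The same operations are available through the &PRODUCT; API for scripted use. The sketch below uses cloudmonkey; the group name is illustrative, and the placeholder IDs must be replaced with values from your installation:

```
# Create a host anti-affinity group (the name "web-ha" is illustrative)
> create affinitygroup name=web-ha type="host anti-affinity"

# Deploy a new VM into the group (replace the placeholder IDs)
> deploy virtualmachine serviceofferingid=<offering-id> templateid=<template-id> zoneid=<zone-id> affinitygroupnames=web-ha
```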
- Change Affinity Group for an Existing VM
- To assign an existing VM to an affinity group:
-
- Log in to the &PRODUCT; UI as an administrator or user.
- In the left navigation bar, click Instances.
- Click the name of the VM you want to work with.
- Stop the VM by clicking the Stop button.
- Click the Change Affinity button.
-
-
-
-
- change-affinity-button.png: button to assign an affinity group
- to a virtual machine
-
-
-
-
- View Members of an Affinity Group
- To see which VMs are currently assigned to a particular affinity group:
-
- In the left navigation bar, click Affinity Groups.
- Click the name of the group you are interested in.
- Click View Instances. The members of the group are listed.
- From here, you can click the name of any VM in the list to access all its details and controls.
-
- Delete an Affinity Group
- To delete an affinity group:
-
- In the left navigation bar, click Affinity Groups.
- Click the name of the group you are interested in.
- Click Delete.
- Any VM that is a member of the affinity group will be disassociated from the group.
- The former group members will continue to run normally on the current hosts, but if the
- VM is restarted, it will no longer follow the host allocation rules from its former
- affinity group.
-
-
-
diff --git a/docs/en-US/hypervisor-host-install-agent.xml b/docs/en-US/hypervisor-host-install-agent.xml
deleted file mode 100644
index e339165d0da..00000000000
--- a/docs/en-US/hypervisor-host-install-agent.xml
+++ /dev/null
@@ -1,79 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Install and configure the Agent
- To manage KVM instances on the host, &PRODUCT; uses an Agent. This Agent communicates with the Management server and controls all the instances on the host.
- First we start by installing the agent:
- In RHEL or CentOS:
- $ yum install cloudstack-agent
- In Ubuntu:
- $ apt-get install cloudstack-agent
- The host is now ready to be added to a cluster. This is covered in a later section, see . It is recommended that you continue to read the documentation before adding the host!
-
- Configure CPU model for KVM guest (Optional)
- In addition, the &PRODUCT; Agent allows the host administrator to control the guest CPU model which is exposed to KVM instances. By default, the CPU model of a KVM instance is likely QEMU Virtual CPU version x.x.x, with the fewest CPU features exposed. There are a couple of reasons to specify the CPU model:
-
- To maximise performance of instances by exposing new host CPU features to the KVM instances;
- To ensure a consistent default CPU across all machines, removing reliance on variable QEMU defaults;
-
- For the most part it will be sufficient for the host administrator to specify the guest CPU config in the per-host configuration file (/etc/cloudstack/agent/agent.properties). This is achieved with two configuration parameters:
- guest.cpu.mode=custom|host-model|host-passthrough
guest.cpu.model=from /usr/share/libvirt/cpu_map.xml (only valid when guest.cpu.mode=custom)
-
- There are three choices for the CPU model configuration:
-
-
- custom: you can explicitly specify one of the supported named models in /usr/share/libvirt/cpu_map.xml
-
-
- host-model: libvirt will identify the CPU model in /usr/share/libvirt/cpu_map.xml which most closely matches the host, and then request additional CPU flags to complete the match. This should give close to maximum functionality/performance, while maintaining good reliability/compatibility if the guest is migrated to another host with slightly different host CPUs.
-
-
- host-passthrough: libvirt will tell KVM to pass through the host CPU with no modifications. The difference from host-model is that instead of just matching feature flags, every last detail of the host CPU is matched. This gives the absolute best performance, and can be important to some apps which check low-level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to a host with an exactly matching CPU.
-
-
- Here are some examples:
-
-
- custom
- guest.cpu.mode=custom
-guest.cpu.model=SandyBridge
-
-
-
- host-model
- guest.cpu.mode=host-model
-
-
- host-passthrough
- guest.cpu.mode=host-passthrough
-
-
-
- host-passthrough may lead to migration failure. If you have this problem, you should use host-model or custom instead.
-
-
-
-
diff --git a/docs/en-US/hypervisor-host-install-finish.xml b/docs/en-US/hypervisor-host-install-finish.xml
deleted file mode 100644
index ff530c79038..00000000000
--- a/docs/en-US/hypervisor-host-install-finish.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Add the host to CloudStack
- The host is now ready to be added to a cluster. This is covered in a later section, see . It is recommended that you continue to read the documentation before adding the host!
-
diff --git a/docs/en-US/hypervisor-host-install-firewall.xml b/docs/en-US/hypervisor-host-install-firewall.xml
deleted file mode 100644
index c6658731819..00000000000
--- a/docs/en-US/hypervisor-host-install-firewall.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Configuring the firewall
- The hypervisor needs to be able to communicate with other hypervisors and the management server needs to be able to reach the hypervisor.
- In order to do so we have to open the following TCP ports (if you are using a firewall):
-
- 22 (SSH)
- 1798
- 16509 (libvirt)
- 5900 - 6100 (VNC consoles)
- 49152 - 49216 (libvirt live migration)
-
- How to open these ports depends on the firewall you are using. Below you'll find examples of how to open them in RHEL/CentOS and Ubuntu.
-
- Open ports in RHEL/CentOS
- RHEL and CentOS use iptables to firewall the system. You can open the extra ports by executing the following iptables commands:
- $ iptables -I INPUT -p tcp -m tcp --dport 22 -j ACCEPT
- $ iptables -I INPUT -p tcp -m tcp --dport 1798 -j ACCEPT
- $ iptables -I INPUT -p tcp -m tcp --dport 16509 -j ACCEPT
- $ iptables -I INPUT -p tcp -m tcp --dport 5900:6100 -j ACCEPT
- $ iptables -I INPUT -p tcp -m tcp --dport 49152:49216 -j ACCEPT
- These iptables settings are not persistent across reboots, so we have to save them:
- $ iptables-save > /etc/sysconfig/iptables
-
-
- Open ports in Ubuntu
- The default firewall under Ubuntu is UFW (Uncomplicated FireWall), which is a Python wrapper around iptables.
- To open the required ports, execute the following commands:
- $ ufw allow proto tcp from any to any port 22
- $ ufw allow proto tcp from any to any port 1798
- $ ufw allow proto tcp from any to any port 16509
- $ ufw allow proto tcp from any to any port 5900:6100
- $ ufw allow proto tcp from any to any port 49152:49216
- By default UFW is not enabled on Ubuntu. Executing these commands with the firewall disabled does not enable the firewall.
-
-
diff --git a/docs/en-US/hypervisor-host-install-libvirt.xml b/docs/en-US/hypervisor-host-install-libvirt.xml
deleted file mode 100644
index d3d6b9b4e80..00000000000
--- a/docs/en-US/hypervisor-host-install-libvirt.xml
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Install and Configure libvirt
- &PRODUCT; uses libvirt for managing virtual machines. Therefore it is vital that libvirt is configured correctly. Libvirt is a dependency of cloudstack-agent and should already be installed.
-
-
- In order for live migration to work, libvirt has to listen for unsecured TCP connections. We also need to turn off libvirt's attempt to use Multicast DNS advertising. Both of these settings are in /etc/libvirt/libvirtd.conf.
- Set the following parameters:
- listen_tls = 0
- listen_tcp = 1
- tcp_port = "16509"
- auth_tcp = "none"
- mdns_adv = 0
-
-
- Turning on "listen_tcp" in libvirtd.conf is not enough; we have to change the daemon's startup parameters as well:
- On RHEL or CentOS modify /etc/sysconfig/libvirtd:
- Uncomment the following line:
- #LIBVIRTD_ARGS="--listen"
- On Ubuntu: modify /etc/default/libvirt-bin
- Add "-l" to the following line:
- libvirtd_opts="-d"
- so it looks like:
- libvirtd_opts="-d -l"
-
-
- Restart libvirt
- In RHEL or CentOS:
- $ service libvirtd restart
- In Ubuntu:
- $ service libvirt-bin restart
-
-
-
diff --git a/docs/en-US/hypervisor-host-install-network-openvswitch.xml b/docs/en-US/hypervisor-host-install-network-openvswitch.xml
deleted file mode 100644
index a16dc8e0e8d..00000000000
--- a/docs/en-US/hypervisor-host-install-network-openvswitch.xml
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Configure the network using OpenVswitch
- This is a very important section, please make sure you read this thoroughly.
- In order to forward traffic to your instances you will need at least two bridges: public and private.
- By default these bridges are called cloudbr0 and cloudbr1, but you do have to make sure they are available on each hypervisor.
- The most important factor is that you keep the configuration consistent on all your hypervisors.
-
- Preparing
- To make sure that the native bridge module will not interfere with OpenVswitch, the bridge module should be added to the blacklist. See the modprobe documentation for your distribution on where to find the blacklist. Make sure the module is not loaded, either by rebooting or by executing rmmod bridge, before executing the next steps.
- The network configurations below depend on the ifup-ovs and ifdown-ovs scripts which are part of the openvswitch installation. They should be installed in /etc/sysconfig/network-scripts/
-
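As an illustration, on RHEL/CentOS the blacklist entry could be added like this (the filename is an assumption; any .conf file under /etc/modprobe.d/ works):

```
# Keep the native Linux bridge module from loading at boot
$ echo "blacklist bridge" >> /etc/modprobe.d/blacklist-bridge.conf
# Unload the module now if it is already loaded
$ rmmod bridge
```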
-
- Network example
- There are many ways to configure your network. In the Basic networking mode you should have two (V)LANs, one for your private network and one for the public network.
- We assume that the hypervisor has one NIC (eth0) with three tagged VLANs:
-
- VLAN 100 for management of the hypervisor
- VLAN 200 for public network of the instances (cloudbr0)
- VLAN 300 for private network of the instances (cloudbr1)
-
- On VLAN 100 we give the Hypervisor the IP-Address 192.168.42.11/24 with the gateway 192.168.42.1
- The Hypervisor and Management server don't have to be in the same subnet!
-
-
- Configuring the network bridges
- How to configure these depends on the distribution you are using; below you'll find examples for RHEL/CentOS.
- The goal is to have three bridges called 'mgmt0', 'cloudbr0' and 'cloudbr1' after this
- section. This should be used as a guideline only. The exact configuration will
- depend on your network layout.
-
- Configure OpenVswitch
- The network interfaces using OpenVswitch are created using the ovs-vsctl command. This command will configure the interfaces and persist them to the OpenVswitch database.
- First we create a main bridge connected to the eth0 interface. Next we create three fake bridges, each connected to a specific vlan tag.
-
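A sketch of those commands for the example VLAN layout above (bridge names and VLAN tags are taken from the example; adjust them to your own layout):

```
# Main bridge with the physical NIC as trunk port
$ ovs-vsctl add-br cloudbr
$ ovs-vsctl add-port cloudbr eth0
# Three fake bridges, one per VLAN tag
$ ovs-vsctl add-br mgmt0 cloudbr 100
$ ovs-vsctl add-br cloudbr0 cloudbr 200
$ ovs-vsctl add-br cloudbr1 cloudbr 300
```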
-
-
- Configure in RHEL or CentOS
- The required packages were installed when openvswitch and libvirt were installed, so we can proceed to configuring the network.
- First we configure eth0
- vi /etc/sysconfig/network-scripts/ifcfg-eth0
- Make sure it looks similar to:
-
- We have to configure the base bridge with the trunk.
- vi /etc/sysconfig/network-scripts/ifcfg-cloudbr
-
- We now have to configure the three VLAN bridges:
- vi /etc/sysconfig/network-scripts/ifcfg-mgmt0
-
- vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
-
- vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1
-
- With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.
- Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!
-
-
-
diff --git a/docs/en-US/hypervisor-host-install-network.xml b/docs/en-US/hypervisor-host-install-network.xml
deleted file mode 100644
index 80156d9b6a9..00000000000
--- a/docs/en-US/hypervisor-host-install-network.xml
+++ /dev/null
@@ -1,150 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Configure the network bridges
- This is a very important section, please make sure you read this thoroughly.
- This section details how to configure bridges using the native implementation in Linux. Please refer to the next section if you intend to use OpenVswitch
- In order to forward traffic to your instances you will need at least two bridges: public and private.
- By default these bridges are called cloudbr0 and cloudbr1, but you do have to make sure they are available on each hypervisor.
- The most important factor is that you keep the configuration consistent on all your hypervisors.
-
- Network example
- There are many ways to configure your network. In the Basic networking mode you should have two (V)LANs, one for your private network and one for the public network.
- We assume that the hypervisor has one NIC (eth0) with three tagged VLANs:
-
- VLAN 100 for management of the hypervisor
- VLAN 200 for public network of the instances (cloudbr0)
- VLAN 300 for private network of the instances (cloudbr1)
-
- On VLAN 100 we give the Hypervisor the IP-Address 192.168.42.11/24 with the gateway 192.168.42.1
- The Hypervisor and Management server don't have to be in the same subnet!
-
-
- Configuring the network bridges
- How to configure these depends on the distribution you are using; below you'll find examples for RHEL/CentOS and Ubuntu.
- The goal is to have two bridges called 'cloudbr0' and 'cloudbr1' after this section. This should be used as a guideline only. The exact configuration will depend on your network layout.
-
- Configure in RHEL or CentOS
- The required packages were installed when libvirt was installed, so we can proceed to configuring the network.
- First we configure eth0
- vi /etc/sysconfig/network-scripts/ifcfg-eth0
- Make sure it looks similar to:
-
- We now have to configure the three VLAN interfaces:
- vi /etc/sysconfig/network-scripts/ifcfg-eth0.100
-
- vi /etc/sysconfig/network-scripts/ifcfg-eth0.200
-
- vi /etc/sysconfig/network-scripts/ifcfg-eth0.300
-
- Now we have the VLAN interfaces configured we can add the bridges on top of them.
- vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
- Now we just configure it as a plain bridge without an IP address
-
- We do the same for cloudbr1
- vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1
-
- With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.
- Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!
-
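As an illustration only (device names and VLAN tags come from the example network above; your layout may differ), a VLAN interface and the plain bridge on top of it could be configured like this:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0.200 (public VLAN)
DEVICE=eth0.200
ONBOOT=yes
VLAN=yes
BOOTPROTO=none
BRIDGE=cloudbr0

# /etc/sysconfig/network-scripts/ifcfg-cloudbr0 (plain bridge, no IP address)
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
```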
-
- Configure in Ubuntu
- All the required packages were installed when you installed libvirt, so we only have to configure the network.
- vi /etc/network/interfaces
- Modify the interfaces file to look like this:
-
- With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.
- Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!
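As an illustration (addresses and VLAN tags are taken from the example network above), the interfaces file could look like this:

```
auto lo
iface lo inet loopback

# Management VLAN carrying the hypervisor's IP address
auto eth0.100
iface eth0.100 inet static
    address 192.168.42.11
    netmask 255.255.255.0
    gateway 192.168.42.1

# Public bridge on VLAN 200, no IP address
auto cloudbr0
iface cloudbr0 inet manual
    bridge_ports eth0.200

# Private bridge on VLAN 300, no IP address
auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports eth0.300
```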
-
-
-
diff --git a/docs/en-US/hypervisor-host-install-overview.xml b/docs/en-US/hypervisor-host-install-overview.xml
deleted file mode 100644
index 716b43ddf91..00000000000
--- a/docs/en-US/hypervisor-host-install-overview.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- KVM Installation Overview
- If you want to use the Linux Kernel Virtual Machine (KVM) hypervisor to run guest virtual machines, install KVM on the host(s) in your cloud. The material in this section doesn't duplicate KVM installation docs. It provides the &PRODUCT;-specific steps that are needed to prepare a KVM host to work with &PRODUCT;.
- Before continuing, make sure that you have applied the latest updates to your host.
- It is NOT recommended to run services on this host not controlled by &PRODUCT;.
- The procedure for installing a KVM Hypervisor Host is:
-
- Prepare the Operating System
- Install and configure libvirt
- Configure Security Policies (AppArmor and SELinux)
- Install and configure the Agent
-
-
\ No newline at end of file
diff --git a/docs/en-US/hypervisor-host-install-prepare-os.xml b/docs/en-US/hypervisor-host-install-prepare-os.xml
deleted file mode 100644
index 44852f21c2d..00000000000
--- a/docs/en-US/hypervisor-host-install-prepare-os.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Prepare the Operating System
- The OS of the Host must be prepared to host the &PRODUCT; Agent and run KVM instances.
-
- Log in to your OS as root.
-
- Check for a fully qualified hostname.
- $ hostname --fqdn
- This should return a fully qualified hostname such as "kvm1.lab.example.org". If it does not, edit /etc/hosts so that it does.
-
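For example, an /etc/hosts entry that yields a fully qualified hostname (the address is illustrative; the name matches the example above):

```
# /etc/hosts
127.0.0.1       localhost
192.168.42.11   kvm1.lab.example.org   kvm1
```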
-
- Make sure that the machine can reach the Internet.
- $ ping www.cloudstack.org
-
-
- Turn on NTP for time synchronization.
- NTP is required to synchronize the clocks of the servers in your cloud. Unsynchronized clocks can cause unexpected problems.
-
- Install NTP
- On RHEL or CentOS:
- $ yum install ntp
- On Ubuntu:
- $ apt-get install openntpd
-
-
-
- Repeat all of these steps on every hypervisor host.
-
-
\ No newline at end of file
diff --git a/docs/en-US/hypervisor-host-install-security-policies.xml b/docs/en-US/hypervisor-host-install-security-policies.xml
deleted file mode 100644
index 03da04b6eb3..00000000000
--- a/docs/en-US/hypervisor-host-install-security-policies.xml
+++ /dev/null
@@ -1,70 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Configure the Security Policies
- &PRODUCT; does various things which can be blocked by security mechanisms like AppArmor and SELinux. These have to be disabled to ensure the Agent has all the required permissions.
-
-
- Configure SELinux (RHEL and CentOS)
-
-
- Check to see whether SELinux is installed on your machine. If not, you can skip this section.
- In RHEL or CentOS, SELinux is installed and enabled by default. You can verify this with:
- $ rpm -qa | grep selinux
-
-
- Set the SELINUX variable in /etc/selinux/config to "permissive". This ensures that the permissive setting will be maintained after a system reboot.
- In RHEL or CentOS:
- vi /etc/selinux/config
- Change the following line
- SELINUX=enforcing
- to this
- SELINUX=permissive
-
-
- Then set SELinux to permissive starting immediately, without requiring a system reboot.
- $ setenforce permissive
-
-
-
-
- Configure Apparmor (Ubuntu)
-
-
- Check to see whether AppArmor is installed on your machine. If not, you can skip this section.
- In Ubuntu AppArmor is installed and enabled by default. You can verify this with:
- $ dpkg --list 'apparmor'
-
-
- Disable the AppArmor profiles for libvirt
- $ ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
- $ ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
- $ apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
- $ apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper
-
-
-
-
-
\ No newline at end of file
diff --git a/docs/en-US/hypervisor-installation.xml b/docs/en-US/hypervisor-installation.xml
deleted file mode 100644
index 5ee7dea696a..00000000000
--- a/docs/en-US/hypervisor-installation.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Hypervisor Installation
-
-
-
-
-
-
diff --git a/docs/en-US/hypervisor-kvm-install-flow.xml b/docs/en-US/hypervisor-kvm-install-flow.xml
deleted file mode 100644
index aa19e47be77..00000000000
--- a/docs/en-US/hypervisor-kvm-install-flow.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- KVM Hypervisor Host Installation
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/hypervisor-kvm-requirements.xml b/docs/en-US/hypervisor-kvm-requirements.xml
deleted file mode 100644
index cdfc808e490..00000000000
--- a/docs/en-US/hypervisor-kvm-requirements.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- System Requirements for KVM Hypervisor Hosts
- KVM is included with a variety of Linux-based operating systems. Although you are not required to run these distributions, the following are recommended:
-
- CentOS / RHEL: 6.3
- Ubuntu: 12.04(.1)
-
- The main requirement for KVM hypervisors is the libvirt and Qemu version. No matter what
- Linux distribution you are using, make sure the following requirements are met:
-
- libvirt: 0.9.4 or higher
- Qemu/KVM: 1.0 or higher
-
- The default bridge in &PRODUCT; is the Linux native bridge implementation (bridge module). &PRODUCT; includes an option to work with OpenVswitch; the requirements are listed below:
-
- libvirt: 0.9.11 or higher
- openvswitch: 1.7.1 or higher
-
- In addition, the following hardware requirements apply:
-
- Within a single cluster, the hosts must be of the same distribution version.
- All hosts within a cluster must be homogeneous. The CPUs must be of the same type, count, and feature flags.
- Must support HVM (Intel-VT or AMD-V enabled)
- 64-bit x86 CPU (more cores results in better performance)
- 4 GB of memory
- At least 1 NIC
- When you deploy &PRODUCT;, the hypervisor host must not have any VMs already running
-
-
diff --git a/docs/en-US/hypervisor-support-for-primarystorage.xml b/docs/en-US/hypervisor-support-for-primarystorage.xml
deleted file mode 100644
index fdef1f2b6e0..00000000000
--- a/docs/en-US/hypervisor-support-for-primarystorage.xml
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Hypervisor Support for Primary Storage
- The following table shows storage options and parameters for different hypervisors.
-
-
-
-
-
-
-
-
-
-
- VMware vSphere
- Citrix XenServer
- KVM
-
-
-
-
- Format for Disks, Templates, and
- Snapshots
- VMDK
- VHD
- QCOW2
-
-
- iSCSI support
- VMFS
- Clustered LVM
- Yes, via Shared Mountpoint
-
-
- Fibre Channel support
- VMFS
- Yes, via Existing SR
- Yes, via Shared Mountpoint
-
-
- NFS support
- Y
- Y
- Y
-
-
- Local storage support
- Y
- Y
- Y
-
-
- Storage over-provisioning
- NFS and iSCSI
- NFS
- NFS
-
-
-
-
- XenServer uses a clustered LVM system to store VM images on iSCSI and Fibre Channel volumes
- and does not support over-provisioning in the hypervisor. The storage server itself, however,
- can support thin-provisioning. As a result, &PRODUCT; can still support storage
- over-provisioning by running on thin-provisioned storage volumes.
- KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file system path local to
- each server in a given cluster. The path must be the same across all Hosts in the cluster, for
- example /mnt/primary1. This shared mountpoint is assumed to be a clustered filesystem such as
- OCFS2. In this case &PRODUCT; does not attempt to mount or unmount the storage as is done
- with NFS. &PRODUCT; requires that the administrator ensure that the storage is
- available.
-
- With NFS storage, &PRODUCT; manages the overprovisioning. In this case the global
- configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning.
- This is independent of hypervisor type.
- Local storage is an option for primary storage for vSphere, XenServer, and KVM. When the
- local disk option is enabled, a local disk storage pool is automatically created on each host.
- To use local storage for the System Virtual Machines (such as the Virtual Router), set
- system.vm.use.local.storage to true in global configuration.
- &PRODUCT; supports multiple primary storage pools in a Cluster. For example, you could
- provision 2 NFS servers in primary storage. Or you could provision 1 iSCSI LUN initially and
- then add a second iSCSI LUN when the first approaches capacity.
-
diff --git a/docs/en-US/images/1000-foot-view.png b/docs/en-US/images/1000-foot-view.png
deleted file mode 100644
index 2fe3c1658b3..00000000000
Binary files a/docs/en-US/images/1000-foot-view.png and /dev/null differ
diff --git a/docs/en-US/images/DevCloud-hostonly.png b/docs/en-US/images/DevCloud-hostonly.png
deleted file mode 100644
index 111f93ac700..00000000000
Binary files a/docs/en-US/images/DevCloud-hostonly.png and /dev/null differ
diff --git a/docs/en-US/images/DevCloud.png b/docs/en-US/images/DevCloud.png
deleted file mode 100644
index 5e83ca946c7..00000000000
Binary files a/docs/en-US/images/DevCloud.png and /dev/null differ
diff --git a/docs/en-US/images/VMSnapshotButton.png b/docs/en-US/images/VMSnapshotButton.png
deleted file mode 100644
index 52177402198..00000000000
Binary files a/docs/en-US/images/VMSnapshotButton.png and /dev/null differ
diff --git a/docs/en-US/images/Workloads.png b/docs/en-US/images/Workloads.png
deleted file mode 100644
index 9282f57b344..00000000000
Binary files a/docs/en-US/images/Workloads.png and /dev/null differ
diff --git a/docs/en-US/images/add-account-screen.png b/docs/en-US/images/add-account-screen.png
deleted file mode 100644
index aaa798f6766..00000000000
Binary files a/docs/en-US/images/add-account-screen.png and /dev/null differ
diff --git a/docs/en-US/images/add-cluster.png b/docs/en-US/images/add-cluster.png
deleted file mode 100644
index 26ae3fd298e..00000000000
Binary files a/docs/en-US/images/add-cluster.png and /dev/null differ
diff --git a/docs/en-US/images/add-gateway.png b/docs/en-US/images/add-gateway.png
deleted file mode 100644
index da8eed955f5..00000000000
Binary files a/docs/en-US/images/add-gateway.png and /dev/null differ
diff --git a/docs/en-US/images/add-gslb.png b/docs/en-US/images/add-gslb.png
deleted file mode 100644
index 827a913093b..00000000000
Binary files a/docs/en-US/images/add-gslb.png and /dev/null differ
diff --git a/docs/en-US/images/add-guest-network.png b/docs/en-US/images/add-guest-network.png
deleted file mode 100644
index b22181e3b22..00000000000
Binary files a/docs/en-US/images/add-guest-network.png and /dev/null differ
diff --git a/docs/en-US/images/add-ip-range.png b/docs/en-US/images/add-ip-range.png
deleted file mode 100644
index 9f4d9d48ef9..00000000000
Binary files a/docs/en-US/images/add-ip-range.png and /dev/null differ
diff --git a/docs/en-US/images/add-ldap-configuration-ad.png b/docs/en-US/images/add-ldap-configuration-ad.png
deleted file mode 100644
index d4d3e789b29..00000000000
Binary files a/docs/en-US/images/add-ldap-configuration-ad.png and /dev/null differ
diff --git a/docs/en-US/images/add-ldap-configuration-failure.png b/docs/en-US/images/add-ldap-configuration-failure.png
deleted file mode 100644
index 312a1d6d61b..00000000000
Binary files a/docs/en-US/images/add-ldap-configuration-failure.png and /dev/null differ
diff --git a/docs/en-US/images/add-ldap-configuration-openldap.png b/docs/en-US/images/add-ldap-configuration-openldap.png
deleted file mode 100644
index 70ce579f87c..00000000000
Binary files a/docs/en-US/images/add-ldap-configuration-openldap.png and /dev/null differ
diff --git a/docs/en-US/images/add-ldap-configuration.png b/docs/en-US/images/add-ldap-configuration.png
deleted file mode 100644
index e43cbafb81c..00000000000
Binary files a/docs/en-US/images/add-ldap-configuration.png and /dev/null differ
diff --git a/docs/en-US/images/add-new-gateway-vpc.png b/docs/en-US/images/add-new-gateway-vpc.png
deleted file mode 100644
index 5145622a2f4..00000000000
Binary files a/docs/en-US/images/add-new-gateway-vpc.png and /dev/null differ
diff --git a/docs/en-US/images/add-tier.png b/docs/en-US/images/add-tier.png
deleted file mode 100644
index 0994dbd0a5a..00000000000
Binary files a/docs/en-US/images/add-tier.png and /dev/null differ
diff --git a/docs/en-US/images/add-vlan-icon.png b/docs/en-US/images/add-vlan-icon.png
deleted file mode 100644
index 04655dc37ad..00000000000
Binary files a/docs/en-US/images/add-vlan-icon.png and /dev/null differ
diff --git a/docs/en-US/images/add-vm-vpc.png b/docs/en-US/images/add-vm-vpc.png
deleted file mode 100644
index b2821a69156..00000000000
Binary files a/docs/en-US/images/add-vm-vpc.png and /dev/null differ
diff --git a/docs/en-US/images/add-vpc.png b/docs/en-US/images/add-vpc.png
deleted file mode 100644
index f3348623416..00000000000
Binary files a/docs/en-US/images/add-vpc.png and /dev/null differ
diff --git a/docs/en-US/images/add-vpn-customer-gateway.png b/docs/en-US/images/add-vpn-customer-gateway.png
deleted file mode 100644
index fdc3177e9eb..00000000000
Binary files a/docs/en-US/images/add-vpn-customer-gateway.png and /dev/null differ
diff --git a/docs/en-US/images/addAccount-icon.png b/docs/en-US/images/addAccount-icon.png
deleted file mode 100644
index 4743dbef2cf..00000000000
Binary files a/docs/en-US/images/addAccount-icon.png and /dev/null differ
diff --git a/docs/en-US/images/addvm-tier-sharednw.png b/docs/en-US/images/addvm-tier-sharednw.png
deleted file mode 100644
index e60205f7219..00000000000
Binary files a/docs/en-US/images/addvm-tier-sharednw.png and /dev/null differ
diff --git a/docs/en-US/images/async-calls.png b/docs/en-US/images/async-calls.png
deleted file mode 100644
index e24eee79beb..00000000000
Binary files a/docs/en-US/images/async-calls.png and /dev/null differ
diff --git a/docs/en-US/images/attach-disk-icon.png b/docs/en-US/images/attach-disk-icon.png
deleted file mode 100644
index 5e81d04fda2..00000000000
Binary files a/docs/en-US/images/attach-disk-icon.png and /dev/null differ
diff --git a/docs/en-US/images/autoscale-config.png b/docs/en-US/images/autoscale-config.png
deleted file mode 100644
index 735ae961f81..00000000000
Binary files a/docs/en-US/images/autoscale-config.png and /dev/null differ
diff --git a/docs/en-US/images/basic-deployment.png b/docs/en-US/images/basic-deployment.png
deleted file mode 100644
index 894a05327bf..00000000000
Binary files a/docs/en-US/images/basic-deployment.png and /dev/null differ
diff --git a/docs/en-US/images/change-admin-password.png b/docs/en-US/images/change-admin-password.png
deleted file mode 100644
index 938e8616a35..00000000000
Binary files a/docs/en-US/images/change-admin-password.png and /dev/null differ
diff --git a/docs/en-US/images/change-affinity-button.png b/docs/en-US/images/change-affinity-button.png
deleted file mode 100644
index c21ef758dc2..00000000000
Binary files a/docs/en-US/images/change-affinity-button.png and /dev/null differ
diff --git a/docs/en-US/images/change-password.png b/docs/en-US/images/change-password.png
deleted file mode 100644
index fbb203a5e25..00000000000
Binary files a/docs/en-US/images/change-password.png and /dev/null differ
diff --git a/docs/en-US/images/change-service-icon.png b/docs/en-US/images/change-service-icon.png
deleted file mode 100644
index 780e235f2f5..00000000000
Binary files a/docs/en-US/images/change-service-icon.png and /dev/null differ
diff --git a/docs/en-US/images/cluster-overview.png b/docs/en-US/images/cluster-overview.png
deleted file mode 100644
index 18a86c39afe..00000000000
Binary files a/docs/en-US/images/cluster-overview.png and /dev/null differ
diff --git a/docs/en-US/images/clusterDefinition.png b/docs/en-US/images/clusterDefinition.png
deleted file mode 100644
index 6170f9fb6ae..00000000000
Binary files a/docs/en-US/images/clusterDefinition.png and /dev/null differ
diff --git a/docs/en-US/images/compute-service-offerings.png b/docs/en-US/images/compute-service-offerings.png
deleted file mode 100644
index 88eb6f80597..00000000000
Binary files a/docs/en-US/images/compute-service-offerings.png and /dev/null differ
diff --git a/docs/en-US/images/configuration-edit-success.png b/docs/en-US/images/configuration-edit-success.png
deleted file mode 100644
index 2e21dc129a4..00000000000
Binary files a/docs/en-US/images/configuration-edit-success.png and /dev/null differ
diff --git a/docs/en-US/images/configurations-screen.png b/docs/en-US/images/configurations-screen.png
deleted file mode 100644
index 54586086c4c..00000000000
Binary files a/docs/en-US/images/configurations-screen.png and /dev/null differ
diff --git a/docs/en-US/images/console-icon.png b/docs/en-US/images/console-icon.png
deleted file mode 100644
index bf288869745..00000000000
Binary files a/docs/en-US/images/console-icon.png and /dev/null differ
diff --git a/docs/en-US/images/create-account-post.png b/docs/en-US/images/create-account-post.png
deleted file mode 100644
index ea5ce3feb7d..00000000000
Binary files a/docs/en-US/images/create-account-post.png and /dev/null differ
diff --git a/docs/en-US/images/create-account-request.png b/docs/en-US/images/create-account-request.png
deleted file mode 100644
index b36d1ff557a..00000000000
Binary files a/docs/en-US/images/create-account-request.png and /dev/null differ
diff --git a/docs/en-US/images/create-vpn-connection.png b/docs/en-US/images/create-vpn-connection.png
deleted file mode 100644
index cd5515f53c7..00000000000
Binary files a/docs/en-US/images/create-vpn-connection.png and /dev/null differ
diff --git a/docs/en-US/images/dedicate-resource-button.png b/docs/en-US/images/dedicate-resource-button.png
deleted file mode 100644
index 0ac38e00eca..00000000000
Binary files a/docs/en-US/images/dedicate-resource-button.png and /dev/null differ
diff --git a/docs/en-US/images/del-tier.png b/docs/en-US/images/del-tier.png
deleted file mode 100644
index aa9846cfd9b..00000000000
Binary files a/docs/en-US/images/del-tier.png and /dev/null differ
diff --git a/docs/en-US/images/delete-button.png b/docs/en-US/images/delete-button.png
deleted file mode 100644
index 27145cebbc7..00000000000
Binary files a/docs/en-US/images/delete-button.png and /dev/null differ
diff --git a/docs/en-US/images/delete-ldap-configuration-failure.png b/docs/en-US/images/delete-ldap-configuration-failure.png
deleted file mode 100644
index 2b7bfe525cf..00000000000
Binary files a/docs/en-US/images/delete-ldap-configuration-failure.png and /dev/null differ
diff --git a/docs/en-US/images/delete-ldap-configuration.png b/docs/en-US/images/delete-ldap-configuration.png
deleted file mode 100644
index c2f6c4695fb..00000000000
Binary files a/docs/en-US/images/delete-ldap-configuration.png and /dev/null differ
diff --git a/docs/en-US/images/delete-ldap.png b/docs/en-US/images/delete-ldap.png
deleted file mode 100644
index c97bb4c47c3..00000000000
Binary files a/docs/en-US/images/delete-ldap.png and /dev/null differ
diff --git a/docs/en-US/images/destroy-instance.png b/docs/en-US/images/destroy-instance.png
deleted file mode 100644
index aa9846cfd9b..00000000000
Binary files a/docs/en-US/images/destroy-instance.png and /dev/null differ
diff --git a/docs/en-US/images/detach-disk-icon.png b/docs/en-US/images/detach-disk-icon.png
deleted file mode 100644
index 536a4f8d001..00000000000
Binary files a/docs/en-US/images/detach-disk-icon.png and /dev/null differ
diff --git a/docs/en-US/images/dvswitch-config.png b/docs/en-US/images/dvswitch-config.png
deleted file mode 100644
index edce6e8b90e..00000000000
Binary files a/docs/en-US/images/dvswitch-config.png and /dev/null differ
diff --git a/docs/en-US/images/dvswitchconfig.png b/docs/en-US/images/dvswitchconfig.png
deleted file mode 100644
index 55b1ef7daf3..00000000000
Binary files a/docs/en-US/images/dvswitchconfig.png and /dev/null differ
diff --git a/docs/en-US/images/ec2-s3-configuration.png b/docs/en-US/images/ec2-s3-configuration.png
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/docs/en-US/images/edit-configuration.png b/docs/en-US/images/edit-configuration.png
deleted file mode 100644
index 43874bf46e3..00000000000
Binary files a/docs/en-US/images/edit-configuration.png and /dev/null differ
diff --git a/docs/en-US/images/edit-icon.png b/docs/en-US/images/edit-icon.png
deleted file mode 100644
index 42417e278d3..00000000000
Binary files a/docs/en-US/images/edit-icon.png and /dev/null differ
diff --git a/docs/en-US/images/edit-traffic-type.png b/docs/en-US/images/edit-traffic-type.png
deleted file mode 100644
index 16cda947fdb..00000000000
Binary files a/docs/en-US/images/edit-traffic-type.png and /dev/null differ
diff --git a/docs/en-US/images/egress-firewall-rule.png b/docs/en-US/images/egress-firewall-rule.png
deleted file mode 100644
index fa1d8ecd0bd..00000000000
Binary files a/docs/en-US/images/egress-firewall-rule.png and /dev/null differ
diff --git a/docs/en-US/images/eip-ns-basiczone.png b/docs/en-US/images/eip-ns-basiczone.png
deleted file mode 100644
index bc88570531a..00000000000
Binary files a/docs/en-US/images/eip-ns-basiczone.png and /dev/null differ
diff --git a/docs/en-US/images/enable-disable-autoscale.png b/docs/en-US/images/enable-disable-autoscale.png
deleted file mode 100644
index ee02ef21c69..00000000000
Binary files a/docs/en-US/images/enable-disable-autoscale.png and /dev/null differ
diff --git a/docs/en-US/images/enable-disable.png b/docs/en-US/images/enable-disable.png
deleted file mode 100644
index cab31ae3d59..00000000000
Binary files a/docs/en-US/images/enable-disable.png and /dev/null differ
diff --git a/docs/en-US/images/gslb.png b/docs/en-US/images/gslb.png
deleted file mode 100644
index f0a04db45e1..00000000000
Binary files a/docs/en-US/images/gslb.png and /dev/null differ
diff --git a/docs/en-US/images/guest-traffic-setup.png b/docs/en-US/images/guest-traffic-setup.png
deleted file mode 100644
index 52508194ac1..00000000000
Binary files a/docs/en-US/images/guest-traffic-setup.png and /dev/null differ
diff --git a/docs/en-US/images/http-access.png b/docs/en-US/images/http-access.png
deleted file mode 100644
index 817f197985a..00000000000
Binary files a/docs/en-US/images/http-access.png and /dev/null differ
diff --git a/docs/en-US/images/icon.svg b/docs/en-US/images/icon.svg
deleted file mode 100644
index 37f94c06c1b..00000000000
--- a/docs/en-US/images/icon.svg
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-
diff --git a/docs/en-US/images/infrastructure-overview.png b/docs/en-US/images/infrastructure-overview.png
deleted file mode 100644
index 24aeecfcd1e..00000000000
Binary files a/docs/en-US/images/infrastructure-overview.png and /dev/null differ
diff --git a/docs/en-US/images/installation-complete.png b/docs/en-US/images/installation-complete.png
deleted file mode 100644
index 4626f86d133..00000000000
Binary files a/docs/en-US/images/installation-complete.png and /dev/null differ
diff --git a/docs/en-US/images/instances-screen.png b/docs/en-US/images/instances-screen.png
deleted file mode 100644
index 74a1f08e43d..00000000000
Binary files a/docs/en-US/images/instances-screen.png and /dev/null differ
diff --git a/docs/en-US/images/iso-icon.png b/docs/en-US/images/iso-icon.png
deleted file mode 100644
index 8d547fb397e..00000000000
Binary files a/docs/en-US/images/iso-icon.png and /dev/null differ
diff --git a/docs/en-US/images/jenkins-pipeline.png b/docs/en-US/images/jenkins-pipeline.png
deleted file mode 100644
index 0788c26a485..00000000000
Binary files a/docs/en-US/images/jenkins-pipeline.png and /dev/null differ
diff --git a/docs/en-US/images/l3_services.png b/docs/en-US/images/l3_services.png
deleted file mode 100644
index f68aaf33745..00000000000
Binary files a/docs/en-US/images/l3_services.png and /dev/null differ
diff --git a/docs/en-US/images/large-scale-redundant-setup.png b/docs/en-US/images/large-scale-redundant-setup.png
deleted file mode 100644
index 5d2581afb43..00000000000
Binary files a/docs/en-US/images/large-scale-redundant-setup.png and /dev/null differ
diff --git a/docs/en-US/images/launchHadoopClusterApi.png b/docs/en-US/images/launchHadoopClusterApi.png
deleted file mode 100644
index 6f94c744d02..00000000000
Binary files a/docs/en-US/images/launchHadoopClusterApi.png and /dev/null differ
diff --git a/docs/en-US/images/launchHadoopClusterCmd.png b/docs/en-US/images/launchHadoopClusterCmd.png
deleted file mode 100644
index 66a0c75ed64..00000000000
Binary files a/docs/en-US/images/launchHadoopClusterCmd.png and /dev/null differ
diff --git a/docs/en-US/images/ldap-account-addition.png b/docs/en-US/images/ldap-account-addition.png
deleted file mode 100644
index 0c8573ff9c9..00000000000
Binary files a/docs/en-US/images/ldap-account-addition.png and /dev/null differ
diff --git a/docs/en-US/images/ldap-configuration.png b/docs/en-US/images/ldap-configuration.png
deleted file mode 100644
index c840e597e1b..00000000000
Binary files a/docs/en-US/images/ldap-configuration.png and /dev/null differ
diff --git a/docs/en-US/images/ldap-global-settings.png b/docs/en-US/images/ldap-global-settings.png
deleted file mode 100644
index 0567de84374..00000000000
Binary files a/docs/en-US/images/ldap-global-settings.png and /dev/null differ
diff --git a/docs/en-US/images/ldap-list-users.png b/docs/en-US/images/ldap-list-users.png
deleted file mode 100644
index 8dabbb88663..00000000000
Binary files a/docs/en-US/images/ldap-list-users.png and /dev/null differ
diff --git a/docs/en-US/images/list-domain-vms.png b/docs/en-US/images/list-domain-vms.png
deleted file mode 100644
index 1717f559e12..00000000000
Binary files a/docs/en-US/images/list-domain-vms.png and /dev/null differ
diff --git a/docs/en-US/images/list-ldap-configuration.png b/docs/en-US/images/list-ldap-configuration.png
deleted file mode 100644
index 6bf778893dc..00000000000
Binary files a/docs/en-US/images/list-ldap-configuration.png and /dev/null differ
diff --git a/docs/en-US/images/list-specific-vm.png b/docs/en-US/images/list-specific-vm.png
deleted file mode 100644
index 4fa1da451d5..00000000000
Binary files a/docs/en-US/images/list-specific-vm.png and /dev/null differ
diff --git a/docs/en-US/images/list-virtualmachines.png b/docs/en-US/images/list-virtualmachines.png
deleted file mode 100644
index cd9401eed5a..00000000000
Binary files a/docs/en-US/images/list-virtualmachines.png and /dev/null differ
diff --git a/docs/en-US/images/mesos-integration-arch.jpg b/docs/en-US/images/mesos-integration-arch.jpg
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/docs/en-US/images/migrate-instance.png b/docs/en-US/images/migrate-instance.png
deleted file mode 100644
index 25ff57245b3..00000000000
Binary files a/docs/en-US/images/migrate-instance.png and /dev/null differ
diff --git a/docs/en-US/images/multi-node-management-server.png b/docs/en-US/images/multi-node-management-server.png
deleted file mode 100644
index 5cf5ed5456f..00000000000
Binary files a/docs/en-US/images/multi-node-management-server.png and /dev/null differ
diff --git a/docs/en-US/images/multi-site-deployment.png b/docs/en-US/images/multi-site-deployment.png
deleted file mode 100644
index f3ae5bb6b5c..00000000000
Binary files a/docs/en-US/images/multi-site-deployment.png and /dev/null differ
diff --git a/docs/en-US/images/multi-tier-app.png b/docs/en-US/images/multi-tier-app.png
deleted file mode 100644
index cec11228e26..00000000000
Binary files a/docs/en-US/images/multi-tier-app.png and /dev/null differ
diff --git a/docs/en-US/images/network-acl.png b/docs/en-US/images/network-acl.png
deleted file mode 100644
index 5602827f415..00000000000
Binary files a/docs/en-US/images/network-acl.png and /dev/null differ
diff --git a/docs/en-US/images/network-setup-zone.png b/docs/en-US/images/network-setup-zone.png
deleted file mode 100644
index 8324ff8beaa..00000000000
Binary files a/docs/en-US/images/network-setup-zone.png and /dev/null differ
diff --git a/docs/en-US/images/network-singlepod.png b/docs/en-US/images/network-singlepod.png
deleted file mode 100644
index e1214ea7f69..00000000000
Binary files a/docs/en-US/images/network-singlepod.png and /dev/null differ
diff --git a/docs/en-US/images/network_service.png b/docs/en-US/images/network_service.png
deleted file mode 100644
index 95281aa2daa..00000000000
Binary files a/docs/en-US/images/network_service.png and /dev/null differ
diff --git a/docs/en-US/images/networking-in-a-pod.png b/docs/en-US/images/networking-in-a-pod.png
deleted file mode 100644
index bf731712042..00000000000
Binary files a/docs/en-US/images/networking-in-a-pod.png and /dev/null differ
diff --git a/docs/en-US/images/networking-in-a-zone.png b/docs/en-US/images/networking-in-a-zone.png
deleted file mode 100644
index fb740da448e..00000000000
Binary files a/docs/en-US/images/networking-in-a-zone.png and /dev/null differ
diff --git a/docs/en-US/images/nic-bonding-and-multipath-io.png b/docs/en-US/images/nic-bonding-and-multipath-io.png
deleted file mode 100644
index 0fe60b66ed6..00000000000
Binary files a/docs/en-US/images/nic-bonding-and-multipath-io.png and /dev/null differ
diff --git a/docs/en-US/images/nvp-add-controller.png b/docs/en-US/images/nvp-add-controller.png
deleted file mode 100644
index e02d31f0a37..00000000000
Binary files a/docs/en-US/images/nvp-add-controller.png and /dev/null differ
diff --git a/docs/en-US/images/nvp-enable-provider.png b/docs/en-US/images/nvp-enable-provider.png
deleted file mode 100644
index 0f2d02ddfa9..00000000000
Binary files a/docs/en-US/images/nvp-enable-provider.png and /dev/null differ
diff --git a/docs/en-US/images/nvp-network-offering.png b/docs/en-US/images/nvp-network-offering.png
deleted file mode 100644
index c2d25c48c19..00000000000
Binary files a/docs/en-US/images/nvp-network-offering.png and /dev/null differ
diff --git a/docs/en-US/images/nvp-physical-network-stt.png b/docs/en-US/images/nvp-physical-network-stt.png
deleted file mode 100644
index 2ce7853ac54..00000000000
Binary files a/docs/en-US/images/nvp-physical-network-stt.png and /dev/null differ
diff --git a/docs/en-US/images/nvp-vpc-offering-edit.png b/docs/en-US/images/nvp-vpc-offering-edit.png
deleted file mode 100644
index ff235e24cd6..00000000000
Binary files a/docs/en-US/images/nvp-vpc-offering-edit.png and /dev/null differ
diff --git a/docs/en-US/images/odl_structure.jpg b/docs/en-US/images/odl_structure.jpg
deleted file mode 100644
index 08e0012f56b..00000000000
Binary files a/docs/en-US/images/odl_structure.jpg and /dev/null differ
diff --git a/docs/en-US/images/parallel-mode.png b/docs/en-US/images/parallel-mode.png
deleted file mode 100644
index 3b67a17af9d..00000000000
Binary files a/docs/en-US/images/parallel-mode.png and /dev/null differ
diff --git a/docs/en-US/images/plugin1.jpg b/docs/en-US/images/plugin1.jpg
deleted file mode 100644
index 970233d8475..00000000000
Binary files a/docs/en-US/images/plugin1.jpg and /dev/null differ
diff --git a/docs/en-US/images/plugin2.jpg b/docs/en-US/images/plugin2.jpg
deleted file mode 100644
index 9c8a6107ba9..00000000000
Binary files a/docs/en-US/images/plugin2.jpg and /dev/null differ
diff --git a/docs/en-US/images/plugin3.jpg b/docs/en-US/images/plugin3.jpg
deleted file mode 100644
index 07fae790e22..00000000000
Binary files a/docs/en-US/images/plugin3.jpg and /dev/null differ
diff --git a/docs/en-US/images/plugin4.jpg b/docs/en-US/images/plugin4.jpg
deleted file mode 100644
index 2bcec9f773a..00000000000
Binary files a/docs/en-US/images/plugin4.jpg and /dev/null differ
diff --git a/docs/en-US/images/plugin_intro.jpg b/docs/en-US/images/plugin_intro.jpg
deleted file mode 100644
index 113ffb32781..00000000000
Binary files a/docs/en-US/images/plugin_intro.jpg and /dev/null differ
diff --git a/docs/en-US/images/pod-overview.png b/docs/en-US/images/pod-overview.png
deleted file mode 100644
index c180060ba48..00000000000
Binary files a/docs/en-US/images/pod-overview.png and /dev/null differ
diff --git a/docs/en-US/images/provisioning-overview.png b/docs/en-US/images/provisioning-overview.png
deleted file mode 100644
index 25cc97e3557..00000000000
Binary files a/docs/en-US/images/provisioning-overview.png and /dev/null differ
diff --git a/docs/en-US/images/region-overview.png b/docs/en-US/images/region-overview.png
deleted file mode 100644
index 528445c9d89..00000000000
Binary files a/docs/en-US/images/region-overview.png and /dev/null differ
diff --git a/docs/en-US/images/release-ip-icon.png b/docs/en-US/images/release-ip-icon.png
deleted file mode 100644
index aa9846cfd9b..00000000000
Binary files a/docs/en-US/images/release-ip-icon.png and /dev/null differ
diff --git a/docs/en-US/images/remove-nic.png b/docs/en-US/images/remove-nic.png
deleted file mode 100644
index 27145cebbc7..00000000000
Binary files a/docs/en-US/images/remove-nic.png and /dev/null differ
diff --git a/docs/en-US/images/remove-tier.png b/docs/en-US/images/remove-tier.png
deleted file mode 100644
index e14d08f8052..00000000000
Binary files a/docs/en-US/images/remove-tier.png and /dev/null differ
diff --git a/docs/en-US/images/remove-vpc.png b/docs/en-US/images/remove-vpc.png
deleted file mode 100644
index aa9846cfd9b..00000000000
Binary files a/docs/en-US/images/remove-vpc.png and /dev/null differ
diff --git a/docs/en-US/images/remove-vpn.png b/docs/en-US/images/remove-vpn.png
deleted file mode 100644
index 27145cebbc7..00000000000
Binary files a/docs/en-US/images/remove-vpn.png and /dev/null differ
diff --git a/docs/en-US/images/replace-acl-icon.png b/docs/en-US/images/replace-acl-icon.png
deleted file mode 100644
index ae953ba2032..00000000000
Binary files a/docs/en-US/images/replace-acl-icon.png and /dev/null differ
diff --git a/docs/en-US/images/replace-acl-list.png b/docs/en-US/images/replace-acl-list.png
deleted file mode 100644
index 33750173b18..00000000000
Binary files a/docs/en-US/images/replace-acl-list.png and /dev/null differ
diff --git a/docs/en-US/images/reset-vpn.png b/docs/en-US/images/reset-vpn.png
deleted file mode 100644
index 04655dc37ad..00000000000
Binary files a/docs/en-US/images/reset-vpn.png and /dev/null differ
diff --git a/docs/en-US/images/resize-volume-icon.png b/docs/en-US/images/resize-volume-icon.png
deleted file mode 100644
index 48499021f06..00000000000
Binary files a/docs/en-US/images/resize-volume-icon.png and /dev/null differ
diff --git a/docs/en-US/images/resize-volume.png b/docs/en-US/images/resize-volume.png
deleted file mode 100644
index 6195623ab49..00000000000
Binary files a/docs/en-US/images/resize-volume.png and /dev/null differ
diff --git a/docs/en-US/images/restart-vpc.png b/docs/en-US/images/restart-vpc.png
deleted file mode 100644
index 04655dc37ad..00000000000
Binary files a/docs/en-US/images/restart-vpc.png and /dev/null differ
diff --git a/docs/en-US/images/revert-vm.png b/docs/en-US/images/revert-vm.png
deleted file mode 100644
index 04655dc37ad..00000000000
Binary files a/docs/en-US/images/revert-vm.png and /dev/null differ
diff --git a/docs/en-US/images/search-button.png b/docs/en-US/images/search-button.png
deleted file mode 100644
index f329aef4a25..00000000000
Binary files a/docs/en-US/images/search-button.png and /dev/null differ
diff --git a/docs/en-US/images/select-vm-staticnat-vpc.png b/docs/en-US/images/select-vm-staticnat-vpc.png
deleted file mode 100644
index 12fde26d883..00000000000
Binary files a/docs/en-US/images/select-vm-staticnat-vpc.png and /dev/null differ
diff --git a/docs/en-US/images/separate-storage-network.png b/docs/en-US/images/separate-storage-network.png
deleted file mode 100644
index 24dbbefc5b4..00000000000
Binary files a/docs/en-US/images/separate-storage-network.png and /dev/null differ
diff --git a/docs/en-US/images/set-default-nic.png b/docs/en-US/images/set-default-nic.png
deleted file mode 100644
index f329aef4a25..00000000000
Binary files a/docs/en-US/images/set-default-nic.png and /dev/null differ
diff --git a/docs/en-US/images/small-scale-deployment.png b/docs/en-US/images/small-scale-deployment.png
deleted file mode 100644
index 1c88520e7b4..00000000000
Binary files a/docs/en-US/images/small-scale-deployment.png and /dev/null differ
diff --git a/docs/en-US/images/software-license.png b/docs/en-US/images/software-license.png
deleted file mode 100644
index 67aa2555341..00000000000
Binary files a/docs/en-US/images/software-license.png and /dev/null differ
diff --git a/docs/en-US/images/start-vm-screen.png b/docs/en-US/images/start-vm-screen.png
deleted file mode 100644
index 75a604a7a0e..00000000000
Binary files a/docs/en-US/images/start-vm-screen.png and /dev/null differ
diff --git a/docs/en-US/images/stop-instance-icon.png b/docs/en-US/images/stop-instance-icon.png
deleted file mode 100644
index 209afce5086..00000000000
Binary files a/docs/en-US/images/stop-instance-icon.png and /dev/null differ
diff --git a/docs/en-US/images/suspend-icon.png b/docs/en-US/images/suspend-icon.png
deleted file mode 100644
index cab31ae3d59..00000000000
Binary files a/docs/en-US/images/suspend-icon.png and /dev/null differ
diff --git a/docs/en-US/images/sysmanager.png b/docs/en-US/images/sysmanager.png
deleted file mode 100644
index 5b9df347a60..00000000000
Binary files a/docs/en-US/images/sysmanager.png and /dev/null differ
diff --git a/docs/en-US/images/traffic-label.png b/docs/en-US/images/traffic-label.png
deleted file mode 100644
index f161c89ce19..00000000000
Binary files a/docs/en-US/images/traffic-label.png and /dev/null differ
diff --git a/docs/en-US/images/traffic-type.png b/docs/en-US/images/traffic-type.png
deleted file mode 100644
index 10d5ddb25ed..00000000000
Binary files a/docs/en-US/images/traffic-type.png and /dev/null differ
diff --git a/docs/en-US/images/vds-name.png b/docs/en-US/images/vds-name.png
deleted file mode 100644
index bf5b4fcf35c..00000000000
Binary files a/docs/en-US/images/vds-name.png and /dev/null differ
diff --git a/docs/en-US/images/view-console-button.png b/docs/en-US/images/view-console-button.png
deleted file mode 100644
index b321ceadefe..00000000000
Binary files a/docs/en-US/images/view-console-button.png and /dev/null differ
diff --git a/docs/en-US/images/view-systemvm-details.png b/docs/en-US/images/view-systemvm-details.png
deleted file mode 100755
index bce270bf258..00000000000
Binary files a/docs/en-US/images/view-systemvm-details.png and /dev/null differ
diff --git a/docs/en-US/images/vm-lifecycle.png b/docs/en-US/images/vm-lifecycle.png
deleted file mode 100644
index 97823fc568a..00000000000
Binary files a/docs/en-US/images/vm-lifecycle.png and /dev/null differ
diff --git a/docs/en-US/images/vm-running.png b/docs/en-US/images/vm-running.png
deleted file mode 100644
index e50cd16c7b2..00000000000
Binary files a/docs/en-US/images/vm-running.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-increase-ports.png b/docs/en-US/images/vmware-increase-ports.png
deleted file mode 100644
index fe968153262..00000000000
Binary files a/docs/en-US/images/vmware-increase-ports.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-iscsi-datastore.png b/docs/en-US/images/vmware-iscsi-datastore.png
deleted file mode 100644
index 9f6b33f01ed..00000000000
Binary files a/docs/en-US/images/vmware-iscsi-datastore.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-iscsi-general.png b/docs/en-US/images/vmware-iscsi-general.png
deleted file mode 100644
index 863602b9eb7..00000000000
Binary files a/docs/en-US/images/vmware-iscsi-general.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-iscsi-initiator-properties.png b/docs/en-US/images/vmware-iscsi-initiator-properties.png
deleted file mode 100644
index 1fab03143b1..00000000000
Binary files a/docs/en-US/images/vmware-iscsi-initiator-properties.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-iscsi-initiator.png b/docs/en-US/images/vmware-iscsi-initiator.png
deleted file mode 100644
index a9a8301d74d..00000000000
Binary files a/docs/en-US/images/vmware-iscsi-initiator.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-iscsi-target-add.png b/docs/en-US/images/vmware-iscsi-target-add.png
deleted file mode 100644
index f016da7956d..00000000000
Binary files a/docs/en-US/images/vmware-iscsi-target-add.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-mgt-network-properties.png b/docs/en-US/images/vmware-mgt-network-properties.png
deleted file mode 100644
index 9141af9c42f..00000000000
Binary files a/docs/en-US/images/vmware-mgt-network-properties.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-nexus-add-cluster.png b/docs/en-US/images/vmware-nexus-add-cluster.png
deleted file mode 100644
index 7c1dd73f775..00000000000
Binary files a/docs/en-US/images/vmware-nexus-add-cluster.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-nexus-port-profile.png b/docs/en-US/images/vmware-nexus-port-profile.png
deleted file mode 100644
index 19b264f7a0a..00000000000
Binary files a/docs/en-US/images/vmware-nexus-port-profile.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-physical-network.png b/docs/en-US/images/vmware-physical-network.png
deleted file mode 100644
index a7495c77b14..00000000000
Binary files a/docs/en-US/images/vmware-physical-network.png and /dev/null differ
diff --git a/docs/en-US/images/vmware-vswitch-properties.png b/docs/en-US/images/vmware-vswitch-properties.png
deleted file mode 100644
index bc247d276d6..00000000000
Binary files a/docs/en-US/images/vmware-vswitch-properties.png and /dev/null differ
diff --git a/docs/en-US/images/vpc-lb.png b/docs/en-US/images/vpc-lb.png
deleted file mode 100644
index 4269e8b9f9e..00000000000
Binary files a/docs/en-US/images/vpc-lb.png and /dev/null differ
diff --git a/docs/en-US/images/vpc-setting.png b/docs/en-US/images/vpc-setting.png
deleted file mode 100644
index 782299e9f54..00000000000
Binary files a/docs/en-US/images/vpc-setting.png and /dev/null differ
diff --git a/docs/en-US/images/vpn-icon.png b/docs/en-US/images/vpn-icon.png
deleted file mode 100644
index 2ac12f77c40..00000000000
Binary files a/docs/en-US/images/vpn-icon.png and /dev/null differ
diff --git a/docs/en-US/images/vsphere-client.png b/docs/en-US/images/vsphere-client.png
deleted file mode 100644
index 2acc8b802ad..00000000000
Binary files a/docs/en-US/images/vsphere-client.png and /dev/null differ
diff --git a/docs/en-US/images/whirrDependency.png b/docs/en-US/images/whirrDependency.png
deleted file mode 100644
index acdec78e5ac..00000000000
Binary files a/docs/en-US/images/whirrDependency.png and /dev/null differ
diff --git a/docs/en-US/images/whirrOutput.png b/docs/en-US/images/whirrOutput.png
deleted file mode 100644
index 7c3b51297e5..00000000000
Binary files a/docs/en-US/images/whirrOutput.png and /dev/null differ
diff --git a/docs/en-US/images/zone-overview.png b/docs/en-US/images/zone-overview.png
deleted file mode 100644
index 24aeecfcd1e..00000000000
Binary files a/docs/en-US/images/zone-overview.png and /dev/null differ
diff --git a/docs/en-US/import-ami.xml b/docs/en-US/import-ami.xml
deleted file mode 100644
index 16fe78a1579..00000000000
--- a/docs/en-US/import-ami.xml
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Importing Amazon Machine Images
- The following procedures describe how to import an Amazon Machine Image (AMI) into &PRODUCT; when using the XenServer hypervisor.
- Assume you have an AMI file and this file is called CentOS_6.2_x64. Assume further that you are working on a CentOS host. If the AMI is a Fedora image, you need to be working on a Fedora host initially.
- You need to have a XenServer host with a file-based storage repository (either a local ext3 SR or an NFS SR) to convert to a VHD once the image file has been customized on the CentOS/Fedora host.
- When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
-
-
- To import an AMI:
-
- Set up loopback on image file:# mkdir -p /mnt/loop/centos62
-# mount -o loop CentOS_6.2_x64 /mnt/loop/centos62
-
- Install the kernel-xen package into the image. This downloads the PV kernel and ramdisk to the image.# yum -c /mnt/loop/centos62/etc/yum.conf --installroot=/mnt/loop/centos62/ -y install kernel-xen
- Create a grub entry in /boot/grub/grub.conf.# mkdir -p /mnt/loop/centos62/boot/grub
-# touch /mnt/loop/centos62/boot/grub/grub.conf
-# echo "" > /mnt/loop/centos62/boot/grub/grub.conf
-
- Determine the name of the PV kernel that has been installed into the image.
- # cd /mnt/loop/centos62
-# ls lib/modules/
-2.6.16.33-xenU 2.6.16-xenU 2.6.18-164.15.1.el5xen 2.6.18-164.6.1.el5.centos.plus 2.6.18-xenU-ec2-v1.0 2.6.21.7-2.fc8xen 2.6.31-302-ec2
-# ls boot/initrd*
-boot/initrd-2.6.18-164.6.1.el5.centos.plus.img boot/initrd-2.6.18-164.15.1.el5xen.img
-# ls boot/vmlinuz*
-boot/vmlinuz-2.6.18-164.15.1.el5xen boot/vmlinuz-2.6.18-164.6.1.el5.centos.plus boot/vmlinuz-2.6.18-xenU-ec2-v1.0 boot/vmlinuz-2.6.21-2952.fc8xen
-
 Xen kernels and ramdisks always end with "xen". For the kernel version you choose, there must be an entry for that version under lib/modules, and there must be an initrd and a vmlinuz corresponding to it. Above, the only kernel that satisfies this condition is 2.6.18-164.15.1.el5xen.
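The matching rule above can be sketched as a small script: for each xen modules directory, check that both the initrd and the vmlinuz artifacts exist. The directory tree here is simulated with a temp dir; in the real procedure you would run the loop from /mnt/loop/centos62.

```shell
# Simulate the image layout from the listing above (temp dir stands in
# for the mounted image root).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/lib/modules/2.6.18-164.15.1.el5xen" \
         "$ROOT/lib/modules/2.6.18-164.6.1.el5.centos.plus" \
         "$ROOT/boot"
touch "$ROOT/boot/initrd-2.6.18-164.15.1.el5xen.img" \
      "$ROOT/boot/vmlinuz-2.6.18-164.15.1.el5xen" \
      "$ROOT/boot/vmlinuz-2.6.18-164.6.1.el5.centos.plus"

# Keep the last kernel version that has a modules dir, an initrd,
# and a vmlinuz all present.
PVKERN=""
for d in "$ROOT"/lib/modules/*xen; do
  v=$(basename "$d")
  if [ -f "$ROOT/boot/initrd-$v.img" ] && [ -f "$ROOT/boot/vmlinuz-$v" ]; then
    PVKERN=$v
  fi
done
echo "$PVKERN"   # prints 2.6.18-164.15.1.el5xen
rm -rf "$ROOT"
```

This selects 2.6.18-164.15.1.el5xen, matching the manual inspection in the listing.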
- Based on your findings, create an entry in the grub.conf file. Below is an example entry.default=0
-timeout=5
-hiddenmenu
-title CentOS (2.6.18-164.15.1.el5xen)
- root (hd0,0)
- kernel /boot/vmlinuz-2.6.18-164.15.1.el5xen ro root=/dev/xvda
- initrd /boot/initrd-2.6.18-164.15.1.el5xen.img
-
 Edit etc/fstab, changing "sda1" to "xvda" and changing "sdb" to "xvdb".
- # cat etc/fstab
-/dev/xvda / ext3 defaults 1 1
-/dev/xvdb /mnt ext3 defaults 0 0
-none /dev/pts devpts gid=5,mode=620 0 0
-none /proc proc defaults 0 0
-none /sys sysfs defaults 0 0
-
- Enable login via the console. The default console device in a XenServer system is xvc0. Ensure that etc/inittab and etc/securetty have the following lines respectively:
- # grep xvc0 etc/inittab
-co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
-# grep xvc0 etc/securetty
-xvc0
-
- Ensure the ramdisk supports PV disk and PV network. Customize this for the kernel version you have determined above.
 # chroot /mnt/loop/centos62
-# cd /boot/
-# mv initrd-2.6.18-164.15.1.el5xen.img initrd-2.6.18-164.15.1.el5xen.img.bak
-# mkinitrd -f /boot/initrd-2.6.18-164.15.1.el5xen.img --with=xennet --preload=xenblk --omit-scsi-modules 2.6.18-164.15.1.el5xen
-
- Change the password.
- # passwd
-Changing password for user root.
-New UNIX password:
-Retype new UNIX password:
-passwd: all authentication tokens updated successfully.
-
- Exit out of chroot.# exit
- Check etc/ssh/sshd_config for lines allowing ssh login using a password.
 # egrep "PermitRootLogin|PasswordAuthentication" /mnt/loop/centos62/etc/ssh/sshd_config
-PermitRootLogin yes
-PasswordAuthentication yes
-
- If you need the template to be enabled to reset passwords from the &PRODUCT; UI or API,
- install the password change script into the image at this point. See
- .
 Unmount and delete the loopback mount.# umount /mnt/loop/centos62
-# losetup -d /dev/loop0
-
 Copy the image file to your XenServer host's file-based storage repository. In the example below, the XenServer host is "xenhost". This XenServer has an NFS repository whose uuid is a9c5b8c8-536b-a193-a6dc-51af3e5ff799.
- # scp CentOS_6.2_x64 xenhost:/var/run/sr-mount/a9c5b8c8-536b-a193-a6dc-51af3e5ff799/
 Log in to the XenServer and create a VDI the same size as the image.
- [root@xenhost ~]# cd /var/run/sr-mount/a9c5b8c8-536b-a193-a6dc-51af3e5ff799
-[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# ls -lh CentOS_6.2_x64
--rw-r--r-- 1 root root 10G Mar 16 16:49 CentOS_6.2_x64
-[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# xe vdi-create virtual-size=10GiB sr-uuid=a9c5b8c8-536b-a193-a6dc-51af3e5ff799 type=user name-label="Centos 6.2 x86_64"
-cad7317c-258b-4ef7-b207-cdf0283a7923
-
- Import the image file into the VDI. This may take 10–20 minutes.[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# xe vdi-import filename=CentOS_6.2_x64 uuid=cad7317c-258b-4ef7-b207-cdf0283a7923
 Locate the VHD file. This is the file with the VDI’s UUID as its name. Compress it and upload it to your web server.
- [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# bzip2 -c cad7317c-258b-4ef7-b207-cdf0283a7923.vhd > CentOS_6.2_x64.vhd.bz2
-[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# scp CentOS_6.2_x64.vhd.bz2 webserver:/var/www/html/templates/
-
-
-
diff --git a/docs/en-US/increase-management-server-max-memory.xml b/docs/en-US/increase-management-server-max-memory.xml
deleted file mode 100644
index 8992ad6f16a..00000000000
--- a/docs/en-US/increase-management-server-max-memory.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Increase Management Server Maximum Memory
- If the Management Server is subject to high demand, the default maximum JVM memory allocation can be insufficient. To increase the memory:
-
- Edit the Tomcat configuration file:/etc/cloudstack/management/tomcat6.conf
 Change the command-line parameter -XmxNNNm to a higher value of NNN. For example, if the current value is -Xmx128m, change it to -Xmx1024m or higher.
- To put the new setting into effect, restart the Management Server.# service cloudstack-management restart
-
- For more information about memory issues, see "FAQ: Memory" at Tomcat Wiki.
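The edit above can be scripted; this is a minimal sketch assuming a JAVA_OPTS line similar to the one shipped in tomcat6.conf. The temporary file and the 1024m target are illustrative only.

```shell
# Create a stand-in config file with a typical JAVA_OPTS line
# (illustrative; the real file is /etc/cloudstack/management/tomcat6.conf).
CONF=$(mktemp)
echo 'JAVA_OPTS="-Djava.awt.headless=true -Xmx128m"' > "$CONF"

# Replace whatever -XmxNNNm value is present with -Xmx1024m.
sed 's/-Xmx[0-9]*m/-Xmx1024m/' "$CONF" > "$CONF.new" && mv "$CONF.new" "$CONF"

NEW_XMX=$(grep -o 'Xmx[0-9]*m' "$CONF")
echo "$NEW_XMX"   # prints Xmx1024m
```

After such an edit, restart the Management Server as described above for the change to take effect.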
-
-
diff --git a/docs/en-US/incremental-snapshots-backup.xml b/docs/en-US/incremental-snapshots-backup.xml
deleted file mode 100644
index ade00c90c17..00000000000
--- a/docs/en-US/incremental-snapshots-backup.xml
+++ /dev/null
@@ -1,51 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Incremental Snapshots and Backup
- Snapshots are created on primary storage where a disk resides. After a snapshot is created, it is immediately backed up to secondary storage and removed from primary storage for optimal utilization of space on primary storage.
 &PRODUCT; does incremental backups for some hypervisors. When incremental backups are supported, every Nth backup is a full backup.
-
-
-
-
-
-
- VMware vSphere
- Citrix XenServer
- KVM
-
-
-
-
- Support incremental backup
- N
- Y
- N
-
-
-
-
-
-
diff --git a/docs/en-US/initial-setup-of-external-firewalls-loadbalancers.xml b/docs/en-US/initial-setup-of-external-firewalls-loadbalancers.xml
deleted file mode 100644
index 332afa04ebb..00000000000
--- a/docs/en-US/initial-setup-of-external-firewalls-loadbalancers.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Initial Setup of External Firewalls and Load Balancers
- When the first VM is created for a new account, &PRODUCT; programs the external firewall and load balancer to work with the VM. The following objects are created on the firewall:
-
- A new logical interface to connect to the account's private VLAN. The interface IP is always the first IP of the account's private subnet (e.g. 10.1.1.1).
- A source NAT rule that forwards all outgoing traffic from the account's private VLAN to the public Internet, using the account's public IP address as the source address
- A firewall filter counter that measures the number of bytes of outgoing traffic for the account
-
- The following objects are created on the load balancer:
-
- A new VLAN that matches the account's provisioned Zone VLAN
- A self IP for the VLAN. This is always the second IP of the account's private subnet (e.g. 10.1.1.2).
-
-
diff --git a/docs/en-US/initialize-and-test.xml b/docs/en-US/initialize-and-test.xml
deleted file mode 100644
index 2dd6e259176..00000000000
--- a/docs/en-US/initialize-and-test.xml
+++ /dev/null
@@ -1,77 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Initialize and Test
- After everything is configured, &PRODUCT; will perform its initialization. This can take 30 minutes or more, depending on the speed of your network. When the initialization has completed successfully, the administrator's Dashboard should be displayed in the &PRODUCT; UI.
-
-
-
- Verify that the system is ready. In the left navigation bar, select Templates. Click on the CentOS 5.5 (64bit) no Gui (KVM) template. Check to be sure that the status is "Download Complete." Do not proceed to the next step until this status is displayed.
-
- Go to the Instances tab, and filter by My Instances.
-
- Click Add Instance and follow the steps in the wizard.
-
-
-
- Choose the zone you just added.
-
- In the template selection, choose the template to use in the VM. If this is a fresh installation, likely only the provided CentOS template is available.
-
- Select a service offering. Be sure that the hardware you have allows starting the selected service offering.
-
- In data disk offering, if desired, add another data disk. This is a second volume that will be available to but not mounted in the guest. For example, in Linux on XenServer you will see /dev/xvdb in the guest after rebooting the VM. A reboot is not required if you have a PV-enabled OS kernel in use.
-
- In default network, choose the primary network for the guest. In a trial installation, you would have only one option here.
- Optionally give your VM a name and a group. Use any descriptive text you would like.
-
- Click Launch VM. Your VM will be created and started. It might take some time to download the template and complete the VM startup. You can watch the VM’s progress in the Instances screen.
-
-
-
-
-
-
-
- To use the VM, click the View Console button.
-
-
-
-
-
- ConsoleButton.png: button to launch a console
-
-
-
-
-
- For more information about using VMs, including instructions for how to allow incoming network traffic to the VM, start, stop, and delete VMs, and move a VM from one host to another, see Working With Virtual Machines in the Administrator’s Guide.
-
-
-
-
- Congratulations! You have successfully completed a &PRODUCT; Installation.
-
- If you decide to grow your deployment, you can add more hosts, primary storage, zones, pods, and clusters.
-
diff --git a/docs/en-US/install-usage-server.xml b/docs/en-US/install-usage-server.xml
deleted file mode 100644
index ffd748d758e..00000000000
--- a/docs/en-US/install-usage-server.xml
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Installing the Usage Server (Optional)
- You can optionally install the Usage Server once the Management Server is configured properly. The Usage Server takes data from the events in the system and enables usage-based billing for accounts.
- When multiple Management Servers are present, the Usage Server may be installed on any number of them. The Usage Servers will coordinate usage processing. A site that is concerned about availability should install Usage Servers on at least two Management Servers.
-
- Requirements for Installing the Usage Server
-
- The Management Server must be running when the Usage Server is installed.
- The Usage Server must be installed on the same server as a Management Server.
-
-
-
- Steps to Install the Usage Server
-
-
- Run ./install.sh.
-
-# ./install.sh
-
- You should see a few messages as the installer prepares, followed by a list of choices.
-
-
- Choose "S" to install the Usage Server.
-
- > S
-
-
-
- Once installed, start the Usage Server with the following command.
-
-# service cloudstack-usage start
-
-
-
- The Administration Guide discusses further configuration of the Usage Server.
-
-
diff --git a/docs/en-US/installation-complete.xml b/docs/en-US/installation-complete.xml
deleted file mode 100644
index b39040ba0cf..00000000000
--- a/docs/en-US/installation-complete.xml
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Installation Complete! Next Steps
- Congratulations! You have now installed &PRODUCT; Management Server and the database it uses to persist system data.
-
-
-
-
- installation-complete.png: Finished installs with single Management Server and multiple Management Servers
-
- What should you do next?
-
- Even without adding any cloud infrastructure, you can run the UI to get a feel for what's offered and how you will interact with &PRODUCT; on an ongoing basis. See .
- When you're ready, add the cloud infrastructure and try running some virtual machines on it, so you can watch how &PRODUCT; manages the infrastructure. See .
-
-
diff --git a/docs/en-US/installation-steps-overview.xml b/docs/en-US/installation-steps-overview.xml
deleted file mode 100644
index ea00057bab3..00000000000
--- a/docs/en-US/installation-steps-overview.xml
+++ /dev/null
@@ -1,67 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Overview of Installation Steps
- For anything more than a simple trial installation, you will need guidance for a variety of configuration choices. It is strongly recommended that you read the following:
-
- Choosing a Deployment Architecture
- Choosing a Hypervisor: Supported Features
- Network Setup
- Storage Setup
- Best Practices
-
-
-
- Make sure you have the required hardware ready. See
-
-
- Install the Management Server (choose single-node or multi-node). See
-
-
- Log in to the UI. See
-
-
- Add a zone. Includes the first pod, cluster, and host. See
-
-
- Add more pods (optional). See
-
-
- Add more clusters (optional). See
-
-
- Add more hosts (optional). See
-
-
- Add more primary storage (optional). See
-
-
- Add more secondary storage (optional). See
-
-
- Try using the cloud. See
-
-
-
diff --git a/docs/en-US/installation.xml b/docs/en-US/installation.xml
deleted file mode 100644
index 5fc550edad6..00000000000
--- a/docs/en-US/installation.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Installation
-
-
-
-
-
-
diff --git a/docs/en-US/installation_steps_overview.xml b/docs/en-US/installation_steps_overview.xml
deleted file mode 100644
index 2632a4d6243..00000000000
--- a/docs/en-US/installation_steps_overview.xml
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Overview of Installation Steps
- For anything more than a simple trial installation, you will need
- guidance for a variety of configuration choices. It is strongly
- recommended that you read the following:
-
- Choosing a Deployment Architecture
- Choosing a Hypervisor: Supported Features
- Network Setup
- Storage Setup
- Best Practices
-
-
-
-
- Prepare
-
- Make sure you have the required hardware ready
-
-
- (Optional) Fill out the preparation checklists
-
-
- Install the &PRODUCT; software
-
-
- Install the Management Server (choose single-node or multi-node)
-
-
- Log in to the UI
-
-
- Provision your cloud infrastructure
-
-
- Add a zone. Includes the first pod, cluster, and host
-
-
- Add more pods
-
-
- Add more clusters
-
-
- Add more hosts
-
-
- Add more primary storage
-
-
- Add more secondary storage
-
-
- Try using the cloud
-
-
- Initialization and testing
-
-
-
diff --git a/docs/en-US/installing-publican.xml b/docs/en-US/installing-publican.xml
deleted file mode 100644
index 9f180aad375..00000000000
--- a/docs/en-US/installing-publican.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Installing Publican
- &PRODUCT; documentation is built using publican. This section describes how to install publican on your own machine so that you can build the documentation guides.
-
- The &PRODUCT; documentation source code is located under /docs
- Publican documentation itself is also very useful.
-
- On RHEL and RHEL derivatives, install publican with the following command:
- yum install publican publican-doc
- On Ubuntu, install publican with the following command:
- apt-get install publican publican-doc
 For other distributions, refer to the publican documentation listed above. For recent versions of OS X you may have to install from source and tweak it to your own setup.
 Once publican is installed, you need to set up the so-called &PRODUCT; brand defined in the docs/publican-&PRODUCT; directory.
- To do so, enter the following commands:
-
- sudo cp -R publican-cloudstack /usr/share/publican/Common_Content/cloudstack
-
- If this fails or you later face errors related to the brand files, see the publican documentation.
- With publican installed and the &PRODUCT; brand files in place, you should be able to build any documentation guide.
-
-
-
diff --git a/docs/en-US/inter-vlan-routing.xml b/docs/en-US/inter-vlan-routing.xml
deleted file mode 100644
index 59115deb581..00000000000
--- a/docs/en-US/inter-vlan-routing.xml
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- About Inter-VLAN Routing (nTier Apps)
- Inter-VLAN Routing (nTier Apps) is the capability to route network traffic between VLANs.
- This feature enables you to build Virtual Private Clouds (VPC), an isolated segment of your
- cloud, that can hold multi-tier applications. These tiers are deployed on different VLANs that
 can communicate with each other. You provision VLANs to the tiers you create, and VMs can be
- deployed on different tiers. The VLANs are connected to a virtual router, which facilitates
- communication between the VMs. In effect, you can segment VMs by means of VLANs into different
- networks that can host multi-tier applications, such as Web, Application, or Database. Such
 segmentation by means of VLANs logically separates application VMs for higher security and lower
 broadcast traffic, while remaining physically connected to the same device.
- This feature is supported on XenServer, KVM, and VMware hypervisors.
- The major advantages are:
-
-
- The administrator can deploy a set of VLANs and allow users to deploy VMs on these
 VLANs. A guest VLAN is randomly allotted to an account from a pre-specified set of guest
- VLANs. All the VMs of a certain tier of an account reside on the guest VLAN allotted to that
- account.
-
- A VLAN allocated for an account cannot be shared between multiple accounts.
-
-
-
 The administrator can allow users to create their own VPC and deploy the application. In
- this scenario, the VMs that belong to the account are deployed on the VLANs allotted to that
- account.
-
-
- Both administrators and users can create multiple VPCs. The guest network NIC is plugged
- to the VPC virtual router when the first VM is deployed in a tier.
-
-
- The administrator can create the following gateways to send to or receive traffic from
- the VMs:
-
-
- VPN Gateway: For more information, see .
-
-
- Public Gateway: The public gateway for a VPC is
- added to the virtual router when the virtual router is created for VPC. The public
- gateway is not exposed to the end users. You are not allowed to list it, nor allowed to
- create any static routes.
-
-
- Private Gateway: For more information, see .
-
-
-
-
- Both administrators and users can create various possible destinations-gateway
- combinations. However, only one gateway of each type can be used in a deployment.
- For example:
-
-
- VLANs and Public Gateway: For example, an
- application is deployed in the cloud, and the Web application VMs communicate with the
- Internet.
-
-
- VLANs, VPN Gateway, and Public Gateway: For
- example, an application is deployed in the cloud; the Web application VMs communicate
- with the Internet; and the database VMs communicate with the on-premise devices.
-
-
-
-
- The administrator can define Network Access Control List (ACL) on the virtual router to
- filter the traffic among the VLANs or between the Internet and a VLAN. You can define ACL
- based on CIDR, port range, protocol, type code (if ICMP protocol is selected) and
- Ingress/Egress type.
-
-
 The following figure shows the possible deployment scenarios of an Inter-VLAN setup:
-
-
-
-
-
- mutltier.png: a multi-tier setup.
-
-
- To set up a multi-tier Inter-VLAN deployment, see .
-
diff --git a/docs/en-US/introduction.xml b/docs/en-US/introduction.xml
deleted file mode 100644
index 9aca8bdfc93..00000000000
--- a/docs/en-US/introduction.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Introduction
-
-
-
-
diff --git a/docs/en-US/ip-forwarding-firewalling.xml b/docs/en-US/ip-forwarding-firewalling.xml
deleted file mode 100644
index d1beb2eb0f2..00000000000
--- a/docs/en-US/ip-forwarding-firewalling.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- IP Forwarding and Firewalling
- By default, all incoming traffic to the public IP address is rejected. All outgoing traffic
- from the guests is also blocked by default.
- To allow outgoing traffic, follow the procedure in .
- To allow incoming traffic, users may set up firewall rules and/or port forwarding rules. For
- example, you can use a firewall rule to open a range of ports on the public IP address, such as
- 33 through 44. Then use port forwarding rules to direct traffic from individual ports within
- that range to specific ports on user VMs. For example, one port forwarding rule could route
- incoming traffic on the public IP's port 33 to port 100 on one user VM's private IP.
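The port-33-to-port-100 mapping described above can be pictured as a small lookup table. The rule format, addresses, and ports below are invented for illustration and are not CloudStack's internal representation.

```shell
# Toy model of port-forwarding rules: each rule maps a public-IP port to a
# guest "vm_ip:vm_port" endpoint. All values are made up.
# format: public_port:vm_ip:vm_port
RULES="33:10.1.1.5:100
34:10.1.1.6:22"

# Print the VM endpoint a given public port forwards to, if any.
lookup() {
  echo "$RULES" | while IFS=: read -r pub ip port; do
    [ "$pub" = "$1" ] && echo "$ip:$port"
  done
}

lookup 33   # prints 10.1.1.5:100
```

A public port with no rule simply yields no forwarding destination, matching the default-deny behavior described above.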
-
-
-
-
diff --git a/docs/en-US/ip-load-balancing.xml b/docs/en-US/ip-load-balancing.xml
deleted file mode 100644
index ae569e7d969..00000000000
--- a/docs/en-US/ip-load-balancing.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- IP Load Balancing
- The user may choose to associate the same public IP for multiple guests. &PRODUCT; implements a TCP-level load balancer with the following policies.
-
- Round-robin
- Least connection
- Source IP
-
- This is similar to port forwarding but the destination may be multiple IP addresses.
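The round-robin policy above can be pictured as a rotation over the guest IPs behind the shared public IP. This is only a toy illustration of the policy, not the load balancer's implementation; the backend addresses are made up.

```shell
# Rotate over a fixed list of backend guest IPs (illustrative addresses).
BACKENDS="10.1.1.11 10.1.1.12 10.1.1.13"
N=$(echo "$BACKENDS" | wc -w)
RR=0

# Each call advances the rotation and stores the chosen backend in PICKED.
pick_backend() {
  RR=$(( RR % N + 1 ))                       # cycles 1, 2, 3, 1, 2, ...
  PICKED=$(echo "$BACKENDS" | cut -d' ' -f"$RR")
}

pick_backend; echo "$PICKED"   # prints 10.1.1.11
pick_backend; echo "$PICKED"   # prints 10.1.1.12
```

Least-connection and source-IP policies differ only in how the next backend is chosen: by current connection count, or by hashing the client's source address.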
-
diff --git a/docs/en-US/ip-vlan-tenant.xml b/docs/en-US/ip-vlan-tenant.xml
deleted file mode 100644
index d58d49be63a..00000000000
--- a/docs/en-US/ip-vlan-tenant.xml
+++ /dev/null
@@ -1,212 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Reserving Public IP Addresses and VLANs for Accounts
- &PRODUCT; provides you the ability to reserve a set of public IP addresses and VLANs
- exclusively for an account. During zone creation, you can continue defining a set of VLANs and
- multiple public IP ranges. This feature extends the functionality to enable you to dedicate a
- fixed set of VLANs and guest IP addresses for a tenant.
- Note that if an account has consumed all the VLANs and IPs dedicated to it, the account can
- acquire two more resources from the system. &PRODUCT; provides the root admin with two
 configuration parameters to modify this default behavior: use.system.public.ips and
- use.system.guest.vlans. These global parameters enable the root admin to disallow an account
- from acquiring public IPs and guest VLANs from the system, if the account has dedicated
- resources and these dedicated resources have all been consumed. Both these configurations are
- configurable at the account level.
- This feature provides you the following capabilities:
-
-
- Reserve a VLAN range and public IP address range from an Advanced zone and assign it to
- an account
-
-
- Disassociate a VLAN and public IP address range from an account
-
-
- View the number of public IP addresses allocated to an account
-
-
 Check whether the required range is available and conforms to account limits.
- The maximum IPs per account limit cannot be superseded.
-
-
-
- Dedicating IP Address Ranges to an Account
-
-
- Log in to the &PRODUCT; UI as administrator.
-
-
- In the left navigation bar, click Infrastructure.
-
-
- In Zones, click View All.
-
-
- Choose the zone you want to work with.
-
-
- Click the Physical Network tab.
-
-
- In the Public node of the diagram, click Configure.
-
-
- Click the IP Ranges tab.
- You can either assign an existing IP range to an account, or create a new IP range and
- assign to an account.
-
-
- To assign an existing IP range to an account, perform the following:
-
-
- Locate the IP range you want to work with.
-
-
- Click Add Account
-
-
-
-
- addAccount-icon.png: button to assign an IP range to an account.
-
- button.
- The Add Account dialog is displayed.
-
-
- Specify the following:
-
-
- Account: The account to which you want to
- assign the IP address range.
-
-
- Domain: The domain associated with the
- account.
-
-
- To create a new IP range and assign an account, perform the following:
-
-
- Specify the following:
-
-
- Gateway
-
-
- Netmask
-
-
- VLAN
-
-
- Start IP
-
-
- End IP
-
-
- Account: Perform the following:
-
-
- Click Account.
- The Add Account page is displayed.
-
-
- Specify the following:
-
-
- Account: The account to which you want to
- assign an IP address range.
-
-
- Domain: The domain associated with the
- account.
-
-
-
-
- Click OK.
-
-
-
-
-
-
- Click Add.
-
-
-
-
-
-
-
-
- Dedicating VLAN Ranges to an Account
-
-
- After the &PRODUCT; Management Server is installed, log in to the &PRODUCT; UI as
- administrator.
-
-
- In the left navigation bar, click Infrastructure.
-
-
- In Zones, click View All.
-
-
- Choose the zone you want to work with.
-
-
- Click the Physical Network tab.
-
-
- In the Guest node of the diagram, click Configure.
-
-
- Select the Dedicated VLAN Ranges tab.
-
-
- Click Dedicate VLAN Range.
- The Dedicate VLAN Range dialog is displayed.
-
-
- Specify the following:
-
-
- VLAN Range: The
- VLAN range that you want to assign to an account.
-
-
- Account: The
- account to which you want to assign the selected VLAN range.
-
-
- Domain: The
- domain associated with the account.
-
-
-
-
-
-
diff --git a/docs/en-US/ipaddress-usage-record-format.xml b/docs/en-US/ipaddress-usage-record-format.xml
deleted file mode 100644
index 1a0385b999e..00000000000
--- a/docs/en-US/ipaddress-usage-record-format.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- IP Address Usage Record Format
 For IP address usage, the following fields exist in a usage record.
-
- account - name of the account
- accountid - ID of the account
- domainid - ID of the domain in which this account resides
- zoneid - Zone where the usage occurred
- description - A string describing what the usage record is tracking
- usage - String representation of the usage, including the units of usage
- usagetype - A number representing the usage type (see Usage Types)
- rawusage - A number representing the actual usage in hours
- usageid - IP address ID
- startdate, enddate - The range of time for which the usage is aggregated; see Dates in the Usage Record
- issourcenat - Whether source NAT is enabled for the IP address
- iselastic - True if the IP address is elastic.
-
-
diff --git a/docs/en-US/ipv6-support.xml b/docs/en-US/ipv6-support.xml
deleted file mode 100644
index bc14c8eab0e..00000000000
--- a/docs/en-US/ipv6-support.xml
+++ /dev/null
@@ -1,191 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- IPv6 Support in &PRODUCT;
 &PRODUCT; supports Internet Protocol version 6 (IPv6), the most recent version of the Internet
 Protocol (IP), which defines how network traffic is routed. IPv6 uses a 128-bit address that
 exponentially expands the address space available to users. IPv6 addresses
 consist of eight groups of four hexadecimal digits separated by colons, for example,
 2001:0db8:83a3:1012:1000:8a2e:0870:7454. &PRODUCT; supports IPv6 for public IPs in shared
 networks. With IPv6 support, VMs in shared networks can obtain both IPv4 and IPv6 addresses from
 the DHCP server. You can deploy VMs either in an IPv6 or IPv4 network, or in a dual network
 environment. If an IPv6 network is used, the VM generates a link-local IPv6 address by itself, and
 receives a stateful IPv6 address from the DHCPv6 server.
- IPv6 is supported only on KVM and XenServer hypervisors. The IPv6 support is only an
- experimental feature.
- Here's the sequence of events when IPv6 is used:
-
-
- The administrator creates an IPv6 shared network in an advanced zone.
-
-
- The user deploys a VM in an IPv6 shared network.
-
-
- The user VM generates an IPv6 link local address by itself, and gets an IPv6 global or
- site local address through DHCPv6.
- For information on API changes, see .
-
-
-
- Prerequisites and Guidelines
- Consider the following:
-
-
- CIDR size must be 64 for IPv6 networks.
-
-
- The DHCP client of the guest VMs should support generating DUID based on Link-layer
 Address (DUID-LL). DUID-LL derives from the MAC address of guest VMs, and therefore the
- user VM can be identified by using DUID. See Dynamic Host Configuration Protocol for IPv6
- for more information.
-
-
 The gateway of the guest network generates Router Advertisement and Response messages to
 Router Solicitation. The M (Managed Address Configuration) flag of the Router Advertisement
 should enable stateful IP address configuration. Set the M flag so that the end nodes
 receive their IPv6 addresses from the DHCPv6 server as opposed to the router or
 switch.
-
 The M flag is the 1-bit Managed Address Configuration flag for Router Advertisement.
- When set, Dynamic Host Configuration Protocol (DHCPv6) is available for address
- configuration in addition to any IPs set by using stateless address
- auto-configuration.
-
-
-
- Use the System VM template exclusively designed to support IPv6. Download the System
- VM template from http://cloudstack.apt-get.eu/systemvm/.
-
-
 The concept of Default Network applies to IPv6 networks. However, unlike IPv4,
 &PRODUCT; does not control the routing information of IPv6 in shared networks; the choice
- of Default Network will not affect the routing in the user VM.
-
-
- In a multiple shared network, the default route is set by the rack router, rather than
- the DHCP server, which is out of &PRODUCT; control. Therefore, in order for the user VM to
- get only the default route from the default NIC, modify the configuration of the user VM,
- and set non-default NIC's accept_ra to 0 explicitly. The
 accept_ra parameter accepts Router Advertisements and auto-configures
- /proc/sys/net/ipv6/conf/interface with received data.
-
-
-
-
- Limitations of IPv6 in &PRODUCT;
- The following are not yet supported:
-
-
- Security groups
-
-
- Userdata and metadata
-
-
- Passwords
-
-
-
-
- Guest VM Configuration for DHCPv6
 For the guest VMs to get an IPv6 address, run the dhclient command manually on each of the VMs.
- Use DUID-LL to set up dhclient.
- The IPv6 address is lost when a VM is stopped and started. Therefore, use the same procedure
- to get an IPv6 address when a VM is stopped and started.
-
-
- Set up dhclient by using DUID-LL.
- Perform the following for DHCP Client 4.2 and above:
-
-
- Run the following command on the selected VM to get the dhcpv6 offer from
- VR:
- dhclient -6 -D LL <dev>
-
-
- Perform the following for DHCP Client 4.1:
-
-
 Open the dhclient configuration file:
- vi /etc/dhcp/dhclient.conf
-
-
- Add the following to the dhclient configuration file:
- send dhcp6.client-id = concat(00:03:00, hardware);
-
-
-
-
- Get IPv6 address from DHCP server as part of the system or network restart.
- Based on the operating systems, perform the following:
- On CentOS 6.2:
-
-
- Open the Ethernet interface configuration file:
- vi /etc/sysconfig/network-scripts/ifcfg-eth0
- The ifcfg-eth0 file controls the first NIC in a system.
-
-
- Make the necessary configuration changes, as given below:
- DEVICE=eth0
-HWADDR=06:A0:F0:00:00:38
-NM_CONTROLLED=no
-ONBOOT=yes
-BOOTPROTO=dhcp6
-TYPE=Ethernet
-USERCTL=no
-PEERDNS=yes
-IPV6INIT=yes
-DHCPV6C=yes
-
-
- Open the following:
- vi /etc/sysconfig/network
-
-
- Make the necessary configuration changes, as given below:
- NETWORKING=yes
-HOSTNAME=centos62mgmt.lab.vmops.com
-NETWORKING_IPV6=yes
-IPV6_AUTOCONF=no
-
-
- On Ubuntu 12.10
-
-
- Open the following:
- etc/network/interfaces:
-
-
- Make the necessary configuration changes, as given below:
- iface eth0 inet6 dhcp
-autoconf 0
-accept_ra 1
-
-
-
-
-
-
diff --git a/docs/en-US/isolated-networks.xml b/docs/en-US/isolated-networks.xml
deleted file mode 100644
index c8560445d2f..00000000000
--- a/docs/en-US/isolated-networks.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Isolated Networks
- An isolated network can be accessed only by virtual machines of a single account. Isolated
- networks have the following properties.
-
-
- Resources such as VLAN are allocated and garbage collected dynamically
-
-
- There is one network offering for the entire network
-
-
- The network offering can be upgraded or downgraded but it is for the entire
- network
-
-
- For more information, see .
-
diff --git a/docs/en-US/job-status.xml b/docs/en-US/job-status.xml
deleted file mode 100644
index da0f76c5dff..00000000000
--- a/docs/en-US/job-status.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Job Status
 The key to using an asynchronous command is the job ID that is returned immediately once the command has been executed. With the job ID, you can periodically check the job status by calling the queryAsyncJobResult command. The command returns one of three possible job status integer values:
-
- 0 - Job is still in progress. Continue to periodically poll for any status changes.
- 1 - Job has successfully completed. The job will return any successful response values associated with command that was originally executed.
- 2 - Job has failed to complete. Please check the "jobresultcode" tag for failure reason code and "jobresult" for the failure reason.
-
-
-
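The polling loop described above can be sketched in Python. The HTTP call itself is abstracted behind a `poll` callable, a hypothetical stand-in for a real queryAsyncJobResult request; this is a sketch of the polling logic, not CloudStack client code:

```python
import time

# Status codes returned by queryAsyncJobResult:
IN_PROGRESS, SUCCEEDED, FAILED = 0, 1, 2

def wait_for_job(poll, interval=0.0, max_polls=100):
    """Poll until the job leaves IN_PROGRESS.

    `poll` is any callable that performs one queryAsyncJobResult call
    and returns the integer jobstatus (here a hypothetical stand-in
    for a real HTTP request)."""
    for _ in range(max_polls):
        status = poll()
        if status != IN_PROGRESS:
            return status
        time.sleep(interval)
    raise TimeoutError("job still in progress after max_polls")

# Simulated job that completes successfully on the third poll:
responses = iter([0, 0, 1])
print(wait_for_job(lambda: next(responses)))  # prints 1
```

On success (status 1), a real client would then read the response values of the original command from the jobresult field.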
diff --git a/docs/en-US/kvm-topology-req.xml b/docs/en-US/kvm-topology-req.xml
deleted file mode 100644
index 0dff491b364..00000000000
--- a/docs/en-US/kvm-topology-req.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- KVM Topology Requirements
- The Management Servers communicate with KVM hosts on port 22 (ssh).
-
diff --git a/docs/en-US/large_scale_redundant_setup.xml b/docs/en-US/large_scale_redundant_setup.xml
deleted file mode 100644
index 427a42d9182..00000000000
--- a/docs/en-US/large_scale_redundant_setup.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Large-Scale Redundant Setup
-
-
-
-
- Large-Scale Redundant Setup
-
- This diagram illustrates the network architecture of a large-scale &PRODUCT; deployment.
-
- A layer-3 switching layer is at the core of the data center. A router redundancy protocol like VRRP should be deployed. Typically high-end core switches also include firewall modules. Separate firewall appliances may also be used if the layer-3 switch does not have integrated firewall capabilities. The firewalls are configured in NAT mode. The firewalls provide the following functions:
-
- Forwards HTTP requests and API calls from the Internet to the Management Server. The Management Server resides on the management network.
- When the cloud spans multiple zones, the firewalls should enable site-to-site VPN such that servers in different zones can directly reach each other.
-
-
- A layer-2 access switch layer is established for each pod. Multiple switches can be stacked to increase port count. In either case, redundant pairs of layer-2 switches should be deployed.
- The Management Server cluster (including front-end load balancers, Management Server nodes, and the MySQL database) is connected to the management network through a pair of load balancers.
- Secondary storage servers are connected to the management network.
- Each pod contains storage and computing servers. Each storage and computing server should have redundant NICs connected to separate layer-2 access switches.
-
-
diff --git a/docs/en-US/layer2-switch.xml b/docs/en-US/layer2-switch.xml
deleted file mode 100644
index acef5a7c207..00000000000
--- a/docs/en-US/layer2-switch.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Layer-2 Switch
- The layer-2 switch is the access switching layer inside the pod.
-
-
- It should trunk all VLANs into every computing host.
-
-
- It should switch traffic for the management network containing computing and storage
- hosts. The layer-3 switch will serve as the gateway for the management network.
-
-
-
- Example Configurations
- This section contains example configurations for specific switch models for pod-level
- layer-2 switching. It assumes VLAN management protocols such as VTP or GVRP have been
- disabled. The scripts must be changed appropriately if you choose to use VTP or GVRP.
-
-
-
-
diff --git a/docs/en-US/lb-policy-pfwd-rule-usage-record-format.xml b/docs/en-US/lb-policy-pfwd-rule-usage-record-format.xml
deleted file mode 100644
index e27a49d6b96..00000000000
--- a/docs/en-US/lb-policy-pfwd-rule-usage-record-format.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Load Balancer Policy or Port Forwarding Rule Usage Record Format
-
- account - name of the account
- accountid - ID of the account
- domainid - ID of the domain in which this account resides
- zoneid - Zone where the usage occurred
- description - A string describing what the usage record is tracking
- usage - String representation of the usage, including the units of usage (e.g. 'Hrs' for hours)
- usagetype - A number representing the usage type (see Usage Types)
- rawusage - A number representing the actual usage in hours
- usageid - ID of the load balancer policy or port forwarding rule
- startdate, enddate - The range of time for which the usage is aggregated; see Dates in the Usage Record
-
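A record with these fields might look like the following; all values are hypothetical sample data shown as a Python dict for illustration, not real API output:

```python
# Hypothetical load balancer policy / port forwarding usage record.
usage_record = {
    "account": "acme",
    "accountid": 42,
    "domainid": 1,
    "zoneid": 1,
    "description": "Port forwarding rule usage",
    "usage": "24 Hrs",
    "usagetype": 12,      # numeric usage type code (see Usage Types)
    "rawusage": 24.0,     # actual usage in hours
    "usageid": 77,        # ID of the rule being metered
    "startdate": "2013-09-01T00:00:00",
    "enddate": "2013-09-01T23:59:59",
}
```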
-
diff --git a/docs/en-US/libcloud-examples.xml b/docs/en-US/libcloud-examples.xml
deleted file mode 100644
index d2db5269eb9..00000000000
--- a/docs/en-US/libcloud-examples.xml
+++ /dev/null
@@ -1,75 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Apache Libcloud
- There are many tools available to interface with the &PRODUCT; API. Apache Libcloud is one of those. In this section
- we provide a basic example of how to use Libcloud with &PRODUCT;. It assumes that you have access to a &PRODUCT; endpoint and that you have the API access key and secret key of a user.
- To install Libcloud refer to the libcloud website. If you are familiar with PyPI, simply do:
- pip install apache-libcloud
- You should see the following output:
-
-pip install apache-libcloud
-Downloading/unpacking apache-libcloud
- Downloading apache-libcloud-0.12.4.tar.bz2 (376kB): 376kB downloaded
- Running setup.py egg_info for package apache-libcloud
-
-Installing collected packages: apache-libcloud
- Running setup.py install for apache-libcloud
-
-Successfully installed apache-libcloud
-Cleaning up...
-
-
- You can then open a Python interactive shell, create an instance of a &PRODUCT; driver and call the available methods via the libcloud API.
-
-
- >>> from libcloud.compute.types import Provider
->>> from libcloud.compute.providers import get_driver
->>> Driver = get_driver(Provider.CLOUDSTACK)
->>> apikey='plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg'
->>> secretkey='VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ'
->>> host='http://localhost:8080'
->>> path='/client/api'
->>> conn=Driver(apikey,secretkey,secure='False',host='localhost:8080',path=path)
->>> conn=Driver(key=apikey,secret=secretkey,secure=False,host='localhost',port='8080',path=path)
->>> conn.list_images()
-[]
->>> conn.list_sizes()
-[, , ]
->>> images=conn.list_images()
->>> offerings=conn.list_sizes()
->>> node=conn.create_node(name='toto',image=images[0],size=offerings[0])
->>> help(node)
->>> node.get_uuid()
-'b1aa381ba1de7f2d5048e248848993d5a900984f'
->>> node.name
-u'toto'
-]]>
-
-
- One of the interesting use cases of Libcloud is that you can use multiple cloud providers, such as AWS, Rackspace, OpenNebula, vCloud, and so on. You can then create Driver instances for each of these clouds and build your own multi-cloud application.
-
-
diff --git a/docs/en-US/limit-accounts-domains.xml b/docs/en-US/limit-accounts-domains.xml
deleted file mode 100644
index 78a642b3a5a..00000000000
--- a/docs/en-US/limit-accounts-domains.xml
+++ /dev/null
@@ -1,371 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Limiting Resource Usage
- &PRODUCT; allows you to control resource usage based on the types of resources, such as CPU,
- RAM, Primary storage, and Secondary storage. A new set of resource types has been added to the
- existing pool of resources to support the new customization model of need-basis usage, such
- as large VM or small VM. The new resource types are now broadly classified as CPU, RAM, Primary
- storage, and Secondary storage. The root administrator is able to impose resource usage limits on
- the following resource types for Domains, Projects, and Accounts.
-
-
- CPUs
-
-
- Memory (RAM)
-
-
- Primary Storage (Volumes)
-
-
- Secondary Storage (Snapshots, Templates, ISOs)
-
-
- To control the behaviour of this feature, the following configuration parameters have been
- added:
-
-
-
-
- Parameter Name
- Description
-
-
-
-
- max.account.cpus
- Maximum number of CPU cores that can be used for an account.
- Default is 40.
-
-
- max.account.ram (MB)
- Maximum RAM that can be used for an account.
- Default is 40960.
-
-
- max.account.primary.storage (GB)
- Maximum primary storage space that can be used for an account.
- Default is 200.
-
-
-
- max.account.secondary.storage (GB)
- Maximum secondary storage space that can be used for an account.
- Default is 400.
-
-
- max.project.cpus
-
- Maximum number of CPU cores that can be used for a project.
- Default is 40.
-
-
-
- max.project.ram (MB)
-
- Maximum RAM that can be used for a project.
- Default is 40960.
-
-
-
- max.project.primary.storage (GB)
-
- Maximum primary storage space that can be used for a project.
- Default is 200.
-
-
-
- max.project.secondary.storage (GB)
-
- Maximum secondary storage space that can be used for a project.
- Default is 400.
-
-
-
-
-
-
- User Permission
- The root administrator, domain administrators and users are able to list resources. Ensure
- that proper logs are maintained in the vmops.log and
- api.log files.
-
-
- The root admin will have the privilege to list and update resource limits.
-
-
- The domain administrators are allowed to list and change these resource limits only
- for the sub-domains and accounts under their own domain or the sub-domains.
-
-
- The end users will have the privilege to list resource limits. Use the listResourceLimits
- API.
-
-
-
-
- Limit Usage Considerations
-
-
- Primary or Secondary storage space refers to the stated size of the volume and not the
- physical size, that is, the actual size consumed on disk in the case of thin provisioning.
-
-
- If the admin reduces the resource limit for an account and sets it to less than the
- resources that are currently being consumed, the existing VMs/templates/volumes are not
- destroyed. Limits are imposed only if the user under that account tries to execute a new
- operation using any of these resources. For example, the existing behavior in the case of
- a VM is:
-
-
- migrateVirtualMachine: The users under that account will be able to migrate the
- running VM into any other host without facing any limit issue.
-
-
- recoverVirtualMachine: Destroyed VMs cannot be recovered.
-
-
-
-
- For any resource type, if a domain has limit X, sub-domains or accounts under that
- domain can have their own limits. However, the sum of resources allocated to the sub-domains
- or accounts under the domain at any point in time should not exceed the value X.
- For example, if a domain has a CPU limit of 40, the sub-domain D1 and account A1
- can each have a limit of 30, but at any point in time the resources allocated to D1 and A1
- together should not exceed the limit of 40.
-
-
- If any operation needs to pass through two or more resource limit checks, the
- lower of the two limits will be enforced. For example: if an account has a VM limit of 10 and
- a CPU limit of 20, and a user under that account requests 5 VMs of 4 CPUs each, the user
- can deploy the 5 VMs because the VM limit of 10 allows it. However, the user cannot deploy
- any more instances afterwards because the CPU limit has been exhausted.
-
-
-
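The "lower of the two limits" behavior above can be illustrated with a short Python sketch; the helper name and signature are hypothetical, for illustration only, and not CloudStack's actual allocator code:

```python
def vms_deployable(vm_limit, cpu_limit, vms_used, cpus_used, cpus_per_vm):
    """How many more VMs of the given size fit under both limits.

    The stricter (lower) of the two remaining headrooms wins."""
    by_vm_limit = vm_limit - vms_used
    by_cpu_limit = (cpu_limit - cpus_used) // cpus_per_vm
    return max(0, min(by_vm_limit, by_cpu_limit))

# Account with a VM limit of 10 and a CPU limit of 20, deploying 4-CPU VMs:
print(vms_deployable(10, 20, 0, 0, 4))   # 5: the CPU limit is the binding one
print(vms_deployable(10, 20, 5, 20, 4))  # 0: the CPU limit is now exhausted
```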
-
- Limiting Resource Usage in a Domain
- &PRODUCT; allows the configuration of limits on a domain basis. With a domain limit in
- place, all users still have their account limits. They are additionally limited, as a group,
- to not exceed the resource limits set on their domain. Domain limits aggregate the usage of
- all accounts in the domain as well as all the accounts in all the sub-domains of that domain.
- Limits set at the root domain level apply to the sum of resource usage by the accounts in all
- the domains and sub-domains below that root domain.
- To set a domain limit:
-
-
- Log in to the &PRODUCT; UI.
-
-
- In the left navigation tree, click Domains.
-
-
- Select the domain you want to modify. The current domain limits are displayed.
- A value of -1 shows that there is no limit in place.
-
-
- Click the Edit button
-
-
-
-
- editbutton.png: edits the settings.
-
-
-
-
- Edit the following as per your requirement:
-
-
-
-
- Parameter Name
- Description
-
-
-
-
- Instance Limits
- The number of instances that can be used in a domain.
-
-
- Public IP Limits
-
- The number of public IP addresses that can be used in a
- domain.
-
-
- Volume Limits
- The number of disk volumes that can be created in a domain.
-
-
-
- Snapshot Limits
- The number of snapshots that can be created in a domain.
-
-
- Template Limits
- The number of templates that can be registered in a
- domain.
-
-
- VPC limits
- The number of VPCs that can be created in a domain.
-
-
- CPU limits
-
- The number of CPU cores that can be used for a domain.
-
-
-
- Memory limits (MB)
-
- The amount of RAM that can be used for a domain.
-
-
-
- Primary Storage limits (GB)
-
- The primary storage space that can be used for a domain.
-
-
-
- Secondary Storage limits (GB)
-
- The secondary storage space that can be used for a domain.
-
-
-
-
-
-
-
- Click Apply.
-
-
-
-
- Default Account Resource Limits
- You can limit resource use by accounts. The default limits are set by using Global
- configuration parameters, and they affect all accounts within a cloud. The relevant parameters
- are those beginning with max.account, for example: max.account.snapshots.
- To override a default limit for a particular account, set a per-account resource
- limit.
-
-
- Log in to the &PRODUCT; UI.
-
-
- In the left navigation tree, click Accounts.
-
-
- Select the account you want to modify. The current limits are displayed.
- A value of -1 shows that there is no limit in place.
-
-
- Click the Edit button.
-
-
-
-
- editbutton.png: edits the settings
-
-
-
-
- Edit the following as per your requirement:
-
-
-
-
- Parameter Name
- Description
-
-
-
-
- Instance Limits
- The number of instances that can be used in an account.
- The default is 20.
-
-
- Public IP Limits
-
- The number of public IP addresses that can be used in an account.
- The default is 20.
-
-
- Volume Limits
- The number of disk volumes that can be created in an account.
- The default is 20.
-
-
- Snapshot Limits
- The number of snapshots that can be created in an account.
- The default is 20.
-
-
- Template Limits
- The number of templates that can be registered in an account.
- The default is 20.
-
-
- VPC limits
- The number of VPCs that can be created in an account.
- The default is 20.
-
-
- CPU limits
-
- The number of CPU cores that can be used for an account.
- The default is 40.
-
-
- Memory limits (MB)
-
- The amount of RAM that can be used for an account.
- The default is 40960.
-
-
- Primary Storage limits (GB)
-
- The primary storage space that can be used for an account.
- The default is 200.
-
-
- Secondary Storage limits (GB)
-
- The secondary storage space that can be used for an account.
- The default is 400.
-
-
-
-
-
-
- Click Apply.
-
-
-
-
diff --git a/docs/en-US/linux-installation.xml b/docs/en-US/linux-installation.xml
deleted file mode 100644
index 28be32dad72..00000000000
--- a/docs/en-US/linux-installation.xml
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Linux OS Installation
- Use the following steps to begin the Linux OS installation:
-
-
- Download the script file cloud-set-guest-password:
-
-
- Linux:
-
-
-
- Windows:
-
-
-
-
-
- Copy this file to /etc/init.d.
- On some Linux distributions, copy the file to
- /etc/rc.d/init.d.
-
-
- Run the following command to make the script executable:
- chmod +x /etc/init.d/cloud-set-guest-password
-
-
- Depending on the Linux distribution, continue with the appropriate step.
-
-
- On Fedora, CentOS/RHEL, and Debian, run:
- chkconfig --add cloud-set-guest-password
-
-
- On Ubuntu with VMware tools, link the script file to the
- /etc/network/if-up and /etc/network/if-down
- folders, and run the script:
- #ln -s /etc/init.d/cloud-set-guest-password /etc/network/if-up/cloud-set-guest-password
-#ln -s /etc/init.d/cloud-set-guest-password /etc/network/if-down/cloud-set-guest-password
-
-
- If you are using Ubuntu 11.04, create a directory called
- /var/lib/dhcp3 on your Ubuntu machine.
- This is to work around a known issue with this version of
- Ubuntu.
- Run the following command:
- sudo update-rc.d cloud-set-guest-password defaults 98
-
-
- On all Ubuntu versions, run:
- sudo update-rc.d cloud-set-guest-password defaults 98
- To test, run mkpasswd and check whether a
- new password is generated. If the mkpasswd command does not exist,
- run sudo apt-get install whois or sudo apt-get install
- mkpasswd, depending on your Ubuntu version.
-
-
-
-
-
diff --git a/docs/en-US/load-balancer-rules.xml b/docs/en-US/load-balancer-rules.xml
deleted file mode 100644
index 884647c6f8b..00000000000
--- a/docs/en-US/load-balancer-rules.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Load Balancer Rules
- A &PRODUCT; user or administrator may create load balancing rules that balance traffic
- received at a public IP to one or more VMs. A user creates a rule, specifies an algorithm, and
- assigns the rule to a set of VMs.
-
- If you create load balancing rules while using a network service offering that includes an
- external load balancer device such as NetScaler, and later change the network service offering
- to one that uses the &PRODUCT; virtual router, you must create a firewall rule on the virtual
- router for each of your existing load balancing rules so that they continue to
- function.
-
-
-
-
-
-
diff --git a/docs/en-US/log-in-root-admin.xml b/docs/en-US/log-in-root-admin.xml
deleted file mode 100644
index 0243bd645fe..00000000000
--- a/docs/en-US/log-in-root-admin.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Logging In as the Root Administrator
- After the Management Server software is installed and running, you can run the &PRODUCT; user interface. This UI is there to help you provision, view, and manage your cloud infrastructure.
-
- Open your favorite Web browser and go to this URL. Substitute the IP address of your own Management Server:
- http://<management-server-ip-address>:8080/client
- After logging into a fresh Management Server installation, a guided tour splash screen appears. On later visits, you’ll be taken directly into the Dashboard.
-
- If you see the first-time splash screen, choose one of the following.
-
- Continue with basic setup. Choose this if you're just trying &PRODUCT;, and you want a guided walkthrough of the simplest possible configuration so that you can get started right away. We'll help you set up a cloud with the following features: a single machine that runs &PRODUCT; software and uses NFS to provide storage; a single machine running VMs under the XenServer or KVM hypervisor; and a shared public network.
- The prompts in this guided tour should give you all the information you need, but if you want just a bit more detail, you can follow along in the Trial Installation Guide.
-
- I have used &PRODUCT; before. Choose this if you have already gone through a design phase and planned a more sophisticated deployment, or you are ready to start scaling up a trial cloud that you set up earlier with the basic setup screens. In the Administrator UI, you can start using the more powerful features of &PRODUCT;, such as advanced VLAN networking, high availability, additional network elements such as load balancers and firewalls, and support for multiple hypervisors including Citrix XenServer, KVM, and VMware vSphere.
- The root administrator Dashboard appears.
-
-
-
- You should set a new root administrator password. If you chose basic setup, you’ll be prompted to create a new password right away. If you chose experienced user, use the steps in .
-
- You are logging in as the root administrator. This account manages the &PRODUCT; deployment, including physical infrastructure. The root administrator can modify configuration settings to change basic functionality, create or delete user accounts, and take many actions that should be performed only by an authorized person. Please change the default password to a new, unique password.
-
-
diff --git a/docs/en-US/log-in.xml b/docs/en-US/log-in.xml
deleted file mode 100644
index 84328ce4d45..00000000000
--- a/docs/en-US/log-in.xml
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Log In to the UI
- &PRODUCT; provides a web-based UI that can be used by both administrators and end users. The appropriate version of the UI is displayed depending on the credentials used to log in. The UI is available in popular browsers including IE7, IE8, IE9, Firefox 3.5+, Firefox 4, Safari 4, and Safari 5. The URL is: (substitute your own management server IP address)
- http://<management-server-ip-address>:8080/client
- On a fresh Management Server installation, a guided tour splash screen appears. On later visits, you’ll see a login screen where you specify the following to proceed to your Dashboard:
-
- Username
- The user ID of your account. The default username is admin.
-
-
- Password
- The password associated with the user ID. The password for the default username is password.
-
-
- Domain
- If you are a root user, leave this field blank.
-
- If you are a user in the sub-domains, enter the full path to the domain, excluding the root domain.
- For example, suppose multiple levels are created under the root domain, such as Comp1/hr and Comp1/sales. The users in the Comp1 domain should enter Comp1 in the Domain field, whereas the users in the Comp1/sales domain should enter Comp1/sales.
- For more guidance about the choices that appear when you log in to this UI, see Logging In as the Root Administrator.
-
-
-
-
-
diff --git a/docs/en-US/long-running-job-events.xml b/docs/en-US/long-running-job-events.xml
deleted file mode 100644
index cae2b747586..00000000000
--- a/docs/en-US/long-running-job-events.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Long Running Job Events
- The events log records three types of standard events.
-
- INFO. This event is generated when an operation has been successfully performed.
- WARN. This event is generated in the following circumstances.
-
- When a network is disconnected while monitoring a template download.
- When a template download is abandoned.
- When an issue on the storage server causes the volumes to fail over to the mirror storage server.
-
-
- ERROR. This event is generated when an operation has not been successfully performed
-
-
-
-
diff --git a/docs/en-US/lxc-install.xml b/docs/en-US/lxc-install.xml
deleted file mode 100644
index 40f6a0aaa69..00000000000
--- a/docs/en-US/lxc-install.xml
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
- %BOOK_ENTITIES;
- ]>
-
-
-
-
- LXC Installation and Configuration
-
- System Requirements for LXC Hosts
- LXC requires the Linux kernel cgroups functionality, which is available starting with kernel version 2.6.24. Although you are not required to run these distributions, the following are recommended:
-
- CentOS / RHEL: 6.3
- Ubuntu: 12.04(.1)
-
- The main requirement for LXC hypervisors is the libvirt and Qemu version. No matter what
- Linux distribution you are using, make sure the following requirements are met:
-
- libvirt: 1.0.0 or higher
- Qemu/KVM: 1.0 or higher
-
- The default bridge in &PRODUCT; is the Linux native bridge implementation (bridge module). &PRODUCT; includes an option to work with Open vSwitch; the requirements are listed below:
-
- libvirt: 1.0.0 or higher
- openvswitch: 1.7.1 or higher
-
- In addition, the following hardware requirements apply:
-
- Within a single cluster, the hosts must be of the same distribution version.
- All hosts within a cluster must be homogeneous. The CPUs must be of the same type, count, and feature flags.
- Must support HVM (Intel-VT or AMD-V enabled)
- 64-bit x86 CPU (more cores results in better performance)
- 4 GB of memory
- At least 1 NIC
- When you deploy &PRODUCT;, the hypervisor host must not have any VMs already running
-
-
-
- LXC Installation Overview
- LXC does not have any native system VMs; instead, KVM is used to run system VMs. This means that your host will need to support both LXC and KVM, and thus most of the installation and configuration will be identical to the KVM installation. The material in this section doesn't duplicate the KVM installation docs; it provides the &PRODUCT;-specific steps that are needed to prepare a KVM host to work with &PRODUCT;.
- Before continuing, make sure that you have applied the latest updates to your host.
- It is NOT recommended to run services on this host not controlled by &PRODUCT;.
- The procedure for installing an LXC Host is:
-
- Prepare the Operating System
- Install and configure libvirt
- Configure Security Policies (AppArmor and SELinux)
- Install and configure the Agent
-
-
-
-
-
-
- Install and configure the Agent
- To manage LXC instances on the host, &PRODUCT; uses an Agent. This Agent communicates with the Management Server and controls all the instances on the host.
- First we start by installing the agent:
- In RHEL or CentOS:
- $ yum install cloudstack-agent
- In Ubuntu:
- $ apt-get install cloudstack-agent
- The next step is to update the Agent configuration settings, which are in /etc/cloudstack/agent/agent.properties
-
-
- Set the Agent to run in LXC mode:
- hypervisor.type=lxc
-
-
- Optional: If you would like to use direct networking (instead of the default bridge networking), configure these lines:
- libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.DirectVifDriver
- network.direct.source.mode=private
- network.direct.device=eth0
-
-
- The host is now ready to be added to a cluster. This is covered in a later section, see . It is recommended that you continue to read the documentation before adding the host!
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/lxc-topology-req.xml b/docs/en-US/lxc-topology-req.xml
deleted file mode 100644
index 315863dd34c..00000000000
--- a/docs/en-US/lxc-topology-req.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- LXC Topology Requirements
- The Management Servers communicate with LXC hosts on port 22 (ssh).
-
diff --git a/docs/en-US/maintain-hypervisors-on-hosts.xml b/docs/en-US/maintain-hypervisors-on-hosts.xml
deleted file mode 100644
index 43f3f790733..00000000000
--- a/docs/en-US/maintain-hypervisors-on-hosts.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Maintaining Hypervisors on Hosts
- When running hypervisor software on hosts, be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor’s support channel, and apply patches as soon as possible after they are released. &PRODUCT; will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.
- The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
- (XenServer) For more information, see Highly Recommended Hotfixes for XenServer in the &PRODUCT; Knowledge Base.
-
diff --git a/docs/en-US/maintenance-mode-for-primary-storage.xml b/docs/en-US/maintenance-mode-for-primary-storage.xml
deleted file mode 100644
index 54c3a0d8901..00000000000
--- a/docs/en-US/maintenance-mode-for-primary-storage.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Maintenance Mode for Primary Storage
- Primary storage may be placed into maintenance mode. This is useful, for example, to replace faulty RAM in a storage device. Maintenance mode for a storage device will first stop any new guests from being provisioned on the device, and then stop all guests that have any volume on it. When all such guests are stopped, the storage device is in maintenance mode and may be shut down. When the storage device is online again, you may cancel maintenance mode for the device. &PRODUCT; will bring the device back online and attempt to start all guests that were running at the time of entry into maintenance mode.
-
diff --git a/docs/en-US/making-api-request.xml b/docs/en-US/making-api-request.xml
deleted file mode 100644
index 49ea158bb21..00000000000
--- a/docs/en-US/making-api-request.xml
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Making API Requests
- All &PRODUCT; API requests are submitted in the form of an HTTP GET/POST with an associated command and any parameters. A request is composed of the following, whether in HTTP or HTTPS:
-
-
- &PRODUCT; API URL: This is the web services API entry point (for example, http://www.cloud.com:8080/client/api)
- Command: The web services command you wish to execute, such as start a virtual machine or create a disk volume
- Parameters: Any additional required or optional parameters for the command
-
- A sample API GET request looks like the following:
- http://localhost:8080/client/api?command=deployVirtualMachine&serviceOfferingId=1&diskOfferingId=1&templateId=2&zoneId=4&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
-
- Or in a more readable format:
-
-1. http://localhost:8080/client/api
-2. ?command=deployVirtualMachine
-3. &serviceOfferingId=1
-4. &diskOfferingId=1
-5. &templateId=2
-6. &zoneId=4
-7. &apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXqjB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ
-8. &signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
-
- The first line is the &PRODUCT; API URL. This is the Cloud instance you wish to interact with.
- The second line refers to the command you wish to execute. In our example, we are attempting to deploy a new virtual machine. It is separated from the &PRODUCT; API URL by a question mark (?).
- Lines 3-6 are the parameters for this given command. To see the command and its request parameters, please refer to the appropriate section in the &PRODUCT; API documentation. Each parameter field-value pair (field=value) is preceded by an ampersand character (&).
- Line 7 is the user API Key that uniquely identifies the account. See Signing API Requests on page 7.
- Line 8 is the signature hash created to authenticate the user account executing the API command. See Signing API Requests on page 7.
-
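Assembling such a request programmatically is mostly a matter of URL-encoding the field=value pairs. A minimal Python sketch follows; the key and signature values are placeholders, and computing the actual signature is covered under Signing API Requests:

```python
from urllib.parse import urlencode

base_url = "http://localhost:8080/client/api"
params = {
    "command": "deployVirtualMachine",
    "serviceOfferingId": "1",
    "diskOfferingId": "1",
    "templateId": "2",
    "zoneId": "4",
    "apiKey": "YOUR-API-KEY",       # placeholder, not a real key
    "signature": "YOUR-SIGNATURE",  # placeholder; see Signing API Requests
}
# urlencode joins the pairs with '&' and percent-encodes the values:
request_url = f"{base_url}?{urlencode(params)}"
print(request_url)
```

Fetching `request_url` over HTTP would then submit the command exactly as in the sample request above.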
-
diff --git a/docs/en-US/manage-cloud.xml b/docs/en-US/manage-cloud.xml
deleted file mode 100644
index 6bc45e21de2..00000000000
--- a/docs/en-US/manage-cloud.xml
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Managing the Cloud
-
-
-
-
-
-
-
diff --git a/docs/en-US/management-server-install-client.xml b/docs/en-US/management-server-install-client.xml
deleted file mode 100644
index 2c5ded76352..00000000000
--- a/docs/en-US/management-server-install-client.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Install the Management Server on the First Host
- The first step in installation, whether you are installing the Management Server on one host
- or many, is to install the software on a single node.
-
- If you are planning to install the Management Server on multiple nodes for high
- availability, do not proceed to the additional nodes yet. That step will come later.
-
- The &PRODUCT; Management Server can be installed using either RPM or DEB packages. These
- packages pull in, as dependencies, everything you need to run the Management Server.
-
- Install on CentOS/RHEL
- We start by installing the required packages:
- yum install cloudstack-management
-
-
- Install on Ubuntu
- apt-get install cloudstack-management
-
-
-
- Downloading vhd-util
- This procedure is required only for installations where XenServer is installed on the
- hypervisor hosts.
- Before setting up the Management Server, download vhd-util from vhd-util.
- If the Management Server is RHEL or CentOS, copy vhd-util to
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver.
- If the Management Server is Ubuntu, copy vhd-util to
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver.
-
-
diff --git a/docs/en-US/management-server-install-complete.xml b/docs/en-US/management-server-install-complete.xml
deleted file mode 100644
index 8f4aa6f68de..00000000000
--- a/docs/en-US/management-server-install-complete.xml
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Installation Complete! Next Steps
- Congratulations! You have now installed &PRODUCT; Management Server and the database it uses to persist system data.
-
-
-
-
- installation-complete.png: Finished installs with single Management Server and multiple Management Servers
-
- What should you do next?
-
- Even without adding any cloud infrastructure, you can run the UI to get a feel for what's offered and how you will interact with &PRODUCT; on an ongoing basis. See Log In to the UI.
- When you're ready, add the cloud infrastructure and try running some virtual machines on it, so you can watch how &PRODUCT; manages the infrastructure. See Provision Your Cloud Infrastructure.
-
-
diff --git a/docs/en-US/management-server-install-db-external.xml b/docs/en-US/management-server-install-db-external.xml
deleted file mode 100644
index 29507209fbf..00000000000
--- a/docs/en-US/management-server-install-db-external.xml
+++ /dev/null
@@ -1,145 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Install the Database on a Separate Node
- This section describes how to install MySQL on a standalone machine, separate from the
- Management Server. This technique is intended for a deployment that includes several Management
- Server nodes. If you have a single-node Management Server deployment, you will typically use the
- same node for MySQL. See .
-
- The management server doesn't require a specific distribution for the MySQL node. You can
- use a distribution or Operating System of your choice. Using the same distribution as the
- management server is recommended, but not required. See .
-
-
-
- Install MySQL from the package repository from your distribution:
- On RHEL or CentOS:
- yum install mysql-server
- On Ubuntu:
- apt-get install mysql-server
-
-
- Edit the MySQL configuration (/etc/my.cnf or /etc/mysql/my.cnf, depending on your OS)
- and insert the following lines in the [mysqld] section. You can put these lines below the
- datadir line. The max_connections parameter should be set to 350 multiplied by the number of
- Management Servers you are deploying. This example assumes two Management Servers.
-
- On Ubuntu, you can also create /etc/mysql/conf.d/cloudstack.cnf file and add these
- directives there. Don't forget to add [mysqld] on the first line of the file.
-
- innodb_rollback_on_timeout=1
-innodb_lock_wait_timeout=600
-max_connections=700
-log-bin=mysql-bin
-binlog-format = 'ROW'
-bind-address = 0.0.0.0
-
-
- Start or restart MySQL to put the new configuration into effect.
- On RHEL/CentOS, MySQL doesn't automatically start after installation. Start it
- manually.
- service mysqld start
- On Ubuntu, restart MySQL.
- service mysql restart
-
-
- (CentOS and RHEL only; not required on Ubuntu)
-
- On RHEL and CentOS, MySQL does not set a root password by default. It is very strongly
- recommended that you set a root password as a security precaution.
-
- Run the following command to secure your installation. You can answer "Y" to all
- questions except "Disallow root login remotely?". Remote root login is required to set up
- the databases.
- mysql_secure_installation
-
-
- If a firewall is present on the system, open TCP port 3306 so external MySQL connections
- can be established.
- On Ubuntu, UFW is the default firewall. Open the port with this command:
- ufw allow mysql
- On RHEL/CentOS:
-
-
- Edit the /etc/sysconfig/iptables file and add the following line at the beginning of
- the INPUT chain.
- -A INPUT -p tcp --dport 3306 -j ACCEPT
-
-
- Now reload the iptables rules.
- service iptables restart
-
-
-
-
- Return to the root shell on your first Management Server.
-
-
- Set up the database. The following command creates the cloud user on the
- database.
-
-
- In dbpassword, specify the password to be assigned to the cloud user. You can choose
- to provide no password.
-
-
- In deploy-as, specify the username and password of the user deploying the database.
- In the following command, it is assumed the root user is deploying the database and
- creating the cloud user.
-
-
- (Optional) For encryption_type, use file or web to indicate the technique used to
- pass in the database encryption password. Default: file. See .
-
-
- (Optional) For management_server_key, substitute the default key that is used to
- encrypt confidential parameters in the &PRODUCT; properties file. Default: password. It
- is highly recommended that you replace this with a more secure value. See About Password
- and Key Encryption.
-
-
- (Optional) For database_key, substitute the default key that is used to encrypt
- confidential parameters in the &PRODUCT; database. Default: password. It is highly
- recommended that you replace this with a more secure value. See .
-
-
- (Optional) For management_server_ip, you may explicitly specify cluster management
- server node IP. If not specified, the local IP address will be used.
-
-
- cloudstack-setup-databases cloud:<dbpassword>@<ip address mysql server> \
---deploy-as=root:<password> \
--e <encryption_type> \
--m <management_server_key> \
--k <database_key> \
--i <management_server_ip>
- When this script is finished, you should see a message like “Successfully initialized
- the database.”
-
-
-
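The my.cnf sizing rule above (350 connections per Management Server) can be sketched as a quick shell computation; the server count here is a hypothetical value matching the two-server example in the deleted text:

```shell
# Sizing sketch for the my.cnf max_connections directive:
# 350 connections per Management Server (2 servers assumed here).
MGMT_SERVERS=2
MAX_CONNECTIONS=$((350 * MGMT_SERVERS))
echo "max_connections=$MAX_CONNECTIONS"   # max_connections=700
```

The resulting value is what goes into the `[mysqld]` section shown above.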
diff --git a/docs/en-US/management-server-install-db-local.xml b/docs/en-US/management-server-install-db-local.xml
deleted file mode 100644
index ff5ab60b91f..00000000000
--- a/docs/en-US/management-server-install-db-local.xml
+++ /dev/null
@@ -1,167 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Install the Database on the Management Server Node
- This section describes how to install MySQL on the same machine with the Management Server.
- This technique is intended for a simple deployment that has a single Management Server node. If
- you have a multi-node Management Server deployment, you will typically use a separate node for
- MySQL. See .
-
-
- Install MySQL from the package repository of your distribution:
- On RHEL or CentOS:
- yum install mysql-server
- On Ubuntu:
- apt-get install mysql-server
-
-
- Open the MySQL configuration file. The configuration file is /etc/my.cnf or
- /etc/mysql/my.cnf, depending on your OS.
-
-
- Insert the following lines in the [mysqld] section.
- You can put these lines below the datadir line. The max_connections parameter should be
- set to 350 multiplied by the number of Management Servers you are deploying. This example
- assumes one Management Server.
-
- On Ubuntu, you can also create a file /etc/mysql/conf.d/cloudstack.cnf and add these
- directives there. Don't forget to add [mysqld] on the first line of the file.
-
- innodb_rollback_on_timeout=1
-innodb_lock_wait_timeout=600
-max_connections=350
-log-bin=mysql-bin
-binlog-format = 'ROW'
-
-
- Start or restart MySQL to put the new configuration into effect.
- On RHEL/CentOS, MySQL doesn't automatically start after installation. Start it
- manually.
- service mysqld start
- On Ubuntu, restart MySQL.
- service mysql restart
-
-
- (CentOS and RHEL only; not required on Ubuntu)
-
- On RHEL and CentOS, MySQL does not set a root password by default. It is very strongly
- recommended that you set a root password as a security precaution.
-
- Run the following command to secure your installation. You can answer "Y" to all
- questions.
- mysql_secure_installation
-
-
- &PRODUCT; can be blocked by security mechanisms, such as SELinux. Disable SELinux to
- ensure that the Agent has all the required permissions.
- Configure SELinux (RHEL and CentOS):
-
-
- Check whether SELinux is installed on your machine. If not, you can skip this
- section.
- In RHEL or CentOS, SELinux is installed and enabled by default. You can verify this
- with:
- $ rpm -qa | grep selinux
-
-
- Set the SELINUX variable in /etc/selinux/config to
- "permissive". This ensures that the permissive setting will be maintained after a system
- reboot.
- In RHEL or CentOS:
- vi /etc/selinux/config
- Change the following line
- SELINUX=enforcing
- to this:
- SELINUX=permissive
-
-
- Set SELinux to permissive starting immediately, without requiring a system
- reboot.
- $ setenforce permissive
-
-
-
-
- Set up the database. The following command creates the "cloud" user on the
- database.
-
-
- In dbpassword, specify the password to be assigned to the "cloud" user. You can
- choose to provide no password although that is not recommended.
-
-
- In deploy-as, specify the username and password of the user deploying the database.
- In the following command, it is assumed the root user is deploying the database and
- creating the "cloud" user.
-
-
- (Optional) For encryption_type, use file or web to indicate the technique used to
- pass in the database encryption password. Default: file. See .
-
-
- (Optional) For management_server_key, substitute the default key that is used to
- encrypt confidential parameters in the &PRODUCT; properties file. Default: password. It
- is highly recommended that you replace this with a more secure value. See .
-
-
- (Optional) For database_key, substitute the default key that is used to encrypt
- confidential parameters in the &PRODUCT; database. Default: password. It is highly
- recommended that you replace this with a more secure value. See .
-
-
- (Optional) For management_server_ip, you may explicitly specify cluster management
- server node IP. If not specified, the local IP address will be used.
-
-
- cloudstack-setup-databases cloud:<dbpassword>@localhost \
---deploy-as=root:<password> \
--e <encryption_type> \
--m <management_server_key> \
--k <database_key> \
--i <management_server_ip>
- When this script is finished, you should see a message like “Successfully initialized
- the database.”
-
- If the script is unable to connect to the MySQL database, check
- the "localhost" loopback address in /etc/hosts. It should
- be pointing to the IPv4 loopback address "127.0.0.1" and not the IPv6 loopback
- address ::1. Alternatively, reconfigure MySQL to bind to the IPv6 loopback
- interface.
-
-
-
-
- If you are running the KVM hypervisor on the same machine with the Management Server,
- edit /etc/sudoers and add the following line:
- Defaults:cloud !requiretty
-
-
- Now that the database is set up, you can finish configuring the OS for the Management
- Server. This command will set up iptables, sudoers, and start the Management Server.
- # cloudstack-setup-management
- You should see the message “&PRODUCT; Management Server setup is done.”
-
-
-
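The troubleshooting note above (localhost must resolve to the IPv4 loopback 127.0.0.1, not ::1) can be checked mechanically. This sketch runs against a sample file so it is self-contained; on a real Management Server, point `HOSTS` at /etc/hosts instead:

```shell
# Check that "localhost" maps to the IPv4 loopback, a prerequisite for
# cloudstack-setup-databases connecting over localhost.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n::1\tip6-localhost\n' > "$HOSTS"   # sample data
if grep -qE '^127\.0\.0\.1[[:space:]]+localhost' "$HOSTS"; then
  LOOPBACK=ipv4
else
  LOOPBACK=other
fi
rm -f "$HOSTS"
echo "$LOOPBACK"
```

If the check reports `other`, either fix the /etc/hosts entry or reconfigure MySQL to bind to the IPv6 loopback, as the note suggests.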
diff --git a/docs/en-US/management-server-install-db.xml b/docs/en-US/management-server-install-db.xml
deleted file mode 100644
index 9d41af2562b..00000000000
--- a/docs/en-US/management-server-install-db.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Install the database server
- The &PRODUCT; management server uses a MySQL database server to store its data.
- When you are installing the management server on a single node, you can install the MySQL server locally.
- For an installation that has multiple management server nodes, we assume the MySQL database also runs on a separate node.
-
- &PRODUCT; has been tested with MySQL 5.1 and 5.5. These versions are included in RHEL/CentOS and Ubuntu.
-
-
-
diff --git a/docs/en-US/management-server-install-flow.xml b/docs/en-US/management-server-install-flow.xml
deleted file mode 100644
index cd73c69e587..00000000000
--- a/docs/en-US/management-server-install-flow.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Management Server Installation
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/management-server-install-multi-node.xml b/docs/en-US/management-server-install-multi-node.xml
deleted file mode 100644
index 480d84ea94f..00000000000
--- a/docs/en-US/management-server-install-multi-node.xml
+++ /dev/null
@@ -1,69 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Prepare and Start Additional Management Servers
- For your second and subsequent Management Servers, you will install the Management Server
- software, connect it to the database, and set up the OS for the Management Server.
-
-
- Perform the steps in and or as
- appropriate.
-
-
- This step is required only for installations where XenServer is installed on the hypervisor hosts.
- Download vhd-util from vhd-util
- Copy vhd-util to
- /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver.
-
-
- Ensure that necessary services are started and set to start on boot.
- # service rpcbind start
-# service nfs start
-# chkconfig nfs on
-# chkconfig rpcbind on
-
-
-
-
- Configure the database client. Note the absence of the --deploy-as argument in this
- case. (For more details about the arguments to this command, see .)
- # cloudstack-setup-databases cloud:dbpassword@dbhost -e encryption_type -m management_server_key -k database_key -i management_server_ip
-
-
-
- Configure the OS and start the Management Server:
- # cloudstack-setup-management
- The Management Server on this node should now be running.
-
-
- Repeat these steps on each additional Management Server.
-
-
- Be sure to configure a load balancer for the Management Servers. See .
-
-
-
diff --git a/docs/en-US/management-server-install-nfs-shares.xml b/docs/en-US/management-server-install-nfs-shares.xml
deleted file mode 100644
index a12e09c3eca..00000000000
--- a/docs/en-US/management-server-install-nfs-shares.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Prepare NFS Shares
- &PRODUCT; needs a place to keep primary and secondary storage (see Cloud Infrastructure Overview). Both of these can be NFS shares. This section tells how to set up the NFS shares before adding the storage to &PRODUCT;.
- Alternative Storage
- NFS is not the only option for primary or secondary storage. For example, you may use Ceph RBD, GlusterFS, iSCSI, and others. The choice of storage system will depend on the choice of hypervisor and whether you are dealing with primary or secondary storage.
-
- The requirements for primary and secondary storage are described in:
-
-
-
-
- A production installation typically uses a separate NFS server. See .
- You can also use the Management Server node as the NFS server. This is more typical of a trial installation, but is technically possible in a larger deployment. See .
-
-
-
diff --git a/docs/en-US/management-server-install-overview.xml b/docs/en-US/management-server-install-overview.xml
deleted file mode 100644
index 5f46b0099bd..00000000000
--- a/docs/en-US/management-server-install-overview.xml
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Management Server Installation Overview
- This section describes installing the Management Server. There are two slightly different installation flows, depending on how many Management Server nodes will be in your cloud:
-
- A single Management Server node, with MySQL on the same node.
- Multiple Management Server nodes, with MySQL on a node separate from the Management Servers.
-
- In either case, each machine must meet the system requirements described in System Requirements.
- For the sake of security, be sure the public Internet can not access port 8096 or port 8250 on the Management Server.
- The procedure for installing the Management Server is:
-
-
- Prepare the Operating System
-
-
- (XenServer only) Download and install vhd-util.
-
- Install the First Management Server
- Install and Configure the MySQL database
- Prepare NFS Shares
- Prepare and Start Additional Management Servers (optional)
- Prepare the System VM Template
-
-
diff --git a/docs/en-US/management-server-install-prepare-os.xml b/docs/en-US/management-server-install-prepare-os.xml
deleted file mode 100644
index 02453a0b207..00000000000
--- a/docs/en-US/management-server-install-prepare-os.xml
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Prepare the Operating System
- The OS must be prepared to host the Management Server using the following steps. These steps must be performed on each Management Server node.
-
- Log in to your OS as root.
-
- Check for a fully qualified hostname.
- hostname --fqdn
- This should return a fully qualified hostname such as "management1.lab.example.org". If it does not, edit /etc/hosts so that it does.
-
-
- Make sure that the machine can reach the Internet.
- ping www.cloudstack.org
-
-
- Turn on NTP for time synchronization.
- NTP is required to synchronize the clocks of the servers in your cloud.
-
-
- Install NTP.
- On RHEL or CentOS:
- yum install ntp
- On Ubuntu:
- apt-get install openntpd
-
-
-
- Repeat all of these steps on every host where the Management Server will be installed.
-
-
diff --git a/docs/en-US/management-server-install-systemvm.xml b/docs/en-US/management-server-install-systemvm.xml
deleted file mode 100644
index 0d930ad62e0..00000000000
--- a/docs/en-US/management-server-install-systemvm.xml
+++ /dev/null
@@ -1,76 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Prepare the System VM Template
- Secondary storage must be seeded with a template that is used for &PRODUCT; system
- VMs.
-
- When copying and pasting a command, be sure the command has pasted as a single line before
- executing. Some document viewers may introduce unwanted line breaks in copied text.
-
-
-
- On the Management Server, run one or more of the following cloud-install-sys-tmplt
- commands to retrieve and decompress the system VM template. Run the command for each
- hypervisor type that you expect end users to run in this Zone.
- If your secondary storage mount point is not named /mnt/secondary, substitute your own
- mount point name.
- If you set the &PRODUCT; database encryption type to "web" when you set up the database,
- you must now add the parameter -s <management-server-secret-key>. See .
- This process will require approximately 5 GB of free space on the local file system and
- up to 30 minutes each time it runs.
-
-
- For XenServer:
- # /usr/lib64/cloud/common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2 -h xenserver -s <optional-management-server-secret-key> -F
-
-
- For vSphere:
- # /usr/lib64/cloud/common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/burbank/burbank-systemvm-08012012.ova -h vmware -s <optional-management-server-secret-key> -F
-
-
- For KVM:
- # /usr/lib64/cloud/common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2 -h kvm -s <optional-management-server-secret-key> -F
-
-
- For LXC:
- # /usr/lib64/cloud/common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2 -h lxc -s <optional-management-server-secret-key> -F
-
-
- On Ubuntu, use the following path instead:
- # /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
-
-
- If you are using a separate NFS server, perform this step. If you are using the
- Management Server as the NFS server, you MUST NOT perform this step.
- When the script has finished, unmount secondary storage and remove the created
- directory.
- # umount /mnt/secondary
-# rmdir /mnt/secondary
-
-
- Repeat these steps for each secondary storage server.
-
-
-
diff --git a/docs/en-US/management-server-lb.xml b/docs/en-US/management-server-lb.xml
deleted file mode 100644
index 13f87560e10..00000000000
--- a/docs/en-US/management-server-lb.xml
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Management Server Load Balancing
- &PRODUCT; can use a load balancer to provide a virtual IP for multiple Management
- Servers. The administrator is responsible for creating the load balancer rules for the
- Management Servers. The application requires persistence or stickiness across multiple sessions.
- The following chart lists the ports that should be load balanced and whether or not persistence
- is required.
- Even if persistence is not required, enabling it is permitted.
-
-
-
-
- Source Port
- Destination Port
- Protocol
- Persistence Required?
-
-
-
-
- 80 or 443
- 8080 (or 20400 with AJP)
- HTTP (or AJP)
- Yes
-
-
- 8250
- 8250
- TCP
- Yes
-
-
- 8096
- 8096
- HTTP
- No
-
-
-
-
- In addition to above settings, the administrator is responsible for setting the 'host' global
- config value from the management server IP to load balancer virtual IP address.
- If the 'host' value is not set to the VIP for Port 8250 and one of your management servers crashes,
- the UI is still available but the system VMs will not be able to contact the management server.
-
-
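The port/persistence table above maps directly onto load balancer configuration. A minimal sketch, assuming HAProxy (the text does not mandate a particular load balancer; server names and IP addresses are hypothetical):

```
listen cloudstack-ui
    bind *:443
    mode http
    balance source              # persistence required on 8080 (UI/API)
    server mgmt1 10.0.0.11:8080 check
    server mgmt2 10.0.0.12:8080 check

listen cloudstack-agents
    bind *:8250
    mode tcp
    balance source              # system VM agents must stick to one server
    server mgmt1 10.0.0.11:8250 check
    server mgmt2 10.0.0.12:8250 check

listen cloudstack-unauth-api
    bind *:8096
    mode http
    balance roundrobin          # no persistence required on 8096
    server mgmt1 10.0.0.11:8096 check
    server mgmt2 10.0.0.12:8096 check
```

As the text notes, the 'host' global configuration value must also be changed to the load balancer VIP for the 8250 path to survive a management server failure.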
diff --git a/docs/en-US/management-server-overview.xml b/docs/en-US/management-server-overview.xml
deleted file mode 100644
index b8e2d53f052..00000000000
--- a/docs/en-US/management-server-overview.xml
+++ /dev/null
@@ -1,76 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Management Server Overview
-
- The Management Server is the &PRODUCT; software that manages cloud
- resources. By interacting with the Management Server through its UI or
- API, you can configure and manage your cloud infrastructure.
-
-
- The Management Server runs on a dedicated server or VM. It controls
- allocation of virtual machines to hosts and assigns storage and IP
- addresses to the virtual machine instances. The Management Server
- runs in a Tomcat container and requires a MySQL database for persistence.
-
-
- The machine must meet the system requirements described in System
- Requirements.
-
- The Management Server:
-
-
-
-
- Provides the web user interface for the administrator and a
- reference user interface for end users.
-
-
-
- Provides the APIs for &PRODUCT;.
-
-
- Manages the assignment of guest VMs to particular hosts.
-
-
-
- Manages the assignment of public and private IP addresses to
- particular accounts.
-
-
-
- Manages the allocation of storage to guests as virtual disks.
-
-
-
- Manages snapshots, templates, and ISO images, possibly
- replicating them across data centers.
-
-
-
- Provides a single point of configuration for the cloud.
-
-
-
diff --git a/docs/en-US/manual-live-migration.xml b/docs/en-US/manual-live-migration.xml
deleted file mode 100644
index 1daa6d3d937..00000000000
--- a/docs/en-US/manual-live-migration.xml
+++ /dev/null
@@ -1,56 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Moving VMs Between Hosts (Manual Live Migration)
- The &PRODUCT; administrator can move a running VM from one host to another without interrupting service to users or going into maintenance mode. This is called manual live migration, and can be done under the following conditions:
-
- The root administrator is logged in. Domain admins and users can not perform manual live migration of VMs.
- The VM is running. Stopped VMs can not be live migrated.
- The destination host must have enough available capacity. If not, the VM will remain in the "migrating" state until memory becomes available.
- (KVM) The VM must not be using local disk storage. (On XenServer and VMware, VM live migration
- with local disk is enabled by &PRODUCT; support for XenMotion and vMotion.)
- (KVM) The destination host must be in the same cluster as the original host.
- (On XenServer and VMware, VM live migration from one cluster to another is enabled by &PRODUCT; support for XenMotion and vMotion.)
-
-
- To manually live migrate a virtual machine
-
- Log in to the &PRODUCT; UI as a user or admin.
- In the left navigation, click Instances.
- Choose the VM that you want to migrate.
- Click the Migrate Instance button.
-
-
-
- Migrateinstance.png: button to migrate an instance
-
-
- From the list of suitable hosts, choose the one to which you want to move the VM.
- If the VM's storage has to be migrated along with the VM, this will be noted in the host
- list. &PRODUCT; will take care of the storage migration for you.
-
- Click OK.
-
-
-
diff --git a/docs/en-US/marvin.xml b/docs/en-US/marvin.xml
deleted file mode 100644
index 8fd2c96fe3f..00000000000
--- a/docs/en-US/marvin.xml
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Marvin
- Marvin is the &PRODUCT; automation framework. It originated as a tool for integration testing but is now also used to build DevCloud as well as to provide a Python &PRODUCT; API binding.
-
- Marvin's complete documentation is on the wiki at https://cwiki.apache.org/CLOUDSTACK/testing-with-python.html
- The source code is located at tools/marvin
-
-
-
diff --git a/docs/en-US/max-result-page-returned.xml b/docs/en-US/max-result-page-returned.xml
deleted file mode 100644
index fdbf63962d4..00000000000
--- a/docs/en-US/max-result-page-returned.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Maximum Result Pages Returned
-
- For each cloud, there is a default upper limit on the number of results that any API command will return in a single page. This is to help prevent overloading the cloud servers and prevent DoS attacks. For example, if the page size limit is 500 and a command returns 10,000 results, the command will return 20 pages.
-
- The default page size limit can be different for each cloud. It is set in the global configuration parameter default.page.size. If your cloud has many users with lots of VMs, you might need to increase the value of this parameter. At the same time, be careful not to set it so high that your site can be taken down by an enormous return from an API call. For more information about how to set global configuration parameters, see "Describe Your Deployment" in the Installation Guide.
- To decrease the page size limit for an individual API command, override the global setting with the page and pagesize parameters, which are available in any list* command (listCapabilities, listDiskOfferings, etc.).
-
- Both parameters must be specified together.
- The value of the pagesize parameter must be smaller than the value of default.page.size. That is, you can not increase the number of possible items in a result page, only decrease it.
-
- For syntax information on the list* commands, see the API Reference.
-
-
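The arithmetic in the example above (page size 500, 10,000 results, 20 pages) is a ceiling division; a quick shell sketch using the same hypothetical numbers:

```shell
# Number of pages a list* API call yields: ceil(total / pagesize).
TOTAL=10000
PAGE_SIZE=500
PAGES=$(( (TOTAL + PAGE_SIZE - 1) / PAGE_SIZE ))
echo "$PAGES pages"   # 20 pages
```

With a non-multiple total (say 10,001 results) the same expression rounds up to 21 pages.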
diff --git a/docs/en-US/migrate-datadisk-volume-new-storage-pool.xml b/docs/en-US/migrate-datadisk-volume-new-storage-pool.xml
deleted file mode 100644
index 1ed6bbd7cd3..00000000000
--- a/docs/en-US/migrate-datadisk-volume-new-storage-pool.xml
+++ /dev/null
@@ -1,78 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Migrating a Data Volume to a New Storage Pool
- There are two situations when you might want to migrate a disk:
-
- Move the disk to new storage, but leave it attached to the same running VM.
- Detach the disk from its current VM, move it to new storage, and attach it to a new VM.
-
-
- Migrating Storage For a Running VM
- (Supported on XenServer and VMware)
-
- Log in to the &PRODUCT; UI as a user or admin.
- In the left navigation bar, click Instances, click the VM name, and click View Volumes.
- Click the volume you want to migrate.
- Detach the disk from the VM.
- See but skip the “reattach” step at the end. You
- will do that after migrating to new storage.
- Click the Migrate Volume button
-
-
-
-
- Migrateinstance.png: button to migrate a volume
-
-
- and choose the destination from the dropdown list.
- Watch for the volume status to change to Migrating, then back to Ready.
-
-
-
- Migrating Storage and Attaching to a Different VM
-
- Log in to the &PRODUCT; UI as a user or admin.
- Detach the disk from the VM.
- See but skip the “reattach” step at the end. You
- will do that after migrating to new storage.
- Click the Migrate Volume button
-
-
-
-
- Migrateinstance.png: button to migrate a volume
-
-
- and choose the destination from the dropdown list.
- Watch for the volume status to change to Migrating, then back to Ready. You can find the
- volume by clicking Storage in the left navigation bar. Make sure that Volumes is
- displayed at the top of the window, in the Select View dropdown.
- Attach the volume to any desired VM running in the same cluster as the new storage server. See
-
-
-
-
-
diff --git a/docs/en-US/migrate-vm-rootvolume-volume-new-storage-pool.xml b/docs/en-US/migrate-vm-rootvolume-volume-new-storage-pool.xml
deleted file mode 100644
index 3bcaff53c63..00000000000
--- a/docs/en-US/migrate-vm-rootvolume-volume-new-storage-pool.xml
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Migrating a VM Root Volume to a New Storage Pool
- (XenServer, VMware) You can live migrate a VM's root disk from one storage pool to another, without stopping the VM first.
- (KVM) When migrating the root disk volume, the VM must first be stopped, and users can not access the VM. After migration is complete, the VM can be restarted.
-
- Log in to the &PRODUCT; UI as a user or admin.
- In the left navigation bar, click Instances, and click the VM name.
- (KVM only) Stop the VM.
- Click the Migrate button
-
-
-
-
- Migrateinstance.png: button to migrate a VM or volume
-
-
- and choose the destination from the dropdown list.
- If the VM's storage has to be migrated along with the VM, this will be noted in the host
- list. &PRODUCT; will take care of the storage migration for you.
- Watch for the volume status to change to Migrating, then back to Running (or Stopped, in the case of KVM). This
- can take some time.
- (KVM only) Restart the VM.
-
-
\ No newline at end of file
diff --git a/docs/en-US/minimum-system-requirements.xml b/docs/en-US/minimum-system-requirements.xml
deleted file mode 100644
index 870ef68eae4..00000000000
--- a/docs/en-US/minimum-system-requirements.xml
+++ /dev/null
@@ -1,74 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Minimum System Requirements
-
- Management Server, Database, and Storage System Requirements
-
- The machines that will run the Management Server and MySQL database must meet the following requirements.
- The same machines can also be used to provide primary and secondary storage, such as via localdisk or NFS.
- The Management Server may be placed on a virtual machine.
-
-
- Operating system:
-
- Preferred: CentOS/RHEL 6.3+ or Ubuntu 12.04(.1)
-
-
- 64-bit x86 CPU (more cores results in better performance)
- 4 GB of memory
- 250 GB of local disk (more results in better capability; 500 GB recommended)
- At least 1 NIC
- Statically allocated IP address
- Fully qualified domain name as returned by the hostname command
-
-
-
- Host/Hypervisor System Requirements
- The host is where the cloud services run in the form of guest virtual machines. Each host is one machine that meets the following requirements:
-
- Must support HVM (Intel-VT or AMD-V enabled).
- 64-bit x86 CPU (more cores results in better performance)
- Hardware virtualization support required
- 4 GB of memory
- 36 GB of local disk
- At least 1 NIC
- If DHCP is used for hosts, ensure that no conflict occurs between DHCP server used for these hosts and the DHCP router created by &PRODUCT;.
- Latest hotfixes applied to hypervisor software
- When you deploy &PRODUCT;, the hypervisor host must not have any VMs already running
- All hosts within a cluster must be homogeneous. The CPUs must be of the same type, count, and feature flags.
-
- Hosts have additional requirements depending on the hypervisor. See the requirements listed at the top of the Installation section for your chosen hypervisor:
-
- Be sure you fulfill the additional hypervisor requirements and installation steps provided in this Guide. Hypervisor hosts must be properly prepared to work with CloudStack. For example, the requirements for XenServer are listed under Citrix XenServer Installation.
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/modify-delete-service-offerings.xml b/docs/en-US/modify-delete-service-offerings.xml
deleted file mode 100644
index b917af48252..00000000000
--- a/docs/en-US/modify-delete-service-offerings.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Modifying or Deleting a Service Offering
- Service offerings cannot be changed once created. This applies to both compute offerings and disk offerings.
- A service offering can be deleted. If it is no longer in use, it is deleted immediately and permanently. If the service offering is still in use, it will remain in the database until all the virtual machines referencing it have been deleted. After deletion by the administrator, a service offering will not be available to end users that are creating new instances.
-
diff --git a/docs/en-US/multi_node_management_server.xml b/docs/en-US/multi_node_management_server.xml
deleted file mode 100644
index 1ff713dbd16..00000000000
--- a/docs/en-US/multi_node_management_server.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Multi-Node Management Server
- The &PRODUCT; Management Server is deployed on one or more front-end servers connected to a single MySQL database. Optionally a pair of hardware load balancers distributes requests from the web. A backup management server set may be deployed using MySQL replication at a remote site to add DR capabilities.
-
-
-
-
- Multi-Node Management Server
-
- The administrator must decide the following.
-
- Whether or not load balancers will be used.
- How many Management Servers will be deployed.
- Whether MySQL replication will be deployed to enable disaster recovery.
-
-
diff --git a/docs/en-US/multi_node_overview.xml b/docs/en-US/multi_node_overview.xml
deleted file mode 100644
index 1eee0377ba9..00000000000
--- a/docs/en-US/multi_node_overview.xml
+++ /dev/null
@@ -1,43 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Management Server Multi-Node Installation Overview
-
- This section describes installing multiple Management Servers and installing MySQL on a node separate from the Management Servers. The machines must meet the system requirements described in System Requirements.
-
- For the sake of security, be sure the public Internet cannot access port 8096 or port 8250 on the Management Server.
-
- The procedure for a multi-node installation is:
-
-
- Prepare the Operating System
- Install the First Management Server
- Install and Configure the Database
- Prepare NFS Shares
- Prepare and Start Additional Management Servers
- Prepare the System VM Template
-
-
-
diff --git a/docs/en-US/multi_site_deployment.xml b/docs/en-US/multi_site_deployment.xml
deleted file mode 100644
index 8ad94aa2a70..00000000000
--- a/docs/en-US/multi_site_deployment.xml
+++ /dev/null
@@ -1,50 +0,0 @@
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Multi-Site Deployment
- The &PRODUCT; platform scales well into multiple sites through the use of zones. The following diagram shows an example of a multi-site deployment.
-
-
-
-
- Example Of A Multi-Site Deployment
-
- Data Center 1 houses the primary Management Server as well as zone 1. The MySQL database is replicated in real time to the secondary Management Server installation in Data Center 2.
-
-
-
-
- Separate Storage Network
-
- This diagram illustrates a setup with a separate storage network. Each server has four NICs, two connected to pod-level network switches and two connected to storage network switches.
- There are two ways to configure the storage network:
-
- Bonded NICs and redundant switches can be deployed for NFS. In NFS deployments, redundant switches and bonded NICs still result in one network (one CIDR block + default gateway address).
- iSCSI can take advantage of two separate storage networks (two CIDR blocks, each with its own default gateway). A multipath iSCSI client can fail over and load balance between the separate storage networks.
-
-
-
-
-
- NIC Bonding And Multipath I/O
-
- This diagram illustrates the differences between NIC bonding and Multipath I/O (MPIO). NIC bonding configuration involves only one network. MPIO involves two separate networks.
-
diff --git a/docs/en-US/multiple-ip-nic.xml b/docs/en-US/multiple-ip-nic.xml
deleted file mode 100644
index 344dc8df16f..00000000000
--- a/docs/en-US/multiple-ip-nic.xml
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Configuring Multiple IP Addresses on a Single NIC
- &PRODUCT; provides you the ability to associate multiple private IP addresses per guest VM
- NIC. In addition to the primary IP, you can assign additional IPs to the guest VM NIC. This
- feature is supported on all the network configurations—Basic, Advanced, and VPC. Security
- Groups, Static NAT and Port forwarding services are supported on these additional IPs.
- As always, you can specify an IP from the guest subnet; if not specified, an IP is
- automatically picked up from the guest VM subnet. You can view the IPs associated with each
- guest VM NIC in the UI. You can apply NAT to these additional guest IPs by using the network
- configuration option in the &PRODUCT; UI. You must specify the NIC to which the IP should be
- associated.
- This feature is supported on XenServer, KVM, and VMware hypervisors. Note that Basic zone
- security groups are not supported on VMware.
-
- Use Cases
- Some of the use cases are described below:
-
-
- Network devices, such as firewalls and load balancers, generally work best when they
- have access to multiple IP addresses on the network interface.
-
-
- Moving private IP addresses between interfaces or instances. Applications that are
- bound to specific IP addresses can be moved between instances.
-
-
- Hosting multiple SSL Websites on a single instance. You can install multiple SSL
- certificates on a single instance, each associated with a distinct IP address.
-
-
-
-
- Guidelines
- To prevent IP conflicts, configure different subnets when multiple networks are connected
- to the same VM.
-
-
- Assigning Additional IPs to a VM
-
-
- Log in to the &PRODUCT; UI.
-
-
- In the left navigation bar, click Instances.
-
-
- Click the name of the instance you want to work with.
-
-
- In the Details tab, click NICs.
-
-
- Click View Secondary IPs.
-
-
- Click Acquire New Secondary IP, and click Yes in the confirmation dialog.
- You need to configure the IP on the guest VM NIC manually. &PRODUCT; will not
- automatically configure the acquired IP address on the VM. Ensure that the IP address
- configuration persists across VM reboots.
- Within a few moments, the new IP address should appear with the state Allocated. You
- can now use the IP address in Port Forwarding or StaticNAT rules.
-
-
-
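 Since &PRODUCT; does not configure the acquired secondary IP inside the guest, you must make it persistent yourself. One way to do this on a RHEL-family guest is an interface alias file; the device name, IP address, and netmask below are purely illustrative:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0:0  (example values only)
DEVICE=eth0:0
BOOTPROTO=static
IPADDR=10.1.1.25
NETMASK=255.255.255.0
ONBOOT=yes
```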
-
- Port Forwarding and StaticNAT Services Changes
- Because multiple IPs can be associated per NIC, you are allowed to select a desired IP for
- the Port Forwarding and StaticNAT services. The default is the primary IP. To enable this
- functionality, an extra optional parameter 'vmguestip' is added to the Port forwarding and
- StaticNAT APIs (enableStaticNat, createIpForwardingRule) to indicate on which IP address NAT
- needs to be configured. If vmguestip is passed, NAT is configured on the specified private IP
- of the VM. If not passed, NAT is configured on the primary IP of the VM.
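 As a sketch of how the extra parameter is passed, the following Python builds a signed enableStaticNat request using &PRODUCT;'s documented HMAC-SHA1 request-signing scheme; the endpoint, keys, and UUIDs are placeholders, not values from a real deployment:

```python
import base64
import hashlib
import hmac
import urllib.parse

def build_signed_request(base_url, api_key, secret_key, command, **params):
    """Build a signed CloudStack API URL (HMAC-SHA1 over the sorted,
    lowercased query string, base64-encoded, then URL-encoded)."""
    all_params = {"command": command, "apikey": api_key, "response": "json", **params}
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(all_params.items())
    )
    digest = hmac.new(secret_key.encode(), query.lower().encode(), hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return f"{base_url}?{query}&signature={signature}"

# Static NAT on a secondary IP: vmguestip selects which private IP is used.
url = build_signed_request(
    "http://mgmt.example.com:8080/client/api",   # hypothetical endpoint
    "API-KEY", "SECRET-KEY",                     # placeholder credentials
    "enableStaticNat",
    ipaddressid="ip-uuid", virtualmachineid="vm-uuid", vmguestip="10.1.1.25",
)
print(url)
```

If vmguestip is omitted, the rule simply falls back to the VM's primary IP.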
-
-
diff --git a/docs/en-US/multiple-ip-range.xml b/docs/en-US/multiple-ip-range.xml
deleted file mode 100644
index 42e0c2a9555..00000000000
--- a/docs/en-US/multiple-ip-range.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- About Multiple IP Ranges
-
- The feature can only be implemented on IPv4 addresses.
-
- &PRODUCT; provides you with the flexibility to add guest IP ranges from different subnets in
- Basic zones and security groups-enabled Advanced zones. For security groups-enabled Advanced
- zones, it implies multiple subnets can be added to the same VLAN. With the addition of this
- feature, you will be able to add IP address ranges from the same subnet or from a different one
- when IP addresses are exhausted. This in turn allows you to employ a higher number of subnets
- and thus reduce the address management overhead. To support this feature, the capability of
- createVlanIpRange API is extended to add IP ranges also from a different
- subnet.
- Ensure that you manually configure the gateway of the new subnet before adding the IP range.
- Note that &PRODUCT; supports only one gateway for a subnet; overlapping subnets are not
- currently supported.
- Use the deleteVlanIpRange API to delete IP ranges. This operation fails if an IP
- from the range being removed is in use. If the range being removed contains the IP address on
- which the DHCP server is running, &PRODUCT; acquires a new IP from the same subnet. If no IP
- is available in the subnet, the remove operation fails.
- This feature is supported on KVM, XenServer, and VMware hypervisors.
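 Since overlapping subnets are not supported, an operator script might validate a candidate range before calling createVlanIpRange; a minimal sketch using Python's ipaddress module:

```python
import ipaddress

def can_add_ip_range(existing_subnets, new_subnet):
    """Reject a new guest IP range whose subnet overlaps an existing one,
    mirroring the rule that overlapping subnets are not supported."""
    new = ipaddress.ip_network(new_subnet)
    if new.version != 4:
        return False  # the feature applies to IPv4 addresses only
    return not any(new.overlaps(ipaddress.ip_network(s)) for s in existing_subnets)

existing = ["10.1.1.0/24"]
print(can_add_ip_range(existing, "10.1.2.0/24"))    # True: disjoint subnet
print(can_add_ip_range(existing, "10.1.1.128/25"))  # False: overlaps 10.1.1.0/24
print(can_add_ip_range(existing, "2001:db8::/64"))  # False: IPv6 not supported
```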
-
diff --git a/docs/en-US/multiple-system-vm-vmware.xml b/docs/en-US/multiple-system-vm-vmware.xml
deleted file mode 100644
index 014dfa1f329..00000000000
--- a/docs/en-US/multiple-system-vm-vmware.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Multiple System VM Support for VMware
- Every &PRODUCT; zone has single System VM for template processing tasks such as downloading templates, uploading templates, and uploading ISOs. In a zone where VMware is being used, additional System VMs can be launched to process VMware-specific tasks such as taking snapshots and creating private templates. The &PRODUCT; management server launches additional System VMs for VMware-specific tasks as the load increases. The management server monitors and weights all commands sent to these System VMs and performs dynamic load balancing and scaling-up of more System VMs.
-
diff --git a/docs/en-US/network-offering-usage-record-format.xml b/docs/en-US/network-offering-usage-record-format.xml
deleted file mode 100644
index a1b0da96221..00000000000
--- a/docs/en-US/network-offering-usage-record-format.xml
+++ /dev/null
@@ -1,43 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Network Offering Usage Record Format
-
- account – name of the account
- accountid – ID of the account
- domainid – ID of the domain in which this account resides
- zoneid – Zone where the usage occurred
- description – A string describing what the usage record is tracking
- usage – String representation of the usage, including the units of usage (e.g. 'Hrs' for hours)
- usagetype – A number representing the usage type (see Usage Types)
- rawusage – A number representing the actual usage in hours
- usageid – ID of the network offering
- offeringid – Network offering ID
- virtualMachineId – The ID of the virtual machine
- startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record
-
-
diff --git a/docs/en-US/network-offerings.xml b/docs/en-US/network-offerings.xml
deleted file mode 100644
index 8c685bfc903..00000000000
--- a/docs/en-US/network-offerings.xml
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Network Offerings
-
- For the most up-to-date list of supported network services, see the &PRODUCT; UI or call
- listNetworkServices.
-
- A network offering is a named set of network services, such as:
-
-
- DHCP
-
-
- DNS
-
-
- Source NAT
-
-
- Static NAT
-
-
- Port Forwarding
-
-
- Load Balancing
-
-
- Firewall
-
-
- VPN
-
-
- (Optional) Name one of several available providers to use for a given service, such as
- Juniper for the firewall
-
-
- (Optional) Network tag to specify which physical network to use
-
-
- When creating a new VM, the user chooses one of the available network offerings, and that
- determines which network services the VM can use.
- The &PRODUCT; administrator can create any number of custom network offerings, in addition
- to the default network offerings provided by &PRODUCT;. By creating multiple custom network
- offerings, you can set up your cloud to offer different classes of service on a single
- multi-tenant physical network. For example, while the underlying physical wiring may be the same
- for two tenants, tenant A may only need simple firewall protection for their website, while
- tenant B may be running a web server farm and require a scalable firewall solution, load
- balancing solution, and alternate networks for accessing the database backend.
-
- If you create load balancing rules while using a network service offering that includes an
- external load balancer device such as NetScaler, and later change the network service offering
- to one that uses the &PRODUCT; virtual router, you must create a firewall rule on the virtual
- router for each of your existing load balancing rules so that they continue to
- function.
-
- When creating a new virtual network, the &PRODUCT; administrator chooses which network
- offering to enable for that network. Each virtual network is associated with one network
- offering. A virtual network can be upgraded or downgraded by changing its associated network
- offering. If you do this, be sure to reprogram the physical network to match.
- &PRODUCT; also has internal network offerings for use by &PRODUCT; system VMs. These network
- offerings are not visible to users but can be modified by administrators.
-
-
diff --git a/docs/en-US/network-rate.xml b/docs/en-US/network-rate.xml
deleted file mode 100644
index 56fe25c04a5..00000000000
--- a/docs/en-US/network-rate.xml
+++ /dev/null
@@ -1,144 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Network Throttling
- Network throttling is the process of controlling the network access and bandwidth usage
- based on certain rules. &PRODUCT; controls this behaviour of the guest networks in the cloud by
- using the network rate parameter. This parameter is defined as the default data transfer rate in
- Mbps (Megabits Per Second) allowed in a guest network. It defines the upper limits for network
- utilization. If the current utilization is below the allowed upper limit, access is granted;
- otherwise it is revoked.
- You can throttle the network bandwidth either to control the usage above a certain limit for
- some accounts, or to control network congestion in a large cloud environment. The network rate
- for your cloud can be configured on the following:
-
-
- Network Offering
-
-
- Service Offering
-
-
- Global parameter
-
-
- If the network rate is set to NULL in the service offering, the value provided in the
- vm.network.throttling.rate global parameter is applied. If the value is set to NULL for the
- network offering, the value provided in the network.throttling.rate global parameter is
- considered.
- For the default public, storage, and management networks, network rate is set to 0. This
- implies that the public, storage, and management networks will have unlimited bandwidth by
- default. For default guest networks, network rate is set to NULL. In this case, network rate is
- defaulted to the global parameter value.
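 The fallback rules above amount to a simple precedence check; a sketch, with illustrative parameter values (your global settings may differ):

```python
def effective_rate(offering_rate, global_rate):
    """Fall back to the global throttling parameter when the
    offering's network rate is NULL (None here)."""
    return global_rate if offering_rate is None else offering_rate

network_throttling_rate = 200     # network.throttling.rate, Mbps (example)
vm_network_throttling_rate = 200  # vm.network.throttling.rate, Mbps (example)

print(effective_rate(None, network_throttling_rate))  # NULL -> global value, 200
print(effective_rate(0, network_throttling_rate))     # 0 means unlimited, kept as-is
print(effective_rate(10, network_throttling_rate))    # explicit 10 Mbps wins
```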
- The following table gives you an overview of how network rate is applied on different types
- of networks in &PRODUCT;.
-
-
-
-
-
-
- Networks
- Network Rate Is Taken from
-
-
-
-
- Guest network of Virtual Router
- Guest Network Offering
-
-
- Public network of Virtual Router
- Guest Network Offering
-
-
- Storage network of Secondary Storage VM
- System Network Offering
-
-
- Management network of Secondary Storage VM
- System Network Offering
-
-
- Storage network of Console Proxy VM
- System Network Offering
-
-
- Management network of Console Proxy VM
- System Network Offering
-
-
- Storage network of Virtual Router
- System Network Offering
-
-
- Management network of Virtual Router
- System Network Offering
-
-
- Public network of Secondary Storage VM
- System Network Offering
-
-
- Public network of Console Proxy VM
- System Network Offering
-
-
- Default network of a guest VM
- Compute Offering
-
-
- Additional networks of a guest VM
- Corresponding Network Offerings
-
-
-
-
- A guest VM must have a default network, and can also have many additional networks.
- Depending on various parameters, such as the host and virtual switch used, you can observe a
- difference in the network rate in your cloud. For example, on a VMware host the actual network
- rate varies based on where the rate is configured (compute offering, network offering, or both);
- the network type (shared or isolated); and traffic direction (ingress or egress).
- The network rate set for a network offering used by a particular network in &PRODUCT; is
- applied to the traffic shaping policy of a port group (say, port group A) for that network:
- a particular subnet or VLAN on the actual network. The virtual routers for that network
- connect to port group A, and by default instances in that network connect to this port
- group as well. However, if an instance is deployed with a compute offering that has a
- network rate set, and this rate is used for the traffic shaping policy of another port
- group for the network (say, port group B), then instances using this compute offering are
- connected to port group B instead of port group A.
- The traffic shaping policy on standard port groups in VMware only applies to the egress
- traffic, and the net effect depends on the type of network used in &PRODUCT;. In shared
- networks, ingress traffic is unlimited for &PRODUCT;, and egress traffic is limited to the rate
- that applies to the port group used by the instance if any. If the compute offering has a
- network rate configured, this rate applies to the egress traffic, otherwise the network rate set
- for the network offering applies. For isolated networks, the network rate set for the network
- offering, if any, effectively applies to the ingress traffic. This is mainly because the network
- rate set for the network offering applies to the egress traffic from the virtual router to the
- instance. The egress traffic is limited by the rate that applies to the port group used by the
- instance if any, similar to shared networks.
- For example:
- Network rate of network offering = 10 Mbps
- Network rate of compute offering = 200 Mbps
- In shared networks, ingress traffic will not be limited for &PRODUCT;, while egress traffic
- will be limited to 200 Mbps. In an isolated network, ingress traffic will be limited to 10 Mbps
- and egress to 200 Mbps.
-
diff --git a/docs/en-US/network-service-providers.xml b/docs/en-US/network-service-providers.xml
deleted file mode 100644
index 32f36ae3d47..00000000000
--- a/docs/en-US/network-service-providers.xml
+++ /dev/null
@@ -1,151 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Network Service Providers
-
- For the most up-to-date list of supported network service providers, see the &PRODUCT; UI
- or call listNetworkServiceProviders.
-
- A service provider (also called a network element) is a hardware or virtual appliance that
- makes a network service possible; for example, a firewall appliance can be installed in the
- cloud to provide firewall service. On a single network, multiple providers can provide the same
- network service. For example, a firewall service may be provided by Cisco or Juniper devices in
- the same physical network.
- You can have multiple instances of the same service provider in a network (say, more than
- one Juniper SRX device).
- If different providers are set up to provide the same service on the network, the
- administrator can create network offerings so users can specify which network service provider
- they prefer (along with the other choices offered in network offerings). Otherwise, &PRODUCT;
- will choose which provider to use whenever the service is called for.
-
- Supported Network Service Providers
- &PRODUCT; ships with an internal list of the supported service providers, and you can
- choose from this list when creating a network offering.
-
-
-
-
-
-
-
-
-
-
-
-
-
- Virtual Router
- Citrix NetScaler
- Juniper SRX
- F5 BigIP
- Host based (KVM/Xen)
- Cisco VNMC
-
-
-
-
- Remote Access VPN
- Yes
- No
- No
- No
- No
- No
-
-
- DNS/DHCP/User Data
- Yes
- No
- No
- No
- No
- No
-
-
- Firewall
- Yes
- No
- Yes
- No
- No
- Yes
-
-
- Load Balancing
- Yes
- Yes
- No
- Yes
- No
- No
-
-
- Elastic IP
- No
- Yes
- No
- No
- No
- No
-
-
- Elastic LB
- No
- Yes
- No
- No
- No
- No
-
-
- Source NAT
- Yes
- No
- Yes
- No
- No
- Yes
-
-
- Static NAT
- Yes
- Yes
- Yes
- No
- No
- Yes
-
-
- Port Forwarding
- Yes
- No
- Yes
- No
- No
- Yes
-
-
-
-
-
diff --git a/docs/en-US/network-setup.xml b/docs/en-US/network-setup.xml
deleted file mode 100644
index ceee190d4ca..00000000000
--- a/docs/en-US/network-setup.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Network Setup
- Achieving the correct networking setup is crucial to a successful &PRODUCT;
- installation. This section contains information to help you make decisions and follow the right
- procedures to get your network set up correctly.
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/network-usage-record-format.xml b/docs/en-US/network-usage-record-format.xml
deleted file mode 100644
index 34b8f2d4955..00000000000
--- a/docs/en-US/network-usage-record-format.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Network Usage Record Format
- For network usage (bytes sent/received), the following fields exist in a usage record.
-
- account – name of the account
- accountid – ID of the account
- domainid – ID of the domain in which this account resides
- zoneid – Zone where the usage occurred
- description – A string describing what the usage record is tracking
- usagetype – A number representing the usage type (see Usage Types)
- rawusage – A number representing the actual usage in hours
- usageid – Device ID (virtual router ID or external device ID)
- type – Device type (domain router, external load balancer, etc.)
- startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record
-
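 For illustration, a hypothetical usage record shaped like the fields above can be consumed as follows; all field values here are made up, not taken from a real deployment:

```python
import json

# Hypothetical listUsageRecords entry with the fields listed above.
record = json.loads("""{
  "account": "admin", "accountid": "a1", "domainid": "d1", "zoneid": "z1",
  "description": "Network usage for router",
  "usagetype": 4, "rawusage": "24", "usageid": "router-uuid",
  "type": "DomainRouter",
  "startdate": "2013-09-01T00:00:00", "enddate": "2013-09-01T23:59:59"
}""")

# Pull the fields a billing script typically needs.
print(record["account"], record["type"], float(record["rawusage"]))
```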
-
diff --git a/docs/en-US/networking-in-a-pod.xml b/docs/en-US/networking-in-a-pod.xml
deleted file mode 100644
index 5a569bf4d1f..00000000000
--- a/docs/en-US/networking-in-a-pod.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Networking in a Pod
- The figure below illustrates network setup within a single pod. The hosts are connected to a
- pod-level switch. At a minimum, the hosts should have one physical uplink to each switch.
- Bonded NICs are supported as well. The pod-level switch is a pair of redundant gigabit
- switches with 10 G uplinks.
-
-
-
-
-
- networksinglepod.png: diagram showing logical view of network in a pod
-
-
- Servers are connected as follows:
-
- Storage devices are connected to only the network that carries management traffic.
- Hosts are connected to networks for both management traffic and public traffic.
- Hosts are also connected to one or more networks carrying guest traffic.
-
- We recommend the use of multiple physical Ethernet cards to implement each network interface as well as redundant switch fabric in order to maximize throughput and improve reliability.
-
-
diff --git a/docs/en-US/networking-in-a-zone.xml b/docs/en-US/networking-in-a-zone.xml
deleted file mode 100644
index e50efbac9ab..00000000000
--- a/docs/en-US/networking-in-a-zone.xml
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Networking in a Zone
- The following figure illustrates the network setup within a single zone.
-
-
-
-
-
- networksetupzone.png: Depicts network setup in a single zone
-
-
- A firewall for management traffic operates in NAT mode. The network typically is assigned IP addresses in the 192.168.0.0/16 Class B private address space. Each pod is assigned IP addresses in the 192.168.*.0/24 Class C private address space.
- Each zone has its own set of public IP addresses. Public IP addresses from different zones do not overlap.
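 The addressing scheme described above can be sanity-checked with Python's ipaddress module; the pod subnets below are examples chosen to match the 192.168.*.0/24 convention:

```python
import ipaddress

zone_mgmt = ipaddress.ip_network("192.168.0.0/16")  # Class B space for the zone
pods = ["192.168.1.0/24", "192.168.2.0/24"]         # one Class C subnet per pod

pod_nets = [ipaddress.ip_network(p) for p in pods]
# Every pod subnet must sit inside the zone's management space...
assert all(p.subnet_of(zone_mgmt) for p in pod_nets)
# ...and pod subnets must not overlap one another.
assert not any(a.overlaps(b) for i, a in enumerate(pod_nets) for b in pod_nets[i + 1:])
print("pod addressing is consistent")
```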
-
-
diff --git a/docs/en-US/networking-overview.xml b/docs/en-US/networking-overview.xml
deleted file mode 100644
index a71fe95a864..00000000000
--- a/docs/en-US/networking-overview.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Networking Overview
- &PRODUCT; offers two types of networking scenario:
-
-
- Basic. For AWS-style networking. Provides a single network where guest isolation can be provided through layer-3 means such as security groups (IP address source filtering).
- Advanced. For more sophisticated network topologies. This network model provides the most flexibility in defining guest networks.
-
- For more details, see Network Setup.
-
-
diff --git a/docs/en-US/networking_overview.xml b/docs/en-US/networking_overview.xml
deleted file mode 100644
index a5f27c31402..00000000000
--- a/docs/en-US/networking_overview.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Networking Overview
-
- CloudStack offers two types of networking scenario:
-
-
- Basic. For AWS-style networking. Provides a single network where guest isolation can be provided through layer-3 means such as security groups (IP address source filtering).
- Advanced. For more sophisticated network topologies. This network model provides the most flexibility in defining guest networks.
-
- For more details, see Network Setup.
-
-
diff --git a/docs/en-US/networks-for-users-overview.xml b/docs/en-US/networks-for-users-overview.xml
deleted file mode 100644
index 19602c48b2a..00000000000
--- a/docs/en-US/networks-for-users-overview.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Overview of Setting Up Networking for Users
- People using cloud infrastructure have a variety of needs and preferences when it comes to the networking services provided by the cloud. As a &PRODUCT; administrator, you can do the following things to set up networking for your users:
-
- Set up physical networks in zones
- Set up several different providers for the same service on a single physical network (for example, both Cisco and Juniper firewalls)
- Bundle different types of network services into network offerings, so users can choose the desired network services for any given virtual machine
- Add new network offerings as time goes on so end users can upgrade to a better class of service on their network
- Provide more ways for a network to be accessed by a user, such as through a project of which the user is a member
-
-
diff --git a/docs/en-US/networks.xml b/docs/en-US/networks.xml
deleted file mode 100644
index b28f985a147..00000000000
--- a/docs/en-US/networks.xml
+++ /dev/null
@@ -1,58 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Managing Networks and Traffic
- In a &PRODUCT; cloud, guest VMs can communicate with each other using shared infrastructure with
- the security and user perception that the guests have a private LAN. The &PRODUCT; virtual
- router is the main component providing networking features for guest traffic.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs/en-US/nfs-shares-on-management-server.xml b/docs/en-US/nfs-shares-on-management-server.xml
deleted file mode 100644
index 881ca8d7600..00000000000
--- a/docs/en-US/nfs-shares-on-management-server.xml
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Using the Management Server as the NFS Server
- This section tells how to set up NFS shares for primary and secondary storage on the same node with the Management Server. This is more typical of a trial installation, but is technically possible in a larger deployment. It is assumed that you will have less than 16TB of storage on the host.
- The exact commands for the following steps may vary depending on your operating system version.
-
- On RHEL/CentOS systems, you'll need to install the nfs-utils package:
-
-$ sudo yum install nfs-utils
-
-
- On the Management Server host, create two directories that you will use for primary and secondary storage. For example:
-
-# mkdir -p /export/primary
-# mkdir -p /export/secondary
-
-
- To configure the new directories as NFS exports, edit /etc/exports. Export the NFS share(s) with rw,async,no_root_squash. For example:
- # vi /etc/exports
- Insert the following line.
- /export *(rw,async,no_root_squash)
-
- Export the /export directory.
- # exportfs -a
-
- Edit the /etc/sysconfig/nfs file.
- # vi /etc/sysconfig/nfs
- Uncomment the following lines:
-
-LOCKD_TCPPORT=32803
-LOCKD_UDPPORT=32769
-MOUNTD_PORT=892
-RQUOTAD_PORT=875
-STATD_PORT=662
-STATD_OUTGOING_PORT=2020
-
-
- Edit the /etc/sysconfig/iptables file.
- # vi /etc/sysconfig/iptables
- Add the following lines at the beginning of the INPUT chain where <NETWORK> is the network that you'll be using:
-
--A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 111 -j ACCEPT
--A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 111 -j ACCEPT
--A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 2049 -j ACCEPT
--A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 32803 -j ACCEPT
--A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 32769 -j ACCEPT
--A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 892 -j ACCEPT
--A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 892 -j ACCEPT
--A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 875 -j ACCEPT
--A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 875 -j ACCEPT
--A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 662 -j ACCEPT
--A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 662 -j ACCEPT
-
-
- Run the following commands:
-
-# service iptables restart
-# service iptables save
-
-
- If NFS v4 communication is used between client and server, add your domain to /etc/idmapd.conf on both the hypervisor host and Management Server.
- # vi /etc/idmapd.conf
- Remove the character # from the beginning of the Domain line in idmapd.conf and replace the value in the file with your own domain. In the example below, the domain is company.com.
- Domain = company.com
-
- Reboot the Management Server host.
- Two NFS shares called /export/primary and /export/secondary are now set up.
-
- It is recommended that you test to be sure the previous steps have been successful.
-
- Log in to the hypervisor host.
- Be sure NFS and rpcbind are running. The commands might be different depending on your OS. For example:
-
-# service rpcbind start
-# service nfs start
-# chkconfig nfs on
-# chkconfig rpcbind on
-# reboot
-
-
- Log back in to the hypervisor host and try to mount the /export directories. For example (substitute your own management server name):
-
-# mkdir /primarymount
-# mount -t nfs <management-server-name>:/export/primary /primarymount
-# umount /primarymount
-# mkdir /secondarymount
-# mount -t nfs <management-server-name>:/export/secondary /secondarymount
-# umount /secondarymount
-
-
-
-
-
-
diff --git a/docs/en-US/nfs-shares-on-separate-server.xml b/docs/en-US/nfs-shares-on-separate-server.xml
deleted file mode 100644
index 947106dcd4f..00000000000
--- a/docs/en-US/nfs-shares-on-separate-server.xml
+++ /dev/null
@@ -1,52 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Using a Separate NFS Server
- This section tells how to set up NFS shares for secondary and (optionally) primary storage on an NFS server running on a separate node from the Management Server.
- The exact commands for the following steps may vary depending on your operating system version.
- (KVM only) Ensure that no volume is already mounted at your NFS mount point.
-
- On the storage server, create an NFS share for secondary storage and, if you are using NFS for primary storage as well, create a second NFS share. For example:
-
-# mkdir -p /export/primary
-# mkdir -p /export/secondary
-
-
- To configure the new directories as NFS exports, edit /etc/exports. Export the NFS share(s) with rw,async,no_root_squash. For example:
- # vi /etc/exports
- Insert the following line.
- /export *(rw,async,no_root_squash)
-
- Export the /export directory.
- # exportfs -a
-
- On the management server, create a mount point for secondary storage. For example:
- # mkdir -p /mnt/secondary
-
- Mount the secondary storage on your Management Server. Replace the example NFS server name and NFS share paths below with your own.
- # mount -t nfs nfsservername:/nfs/share/secondary /mnt/secondary
-
-
-
diff --git a/docs/en-US/non-contiguous-vlan.xml b/docs/en-US/non-contiguous-vlan.xml
deleted file mode 100644
index 193b91697c3..00000000000
--- a/docs/en-US/non-contiguous-vlan.xml
+++ /dev/null
@@ -1,67 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Adding Non Contiguous VLAN Ranges
- &PRODUCT; provides you with the flexibility to add non contiguous VLAN ranges to your
- network. The administrator can either update an existing VLAN range or add multiple non
- contiguous VLAN ranges while creating a zone. You can also use the updatePhysicalNetwork API to
- extend the VLAN range.
-
-
- Log in to the &PRODUCT; UI as an administrator or end user.
-
-
- Ensure that the VLAN range does not already exist.
-
-
- In the left navigation, choose Infrastructure.
-
-
- On Zones, click View More, then click the zone you want to work with.
-
-
- Click Physical Network.
-
-
- In the Guest node of the diagram, click Configure.
-
-
- Click Edit
-
-
-
-
- edit-icon.png: button to edit the VLAN range.
-
-
- The VLAN Ranges field now is editable.
-
-
- Specify the start and end of the VLAN range as a comma-separated list.
- Specify all the VLANs you want to use; any VLANs not specified will be removed if you
- are adding new ranges to the existing list.
-
-
- Click Apply.
-
-
-
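The comma-separated VLAN range format used above can be sketched with a small parser. This is an illustrative sketch only; the function name and validation are assumptions, not part of the &PRODUCT; API:

```python
def parse_vlan_ranges(spec):
    """Parse a comma-separated VLAN range list such as '100-200,300-400'
    into a sorted list of (start, end) tuples."""
    ranges = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = (int(x) for x in part.split("-", 1))
        else:
            start = end = int(part)
        if not (1 <= start <= end <= 4094):
            raise ValueError("invalid VLAN range: %s" % part)
        ranges.append((start, end))
    return sorted(ranges)

# Non-contiguous ranges are kept as separate tuples.
print(parse_vlan_ranges("300-400, 100-200"))  # [(100, 200), (300, 400)]
```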
diff --git a/docs/en-US/offerings.xml b/docs/en-US/offerings.xml
deleted file mode 100644
index c880a9c4810..00000000000
--- a/docs/en-US/offerings.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Service Offerings
- In this chapter we discuss compute, disk, and system service offerings. Network offerings
- are discussed in the section on setting up networking for users.
-
-
-
-
-
diff --git a/docs/en-US/ongoing-config-of-external-firewalls-lb.xml b/docs/en-US/ongoing-config-of-external-firewalls-lb.xml
deleted file mode 100644
index f5864da2b2d..00000000000
--- a/docs/en-US/ongoing-config-of-external-firewalls-lb.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
- Ongoing Configuration of External Firewalls and Load Balancers
- Additional user actions (e.g. setting a port forward) will cause further programming of the
- firewall and load balancer. A user may request additional public IP addresses and forward
- traffic received at these IPs to specific VMs. This is accomplished by enabling static NAT for a
- public IP address, assigning the IP to a VM, and specifying a set of protocols and port ranges
- to open. When a static NAT rule is created, &PRODUCT; programs the zone's external firewall with
- the following objects:
-
-
- A static NAT rule that maps the public IP address to the private IP address of a
- VM.
-
-
- A security policy that allows traffic within the set of protocols and port ranges that
- are specified.
-
-
- A firewall filter counter that measures the number of bytes of incoming traffic to the
- public IP.
-
-
- The number of incoming and outgoing bytes through source NAT, static NAT, and load balancing
- rules is measured and saved on each external element. This data is collected on a regular basis
- and stored in the &PRODUCT; database.
-
diff --git a/docs/en-US/over-provisioning-service-offering-limits.xml b/docs/en-US/over-provisioning-service-offering-limits.xml
deleted file mode 100644
index 5a403a30536..00000000000
--- a/docs/en-US/over-provisioning-service-offering-limits.xml
+++ /dev/null
@@ -1,161 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Over-Provisioning and Service Offering Limits
- (Supported for XenServer, KVM, and VMware)
- CPU and memory (RAM) over-provisioning factors can be set for each cluster to change the
- number of VMs that can run on each host in the cluster. This helps optimize the use of
- resources. By increasing the over-provisioning ratio, more resource capacity will be used. If
- the ratio is set to 1, no over-provisioning is done.
- The administrator can also set global default over-provisioning ratios
- in the cpu.overprovisioning.factor and mem.overprovisioning.factor global configuration variables.
- The default value of these variables is 1: over-provisioning is turned off by default.
-
- Over-provisioning ratios are dynamically substituted in &PRODUCT;'s capacity
- calculations. For example:
- Capacity = 2 GB
- Over-provisioning factor = 2
- Capacity after over-provisioning = 4 GB
- With this configuration, suppose you deploy 3 VMs of 1 GB each:
- Used = 3 GB
- Free = 1 GB
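The arithmetic in this example can be checked with a short sketch. This is a simplification of the capacity calculation, considering memory only:

```python
def effective_capacity(raw_gb, overprovisioning_factor):
    """Usable capacity after applying the over-provisioning factor."""
    return raw_gb * overprovisioning_factor

capacity = effective_capacity(2, 2)  # 2 GB of RAM with factor 2 -> 4 GB
used = 3 * 1                         # three VMs of 1 GB each
free = capacity - used
print(capacity, used, free)          # 4 3 1
```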
- The administrator can specify a memory over-provisioning ratio, and can specify both CPU and
- memory over-provisioning ratios on a per-cluster basis.
- In any given cloud, the optimum number of VMs for each host is affected by such things as
- the hypervisor, storage, and hardware configuration. These may be different for each cluster in
- the same cloud. A single global over-provisioning setting cannot provide the best utilization
- for all the different clusters in the cloud. It has to be set for the lowest common denominator.
- The per-cluster setting provides a finer granularity for better utilization of resources, no
- matter where the &PRODUCT; placement algorithm decides to place a VM.
- The overprovisioning settings can be used along with dedicated resources (assigning a
- specific cluster to an account) to effectively offer different levels of service to
- different accounts. For example, an account paying for a more expensive level of service
- could be assigned to a dedicated cluster with an over-provisioning ratio of 1, and a
- lower-paying account to a cluster with a ratio of 2.
- When a new host is added to a cluster, &PRODUCT; will assume the host has the
- capability to perform the CPU and RAM over-provisioning which is configured for that
- cluster. It is up to the administrator to be sure the host is actually suitable for the
- level of over-provisioning which has been set.
-
- Limitations on Over-Provisioning in XenServer and KVM
-
- In XenServer, due to a constraint of this hypervisor, you cannot use an
- over-provisioning factor greater than 4.
- The KVM hypervisor cannot manage memory allocation to VMs dynamically.
- &PRODUCT; sets the minimum and maximum amount of memory that a VM can use.
- The hypervisor adjusts the memory within the set limits based on the memory contention.
-
-
-
- Requirements for Over-Provisioning
- Several prerequisites are required in order for over-provisioning to function
- properly. The feature is dependent on the OS type, hypervisor capabilities, and certain
- scripts. It is the administrator's responsibility to ensure that these requirements are
- met.
-
- Balloon Driver
- All VMs should have a balloon driver installed in them. The hypervisor
- communicates with the balloon driver to free up and make the memory available to a
- VM.
-
- XenServer
- The balloon driver can be found as a part of xen pv or PVHVM drivers. The xen
- pvhvm drivers are included in upstream linux kernels 2.6.36+.
-
-
- VMware
- The balloon driver can be found as a part of the VMware tools. All the VMs that
- are deployed in a over-provisioned cluster should have the VMware tools
- installed.
-
-
- KVM
- All VMs are required to support the virtio drivers. These drivers are installed
- in all Linux kernel versions 2.6.25 and greater. The administrator must set
- CONFIG_VIRTIO_BALLOON=y in the virtio configuration.
-
-
-
- Hypervisor capabilities
- The hypervisor must be capable of using the memory ballooning.
-
- XenServer
- The DMC (Dynamic Memory Control) capability of the hypervisor should be enabled.
- Only XenServer Advanced and above versions have this feature.
-
-
- VMware, KVM
- Memory ballooning is supported by default.
-
-
-
-
- Setting Over-Provisioning Ratios
- There are two ways the root admin can set CPU and RAM over-provisioning ratios. First, the
- global configuration settings cpu.overprovisioning.factor and mem.overprovisioning.factor will
- be applied when a new cluster is created. Later, the ratios can be modified for an existing
- cluster.
- Only VMs deployed after the change are affected by the new setting.
- If you want VMs deployed before the change to adopt the new over-provisioning ratio,
- you must stop and restart the VMs.
- When this is done, &PRODUCT; recalculates or scales the used and
- reserved capacities based on the new over-provisioning ratios,
- to ensure that &PRODUCT; is correctly tracking the amount of free capacity.
- It is safer not to deploy additional new VMs while the capacity recalculation is underway, in
- case the new values for available capacity are not high enough to accommodate the new VMs.
- Just wait for the new used/available values to become available, to be sure there is room
- for all the new VMs you want.
- To change the over-provisioning ratios for an existing cluster:
-
-
- Log in as administrator to the &PRODUCT; UI.
-
-
- In the left navigation bar, click Infrastructure.
-
-
- Under Clusters, click View All.
-
-
- Select the cluster you want to work with, and click the Edit button.
-
-
- Fill in your desired over-provisioning multipliers in the fields CPU overcommit
- ratio and RAM overcommit ratio. The value which is initially shown in these
- fields is the default value inherited from the global configuration settings.
-
-
- In XenServer, due to a constraint of this hypervisor, you cannot use an
- over-provisioning factor greater than 4.
-
-
-
-
-
- Service Offering Limits and Over-Provisioning
- Service offering limits (e.g. 1 GHz, 1 core) are strictly enforced for core count. For example, a guest with a service offering of one core will have only one core available to it regardless of other activity on the Host.
- Service offering limits for gigahertz are enforced only in the presence of contention for CPU resources. For example, suppose that a guest was created with a service offering of 1 GHz on a Host that has 2 GHz cores, and that guest is the only guest running on the Host. The guest will have the full 2 GHz available to it. When multiple guests are attempting to use the CPU a weighting factor is used to schedule CPU resources. The weight is based on the clock speed in the service offering. Guests receive a CPU allocation that is proportionate to the GHz in the service offering. For example, a guest created from a 2 GHz service offering will receive twice the CPU allocation as a guest created from a 1 GHz service offering. &PRODUCT; does not perform memory over-provisioning.
-
-
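The proportional CPU scheduling described above can be illustrated as follows. This is a simplified sketch; real hypervisor schedulers also account for caps, pinning, and runnable state:

```python
def cpu_shares(offering_ghz, host_ghz):
    """Split contended host CPU among guests in proportion to the clock
    speed promised by each guest's service offering."""
    total = sum(offering_ghz)
    return [host_ghz * g / total for g in offering_ghz]

# A guest from a 2 GHz offering receives twice the allocation of a
# guest from a 1 GHz offering when both contend for the same host CPU.
shares = cpu_shares([2.0, 1.0], host_ghz=3.0)
print(shares)  # [2.0, 1.0]
```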
\ No newline at end of file
diff --git a/docs/en-US/ovm-install.xml b/docs/en-US/ovm-install.xml
deleted file mode 100644
index fa4a86b0776..00000000000
--- a/docs/en-US/ovm-install.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Installing OVM for &PRODUCT;
- TODO
-
-
diff --git a/docs/en-US/ovm-requirements.xml b/docs/en-US/ovm-requirements.xml
deleted file mode 100644
index 70a8920a8ac..00000000000
--- a/docs/en-US/ovm-requirements.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- System Requirements for OVM
- TODO
-
diff --git a/docs/en-US/password-storage-engine.xml b/docs/en-US/password-storage-engine.xml
deleted file mode 100644
index 8bbc96fcac2..00000000000
--- a/docs/en-US/password-storage-engine.xml
+++ /dev/null
@@ -1,74 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Changing the Default Password Encryption
- Passwords are encoded when creating or updating users. &PRODUCT; allows you to determine the
- default encoding and authentication mechanism for admin and user logins. Two new configurable
- lists have been introduced—userPasswordEncoders and userAuthenticators.
- userPasswordEncoders allows you to configure the order of preference for encoding passwords,
- whereas userAuthenticators allows you to configure the order in which authentication schemes are
- invoked to validate user passwords.
- Additionally, the plain text user authenticator has been modified so that it no longer converts
- supplied passwords to their MD5 sums before checking them against the database entries. Because
- clients no longer hash the password, it performs a simple string comparison between the
- retrieved and supplied login passwords instead of comparing MD5 hashes. The following method
- determines which encoding scheme is used to encode the password supplied during user creation
- or modification.
- When a new user is created, the user password is encoded by using the first valid encoder
- loaded as per the sequence specified in the UserPasswordEncoders property in the
- ComponentContext.xml or nonossComponentContext.xml
- files. The order of authentication schemes is determined by the UserAuthenticators
- property in the same files. If Non-OSS components, such as VMware environments, are to be
- deployed, modify the UserPasswordEncoders and UserAuthenticators lists
- in the nonossComponentContext.xml file, for OSS environments, such as
- XenServer or KVM, modify the ComponentContext.xml file. It is recommended
- to make uniform changes across both the files. When a new authenticator or encoder is added, you
- can add them to this list. While doing so, ensure that the new authenticator or encoder is
- specified as a bean in both these files. The administrator can change the ordering of both these
- properties as preferred to change the order of schemes. Modify the following list properties
- available in client/tomcatconf/nonossComponentContext.xml.in or
- client/tomcatconf/componentContext.xml.in as applicable, to the desired
- order:
- <property name="UserAuthenticators">
- <list>
- <ref bean="SHA256SaltedUserAuthenticator"/>
- <ref bean="MD5UserAuthenticator"/>
- <ref bean="LDAPUserAuthenticator"/>
- <ref bean="PlainTextUserAuthenticator"/>
- </list>
- </property>
- <property name="UserPasswordEncoders">
- <list>
- <ref bean="SHA256SaltedUserAuthenticator"/>
- <ref bean="MD5UserAuthenticator"/>
- <ref bean="LDAPUserAuthenticator"/>
- <ref bean="PlainTextUserAuthenticator"/>
- </list>
- </property>
- In the above default ordering, SHA256Salt is used first for
- UserPasswordEncoders. If the module is found and encoding returns a valid value,
- the encoded password is stored in the user table's password column. If it fails for any reason,
- the MD5UserAuthenticator will be tried next, and so on down the list. For
- UserAuthenticators, SHA256Salt authentication is tried first. If it succeeds, the
- user is logged into the Management Server. If it fails, MD5 is tried next, and attempts
- continue until one of them succeeds and the user logs in. If none of them works, the user
- receives an invalid credentials message.
-
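The first-valid-wins ordering described above can be sketched as a small loop. The helper below is hypothetical and only illustrates the fallback order, not &PRODUCT;'s actual classes:

```python
import hashlib

def authenticate(password, stored, authenticators):
    """Try each authenticator in the configured order; the first one
    that validates the password wins."""
    for name, check in authenticators:
        if check(password, stored):
            return name
    raise PermissionError("invalid credentials")

# Order matters: the salted SHA-256 scheme is consulted before MD5,
# mirroring the default list above. The SHA256 stand-in always fails
# here because no salted hash is stored in this toy example.
authenticators = [
    ("SHA256Salted", lambda p, s: False),
    ("MD5", lambda p, s: hashlib.md5(p.encode()).hexdigest() == s),
]
stored = hashlib.md5(b"secret").hexdigest()
print(authenticate("secret", stored, authenticators))  # MD5
```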
diff --git a/docs/en-US/per-domain-limits.xml b/docs/en-US/per-domain-limits.xml
deleted file mode 100644
index c20e84d4a58..00000000000
--- a/docs/en-US/per-domain-limits.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Per-Domain Limits
- &PRODUCT; allows the configuration of limits on a domain basis. With a domain limit in place, all users still have their account limits. They are additionally limited, as a group, to not exceed the resource limits set on their domain. Domain limits aggregate the usage of all accounts in the domain as well as all accounts in all subdomains of that domain. Limits set at the root domain level apply to the sum of resource usage by the accounts in all domains and sub-domains below that root domain.
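The recursive aggregation described above can be sketched as follows; the data layout here is hypothetical, for illustration only:

```python
def domain_usage(domain):
    """Total resource usage of a domain: its own accounts plus all
    accounts in all of its subdomains, recursively. Domain limits are
    enforced against this aggregate."""
    total = sum(account["usage"] for account in domain.get("accounts", []))
    for sub in domain.get("subdomains", []):
        total += domain_usage(sub)
    return total

root = {
    "accounts": [{"usage": 5}],
    "subdomains": [{"accounts": [{"usage": 3}, {"usage": 2}]}],
}
print(domain_usage(root))  # 10
```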
- To set a domain limit:
-
- Log in to the &PRODUCT; UI.
- In the left navigation tree, click Domains.
- Select the domain you want to modify. The current domain limits are displayed. A value of -1 shows that there is no limit in place.
- Click the Edit button
-
-
-
- editbutton.png: edits the settings.
-
-
-
diff --git a/docs/en-US/performance-monitoring.xml b/docs/en-US/performance-monitoring.xml
deleted file mode 100644
index 70efbf783df..00000000000
--- a/docs/en-US/performance-monitoring.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Performance Monitoring
- Host and guest performance monitoring is available to end users and administrators. This allows the user to monitor their utilization of resources and determine when it is appropriate to choose a more powerful service offering or larger disk.
-
-
diff --git a/docs/en-US/persistent-network.xml b/docs/en-US/persistent-network.xml
deleted file mode 100644
index 1ccc99c59a6..00000000000
--- a/docs/en-US/persistent-network.xml
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
- Persistent Networks
- A network that you can provision without having to deploy any VMs on it is called a
- persistent network. A persistent network can be part of a VPC or a non-VPC environment.
- When you create other types of network, a network is only a database entry until the first
- VM is created on that network. When the first VM is created, a VLAN ID is assigned and the
- network is provisioned. Also, when the last VM is destroyed, the VLAN ID is released and the
- network is no longer available. With persistent networks, you can create a network in
- &PRODUCT; on which physical devices can be deployed without having to run any VMs.
- One of the advantages of having a persistent network is that you can create a VPC with a tier
- consisting of only physical devices. For example, you might create a VPC for a three-tier
- application, deploy VMs for Web and Application tier, and use physical machines for the
- Database tier. Another use case: if you provide services using physical hardware, you can
- define the network as persistent, so that even if all its VMs are destroyed the services
- are not discontinued.
-
- Persistent Network Considerations
-
-
- Persistent network is designed for isolated networks.
-
-
- All default network offerings are non-persistent.
-
-
- A network offering cannot be edited, because changing it would affect the behavior of the
- existing networks that were created using it.
-
-
- When you create a guest network, the network offering that you select defines the
- network persistence. This in turn depends on whether persistent network is enabled in the
- selected network offering.
-
-
- An existing network can be made persistent by changing its network offering to an
- offering that has the Persistent option enabled. While setting this property, even if the
- network has no running VMs, the network is provisioned.
-
-
- An existing network can be made non-persistent by changing its network offering to an
- offering that has the Persistent option disabled. If the network has no running VMs,
- during the next network garbage collection run the network is shut down.
-
-
- When the last VM on a network is destroyed, the network garbage collector checks if
- the network offering associated with the network is persistent, and shuts down the network
- only if it is non-persistent.
-
-
-
-
- Creating a Persistent Guest Network
- To create a persistent network, perform the following:
-
-
- Create a network offering with the Persistent option enabled.
- See .
- See the Administration Guide.
-
-
- Select Network from the left navigation pane.
-
-
- Select the guest network that you want to offer this network service to.
-
-
- Click the Edit button.
-
-
- From the Network Offering drop-down, select the persistent network offering you have
- just created.
-
-
- Click OK.
-
-
-
-
diff --git a/docs/en-US/physical-network-configuration-settings.xml b/docs/en-US/physical-network-configuration-settings.xml
deleted file mode 100644
index 4ab18b01d30..00000000000
--- a/docs/en-US/physical-network-configuration-settings.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Configurable Characteristics of Physical Networks
- &PRODUCT; provides configuration settings you can use to set up a physical network in a zone, including:
-
- What type of network traffic it carries (guest, public, management, storage)
- VLANs
- Unique name that the hypervisor can use to find that particular network
- Enabled or disabled. When a network is first set up, it is disabled – not in use yet. The administrator sets the physical network to enabled, and it begins to be used. The administrator can later disable the network again, which prevents any new virtual networks from being created on that physical network; the existing network traffic continues even though the state is disabled.
- Speed
- Tags, so network offerings can be matched to physical networks
- Isolation method
-
-
diff --git a/docs/en-US/plugin-development.xml b/docs/en-US/plugin-development.xml
deleted file mode 100644
index 0492877eba4..00000000000
--- a/docs/en-US/plugin-development.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Plugin Development
-
-
diff --git a/docs/en-US/plugin-midonet-about.xml b/docs/en-US/plugin-midonet-about.xml
deleted file mode 100644
index dd9b3ad08e0..00000000000
--- a/docs/en-US/plugin-midonet-about.xml
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- The MidoNet Plugin
-
-
-
diff --git a/docs/en-US/plugin-midonet-features.xml b/docs/en-US/plugin-midonet-features.xml
deleted file mode 100644
index f242d63d0ee..00000000000
--- a/docs/en-US/plugin-midonet-features.xml
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Features of the MidoNet Plugin
-
-
-
- In &PRODUCT; 4.2.0 only the KVM hypervisor is supported for use in combination with MidoNet.
-
-
-
- In &PRODUCT; release 4.2.0 this plugin supports several services in the Advanced Isolated network mode.
-
-
-
- When tenants create new isolated layer 3 networks, instead of spinning up extra Virtual Router VMs, the relevant L3 elements (routers etc) are created in the MidoNet virtual topology by making the appropriate calls to the MidoNet API. Instead of using VLANs, isolation is provided by MidoNet.
-
-
-
- Aside from the above service (Connectivity), several extra features are supported in the 4.2.0 release:
-
-
-
- DHCP
- Firewall (ingress)
- Source NAT
- Static NAT
- Port Forwarding
-
-
-
- The plugin has been tested with MidoNet version 12.12 (Caddo).
-
-
-
-
-
diff --git a/docs/en-US/plugin-midonet-introduction.xml b/docs/en-US/plugin-midonet-introduction.xml
deleted file mode 100644
index 7793ecbc884..00000000000
--- a/docs/en-US/plugin-midonet-introduction.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Introduction to the MidoNet Plugin
- The MidoNet plugin allows &PRODUCT; to use the MidoNet virtualized networking solution as a provider for &PRODUCT; networks and services. For more information on MidoNet and how it works, see http://www.midokura.com/midonet/.
-
diff --git a/docs/en-US/plugin-midonet-preparations.xml b/docs/en-US/plugin-midonet-preparations.xml
deleted file mode 100644
index cf78774ec2b..00000000000
--- a/docs/en-US/plugin-midonet-preparations.xml
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Prerequisites
-
- In order to use the MidoNet plugin, the compute hosts must be running the MidoNet Agent, and the MidoNet API server must be available. Please consult the MidoNet User Guide for more information. The following section describes the &PRODUCT; side setup.
-
-
-
- &PRODUCT; needs to have at least one physical network with the isolation method set to "MIDO". This network should be enabled for the Guest and Public traffic types.
-
-
-
- Next, we need to set the following &PRODUCT; settings under "Global Settings" in the UI:
-
-
&PRODUCT; settings
-
-
-
- Setting Name
- Description
- Example
-
-
-
-
- midonet.apiserver.address
- Specify the address at which the Midonet API server can be contacted
- http://192.168.1.144:8081/midolmanj-mgmt
-
-
- midonet.providerrouter.id
- Specifies the UUID of the Midonet provider router
- d7c5e6a3-e2f4-426b-b728-b7ce6a0448e5
-
-
-
-
-
-
-
-
- We also want MidoNet to take care of public traffic, so in componentContext.xml we need to replace this line:
-
- ]]>
-
-
- With this:
-
- ]]>
-
-
-
-
-
-
-
- On the compute host, MidoNet takes advantage of per-traffic type VIF driver support in &PRODUCT; KVM.
-
-
- In agent.properties, we set the following to make MidoNet take care of Guest and Public traffic:
-
-libvirt.vif.driver.Guest=com.cloud.network.resource.MidoNetVifDriver
-libvirt.vif.driver.Public=com.cloud.network.resource.MidoNetVifDriver
-
- This is explained further in the MidoNet User Guide.
-
-
-
-
diff --git a/docs/en-US/plugin-midonet-provider.xml b/docs/en-US/plugin-midonet-provider.xml
deleted file mode 100644
index 904828caecd..00000000000
--- a/docs/en-US/plugin-midonet-provider.xml
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Enabling the MidoNet service provider via the API
-
- To enable via the API, use the following API calls:
- addNetworkServiceProvider
-
- name = "MidoNet"
- physicalnetworkid = <the uuid of the physical network>
-
- updateNetworkServiceProvider
-
- id = <the provider uuid returned by the previous call>
- state = "Enabled"
-
-
-
-
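The two calls above can be issued with any HTTP client against the management server's API endpoint. The sketch below only builds the query string; it omits the apiKey and signature parameters a real request must include, and the endpoint URL is an assumption:

```python
from urllib.parse import urlencode

def api_url(base, command, **params):
    """Build an unsigned CloudStack-style API query URL (signing omitted)."""
    query = {"command": command, "response": "json"}
    query.update(params)
    return base + "?" + urlencode(sorted(query.items()))

url = api_url("http://mgmt.example:8080/client/api",
              "addNetworkServiceProvider",
              name="MidoNet",
              physicalnetworkid="physical-network-uuid")
print(url)
```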
\ No newline at end of file
diff --git a/docs/en-US/plugin-midonet-revisions.xml b/docs/en-US/plugin-midonet-revisions.xml
deleted file mode 100644
index 73def2325b5..00000000000
--- a/docs/en-US/plugin-midonet-revisions.xml
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Revision History
-
-
-
- 0-0
- Wed Mar 13 2013
-
- Dave
- Cahill
- dcahill@midokura.com
-
-
-
- Documentation created for 4.2.0 version of the MidoNet Plugin
-
-
-
-
-
-
diff --git a/docs/en-US/plugin-midonet-ui.xml b/docs/en-US/plugin-midonet-ui.xml
deleted file mode 100644
index 8ee9850e5a7..00000000000
--- a/docs/en-US/plugin-midonet-ui.xml
+++ /dev/null
@@ -1,65 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Enabling the MidoNet service provider via the UI
- To allow &PRODUCT; to use the MidoNet Plugin the network service provider needs to be enabled on the physical network.
-
-
-
- The steps to enable via the UI are as follows:
-
-
- In the left navbar, click Infrastructure
-
-
-
- In Zones, click View All
-
-
-
- Click the name of the Zone on which you are setting up MidoNet
-
-
-
- Click the Physical Network tab
-
-
-
- Click the Name of the Network on which you are setting up MidoNet
-
-
-
- Click Configure on the Network Service Providers box
-
-
-
- Click on the name MidoNet
-
-
-
- Click the Enable Provider button in the Network tab
-
-
-
-
-
diff --git a/docs/en-US/plugin-midonet-usage.xml b/docs/en-US/plugin-midonet-usage.xml
deleted file mode 100644
index a314581dcda..00000000000
--- a/docs/en-US/plugin-midonet-usage.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Using the MidoNet Plugin
-
-
-
-
-
diff --git a/docs/en-US/plugin-niciranvp-about.xml b/docs/en-US/plugin-niciranvp-about.xml
deleted file mode 100644
index cfab83c73c3..00000000000
--- a/docs/en-US/plugin-niciranvp-about.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- The Nicira NVP Plugin
-
-
-
-
diff --git a/docs/en-US/plugin-niciranvp-devicemanagement.xml b/docs/en-US/plugin-niciranvp-devicemanagement.xml
deleted file mode 100644
index 761c39f3179..00000000000
--- a/docs/en-US/plugin-niciranvp-devicemanagement.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Device Management
- In &PRODUCT;, a Nicira NVP setup is considered a "device" that can be added to and removed from a physical network. To complete the configuration of the Nicira NVP plugin, a device needs to be added to the physical network. Press the "Add NVP Controller" button on the provider panel and enter the configuration details.
-
-
-
-
-
- nvp-physical-network-stt.png: a screenshot of the device configuration popup.
-
-
-
-
diff --git a/docs/en-US/plugin-niciranvp-features.xml b/docs/en-US/plugin-niciranvp-features.xml
deleted file mode 100644
index e439f1b4923..00000000000
--- a/docs/en-US/plugin-niciranvp-features.xml
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Features of the Nicira NVP Plugin
- The following table lists the CloudStack network services provided by the Nicira NVP Plugin.
-
- The Virtual Networking service was originally called 'Connectivity' in CloudStack 4.0
- The following hypervisors are supported by the Nicira NVP Plugin.
-
- Please refer to the Nicira NVP configuration guide on how to prepare the hypervisors for Nicira NVP integration.
-
diff --git a/docs/en-US/plugin-niciranvp-introduction.xml b/docs/en-US/plugin-niciranvp-introduction.xml
deleted file mode 100644
index a06f12317e5..00000000000
--- a/docs/en-US/plugin-niciranvp-introduction.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Introduction to the Nicira NVP Plugin
- The Nicira NVP plugin adds Nicira NVP as one of the available SDN implementations in
- CloudStack. With the plugin an existing Nicira NVP setup can be used by CloudStack to
- implement isolated guest networks and to provide additional services like routing and
- NAT.
-
diff --git a/docs/en-US/plugin-niciranvp-networkofferings.xml b/docs/en-US/plugin-niciranvp-networkofferings.xml
deleted file mode 100644
index b30437e97ba..00000000000
--- a/docs/en-US/plugin-niciranvp-networkofferings.xml
+++ /dev/null
@@ -1,131 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Network Offerings
- Using the Nicira NVP plugin requires a network offering with Virtual Networking enabled and configured to use the NiciraNvp element. Typical use cases combine services from the Virtual Router appliance and the Nicira NVP plugin.
-
-
-
-
-
-
- nvp-physical-network-stt.png: a screenshot of a network offering.
-
-
- The tag in the network offering should be set to the name of the physical network with the NVP provider.
- Isolated network with network services. The virtual router is still required to provide network services like DNS and DHCP.
-
-
-
diff --git a/docs/en-US/plugin-niciranvp-physicalnet.xml b/docs/en-US/plugin-niciranvp-physicalnet.xml
deleted file mode 100644
index d3202905fb1..00000000000
--- a/docs/en-US/plugin-niciranvp-physicalnet.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Zone Configuration
- &PRODUCT; needs to have at least one physical network with the isolation method set to "STT". This network should be enabled for the Guest traffic type.
- The Guest traffic type should be configured with the traffic label that matches the name of
- the Integration Bridge on the hypervisor. See the Nicira NVP User Guide for more details
- on how to set this up in XenServer or KVM.
-
-
-
-
-
- nvp-physical-network-stt.png: a screenshot of a physical network with the STT isolation type
-
-
-
diff --git a/docs/en-US/plugin-niciranvp-preparations.xml b/docs/en-US/plugin-niciranvp-preparations.xml
deleted file mode 100644
index 60725591fda..00000000000
--- a/docs/en-US/plugin-niciranvp-preparations.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Prerequisites
- Before enabling the Nicira NVP plugin the NVP Controller needs to be configured. Please review the NVP User Guide on how to do that.
- Make sure you have the following information ready:
-
- The IP address of the NVP Controller
- The username to access the API
- The password to access the API
- The UUID of the Transport Zone that contains the hypervisors in this Zone
-
- The UUID of the Gateway Service used to provide router and NAT services.
-
-
- The gateway service UUID is optional and is used for Layer 3 services only (SourceNat, StaticNat and PortForwarding).
-
diff --git a/docs/en-US/plugin-niciranvp-provider.xml b/docs/en-US/plugin-niciranvp-provider.xml
deleted file mode 100644
index 8694478b483..00000000000
--- a/docs/en-US/plugin-niciranvp-provider.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Enabling the service provider
- The Nicira NVP provider is disabled by default. Navigate to the "Network Service Providers" configuration of the physical network with the STT isolation type. Navigate to the Nicira NVP provider and press the "Enable Provider" button.
- CloudStack 4.0 does not have the UI interface to configure the Nicira NVP plugin. Configuration needs to be done using the API directly.
-
-
-
-
-
- nvp-physical-network-stt.png: a screenshot of an enabled Nicira NVP provider
-
-
-
-
\ No newline at end of file
diff --git a/docs/en-US/plugin-niciranvp-revisions.xml b/docs/en-US/plugin-niciranvp-revisions.xml
deleted file mode 100644
index b58d3336aba..00000000000
--- a/docs/en-US/plugin-niciranvp-revisions.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-
-
-%BOOK_ENTITIES;
-]>
-
-
-
-
- Revision History
-
-
-
- 0-0
- Wed Oct 03 2012
-
- Hugo
- Trippaers
- hugo@apache.org
-
-
-
- Documentation created for 4.0.0-incubating version of the NVP Plugin
-
-
-
-
- 1-0
- Wed May 22 2013
-
- Hugo
- Trippaers
- hugo@apache.org
-
-
-
- Documentation updated for &PRODUCT; 4.1.0
-
-
-
-
-
-
diff --git a/docs/en-US/plugin-niciranvp-tables.xml b/docs/en-US/plugin-niciranvp-tables.xml
deleted file mode 100644
index 615f3494c09..00000000000
--- a/docs/en-US/plugin-niciranvp-tables.xml
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Database tables
- The following tables are added to the cloud database for the Nicira NVP Plugin
-
- nicira_nvp_nic_map
-
-
-
- id
- auto incrementing id
-
-
- logicalswitch
- uuid of the logical switch this port is connected to
-
-
- logicalswitchport
- uuid of the logical switch port for this nic
-
-
- nic
- the &PRODUCT; uuid for this nic, reference to the nics table
-
-
-
-
-
-
- external_nicira_nvp_devices
-
-
-
- id
- auto incrementing id
-
-
- uuid
- UUID identifying this device
-
-
- physical_network_id
- the physical network this device is configured on
-
-
- provider_name
- NiciraNVP
-
-
- device_name
- display name for this device
-
-
- host_id
- reference to the host table with the device configuration
-
-
-
-
-
-
- nicira_nvp_router_map
-
-
-
- id
- auto incrementing id
-
-
- logicalrouter_uuid
- uuid of the logical router
-
-
- network_id
- id of the network this router is linked to
-
-
-
-
-
-
- nicira_nvp_router_map is only available in &PRODUCT; 4.1 and above
-
-
-
\ No newline at end of file
diff --git a/docs/en-US/plugin-niciranvp-troubleshooting.xml b/docs/en-US/plugin-niciranvp-troubleshooting.xml
deleted file mode 100644
index 02b06555914..00000000000
--- a/docs/en-US/plugin-niciranvp-troubleshooting.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Troubleshooting the Nicira NVP Plugin
-
-
-
-
diff --git a/docs/en-US/plugin-niciranvp-ui.xml b/docs/en-US/plugin-niciranvp-ui.xml
deleted file mode 100644
index 8b1bbad8395..00000000000
--- a/docs/en-US/plugin-niciranvp-ui.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Configuring the Nicira NVP plugin from the UI
- In CloudStack 4.1.0-incubating the Nicira NVP plugin and its resources can be configured in the infrastructure tab of the UI. Navigate to the physical network with STT isolation and configure the network elements. The NiciraNvp is listed here.
-
diff --git a/docs/en-US/plugin-niciranvp-usage.xml b/docs/en-US/plugin-niciranvp-usage.xml
deleted file mode 100644
index 9f04c382bd6..00000000000
--- a/docs/en-US/plugin-niciranvp-usage.xml
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Configuring the Nicira NVP Plugin
-
-
-
-
-
-
-
diff --git a/docs/en-US/plugin-niciranvp-uuidreferences.xml b/docs/en-US/plugin-niciranvp-uuidreferences.xml
deleted file mode 100644
index cb5f1cae834..00000000000
--- a/docs/en-US/plugin-niciranvp-uuidreferences.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- UUID References
- The plugin maintains several references in the &PRODUCT; database to items created on the NVP Controller.
- Every guest network that is created will have its broadcast type set to Lswitch and if the network is in state "Implemented", the broadcast URI will have the UUID of the Logical Switch that was created for this network on the NVP Controller.
- The Nics that are connected to one of the Logical Switches will have their Logical Switch Port UUID listed in the nicira_nvp_nic_map table.
- All devices created on the NVP Controller will have a tag set to domain-account of the owner of the network, this string can be used to search for items in the NVP Controller.
-
-
diff --git a/docs/en-US/plugin-niciranvp-vpc.xml b/docs/en-US/plugin-niciranvp-vpc.xml
deleted file mode 100644
index a43c5fa85d3..00000000000
--- a/docs/en-US/plugin-niciranvp-vpc.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Using the Nicira NVP plugin with VPC
-
-
-
-
-
-
diff --git a/docs/en-US/plugin-niciranvp-vpcfeatures.xml b/docs/en-US/plugin-niciranvp-vpcfeatures.xml
deleted file mode 100644
index a8d8194e9ba..00000000000
--- a/docs/en-US/plugin-niciranvp-vpcfeatures.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- Supported VPC features
- The Nicira NVP plugin supports &PRODUCT; VPC to a certain extent. Starting with &PRODUCT; version 4.1 VPCs can be deployed using NVP isolated networks.
- It is not possible to use a Nicira NVP Logical Router as a VPC Router
- It is not possible to connect a private gateway using a Nicira NVP Logical Switch
-
diff --git a/docs/en-US/plugin-niciranvp-vpcnetworkoffering.xml b/docs/en-US/plugin-niciranvp-vpcnetworkoffering.xml
deleted file mode 100644
index 141006ee350..00000000000
--- a/docs/en-US/plugin-niciranvp-vpcnetworkoffering.xml
+++ /dev/null
@@ -1,81 +0,0 @@
-
-
-%BOOK_ENTITIES;
-
-%xinclude;
-]>
-
-
- VPC Network Offerings
- The VPC needs specific network offerings with the VPC flag enabled. Otherwise these network offerings are identical to regular network offerings. To allow VPC networks with a Nicira NVP isolated network the offerings need to support the Virtual Networking service with the NiciraNVP provider.
- In a typical configuration two network offerings need to be created. One with the loadbalancing service enabled and one without loadbalancing.
-