cloudstack: make code more inclusive

Inclusivity changes for CloudStack

- Change the default git branch name from 'master' to 'main' (following the rename of the default branch to 'main' in the git repo)
- Rename some offensive words/terms as appropriate for inclusiveness.

This PR updates the default git branch to 'main', as part of #4887.

Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Suresh Kumar Anaparti 2021-06-08 15:44:53 +05:30 committed by Rohit Yadav
parent d10cdb495f
commit 958182481e
161 changed files with 1221 additions and 1188 deletions


@@ -14,14 +14,14 @@ Bug fixes
It's very important that we can easily track bug fix commits, so their hashes should remain the same in all branches.
Therefore, a pull request (PR) that fixes a bug, should be sent against a release branch.
This can be either the "current release" or the "previous release", depending on which ones are maintained.
-Since the goal is a stable master, bug fixes should be "merged forward" to the next branch in order: "previous release" -> "current release" -> master (in other words: old to new)
+Since the goal is a stable main, bug fixes should be "merged forward" to the next branch in order: "previous release" -> "current release" -> main (in other words: old to new)
Developing new features
-----------------------
-Development should be done in a feature branch, branched off of master.
-Send a PR(steps below) to get it into master (2x LGTM applies).
-PR will only be merged when master is open, will be held otherwise until master is open again.
+Development should be done in a feature branch, branched off of main.
+Send a PR(steps below) to get it into main (2x LGTM applies).
+PR will only be merged when main is open, will be held otherwise until main is open again.
No back porting / cherry-picking features to existing branches!
PendingReleaseNotes file
@@ -46,9 +46,9 @@ On your computer, follow these steps to setup a local repository for working on
$ git clone https://github.com/YOUR_ACCOUNT/cloudstack.git
$ cd cloudstack
$ git remote add upstream https://github.com/apache/cloudstack.git
-$ git checkout master
+$ git checkout main
$ git fetch upstream
-$ git rebase upstream/master
+$ git rebase upstream/main
```
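
Existing clones created before the rename will still have a local `master` branch; a one-time migration along these lines brings them in line (a hedged sketch using standard git commands, with the `upstream` remote named as in the setup above):

``` bash
# Rename the local branch and point it at the renamed upstream branch.
git branch -m master main
git fetch upstream
git branch -u upstream/main main
# Refresh the cached default-branch pointer for the remote.
git remote set-head upstream -a
```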
@@ -56,7 +56,7 @@ Making changes
--------------
-It is important that you create a new branch to make changes on and that you do not change the `master` branch (other than to rebase in changes from `upstream/master`). In this example I will assume you will be making your changes to a branch called `feature_x`. This `feature_x` branch will be created on your local repository and will be pushed to your forked repository on GitHub. Once this branch is on your fork you will create a Pull Request for the changes to be added to the ACS project.
+It is important that you create a new branch to make changes on and that you do not change the `main` branch (other than to rebase in changes from `upstream/main`). In this example I will assume you will be making your changes to a branch called `feature_x`. This `feature_x` branch will be created on your local repository and will be pushed to your forked repository on GitHub. Once this branch is on your fork you will create a Pull Request for the changes to be added to the ACS project.
It is best practice to create a new branch each time you want to contribute to the project and only track the changes for that pull request in this branch.
@@ -71,26 +71,26 @@ $ git commit -a -m "descriptive commit message for your changes"
> The `-b` specifies that you want to create a new branch called `feature_x`. You only specify `-b` the first time you checkout because you are creating a new branch. Once the `feature_x` branch exists, you can later switch to it with only `git checkout feature_x`.
-Rebase `feature_x` to include updates from `upstream/master`
+Rebase `feature_x` to include updates from `upstream/main`
------------------------------------------------------------
-It is important that you maintain an up-to-date `master` branch in your local repository. This is done by rebasing in the code changes from `upstream/master` (the official ACS project repository) into your local repository. You will want to do this before you start working on a feature as well as right before you submit your changes as a pull request. I recommend you do this process periodically while you work to make sure you are working off the most recent project code.
+It is important that you maintain an up-to-date `main` branch in your local repository. This is done by rebasing in the code changes from `upstream/main` (the official ACS project repository) into your local repository. You will want to do this before you start working on a feature as well as right before you submit your changes as a pull request. I recommend you do this process periodically while you work to make sure you are working off the most recent project code.
This process will do the following:
-1. Checkout your local `master` branch
-2. Synchronize your local `master` branch with the `upstream/master` so you have all the latest changes from the project
+1. Checkout your local `main` branch
+2. Synchronize your local `main` branch with the `upstream/main` so you have all the latest changes from the project
3. Rebase the latest project code into your `feature_x` branch so it is up-to-date with the upstream code
``` bash
-$ git checkout master
+$ git checkout main
$ git fetch upstream
-$ git rebase upstream/master
+$ git rebase upstream/main
$ git checkout feature_x
-$ git rebase master
+$ git rebase main
```
-> Now your `feature_x` branch is up-to-date with all the code in `upstream/master`.
+> Now your `feature_x` branch is up-to-date with all the code in `upstream/main`.
Make a GitHub Pull Request to contribute your changes
@@ -100,10 +100,10 @@ When you are happy with your changes and you are ready to contribute them, you w
Please include JIRA id, detailed information about the bug/feature, what all tests are executed, how the reviewer can test this feature etc. Incase of UI PRs, a screenshot is preferred.
-> **IMPORTANT:** Make sure you have rebased your `feature_x` branch to include the latest code from `upstream/master` _before_ you do this.
+> **IMPORTANT:** Make sure you have rebased your `feature_x` branch to include the latest code from `upstream/main` _before_ you do this.
``` bash
-$ git push origin master
+$ git push origin main
$ git push origin feature_x
```
@@ -113,7 +113,7 @@ To initiate the pull request, do the following:
1. In your browser, navigate to your forked repository: [https://github.com/YOUR_ACCOUNT/cloudstack](https://github.com/YOUR_ACCOUNT/cloudstack)
2. Click the new button called '**Compare & pull request**' that showed up just above the main area in your forked repository
-3. Validate the pull request will be into the upstream `master` and will be from your `feature_x` branch
+3. Validate the pull request will be into the upstream `main` and will be from your `feature_x` branch
4. Enter a detailed description of the work you have done and then click '**Send pull request**'
If you are requested to make modifications to your proposed changes, make the changes locally on your `feature_x` branch, re-push the `feature_x` branch to your fork. The existing pull request should automatically pick up the change and update accordingly.
@@ -122,14 +122,14 @@ If you are requested to make modifications to your proposed changes, make the ch
Cleaning up after a successful pull request
-------------------------------------------
-Once the `feature_x` branch has been committed into the `upstream/master` branch, your local `feature_x` branch and the `origin/feature_x` branch are no longer needed. If you want to make additional changes, restart the process with a new branch.
+Once the `feature_x` branch has been committed into the `upstream/main` branch, your local `feature_x` branch and the `origin/feature_x` branch are no longer needed. If you want to make additional changes, restart the process with a new branch.
-> **IMPORTANT:** Make sure that your changes are in `upstream/master` before you delete your `feature_x` and `origin/feature_x` branches!
+> **IMPORTANT:** Make sure that your changes are in `upstream/main` before you delete your `feature_x` and `origin/feature_x` branches!
You can delete these deprecated branches with the following:
``` bash
-$ git checkout master
+$ git checkout main
$ git branch -D feature_x
$ git push origin :feature_x
```


@@ -1,6 +1,6 @@
<!--
Verify first that your issue/request is not already reported on GitHub.
-Also test if the latest release and master branch are affected too.
+Also test if the latest release and main branch are affected too.
Always add information AFTER of these HTML comments, but no need to delete the comments.
-->
@@ -23,7 +23,7 @@ Categorize the issue, e.g. API, VR, VPN, UI, etc.
##### CLOUDSTACK VERSION
<!--
-New line separated list of affected versions, commit ID for issues on master branch.
+New line separated list of affected versions, commit ID for issues on main branch.
-->
~~~


@@ -48,4 +48,4 @@ This PR...
<!-- see how your change affects other areas of the code, etc. -->
-<!-- Please read the [CONTRIBUTING](https://github.com/apache/cloudstack/blob/master/CONTRIBUTING.md) document -->
+<!-- Please read the [CONTRIBUTING](https://github.com/apache/cloudstack/blob/main/CONTRIBUTING.md) document -->


@@ -1,4 +1,4 @@
-# Apache CloudStack [![Build Status](https://travis-ci.org/apache/cloudstack.svg?branch=master)](https://travis-ci.org/apache/cloudstack) [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=apachecloudstack&metric=alert_status)](https://sonarcloud.io/dashboard?id=apachecloudstack) [![Lines of Code](https://sonarcloud.io/api/project_badges/measure?project=apachecloudstack&metric=ncloc)](https://sonarcloud.io/dashboard?id=apachecloudstack) ![GitHub language count](https://img.shields.io/github/languages/count/apache/cloudstack.svg) ![GitHub top language](https://img.shields.io/github/languages/top/apache/cloudstack.svg)
+# Apache CloudStack [![Build Status](https://travis-ci.org/apache/cloudstack.svg?branch=main)](https://travis-ci.org/apache/cloudstack) [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=apachecloudstack&metric=alert_status)](https://sonarcloud.io/dashboard?id=apachecloudstack) [![Lines of Code](https://sonarcloud.io/api/project_badges/measure?project=apachecloudstack&metric=ncloc)](https://sonarcloud.io/dashboard?id=apachecloudstack) ![GitHub language count](https://img.shields.io/github/languages/count/apache/cloudstack.svg) ![GitHub top language](https://img.shields.io/github/languages/top/apache/cloudstack.svg)
![Apache CloudStack](tools/logo/apache_cloudstack.png)


@@ -35,7 +35,7 @@ public interface VirtualRouter extends VirtualMachine {
boolean getIsRedundantRouter();
public enum RedundantState {
-UNKNOWN, MASTER, BACKUP, FAULT
+UNKNOWN, PRIMARY, BACKUP, FAULT
}
RedundantState getRedundantState();
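
Renaming an enum constant such as `RedundantState.MASTER` touches every Java call site as well as any literal that round-trips through the database; a quick repository-wide check for stragglers might look like this (illustrative only, not part of the commit):

``` bash
# Any hit is a call site or literal the rename missed.
git grep -n "RedundantState.MASTER" -- "*.java"
git grep -n "'MASTER'" -- "*.sql"
```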


@@ -824,6 +824,7 @@ public class ApiConstants {
public static final String KUBERNETES_VERSION_ID = "kubernetesversionid";
public static final String KUBERNETES_VERSION_NAME = "kubernetesversionname";
public static final String MASTER_NODES = "masternodes";
+public static final String CONTROL_NODES = "controlnodes";
public static final String MIN_SEMANTIC_VERSION = "minimumsemanticversion";
public static final String MIN_KUBERNETES_VERSION_ID = "minimumkubernetesversionid";
public static final String NODE_ROOT_DISK_SIZE = "noderootdisksize";


@@ -54,8 +54,8 @@ public class ListResourceLimitsCmd extends BaseListProjectAndAccountResourcesCmd
+ "5 - Project. Number of projects an account can own. "
+ "6 - Network. Number of networks an account can own. "
+ "7 - VPC. Number of VPC an account can own. "
-+ "8 - CPU. Number of CPU an account can allocate for his resources. "
-+ "9 - Memory. Amount of RAM an account can allocate for his resources. "
++ "8 - CPU. Number of CPU an account can allocate for their resources. "
++ "9 - Memory. Amount of RAM an account can allocate for their resources. "
+ "10 - PrimaryStorage. Total primary storage space (in GiB) a user can use. "
+ "11 - SecondaryStorage. Total secondary storage space (in GiB) a user can use. ")
private Integer resourceType;
@@ -69,8 +69,8 @@ public class ListResourceLimitsCmd extends BaseListProjectAndAccountResourcesCmd
+ "project - Project. Number of projects an account can own. "
+ "network - Network. Number of networks an account can own. "
+ "vpc - VPC. Number of VPC an account can own. "
-+ "cpu - CPU. Number of CPU an account can allocate for his resources. "
-+ "memory - Memory. Amount of RAM an account can allocate for his resources. "
++ "cpu - CPU. Number of CPU an account can allocate for their resources. "
++ "memory - Memory. Amount of RAM an account can allocate for their resources. "
+ "primary_storage - PrimaryStorage. Total primary storage space (in GiB) a user can use. "
+ "secondary_storage - SecondaryStorage. Total secondary storage space (in GiB) a user can use. ")
private String resourceTypeName;


@@ -92,8 +92,8 @@ public interface QueryService {
ConfigKey<Boolean> AllowUserViewDestroyedVM = new ConfigKey<>("Advanced", Boolean.class, "allow.user.view.destroyed.vm", "false",
"Determines whether users can view their destroyed or expunging vm ", true, ConfigKey.Scope.Account);
-static final ConfigKey<String> UserVMBlacklistedDetails = new ConfigKey<String>("Advanced", String.class,
-"user.vm.blacklisted.details", "rootdisksize, cpuOvercommitRatio, memoryOvercommitRatio, Message.ReservedCapacityFreed.Flag",
+static final ConfigKey<String> UserVMDeniedDetails = new ConfigKey<String>("Advanced", String.class,
+"user.vm.denied.details", "rootdisksize, cpuOvercommitRatio, memoryOvercommitRatio, Message.ReservedCapacityFreed.Flag",
"Determines whether users can view certain VM settings. When set to empty, default value used is: rootdisksize, cpuOvercommitRatio, memoryOvercommitRatio, Message.ReservedCapacityFreed.Flag.", true);
static final ConfigKey<String> UserVMReadOnlyDetails = new ConfigKey<String>("Advanced", String.class,


@@ -83,21 +83,21 @@ db.simulator.autoReconnect=true
db.ha.enabled=false
db.ha.loadBalanceStrategy=com.cloud.utils.db.StaticStrategy
# cloud stack Database
-db.cloud.slaves=localhost,localhost
+db.cloud.replicas=localhost,localhost
db.cloud.autoReconnect=true
db.cloud.failOverReadOnly=false
db.cloud.reconnectAtTxEnd=true
db.cloud.autoReconnectForPools=true
-db.cloud.secondsBeforeRetryMaster=3600
-db.cloud.queriesBeforeRetryMaster=5000
+db.cloud.secondsBeforeRetrySource=3600
+db.cloud.queriesBeforeRetrySource=5000
db.cloud.initialTimeout=3600
#usage Database
-db.usage.slaves=localhost,localhost
+db.usage.replicas=localhost,localhost
db.usage.autoReconnect=true
db.usage.failOverReadOnly=false
db.usage.reconnectAtTxEnd=true
db.usage.autoReconnectForPools=true
-db.usage.secondsBeforeRetryMaster=3600
-db.usage.queriesBeforeRetryMaster=5000
+db.usage.secondsBeforeRetrySource=3600
+db.usage.queriesBeforeRetrySource=5000
db.usage.initialTimeout=3600
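
These keys live in a locally edited properties file, so operators must rename them by hand when upgrading; a sed sketch along these lines would do it (the install path shown is an assumption and varies by packaging):

``` bash
# Back up the file, then rewrite the renamed keys in place.
sed -i.bak \
    -e 's/^db\.cloud\.slaves=/db.cloud.replicas=/' \
    -e 's/^db\.usage\.slaves=/db.usage.replicas=/' \
    -e 's/secondsBeforeRetryMaster/secondsBeforeRetrySource/' \
    -e 's/queriesBeforeRetryMaster/queriesBeforeRetrySource/' \
    /etc/cloudstack/management/db.properties
```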


@@ -48,8 +48,8 @@ public class CheckRouterAnswer extends Answer {
state = RedundantState.UNKNOWN;
return false;
}
-if (details.startsWith("Status: MASTER")) {
-state = RedundantState.MASTER;
+if (details.startsWith("Status: PRIMARY")) {
+state = RedundantState.PRIMARY;
} else if (details.startsWith("Status: BACKUP")) {
state = RedundantState.BACKUP;
} else if (details.startsWith("Status: FAULT")) {
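
The parser keys off the literal first line that the router status script prints, so the script inside the virtual router has to start emitting the new keyword in lockstep (presumably updated elsewhere in this changeset); the contract, roughly (hypothetical output):

``` bash
# The VR-side status script must now print one of these as its first line:
echo "Status: PRIMARY"   # formerly "Status: MASTER"
echo "Status: BACKUP"
echo "Status: FAULT"
```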


@@ -303,3 +303,14 @@ from
-- Update name for global configuration user.vm.readonly.ui.details
Update configuration set name='user.vm.readonly.details' where name='user.vm.readonly.ui.details';
+-- Update name for global configuration 'user.vm.readonly.ui.details' to 'user.vm.denied.details'
+UPDATE `cloud`.`configuration` SET name='user.vm.denied.details' WHERE name='user.vm.blacklisted.details';
+-- Update name for global configuration 'blacklisted.routes' to 'denied.routes'
+UPDATE `cloud`.`configuration` SET name='denied.routes', description='Routes that are denied, can not be used for Static Routes creation for the VPC Private Gateway' WHERE name='blacklisted.routes';
+-- Rename 'master_node_count' to 'control_node_count' in kubernetes_cluster table
+ALTER TABLE `cloud`.`kubernetes_cluster` CHANGE master_node_count control_node_count bigint NOT NULL default '0' COMMENT 'the number of the control nodes deployed for this Kubernetes cluster';
+UPDATE `cloud`.`domain_router` SET redundant_state = 'PRIMARY' WHERE redundant_state = 'MASTER';
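
After the upgrade script runs, a spot check that the renames landed could look like this (a hypothetical check; credentials and host depend on the deployment):

``` bash
# Expect the two new key names back, and zero rows for the old ones.
mysql -u cloud -p -e "SELECT name FROM cloud.configuration \
  WHERE name IN ('user.vm.denied.details', 'denied.routes', \
                 'user.vm.blacklisted.details', 'blacklisted.routes');"
```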


@@ -72,7 +72,7 @@ import com.cloud.hypervisor.Hypervisor;
import com.cloud.org.Cluster;
import com.cloud.org.Managed;
import com.cloud.resource.ResourceState;
-import com.cloud.server.LockMasterListener;
+import com.cloud.server.LockControllerListener;
import com.cloud.storage.DataStoreRole;
import com.cloud.storage.ScopeType;
import com.cloud.storage.Storage;
@@ -120,12 +120,12 @@ public class EndpointSelectorTest {
@Inject
AccountManager accountManager;
-LockMasterListener lockMasterListener;
+LockControllerListener lockControllerListener;
VolumeInfo vol = null;
FakePrimaryDataStoreDriver driver = new FakePrimaryDataStoreDriver();
@Inject
MockStorageMotionStrategy mockStorageMotionStrategy;
-Merovingian2 _lockMaster;
+Merovingian2 _lockController;
@Inject
DataStoreManager dataStoreManager;
@Inject
@@ -187,12 +187,12 @@
when(accountManager.getSystemAccount()).thenReturn(account);
when(accountManager.getSystemUser()).thenReturn(user);
-if (Merovingian2.getLockMaster() == null) {
-_lockMaster = Merovingian2.createLockMaster(1234);
+if (Merovingian2.getLockController() == null) {
+_lockController = Merovingian2.createLockController(1234);
} else {
-_lockMaster = Merovingian2.getLockMaster();
+_lockController = Merovingian2.getLockController();
}
-_lockMaster.cleanupThisServer();
+_lockController.cleanupThisServer();
ComponentContext.initComponentsLifeCycle();
}


@@ -73,7 +73,7 @@ import com.cloud.dc.dao.HostPodDao;
import com.cloud.hypervisor.Hypervisor;
import com.cloud.org.Cluster;
import com.cloud.org.Managed;
-import com.cloud.server.LockMasterListener;
+import com.cloud.server.LockControllerListener;
import com.cloud.storage.CreateSnapshotPayload;
import com.cloud.storage.DataStoreRole;
import com.cloud.storage.ScopeType;
@@ -134,12 +134,12 @@ public class SnapshotTestWithFakeData {
ImageStoreVO imageStore;
@Inject
AccountManager accountManager;
-LockMasterListener lockMasterListener;
+LockControllerListener lockControllerListener;
VolumeInfo vol = null;
FakePrimaryDataStoreDriver driver = new FakePrimaryDataStoreDriver();
@Inject
MockStorageMotionStrategy mockStorageMotionStrategy;
-Merovingian2 _lockMaster;
+Merovingian2 _lockController;
@Inject
SnapshotPolicyDao snapshotPolicyDao;
@@ -189,18 +189,18 @@
when(accountManager.getSystemAccount()).thenReturn(account);
when(accountManager.getSystemUser()).thenReturn(user);
-if (Merovingian2.getLockMaster() == null) {
-_lockMaster = Merovingian2.createLockMaster(1234);
+if (Merovingian2.getLockController() == null) {
+_lockController = Merovingian2.createLockController(1234);
} else {
-_lockMaster = Merovingian2.getLockMaster();
+_lockController = Merovingian2.getLockController();
}
-_lockMaster.cleanupThisServer();
+_lockController.cleanupThisServer();
ComponentContext.initComponentsLifeCycle();
}
@After
public void tearDown() throws Exception {
-_lockMaster.cleanupThisServer();
+_lockController.cleanupThisServer();
}
private SnapshotVO createSnapshotInDb() {


@@ -68,7 +68,7 @@ public class Merovingian2 extends StandardMBean implements MerovingianMBean {
conn = TransactionLegacy.getStandaloneConnectionWithException();
conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
conn.setAutoCommit(true);
-_concierge = new ConnectionConcierge("LockMaster", conn, true);
+_concierge = new ConnectionConcierge("LockController", conn, true);
} catch (SQLException e) {
s_logger.error("Unable to get a new db connection", e);
throw new CloudRuntimeException("Unable to initialize a connection to the database for locking purposes", e);
@@ -83,8 +83,8 @@
}
}
-public static synchronized Merovingian2 createLockMaster(long msId) {
-assert s_instance == null : "No lock can serve two masters. Either he will hate the one and love the other, or he will be devoted to the one and despise the other.";
+public static synchronized Merovingian2 createLockController(long msId) {
+assert s_instance == null : "No lock can serve two controllers. Either we will hate the one and love the other, or we will be devoted to the one and despise the other.";
s_instance = new Merovingian2(msId);
s_instance.cleanupThisServer();
try {
@@ -95,7 +95,7 @@
return s_instance;
}
-public static Merovingian2 getLockMaster() {
+public static Merovingian2 getLockController() {
return s_instance;
}


@@ -377,19 +377,19 @@ public class TransactionLegacy implements Closeable {
}
public boolean lock(final String name, final int timeoutSeconds) {
-Merovingian2 lockMaster = Merovingian2.getLockMaster();
-if (lockMaster == null) {
+Merovingian2 lockController = Merovingian2.getLockController();
+if (lockController == null) {
throw new CloudRuntimeException("There's no support for locking yet");
}
-return lockMaster.acquire(name, timeoutSeconds);
+return lockController.acquire(name, timeoutSeconds);
}
public boolean release(final String name) {
-Merovingian2 lockMaster = Merovingian2.getLockMaster();
-if (lockMaster == null) {
+Merovingian2 lockController = Merovingian2.getLockController();
+if (lockController == null) {
throw new CloudRuntimeException("There's no support for locking yet");
}
-return lockMaster.release(name);
+return lockController.release(name);
}
/**
@@ -644,9 +644,9 @@
closeConnection();
_stack.clear();
-Merovingian2 lockMaster = Merovingian2.getLockMaster();
-if (lockMaster != null) {
-lockMaster.cleanupThread();
+Merovingian2 lockController = Merovingian2.getLockController();
+if (lockController != null) {
+lockController.cleanupThread();
}
}
@@ -1063,11 +1063,11 @@
final String url = dbProps.getProperty("db.cloud.url.params");
String cloudDbHAParams = null;
-String cloudSlaves = null;
+String cloudReplicas = null;
if (s_dbHAEnabled) {
cloudDbHAParams = getDBHAParams("cloud", dbProps);
-cloudSlaves = dbProps.getProperty("db.cloud.slaves");
-s_logger.info("The slaves configured for Cloud Data base is/are : " + cloudSlaves);
+cloudReplicas = dbProps.getProperty("db.cloud.replicas");
+s_logger.info("The replicas configured for Cloud Data base is/are : " + cloudReplicas);
}
final boolean useSSL = Boolean.parseBoolean(dbProps.getProperty("db.cloud.useSSL"));
@@ -1078,7 +1078,7 @@
System.setProperty("javax.net.ssl.trustStorePassword", dbProps.getProperty("db.cloud.trustStorePassword"));
}
-final String cloudConnectionUri = cloudDriver + "://" + cloudHost + (s_dbHAEnabled ? "," + cloudSlaves : "") + ":" + cloudPort + "/" + cloudDbName +
+final String cloudConnectionUri = cloudDriver + "://" + cloudHost + (s_dbHAEnabled ? "," + cloudReplicas : "") + ":" + cloudPort + "/" + cloudDbName +
"?autoReconnect=" + cloudAutoReconnect + (url != null ? "&" + url : "") + (useSSL ? "&useSSL=true" : "") +
(s_dbHAEnabled ? "&" + cloudDbHAParams : "") + (s_dbHAEnabled ? "&loadBalanceStrategy=" + loadBalanceStrategy : "");
DriverLoader.loadDriver(cloudDriver);
@@ -1101,7 +1101,7 @@
final boolean usageAutoReconnect = Boolean.parseBoolean(dbProps.getProperty("db.usage.autoReconnect"));
final String usageUrl = dbProps.getProperty("db.usage.url.params");
-final String usageConnectionUri = usageDriver + "://" + usageHost + (s_dbHAEnabled ? "," + dbProps.getProperty("db.cloud.slaves") : "") + ":" + usagePort +
+final String usageConnectionUri = usageDriver + "://" + usageHost + (s_dbHAEnabled ? "," + dbProps.getProperty("db.cloud.replicas") : "") + ":" + usagePort +
"/" + usageDbName + "?autoReconnect=" + usageAutoReconnect + (usageUrl != null ? "&" + usageUrl : "") +
(s_dbHAEnabled ? "&" + getDBHAParams("usage", dbProps) : "") + (s_dbHAEnabled ? "&loadBalanceStrategy=" + loadBalanceStrategy : "");
DriverLoader.loadDriver(usageDriver);
@@ -1196,8 +1196,8 @@
sb.append("failOverReadOnly=" + dbProps.getProperty("db." + dbName + ".failOverReadOnly"));
sb.append("&").append("reconnectAtTxEnd=" + dbProps.getProperty("db." + dbName + ".reconnectAtTxEnd"));
sb.append("&").append("autoReconnectForPools=" + dbProps.getProperty("db." + dbName + ".autoReconnectForPools"));
-sb.append("&").append("secondsBeforeRetryMaster=" + dbProps.getProperty("db." + dbName + ".secondsBeforeRetryMaster"));
-sb.append("&").append("queriesBeforeRetryMaster=" + dbProps.getProperty("db." + dbName + ".queriesBeforeRetryMaster"));
+sb.append("&").append("secondsBeforeRetrySource=" + dbProps.getProperty("db." + dbName + ".secondsBeforeRetrySource"));
+sb.append("&").append("queriesBeforeRetrySource=" + dbProps.getProperty("db." + dbName + ".queriesBeforeRetrySource"));
sb.append("&").append("initialTimeout=" + dbProps.getProperty("db." + dbName + ".initialTimeout"));
return sb.toString();
}


@@ -26,53 +26,53 @@ import org.junit.Test;
public class Merovingian2Test extends TestCase {
static final Logger s_logger = Logger.getLogger(Merovingian2Test.class);
-Merovingian2 _lockMaster = Merovingian2.createLockMaster(1234);
+Merovingian2 _lockController = Merovingian2.createLockController(1234);
@Override
@Before
protected void setUp() throws Exception {
-_lockMaster.cleanupThisServer();
+_lockController.cleanupThisServer();
}
@Override
@After
protected void tearDown() throws Exception {
-_lockMaster.cleanupThisServer();
+_lockController.cleanupThisServer();
}
@Test
public void testLockAndRelease() {
s_logger.info("Testing first acquire");
-boolean result = _lockMaster.acquire("first" + 1234, 5);
+boolean result = _lockController.acquire("first" + 1234, 5);
Assert.assertTrue(result);
s_logger.info("Testing acquire of different lock");
-result = _lockMaster.acquire("second" + 1234, 5);
+result = _lockController.acquire("second" + 1234, 5);
Assert.assertTrue(result);
s_logger.info("Testing reacquire of the same lock");
-result = _lockMaster.acquire("first" + 1234, 5);
+result = _lockController.acquire("first" + 1234, 5);
Assert.assertTrue(result);
-int count = _lockMaster.owns("first" + 1234);
+int count = _lockController.owns("first" + 1234);
Assert.assertEquals(count, 2);
-count = _lockMaster.owns("second" + 1234);
+count = _lockController.owns("second" + 1234);
Assert.assertEquals(count, 1);
s_logger.info("Testing release of the first lock");
-result = _lockMaster.release("first" + 1234);
+result = _lockController.release("first" + 1234);
Assert.assertTrue(result);
-count = _lockMaster.owns("first" + 1234);
+count = _lockController.owns("first" + 1234);
Assert.assertEquals(count, 1);
s_logger.info("Testing release of the second lock");
-result = _lockMaster.release("second" + 1234);
+result = _lockController.release("second" + 1234);
Assert.assertTrue(result);
-result = _lockMaster.release("first" + 1234);
+result = _lockController.release("first" + 1234);
Assert.assertTrue(result);
}


@@ -58,7 +58,7 @@ public class DynamicRoleBasedAPIAccessChecker extends AdapterBase implements API
}
private void denyApiAccess(final String commandName) throws PermissionDeniedException {
-throw new PermissionDeniedException("The API " + commandName + " is blacklisted for the account's role.");
+throw new PermissionDeniedException("The API " + commandName + " is denied for the account's role.");
}
public boolean isDisabled() {


@@ -55,7 +55,7 @@ public class ProjectRoleBasedApiAccessChecker extends AdapterBase implements AP
}
private void denyApiAccess(final String commandName) throws PermissionDeniedException {
-throw new PermissionDeniedException("The API " + commandName + " is blacklisted for the user's/account's project role.");
+throw new PermissionDeniedException("The API " + commandName + " is denied for the user's/account's project role.");
}


@@ -90,7 +90,7 @@ public class StaticRoleBasedAPIAccessChecker extends AdapterBase implements APIA
}
if (commandNames.contains(commandName)) {
-throw new PermissionDeniedException("The API is blacklisted. Role type=" + roleType.toString() + " is not allowed to request the api: " + commandName);
+throw new PermissionDeniedException("The API is denied. Role type=" + roleType.toString() + " is not allowed to request the api: " + commandName);
} else {
throw new UnavailableCommandException("The API " + commandName + " does not exist or is not available for this account.");
}


@@ -44,21 +44,21 @@ public class StaticStrategy implements BalanceStrategy {
SQLException ex = null;
-List<String> whiteList = new ArrayList<String>(numHosts);
-whiteList.addAll(configuredHosts);
+List<String> allowList = new ArrayList<String>(numHosts);
+allowList.addAll(configuredHosts);
-Map<String, Long> blackList = ((LoadBalancedConnectionProxy) proxy).getGlobalBlacklist();
-whiteList.removeAll(blackList.keySet());
-Map<String, Integer> whiteListMap = this.getArrayIndexMap(whiteList);
+Map<String, Long> denylist = ((LoadBalancedConnectionProxy) proxy).getGlobalBlacklist();
+allowList.removeAll(denylist.keySet());
+Map<String, Integer> allowListMap = this.getArrayIndexMap(allowList);
for (int attempts = 0; attempts < numRetries;) {
-if (whiteList.size() == 0) {
+if (allowList.size() == 0) {
throw SQLError.createSQLException("No hosts configured", null);
}
-String hostPortSpec = whiteList.get(0); //Always take the first host
+String hostPortSpec = allowList.get(0); //Always take the first host
ConnectionImpl conn = (ConnectionImpl) liveConnections.get(hostPortSpec);
@@ -70,16 +70,16 @@
if (((LoadBalancedConnectionProxy) proxy).shouldExceptionTriggerFailover(sqlEx)) {
-Integer whiteListIndex = whiteListMap.get(hostPortSpec);
+Integer allowListIndex = allowListMap.get(hostPortSpec);
// exclude this host from being picked again
-if (whiteListIndex != null) {
-whiteList.remove(whiteListIndex.intValue());
-whiteListMap = this.getArrayIndexMap(whiteList);
+if (allowListIndex != null) {
+allowList.remove(allowListIndex.intValue());
+allowListMap = this.getArrayIndexMap(allowList);
}
((LoadBalancedConnectionProxy) proxy).addToGlobalBlacklist(hostPortSpec);
-if (whiteList.size() == 0) {
+if (allowList.size() == 0) {
attempts++;
try {
Thread.sleep(250);
@@ -88,12 +88,12 @@
}
// start fresh
-whiteListMap = new HashMap<String, Integer>(numHosts);
-whiteList.addAll(configuredHosts);
-blackList = ((LoadBalancedConnectionProxy) proxy).getGlobalBlacklist();
-whiteList.removeAll(blackList.keySet());
-whiteListMap = this.getArrayIndexMap(whiteList);
+allowListMap = new HashMap<String, Integer>(numHosts);
+allowList.addAll(configuredHosts);
+denylist = ((LoadBalancedConnectionProxy) proxy).getGlobalBlacklist();
+allowList.removeAll(denylist.keySet());
+allowListMap = this.getArrayIndexMap(allowList);
}
continue;
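
Note that the `getGlobalBlacklist()` and `addToGlobalBlacklist()` calls stay as they are, presumably because they belong to the MySQL Connector/J proxy API rather than to CloudStack's own code; only locally owned identifiers are renamed. A quick way to confirm nothing local was missed (illustrative only):

``` bash
# Remaining "Blacklist" hits should all be Connector/J API calls
# on LoadBalancedConnectionProxy, not local variables.
git grep -nE "whiteList|blackList" -- "*StaticStrategy.java"
```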


@@ -224,7 +224,7 @@ public class OvmResourceBase implements ServerResource, HypervisorResource {
_conn = new Connection(_ip, _agentUserName, _agentPassword);
try {
-OvmHost.registerAsMaster(_conn);
+OvmHost.registerAsPrimary(_conn);
OvmHost.registerAsVmServer(_conn);
_bridges = OvmBridge.getAllBridges(_conn);
} catch (XmlRpcException e) {
@@ -398,11 +398,11 @@
try {
OvmHost.Details d = OvmHost.getDetails(_conn);
//TODO: cleanup halted vm
-if (d.masterIp.equalsIgnoreCase(_ip)) {
+if (d.primaryIp.equalsIgnoreCase(_ip)) {
return new ReadyAnswer(cmd);
} else {
-s_logger.debug("Master IP changes to " + d.masterIp + ", it should be " + _ip);
-return new ReadyAnswer(cmd, "I am not the master server");
+s_logger.debug("Primary IP changes to " + d.primaryIp + ", it should be " + _ip);
+return new ReadyAnswer(cmd, "I am not the primary server");
}
} catch (XmlRpcException e) {
s_logger.debug("XML RPC Exception" + e.getMessage(), e);


@@ -26,7 +26,7 @@ public class OvmHost extends OvmObject {
public static final String XEN = "xen";
public static class Details {
-public String masterIp;
+public String primaryIp;
public Integer cpuNum;
public Integer cpuSpeed;
public Long totalMemory;
@@ -42,9 +42,9 @@
}
}
-public static void registerAsMaster(Connection c) throws XmlRpcException {
+public static void registerAsPrimary(Connection c) throws XmlRpcException {
Object[] params = {c.getIp(), c.getUserName(), c.getPassword(), c.getPort(), c.getIsSsl()};
-c.call("OvmHost.registerAsMaster", params, false);
+c.call("OvmHost.registerAsPrimary", params, false);
}
public static void registerAsVmServer(Connection c) throws XmlRpcException {


@@ -38,8 +38,6 @@ public class Test {
//pool.registerServer("192.168.105.155", Pool.ServerType.UTILITY);
//pool.registerServer("192.168.105.155", Pool.ServerType.XEN);
System.out.println("Is:" + pool.isServerRegistered());
-//String ip = pool.getMasterIp();
-//System.out.println("IP:" + ip);
System.out.println(pool.getServerConfig());
System.out.println(pool.getServerXmInfo());
System.out.println(pool.getHostInfo());
@@ -89,8 +87,6 @@
/* This is not being used at the moment.
* Coverity issue: 1012179
*/
-//final String txt =
-// "{\"MasterIp\": \"192.168.189.12\", \"dom0Memory\": 790626304, \"freeMemory\": 16378757120, \"totalMemory\": 17169383424, \"cpuNum\": 4, \"agentVersion\": \"2.3-38\", \"cpuSpeed\": 2261}";
//OvmHost.Details d = new GsonBuilder().create().fromJson(txt, OvmHost.Details.class);
//OvmHost.Details d = Coder.fromJson(txt, OvmHost.Details.class);


@@ -41,7 +41,7 @@ errCode = {
"OvmDispatch.InvaildFunction":OvmDispatcherStub+3,
"OvmVm.reboot":OvmDispatcherStub+4,
-"OvmHost.registerAsMaster":OvmHostErrCodeStub+1,
+"OvmHost.registerAsPrimary":OvmHostErrCodeStub+1,
"OvmHost.registerAsVmServer":OvmHostErrCodeStub+2,
"OvmHost.ping":OvmHostErrCodeStub+3,
"OvmHost.getDetails":OvmHostErrCodeStub+4,


@@ -95,9 +95,9 @@ class OvmHost(OvmObject):
raise NoVmFoundException("No domain id for %s found"%vmName)
@staticmethod
-def registerAsMaster(hostname, username="oracle", password="password", port=8899, isSsl=False):
+def registerAsPrimary(hostname, username="oracle", password="password", port=8899, isSsl=False):
try:
-logger.debug(OvmHost.registerAsMaster, "ip=%s, username=%s, password=%s, port=%s, isSsl=%s"%(hostname, username, password, port, isSsl))
+logger.debug(OvmHost.registerAsPrimary, "ip=%s, username=%s, password=%s, port=%s, isSsl=%s"%(hostname, username, password, port, isSsl))
exceptionIfNoSuccess(register_server(hostname, 'site', False, username, password, port, isSsl),
"Register %s as site failed"%hostname)
exceptionIfNoSuccess(register_server(hostname, 'utility', False, username, password, port, isSsl),
@@ -106,8 +106,8 @@
return rs
except Exception, e:
errmsg = fmt_err_msg(e)
-logger.error(OvmHost.registerAsMaster, errmsg)
-raise XmlRpcFault(toErrCode(OvmHost, OvmHost.registerAsMaster), errmsg)
+logger.error(OvmHost.registerAsPrimary, errmsg)
+raise XmlRpcFault(toErrCode(OvmHost, OvmHost.registerAsPrimary), errmsg)
@staticmethod
def registerAsVmServer(hostname, username="oracle", password="password", port=8899, isSsl=False):


@@ -91,7 +91,7 @@ public class Cluster extends OvmObject {
* update_clusterConfiguration, <class 'agent.api.cluster.o2cb.ClusterO2CB'>
* argument: self - default: None argument: cluster_conf - default: None <(
* ? cluster_conf can be a "dict" or a plain file: print
-* master.update_clusterConfiguration(
+* primary.update_clusterConfiguration(
* "heartbeat:\n\tregion = 0004FB0000050000E70FBDDEB802208F\n\tcluster = ba9aaf00ae5e2d72\n\nnode:\n\tip_port = 7777\n\tip_address = 192.168.1.64\n\tnumber = 0\n\tname = ovm-1\n\tcluster = ba9aaf00ae5e2d72\n\nnode:\n\tip_port = 7777\n\tip_address = 192.168.1.65\n\tnumber = 1\n\tname = ovm-2\n\tcluster = ba9aaf00ae5e2d72\n\ncluster:\n\tnode_count = 2\n\theartbeat_mode = global\n\tname = ba9aaf00ae5e2d72\n"
* )
*/


@@ -57,7 +57,7 @@ public class Linux extends OvmObject {
* {OS_Major_Version=5, Statistic=20, Membership_State=Unowned,
* OVM_Version=3.2.1-517, OS_Type=Linux, Hypervisor_Name=Xen,
* CPU_Type=x86_64, Manager_Core_API_Version=3.2.1.516,
-* Is_Current_Master=false, OS_Name=Oracle VM Server,
+* Is_Primary=false, OS_Name=Oracle VM Server,
* Server_Roles=xen,utility, Pool_Unique_Id=none,
* Host_Kernel_Release=2.6.39-300.22.2.el5uek, OS_Minor_Version=7,
* Agent_Version=3.2.1-183, Boot_Time=1392366638, RPM_Version=3.2.1-183,
@@ -154,8 +154,8 @@
return get("Server_Roles");
}
-public boolean getIsMaster() throws Ovm3ResourceException {
-return Boolean.parseBoolean(get("Is_Current_Master"));
+public boolean getIsPrimary() throws Ovm3ResourceException {
+return Boolean.parseBoolean(get("Is_Primary"));
}
public String getOvmVersion() throws Ovm3ResourceException {


@@ -42,7 +42,7 @@ public class Pool extends OvmObject {
};
private List<String> poolHosts = new ArrayList<String>();
private final List<String> poolRoles = new ArrayList<String>();
-private String poolMasterVip;
+private String poolPrimaryVip;
private String poolAlias;
private String poolId = null;
@@ -50,8 +50,8 @@
setClient(c);
}
-public String getPoolMasterVip() {
-return poolMasterVip;
+public String getPoolPrimaryVip() {
+return poolPrimaryVip;
}
public String getPoolAlias() {
@@ -115,7 +115,7 @@
/*
* public Boolean updatePoolVirtualIp(String ip) throws
* Ovm3ResourceException { Object x = callWrapper("update_pool_virtual_ip",
-* ip); if (x == null) { poolMasterVip = ip; return true; } return false; }
+* ip); if (x == null) { poolPrimaryVip = ip; return true; } return false; }
*/
public Boolean leaveServerPool(String uuid) throws Ovm3ResourceException{
@@ -199,7 +199,7 @@
String path = "//Discover_Server_Pool_Result/Server_Pool";
poolId = xmlToString(path + "/Unique_Id", xmlDocument);
poolAlias = xmlToString(path + "/Pool_Alias", xmlDocument);
-poolMasterVip = xmlToString(path + "/Master_Virtual_Ip",
+poolPrimaryVip = xmlToString(path + "/Primary_Virtual_Ip",
xmlDocument);
poolHosts.addAll(xmlToList(path + "//Registered_IP", xmlDocument));
if (poolId == null) {


@@ -308,7 +308,7 @@ public class Ovm3HypervisorResource extends ServerResourceBase implements Hyperv
@Override
public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
LOGGER.debug("configure " + name + " with params: " + params);
-/* check if we're master or not and if we can connect */
+/* check if we're primary or not and if we can connect */
try {
configuration = new Ovm3Configuration(params);
if (!configuration.getIsTest()) {
@@ -320,7 +320,7 @@
if (!configuration.getIsTest()) {
hypervisorsupport.setupServer(configuration.getAgentSshKeyFileName());
}
-hypervisorsupport.masterCheck();
+hypervisorsupport.primaryCheck();
} catch (Exception e) {
throw new CloudRuntimeException("Base checks failed for " + configuration.getAgentHostname(), e);
}


@@ -50,8 +50,8 @@ public class Ovm3Configuration {
private Boolean agentOvsAgentSsl = false;
private String agentSshKeyFile = "id_rsa.cloud";
private String agentOwnedByUuid = "d1a749d4295041fb99854f52ea4dea97";
-private Boolean agentIsMaster = false;
-private Boolean agentHasMaster = false;
+private Boolean agentIsPrimary = false;
+private Boolean agentHasPrimary = false;
private Boolean agentInOvm3Pool = false;
private Boolean agentInOvm3Cluster = false;
private String ovm3PoolVip = "";
@@ -266,20 +266,20 @@
this.agentOwnedByUuid = agentOwnedByUuid;
}
-public Boolean getAgentIsMaster() {
-return agentIsMaster;
+public Boolean getAgentIsPrimary() {
+return agentIsPrimary;
}
-public void setAgentIsMaster(Boolean agentIsMaster) {
-this.agentIsMaster = agentIsMaster;
+public void setAgentIsPrimary(Boolean agentIsPrimary) {
+this.agentIsPrimary = agentIsPrimary;
}
-public Boolean getAgentHasMaster() {
-return agentHasMaster;
+public Boolean getAgentHasPrimary() {
+return agentHasPrimary;
}
-public void setAgentHasMaster(Boolean agentHasMaster) {
-this.agentHasMaster = agentHasMaster;
+public void setAgentHasPrimary(Boolean agentHasPrimary) {
+this.agentHasPrimary = agentHasPrimary;
}
public Boolean getAgentInOvm3Pool() {


@ -246,8 +246,8 @@ public class Ovm3HypervisorSupport {
d.put("private.network.device", config.getAgentPrivateNetworkName()); d.put("private.network.device", config.getAgentPrivateNetworkName());
d.put("guest.network.device", config.getAgentGuestNetworkName()); d.put("guest.network.device", config.getAgentGuestNetworkName());
d.put("storage.network.device", config.getAgentStorageNetworkName()); d.put("storage.network.device", config.getAgentStorageNetworkName());
d.put("ismaster", config.getAgentIsMaster().toString()); d.put("isprimary", config.getAgentIsPrimary().toString());
d.put("hasmaster", config.getAgentHasMaster().toString()); d.put("hasprimary", config.getAgentHasPrimary().toString());
cmd.setHostDetails(d); cmd.setHostDetails(d);
LOGGER.debug("Add an Ovm3 host " + config.getAgentHostname() + ":" LOGGER.debug("Add an Ovm3 host " + config.getAgentHostname() + ":"
+ cmd.getHostDetails()); + cmd.getHostDetails());
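
The renamed detail keys have to stay in sync with what Ovm3Configuration parses back out of the startup params (the same isprimary/hasprimary entries appear in Ovm3ConfigurationTest further down). A minimal sketch of that round trip, with the plain map standing in for the real host-details plumbing:

```
import java.util.HashMap;
import java.util.Map;

public class HostDetailsRoundTrip {
    public static void main(String[] args) {
        // producer side: the host advertises its pool role under the renamed keys
        Map<String, String> details = new HashMap<>();
        details.put("isprimary", "false");
        details.put("hasprimary", "true");

        // consumer side: the configuration reads the same keys back as booleans
        boolean isPrimary = Boolean.parseBoolean(details.get("isprimary"));
        boolean hasPrimary = Boolean.parseBoolean(details.get("hasprimary"));
        System.out.println(isPrimary + " / " + hasPrimary);   // false / true
    }
}
```
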
@ -571,13 +571,13 @@ public class Ovm3HypervisorSupport {
} }
/** /**
* masterCheck * primaryCheck
* *
* @return * @return
*/ */
public boolean masterCheck() { public boolean primaryCheck() {
if ("".equals(config.getOvm3PoolVip())) { if ("".equals(config.getOvm3PoolVip())) {
LOGGER.debug("No cluster vip, not checking for master"); LOGGER.debug("No cluster vip, not checking for primary");
return false; return false;
} }
@ -585,26 +585,26 @@ public class Ovm3HypervisorSupport {
CloudstackPlugin cSp = new CloudstackPlugin(c); CloudstackPlugin cSp = new CloudstackPlugin(c);
if (cSp.dom0HasIp(config.getOvm3PoolVip())) { if (cSp.dom0HasIp(config.getOvm3PoolVip())) {
LOGGER.debug(config.getAgentHostname() LOGGER.debug(config.getAgentHostname()
+ " is a master, already has vip " + " is a primary, already has vip "
+ config.getOvm3PoolVip()); + config.getOvm3PoolVip());
config.setAgentIsMaster(true); config.setAgentIsPrimary(true);
} else if (cSp.ping(config.getOvm3PoolVip())) { } else if (cSp.ping(config.getOvm3PoolVip())) {
LOGGER.debug(config.getAgentHostname() LOGGER.debug(config.getAgentHostname()
+ " has a master, someone has vip " + " has a primary, someone has vip "
+ config.getOvm3PoolVip()); + config.getOvm3PoolVip());
config.setAgentHasMaster(true); config.setAgentHasPrimary(true);
} else { } else {
LOGGER.debug(config.getAgentHostname() LOGGER.debug(config.getAgentHostname()
+ " becomes a master, no one has vip " + " becomes a primary, no one has vip "
+ config.getOvm3PoolVip()); + config.getOvm3PoolVip());
config.setAgentIsMaster(true); config.setAgentIsPrimary(true);
} }
} catch (Ovm3ResourceException e) { } catch (Ovm3ResourceException e) {
LOGGER.debug(config.getAgentHostname() LOGGER.debug(config.getAgentHostname()
+ " can't reach master: " + e.getMessage()); + " can't reach primary: " + e.getMessage());
config.setAgentHasMaster(false); config.setAgentHasPrimary(false);
} }
return config.getAgentIsMaster(); return config.getAgentIsPrimary();
} }
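
The renamed primaryCheck() keeps the original three-way decision: a host that already holds the pool VIP is the primary, a host that can ping the VIP knows someone else is, and a host that can do neither claims the role. A compact sketch of just that decision, where the two predicates stand in for the CloudstackPlugin dom0HasIp/ping calls (placeholder names, not the real agent API):

```
import java.util.function.Predicate;

public class VipElection {
    enum Role { PRIMARY, HAS_PRIMARY, NONE }

    static Role elect(String poolVip, Predicate<String> holdsIp, Predicate<String> pings) {
        if (poolVip == null || poolVip.isEmpty()) {
            return Role.NONE;            // no cluster VIP configured, nothing to decide
        }
        if (holdsIp.test(poolVip)) {
            return Role.PRIMARY;         // we already own the VIP
        }
        if (pings.test(poolVip)) {
            return Role.HAS_PRIMARY;     // another host answers on the VIP
        }
        return Role.PRIMARY;             // VIP unclaimed, so this host takes it
    }

    public static void main(String[] args) {
        System.out.println(elect("192.168.1.230", ip -> false, ip -> true)); // HAS_PRIMARY
    }
}
```
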
/* Check if the host is in ready state for CS */ /* Check if the host is in ready state for CS */
@ -614,22 +614,22 @@ public class Ovm3HypervisorSupport {
Pool pool = new Pool(c); Pool pool = new Pool(c);
/* only interesting when doing cluster */ /* only interesting when doing cluster */
if (!host.getIsMaster() && config.getAgentInOvm3Cluster()) { if (!host.getIsPrimary() && config.getAgentInOvm3Cluster()) {
if (pool.getPoolMasterVip().equalsIgnoreCase(c.getIp())) { if (pool.getPoolPrimaryVip().equalsIgnoreCase(c.getIp())) {
/* check pool state here */ /* check pool state here */
return new ReadyAnswer(cmd); return new ReadyAnswer(cmd);
} else { } else {
LOGGER.debug("Master IP changes to " LOGGER.debug("Primary IP changes to "
+ pool.getPoolMasterVip() + ", it should be " + pool.getPoolPrimaryVip() + ", it should be "
+ c.getIp()); + c.getIp());
return new ReadyAnswer(cmd, "I am not the master server"); return new ReadyAnswer(cmd, "I am not the primary server");
} }
} else if (host.getIsMaster()) { } else if (host.getIsPrimary()) {
LOGGER.debug("Master, not clustered " LOGGER.debug("Primary, not clustered "
+ config.getAgentHostname()); + config.getAgentHostname());
return new ReadyAnswer(cmd); return new ReadyAnswer(cmd);
} else { } else {
LOGGER.debug("No master, not clustered " LOGGER.debug("No primary, not clustered "
+ config.getAgentHostname()); + config.getAgentHostname());
return new ReadyAnswer(cmd); return new ReadyAnswer(cmd);
} }


@ -138,7 +138,7 @@ public class Ovm3StoragePool {
* @throws ConfigurationException * @throws ConfigurationException
*/ */
public boolean prepareForPool() throws ConfigurationException { public boolean prepareForPool() throws ConfigurationException {
/* need single master uuid */ /* need single primary uuid */
try { try {
Linux host = new Linux(c); Linux host = new Linux(c);
Pool pool = new Pool(c); Pool pool = new Pool(c);
@ -201,7 +201,7 @@ public class Ovm3StoragePool {
Pool poolHost = new Pool(c); Pool poolHost = new Pool(c);
PoolOCFS2 poolFs = new PoolOCFS2(c); PoolOCFS2 poolFs = new PoolOCFS2(c);
if (config.getAgentIsMaster()) { if (config.getAgentIsPrimary()) {
try { try {
LOGGER.debug("Create poolfs on " + config.getAgentHostname() LOGGER.debug("Create poolfs on " + config.getAgentHostname()
+ " for repo " + primUuid); + " for repo " + primUuid);
@ -218,7 +218,7 @@ public class Ovm3StoragePool {
} catch (Ovm3ResourceException e) { } catch (Ovm3ResourceException e) {
throw e; throw e;
} }
} else if (config.getAgentHasMaster()) { } else if (config.getAgentHasPrimary()) {
try { try {
poolHost.joinServerPool(poolAlias, primUuid, poolHost.joinServerPool(poolAlias, primUuid,
config.getOvm3PoolVip(), poolSize + 1, config.getOvm3PoolVip(), poolSize + 1,
@ -262,15 +262,15 @@ public class Ovm3StoragePool {
try { try {
Connection m = new Connection(config.getOvm3PoolVip(), c.getPort(), Connection m = new Connection(config.getOvm3PoolVip(), c.getPort(),
c.getUserName(), c.getPassword()); c.getUserName(), c.getPassword());
Pool poolMaster = new Pool(m); Pool poolPrimary = new Pool(m);
if (poolMaster.isInAPool()) { if (poolPrimary.isInAPool()) {
members.addAll(poolMaster.getPoolMemberList()); members.addAll(poolPrimary.getPoolMemberList());
if (!poolMaster.getPoolMemberList().contains(c.getIp()) if (!poolPrimary.getPoolMemberList().contains(c.getIp())
&& c.getIp().equals(config.getOvm3PoolVip())) { && c.getIp().equals(config.getOvm3PoolVip())) {
members.add(c.getIp()); members.add(c.getIp());
} }
} else { } else {
LOGGER.warn(c.getIp() + " noticed master " LOGGER.warn(c.getIp() + " noticed primary "
+ config.getOvm3PoolVip() + " is not part of pool"); + config.getOvm3PoolVip() + " is not part of pool");
return false; return false;
} }
@ -306,7 +306,7 @@ public class Ovm3StoragePool {
try { try {
Pool pool = new Pool(c); Pool pool = new Pool(c);
pool.leaveServerPool(cmd.getPool().getUuid()); pool.leaveServerPool(cmd.getPool().getUuid());
/* also connect to the master and update the pool list ? */ /* also connect to the primary and update the pool list ? */
} catch (Ovm3ResourceException e) { } catch (Ovm3ResourceException e) {
LOGGER.debug( LOGGER.debug(
"Delete storage pool on host " "Delete storage pool on host "
@ -448,8 +448,8 @@ public class Ovm3StoragePool {
GlobalLock lock = GlobalLock.getInternLock("prepare.systemvm"); GlobalLock lock = GlobalLock.getInternLock("prepare.systemvm");
try { try {
/* double check */ /* double check */
if (config.getAgentHasMaster() && config.getAgentInOvm3Pool()) { if (config.getAgentHasPrimary() && config.getAgentInOvm3Pool()) {
LOGGER.debug("Skip systemvm iso copy, leave it to the master"); LOGGER.debug("Skip systemvm iso copy, leave it to the primary");
return; return;
} }
if (lock.lock(3600)) { if (lock.lock(3600)) {
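
The hunk above uses CloudStack's GlobalLock to make sure only one host copies the systemvm ISO, and hosts that already see a primary skip the copy entirely. A sketch of the same single-writer pattern using a plain java.util.concurrent lock in place of GlobalLock (the 3600-second timeout comes from the hunk, the work body is illustrative):

```
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class SystemVmIsoGuard {
    private static final ReentrantLock PREPARE_SYSTEMVM = new ReentrantLock();

    static void copyIsoOnce(boolean hasPrimary, boolean inPool) throws InterruptedException {
        if (hasPrimary && inPool) {
            return;                       // leave the copy to the primary
        }
        if (PREPARE_SYSTEMVM.tryLock(3600, TimeUnit.SECONDS)) {
            try {
                // copy the systemvm ISO exactly once
            } finally {
                PREPARE_SYSTEMVM.unlock();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        copyIsoOnce(false, false);
    }
}
```
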


@ -71,8 +71,8 @@ public class LinuxTest {
+ "<Registered_IP>192.168.1.64</Registered_IP>" + "<Registered_IP>192.168.1.64</Registered_IP>"
+ "<Node_Number>1</Node_Number>" + "<Node_Number>1</Node_Number>"
+ "<Server_Roles>xen,utility</Server_Roles>" + "<Server_Roles>xen,utility</Server_Roles>"
+ "<Is_Current_Master>true</Is_Current_Master>" + "<Is_Primary>true</Is_Primary>"
+ "<Master_Virtual_Ip>192.168.1.230</Master_Virtual_Ip>" + "<Primary_Virtual_Ip>192.168.1.230</Primary_Virtual_Ip>"
+ "<Manager_Core_API_Version>3.2.1.516</Manager_Core_API_Version>" + "<Manager_Core_API_Version>3.2.1.516</Manager_Core_API_Version>"
+ "<Membership_State>Pooled</Membership_State>" + "<Membership_State>Pooled</Membership_State>"
+ "<Cluster_State>Offline</Cluster_State>" + "<Cluster_State>Offline</Cluster_State>"


@ -46,9 +46,9 @@ public class PoolTest {
+ " <Pool_Alias>" + " <Pool_Alias>"
+ ALIAS + ALIAS
+ "</Pool_Alias>" + "</Pool_Alias>"
+ " <Master_Virtual_Ip>" + " <Primary_Virtual_Ip>"
+ VIP + VIP
+ "</Master_Virtual_Ip>" + "</Primary_Virtual_Ip>"
+ " <Member_List>" + " <Member_List>"
+ " <Member>" + " <Member>"
+ " <Registered_IP>" + " <Registered_IP>"
@ -78,7 +78,7 @@ public class PoolTest {
results.basicStringTest(pool.getPoolId(), UUID); results.basicStringTest(pool.getPoolId(), UUID);
results.basicStringTest(pool.getPoolId(), UUID); results.basicStringTest(pool.getPoolId(), UUID);
results.basicStringTest(pool.getPoolAlias(), ALIAS); results.basicStringTest(pool.getPoolAlias(), ALIAS);
results.basicStringTest(pool.getPoolMasterVip(), VIP); results.basicStringTest(pool.getPoolPrimaryVip(), VIP);
results.basicBooleanTest(pool.getPoolMemberList().contains(IP)); results.basicBooleanTest(pool.getPoolMemberList().contains(IP));
results.basicBooleanTest(pool.getPoolMemberList().contains(IP2)); results.basicBooleanTest(pool.getPoolMemberList().contains(IP2));
} }
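
These tests pin the renamed XML elements. For reference, the renamed <Primary_Virtual_Ip> element can be extracted with the JDK's own XPath API; this standalone snippet only mirrors what Pool.getPoolPrimaryVip() is expected to return and is not the project's parser:

```
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class PrimaryVipParse {
    public static void main(String[] args) throws Exception {
        String xml = "<Server_Pool>"
                + "<Pool_Alias>Pool 0</Pool_Alias>"
                + "<Primary_Virtual_Ip>192.168.1.230</Primary_Virtual_Ip>"
                + "</Server_Pool>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        String vip = XPathFactory.newInstance().newXPath()
                .evaluate("/Server_Pool/Primary_Virtual_Ip", doc);
        System.out.println(vip);   // 192.168.1.230
    }
}
```
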


@ -45,7 +45,7 @@ public class Ovm3ConfigurationTest {
params.put("password", "unknown"); params.put("password", "unknown");
params.put("username", "root"); params.put("username", "root");
params.put("pool", "a9c1219d-817d-4242-b23e-2607801c79d5"); params.put("pool", "a9c1219d-817d-4242-b23e-2607801c79d5");
params.put("ismaster", "false"); params.put("isprimary", "false");
params.put("storage.network.device", "xenbr0"); params.put("storage.network.device", "xenbr0");
params.put("Host.OS.Version", "5.7"); params.put("Host.OS.Version", "5.7");
params.put("xenserver.nics.max", "7"); params.put("xenserver.nics.max", "7");
@ -64,7 +64,7 @@ public class Ovm3ConfigurationTest {
params.put("ip", "192.168.1.64"); params.put("ip", "192.168.1.64");
params.put("guid", "19e5f1e7-22f4-3b6d-8d41-c82f89c65295"); params.put("guid", "19e5f1e7-22f4-3b6d-8d41-c82f89c65295");
params.put("ovm3vip", "192.168.1.230"); params.put("ovm3vip", "192.168.1.230");
params.put("hasmaster", "true"); params.put("hasprimary", "true");
params.put("guest.network.device", "xenbr0"); params.put("guest.network.device", "xenbr0");
params.put("cluster", "1"); params.put("cluster", "1");
params.put("xenserver.heartbeat.timeout", "120"); params.put("xenserver.heartbeat.timeout", "120");


@ -192,9 +192,9 @@ public class Ovm3HypervisorSupportTest {
} }
@Test @Test
public void masterCheckTest() throws ConfigurationException { public void primaryCheckTest() throws ConfigurationException {
con = prepare(); con = prepare();
// System.out.println(hypervisor.masterCheck()); // System.out.println(hypervisor.primaryCheck());
} }
@Test @Test


@ -254,7 +254,7 @@ try:
for node in poolDom.getElementsByTagName('Server_Pool'): for node in poolDom.getElementsByTagName('Server_Pool'):
id = node.getElementsByTagName('Unique_Id')[0].firstChild.nodeValue id = node.getElementsByTagName('Unique_Id')[0].firstChild.nodeValue
alias = node.getElementsByTagName('Pool_Alias')[0].firstChild.nodeValue alias = node.getElementsByTagName('Pool_Alias')[0].firstChild.nodeValue
mvip = node.getElementsByTagName('Master_Virtual_Ip')[0].firstChild.nodeValue mvip = node.getElementsByTagName('Primary_Virtual_Ip')[0].firstChild.nodeValue
print "pool: %s, %s, %s" % (id, mvip, alias) print "pool: %s, %s, %s" % (id, mvip, alias)
members = node.getElementsByTagName('Member') members = node.getElementsByTagName('Member')
for member in members: for member in members:


@ -68,14 +68,14 @@ def is_it_up(host, port):
print "host: %s:%s UP" % (host, port) print "host: %s:%s UP" % (host, port)
return True return True
# hmm master actions don't apply to a slave # hmm primary actions don't apply to a secondary
master="192.168.1.161" primary="192.168.1.161"
port=8899 port=8899
user = "oracle" user = "oracle"
password = "test123" password = "test123"
auth = "%s:%s" % (user, password) auth = "%s:%s" % (user, password)
server = getCon(auth, 'localhost', port) server = getCon(auth, 'localhost', port)
mserver = getCon(auth, master, port) mserver = getCon(auth, primary, port)
poolNode=True poolNode=True
interface = "c0a80100" interface = "c0a80100"
role='xen,utility' role='xen,utility'
@ -93,7 +93,7 @@ try:
for node in poolDom.getElementsByTagName('Server_Pool'): for node in poolDom.getElementsByTagName('Server_Pool'):
id = node.getElementsByTagName('Unique_Id')[0].firstChild.nodeValue id = node.getElementsByTagName('Unique_Id')[0].firstChild.nodeValue
alias = node.getElementsByTagName('Pool_Alias')[0].firstChild.nodeValue alias = node.getElementsByTagName('Pool_Alias')[0].firstChild.nodeValue
mvip = node.getElementsByTagName('Master_Virtual_Ip')[0].firstChild.nodeValue mvip = node.getElementsByTagName('Primary_Virtual_Ip')[0].firstChild.nodeValue
print "pool: %s, %s, %s" % (id, mvip, alias) print "pool: %s, %s, %s" % (id, mvip, alias)
members = node.getElementsByTagName('Member') members = node.getElementsByTagName('Member')
for member in members: for member in members:


@ -42,7 +42,7 @@ def getCon(host, port):
return server return server
# hmm master actions don't apply to a slave # hmm primary actions don't apply to a secondary
port = 8899 port = 8899
user = "oracle" user = "oracle"
password = "test123" password = "test123"


@ -44,14 +44,14 @@ def is_it_up(host, port):
print "host: %s:%s UP" % (host, port) print "host: %s:%s UP" % (host, port)
return True return True
# hmm master actions don't apply to a slave # hmm primary actions don't apply to a secondary
master = "192.168.1.161" primary = "192.168.1.161"
port = 8899 port = 8899
user = "oracle" user = "oracle"
password = "*******" password = "*******"
auth = "%s:%s" % (user, password) auth = "%s:%s" % (user, password)
server = ServerProxy("http://%s:%s" % ("localhost", port)) server = ServerProxy("http://%s:%s" % ("localhost", port))
mserver = ServerProxy("http://%s@%s:%s" % (auth, master, port)) mserver = ServerProxy("http://%s@%s:%s" % (auth, primary, port))
poolNode = True poolNode = True
interface = "c0a80100" interface = "c0a80100"
role = 'xen,utility' role = 'xen,utility'
@ -63,11 +63,11 @@ xserver = server
print "setting up password" print "setting up password"
server.update_agent_password(user, password) server.update_agent_password(user, password)
if (is_it_up(master, port)): if (is_it_up(primary, port)):
print "master seems to be up, slaving" print "primary seems to be up, will become secondary"
xserver = mserver xserver = mserver
else: else:
print "no master yet, will become master" print "no primary yet, will become primary"
# other mechanism must be used to make interfaces equal... # other mechanism must be used to make interfaces equal...
try: try:
@ -79,7 +79,7 @@ try:
poolfsuuid = poolid poolfsuuid = poolid
clusterid = "ba9aaf00ae5e2d72" clusterid = "ba9aaf00ae5e2d72"
mgr = "d1a749d4295041fb99854f52ea4dea97" mgr = "d1a749d4295041fb99854f52ea4dea97"
poolmvip = master poolmvip = primary
poolfsnfsbaseuuid = "6824e646-5908-48c9-ba44-bb1a8a778084" poolfsnfsbaseuuid = "6824e646-5908-48c9-ba44-bb1a8a778084"
repoid = "6824e646590848c9ba44bb1a8a778084" repoid = "6824e646590848c9ba44bb1a8a778084"
@ -114,7 +114,7 @@ try:
for node in poolDom.getElementsByTagName('Server_Pool'): for node in poolDom.getElementsByTagName('Server_Pool'):
id = node.getElementsByTagName('Unique_Id')[0].firstChild.nodeValue id = node.getElementsByTagName('Unique_Id')[0].firstChild.nodeValue
alias = node.getElementsByTagName('Pool_Alias')[0].firstChild.nodeValue alias = node.getElementsByTagName('Pool_Alias')[0].firstChild.nodeValue
mvip = node.getElementsByTagName('Master_Virtual_Ip')[0].firstChild.nodeValue mvip = node.getElementsByTagName('Primary_Virtual_Ip')[0].firstChild.nodeValue
print "pool: %s, %s, %s" % (id, mvip, alias) print "pool: %s, %s, %s" % (id, mvip, alias)
members = node.getElementsByTagName('Member') members = node.getElementsByTagName('Member')
for member in members: for member in members:
@ -127,7 +127,7 @@ try:
poolMembers.append(mip) poolMembers.append(mip)
except Error, v: except Error, v:
print "no master will become master, %s" % v print "no primary will become primary, %s" % v
if (pooled == False): if (pooled == False):
# setup the repository # setup the repository


@ -55,14 +55,14 @@ def get_ip_address(ifname):
struct.pack('256s', ifname[:15]) struct.pack('256s', ifname[:15])
)[20:24]) )[20:24])
# hmm master actions don't apply to a slave # hmm primary actions don't apply to a secondary
master = "192.168.1.161" primary = "192.168.1.161"
port = 8899 port = 8899
passw = 'test123' passw = 'test123'
user = 'oracle' user = 'oracle'
auth = "%s:%s" % (user, passw) auth = "%s:%s" % (user, passw)
server = getCon(auth, "localhost", port) server = getCon(auth, "localhost", port)
mserver = getCon(auth, master, port) mserver = getCon(auth, primary, port)
try: try:
mserver.echo("test") mserver.echo("test")
except AttributeError, v: except AttributeError, v:
@ -81,7 +81,7 @@ try:
poolalias = "Pool 0" poolalias = "Pool 0"
clusterid = "ba9aaf00ae5e2d72" clusterid = "ba9aaf00ae5e2d72"
mgr = "d1a749d4295041fb99854f52ea4dea97" mgr = "d1a749d4295041fb99854f52ea4dea97"
poolmvip = master poolmvip = primary
# primary # primary
primuuid = "7718562d872f47a7b4548f9cac4ffa3a" primuuid = "7718562d872f47a7b4548f9cac4ffa3a"
@ -119,7 +119,7 @@ try:
for node in poolDom.getElementsByTagName('Server_Pool'): for node in poolDom.getElementsByTagName('Server_Pool'):
id = node.getElementsByTagName('Unique_Id')[0].firstChild.nodeValue id = node.getElementsByTagName('Unique_Id')[0].firstChild.nodeValue
alias = node.getElementsByTagName('Pool_Alias')[0].firstChild.nodeValue alias = node.getElementsByTagName('Pool_Alias')[0].firstChild.nodeValue
mvip = node.getElementsByTagName('Master_Virtual_Ip')[0].firstChild.nodeValue mvip = node.getElementsByTagName('Primary_Virtual_Ip')[0].firstChild.nodeValue
print "pool: %s, %s, %s" % (id, mvip, alias) print "pool: %s, %s, %s" % (id, mvip, alias)
members = node.getElementsByTagName('Member') members = node.getElementsByTagName('Member')
for member in members: for member in members:
@ -134,10 +134,10 @@ try:
# if (pooled == False): # if (pooled == False):
try: try:
if (poolCount == 0): if (poolCount == 0):
print "master" print "primary"
# check if a pool exists already if not create # check if a pool exists already if not create
# pool if so add us to the pool # pool if so add us to the pool
print server.configure_virtual_ip(master, ip) print server.configure_virtual_ip(primary, ip)
print server.create_pool_filesystem( print server.create_pool_filesystem(
fstype, fstype,
fsmntpoint, fsmntpoint,
@ -157,7 +157,7 @@ try:
) )
else: else:
try: try:
print "slave" print "secondary"
print server.join_server_pool(poolalias, print server.join_server_pool(poolalias,
primuuid, primuuid,
poolmvip, poolmvip,
@ -174,7 +174,7 @@ try:
# con = getCon(auth, node, port) # con = getCon(auth, node, port)
# print con.set_pool_member_ip_list(nodes); # print con.set_pool_member_ip_list(nodes);
print mserver.dispatch("http://%s@%s:%s/api/3" % (auth, node, port), "set_pool_member_ip_list", nodes) print mserver.dispatch("http://%s@%s:%s/api/3" % (auth, node, port), "set_pool_member_ip_list", nodes)
# print server.configure_virtual_ip(master, ip) # print server.configure_virtual_ip(primary, ip)
except Error, e: except Error, e:
print "something went wrong: %s" % (e) print "something went wrong: %s" % (e)
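
The provisioning scripts above all make the same bootstrap decision: the first host up claims the VIP and creates the pool filesystem, later hosts join the existing pool as secondaries. A sketch of that decision in Java for contrast; the PoolClient interface is hypothetical and only mirrors the XML-RPC calls the scripts issue:

```
public class PoolBootstrap {
    // hypothetical stand-in for the agent's XML-RPC surface
    interface PoolClient {
        void configureVirtualIp(String vip, String ownIp);
        void createPoolFilesystem(String alias, String primaryUuid, String vip);
        void joinServerPool(String alias, String primaryUuid, String vip);
    }

    static void bootstrap(PoolClient client, int poolCount,
                          String alias, String primaryUuid, String vip, String ownIp) {
        if (poolCount == 0) {
            // no primary yet: claim the VIP, then create the pool filesystem
            client.configureVirtualIp(vip, ownIp);
            client.createPoolFilesystem(alias, primaryUuid, vip);
        } else {
            // a primary already exists: join its pool as a secondary
            client.joinServerPool(alias, primaryUuid, vip);
        }
    }

    public static void main(String[] args) {
        PoolClient logOnly = new PoolClient() {
            public void configureVirtualIp(String vip, String ownIp) { System.out.println("vip " + vip); }
            public void createPoolFilesystem(String alias, String uuid, String vip) { System.out.println("create " + alias); }
            public void joinServerPool(String alias, String uuid, String vip) { System.out.println("join " + alias); }
        };
        bootstrap(logOnly, 0, "Pool 0", "7718562d872f47a7b4548f9cac4ffa3a", "192.168.1.230", "192.168.1.64");
    }
}
```
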


@ -261,9 +261,9 @@ public class MockVmManagerImpl extends ManagerBase implements MockVmManager {
final MockVm vm = _mockVmDao.findByVmName(router_name); final MockVm vm = _mockVmDao.findByVmName(router_name);
final String args = vm.getBootargs(); final String args = vm.getBootargs();
if (args.indexOf("router_pr=100") > 0) { if (args.indexOf("router_pr=100") > 0) {
s_logger.debug("Router priority is for MASTER"); s_logger.debug("Router priority is for PRIMARY");
final CheckRouterAnswer ans = new CheckRouterAnswer(cmd, "Status: MASTER", true); final CheckRouterAnswer ans = new CheckRouterAnswer(cmd, "Status: PRIMARY", true);
ans.setState(VirtualRouter.RedundantState.MASTER); ans.setState(VirtualRouter.RedundantState.PRIMARY);
return ans; return ans;
} else { } else {
s_logger.debug("Router priority is for BACKUP"); s_logger.debug("Router priority is for BACKUP");
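
The mock manager derives the redundant router state from the boot arguments: priority 100 marks the PRIMARY, everything else reports BACKUP. The check reduces to a one-line predicate, sketched here outside the mock's Answer plumbing (the original uses args.indexOf(...) > 0, so a marker at the very start of the string would not match):

```
public class RouterState {
    static String redundantState(String bootArgs) {
        // "router_pr=100" in the bootargs marks the primary redundant router
        return bootArgs != null && bootArgs.contains("router_pr=100") ? "PRIMARY" : "BACKUP";
    }

    public static void main(String[] args) {
        System.out.println(redundantState("name=r-4-VM router_pr=100"));   // PRIMARY
    }
}
```
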


@ -119,7 +119,7 @@ public interface KubernetesCluster extends ControlledEntity, com.cloud.utils.fsm
long getNetworkId(); long getNetworkId();
long getDomainId(); long getDomainId();
long getAccountId(); long getAccountId();
long getMasterNodeCount(); long getControlNodeCount();
long getNodeCount(); long getNodeCount();
long getTotalNodeCount(); long getTotalNodeCount();
String getKeyPair(); String getKeyPair();


@ -583,7 +583,8 @@ public class KubernetesClusterManagerImpl extends ManagerBase implements Kuberne
DataCenterVO zone = ApiDBUtils.findZoneById(kubernetesCluster.getZoneId()); DataCenterVO zone = ApiDBUtils.findZoneById(kubernetesCluster.getZoneId());
response.setZoneId(zone.getUuid()); response.setZoneId(zone.getUuid());
response.setZoneName(zone.getName()); response.setZoneName(zone.getName());
response.setMasterNodes(kubernetesCluster.getMasterNodeCount()); response.setMasterNodes(kubernetesCluster.getControlNodeCount());
response.setControlNodes(kubernetesCluster.getControlNodeCount());
response.setClusterSize(kubernetesCluster.getNodeCount()); response.setClusterSize(kubernetesCluster.getNodeCount());
VMTemplateVO template = ApiDBUtils.findTemplateById(kubernetesCluster.getTemplateId()); VMTemplateVO template = ApiDBUtils.findTemplateById(kubernetesCluster.getTemplateId());
response.setTemplateId(template.getUuid()); response.setTemplateId(template.getUuid());
@ -651,7 +652,7 @@ public class KubernetesClusterManagerImpl extends ManagerBase implements Kuberne
final Account owner = accountService.getActiveAccountById(cmd.getEntityOwnerId()); final Account owner = accountService.getActiveAccountById(cmd.getEntityOwnerId());
final Long networkId = cmd.getNetworkId(); final Long networkId = cmd.getNetworkId();
final String sshKeyPair = cmd.getSSHKeyPairName(); final String sshKeyPair = cmd.getSSHKeyPairName();
final Long masterNodeCount = cmd.getMasterNodes(); final Long controlNodeCount = cmd.getControlNodes();
final Long clusterSize = cmd.getClusterSize(); final Long clusterSize = cmd.getClusterSize();
final String dockerRegistryUserName = cmd.getDockerRegistryUserName(); final String dockerRegistryUserName = cmd.getDockerRegistryUserName();
final String dockerRegistryPassword = cmd.getDockerRegistryPassword(); final String dockerRegistryPassword = cmd.getDockerRegistryPassword();
@ -664,8 +665,8 @@ public class KubernetesClusterManagerImpl extends ManagerBase implements Kuberne
throw new InvalidParameterValueException("Invalid name for the Kubernetes cluster name:" + name); throw new InvalidParameterValueException("Invalid name for the Kubernetes cluster name:" + name);
} }
if (masterNodeCount < 1 || masterNodeCount > 100) { if (controlNodeCount < 1 || controlNodeCount > 100) {
throw new InvalidParameterValueException("Invalid cluster master nodes count: " + masterNodeCount); throw new InvalidParameterValueException("Invalid cluster control nodes count: " + controlNodeCount);
} }
if (clusterSize < 1 || clusterSize > 100) { if (clusterSize < 1 || clusterSize > 100) {
@ -695,7 +696,7 @@ public class KubernetesClusterManagerImpl extends ManagerBase implements Kuberne
if (clusterKubernetesVersion.getZoneId() != null && !clusterKubernetesVersion.getZoneId().equals(zone.getId())) { if (clusterKubernetesVersion.getZoneId() != null && !clusterKubernetesVersion.getZoneId().equals(zone.getId())) {
throw new InvalidParameterValueException(String.format("Kubernetes version ID: %s is not available for zone ID: %s", clusterKubernetesVersion.getUuid(), zone.getUuid())); throw new InvalidParameterValueException(String.format("Kubernetes version ID: %s is not available for zone ID: %s", clusterKubernetesVersion.getUuid(), zone.getUuid()));
} }
if (masterNodeCount > 1 ) { if (controlNodeCount > 1 ) {
try { try {
if (KubernetesVersionManagerImpl.compareSemanticVersions(clusterKubernetesVersion.getSemanticVersion(), MIN_KUBERNETES_VERSION_HA_SUPPORT) < 0) { if (KubernetesVersionManagerImpl.compareSemanticVersions(clusterKubernetesVersion.getSemanticVersion(), MIN_KUBERNETES_VERSION_HA_SUPPORT) < 0) {
throw new InvalidParameterValueException(String.format("HA support is available only for Kubernetes version %s and above. Given version ID: %s is %s", MIN_KUBERNETES_VERSION_HA_SUPPORT, clusterKubernetesVersion.getUuid(), clusterKubernetesVersion.getSemanticVersion())); throw new InvalidParameterValueException(String.format("HA support is available only for Kubernetes version %s and above. Given version ID: %s is %s", MIN_KUBERNETES_VERSION_HA_SUPPORT, clusterKubernetesVersion.getUuid(), clusterKubernetesVersion.getSemanticVersion()));
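
Both validations above are small and worth seeing together: the control node count must stay in 1..100, and asking for more than one control node (HA) requires a minimum Kubernetes version. A self-contained sketch, with a naive comparator standing in for KubernetesVersionManagerImpl.compareSemanticVersions:

```
public class ControlNodeValidation {
    static int compareSemanticVersions(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int x = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int y = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (x != y) {
                return Integer.compare(x, y);
            }
        }
        return 0;
    }

    static void validate(long controlNodeCount, String version, String minHaVersion) {
        if (controlNodeCount < 1 || controlNodeCount > 100) {
            throw new IllegalArgumentException("Invalid cluster control nodes count: " + controlNodeCount);
        }
        if (controlNodeCount > 1 && compareSemanticVersions(version, minHaVersion) < 0) {
            throw new IllegalArgumentException("HA support needs Kubernetes " + minHaVersion + " or above");
        }
    }

    public static void main(String[] args) {
        validate(3, "1.18.4", "1.16.0");   // passes: HA with a new enough version
    }
}
```
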
@ -765,14 +766,14 @@ public class KubernetesClusterManagerImpl extends ManagerBase implements Kuberne
} }
} }
private Network getKubernetesClusterNetworkIfMissing(final String clusterName, final DataCenter zone, final Account owner, final int masterNodesCount, private Network getKubernetesClusterNetworkIfMissing(final String clusterName, final DataCenter zone, final Account owner, final int controlNodesCount,
final int nodesCount, final String externalLoadBalancerIpAddress, final Long networkId) throws CloudRuntimeException { final int nodesCount, final String externalLoadBalancerIpAddress, final Long networkId) throws CloudRuntimeException {
Network network = null; Network network = null;
if (networkId != null) { if (networkId != null) {
network = networkDao.findById(networkId); network = networkDao.findById(networkId);
if (Network.GuestType.Isolated.equals(network.getGuestType())) { if (Network.GuestType.Isolated.equals(network.getGuestType())) {
if (kubernetesClusterDao.listByNetworkId(network.getId()).isEmpty()) { if (kubernetesClusterDao.listByNetworkId(network.getId()).isEmpty()) {
if (!validateNetwork(network, masterNodesCount + nodesCount)) { if (!validateNetwork(network, controlNodesCount + nodesCount)) {
throw new InvalidParameterValueException(String.format("Network ID: %s is not suitable for Kubernetes cluster", network.getUuid())); throw new InvalidParameterValueException(String.format("Network ID: %s is not suitable for Kubernetes cluster", network.getUuid()));
} }
networkModel.checkNetworkPermissions(owner, network); networkModel.checkNetworkPermissions(owner, network);
@ -780,8 +781,8 @@ public class KubernetesClusterManagerImpl extends ManagerBase implements Kuberne
throw new InvalidParameterValueException(String.format("Network ID: %s is already under use by another Kubernetes cluster", network.getUuid())); throw new InvalidParameterValueException(String.format("Network ID: %s is already under use by another Kubernetes cluster", network.getUuid()));
} }
} else if (Network.GuestType.Shared.equals(network.getGuestType())) { } else if (Network.GuestType.Shared.equals(network.getGuestType())) {
if (masterNodesCount > 1 && Strings.isNullOrEmpty(externalLoadBalancerIpAddress)) { if (controlNodesCount > 1 && Strings.isNullOrEmpty(externalLoadBalancerIpAddress)) {
throw new InvalidParameterValueException(String.format("Multi-master, HA Kubernetes cluster with %s network ID: %s needs an external load balancer IP address. %s parameter can be used", throw new InvalidParameterValueException(String.format("Multi-control nodes, HA Kubernetes cluster with %s network ID: %s needs an external load balancer IP address. %s parameter can be used",
network.getGuestType().toString(), network.getUuid(), ApiConstants.EXTERNAL_LOAD_BALANCER_IP_ADDRESS)); network.getGuestType().toString(), network.getUuid(), ApiConstants.EXTERNAL_LOAD_BALANCER_IP_ADDRESS));
} }
} }
@ -1005,9 +1006,9 @@ public class KubernetesClusterManagerImpl extends ManagerBase implements Kuberne
validateKubernetesClusterCreateParameters(cmd); validateKubernetesClusterCreateParameters(cmd);
final DataCenter zone = dataCenterDao.findById(cmd.getZoneId()); final DataCenter zone = dataCenterDao.findById(cmd.getZoneId());
final long masterNodeCount = cmd.getMasterNodes(); final long controlNodeCount = cmd.getControlNodes();
final long clusterSize = cmd.getClusterSize(); final long clusterSize = cmd.getClusterSize();
final long totalNodeCount = masterNodeCount + clusterSize; final long totalNodeCount = controlNodeCount + clusterSize;
final ServiceOffering serviceOffering = serviceOfferingDao.findById(cmd.getServiceOfferingId()); final ServiceOffering serviceOffering = serviceOfferingDao.findById(cmd.getServiceOfferingId());
final Account owner = accountService.getActiveAccountById(cmd.getEntityOwnerId()); final Account owner = accountService.getActiveAccountById(cmd.getEntityOwnerId());
final KubernetesSupportedVersion clusterKubernetesVersion = kubernetesSupportedVersionDao.findById(cmd.getKubernetesVersionId()); final KubernetesSupportedVersion clusterKubernetesVersion = kubernetesSupportedVersionDao.findById(cmd.getKubernetesVersionId());
@ -1022,17 +1023,17 @@ public class KubernetesClusterManagerImpl extends ManagerBase implements Kuberne
logAndThrow(Level.ERROR, String.format("Creating Kubernetes cluster failed due to error while finding suitable deployment plan for cluster in zone : %s", zone.getName())); logAndThrow(Level.ERROR, String.format("Creating Kubernetes cluster failed due to error while finding suitable deployment plan for cluster in zone : %s", zone.getName()));
} }
final Network defaultNetwork = getKubernetesClusterNetworkIfMissing(cmd.getName(), zone, owner, (int)masterNodeCount, (int)clusterSize, cmd.getExternalLoadBalancerIpAddress(), cmd.getNetworkId()); final Network defaultNetwork = getKubernetesClusterNetworkIfMissing(cmd.getName(), zone, owner, (int)controlNodeCount, (int)clusterSize, cmd.getExternalLoadBalancerIpAddress(), cmd.getNetworkId());
final VMTemplateVO finalTemplate = getKubernetesServiceTemplate(deployDestination.getCluster().getHypervisorType()); final VMTemplateVO finalTemplate = getKubernetesServiceTemplate(deployDestination.getCluster().getHypervisorType());
final long cores = serviceOffering.getCpu() * (masterNodeCount + clusterSize); final long cores = serviceOffering.getCpu() * (controlNodeCount + clusterSize);
final long memory = serviceOffering.getRamSize() * (masterNodeCount + clusterSize); final long memory = serviceOffering.getRamSize() * (controlNodeCount + clusterSize);
final KubernetesClusterVO cluster = Transaction.execute(new TransactionCallback<KubernetesClusterVO>() { final KubernetesClusterVO cluster = Transaction.execute(new TransactionCallback<KubernetesClusterVO>() {
@Override @Override
public KubernetesClusterVO doInTransaction(TransactionStatus status) { public KubernetesClusterVO doInTransaction(TransactionStatus status) {
KubernetesClusterVO newCluster = new KubernetesClusterVO(cmd.getName(), cmd.getDisplayName(), zone.getId(), clusterKubernetesVersion.getId(), KubernetesClusterVO newCluster = new KubernetesClusterVO(cmd.getName(), cmd.getDisplayName(), zone.getId(), clusterKubernetesVersion.getId(),
serviceOffering.getId(), finalTemplate.getId(), defaultNetwork.getId(), owner.getDomainId(), serviceOffering.getId(), finalTemplate.getId(), defaultNetwork.getId(), owner.getDomainId(),
owner.getAccountId(), masterNodeCount, clusterSize, KubernetesCluster.State.Created, cmd.getSSHKeyPairName(), cores, memory, cmd.getNodeRootDiskSize(), ""); owner.getAccountId(), controlNodeCount, clusterSize, KubernetesCluster.State.Created, cmd.getSSHKeyPairName(), cores, memory, cmd.getNodeRootDiskSize(), "");
kubernetesClusterDao.persist(newCluster); kubernetesClusterDao.persist(newCluster);
return newCluster; return newCluster;
} }
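
The capacity bookkeeping above is simple multiplication, but it is where the renamed count matters operationally: reserved cores and memory scale with the full node count, control nodes included. Sketched in isolation:

```
public class ClusterCapacity {
    static long reservedCores(int cpuPerNode, long controlNodeCount, long clusterSize) {
        return cpuPerNode * (controlNodeCount + clusterSize);
    }

    static long reservedRamMb(int ramMbPerNode, long controlNodeCount, long clusterSize) {
        return ramMbPerNode * (controlNodeCount + clusterSize);
    }

    public static void main(String[] args) {
        // 2-CPU / 2048 MB offering, 3 control nodes + 5 workers
        System.out.println(reservedCores(2, 3, 5));    // 16
        System.out.println(reservedRamMb(2048, 3, 5)); // 16384
    }
}
```
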
@ -1318,7 +1319,7 @@ public class KubernetesClusterManagerImpl extends ManagerBase implements Kuberne
/* Kubernetes cluster scanner checks if the Kubernetes cluster is in desired state. If it detects Kubernetes cluster /* Kubernetes cluster scanner checks if the Kubernetes cluster is in desired state. If it detects Kubernetes cluster
is not in desired state, it will trigger an event and marks the Kubernetes cluster to be 'Alert' state. For e.g a is not in desired state, it will trigger an event and marks the Kubernetes cluster to be 'Alert' state. For e.g a
Kubernetes cluster in 'Running' state should mean all the cluster of node VM's in the custer should be running and Kubernetes cluster in 'Running' state should mean all the cluster of node VM's in the custer should be running and
number of the node VM's should be of cluster size, and the master node VM's is running. It is possible due to number of the node VM's should be of cluster size, and the control node VM's is running. It is possible due to
out of band changes by user or hosts going down, we may end up one or more VM's in stopped state. in which case out of band changes by user or hosts going down, we may end up one or more VM's in stopped state. in which case
scanner detects these changes and marks the cluster in 'Alert' state. Similarly cluster in 'Stopped' state means scanner detects these changes and marks the cluster in 'Alert' state. Similarly cluster in 'Stopped' state means
all the cluster VM's are in stopped state any mismatch in states should get picked up by Kubernetes cluster and all the cluster VM's are in stopped state any mismatch in states should get picked up by Kubernetes cluster and
@ -1442,7 +1443,7 @@ public class KubernetesClusterManagerImpl extends ManagerBase implements Kuberne
boolean isClusterVMsInDesiredState(KubernetesCluster kubernetesCluster, VirtualMachine.State state) { boolean isClusterVMsInDesiredState(KubernetesCluster kubernetesCluster, VirtualMachine.State state) {
List<KubernetesClusterVmMapVO> clusterVMs = kubernetesClusterVmMapDao.listByClusterId(kubernetesCluster.getId()); List<KubernetesClusterVmMapVO> clusterVMs = kubernetesClusterVmMapDao.listByClusterId(kubernetesCluster.getId());
// check cluster is running at desired capacity include master nodes as well // check cluster is running at desired capacity include control nodes as well
if (clusterVMs.size() < kubernetesCluster.getTotalNodeCount()) { if (clusterVMs.size() < kubernetesCluster.getTotalNodeCount()) {
if (LOGGER.isDebugEnabled()) { if (LOGGER.isDebugEnabled()) {
LOGGER.debug(String.format("Found only %d VMs in the Kubernetes cluster ID: %s while expected %d VMs to be in state: %s", LOGGER.debug(String.format("Found only %d VMs in the Kubernetes cluster ID: %s while expected %d VMs to be in state: %s",
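
The scanner's check is two conditions: every expected VM (control nodes included) must exist, and each must be in the desired state. A minimal sketch over plain state strings, not the real VO types:

```
import java.util.List;

public class DesiredStateCheck {
    static boolean inDesiredState(List<String> vmStates, long totalNodeCount, String desired) {
        if (vmStates.size() < totalNodeCount) {
            return false;                               // a node VM is missing outright
        }
        return vmStates.stream().allMatch(desired::equals);
    }

    public static void main(String[] args) {
        System.out.println(inDesiredState(List.of("Running", "Running"), 3, "Running")); // false
    }
}
```
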


@ -69,8 +69,8 @@ public class KubernetesClusterVO implements KubernetesCluster {
@Column(name = "account_id") @Column(name = "account_id")
private long accountId; private long accountId;
@Column(name = "master_node_count") @Column(name = "control_node_count")
private long masterNodeCount; private long controlNodeCount;
@Column(name = "node_count") @Column(name = "node_count")
private long nodeCount; private long nodeCount;
@ -202,12 +202,12 @@ public class KubernetesClusterVO implements KubernetesCluster {
} }
@Override @Override
public long getMasterNodeCount() { public long getControlNodeCount() {
return masterNodeCount; return controlNodeCount;
} }
public void setMasterNodeCount(long masterNodeCount) { public void setControlNodeCount(long controlNodeCount) {
this.masterNodeCount = masterNodeCount; this.controlNodeCount = controlNodeCount;
} }
@Override @Override
@ -221,7 +221,7 @@ public class KubernetesClusterVO implements KubernetesCluster {
@Override @Override
public long getTotalNodeCount() { public long getTotalNodeCount() {
return this.masterNodeCount + this.nodeCount; return this.controlNodeCount + this.nodeCount;
} }
@Override @Override
@ -308,7 +308,7 @@ public class KubernetesClusterVO implements KubernetesCluster {
} }
public KubernetesClusterVO(String name, String description, long zoneId, long kubernetesVersionId, long serviceOfferingId, long templateId, public KubernetesClusterVO(String name, String description, long zoneId, long kubernetesVersionId, long serviceOfferingId, long templateId,
long networkId, long domainId, long accountId, long masterNodeCount, long nodeCount, State state, long networkId, long domainId, long accountId, long controlNodeCount, long nodeCount, State state,
String keyPair, long cores, long memory, Long nodeRootDiskSize, String endpoint) { String keyPair, long cores, long memory, Long nodeRootDiskSize, String endpoint) {
this.uuid = UUID.randomUUID().toString(); this.uuid = UUID.randomUUID().toString();
this.name = name; this.name = name;
@ -320,7 +320,7 @@ public class KubernetesClusterVO implements KubernetesCluster {
this.networkId = networkId; this.networkId = networkId;
this.domainId = domainId; this.domainId = domainId;
this.accountId = accountId; this.accountId = accountId;
this.masterNodeCount = masterNodeCount; this.controlNodeCount = controlNodeCount;
this.nodeCount = nodeCount; this.nodeCount = nodeCount;
this.state = state; this.state = state;
this.keyPair = keyPair; this.keyPair = keyPair;


@ -231,9 +231,9 @@ public class KubernetesClusterActionWorker {
}); });
} }
private UserVm fetchMasterVmIfMissing(final UserVm masterVm) { private UserVm fetchControlVmIfMissing(final UserVm controlVm) {
if (masterVm != null) { if (controlVm != null) {
return masterVm; return controlVm;
} }
List<KubernetesClusterVmMapVO> clusterVMs = kubernetesClusterVmMapDao.listByClusterId(kubernetesCluster.getId()); List<KubernetesClusterVmMapVO> clusterVMs = kubernetesClusterVmMapDao.listByClusterId(kubernetesCluster.getId());
if (CollectionUtils.isEmpty(clusterVMs)) { if (CollectionUtils.isEmpty(clusterVMs)) {
@ -248,16 +248,16 @@ public class KubernetesClusterActionWorker {
return userVmDao.findById(vmIds.get(0)); return userVmDao.findById(vmIds.get(0));
} }
protected String getMasterVmPrivateIp() { protected String getControlVmPrivateIp() {
String ip = null; String ip = null;
UserVm vm = fetchMasterVmIfMissing(null); UserVm vm = fetchControlVmIfMissing(null);
if (vm != null) { if (vm != null) {
ip = vm.getPrivateIpAddress(); ip = vm.getPrivateIpAddress();
} }
return ip; return ip;
} }
protected Pair<String, Integer> getKubernetesClusterServerIpSshPort(UserVm masterVm) { protected Pair<String, Integer> getKubernetesClusterServerIpSshPort(UserVm controlVm) {
int port = CLUSTER_NODES_DEFAULT_START_SSH_PORT; int port = CLUSTER_NODES_DEFAULT_START_SSH_PORT;
KubernetesClusterDetailsVO detail = kubernetesClusterDetailsDao.findDetail(kubernetesCluster.getId(), ApiConstants.EXTERNAL_LOAD_BALANCER_IP_ADDRESS); KubernetesClusterDetailsVO detail = kubernetesClusterDetailsDao.findDetail(kubernetesCluster.getId(), ApiConstants.EXTERNAL_LOAD_BALANCER_IP_ADDRESS);
if (detail != null && !Strings.isNullOrEmpty(detail.getValue())) { if (detail != null && !Strings.isNullOrEmpty(detail.getValue())) {
@ -283,12 +283,12 @@ public class KubernetesClusterActionWorker {
return new Pair<>(null, port); return new Pair<>(null, port);
} else if (Network.GuestType.Shared.equals(network.getGuestType())) { } else if (Network.GuestType.Shared.equals(network.getGuestType())) {
port = 22; port = 22;
masterVm = fetchMasterVmIfMissing(masterVm); controlVm = fetchControlVmIfMissing(controlVm);
if (masterVm == null) { if (controlVm == null) {
LOGGER.warn(String.format("Unable to retrieve master VM for Kubernetes cluster : %s", kubernetesCluster.getName())); LOGGER.warn(String.format("Unable to retrieve control VM for Kubernetes cluster : %s", kubernetesCluster.getName()));
return new Pair<>(null, port); return new Pair<>(null, port);
} }
return new Pair<>(masterVm.getPrivateIpAddress(), port); return new Pair<>(controlVm.getPrivateIpAddress(), port);
} }
LOGGER.warn(String.format("Unable to retrieve server IP address for Kubernetes cluster : %s", kubernetesCluster.getName())); LOGGER.warn(String.format("Unable to retrieve server IP address for Kubernetes cluster : %s", kubernetesCluster.getName()));
return new Pair<>(null, port); return new Pair<>(null, port);
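
Stripped of the lookups, the renamed endpoint resolution picks between two shapes: an isolated network exposes the cluster behind one public IP with per-node SSH ports, while a shared network reaches the control VM's own address on port 22. A sketch under those assumptions (ServerEndpoint is an illustrative type, not the project's Pair):

```
public class EndpointResolution {
    static final class ServerEndpoint {
        final String ip;
        final int port;
        ServerEndpoint(String ip, int port) { this.ip = ip; this.port = port; }
    }

    static ServerEndpoint resolve(boolean sharedNetwork, String publicIp,
                                  String controlVmIp, int defaultStartSshPort) {
        if (sharedNetwork) {
            return new ServerEndpoint(controlVmIp, 22);            // direct node access
        }
        return new ServerEndpoint(publicIp, defaultStartSshPort);  // port-forwarded access
    }

    public static void main(String[] args) {
        ServerEndpoint e = resolve(false, "203.0.113.10", "10.1.1.20", 2222);
        System.out.println(e.ip + ":" + e.port);   // 203.0.113.10:2222
    }
}
```
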


@ -124,7 +124,7 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
throw new ManagementServerException("Firewall rule for node SSH access can't be provisioned"); throw new ManagementServerException("Firewall rule for node SSH access can't be provisioned");
} }
int existingFirewallRuleSourcePortEnd = firewallRule.getSourcePortEnd(); int existingFirewallRuleSourcePortEnd = firewallRule.getSourcePortEnd();
final int scaledTotalNodeCount = clusterSize == null ? (int)kubernetesCluster.getTotalNodeCount() : (int)(clusterSize + kubernetesCluster.getMasterNodeCount()); final int scaledTotalNodeCount = clusterSize == null ? (int)kubernetesCluster.getTotalNodeCount() : (int)(clusterSize + kubernetesCluster.getControlNodeCount());
// Provision new SSH firewall rules // Provision new SSH firewall rules
try { try {
provisionFirewallRules(publicIp, owner, CLUSTER_NODES_DEFAULT_START_SSH_PORT, CLUSTER_NODES_DEFAULT_START_SSH_PORT + scaledTotalNodeCount - 1); provisionFirewallRules(publicIp, owner, CLUSTER_NODES_DEFAULT_START_SSH_PORT, CLUSTER_NODES_DEFAULT_START_SSH_PORT + scaledTotalNodeCount - 1);
@ -170,7 +170,7 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
final ServiceOffering serviceOffering = newServiceOffering == null ? final ServiceOffering serviceOffering = newServiceOffering == null ?
serviceOfferingDao.findById(kubernetesCluster.getServiceOfferingId()) : newServiceOffering; serviceOfferingDao.findById(kubernetesCluster.getServiceOfferingId()) : newServiceOffering;
final Long serviceOfferingId = newServiceOffering == null ? null : serviceOffering.getId(); final Long serviceOfferingId = newServiceOffering == null ? null : serviceOffering.getId();
final long size = newSize == null ? kubernetesCluster.getTotalNodeCount() : (newSize + kubernetesCluster.getMasterNodeCount()); final long size = newSize == null ? kubernetesCluster.getTotalNodeCount() : (newSize + kubernetesCluster.getControlNodeCount());
final long cores = serviceOffering.getCpu() * size; final long cores = serviceOffering.getCpu() * size;
final long memory = serviceOffering.getRamSize() * size; final long memory = serviceOffering.getRamSize() * size;
KubernetesClusterVO kubernetesClusterVO = updateKubernetesClusterEntry(cores, memory, newSize, serviceOfferingId); KubernetesClusterVO kubernetesClusterVO = updateKubernetesClusterEntry(cores, memory, newSize, serviceOfferingId);
@ -309,7 +309,7 @@ public class KubernetesClusterScaleWorker extends KubernetesClusterResourceModif
final List<KubernetesClusterVmMapVO> originalVmList = getKubernetesClusterVMMaps(); final List<KubernetesClusterVmMapVO> originalVmList = getKubernetesClusterVMMaps();
int i = originalVmList.size() - 1; int i = originalVmList.size() - 1;
List<Long> removedVmIds = new ArrayList<>(); List<Long> removedVmIds = new ArrayList<>();
while (i >= kubernetesCluster.getMasterNodeCount() + clusterSize) { while (i >= kubernetesCluster.getControlNodeCount() + clusterSize) {
KubernetesClusterVmMapVO vmMapVO = originalVmList.get(i); KubernetesClusterVmMapVO vmMapVO = originalVmList.get(i);
UserVmVO userVM = userVmDao.findById(vmMapVO.getVmId()); UserVmVO userVM = userVmDao.findById(vmMapVO.getVmId());
if (!removeKubernetesClusterNode(publicIpAddress, sshPort, userVM, 3, 30000)) { if (!removeKubernetesClusterNode(publicIpAddress, sshPort, userVM, 3, 30000)) {
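
The scaling paths above keep recomputing the same quantity: the scaled total is the requested worker count plus the unchanged control node count, and the SSH firewall range covers one port per node starting at the default start port. In isolation:

```
public class ScaleMath {
    static long scaledTotalNodeCount(Long requestedClusterSize, long currentTotal, long controlNodeCount) {
        // null means "size unchanged", mirroring the ternary in the hunk above
        return requestedClusterSize == null ? currentTotal : requestedClusterSize + controlNodeCount;
    }

    static int[] sshPortRange(int startPort, long totalNodeCount) {
        return new int[] { startPort, (int) (startPort + totalNodeCount - 1) };
    }

    public static void main(String[] args) {
        int[] range = sshPortRange(2222, scaledTotalNodeCount(5L, 0, 2));
        System.out.println(range[0] + "-" + range[1]);   // 2222-2228
    }
}
```
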


@ -89,8 +89,8 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
return kubernetesClusterVersion; return kubernetesClusterVersion;
} }
private Pair<String, Map<Long, Network.IpAddresses>> getKubernetesMasterIpAddresses(final DataCenter zone, final Network network, final Account account) throws InsufficientAddressCapacityException { private Pair<String, Map<Long, Network.IpAddresses>> getKubernetesControlIpAddresses(final DataCenter zone, final Network network, final Account account) throws InsufficientAddressCapacityException {
String masterIp = null; String controlIp = null;
Map<Long, Network.IpAddresses> requestedIps = null; Map<Long, Network.IpAddresses> requestedIps = null;
if (Network.GuestType.Shared.equals(network.getGuestType())) { if (Network.GuestType.Shared.equals(network.getGuestType())) {
List<Long> vlanIds = new ArrayList<>(); List<Long> vlanIds = new ArrayList<>();
@ -100,16 +100,16 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
} }
PublicIp ip = ipAddressManager.getAvailablePublicIpAddressFromVlans(zone.getId(), null, account, Vlan.VlanType.DirectAttached, vlanIds,network.getId(), null, false); PublicIp ip = ipAddressManager.getAvailablePublicIpAddressFromVlans(zone.getId(), null, account, Vlan.VlanType.DirectAttached, vlanIds,network.getId(), null, false);
if (ip != null) { if (ip != null) {
masterIp = ip.getAddress().toString(); controlIp = ip.getAddress().toString();
} }
requestedIps = new HashMap<>(); requestedIps = new HashMap<>();
Ip ipAddress = ip.getAddress(); Ip ipAddress = ip.getAddress();
boolean isIp6 = ipAddress.isIp6(); boolean isIp6 = ipAddress.isIp6();
requestedIps.put(network.getId(), new Network.IpAddresses(ipAddress.isIp4() ? ip.getAddress().addr() : null, null)); requestedIps.put(network.getId(), new Network.IpAddresses(ipAddress.isIp4() ? ip.getAddress().addr() : null, null));
} else { } else {
masterIp = ipAddressManager.acquireGuestIpAddress(networkDao.findById(kubernetesCluster.getNetworkId()), null); controlIp = ipAddressManager.acquireGuestIpAddress(networkDao.findById(kubernetesCluster.getNetworkId()), null);
} }
return new Pair<>(masterIp, requestedIps); return new Pair<>(controlIp, requestedIps);
} }
private boolean isKubernetesVersionSupportsHA() { private boolean isKubernetesVersionSupportsHA() {
@ -127,10 +127,10 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
return haSupported; return haSupported;
} }
private String getKubernetesMasterConfig(final String masterIp, final String serverIp, private String getKubernetesControlConfig(final String controlIp, final String serverIp,
final String hostName, final boolean haSupported, final String hostName, final boolean haSupported,
final boolean ejectIso) throws IOException { final boolean ejectIso) throws IOException {
String k8sMasterConfig = readResourceFile("/conf/k8s-master.yml"); String k8sControlConfig = readResourceFile("/conf/k8s-control-node.yml");
final String apiServerCert = "{{ k8s_master.apiserver.crt }}"; final String apiServerCert = "{{ k8s_master.apiserver.crt }}";
final String apiServerKey = "{{ k8s_master.apiserver.key }}"; final String apiServerKey = "{{ k8s_master.apiserver.key }}";
final String caCert = "{{ k8s_master.ca.crt }}"; final String caCert = "{{ k8s_master.ca.crt }}";
@ -139,8 +139,8 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
final String clusterInitArgsKey = "{{ k8s_master.cluster.initargs }}"; final String clusterInitArgsKey = "{{ k8s_master.cluster.initargs }}";
final String ejectIsoKey = "{{ k8s.eject.iso }}"; final String ejectIsoKey = "{{ k8s.eject.iso }}";
final List<String> addresses = new ArrayList<>(); final List<String> addresses = new ArrayList<>();
addresses.add(masterIp); addresses.add(controlIp);
if (!serverIp.equals(masterIp)) { if (!serverIp.equals(controlIp)) {
addresses.add(serverIp); addresses.add(serverIp);
} }
final Certificate certificate = caManager.issueCertificate(null, Arrays.asList(hostName, "kubernetes", final Certificate certificate = caManager.issueCertificate(null, Arrays.asList(hostName, "kubernetes",
@ -149,9 +149,9 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
final String tlsClientCert = CertUtils.x509CertificateToPem(certificate.getClientCertificate()); final String tlsClientCert = CertUtils.x509CertificateToPem(certificate.getClientCertificate());
final String tlsPrivateKey = CertUtils.privateKeyToPem(certificate.getPrivateKey()); final String tlsPrivateKey = CertUtils.privateKeyToPem(certificate.getPrivateKey());
final String tlsCaCert = CertUtils.x509CertificatesToPem(certificate.getCaCertificates()); final String tlsCaCert = CertUtils.x509CertificatesToPem(certificate.getCaCertificates());
k8sMasterConfig = k8sMasterConfig.replace(apiServerCert, tlsClientCert.replace("\n", "\n ")); k8sControlConfig = k8sControlConfig.replace(apiServerCert, tlsClientCert.replace("\n", "\n "));
k8sMasterConfig = k8sMasterConfig.replace(apiServerKey, tlsPrivateKey.replace("\n", "\n ")); k8sControlConfig = k8sControlConfig.replace(apiServerKey, tlsPrivateKey.replace("\n", "\n "));
k8sMasterConfig = k8sMasterConfig.replace(caCert, tlsCaCert.replace("\n", "\n ")); k8sControlConfig = k8sControlConfig.replace(caCert, tlsCaCert.replace("\n", "\n "));
String pubKey = "- \"" + configurationDao.getValue("ssh.publickey") + "\""; String pubKey = "- \"" + configurationDao.getValue("ssh.publickey") + "\"";
String sshKeyPair = kubernetesCluster.getKeyPair(); String sshKeyPair = kubernetesCluster.getKeyPair();
if (!Strings.isNullOrEmpty(sshKeyPair)) { if (!Strings.isNullOrEmpty(sshKeyPair)) {
@ -160,8 +160,8 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
pubKey += "\n - \"" + sshkp.getPublicKey() + "\""; pubKey += "\n - \"" + sshkp.getPublicKey() + "\"";
} }
} }
k8sMasterConfig = k8sMasterConfig.replace(sshPubKey, pubKey); k8sControlConfig = k8sControlConfig.replace(sshPubKey, pubKey);
k8sMasterConfig = k8sMasterConfig.replace(clusterToken, KubernetesClusterUtil.generateClusterToken(kubernetesCluster)); k8sControlConfig = k8sControlConfig.replace(clusterToken, KubernetesClusterUtil.generateClusterToken(kubernetesCluster));
String initArgs = ""; String initArgs = "";
if (haSupported) { if (haSupported) {
initArgs = String.format("--control-plane-endpoint %s:%d --upload-certs --certificate-key %s ", initArgs = String.format("--control-plane-endpoint %s:%d --upload-certs --certificate-key %s ",
@ -171,55 +171,55 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
} }
initArgs += String.format("--apiserver-cert-extra-sans=%s", serverIp); initArgs += String.format("--apiserver-cert-extra-sans=%s", serverIp);
initArgs += String.format(" --kubernetes-version=%s", getKubernetesClusterVersion().getSemanticVersion()); initArgs += String.format(" --kubernetes-version=%s", getKubernetesClusterVersion().getSemanticVersion());
k8sMasterConfig = k8sMasterConfig.replace(clusterInitArgsKey, initArgs); k8sControlConfig = k8sControlConfig.replace(clusterInitArgsKey, initArgs);
k8sMasterConfig = k8sMasterConfig.replace(ejectIsoKey, String.valueOf(ejectIso)); k8sControlConfig = k8sControlConfig.replace(ejectIsoKey, String.valueOf(ejectIso));
return k8sMasterConfig; return k8sControlConfig;
} }
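
All those replace() calls implement one small idea: the cloud-init template (now k8s-control-node.yml) carries {{ ... }} markers that are substituted with runtime values before being base64-encoded into user data. The substitution itself, sketched generically; the key/value pairs here are illustrative, the real keys being the k8s_master.* markers shown above:

```
import java.util.LinkedHashMap;
import java.util.Map;

public class TemplateRender {
    static String render(String template, Map<String, String> values) {
        String out = template;
        for (Map.Entry<String, String> e : values.entrySet()) {
            out = out.replace(e.getKey(), e.getValue());   // plain literal substitution
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> values = new LinkedHashMap<>();
        values.put("{{ k8s_master.join_ip }}", "10.1.1.10");
        values.put("{{ k8s.eject.iso }}", "false");
        System.out.println(render("join: {{ k8s_master.join_ip }} eject: {{ k8s.eject.iso }}", values));
    }
}
```
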
private UserVm createKubernetesMaster(final Network network, String serverIp) throws ManagementServerException, private UserVm createKubernetesControlNode(final Network network, String serverIp) throws ManagementServerException,
ResourceUnavailableException, InsufficientCapacityException { ResourceUnavailableException, InsufficientCapacityException {
UserVm masterVm = null; UserVm controlVm = null;
DataCenter zone = dataCenterDao.findById(kubernetesCluster.getZoneId()); DataCenter zone = dataCenterDao.findById(kubernetesCluster.getZoneId());
ServiceOffering serviceOffering = serviceOfferingDao.findById(kubernetesCluster.getServiceOfferingId()); ServiceOffering serviceOffering = serviceOfferingDao.findById(kubernetesCluster.getServiceOfferingId());
List<Long> networkIds = new ArrayList<Long>(); List<Long> networkIds = new ArrayList<Long>();
networkIds.add(kubernetesCluster.getNetworkId()); networkIds.add(kubernetesCluster.getNetworkId());
Pair<String, Map<Long, Network.IpAddresses>> ipAddresses = getKubernetesMasterIpAddresses(zone, network, owner); Pair<String, Map<Long, Network.IpAddresses>> ipAddresses = getKubernetesControlIpAddresses(zone, network, owner);
String masterIp = ipAddresses.first(); String controlIp = ipAddresses.first();
Map<Long, Network.IpAddresses> requestedIps = ipAddresses.second(); Map<Long, Network.IpAddresses> requestedIps = ipAddresses.second();
if (Network.GuestType.Shared.equals(network.getGuestType()) && Strings.isNullOrEmpty(serverIp)) { if (Network.GuestType.Shared.equals(network.getGuestType()) && Strings.isNullOrEmpty(serverIp)) {
serverIp = masterIp; serverIp = controlIp;
} }
Network.IpAddresses addrs = new Network.IpAddresses(masterIp, null); Network.IpAddresses addrs = new Network.IpAddresses(controlIp, null);
long rootDiskSize = kubernetesCluster.getNodeRootDiskSize(); long rootDiskSize = kubernetesCluster.getNodeRootDiskSize();
Map<String, String> customParameterMap = new HashMap<String, String>(); Map<String, String> customParameterMap = new HashMap<String, String>();
if (rootDiskSize > 0) { if (rootDiskSize > 0) {
 customParameterMap.put("rootdisksize", String.valueOf(rootDiskSize));
 }
-String hostName = kubernetesClusterNodeNamePrefix + "-master";
-if (kubernetesCluster.getMasterNodeCount() > 1) {
+String hostName = kubernetesClusterNodeNamePrefix + "-control";
+if (kubernetesCluster.getControlNodeCount() > 1) {
 hostName += "-1";
 }
 hostName = getKubernetesClusterNodeAvailableName(hostName);
 boolean haSupported = isKubernetesVersionSupportsHA();
-String k8sMasterConfig = null;
+String k8sControlConfig = null;
 try {
-k8sMasterConfig = getKubernetesMasterConfig(masterIp, serverIp, hostName, haSupported, Hypervisor.HypervisorType.VMware.equals(clusterTemplate.getHypervisorType()));
+k8sControlConfig = getKubernetesControlConfig(controlIp, serverIp, hostName, haSupported, Hypervisor.HypervisorType.VMware.equals(clusterTemplate.getHypervisorType()));
 } catch (IOException e) {
-logAndThrow(Level.ERROR, "Failed to read Kubernetes master configuration file", e);
+logAndThrow(Level.ERROR, "Failed to read Kubernetes control configuration file", e);
 }
-String base64UserData = Base64.encodeBase64String(k8sMasterConfig.getBytes(StringUtils.getPreferredCharset()));
-masterVm = userVmService.createAdvancedVirtualMachine(zone, serviceOffering, clusterTemplate, networkIds, owner,
+String base64UserData = Base64.encodeBase64String(k8sControlConfig.getBytes(StringUtils.getPreferredCharset()));
+controlVm = userVmService.createAdvancedVirtualMachine(zone, serviceOffering, clusterTemplate, networkIds, owner,
 hostName, hostName, null, null, null,
 Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST, base64UserData, kubernetesCluster.getKeyPair(),
 requestedIps, addrs, null, null, null, customParameterMap, null, null, null, null);
 if (LOGGER.isInfoEnabled()) {
-LOGGER.info(String.format("Created master VM ID: %s, %s in the Kubernetes cluster : %s", masterVm.getUuid(), hostName, kubernetesCluster.getName()));
+LOGGER.info(String.format("Created control VM ID: %s, %s in the Kubernetes cluster : %s", controlVm.getUuid(), hostName, kubernetesCluster.getName()));
 }
-return masterVm;
+return controlVm;
 }
-private String getKubernetesAdditionalMasterConfig(final String joinIp, final boolean ejectIso) throws IOException {
-String k8sMasterConfig = readResourceFile("/conf/k8s-master-add.yml");
+private String getKubernetesAdditionalControlConfig(final String joinIp, final boolean ejectIso) throws IOException {
+String k8sControlConfig = readResourceFile("/conf/k8s-control-node-add.yml");
 final String joinIpKey = "{{ k8s_master.join_ip }}";
 final String clusterTokenKey = "{{ k8s_master.cluster.token }}";
 final String sshPubKey = "{{ k8s.ssh.pub.key }}";
@@ -233,17 +233,17 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
 pubKey += "\n - \"" + sshkp.getPublicKey() + "\"";
 }
 }
-k8sMasterConfig = k8sMasterConfig.replace(sshPubKey, pubKey);
-k8sMasterConfig = k8sMasterConfig.replace(joinIpKey, joinIp);
-k8sMasterConfig = k8sMasterConfig.replace(clusterTokenKey, KubernetesClusterUtil.generateClusterToken(kubernetesCluster));
-k8sMasterConfig = k8sMasterConfig.replace(clusterHACertificateKey, KubernetesClusterUtil.generateClusterHACertificateKey(kubernetesCluster));
-k8sMasterConfig = k8sMasterConfig.replace(ejectIsoKey, String.valueOf(ejectIso));
-return k8sMasterConfig;
+k8sControlConfig = k8sControlConfig.replace(sshPubKey, pubKey);
+k8sControlConfig = k8sControlConfig.replace(joinIpKey, joinIp);
+k8sControlConfig = k8sControlConfig.replace(clusterTokenKey, KubernetesClusterUtil.generateClusterToken(kubernetesCluster));
+k8sControlConfig = k8sControlConfig.replace(clusterHACertificateKey, KubernetesClusterUtil.generateClusterHACertificateKey(kubernetesCluster));
+k8sControlConfig = k8sControlConfig.replace(ejectIsoKey, String.valueOf(ejectIso));
+return k8sControlConfig;
 }
-private UserVm createKubernetesAdditionalMaster(final String joinIp, final int additionalMasterNodeInstance) throws ManagementServerException,
+private UserVm createKubernetesAdditionalControlNode(final String joinIp, final int additionalControlNodeInstance) throws ManagementServerException,
 ResourceUnavailableException, InsufficientCapacityException {
-UserVm additionalMasterVm = null;
+UserVm additionalControlVm = null;
 DataCenter zone = dataCenterDao.findById(kubernetesCluster.getZoneId());
 ServiceOffering serviceOffering = serviceOfferingDao.findById(kubernetesCluster.getServiceOfferingId());
 List<Long> networkIds = new ArrayList<Long>();
@@ -254,50 +254,50 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
 if (rootDiskSize > 0) {
 customParameterMap.put("rootdisksize", String.valueOf(rootDiskSize));
 }
-String hostName = getKubernetesClusterNodeAvailableName(String.format("%s-master-%d", kubernetesClusterNodeNamePrefix, additionalMasterNodeInstance + 1));
-String k8sMasterConfig = null;
+String hostName = getKubernetesClusterNodeAvailableName(String.format("%s-control-%d", kubernetesClusterNodeNamePrefix, additionalControlNodeInstance + 1));
+String k8sControlConfig = null;
 try {
-k8sMasterConfig = getKubernetesAdditionalMasterConfig(joinIp, Hypervisor.HypervisorType.VMware.equals(clusterTemplate.getHypervisorType()));
+k8sControlConfig = getKubernetesAdditionalControlConfig(joinIp, Hypervisor.HypervisorType.VMware.equals(clusterTemplate.getHypervisorType()));
 } catch (IOException e) {
-logAndThrow(Level.ERROR, "Failed to read Kubernetes master configuration file", e);
+logAndThrow(Level.ERROR, "Failed to read Kubernetes control configuration file", e);
 }
-String base64UserData = Base64.encodeBase64String(k8sMasterConfig.getBytes(StringUtils.getPreferredCharset()));
-additionalMasterVm = userVmService.createAdvancedVirtualMachine(zone, serviceOffering, clusterTemplate, networkIds, owner,
+String base64UserData = Base64.encodeBase64String(k8sControlConfig.getBytes(StringUtils.getPreferredCharset()));
+additionalControlVm = userVmService.createAdvancedVirtualMachine(zone, serviceOffering, clusterTemplate, networkIds, owner,
 hostName, hostName, null, null, null,
 Hypervisor.HypervisorType.None, BaseCmd.HTTPMethod.POST, base64UserData, kubernetesCluster.getKeyPair(),
 null, addrs, null, null, null, customParameterMap, null, null, null, null);
 if (LOGGER.isInfoEnabled()) {
-LOGGER.info(String.format("Created master VM ID : %s, %s in the Kubernetes cluster : %s", additionalMasterVm.getUuid(), hostName, kubernetesCluster.getName()));
+LOGGER.info(String.format("Created control VM ID : %s, %s in the Kubernetes cluster : %s", additionalControlVm.getUuid(), hostName, kubernetesCluster.getName()));
 }
-return additionalMasterVm;
+return additionalControlVm;
 }
-private UserVm provisionKubernetesClusterMasterVm(final Network network, final String publicIpAddress) throws
+private UserVm provisionKubernetesClusterControlVm(final Network network, final String publicIpAddress) throws
 ManagementServerException, InsufficientCapacityException, ResourceUnavailableException {
-UserVm k8sMasterVM = null;
-k8sMasterVM = createKubernetesMaster(network, publicIpAddress);
-addKubernetesClusterVm(kubernetesCluster.getId(), k8sMasterVM.getId());
+UserVm k8sControlVM = null;
+k8sControlVM = createKubernetesControlNode(network, publicIpAddress);
+addKubernetesClusterVm(kubernetesCluster.getId(), k8sControlVM.getId());
 if (kubernetesCluster.getNodeRootDiskSize() > 0) {
-resizeNodeVolume(k8sMasterVM);
+resizeNodeVolume(k8sControlVM);
 }
-startKubernetesVM(k8sMasterVM);
-k8sMasterVM = userVmDao.findById(k8sMasterVM.getId());
-if (k8sMasterVM == null) {
-throw new ManagementServerException(String.format("Failed to provision master VM for Kubernetes cluster : %s" , kubernetesCluster.getName()));
+startKubernetesVM(k8sControlVM);
+k8sControlVM = userVmDao.findById(k8sControlVM.getId());
+if (k8sControlVM == null) {
+throw new ManagementServerException(String.format("Failed to provision control VM for Kubernetes cluster : %s" , kubernetesCluster.getName()));
 }
 if (LOGGER.isInfoEnabled()) {
-LOGGER.info(String.format("Provisioned the master VM : %s in to the Kubernetes cluster : %s", k8sMasterVM.getDisplayName(), kubernetesCluster.getName()));
+LOGGER.info(String.format("Provisioned the control VM : %s in to the Kubernetes cluster : %s", k8sControlVM.getDisplayName(), kubernetesCluster.getName()));
 }
-return k8sMasterVM;
+return k8sControlVM;
 }
-private List<UserVm> provisionKubernetesClusterAdditionalMasterVms(final String publicIpAddress) throws
+private List<UserVm> provisionKubernetesClusterAdditionalControlVms(final String publicIpAddress) throws
 InsufficientCapacityException, ManagementServerException, ResourceUnavailableException {
-List<UserVm> additionalMasters = new ArrayList<>();
-if (kubernetesCluster.getMasterNodeCount() > 1) {
-for (int i = 1; i < kubernetesCluster.getMasterNodeCount(); i++) {
+List<UserVm> additionalControlVms = new ArrayList<>();
+if (kubernetesCluster.getControlNodeCount() > 1) {
+for (int i = 1; i < kubernetesCluster.getControlNodeCount(); i++) {
 UserVm vm = null;
-vm = createKubernetesAdditionalMaster(publicIpAddress, i);
+vm = createKubernetesAdditionalControlNode(publicIpAddress, i);
 addKubernetesClusterVm(kubernetesCluster.getId(), vm.getId());
 if (kubernetesCluster.getNodeRootDiskSize() > 0) {
 resizeNodeVolume(vm);
@@ -305,15 +305,15 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
 startKubernetesVM(vm);
 vm = userVmDao.findById(vm.getId());
 if (vm == null) {
-throw new ManagementServerException(String.format("Failed to provision additional master VM for Kubernetes cluster : %s" , kubernetesCluster.getName()));
+throw new ManagementServerException(String.format("Failed to provision additional control VM for Kubernetes cluster : %s" , kubernetesCluster.getName()));
 }
-additionalMasters.add(vm);
+additionalControlVms.add(vm);
 if (LOGGER.isInfoEnabled()) {
-LOGGER.info(String.format("Provisioned additional master VM : %s in to the Kubernetes cluster : %s", vm.getDisplayName(), kubernetesCluster.getName()));
+LOGGER.info(String.format("Provisioned additional control VM : %s in to the Kubernetes cluster : %s", vm.getDisplayName(), kubernetesCluster.getName()));
 }
 }
 }
-return additionalMasters;
+return additionalControlVms;
 }
 private Network startKubernetesClusterNetwork(final DeployDestination destination) throws ManagementServerException {
@@ -348,10 +348,10 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
 account.getId(), false, NetUtils.TCP_PROTO, true);
 Map<Long, List<String>> vmIdIpMap = new HashMap<>();
-for (int i = 0; i < kubernetesCluster.getMasterNodeCount(); ++i) {
+for (int i = 0; i < kubernetesCluster.getControlNodeCount(); ++i) {
 List<String> ips = new ArrayList<>();
-Nic masterVmNic = networkModel.getNicInNetwork(clusterVMIds.get(i), kubernetesCluster.getNetworkId());
-ips.add(masterVmNic.getIPv4Address());
+Nic controlVmNic = networkModel.getNicInNetwork(clusterVMIds.get(i), kubernetesCluster.getNetworkId());
+ips.add(controlVmNic.getIPv4Address());
 vmIdIpMap.put(clusterVMIds.get(i), ips);
 }
 lbService.assignToLoadBalancer(lb.getId(), null, vmIdIpMap);
@@ -361,7 +361,7 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
 * Setup network rules for Kubernetes cluster
 * Open up firewall port CLUSTER_API_PORT, secure port on which Kubernetes
 * API server is running. Also create load balancing rule to forward public
-* IP traffic to master VMs' private IP.
+* IP traffic to control VMs' private IP.
 * Open up firewall ports NODES_DEFAULT_START_SSH_PORT to NODES_DEFAULT_START_SSH_PORT+n
 * for SSH access. Also create port-forwarding rule to forward public IP traffic to all
 * @param network
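The javadoc above spells out the rule scheme this worker provisions: one shared API port load-balanced across control VMs, plus one public SSH port per node starting at the default start port. As a hedged illustration of that port arithmetic only (hypothetical helper names, not the actual CloudStack implementation; `startPort` stands in for CLUSTER_NODES_DEFAULT_START_SSH_PORT):

```java
// Illustrative sketch: maps each cluster VM index to a public SSH port, as the
// javadoc above describes (public port startPort + i forwards to private port
// 22 on node i). Names here are hypothetical, not CloudStack API.
import java.util.LinkedHashMap;
import java.util.Map;

class SshPortMappingSketch {
    static Map<Integer, String> sshForwardingRules(String[] nodePrivateIps, int startPort) {
        Map<Integer, String> publicPortToPrivateIp = new LinkedHashMap<>();
        for (int i = 0; i < nodePrivateIps.length; i++) {
            // node i is reachable via publicIp:(startPort + i) -> nodePrivateIps[i]:22
            publicPortToPrivateIp.put(startPort + i, nodePrivateIps[i] + ":22");
        }
        return publicPortToPrivateIp;
    }

    public static void main(String[] args) {
        System.out.println(sshForwardingRules(new String[] {"10.1.1.10", "10.1.1.11"}, 2222));
    }
}
```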
@@ -405,7 +405,7 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
 throw new ManagementServerException(String.format("Failed to provision firewall rules for SSH access for the Kubernetes cluster : %s", kubernetesCluster.getName()), e);
 }
-// Load balancer rule fo API access for master node VMs
+// Load balancer rule for API access for control node VMs
 try {
 provisionLoadBalancerRule(publicIp, network, owner, clusterVMIds, CLUSTER_API_PORT);
 } catch (NetworkRuleConflictException | InsufficientAddressCapacityException e) {
@@ -450,9 +450,9 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
 }
 String kubeConfig = KubernetesClusterUtil.getKubernetesClusterConfig(kubernetesCluster, publicIpAddress, sshPort, CLUSTER_NODE_VM_USER, sshKeyFile, timeoutTime);
 if (!Strings.isNullOrEmpty(kubeConfig)) {
-final String masterVMPrivateIpAddress = getMasterVmPrivateIp();
-if (!Strings.isNullOrEmpty(masterVMPrivateIpAddress)) {
-kubeConfig = kubeConfig.replace(String.format("server: https://%s:%d", masterVMPrivateIpAddress, CLUSTER_API_PORT),
+final String controlVMPrivateIpAddress = getControlVmPrivateIp();
+if (!Strings.isNullOrEmpty(controlVMPrivateIpAddress)) {
+kubeConfig = kubeConfig.replace(String.format("server: https://%s:%d", controlVMPrivateIpAddress, CLUSTER_API_PORT),
 String.format("server: https://%s:%d", publicIpAddress, CLUSTER_API_PORT));
 }
 kubernetesClusterDetailsDao.addDetail(kubernetesCluster.getId(), "kubeConfigData", Base64.encodeBase64String(kubeConfig.getBytes(StringUtils.getPreferredCharset())), false);
@@ -503,29 +503,29 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
 Pair<String, Integer> publicIpSshPort = getKubernetesClusterServerIpSshPort(null);
 publicIpAddress = publicIpSshPort.first();
 if (Strings.isNullOrEmpty(publicIpAddress) &&
-(Network.GuestType.Isolated.equals(network.getGuestType()) || kubernetesCluster.getMasterNodeCount() > 1)) { // Shared network, single-master cluster won't have an IP yet
+(Network.GuestType.Isolated.equals(network.getGuestType()) || kubernetesCluster.getControlNodeCount() > 1)) { // Shared network, single-control node cluster won't have an IP yet
 logTransitStateAndThrow(Level.ERROR, String.format("Failed to start Kubernetes cluster : %s as no public IP found for the cluster" , kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed);
 }
 List<UserVm> clusterVMs = new ArrayList<>();
-UserVm k8sMasterVM = null;
+UserVm k8sControlVM = null;
 try {
-k8sMasterVM = provisionKubernetesClusterMasterVm(network, publicIpAddress);
+k8sControlVM = provisionKubernetesClusterControlVm(network, publicIpAddress);
 } catch (CloudRuntimeException | ManagementServerException | ResourceUnavailableException | InsufficientCapacityException e) {
-logTransitStateAndThrow(Level.ERROR, String.format("Provisioning the master VM failed in the Kubernetes cluster : %s", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
+logTransitStateAndThrow(Level.ERROR, String.format("Provisioning the control VM failed in the Kubernetes cluster : %s", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
 }
-clusterVMs.add(k8sMasterVM);
+clusterVMs.add(k8sControlVM);
 if (Strings.isNullOrEmpty(publicIpAddress)) {
-publicIpSshPort = getKubernetesClusterServerIpSshPort(k8sMasterVM);
+publicIpSshPort = getKubernetesClusterServerIpSshPort(k8sControlVM);
 publicIpAddress = publicIpSshPort.first();
 if (Strings.isNullOrEmpty(publicIpAddress)) {
 logTransitStateAndThrow(Level.WARN, String.format("Failed to start Kubernetes cluster : %s as no public IP found for the cluster", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed);
 }
 }
 try {
-List<UserVm> additionalMasterVMs = provisionKubernetesClusterAdditionalMasterVms(publicIpAddress);
-clusterVMs.addAll(additionalMasterVMs);
+List<UserVm> additionalControlVMs = provisionKubernetesClusterAdditionalControlVms(publicIpAddress);
+clusterVMs.addAll(additionalControlVMs);
 } catch (CloudRuntimeException | ManagementServerException | ResourceUnavailableException | InsufficientCapacityException e) {
-logTransitStateAndThrow(Level.ERROR, String.format("Provisioning additional master VM failed in the Kubernetes cluster : %s", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
+logTransitStateAndThrow(Level.ERROR, String.format("Provisioning additional control VM failed in the Kubernetes cluster : %s", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
 }
 try {
 List<UserVm> nodeVMs = provisionKubernetesClusterNodeVms(kubernetesCluster.getNodeCount(), publicIpAddress);
@@ -542,9 +542,9 @@ public class KubernetesClusterStartWorker extends KubernetesClusterResourceModif
 logTransitStateAndThrow(Level.ERROR, String.format("Failed to setup Kubernetes cluster : %s, unable to setup network rules", kubernetesCluster.getName()), kubernetesCluster.getId(), KubernetesCluster.Event.CreateFailed, e);
 }
 attachIsoKubernetesVMs(clusterVMs);
-if (!KubernetesClusterUtil.isKubernetesClusterMasterVmRunning(kubernetesCluster, publicIpAddress, publicIpSshPort.second(), startTimeoutTime)) {
-String msg = String.format("Failed to setup Kubernetes cluster : %s in usable state as unable to access master node VMs of the cluster", kubernetesCluster.getName());
-if (kubernetesCluster.getMasterNodeCount() > 1 && Network.GuestType.Shared.equals(network.getGuestType())) {
+if (!KubernetesClusterUtil.isKubernetesClusterControlVmRunning(kubernetesCluster, publicIpAddress, publicIpSshPort.second(), startTimeoutTime)) {
+String msg = String.format("Failed to setup Kubernetes cluster : %s in usable state as unable to access control node VMs of the cluster", kubernetesCluster.getName());
+if (kubernetesCluster.getControlNodeCount() > 1 && Network.GuestType.Shared.equals(network.getGuestType())) {
 msg = String.format("%s. Make sure external load-balancer has port forwarding rules for SSH access on ports %d-%d and API access on port %d",
 msg,
 CLUSTER_NODES_DEFAULT_START_SSH_PORT,

@@ -123,9 +123,9 @@ public class KubernetesClusterUpgradeWorker extends KubernetesClusterActionWorke
 if (!KubernetesClusterUtil.uncordonKubernetesClusterNode(kubernetesCluster, publicIpAddress, sshPort, CLUSTER_NODE_VM_USER, getManagementServerSshPublicKeyFile(), vm, upgradeTimeoutTime, 15000)) {
 logTransitStateDetachIsoAndThrow(Level.ERROR, String.format("Failed to upgrade Kubernetes cluster : %s, unable to uncordon Kubernetes node on VM : %s", kubernetesCluster.getName(), vm.getDisplayName()), kubernetesCluster, clusterVMs, KubernetesCluster.Event.OperationFailed, null);
 }
-if (i == 0) { // Wait for master to get in Ready state
+if (i == 0) { // Wait for control node to get in Ready state
 if (!KubernetesClusterUtil.isKubernetesClusterNodeReady(kubernetesCluster, publicIpAddress, sshPort, CLUSTER_NODE_VM_USER, getManagementServerSshPublicKeyFile(), hostName, upgradeTimeoutTime, 15000)) {
-logTransitStateDetachIsoAndThrow(Level.ERROR, String.format("Failed to upgrade Kubernetes cluster : %s, unable to get master Kubernetes node on VM : %s in ready state", kubernetesCluster.getName(), vm.getDisplayName()), kubernetesCluster, clusterVMs, KubernetesCluster.Event.OperationFailed, null);
+logTransitStateDetachIsoAndThrow(Level.ERROR, String.format("Failed to upgrade Kubernetes cluster : %s, unable to get control Kubernetes node on VM : %s in ready state", kubernetesCluster.getName(), vm.getDisplayName()), kubernetesCluster, clusterVMs, KubernetesCluster.Event.OperationFailed, null);
 }
 }
 if (LOGGER.isInfoEnabled()) {

@@ -254,25 +254,25 @@ public class KubernetesClusterUtil {
 return k8sApiServerSetup;
 }
-public static boolean isKubernetesClusterMasterVmRunning(final KubernetesCluster kubernetesCluster, final String ipAddress,
+public static boolean isKubernetesClusterControlVmRunning(final KubernetesCluster kubernetesCluster, final String ipAddress,
 final int port, final long timeoutTime) {
-boolean masterVmRunning = false;
-while (!masterVmRunning && System.currentTimeMillis() < timeoutTime) {
+boolean controlVmRunning = false;
+while (!controlVmRunning && System.currentTimeMillis() < timeoutTime) {
 try (Socket socket = new Socket()) {
 socket.connect(new InetSocketAddress(ipAddress, port), 10000);
-masterVmRunning = true;
+controlVmRunning = true;
 } catch (IOException e) {
 if (LOGGER.isInfoEnabled()) {
-LOGGER.info(String.format("Waiting for Kubernetes cluster : %s master node VMs to be accessible", kubernetesCluster.getName()));
+LOGGER.info(String.format("Waiting for Kubernetes cluster : %s control node VMs to be accessible", kubernetesCluster.getName()));
 }
 try {
 Thread.sleep(10000);
 } catch (InterruptedException ex) {
-LOGGER.warn(String.format("Error while waiting for Kubernetes cluster : %s master node VMs to be accessible", kubernetesCluster.getName()), ex);
+LOGGER.warn(String.format("Error while waiting for Kubernetes cluster : %s control node VMs to be accessible", kubernetesCluster.getName()), ex);
 }
 }
 }
-return masterVmRunning;
+return controlVmRunning;
 }
 public static boolean validateKubernetesClusterReadyNodesCount(final KubernetesCluster kubernetesCluster,
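The renamed helper above is essentially a generic wait-for-TCP-port loop. For readers skimming the rename, here is the same pattern as a self-contained sketch (names and timeout values are illustrative, not CloudStack API):

```java
// Self-contained sketch of the wait-for-TCP-port pattern used above:
// try to connect until the deadline, sleeping between attempts.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

class WaitForPort {
    static boolean waitForPort(String host, int port, long deadlineMillis) {
        while (System.currentTimeMillis() < deadlineMillis) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 10_000);
                return true; // port accepted a connection
            } catch (IOException e) {
                try {
                    Thread.sleep(10_000); // back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(waitForPort("127.0.0.1", 6443, System.currentTimeMillis() + 30_000));
    }
}
```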

@@ -109,9 +109,14 @@ public class CreateKubernetesClusterCmd extends BaseAsyncCreateCmd {
 private String sshKeyPairName;
 @Parameter(name=ApiConstants.MASTER_NODES, type = CommandType.LONG,
-description = "number of Kubernetes cluster master nodes, default is 1")
+description = "number of Kubernetes cluster master nodes, default is 1. This option is deprecated, please use 'controlnodes' parameter.")
+@Deprecated
 private Long masterNodes;
+@Parameter(name=ApiConstants.CONTROL_NODES, type = CommandType.LONG,
+description = "number of Kubernetes cluster control nodes, default is 1")
+private Long controlNodes;
 @Parameter(name=ApiConstants.EXTERNAL_LOAD_BALANCER_IP_ADDRESS, type = CommandType.STRING,
 description = "external load balancer IP address while using shared network with Kubernetes HA cluster")
 private String externalLoadBalancerIpAddress;
@@ -191,6 +196,13 @@ public class CreateKubernetesClusterCmd extends BaseAsyncCreateCmd {
 return masterNodes;
 }
+public Long getControlNodes() {
+if (controlNodes == null) {
+return 1L;
+}
+return controlNodes;
+}
 public String getExternalLoadBalancerIpAddress() {
 return externalLoadBalancerIpAddress;
 }
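Because the command now accepts both the deprecated masternodes parameter and the new controlnodes parameter (which the getter above defaults to 1), the server side has to pick an effective count when both arrive. A plausible reconciliation, sketched with hypothetical names rather than the actual manager code:

```java
// Hypothetical reconciliation of the deprecated 'masternodes' parameter with
// the new 'controlnodes' parameter; not the actual CloudStack manager code.
class ControlNodeCountSketch {
    static long effectiveControlNodeCount(Long deprecatedMasterNodes, Long controlNodes) {
        if (controlNodes != null) {
            return controlNodes;          // prefer the new parameter
        }
        if (deprecatedMasterNodes != null) {
            return deprecatedMasterNodes; // honor clients still sending 'masternodes'
        }
        return 1L;                        // documented default
    }

    public static void main(String[] args) {
        System.out.println(effectiveControlNodeCount(3L, null)); // prints 3
    }
}
```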

@@ -101,10 +101,15 @@ public class KubernetesClusterResponse extends BaseResponse implements Controlle
 @Param(description = "keypair details")
 private String keypair;
+@Deprecated
 @SerializedName(ApiConstants.MASTER_NODES)
 @Param(description = "the master nodes count for the Kubernetes cluster")
 private Long masterNodes;
+@SerializedName(ApiConstants.CONTROL_NODES)
+@Param(description = "the control nodes count for the Kubernetes cluster")
+private Long controlNodes;
 @SerializedName(ApiConstants.SIZE)
 @Param(description = "the size (worker nodes count) of the Kubernetes cluster")
 private Long clusterSize;
@@ -269,6 +274,14 @@ public class KubernetesClusterResponse extends BaseResponse implements Controlle
 this.masterNodes = masterNodes;
 }
+public Long getControlNodes() {
+return controlNodes;
+}
+public void setControlNodes(Long controlNodes) {
+this.controlNodes = controlNodes;
+}
 public Long getClusterSize() {
 return clusterSize;
 }

@@ -61,7 +61,7 @@ public class KubernetesSupportedVersionResponse extends BaseResponse {
 private String zoneName;
 @SerializedName(ApiConstants.SUPPORTS_HA)
-@Param(description = "whether Kubernetes supported version supports HA, multi-master")
+@Param(description = "whether Kubernetes supported version supports HA, multi-control nodes")
 private Boolean supportsHA;
 @SerializedName(ApiConstants.STATE)

@@ -18,14 +18,14 @@
 # Version 1.14 and below needs extra flags with kubeadm upgrade node
 if [ $# -lt 4 ]; then
-echo "Invalid input. Valid usage: ./upgrade-kubernetes.sh UPGRADE_VERSION IS_MASTER IS_OLD_VERSION IS_EJECT_ISO"
+echo "Invalid input. Valid usage: ./upgrade-kubernetes.sh UPGRADE_VERSION IS_CONTROL_NODE IS_OLD_VERSION IS_EJECT_ISO"
 echo "eg: ./upgrade-kubernetes.sh 1.16.3 true false false"
 exit 1
 fi
 UPGRADE_VERSION="${1}"
-IS_MAIN_MASTER=""
+IS_MAIN_CONTROL=""
 if [ $# -gt 1 ]; then
-IS_MAIN_MASTER="${2}"
+IS_MAIN_CONTROL="${2}"
 fi
 IS_OLD_VERSION=""
 if [ $# -gt 2 ]; then
@@ -100,7 +100,7 @@ if [ -d "$BINARIES_DIR" ]; then
 tar -f "${BINARIES_DIR}/cni/cni-plugins-amd64.tgz" -C /opt/cni/bin -xz
 tar -f "${BINARIES_DIR}/cri-tools/crictl-linux-amd64.tar.gz" -C /opt/bin -xz
-if [ "${IS_MAIN_MASTER}" == 'true' ]; then
+if [ "${IS_MAIN_CONTROL}" == 'true' ]; then
 set +e
 kubeadm upgrade apply ${UPGRADE_VERSION} -y
 retval=$?
@@ -121,7 +121,7 @@ if [ -d "$BINARIES_DIR" ]; then
 chmod +x {kubelet,kubectl}
 systemctl restart kubelet
-if [ "${IS_MAIN_MASTER}" == 'true' ]; then
+if [ "${IS_MAIN_CONTROL}" == 'true' ]; then
 kubectl apply -f ${BINARIES_DIR}/network.yaml
 kubectl apply -f ${BINARIES_DIR}/dashboard.yaml
 fi

@@ -21,26 +21,26 @@ package com.cloud.agent.api;
 public class GetControllerDataAnswer extends Answer {
 private final String _ipAddress;
-private final boolean _isMaster;
+private final boolean _isPrimary;
 public GetControllerDataAnswer(final GetControllerDataCommand cmd,
-final String ipAddress, final boolean isMaster){
+final String ipAddress, final boolean isPrimary){
 super(cmd);
 this._ipAddress = ipAddress;
-this._isMaster = isMaster;
+this._isPrimary = isPrimary;
 }
 public GetControllerDataAnswer(final Command command, final Exception e) {
 super(command, e);
 this._ipAddress = null;
-this._isMaster = false;
+this._isPrimary = false;
 }
 public String getIpAddress() {
 return _ipAddress;
 }
-public boolean isMaster() {
-return _isMaster;
+public boolean isPrimary() {
+return _isPrimary;
 }
 }

@@ -22,19 +22,19 @@ package com.cloud.agent.api;
 import com.cloud.host.HostVO;
 public class GetControllerHostsAnswer {
-private HostVO master;
-private HostVO slave;
-public HostVO getMaster() {
-return master;
+private HostVO primary;
+private HostVO secondary;
+public HostVO getPrimary() {
+return primary;
 }
-public void setMaster(final HostVO master) {
-this.master = master;
+public void setPrimary(final HostVO primary) {
+this.primary = primary;
 }
-public HostVO getSlave() {
-return slave;
+public HostVO getSecondary() {
+return secondary;
 }
-public void setSlave(final HostVO slave) {
-this.slave = slave;
+public void setSecondary(final HostVO secondary) {
+this.secondary = secondary;
 }
 }

@@ -72,7 +72,7 @@ public class BigSwitchBcfApi {
 private String zoneId;
 private Boolean nat;
-private boolean isMaster;
+private boolean isPrimary;
 private int _port = 8000;
@@ -241,7 +241,7 @@
 }
 public ControllerData getControllerData() {
-return new ControllerData(host, isMaster);
+return new ControllerData(host, isPrimary);
 }
 private void checkInvariants() throws BigSwitchBcfApiException{
@@ -274,7 +274,7 @@
 throw new BigSwitchBcfApiException("BCF topology sync required", true);
 }
 if (m.getStatusCode() == HttpStatus.SC_SEE_OTHER) {
-isMaster = false;
+isPrimary = false;
 set_hash(HASH_IGNORE);
 return HASH_IGNORE;
 }
@@ -402,10 +402,10 @@
 }
 if(returnValue instanceof ControlClusterStatus) {
 if(HASH_CONFLICT.equals(hash)) {
-isMaster = true;
+isPrimary = true;
 ((ControlClusterStatus) returnValue).setTopologySyncRequested(true);
-} else if (!HASH_IGNORE.equals(hash) && !isMaster) {
-isMaster = true;
+} else if (!HASH_IGNORE.equals(hash) && !isPrimary) {
+isPrimary = true;
 ((ControlClusterStatus) returnValue).setTopologySyncRequested(true);
 }
 }
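The hunks above encode the controller-role handshake: HTTP 303 (SC_SEE_OTHER) marks this controller as secondary, while a topology-hash conflict, or any real hash seen while we were not primary, promotes it to primary and requests a topology sync. A condensed, hypothetical restatement of that logic (illustrative names, not the actual BigSwitchBcfApi code):

```java
// Hypothetical restatement of the role handshake in BigSwitchBcfApi:
// a 303 redirect means another controller is primary; a hash conflict,
// or a real hash while we were not primary, promotes us and asks for a
// topology sync.
class ControllerRoleSketch {
    boolean isPrimary;
    boolean topologySyncRequested;

    void onResponse(int httpStatus, boolean hashConflict, boolean hashIgnored) {
        if (httpStatus == 303) {            // redirected: we are secondary
            isPrimary = false;
            return;
        }
        if (hashConflict || (!hashIgnored && !isPrimary)) {
            isPrimary = true;               // promoted to primary
            topologySyncRequested = true;   // resync topology with the fabric
        }
    }
}
```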

@@ -133,10 +133,10 @@ public class BigSwitchBcfUtils {
 _hostDao.loadDetails(bigswitchBcfHost);
 GetControllerDataAnswer answer = (GetControllerDataAnswer) _agentMgr.easySend(bigswitchBcfHost.getId(), cmd);
 if (answer != null){
-if (answer.isMaster()) {
-cluster.setMaster(bigswitchBcfHost);
+if (answer.isPrimary()) {
+cluster.setPrimary(bigswitchBcfHost);
 } else {
-cluster.setSlave(bigswitchBcfHost);
+cluster.setSecondary(bigswitchBcfHost);
 }
 }
 }
@@ -471,14 +471,14 @@ public class BigSwitchBcfUtils {
 public BcfAnswer sendBcfCommandWithNetworkSyncCheck(BcfCommand cmd, Network network)throws IllegalArgumentException{
 // get registered Big Switch controller
 ControlClusterData cluster = getControlClusterData(network.getPhysicalNetworkId());
-if(cluster.getMaster()==null){
+if(cluster.getPrimary()==null){
 return new BcfAnswer(cmd, new CloudRuntimeException("Big Switch Network controller temporarily unavailable"));
 }
 TopologyData topo = getTopology(network.getPhysicalNetworkId());
 cmd.setTopology(topo);
-BcfAnswer answer = (BcfAnswer) _agentMgr.easySend(cluster.getMaster().getId(), cmd);
+BcfAnswer answer = (BcfAnswer) _agentMgr.easySend(cluster.getPrimary().getId(), cmd);
 if (answer == null || !answer.getResult()) {
 s_logger.error ("BCF API Command failed");
@@ -487,17 +487,17 @@ public class BigSwitchBcfUtils {
 String newHash = answer.getHash();
 if (cmd.isTopologySyncRequested()) {
-newHash = syncTopologyToBcfHost(cluster.getMaster());
+newHash = syncTopologyToBcfHost(cluster.getPrimary());
 }
 if(newHash != null){
 commitTopologyHash(network.getPhysicalNetworkId(), newHash);
 }
-HostVO slave = cluster.getSlave();
-if(slave != null){
+HostVO secondary = cluster.getSecondary();
+if(secondary != null){
 TopologyData newTopo = getTopology(network.getPhysicalNetworkId());
 CacheBcfTopologyCommand cacheCmd = new CacheBcfTopologyCommand(newTopo);
-_agentMgr.easySend(cluster.getSlave().getId(), cacheCmd);
+_agentMgr.easySend(cluster.getSecondary().getId(), cacheCmd);
 }
 return answer;

@@ -22,22 +22,22 @@ package com.cloud.network.bigswitch;
 import com.cloud.host.HostVO;
 public class ControlClusterData {
-private HostVO master;
-private HostVO slave;
-public HostVO getMaster() {
-return master;
+private HostVO primary;
+private HostVO secondary;
+public HostVO getPrimary() {
+return primary;
 }
-public void setMaster(HostVO master) {
-this.master = master;
+public void setPrimary(HostVO primary) {
+this.primary = primary;
 }
-public HostVO getSlave() {
-return slave;
+public HostVO getSecondary() {
+return secondary;
 }
-public void setSlave(HostVO slave) {
-this.slave = slave;
+public void setSecondary(HostVO secondary) {
+this.secondary = secondary;
 }
 }

@@ -21,19 +21,19 @@ package com.cloud.network.bigswitch;
 public class ControllerData {
 private final String ipAddress;
-private final boolean isMaster;
-public ControllerData(String ipAddress, boolean isMaster) {
+private final boolean isPrimary;
+public ControllerData(String ipAddress, boolean isPrimary) {
 this.ipAddress = ipAddress;
-this.isMaster = isMaster;
+this.isPrimary = isPrimary;
 }
 public String getIpAddress() {
 return ipAddress;
 }
-public boolean isMaster() {
-return isMaster;
+public boolean isPrimary() {
+return isPrimary;
 }
 }

@@ -563,7 +563,7 @@ public class BigSwitchBcfResource extends ManagerBase implements ServerResource
 ControllerData controller = _bigswitchBcfApi.getControllerData();
 return new GetControllerDataAnswer(cmd,
 controller.getIpAddress(),
-controller.isMaster());
+controller.isPrimary());
 }
 private Answer executeRequest(ReadyCommand cmd) {

@@ -254,13 +254,13 @@ public class BigSwitchApiTest {
 }
 @Test
-public void testExecuteCreateObjectSlave() throws BigSwitchBcfApiException, IOException {
+public void testExecuteCreateObjectSecondary() throws BigSwitchBcfApiException, IOException {
 NetworkData network = new NetworkData();
 _method = mock(PostMethod.class);
 when(_method.getStatusCode()).thenReturn(HttpStatus.SC_SEE_OTHER);
 String hash = _api.executeCreateObject(network, "/", Collections.<String, String> emptyMap());
 assertEquals(hash, BigSwitchBcfApi.HASH_IGNORE);
-assertEquals(_api.getControllerData().isMaster(), false);
+assertEquals(_api.getControllerData().isPrimary(), false);
 }
 @Test(expected = BigSwitchBcfApiException.class)
@@ -320,7 +320,7 @@ public class BigSwitchApiTest {
 }
 @Test
-public void testExecuteUpdateObjectSlave() throws BigSwitchBcfApiException, IOException {
+public void testExecuteUpdateObjectSecondary() throws BigSwitchBcfApiException, IOException {
 NetworkData network = new NetworkData();
 _method = mock(PutMethod.class);
 when(_method.getStatusCode()).thenReturn(HttpStatus.SC_SEE_OTHER);
@@ -396,7 +396,7 @@ public class BigSwitchApiTest {
 }
 @Test
-public void testExecuteRetrieveControllerMasterStatus() throws BigSwitchBcfApiException, IOException {
+public void testExecuteRetrieveControllerPrimaryStatus() throws BigSwitchBcfApiException, IOException {
 _method = mock(GetMethod.class);
 when(_method.getStatusCode()).thenReturn(HttpStatus.SC_OK);
 when(((HttpMethodBase)_method).getResponseBodyAsString(2048)).thenReturn("{'healthy': true, 'topologySyncRequested': false}");
@@ -404,11 +404,11 @@
 }.getType(), "/", null);
 verify(_method, times(1)).releaseConnection();
 verify(_client, times(1)).executeMethod(_method);
-assertEquals(_api.getControllerData().isMaster(), true);
+assertEquals(_api.getControllerData().isPrimary(), true);
 }
 @Test
-public void testExecuteRetrieveControllerMasterStatusWithTopoConflict() throws BigSwitchBcfApiException, IOException {
+public void testExecuteRetrieveControllerPrimaryStatusWithTopoConflict() throws BigSwitchBcfApiException, IOException {
 _method = mock(GetMethod.class);
 when(_method.getStatusCode()).thenReturn(HttpStatus.SC_CONFLICT);
 when(((HttpMethodBase)_method).getResponseBodyAsString(2048)).thenReturn("{'healthy': true, 'topologySyncRequested': true}");
@@ -416,11 +416,11 @@
 }.getType(), "/", null);
 verify(_method, times(1)).releaseConnection();
 verify(_client, times(1)).executeMethod(_method);
-assertEquals(_api.getControllerData().isMaster(), true);
+assertEquals(_api.getControllerData().isPrimary(), true);
 }
 @Test
-public void testExecuteRetrieveControllerSlaveStatus() throws BigSwitchBcfApiException, IOException {
+public void testExecuteRetrieveControllerSecondaryStatus() throws BigSwitchBcfApiException, IOException {
 _method = mock(GetMethod.class);
 when(_method.getStatusCode()).thenReturn(HttpStatus.SC_SEE_OTHER);
 when(((HttpMethodBase)_method).getResponseBodyAsString(1024)).thenReturn("{'healthy': true, 'topologySyncRequested': false}");
@@ -428,6 +428,6 @@
 }.getType(), "/", null);
 verify(_method, times(1)).releaseConnection();
 verify(_client, times(1)).executeMethod(_method);
-assertEquals(_api.getControllerData().isMaster(), false);
+assertEquals(_api.getControllerData().isPrimary(), false);
 }
 }

@@ -79,7 +79,7 @@ public class ServiceManagerImpl implements ServiceManager {
 ContrailManager _manager;
 /**
-* In the case of service instance the master object is in the contrail API server. This object stores the
+* In the case of service instance the primary object is in the contrail API server. This object stores the
 * service instance parameters in the database.
 *
 * @param owner Used to determine the project.

@@ -33,7 +33,7 @@ import com.cloud.exception.InternalErrorException;
 *
 * The object constructor should set the uuid and the internal id of the cloudstack objects.
 *
-* The build method reads the master database (typically cloudstack mysql) and derives the state that
+* The build method reads the primary database (typically cloudstack mysql) and derives the state that
 * we wish to reflect in the contrail API. This method should not modify the Contrail API state.
 *
 * The verify method reads the API server state and compares with cached properties.
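The comment above describes a classic reconcile contract: build the desired state from the primary database, verify it against the API server's actual state, and update only on divergence. A generic sketch of that shape, using hypothetical interfaces rather than the Contrail model classes:

```java
// Generic build/verify/update reconcile loop, mirroring the contract the
// comment above describes. All names here are illustrative.
interface ReconcilableModel<S> {
    S buildDesiredState();            // read primary DB; must not write to the API
    S readActualState();              // read API server state
    void update(S desired);           // push desired state to the API server
}

class Reconciler {
    static <S> void reconcile(ReconcilableModel<S> model) {
        S desired = model.buildDesiredState();
        S actual = model.readActualState();
        if (!desired.equals(actual)) { // verify step: compare cached properties
            model.update(desired);
        }
    }
}
```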

@@ -110,7 +110,7 @@ public class ServiceInstanceModel extends ModelObjectBase {
 }
 /**
-* Recreate the model object from the Contrail API which is the master for this type of object.
+* Recreate the model object from the Contrail API which is main for this type of object.
 * @param siObj
 */
 public void build(ModelController controller, ServiceInstance siObj) {

@@ -116,7 +116,7 @@ public class NetworkProviderTest extends TestCase {
 private ApiConnector _api;
 private static int s_mysqlSrverPort;
 private static long s_msId;
-private static Merovingian2 s_lockMaster;
+private static Merovingian2 s_lockController;
 public static boolean s_initDone = false;
 @BeforeClass
@@ -127,14 +127,14 @@
 s_logger.info("mysql server launched on port " + s_mysqlSrverPort);
 s_msId = ManagementServerNode.getManagementServerId();
-s_lockMaster = Merovingian2.createLockMaster(s_msId);
+s_lockController = Merovingian2.createLockController(s_msId);
 }
 @AfterClass
 public static void globalTearDown() throws Exception {
-s_lockMaster.cleanupForServer(s_msId);
+s_lockController.cleanupForServer(s_msId);
 JmxUtil.unregisterMBean("Locks", "Locks");
-s_lockMaster = null;
+s_lockController = null;
 AbstractApplicationContext ctx = (AbstractApplicationContext)ComponentContext.getApplicationContext();
 Map<String, ComponentLifecycle> lifecycleComponents = ctx.getBeansOfType(ComponentLifecycle.class);

@@ -70,7 +70,7 @@ public class PublicNetworkTest extends TestCase {
 private static boolean s_initDone = false;
 private static int s_mysqlServerPort;
 private static long s_msId;
-private static Merovingian2 s_lockMaster;
+private static Merovingian2 s_lockController;
 private ManagementServerMock _server;
 private ApiConnector _spy;
@@ -81,14 +81,14 @@
 s_mysqlServerPort = TestDbSetup.init(null);
 s_logger.info("mysql server launched on port " + s_mysqlServerPort);
 s_msId = ManagementServerNode.getManagementServerId();
-s_lockMaster = Merovingian2.createLockMaster(s_msId);
+s_lockController = Merovingian2.createLockController(s_msId);
 }
 @AfterClass
 public static void globalTearDown() throws Exception {
-s_lockMaster.cleanupForServer(s_msId);
+s_lockController.cleanupForServer(s_msId);
 JmxUtil.unregisterMBean("Locks", "Locks");
-s_lockMaster = null;
+s_lockController = null;
 AbstractApplicationContext ctx = (AbstractApplicationContext)ComponentContext.getApplicationContext();
 Map<String, ComponentLifecycle> lifecycleComponents = ctx.getBeansOfType(ComponentLifecycle.class);

@@ -1161,7 +1161,7 @@ class MigrationStep:
 You develop your own steps, and then pass a list of those steps to the
 Migrator instance that will run them in order.
-When the migrator runs, it will take the list of steps you gave him,
+When the migrator runs, it will take the list of steps you gave,
 and, for each step:
 a) instantiate it, passing the context you gave to the migrator

@@ -1946,7 +1946,7 @@ public class ApiResponseHelper implements ResponseGenerator {
 //check permissions
 if (_accountMgr.isNormalUser(caller.getId())) {
-//regular user can see only jobs he owns
+//regular users can see only jobs they own
 if (caller.getId() != jobOwner.getId()) {
 throw new PermissionDeniedException("Account " + caller + " is not authorized to see job id=" + job.getId());
 }

@@ -3746,10 +3746,10 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
 throw new CloudRuntimeException("Resource type not supported.");
 }
 if (CallContext.current().getCallingAccount().getType() != Account.ACCOUNT_TYPE_ADMIN) {
-final List<String> userBlacklistedSettings = Stream.of(QueryService.UserVMBlacklistedDetails.value().split(","))
+final List<String> userDenyListedSettings = Stream.of(QueryService.UserVMDeniedDetails.value().split(","))
 .map(item -> (item).trim())
 .collect(Collectors.toList());
-for (final String detail : userBlacklistedSettings) {
+for (final String detail : userDenyListedSettings) {
 if (options.containsKey(detail)) {
 options.remove(detail);
 }
@@ -4149,6 +4149,6 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
 @Override
 public ConfigKey<?>[] getConfigKeys() {
-return new ConfigKey<?>[] {AllowUserViewDestroyedVM, UserVMBlacklistedDetails, UserVMReadOnlyDetails, SortKeyAscending, AllowUserViewAllDomainAccounts};
+return new ConfigKey<?>[] {AllowUserViewDestroyedVM, UserVMDeniedDetails, UserVMReadOnlyDetails, SortKeyAscending, AllowUserViewAllDomainAccounts};
 }
 }

@@ -344,9 +344,9 @@ public class UserVmJoinDaoImpl extends GenericDaoBaseWithTagInformation<UserVmJo
 userVmResponse.setPoolType(userVm.getPoolType().toString());
 }
-// Remove blacklisted settings if user is not admin
+// Remove deny listed settings if user is not admin
 if (caller.getType() != Account.ACCOUNT_TYPE_ADMIN) {
-String[] userVmSettingsToHide = QueryService.UserVMBlacklistedDetails.value().split(",");
+String[] userVmSettingsToHide = QueryService.UserVMDeniedDetails.value().split(",");
 for (String key : userVmSettingsToHide) {
 resourceDetails.remove(key.trim());
 }

@@ -1021,7 +1021,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
 if (route != null) {
 final String routeToVerify = route.trim();
 if (!NetUtils.isValidIp4Cidr(routeToVerify)) {
-throw new InvalidParameterValueException("Invalid value for blacklisted route: " + route + ". Valid format is list"
+throw new InvalidParameterValueException("Invalid value for route: " + route + " in deny list. Valid format is list"
 + " of cidrs separated by coma. Example: 10.1.1.0/24,192.168.0.0/24");
 }
 }
@@ -3765,7 +3765,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
 if (newVlanGateway == null && newVlanNetmask == null) {
 newVlanGateway = vlanGateway;
 newVlanNetmask = vlanNetmask;
-// this means he is trying to add to the existing subnet.
+// this means we are trying to add to the existing subnet.
 if (NetUtils.sameSubnet(newStartIP, newVlanGateway, newVlanNetmask)) {
 if (NetUtils.sameSubnet(newEndIP, newVlanGateway, newVlanNetmask)) {
 return NetUtils.SupersetOrSubset.sameSubnet;
@@ -3840,7 +3840,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
 // this implies the user is trying to add a new subnet
 // which is not a superset or subset of this subnet.
 } else if (val == NetUtils.SupersetOrSubset.isSubset) {
-// this means he is trying to add to the same subnet.
+// this means we are trying to add to the same subnet.
 throw new InvalidParameterValueException("The subnet you are trying to add is a subset of the existing subnet having gateway " + vlanGateway
 + " and netmask " + vlanNetmask);
 } else if (val == NetUtils.SupersetOrSubset.sameSubnet) {

@@ -297,7 +297,7 @@ public class FirstFitPlanner extends AdapterBase implements DeploymentClusterPla
 private Map<Short, Float> getCapacityThresholdMap() {
 // Lets build this real time so that the admin wont have to restart MS
-// if he changes these values
+// if anyone changes these values
 Map<Short, Float> disableThresholdMap = new HashMap<Short, Float>();
 String cpuDisableThresholdString = ClusterCPUCapacityDisableThreshold.value().toString();

@@ -90,7 +90,7 @@ import com.cloud.vm.dao.VMInstanceDao;
 * state. If a Investigator finds the VM is dead, then HA process is started on the VM, skipping step 2. 2. If the list of
 * Investigators can not determine if the VM is dead or alive. The list of FenceBuilders is invoked to fence off the VM so that
 * it won't do any damage to the storage and network. 3. The VM is marked as stopped. 4. The VM is started again via the normal
-* process of starting VMs. Note that once the VM is marked as stopped, the user may have started the VM himself. 5. VMs that
+* process of starting VMs. Note that once the VM is marked as stopped, the user may have started the VM explicitly. 5. VMs that
 * have re-started more than the configured number of times are marked as in Error state and the user is not allowed to restart
 * the VM.
 *
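The javadoc above walks the HA pipeline step by step: investigate, fence when liveness is unknown, mark stopped, restart via the normal path, and stop retrying after the configured limit. A compact, hypothetical distillation of that control flow (illustrative types, not the actual HighAvailabilityManager):

```java
// Hypothetical sketch of the HA steps described above. MAX_RESTARTS stands
// in for the configured restart limit.
enum Liveness { ALIVE, DEAD, UNKNOWN }

class HaSketch {
    static final int MAX_RESTARTS = 5; // stand-in for the configured limit

    static void handle(Vm vm) {
        Liveness liveness = investigate(vm);          // step 1: investigators
        if (liveness == Liveness.ALIVE) {
            return;
        }
        if (liveness == Liveness.UNKNOWN) {
            fence(vm);                                // step 2: fence builders
        }
        vm.state = "Stopped";                         // step 3: mark stopped
        if (vm.restartCount++ >= MAX_RESTARTS) {
            vm.state = "Error";                       // step 5: too many restarts
            return;
        }
        start(vm);                                    // step 4: normal start path
    }

    static Liveness investigate(Vm vm) { return Liveness.UNKNOWN; }
    static void fence(Vm vm) {}
    static void start(Vm vm) {}
}

class Vm { String state; int restartCount; }
```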

@ -617,7 +617,7 @@ NetworkMigrationResponder, AggregatedCommandExecutor, RedundantResource, DnsServ
} }
NetworkDetailVO updateInSequence=_networkDetailsDao.findDetail(network.getId(), Network.updatingInSequence); NetworkDetailVO updateInSequence=_networkDetailsDao.findDetail(network.getId(), Network.updatingInSequence);
if(network.isRedundant() && updateInSequence!=null && "true".equalsIgnoreCase(updateInSequence.getValue())){ if(network.isRedundant() && updateInSequence!=null && "true".equalsIgnoreCase(updateInSequence.getValue())){
List<DomainRouterVO> masterRouters=new ArrayList<DomainRouterVO>(); List<DomainRouterVO> primaryRouters=new ArrayList<DomainRouterVO>();
int noOfrouters=routers.size(); int noOfrouters=routers.size();
while (noOfrouters>0){ while (noOfrouters>0){
DomainRouterVO router = routers.get(0); DomainRouterVO router = routers.get(0);
@ -632,16 +632,16 @@ NetworkMigrationResponder, AggregatedCommandExecutor, RedundantResource, DnsServ
continue; continue;
} }
if(router.getRedundantState()!=VirtualRouter.RedundantState.BACKUP) { if(router.getRedundantState()!=VirtualRouter.RedundantState.BACKUP) {
masterRouters.add(router); primaryRouters.add(router);
routers.remove(router); routers.remove(router);
} }
noOfrouters--; noOfrouters--;
} }
if(routers.size()==0 && masterRouters.size()==0){ if(routers.size()==0 && primaryRouters.size()==0){
return null; return null;
} }
if(routers.size()==0 && masterRouters.size()!=0){ if(routers.size()==0 && primaryRouters.size()!=0){
routers=masterRouters; routers=primaryRouters;
} }
routers=routers.subList(0,1); routers=routers.subList(0,1);
routers.get(0).setUpdateState(VirtualRouter.UpdateState.UPDATE_IN_PROGRESS); routers.get(0).setUpdateState(VirtualRouter.UpdateState.UPDATE_IN_PROGRESS);
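
The loop above drains everything that is not in BACKUP state into primaryRouters, so backup routers are updated first and a primary is touched only when no backup remains. The same selection, reduced to a standalone sketch with a record in place of DomainRouterVO:

```
import java.util.ArrayList;
import java.util.List;

public class BackupFirstSelection {
    enum RedundantState { PRIMARY, BACKUP, UNKNOWN }

    record Router(long id, RedundantState state) {}

    // Return at most one router to update next, preferring BACKUP over PRIMARY.
    static List<Router> nextToUpdate(List<Router> routers) {
        List<Router> primaries = new ArrayList<>();
        List<Router> backups = new ArrayList<>();
        for (Router r : routers) {
            if (r.state() != RedundantState.BACKUP) primaries.add(r);
            else backups.add(r);
        }
        List<Router> candidates = backups.isEmpty() ? primaries : backups;
        return candidates.isEmpty() ? List.of() : candidates.subList(0, 1);
    }

    public static void main(String[] args) {
        List<Router> routers = List.of(
                new Router(1, RedundantState.PRIMARY),
                new Router(2, RedundantState.BACKUP));
        System.out.println(nextToUpdate(routers)); // picks router 2 (BACKUP)
    }
}
```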


@ -805,7 +805,7 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
if (conns == null || conns.isEmpty()) { if (conns == null || conns.isEmpty()) {
continue; continue;
} }
if (router.getIsRedundantRouter() && router.getRedundantState() != RedundantState.MASTER){ if (router.getIsRedundantRouter() && router.getRedundantState() != RedundantState.PRIMARY){
continue; continue;
} }
if (router.getState() != VirtualMachine.State.Running) { if (router.getState() != VirtualMachine.State.Running) {
@ -935,7 +935,7 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
final String context = "Redundant virtual router (name: " + router.getHostName() + ", id: " + router.getId() + ") " + " just switch from " + prevState + " to " final String context = "Redundant virtual router (name: " + router.getHostName() + ", id: " + router.getId() + ") " + " just switch from " + prevState + " to "
+ currState; + currState;
s_logger.info(context); s_logger.info(context);
if (currState == RedundantState.MASTER) { if (currState == RedundantState.PRIMARY) {
_alertMgr.sendAlert(AlertManager.AlertType.ALERT_TYPE_DOMAIN_ROUTER, router.getDataCenterId(), router.getPodIdToDeployIn(), title, context); _alertMgr.sendAlert(AlertManager.AlertType.ALERT_TYPE_DOMAIN_ROUTER, router.getDataCenterId(), router.getPodIdToDeployIn(), title, context);
} }
} }
@ -943,12 +943,12 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
} }
// Ensure router status is up to date before executing this function. The // Ensure router status is up to date before executing this function. The
// function would try its best to recover all routers except MASTER // function would try its best to recover all routers except PRIMARY
protected void recoverRedundantNetwork(final DomainRouterVO masterRouter, final DomainRouterVO backupRouter) { protected void recoverRedundantNetwork(final DomainRouterVO primaryRouter, final DomainRouterVO backupRouter) {
if (masterRouter.getState() == VirtualMachine.State.Running && backupRouter.getState() == VirtualMachine.State.Running) { if (primaryRouter.getState() == VirtualMachine.State.Running && backupRouter.getState() == VirtualMachine.State.Running) {
final HostVO masterHost = _hostDao.findById(masterRouter.getHostId()); final HostVO primaryHost = _hostDao.findById(primaryRouter.getHostId());
final HostVO backupHost = _hostDao.findById(backupRouter.getHostId()); final HostVO backupHost = _hostDao.findById(backupRouter.getHostId());
if (masterHost.getState() == Status.Up && backupHost.getState() == Status.Up) { if (primaryHost.getState() == Status.Up && backupHost.getState() == Status.Up) {
final String title = "Reboot " + backupRouter.getInstanceName() + " to ensure redundant virtual routers work"; final String title = "Reboot " + backupRouter.getInstanceName() + " to ensure redundant virtual routers work";
if (s_logger.isDebugEnabled()) { if (s_logger.isDebugEnabled()) {
s_logger.debug(title); s_logger.debug(title);
@ -971,7 +971,7 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
/* /*
* In order to make fail-over work well at any time, we have to ensure: * In order to make fail-over work well at any time, we have to ensure:
* 1. Backup router's priority = Master's priority - DELTA + 1 * 1. Backup router's priority = Primary's priority - DELTA + 1
*/ */
private void checkSanity(final List<DomainRouterVO> routers) { private void checkSanity(final List<DomainRouterVO> routers) {
final Set<Long> checkedNetwork = new HashSet<Long>(); final Set<Long> checkedNetwork = new HashSet<Long>();
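
The invariant in the comment ties the keepalived priorities of a redundant pair together: with a hypothetical primary priority of 100 and DELTA = 2, the backup must run at 100 - 2 + 1 = 99, one step below the primary. A tiny check expressing that rule (the constants are assumed, not CloudStack's actual defaults):

```
public class PrioritySanity {
    static final int DELTA = 2; // assumed value; the real delta comes from the VR configuration

    // Backup must sit exactly at primary - DELTA + 1 for fail-over to elect correctly.
    static boolean prioritiesSane(int primaryPriority, int backupPriority) {
        return backupPriority == primaryPriority - DELTA + 1;
    }

    public static void main(String[] args) {
        System.out.println(prioritiesSane(100, 99)); // true: 100 - 2 + 1 = 99
        System.out.println(prioritiesSane(100, 97)); // false: pair is mis-configured
    }
}
```
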
@ -1000,16 +1000,16 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
continue; continue;
} }
DomainRouterVO masterRouter = null; DomainRouterVO primaryRouter = null;
DomainRouterVO backupRouter = null; DomainRouterVO backupRouter = null;
for (final DomainRouterVO r : checkingRouters) { for (final DomainRouterVO r : checkingRouters) {
if (r.getRedundantState() == RedundantState.MASTER) { if (r.getRedundantState() == RedundantState.PRIMARY) {
if (masterRouter == null) { if (primaryRouter == null) {
masterRouter = r; primaryRouter = r;
} else { } else {
// Wilder Rodrigues (wrodrigues@schubergphilis.com // Wilder Rodrigues (wrodrigues@schubergphilis.com
// Force a restart in order to fix the conflict // Force a restart in order to fix the conflict
// recoverRedundantNetwork(masterRouter, r); // recoverRedundantNetwork(primaryRouter, r);
break; break;
} }
} else if (r.getRedundantState() == RedundantState.BACKUP) { } else if (r.getRedundantState() == RedundantState.BACKUP) {
@ -1027,7 +1027,7 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
} }
} }
private void checkDuplicateMaster(final List<DomainRouterVO> routers) { private void checkDuplicatePrimary(final List<DomainRouterVO> routers) {
final Map<Long, DomainRouterVO> networkRouterMaps = new HashMap<Long, DomainRouterVO>(); final Map<Long, DomainRouterVO> networkRouterMaps = new HashMap<Long, DomainRouterVO>();
for (final DomainRouterVO router : routers) { for (final DomainRouterVO router : routers) {
final List<Long> routerGuestNtwkIds = _routerDao.getRouterNetworks(router.getId()); final List<Long> routerGuestNtwkIds = _routerDao.getRouterNetworks(router.getId());
@ -1035,13 +1035,13 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
final Long vpcId = router.getVpcId(); final Long vpcId = router.getVpcId();
if (vpcId != null || routerGuestNtwkIds.size() > 0) { if (vpcId != null || routerGuestNtwkIds.size() > 0) {
Long routerGuestNtwkId = vpcId != null ? vpcId : routerGuestNtwkIds.get(0); Long routerGuestNtwkId = vpcId != null ? vpcId : routerGuestNtwkIds.get(0);
if (router.getRedundantState() == RedundantState.MASTER) { if (router.getRedundantState() == RedundantState.PRIMARY) {
if (networkRouterMaps.containsKey(routerGuestNtwkId)) { if (networkRouterMaps.containsKey(routerGuestNtwkId)) {
final DomainRouterVO dupRouter = networkRouterMaps.get(routerGuestNtwkId); final DomainRouterVO dupRouter = networkRouterMaps.get(routerGuestNtwkId);
final String title = "More than one redundant virtual router is in MASTER state! Router " + router.getHostName() + " and router " final String title = "More than one redundant virtual router is in PRIMARY state! Router " + router.getHostName() + " and router "
+ dupRouter.getHostName(); + dupRouter.getHostName();
final String context = "Virtual router (name: " + router.getHostName() + ", id: " + router.getId() + " and router (name: " + dupRouter.getHostName() final String context = "Virtual router (name: " + router.getHostName() + ", id: " + router.getId() + " and router (name: " + dupRouter.getHostName()
+ ", id: " + router.getId() + ") are both in MASTER state! If the problem persist, restart both of routers. "; + ", id: " + router.getId() + ") are both in PRIMARY state! If the problem persist, restart both of routers. ";
_alertMgr.sendAlert(AlertManager.AlertType.ALERT_TYPE_DOMAIN_ROUTER, router.getDataCenterId(), router.getPodIdToDeployIn(), title, context); _alertMgr.sendAlert(AlertManager.AlertType.ALERT_TYPE_DOMAIN_ROUTER, router.getDataCenterId(), router.getPodIdToDeployIn(), title, context);
s_logger.warn(context); s_logger.warn(context);
} else { } else {
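
checkDuplicatePrimary keys routers by their guest network (or VPC) id and alerts as soon as a second router claims PRIMARY for the same key. The core of that detection as a self-contained sketch:

```
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicatePrimaryCheck {
    record Router(long id, long networkId, boolean primary) {}

    static void checkDuplicatePrimary(List<Router> routers) {
        Map<Long, Router> primaryByNetwork = new HashMap<>();
        for (Router r : routers) {
            if (!r.primary()) continue;
            Router dup = primaryByNetwork.putIfAbsent(r.networkId(), r);
            if (dup != null) {
                // The real manager raises an ALERT_TYPE_DOMAIN_ROUTER alert here.
                System.out.printf("Routers %d and %d are both PRIMARY on network %d%n",
                        dup.id(), r.id(), r.networkId());
            }
        }
    }

    public static void main(String[] args) {
        checkDuplicatePrimary(List.of(
                new Router(1, 42, true),
                new Router(2, 42, true))); // triggers the duplicate alert
    }
}
```
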
@ -1083,7 +1083,7 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
updateRoutersRedundantState(routers); updateRoutersRedundantState(routers);
// Wilder Rodrigues (wrodrigues@schubergphilis.com) - One of the routers is not running, // Wilder Rodrigues (wrodrigues@schubergphilis.com) - One of the routers is not running,
// so we don't have to continue here since the host will be null anyway. Also, there is no need // so we don't have to continue here since the host will be null anyway. Also, there is no need
// to check for sanity or a duplicate master. Thus, just update the state and move on. // to check for sanity or a duplicate primary. Thus, just update the state and move on.
continue; continue;
} }
@ -1104,7 +1104,7 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
continue; continue;
} }
updateRoutersRedundantState(routers); updateRoutersRedundantState(routers);
checkDuplicateMaster(routers); checkDuplicatePrimary(routers);
checkSanity(routers); checkSanity(routers);
} catch (final Exception ex) { } catch (final Exception ex) {
s_logger.error("Fail to complete the RvRStatusUpdateTask! ", ex); s_logger.error("Fail to complete the RvRStatusUpdateTask! ", ex);
@ -2231,13 +2231,13 @@ Configurable, StateListener<VirtualMachine.State, VirtualMachine.Event, VirtualM
String redundantState = RedundantState.BACKUP.toString(); String redundantState = RedundantState.BACKUP.toString();
router.setRedundantState(RedundantState.BACKUP); router.setRedundantState(RedundantState.BACKUP);
if (routers.size() == 0) { if (routers.size() == 0) {
redundantState = RedundantState.MASTER.toString(); redundantState = RedundantState.PRIMARY.toString();
router.setRedundantState(RedundantState.MASTER); router.setRedundantState(RedundantState.PRIMARY);
} else { } else {
final DomainRouterVO router0 = routers.get(0); final DomainRouterVO router0 = routers.get(0);
if (router.getId() == router0.getId()) { if (router.getId() == router0.getId()) {
redundantState = RedundantState.MASTER.toString(); redundantState = RedundantState.PRIMARY.toString();
router.setRedundantState(RedundantState.MASTER); router.setRedundantState(RedundantState.PRIMARY);
} }
} }
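
A starting redundant router defaults to BACKUP and is promoted to PRIMARY only when it is the sole router or the first one recorded for the network. A minimal sketch of that ordering rule:

```
import java.util.List;

public class RedundantStateElection {
    enum RedundantState { PRIMARY, BACKUP }

    // 'peers' are the routers already recorded for this network, in creation order.
    static RedundantState electState(long routerId, List<Long> peerIds) {
        if (peerIds.isEmpty()) return RedundantState.PRIMARY;       // only router: primary
        return peerIds.get(0) == routerId ? RedundantState.PRIMARY  // first in line: primary
                                          : RedundantState.BACKUP;
    }

    public static void main(String[] args) {
        System.out.println(electState(7L, List.of()));        // PRIMARY
        System.out.println(electState(7L, List.of(7L, 9L)));  // PRIMARY
        System.out.println(electState(9L, List.of(7L, 9L)));  // BACKUP
    }
}
```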


@ -2293,9 +2293,9 @@ public class VpcManagerImpl extends ManagerBase implements VpcManager, VpcProvis
throw new InvalidParameterValueException("CIDR should be outside of link local cidr " + NetUtils.getLinkLocalCIDR()); throw new InvalidParameterValueException("CIDR should be outside of link local cidr " + NetUtils.getLinkLocalCIDR());
} }
// 3) Verify against blacklisted routes // 3) Verify against denied routes
if (isCidrBlacklisted(cidr, vpc.getZoneId())) { if (isCidrDenylisted(cidr, vpc.getZoneId())) {
throw new InvalidParameterValueException("The static gateway cidr overlaps with one of the blacklisted routes of the zone the VPC belongs to"); throw new InvalidParameterValueException("The static gateway cidr overlaps with one of the denied routes of the zone the VPC belongs to");
} }
return Transaction.execute(new TransactionCallbackWithException<StaticRouteVO, NetworkRuleConflictException>() { return Transaction.execute(new TransactionCallbackWithException<StaticRouteVO, NetworkRuleConflictException>() {
@ -2317,14 +2317,14 @@ public class VpcManagerImpl extends ManagerBase implements VpcManager, VpcProvis
}); });
} }
protected boolean isCidrBlacklisted(final String cidr, final long zoneId) { protected boolean isCidrDenylisted(final String cidr, final long zoneId) {
final String routesStr = NetworkOrchestrationService.GuestDomainSuffix.valueIn(zoneId); final String routesStr = NetworkOrchestrationService.GuestDomainSuffix.valueIn(zoneId);
if (routesStr != null && !routesStr.isEmpty()) { if (routesStr != null && !routesStr.isEmpty()) {
final String[] cidrBlackList = routesStr.split(","); final String[] cidrDenyList = routesStr.split(",");
if (cidrBlackList != null && cidrBlackList.length > 0) { if (cidrDenyList != null && cidrDenyList.length > 0) {
for (final String blackListedRoute : cidrBlackList) { for (final String denyListedRoute : cidrDenyList) {
if (NetUtils.isNetworksOverlap(blackListedRoute, cidr)) { if (NetUtils.isNetworksOverlap(denyListedRoute, cidr)) {
return true; return true;
} }
} }
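
isCidrDenylisted splits the comma-separated deny list from zone configuration and rejects any static-route CIDR overlapping an entry. A self-contained version, with a naive IPv4 containment test standing in for NetUtils.isNetworksOverlap:

```
public class CidrDenyList {

    // Minimal IPv4 CIDR overlap test (stand-in for NetUtils.isNetworksOverlap).
    static boolean overlaps(String cidrA, String cidrB) {
        long[] a = parse(cidrA);
        long[] b = parse(cidrB);
        long mask = a[1] > b[1] ? b[1] : a[1];    // compare under the shorter prefix
        return (a[0] & mask) == (b[0] & mask);
    }

    // Returns {address, netmask} packed as longs.
    static long[] parse(String cidr) {
        String[] parts = cidr.split("/");
        long ip = 0;
        for (String octet : parts[0].split("\\.")) ip = (ip << 8) | Long.parseLong(octet);
        int prefix = Integer.parseInt(parts[1]);
        long mask = prefix == 0 ? 0 : (0xFFFFFFFFL << (32 - prefix)) & 0xFFFFFFFFL;
        return new long[] {ip, mask};
    }

    static boolean isCidrDenylisted(String cidr, String denyListConfig) {
        if (denyListConfig == null || denyListConfig.isEmpty()) return false;
        for (String denied : denyListConfig.split(",")) {
            if (overlaps(denied.trim(), cidr)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // 10.0.1.0/24 falls inside the denied 10.0.0.0/8, so it is rejected.
        System.out.println(isCidrDenylisted("10.0.1.0/24", "10.0.0.0/8,192.168.100.0/24"));
    }
}
```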


@ -714,8 +714,8 @@ public class ResourceLimitManagerImpl extends ManagerBase implements ResourceLim
} }
if ((caller.getAccountId() == accountId.longValue()) && (_accountMgr.isDomainAdmin(caller.getId()) || caller.getType() == Account.ACCOUNT_TYPE_RESOURCE_DOMAIN_ADMIN)) { if ((caller.getAccountId() == accountId.longValue()) && (_accountMgr.isDomainAdmin(caller.getId()) || caller.getType() == Account.ACCOUNT_TYPE_RESOURCE_DOMAIN_ADMIN)) {
// If the admin is trying to update his own account, disallow. // If the admin is trying to update their own account, disallow.
throw new PermissionDeniedException("Unable to update resource limit for his own account " + accountId + ", permission denied"); throw new PermissionDeniedException("Unable to update resource limit for their own account " + accountId + ", permission denied");
} }
if (account.getType() == Account.ACCOUNT_TYPE_PROJECT) { if (account.getType() == Account.ACCOUNT_TYPE_PROJECT) {


@ -26,11 +26,11 @@ import com.cloud.utils.db.Merovingian2;
* when a management server is down. * when a management server is down.
* *
*/ */
public class LockMasterListener implements ClusterManagerListener { public class LockControllerListener implements ClusterManagerListener {
Merovingian2 _lockMaster; Merovingian2 _lockController;
public LockMasterListener(long msId) { public LockControllerListener(long msId) {
_lockMaster = Merovingian2.createLockMaster(msId); _lockController = Merovingian2.createLockController(msId);
} }
@Override @Override
@ -40,7 +40,7 @@ public class LockMasterListener implements ClusterManagerListener {
@Override @Override
public void onManagementNodeLeft(List<? extends ManagementServerHost> nodeList, long selfNodeId) { public void onManagementNodeLeft(List<? extends ManagementServerHost> nodeList, long selfNodeId) {
for (ManagementServerHost node : nodeList) { for (ManagementServerHost node : nodeList) {
_lockMaster.cleanupForServer(node.getMsid()); _lockController.cleanupForServer(node.getMsid());
} }
} }


@ -879,7 +879,7 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
@Inject @Inject
private VpcDao _vpcDao; private VpcDao _vpcDao;
private LockMasterListener _lockMasterListener; private LockControllerListener _lockControllerListener;
private final ScheduledExecutorService _eventExecutor = Executors.newScheduledThreadPool(1, new NamedThreadFactory("EventChecker")); private final ScheduledExecutorService _eventExecutor = Executors.newScheduledThreadPool(1, new NamedThreadFactory("EventChecker"));
private final ScheduledExecutorService _alertExecutor = Executors.newScheduledThreadPool(1, new NamedThreadFactory("AlertChecker")); private final ScheduledExecutorService _alertExecutor = Executors.newScheduledThreadPool(1, new NamedThreadFactory("AlertChecker"));
@ -985,11 +985,11 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
// Set human readable sizes // Set human readable sizes
NumbersUtil.enableHumanReadableSizes = _configDao.findByName("display.human.readable.sizes").getValue().equals("true"); NumbersUtil.enableHumanReadableSizes = _configDao.findByName("display.human.readable.sizes").getValue().equals("true");
if (_lockMasterListener == null) { if (_lockControllerListener == null) {
_lockMasterListener = new LockMasterListener(ManagementServerNode.getManagementServerId()); _lockControllerListener = new LockControllerListener(ManagementServerNode.getManagementServerId());
} }
_clusterMgr.registerListener(_lockMasterListener); _clusterMgr.registerListener(_lockControllerListener);
enableAdminUser("password"); enableAdminUser("password");
return true; return true;
@ -3815,7 +3815,7 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
String signature = ""; String signature = "";
try { try {
// get the user obj to get his secret key // get the user obj to get their secret key
user = _accountMgr.getActiveUser(userId); user = _accountMgr.getActiveUser(userId);
final String secretKey = user.getSecretKey(); final String secretKey = user.getSecretKey();
final String input = cloudIdentifier; final String input = cloudIdentifier;
@ -4551,12 +4551,12 @@ public class ManagementServerImpl extends ManagerBase implements ManagementServe
_storagePoolAllocators = storagePoolAllocators; _storagePoolAllocators = storagePoolAllocators;
} }
public LockMasterListener getLockMasterListener() { public LockControllerListener getLockControllerListener() {
return _lockMasterListener; return _lockControllerListener;
} }
public void setLockMasterListener(final LockMasterListener lockMasterListener) { public void setLockControllerListener(final LockControllerListener lockControllerListener) {
_lockMasterListener = lockMasterListener; _lockControllerListener = lockControllerListener;
} }
} }


@ -581,7 +581,7 @@ public class AccountManagerImpl extends ManagerBase implements AccountManager, M
@Override @Override
public Long checkAccessAndSpecifyAuthority(Account caller, Long zoneId) { public Long checkAccessAndSpecifyAuthority(Account caller, Long zoneId) {
// We just care for resource domain admin for now. He should be permitted to see only his zone. // We just care for resource domain admins for now, and they should be permitted to see only their zone.
if (isResourceDomainAdmin(caller.getAccountId())) { if (isResourceDomainAdmin(caller.getAccountId())) {
if (zoneId == null) { if (zoneId == null) {
return getZoneIdForAccount(caller); return getZoneIdForAccount(caller);


@ -2545,7 +2545,6 @@ public class UserVmManagerImpl extends ManagerBase implements UserVmManager, Vir
scanLock.releaseRef(); scanLock.releaseRef();
} }
} }
} }
@Override @Override
@ -2581,7 +2580,7 @@ public class UserVmManagerImpl extends ManagerBase implements UserVmManager, Vir
updateDisplayVmFlag(isDisplayVm, id, vmInstance); updateDisplayVmFlag(isDisplayVm, id, vmInstance);
} }
final Account caller = CallContext.current().getCallingAccount(); final Account caller = CallContext.current().getCallingAccount();
final List<String> userBlacklistedSettings = Stream.of(QueryService.UserVMBlacklistedDetails.value().split(",")) final List<String> userDenyListedSettings = Stream.of(QueryService.UserVMDeniedDetails.value().split(","))
.map(item -> (item).trim()) .map(item -> (item).trim())
.collect(Collectors.toList()); .collect(Collectors.toList());
final List<String> userReadOnlySettings = Stream.of(QueryService.UserVMReadOnlyDetails.value().split(",")) final List<String> userReadOnlySettings = Stream.of(QueryService.UserVMReadOnlyDetails.value().split(","))
@ -2592,7 +2591,7 @@ public class UserVmManagerImpl extends ManagerBase implements UserVmManager, Vir
userVmDetailsDao.removeDetails(id); userVmDetailsDao.removeDetails(id);
} else { } else {
for (final UserVmDetailVO detail : userVmDetailsDao.listDetails(id)) { for (final UserVmDetailVO detail : userVmDetailsDao.listDetails(id)) {
if (detail != null && !userBlacklistedSettings.contains(detail.getName()) if (detail != null && !userDenyListedSettings.contains(detail.getName())
&& !userReadOnlySettings.contains(detail.getName())) { && !userReadOnlySettings.contains(detail.getName())) {
userVmDetailsDao.removeDetail(id, detail.getName()); userVmDetailsDao.removeDetail(id, detail.getName());
} }
@ -2605,18 +2604,18 @@ public class UserVmManagerImpl extends ManagerBase implements UserVmManager, Vir
} }
if (caller != null && caller.getType() != Account.ACCOUNT_TYPE_ADMIN) { if (caller != null && caller.getType() != Account.ACCOUNT_TYPE_ADMIN) {
// Ensure a blacklisted or read-only detail is not passed by a non-root-admin user // Ensure a denied or read-only detail is not passed by a non-root-admin user
for (final String detailName : details.keySet()) { for (final String detailName : details.keySet()) {
if (userBlacklistedSettings.contains(detailName)) { if (userDenyListedSettings.contains(detailName)) {
throw new InvalidParameterValueException("You're not allowed to add or edit the restricted setting: " + detailName); throw new InvalidParameterValueException("You're not allowed to add or edit the restricted setting: " + detailName);
} }
if (userReadOnlySettings.contains(detailName)) { if (userReadOnlySettings.contains(detailName)) {
throw new InvalidParameterValueException("You're not allowed to add or edit the read-only setting: " + detailName); throw new InvalidParameterValueException("You're not allowed to add or edit the read-only setting: " + detailName);
} }
} }
// Add any hidden/blacklisted or read-only detail // Add any hidden/denied or read-only detail
for (final UserVmDetailVO detail : userVmDetailsDao.listDetails(id)) { for (final UserVmDetailVO detail : userVmDetailsDao.listDetails(id)) {
if (userBlacklistedSettings.contains(detail.getName()) || userReadOnlySettings.contains(detail.getName())) { if (userDenyListedSettings.contains(detail.getName()) || userReadOnlySettings.contains(detail.getName())) {
details.put(detail.getName(), detail.getValue()); details.put(detail.getName(), detail.getValue());
} }
} }
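
Updates from non-root-admin callers are filtered against two comma-separated setting lists: a denied or read-only key in the request is rejected, and protected keys already on the VM are copied back in so they survive the update. A compact sketch of that filter; the setting names in main() are illustrative:

```
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class VmDetailFilter {

    static List<String> splitSetting(String csv) {
        return Stream.of(csv.split(",")).map(String::trim).collect(Collectors.toList());
    }

    // Reject denied/read-only keys in the request, then copy protected keys back in.
    static Map<String, String> sanitize(Map<String, String> requested, Map<String, String> existing,
                                        String denyCsv, String readOnlyCsv) {
        List<String> denied = splitSetting(denyCsv);
        List<String> readOnly = splitSetting(readOnlyCsv);
        for (String key : requested.keySet()) {
            if (denied.contains(key) || readOnly.contains(key)) {
                throw new IllegalArgumentException("Not allowed to add or edit setting: " + key);
            }
        }
        Map<String, String> result = new HashMap<>(requested);
        existing.forEach((k, v) -> { // hidden/denied and read-only details survive the update
            if (denied.contains(k) || readOnly.contains(k)) {
                result.put(k, v);
            }
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> existing = Map.of("rootdisksize", "20");
        System.out.println(sanitize(new HashMap<>(Map.of("cpuNumber", "4")), existing,
                "rootdisksize, extraconfig", "dataDiskController"));
    }
}
```
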
@ -5569,7 +5568,7 @@ public class UserVmManagerImpl extends ManagerBase implements UserVmManager, Vir
* @param vm * @param vm
*/ */
protected void persistExtraConfigKvm(String decodedUrl, UserVm vm) { protected void persistExtraConfigKvm(String decodedUrl, UserVm vm) {
// validate config against blacklisted cfg commands // validate config against denied cfg commands
validateKvmExtraConfig(decodedUrl); validateKvmExtraConfig(decodedUrl);
String[] extraConfigs = decodedUrl.split("\n\n"); String[] extraConfigs = decodedUrl.split("\n\n");
for (String cfg : extraConfigs) { for (String cfg : extraConfigs) {
@ -5591,7 +5590,7 @@ public class UserVmManagerImpl extends ManagerBase implements UserVmManager, Vir
/** /**
* This method is called by persistExtraConfigKvm. * This method is called by persistExtraConfigKvm.
* Validates the passed extra configuration data for KVM against the blacklist of unwanted commands * Validates the passed extra configuration data for KVM against the deny-list of unwanted commands
* controlled by Root admin * controlled by Root admin
* @param decodedUrl string containing xml configuration to be validated * @param decodedUrl string containing xml configuration to be validated
*/ */
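
The body of validateKvmExtraConfig is not shown in this hunk; going only by the javadoc, a deny-list validator over blank-line-separated config entries might look like the following hypothetical sketch (the denied directive names are invented for illustration):

```
import java.util.List;

public class ExtraConfigValidator {
    // Hypothetical root-admin-controlled deny list of configuration directives.
    static final List<String> DENIED = List.of("cpu_mode", "emulator");

    static void validateKvmExtraConfig(String decodedUrl) {
        for (String cfg : decodedUrl.split("\n\n")) { // entries are separated by blank lines
            String directive = cfg.split("\n")[0].trim();
            for (String denied : DENIED) {
                if (directive.startsWith(denied)) {
                    throw new IllegalArgumentException("Extra config " + directive + " is not allowed");
                }
            }
        }
    }

    public static void main(String[] args) {
        validateKvmExtraConfig("video\n<video>...</video>"); // passes
        try {
            validateKvmExtraConfig("cpu_mode\n<cpu>...</cpu>"); // rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```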


@ -51,7 +51,7 @@
</bean> </bean>
<bean id="managementServerImpl" class="com.cloud.server.ManagementServerImpl"> <bean id="managementServerImpl" class="com.cloud.server.ManagementServerImpl">
<property name="lockMasterListener" ref="lockMasterListener" /> <property name="lockControllerListener" ref="lockControllerListener" />
<property name="userAuthenticators" <property name="userAuthenticators"
value="#{userAuthenticatorsRegistry.registered}" /> value="#{userAuthenticatorsRegistry.registered}" />
<property name="userPasswordEncoders" <property name="userPasswordEncoders"


@ -27,7 +27,7 @@
http://www.springframework.org/schema/context/spring-context.xsd" http://www.springframework.org/schema/context/spring-context.xsd"
> >
<bean id="lockMasterListener" class="com.cloud.server.LockMasterListener" > <bean id="lockControllerListener" class="com.cloud.server.LockControllerListener" >
<constructor-arg> <constructor-arg>
<bean class="org.apache.cloudstack.utils.identity.ManagementServerNode" factory-method="getManagementServerId" /> <bean class="org.apache.cloudstack.utils.identity.ManagementServerNode" factory-method="getManagementServerId" />
</constructor-arg> </constructor-arg>


@ -250,7 +250,7 @@ public class VirtualRouterElementTest {
public void testGetRouters2(){ public void testGetRouters2(){
Network networkUpdateInprogress=new NetworkVO(2l,null,null,null,1l,1l,1l,1l,"d","d","d",null,1l,1l,null,true,null,true); Network networkUpdateInprogress=new NetworkVO(2l,null,null,null,1l,1l,1l,1l,"d","d","d",null,1l,1l,null,true,null,true);
mockDAOs((NetworkVO)networkUpdateInprogress,testOffering); mockDAOs((NetworkVO)networkUpdateInprogress,testOffering);
//always return backup routers first when both master and backup need update. //always return backup routers first when both primary and backup need update.
List<DomainRouterVO> routers=virtualRouterElement.getRouters(networkUpdateInprogress); List<DomainRouterVO> routers=virtualRouterElement.getRouters(networkUpdateInprogress);
assertTrue(routers.size()==1); assertTrue(routers.size()==1);
assertTrue(routers.get(0).getRedundantState()==RedundantState.BACKUP && routers.get(0).getUpdateState()==VirtualRouter.UpdateState.UPDATE_IN_PROGRESS); assertTrue(routers.get(0).getRedundantState()==RedundantState.BACKUP && routers.get(0).getUpdateState()==VirtualRouter.UpdateState.UPDATE_IN_PROGRESS);
@ -260,7 +260,7 @@ public class VirtualRouterElementTest {
public void testGetRouters3(){ public void testGetRouters3(){
Network network=new NetworkVO(3l,null,null,null,1l,1l,1l,1l,"d","d","d",null,1l,1l,null,true,null,true); Network network=new NetworkVO(3l,null,null,null,1l,1l,1l,1l,"d","d","d",null,1l,1l,null,true,null,true);
mockDAOs((NetworkVO)network,testOffering); mockDAOs((NetworkVO)network,testOffering);
//always return backup routers first when both master and backup need update. //always return backup routers first when both primary and backup need update.
List<DomainRouterVO> routers=virtualRouterElement.getRouters(network); List<DomainRouterVO> routers=virtualRouterElement.getRouters(network);
assertTrue(routers.size()==4); assertTrue(routers.size()==4);
} }
@ -376,7 +376,7 @@ public class VirtualRouterElementTest {
/* stopPending */ false, /* stopPending */ false,
/* vpcId */ null); /* vpcId */ null);
routerNeedUpdateBackup.setUpdateState(VirtualRouter.UpdateState.UPDATE_NEEDED); routerNeedUpdateBackup.setUpdateState(VirtualRouter.UpdateState.UPDATE_NEEDED);
final DomainRouterVO routerNeedUpdateMaster = new DomainRouterVO(/* id */ 3L, final DomainRouterVO routerNeedUpdatePrimary = new DomainRouterVO(/* id */ 3L,
/* serviceOfferingId */ 1L, /* serviceOfferingId */ 1L,
/* elementId */ 0L, /* elementId */ 0L,
"name", "name",
@ -387,11 +387,11 @@ public class VirtualRouterElementTest {
/* accountId */ 1L, /* accountId */ 1L,
/* userId */ 1L, /* userId */ 1L,
/* isRedundantRouter */ false, /* isRedundantRouter */ false,
RedundantState.MASTER, RedundantState.PRIMARY,
/* haEnabled */ false, /* haEnabled */ false,
/* stopPending */ false, /* stopPending */ false,
/* vpcId */ null); /* vpcId */ null);
routerNeedUpdateMaster.setUpdateState(VirtualRouter.UpdateState.UPDATE_NEEDED); routerNeedUpdatePrimary.setUpdateState(VirtualRouter.UpdateState.UPDATE_NEEDED);
final DomainRouterVO routerUpdateComplete = new DomainRouterVO(/* id */ 4L, final DomainRouterVO routerUpdateComplete = new DomainRouterVO(/* id */ 4L,
/* serviceOfferingId */ 1L, /* serviceOfferingId */ 1L,
/* elementId */ 0L, /* elementId */ 0L,
@ -427,12 +427,12 @@ public class VirtualRouterElementTest {
List<DomainRouterVO> routerList1=new ArrayList<>(); List<DomainRouterVO> routerList1=new ArrayList<>();
routerList1.add(routerUpdateComplete); routerList1.add(routerUpdateComplete);
routerList1.add(routerNeedUpdateBackup); routerList1.add(routerNeedUpdateBackup);
routerList1.add(routerNeedUpdateMaster); routerList1.add(routerNeedUpdatePrimary);
routerList1.add(routerUpdateInProgress); routerList1.add(routerUpdateInProgress);
List<DomainRouterVO> routerList2=new ArrayList<>(); List<DomainRouterVO> routerList2=new ArrayList<>();
routerList2.add(routerUpdateComplete); routerList2.add(routerUpdateComplete);
routerList2.add(routerNeedUpdateBackup); routerList2.add(routerNeedUpdateBackup);
routerList2.add(routerNeedUpdateMaster); routerList2.add(routerNeedUpdatePrimary);
List<DomainRouterVO> routerList3=new ArrayList<>(); List<DomainRouterVO> routerList3=new ArrayList<>();
routerList3.add(routerUpdateComplete); routerList3.add(routerUpdateComplete);
routerList3.add(routerUpdateInProgress); routerList3.add(routerUpdateInProgress);


@ -263,7 +263,7 @@ public class VirtualNetworkApplianceManagerImplTest {
@Test @Test
public void testUpdateSite2SiteVpnConnectionState() throws Exception{ public void testUpdateSite2SiteVpnConnectionState() throws Exception{
DomainRouterVO router = new DomainRouterVO(1L, 1L, 1L, "First testing router", 1L, Hypervisor.HypervisorType.XenServer, 1L, 1L, 1L, 1L, false, VirtualRouter.RedundantState.MASTER, true, true, 1L); DomainRouterVO router = new DomainRouterVO(1L, 1L, 1L, "First testing router", 1L, Hypervisor.HypervisorType.XenServer, 1L, 1L, 1L, 1L, false, VirtualRouter.RedundantState.PRIMARY, true, true, 1L);
router.setState(VirtualMachine.State.Running); router.setState(VirtualMachine.State.Running);
router.setPrivateIpAddress("192.168.50.15"); router.setPrivateIpAddress("192.168.50.15");


@ -951,9 +951,9 @@ function send_all_trees(s, lcodes, dcodes, blcodes)
* Check if the data type is TEXT or BINARY, using the following algorithm: * Check if the data type is TEXT or BINARY, using the following algorithm:
* - TEXT if the two conditions below are satisfied: * - TEXT if the two conditions below are satisfied:
* a) There are no non-portable control characters belonging to the * a) There are no non-portable control characters belonging to the
* "black list" (0..6, 14..25, 28..31). * "deny list" (0..6, 14..25, 28..31).
* b) There is at least one printable character belonging to the * b) There is at least one printable character belonging to the
* "white list" (9 {TAB}, 10 {LF}, 13 {CR}, 32..255). * "allow list" (9 {TAB}, 10 {LF}, 13 {CR}, 32..255).
* - BINARY otherwise. * - BINARY otherwise.
* - The following partially-portable control characters form a * - The following partially-portable control characters form a
* "gray list" that is ignored in this detection algorithm: * "gray list" that is ignored in this detection algorithm:
@ -961,21 +961,21 @@ function send_all_trees(s, lcodes, dcodes, blcodes)
* IN assertion: the fields Freq of dyn_ltree are set. * IN assertion: the fields Freq of dyn_ltree are set.
*/ */
function detect_data_type(s) { function detect_data_type(s) {
/* black_mask is the bit mask of black-listed bytes /* deny_mask is the bit mask of deny-listed bytes
* set bits 0..6, 14..25, and 28..31 * set bits 0..6, 14..25, and 28..31
* 0xf3ffc07f = binary 11110011111111111100000001111111 * 0xf3ffc07f = binary 11110011111111111100000001111111
*/ */
var black_mask = 0xf3ffc07f; var deny_mask = 0xf3ffc07f;
var n; var n;
/* Check for non-textual ("black-listed") bytes. */ /* Check for non-textual ("deny-listed") bytes. */
for (n = 0; n <= 31; n++, black_mask >>>= 1) { for (n = 0; n <= 31; n++, deny_mask >>>= 1) {
if ((black_mask & 1) && (s.dyn_ltree[n * 2]/*.Freq*/ !== 0)) { if ((deny_mask & 1) && (s.dyn_ltree[n * 2]/*.Freq*/ !== 0)) {
return Z_BINARY; return Z_BINARY;
} }
} }
/* Check for textual ("white-listed") bytes. */ /* Check for textual ("allow-listed") bytes. */
if (s.dyn_ltree[9 * 2]/*.Freq*/ !== 0 || s.dyn_ltree[10 * 2]/*.Freq*/ !== 0 || if (s.dyn_ltree[9 * 2]/*.Freq*/ !== 0 || s.dyn_ltree[10 * 2]/*.Freq*/ !== 0 ||
s.dyn_ltree[13 * 2]/*.Freq*/ !== 0) { s.dyn_ltree[13 * 2]/*.Freq*/ !== 0) {
return Z_TEXT; return Z_TEXT;
@ -986,7 +986,7 @@ function detect_data_type(s) {
} }
} }
/* There are no "black-listed" or "white-listed" bytes: /* There are no "deny-listed" or "allow-listed" bytes:
* this stream either is empty or has tolerated ("gray-listed") bytes only. * this stream either is empty or has tolerated ("gray-listed") bytes only.
*/ */
return Z_BINARY; return Z_BINARY;
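
The deny mask packs the non-portable control characters (bits 0..6, 14..25 and 28..31) into a single 32-bit constant, so classifying each byte value below 32 is one shift-and-test. The same check ported to Java as a sketch, with the byte-frequency table passed in explicitly:

```
public class DataTypeDetect {
    static final int Z_BINARY = 0, Z_TEXT = 1;

    // freq[i] counts occurrences of byte value i (0..255) in the stream.
    static int detectDataType(int[] freq) {
        // Bits 0..6, 14..25 and 28..31 set: byte values that force BINARY.
        int denyMask = 0xf3ffc07f;
        for (int n = 0; n <= 31; n++, denyMask >>>= 1) {
            if ((denyMask & 1) != 0 && freq[n] != 0) return Z_BINARY;
        }
        // TAB, LF, CR or any byte >= 32 is enough to call the stream TEXT.
        if (freq[9] != 0 || freq[10] != 0 || freq[13] != 0) return Z_TEXT;
        for (int n = 32; n < 256; n++) {
            if (freq[n] != 0) return Z_TEXT;
        }
        return Z_BINARY; // empty, or gray-listed bytes only
    }

    public static void main(String[] args) {
        int[] freq = new int[256];
        for (byte b : "hello\n".getBytes()) freq[b & 0xff]++;
        System.out.println(detectDataType(freq) == Z_TEXT); // true
    }
}
```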


@ -27,13 +27,13 @@ fi
ROUTER_TYPE=$(cat /etc/cloudstack/cmdline.json | grep type | awk '{print $2;}' | sed -e 's/[,\"]//g') ROUTER_TYPE=$(cat /etc/cloudstack/cmdline.json | grep type | awk '{print $2;}' | sed -e 's/[,\"]//g')
if [ "$ROUTER_TYPE" = "router" ] if [ "$ROUTER_TYPE" = "router" ]
then then
ROUTER_STATE=$(ip addr show dev eth0 | grep inet | wc -l | xargs bash -c 'if [ $0 == 2 ]; then echo "MASTER"; else echo "BACKUP"; fi') ROUTER_STATE=$(ip addr show dev eth0 | grep inet | wc -l | xargs bash -c 'if [ $0 == 2 ]; then echo "PRIMARY"; else echo "BACKUP"; fi')
STATUS=$ROUTER_STATE STATUS=$ROUTER_STATE
else else
ROUTER_STATE=$(ip addr show dev eth1 | grep state | awk '{print $9;}') ROUTER_STATE=$(ip addr show dev eth1 | grep state | awk '{print $9;}')
if [ "$ROUTER_STATE" = "UP" ] if [ "$ROUTER_STATE" = "UP" ]
then then
STATUS=MASTER STATUS=PRIMARY
elif [ "$ROUTER_STATE" = "DOWN" ] elif [ "$ROUTER_STATE" = "DOWN" ]
then then
STATUS=BACKUP STATUS=BACKUP


@ -25,9 +25,9 @@ import logging
from optparse import OptionParser from optparse import OptionParser
parser = OptionParser() parser = OptionParser()
parser.add_option("-m", "--master", parser.add_option("-p", "--primary",
action="store_true", default=False, dest="master", action="store_true", default=False, dest="primary",
help="Set router master") help="Set router primary")
parser.add_option("-b", "--backup", parser.add_option("-b", "--backup",
action="store_true", default=False, dest="backup", action="store_true", default=False, dest="backup",
help="Set router backup") help="Set router backup")
@ -42,15 +42,15 @@ logging.basicConfig(filename=config.get_logger(),
format=config.get_format()) format=config.get_format())
config.cmdline() config.cmdline()
cl = CsCmdLine("cmdline", config) cl = CsCmdLine("cmdline", config)
# Update the configuration to set state as backup and let keepalived decide who the real Master is! # Update the configuration to set state as backup and let keepalived decide who the real Primary is!
cl.set_master_state(False) cl.set_primary_state(False)
cl.save() cl.save()
config.set_address() config.set_address()
red = CsRedundant(config) red = CsRedundant(config)
if options.master: if options.primary:
red.set_master() red.set_primary()
if options.backup: if options.backup:
red.set_backup() red.set_backup()


@ -608,13 +608,13 @@ class CsIP:
app.setup() app.setup()
# If redundant then this is dealt with # If redundant then this is dealt with
# by the master backup functions # by the primary backup functions
if not cmdline.is_redundant(): if not cmdline.is_redundant():
if method == "add": if method == "add":
CsPasswdSvc(self.address['public_ip']).start() CsPasswdSvc(self.address['public_ip']).start()
elif method == "delete": elif method == "delete":
CsPasswdSvc(self.address['public_ip']).stop() CsPasswdSvc(self.address['public_ip']).stop()
elif cmdline.is_master(): elif cmdline.is_primary():
if method == "add": if method == "add":
CsPasswdSvc(self.get_gateway() + "," + self.address['public_ip']).start() CsPasswdSvc(self.get_gateway() + "," + self.address['public_ip']).start()
elif method == "delete": elif method == "delete":


@ -103,23 +103,23 @@ class CsCmdLine(CsDataBag):
else: else:
return "unknown" return "unknown"
def is_master(self): def is_primary(self):
if not self.is_redundant(): if not self.is_redundant():
return False return False
if "redundant_state" in self.idata(): if "redundant_state" in self.idata():
return self.idata()['redundant_state'] == "MASTER" return self.idata()['redundant_state'] == "PRIMARY"
return False return False
def set_fault_state(self): def set_fault_state(self):
self.idata()['redundant_state'] = "FAULT" self.idata()['redundant_state'] = "FAULT"
self.idata()['redundant_master'] = False self.idata()['redundant_primary'] = False
def set_master_state(self, value): def set_primary_state(self, value):
if value: if value:
self.idata()['redundant_state'] = "MASTER" self.idata()['redundant_state'] = "PRIMARY"
else: else:
self.idata()['redundant_state'] = "BACKUP" self.idata()['redundant_state'] = "BACKUP"
self.idata()['redundant_master'] = value self.idata()['redundant_primary'] = value
def get_router_id(self): def get_router_id(self):
if "router_id" in self.idata(): if "router_id" in self.idata():


@ -71,7 +71,7 @@ class CsDhcp(CsDataBag):
self.write_hosts() self.write_hosts()
if not self.cl.is_redundant() or self.cl.is_master(): if not self.cl.is_redundant() or self.cl.is_primary():
if restart_dnsmasq: if restart_dnsmasq:
CsHelper.service("dnsmasq", "restart") CsHelper.service("dnsmasq", "restart")
else: else:


@ -29,8 +29,8 @@ from netaddr import *
PUBLIC_INTERFACES = {"router": "eth2", "vpcrouter": "eth1"} PUBLIC_INTERFACES = {"router": "eth2", "vpcrouter": "eth1"}
STATE_COMMANDS = {"router": "ip addr show dev eth0 | grep inet | wc -l | xargs bash -c 'if [ $0 == 2 ]; then echo \"MASTER\"; else echo \"BACKUP\"; fi'", STATE_COMMANDS = {"router": "ip addr show dev eth0 | grep inet | wc -l | xargs bash -c 'if [ $0 == 2 ]; then echo \"PRIMARY\"; else echo \"BACKUP\"; fi'",
"vpcrouter": "ip addr show dev eth1 | grep state | awk '{print $9;}' | xargs bash -c 'if [ $0 == \"UP\" ]; then echo \"MASTER\"; else echo \"BACKUP\"; fi'"} "vpcrouter": "ip addr show dev eth1 | grep state | awk '{print $9;}' | xargs bash -c 'if [ $0 == \"UP\" ]; then echo \"PRIMARY\"; else echo \"BACKUP\"; fi'"}
def reconfigure_interfaces(router_config, interfaces): def reconfigure_interfaces(router_config, interfaces):
@ -41,14 +41,14 @@ def reconfigure_interfaces(router_config, interfaces):
cmd = "ip link set %s up" % interface.get_device() cmd = "ip link set %s up" % interface.get_device()
# If redundant only bring up public interfaces that are not eth1. # If redundant only bring up public interfaces that are not eth1.
# Reason: private gateways are public interfaces. # Reason: private gateways are public interfaces.
# master.py and keepalived will deal with eth1 public interface. # configure_router.py and keepalived will deal with eth1 public interface.
if router_config.is_redundant() and interface.is_public(): if router_config.is_redundant() and interface.is_public():
state_cmd = STATE_COMMANDS[router_config.get_type()] state_cmd = STATE_COMMANDS[router_config.get_type()]
logging.info("Check state command => %s" % state_cmd) logging.info("Check state command => %s" % state_cmd)
state = execute(state_cmd)[0] state = execute(state_cmd)[0]
logging.info("Route state => %s" % state) logging.info("Route state => %s" % state)
if interface.get_device() != PUBLIC_INTERFACES[router_config.get_type()] and state == "MASTER": if interface.get_device() != PUBLIC_INTERFACES[router_config.get_type()] and state == "PRIMARY":
execute(cmd) execute(cmd)
else: else:
execute(cmd) execute(cmd)


@ -199,20 +199,20 @@ class CsRedundant(object):
if keepalived_conf.is_changed() or force_keepalived_restart: if keepalived_conf.is_changed() or force_keepalived_restart:
keepalived_conf.commit() keepalived_conf.commit()
os.chmod(self.KEEPALIVED_CONF, 0o644) os.chmod(self.KEEPALIVED_CONF, 0o644)
if force_keepalived_restart or not self.cl.is_master(): if force_keepalived_restart or not self.cl.is_primary():
CsHelper.service("keepalived", "restart") CsHelper.service("keepalived", "restart")
else: else:
CsHelper.service("keepalived", "reload") CsHelper.service("keepalived", "reload")
def release_lock(self): def release_lock(self):
try: try:
os.remove("/tmp/master_lock") os.remove("/tmp/primary_lock")
except OSError: except OSError:
pass pass
def set_lock(self): def set_lock(self):
""" """
Make sure that master state changes happen sequentially Make sure that primary state changes happen sequentially
""" """
iterations = 10 iterations = 10
time_between = 1 time_between = 1
@ -220,13 +220,13 @@ class CsRedundant(object):
for iter in range(0, iterations): for iter in range(0, iterations):
try: try:
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind('/tmp/master_lock') s.bind('/tmp/primary_lock')
return s return s
except socket.error, e: except socket.error, e:
error_code = e.args[0] error_code = e.args[0]
error_string = e.args[1] error_string = e.args[1]
print "Process already running (%d:%s). Exiting" % (error_code, error_string) print "Process already running (%d:%s). Exiting" % (error_code, error_string)
logging.info("Master is already running, waiting") logging.info("Primary is already running, waiting")
sleep(time_between) sleep(time_between)
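
set_lock serialises primary/backup transitions by binding a Unix-domain socket at a fixed path; a competing process fails the bind and retries. A comparable pattern in Java, deliberately swapped to java.nio file locking since the pre-16 JDK has no Unix-socket bind; the path and retry constants mirror the Python values:

```
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class TransitionLock {

    // Retry an exclusive lock, mirroring the bind-and-wait loop in set_lock().
    static FileLock acquire(String path, int iterations, long sleepMillis) throws Exception {
        RandomAccessFile file = new RandomAccessFile(path, "rw");
        for (int i = 0; i < iterations; i++) {
            FileLock lock = file.getChannel().tryLock();
            if (lock != null) {
                return lock; // we hold the lock; the state change may proceed
            }
            System.out.println("Another transition is running, waiting");
            Thread.sleep(sleepMillis);
        }
        throw new IllegalStateException("Could not acquire transition lock");
    }

    public static void main(String[] args) throws Exception {
        FileLock lock = acquire("/tmp/primary_lock", 10, 1000);
        try {
            // ... perform the primary/backup transition sequentially ...
        } finally {
            lock.release(); // equivalent of release_lock()
        }
    }
}
```
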
def set_fault(self): def set_fault(self):
@ -290,7 +290,7 @@ class CsRedundant(object):
CsHelper.service("dnsmasq", "stop") CsHelper.service("dnsmasq", "stop")
self.cl.set_master_state(False) self.cl.set_primary_state(False)
self.cl.save() self.cl.save()
self.release_lock() self.release_lock()
@ -298,14 +298,14 @@ class CsRedundant(object):
CsHelper.reconfigure_interfaces(self.cl, interfaces) CsHelper.reconfigure_interfaces(self.cl, interfaces)
logging.info("Router switched to backup mode") logging.info("Router switched to backup mode")
def set_master(self): def set_primary(self):
""" Set the current router to master """ """ Set the current router to primary """
if not self.cl.is_redundant(): if not self.cl.is_redundant():
logging.error("Set master called on non-redundant router") logging.error("Set primary called on non-redundant router")
return return
self.set_lock() self.set_lock()
logging.debug("Setting router to master") logging.debug("Setting router to primary")
dev = '' dev = ''
interfaces = [interface for interface in self.address.get_interfaces() if interface.is_public()] interfaces = [interface for interface in self.address.get_interfaces() if interface.is_public()]
@ -348,7 +348,7 @@ class CsRedundant(object):
CsPasswdSvc(interface.get_gateway() + "," + interface.get_ip()).restart() CsPasswdSvc(interface.get_gateway() + "," + interface.get_ip()).restart()
CsHelper.service("dnsmasq", "restart") CsHelper.service("dnsmasq", "restart")
self.cl.set_master_state(True) self.cl.set_primary_state(True)
self.cl.save() self.cl.save()
self.release_lock() self.release_lock()
@ -362,7 +362,7 @@ class CsRedundant(object):
public_devices.sort() public_devices.sort()
# Ensure the default route is added, or outgoing traffic from VMs with static NAT on # Ensure the default route is added, or outgoing traffic from VMs with static NAT on
# the subsequent interfaces will go from he wrong IP # the subsequent interfaces will go from the wrong IP
route = CsRoute() route = CsRoute()
dev = '' dev = ''
for interface in interfaces: for interface in interfaces:
@ -381,7 +381,7 @@ class CsRedundant(object):
if interface.get_device() == device: if interface.get_device() == device:
CsHelper.execute("arping -I %s -U %s -c 1" % (device, interface.get_ip())) CsHelper.execute("arping -I %s -U %s -c 1" % (device, interface.get_ip()))
logging.info("Router switched to master mode") logging.info("Router switched to primary mode")
def _collect_ignore_ips(self): def _collect_ignore_ips(self):
""" """


@ -358,7 +358,7 @@ cflag=
nflag= nflag=
op="" op=""
is_master=0 is_primary=0
is_redundant=0 is_redundant=0
if_keep_state=0 if_keep_state=0
IFACEGWIPFILE='/var/cache/cloud/ifaceGwIp' IFACEGWIPFILE='/var/cache/cloud/ifaceGwIp'
@ -366,13 +366,13 @@ grep "redundant_router=1" /var/cache/cloud/cmdline > /dev/null
if [ $? -eq 0 ] if [ $? -eq 0 ]
then then
is_redundant=1 is_redundant=1
sudo /opt/cloud/bin/checkrouter.sh --no-lock|grep "Status: MASTER" > /dev/null 2>&1 sudo /opt/cloud/bin/checkrouter.sh --no-lock|grep "Status: PRIMARY" > /dev/null 2>&1
if [ $? -eq 0 ] if [ $? -eq 0 ]
then then
is_master=1 is_primary=1
fi fi
fi fi
if [ $is_redundant -eq 1 -a $is_master -ne 1 ] if [ $is_redundant -eq 1 -a $is_primary -ne 1 ]
then then
if_keep_state=1 if_keep_state=1
fi fi


@ -58,7 +58,7 @@ then
systemctl stop --now conntrackd >> $ROUTER_LOG 2>&1 systemctl stop --now conntrackd >> $ROUTER_LOG 2>&1
#Set fault so we have the same effect as a KeepaliveD fault. #Set fault so we have the same effect as a KeepaliveD fault.
python /opt/cloud/bin/master.py --fault python /opt/cloud/bin/configure_router.py --fault
pkill -9 keepalived >> $ROUTER_LOG 2>&1 || true pkill -9 keepalived >> $ROUTER_LOG 2>&1 || true
pkill -9 conntrackd >> $ROUTER_LOG 2>&1 || true pkill -9 conntrackd >> $ROUTER_LOG 2>&1 || true

Some files were not shown because too many files have changed in this diff.