api,agent,server,engine-schema: scalability improvements (#9840)

* api,agent,server,engine-schema: scalability improvements

The following changes and improvements are included:

- Improvements in handling of PingRoutingCommand

    1. Added global config `vm.sync.power.state.transitioning` (default: true) to control the syncing of power states for transitioning VMs. It can be set to false to skip computing the set of transitioning VMs (see the sketch after this list).
    2. Improved VirtualMachinePowerStateSync to sync the power states of a host's VMs in a batch
    3. Optimized the scanning of stalled VMs
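  For illustration, the new setting is an ordinary boolean ConfigKey (its definition appears in the VirtualMachineManager diff further below), so the extra computation can be skipped with a simple guard. A minimal sketch, where the scan helper name is hypothetical and not part of this change:

  ```java
  // Hypothetical fragment: only compute transitioning/stalled VMs when the operator
  // has left vm.sync.power.state.transitioning enabled (the default).
  if (VirtualMachineManager.VmSyncPowerStateTransitioning.value()) {
      syncTransitioningVmPowerStates(hostId, vmPowerReports); // made-up helper name
  }
  ```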

- Added config `capacity.calculate.workers` to set the number of worker threads used for capacity calculation
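  As a rough, self-contained sketch of the worker-pool idea (this is not the CapacityManagerImpl code; the system-property lookup below merely stands in for reading the `capacity.calculate.workers` setting):

  ```java
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.TimeUnit;

  public class CapacityWorkersSketch {
      public static void main(String[] args) throws InterruptedException {
          // Stand-in for the capacity.calculate.workers setting (default "1").
          int workers = Integer.parseInt(System.getProperty("capacity.calculate.workers", "1"));

          ExecutorService pool = Executors.newFixedThreadPool(workers);
          for (long hostId = 1; hostId <= 4; hostId++) {
              final long id = hostId;
              // Each host capacity update becomes an independent task on the pool.
              pool.submit(() -> System.out.println("updating capacity for host " + id));
          }
          pool.shutdown();
          pool.awaitTermination(10, TimeUnit.SECONDS);
      }
  }
  ```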

- Added a caching framework based on the Caffeine in-memory caching library, https://github.com/ben-manes/caffeine
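  The library usage itself is small; a self-contained sketch of the pattern used by these caches, with an expire-after-write period where zero means "no caching" (the key format and lookup below are illustrative, not the actual checker code):

  ```java
  import java.time.Duration;

  import com.github.benmanes.caffeine.cache.Cache;
  import com.github.benmanes.caffeine.cache.Caffeine;

  public class ApiAccessCacheSketch {
      // Returns null when the configured period is zero, i.e. caching disabled.
      static Cache<String, Boolean> buildCache(int expireAfterWriteSeconds) {
          if (expireAfterWriteSeconds <= 0) {
              return null;
          }
          return Caffeine.newBuilder()
                  .expireAfterWrite(Duration.ofSeconds(expireAfterWriteSeconds))
                  .maximumSize(10_000)
                  .build();
      }

      public static void main(String[] args) {
          Cache<String, Boolean> cache = buildCache(60);
          // Compute on miss, serve later lookups from memory until the entry expires.
          Boolean allowed = cache.get("role-4:listVirtualMachines", key -> expensiveDbLookup(key));
          System.out.println(allowed);
      }

      private static Boolean expensiveDbLookup(String key) {
          return Boolean.TRUE; // stand-in for the real role-permission query
      }
  }
  ```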

- Added caching for account/user role API access (dynamic API checker). The expire-after-write period can be configured using the config `dynamic.apichecker.cache.period`; if set to zero, caching is disabled. Default is 0.

- Added caching for account/user role API access with expire-after-write set to 60 seconds.

- Added caching for some recurring DB retrievals

    1. CapacityManager - listing service offerings; beneficial for host capacity calculation
    2. LibvirtServerDiscoverer - existing hosts for the cluster; beneficial for host joins
    3. DownloadListener - hypervisors for the zone; beneficial for host joins
    4. VirtualMachineManagerImpl - VMs in progress; beneficial for processing stalled VMs during PingRoutingCommands

- Optimized management server (MS) list retrieval for agent connect

- Optimized finding a ready system VM template for a zone

- Database retrieval optimisations - fixed and refactored cases where only IDs or counts are needed, mainly for hosts and other infra entities, plus similar cases for VMs and other host-related entities used by background tasks

- Changes in the agent-AgentManager connection and the NIO client/server classes

    1. Optimized the use of the executor service
    2. Refactored the Agent class to better handle connections
    3. Perform SSL handshakes within worker threads
    4. Added global configs to control the behaviour depending on the infra, since the SSL handshake can be a bottleneck during agent connections. `agent.ssl.handshake.min.workers` and `agent.ssl.handshake.max.workers` control the number of new connections the management server handles at a time, and `agent.ssl.handshake.timeout` sets the number of seconds after which the SSL handshake times out at the MS end.
    5. On the agent side, backoff and SSL handshake timeout can be controlled by the agent properties `backoff.seconds` and `ssl.handshake.timeout` (see the sketch below).
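  The agent-side wiring of these two properties can be seen in the AgentShell diffs further below; it is condensed here as a fragment for quick reference (variable names are simplified, not the exact field names):

  ```java
  // Fragment mirroring the AgentShell changes in this PR: both values are optional, so an
  // absent property falls back to the defaults documented in agent.properties (30s and 5s).
  Integer sslHandshakeTimeout =
          AgentPropertiesFileHandler.getPropertyValue(AgentProperties.SSL_HANDSHAKE_TIMEOUT);

  Map<String, Object> backoffParams = new HashMap<>();
  backoffParams.put("seconds", properties.getProperty("backoff.seconds"));
  backoff.configure("ConstantTimeBackoff", backoffParams);
  ```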

- Improvements in stats collection to minimize DB retrievals

- Improvements in DeploymentPlanner to retrieve only the desired host fields and to make fewer retrievals

- Improvements in connecting hosts to a storage pool. Added config `storage.pool.host.connect.workers` to control the number of worker threads used to connect hosts to a storage pool. The worker-thread approach is currently used only for NFS and ScaleIO pools.
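  A rough sketch of the worker-thread approach (not the actual StorageManagerImpl code: the executor wiring, `upHostsForPool`, `primaryStore` and the error handling are assumptions, while `connectHostToSharedPool` and the new config key do appear in the diffs below):

  ```java
  // Illustrative fragment: connect each up host on its own worker thread, with the pool
  // sized by storage.pool.host.connect.workers (default 1). Exception handling is elided.
  ExecutorService executor = Executors.newFixedThreadPool(
          StorageManager.StoragePoolHostConnectWorkers.value());
  List<Future<Boolean>> futures = new ArrayList<>();
  for (Host host : upHostsForPool) {
      futures.add(executor.submit(() -> connectHostToSharedPool(host, primaryStore.getId())));
  }
  for (Future<Boolean> future : futures) {
      future.get(); // surface any per-host failure before marking the pool usable
  }
  executor.shutdown();
  ```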

- Minor improvements in resource limit calculations with respect to DB retrievals

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>

* test1, domaindetails, capacitymanager fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* test2 - agent tests

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* capacitymanagertest fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* change

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix missing changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* address comments

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* revert marvin/setup.py

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix indent

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* use space in sql

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* address duplicate

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* update host logs

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* revert e36c6a5d07

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix npe in capacity calculation

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* move schema changes to 4.20.1 upgrade

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* build fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* address comments

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix build

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* add some more tests

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* checkstyle fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* remove unnecessary mocks

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* build fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* replace statics

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* engine/orchestration,utils: limit number of concurrent new agent
connections

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor - remove unused

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* unregister closed connections, monitor & cleanup

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* add check for outdated vm filter in power sync

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* agent: synchronize sendRequest wait

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

---------

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Abhishek Kumar authored on 2025-02-01 12:28:41 +05:30; committed by GitHub
commit 0b5a5e8043 (parent ae2ffbe40b)
138 changed files with 4440 additions and 2131 deletions


@@ -1 +1 @@
-3.6
+3.10


@@ -434,3 +434,10 @@ iscsi.session.cleanup.enabled=false
 # Implicit host tags managed by agent.properties
 # host.tags=
+
+# Timeout(in seconds) for SSL handshake when agent connects to server. When no value is set then default value of 30s
+# will be used
+#ssl.handshake.timeout=
+
+# Wait(in seconds) during agent reconnections. When no value is set then default value of 5s will be used
+#backoff.seconds=

File diff suppressed because it is too large.


@@ -16,29 +16,6 @@
 // under the License.
 package com.cloud.agent;
-import com.cloud.agent.Agent.ExitStatus;
-import com.cloud.agent.dao.StorageComponent;
-import com.cloud.agent.dao.impl.PropertiesStorage;
-import com.cloud.agent.properties.AgentProperties;
-import com.cloud.agent.properties.AgentPropertiesFileHandler;
-import com.cloud.resource.ServerResource;
-import com.cloud.utils.LogUtils;
-import com.cloud.utils.ProcessUtil;
-import com.cloud.utils.PropertiesUtil;
-import com.cloud.utils.backoff.BackoffAlgorithm;
-import com.cloud.utils.backoff.impl.ConstantTimeBackoff;
-import com.cloud.utils.exception.CloudRuntimeException;
-import org.apache.commons.daemon.Daemon;
-import org.apache.commons.daemon.DaemonContext;
-import org.apache.commons.daemon.DaemonInitException;
-import org.apache.commons.lang.math.NumberUtils;
-import org.apache.commons.lang3.BooleanUtils;
-import org.apache.commons.lang3.StringUtils;
-import org.apache.logging.log4j.Logger;
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.core.config.Configurator;
-import javax.naming.ConfigurationException;
 import java.io.File;
 import java.io.FileNotFoundException;
 import java.io.IOException;

@@ -53,6 +30,31 @@ import java.util.Map;
 import java.util.Properties;
 import java.util.UUID;
+import javax.naming.ConfigurationException;
+import org.apache.commons.daemon.Daemon;
+import org.apache.commons.daemon.DaemonContext;
+import org.apache.commons.daemon.DaemonInitException;
+import org.apache.commons.lang.math.NumberUtils;
+import org.apache.commons.lang3.BooleanUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.apache.logging.log4j.core.config.Configurator;
+import com.cloud.agent.Agent.ExitStatus;
+import com.cloud.agent.dao.StorageComponent;
+import com.cloud.agent.dao.impl.PropertiesStorage;
+import com.cloud.agent.properties.AgentProperties;
+import com.cloud.agent.properties.AgentPropertiesFileHandler;
+import com.cloud.resource.ServerResource;
+import com.cloud.utils.LogUtils;
+import com.cloud.utils.ProcessUtil;
+import com.cloud.utils.PropertiesUtil;
+import com.cloud.utils.backoff.BackoffAlgorithm;
+import com.cloud.utils.backoff.impl.ConstantTimeBackoff;
+import com.cloud.utils.exception.CloudRuntimeException;
 public class AgentShell implements IAgentShell, Daemon {
     protected static Logger LOGGER = LogManager.getLogger(AgentShell.class);

@@ -406,7 +408,9 @@
         LOGGER.info("Defaulting to the constant time backoff algorithm");
         _backoff = new ConstantTimeBackoff();
-        _backoff.configure("ConstantTimeBackoff", new HashMap<String, Object>());
+        Map<String, Object> map = new HashMap<>();
+        map.put("seconds", _properties.getProperty("backoff.seconds"));
+        _backoff.configure("ConstantTimeBackoff", map);
     }

     private void launchAgent() throws ConfigurationException {

@@ -455,6 +459,11 @@
         agent.start();
     }

+    @Override
+    public Integer getSslHandshakeTimeout() {
+        return AgentPropertiesFileHandler.getPropertyValue(AgentProperties.SSL_HANDSHAKE_TIMEOUT);
+    }
+
     public synchronized int getNextAgentId() {
         return _nextAgentId++;
     }


@@ -70,4 +70,6 @@ public interface IAgentShell {
     String getConnectedHost();

     void launchNewAgent(ServerResource resource) throws ConfigurationException;
+
+    Integer getSslHandshakeTimeout();
 }


@@ -811,6 +811,13 @@
      */
     public static final Property<String> HOST_TAGS = new Property<>("host.tags", null, String.class);

+    /**
+     * Timeout for SSL handshake in seconds
+     * Data type: Integer.<br>
+     * Default value: <code>null</code>
+     */
+    public static final Property<Integer> SSL_HANDSHAKE_TIMEOUT = new Property<>("ssl.handshake.timeout", null, Integer.class);
+
     public static class Property <T>{
         private String name;
         private T defaultValue;


@@ -362,4 +362,11 @@
         Assert.assertEquals(expected, shell.getConnectedHost());
     }
+
+    @Test
+    public void testGetSslHandshakeTimeout() {
+        Integer expected = 1;
+        agentPropertiesFileHandlerMocked.when(() -> AgentPropertiesFileHandler.getPropertyValue(Mockito.eq(AgentProperties.SSL_HANDSHAKE_TIMEOUT))).thenReturn(expected);
+        Assert.assertEquals(expected, agentShellSpy.getSslHandshakeTimeout());
+    }
 }


@ -0,0 +1,257 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.agent;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertSame;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.eq;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import java.io.IOException;
import java.net.InetSocketAddress;
import javax.naming.ConfigurationException;
import org.apache.logging.log4j.Logger;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.junit.MockitoJUnitRunner;
import org.springframework.test.util.ReflectionTestUtils;
import com.cloud.resource.ServerResource;
import com.cloud.utils.backoff.impl.ConstantTimeBackoff;
import com.cloud.utils.nio.Link;
import com.cloud.utils.nio.NioConnection;
@RunWith(MockitoJUnitRunner.class)
public class AgentTest {
Agent agent;
private AgentShell shell;
private ServerResource serverResource;
private Logger logger;
@Before
public void setUp() throws ConfigurationException {
shell = mock(AgentShell.class);
serverResource = mock(ServerResource.class);
doReturn(true).when(serverResource).configure(any(), any());
doReturn(1).when(shell).getWorkers();
doReturn(1).when(shell).getPingRetries();
agent = new Agent(shell, 1, serverResource);
logger = mock(Logger.class);
ReflectionTestUtils.setField(agent, "logger", logger);
}
@Test
public void testGetLinkLogNullLinkReturnsEmptyString() {
Link link = null;
String result = agent.getLinkLog(link);
assertEquals("", result);
}
@Test
public void testGetLinkLogLinkWithTraceEnabledReturnsLinkLogWithHashCode() {
Link link = mock(Link.class);
InetSocketAddress socketAddress = new InetSocketAddress("192.168.1.100", 1111);
when(link.getSocketAddress()).thenReturn(socketAddress);
when(logger.isTraceEnabled()).thenReturn(true);
String result = agent.getLinkLog(link);
System.out.println(result);
assertTrue(result.startsWith(System.identityHashCode(link) + "-"));
assertTrue(result.contains("192.168.1.100"));
}
@Test
public void testGetAgentNameWhenServerResourceIsNull() {
ReflectionTestUtils.setField(agent, "serverResource", null);
assertEquals("Agent", agent.getAgentName());
}
@Test
public void testGetAgentNameWhenAppendAgentNameIsTrue() {
when(serverResource.isAppendAgentNameToLogs()).thenReturn(true);
when(serverResource.getName()).thenReturn("TestAgent");
String agentName = agent.getAgentName();
assertEquals("TestAgent", agentName);
}
@Test
public void testGetAgentNameWhenAppendAgentNameIsFalse() {
when(serverResource.isAppendAgentNameToLogs()).thenReturn(false);
String agentName = agent.getAgentName();
assertEquals("Agent", agentName);
}
@Test
public void testAgentInitialization() {
Runtime.getRuntime().removeShutdownHook(agent.shutdownThread);
when(shell.getPingRetries()).thenReturn(3);
when(shell.getWorkers()).thenReturn(5);
agent.setupShutdownHookAndInitExecutors();
assertNotNull(agent.selfTaskExecutor);
assertNotNull(agent.outRequestHandler);
assertNotNull(agent.requestHandler);
}
@Test
public void testAgentShutdownHookAdded() {
Runtime.getRuntime().removeShutdownHook(agent.shutdownThread);
agent.setupShutdownHookAndInitExecutors();
verify(logger).trace("Adding shutdown hook");
}
@Test
public void testGetResourceGuidValidGuidAndResourceName() {
when(shell.getGuid()).thenReturn("12345");
String result = agent.getResourceGuid();
assertTrue(result.startsWith("12345-" + ServerResource.class.getSimpleName()));
}
@Test
public void testGetZoneReturnsValidZone() {
when(shell.getZone()).thenReturn("ZoneA");
String result = agent.getZone();
assertEquals("ZoneA", result);
}
@Test
public void testGetPodReturnsValidPod() {
when(shell.getPod()).thenReturn("PodA");
String result = agent.getPod();
assertEquals("PodA", result);
}
@Test
public void testSetLinkAssignsLink() {
Link mockLink = mock(Link.class);
agent.setLink(mockLink);
assertEquals(mockLink, agent.link);
}
@Test
public void testGetResourceReturnsServerResource() {
ServerResource mockResource = mock(ServerResource.class);
ReflectionTestUtils.setField(agent, "serverResource", mockResource);
ServerResource result = agent.getResource();
assertSame(mockResource, result);
}
@Test
public void testGetResourceName() {
String result = agent.getResourceName();
assertTrue(result.startsWith(ServerResource.class.getSimpleName()));
}
@Test
public void testUpdateLastPingResponseTimeUpdatesCurrentTime() {
long beforeUpdate = System.currentTimeMillis();
agent.updateLastPingResponseTime();
long updatedTime = agent.lastPingResponseTime.get();
assertTrue(updatedTime >= beforeUpdate);
assertTrue(updatedTime <= System.currentTimeMillis());
}
@Test
public void testGetNextSequenceIncrementsSequence() {
long initialSequence = agent.getNextSequence();
long nextSequence = agent.getNextSequence();
assertEquals(initialSequence + 1, nextSequence);
long thirdSequence = agent.getNextSequence();
assertEquals(nextSequence + 1, thirdSequence);
}
@Test
public void testRegisterControlListenerAddsListener() {
IAgentControlListener listener = mock(IAgentControlListener.class);
agent.registerControlListener(listener);
assertTrue(agent.controlListeners.contains(listener));
}
@Test
public void testUnregisterControlListenerRemovesListener() {
IAgentControlListener listener = mock(IAgentControlListener.class);
agent.registerControlListener(listener);
assertTrue(agent.controlListeners.contains(listener));
agent.unregisterControlListener(listener);
assertFalse(agent.controlListeners.contains(listener));
}
@Test
public void testCloseAndTerminateLinkLinkIsNullDoesNothing() {
agent.closeAndTerminateLink(null);
}
@Test
public void testCloseAndTerminateLinkValidLinkCallsCloseAndTerminate() {
Link mockLink = mock(Link.class);
agent.closeAndTerminateLink(mockLink);
verify(mockLink).close();
verify(mockLink).terminated();
}
@Test
public void testStopAndCleanupConnectionConnectionIsNullDoesNothing() {
agent.connection = null;
agent.stopAndCleanupConnection(false);
}
@Test
public void testStopAndCleanupConnectionValidConnectionNoWaitStopsAndCleansUp() throws IOException {
NioConnection mockConnection = mock(NioConnection.class);
agent.connection = mockConnection;
agent.stopAndCleanupConnection(false);
verify(mockConnection).stop();
verify(mockConnection).cleanUp();
}
@Test
public void testStopAndCleanupConnectionCleanupThrowsIOExceptionLogsWarning() throws IOException {
NioConnection mockConnection = mock(NioConnection.class);
agent.connection = mockConnection;
doThrow(new IOException("Cleanup failed")).when(mockConnection).cleanUp();
agent.stopAndCleanupConnection(false);
verify(mockConnection).stop();
verify(logger).warn(eq("Fail to clean up old connection. {}"), any(IOException.class));
}
@Test
public void testStopAndCleanupConnectionValidConnectionWaitForStopWaitsForStartupToStop() throws IOException {
NioConnection mockConnection = mock(NioConnection.class);
ConstantTimeBackoff mockBackoff = mock(ConstantTimeBackoff.class);
mockBackoff.setTimeToWait(0);
agent.connection = mockConnection;
when(shell.getBackoffAlgorithm()).thenReturn(mockBackoff);
when(mockConnection.isStartup()).thenReturn(true, true, false);
agent.stopAndCleanupConnection(true);
verify(mockConnection).stop();
verify(mockConnection).cleanUp();
verify(mockBackoff, times(3)).waitBeforeRetry();
}
}


@@ -30,6 +30,11 @@ public interface RoleService {
     ConfigKey<Boolean> EnableDynamicApiChecker = new ConfigKey<>("Advanced", Boolean.class, "dynamic.apichecker.enabled", "false",
             "If set to true, this enables the dynamic role-based api access checker and disables the default static role-based api access checker.", true);

+    ConfigKey<Integer> DynamicApiCheckerCachePeriod = new ConfigKey<>("Advanced", Integer.class,
+            "dynamic.apichecker.cache.period", "0",
+            "Defines the expiration time in seconds for the Dynamic API Checker cache, determining how long cached data is retained before being refreshed. If set to zero then caching will be disabled",
+            false);
+
     boolean isEnabled();

     /**


@@ -100,7 +100,7 @@
             dv = EnumSet.of(DomainDetails.all);
         } else {
             try {
-                ArrayList<DomainDetails> dc = new ArrayList<DomainDetails>();
+                ArrayList<DomainDetails> dc = new ArrayList<>();
                 for (String detail : viewDetails) {
                     dc.add(DomainDetails.valueOf(detail));
                 }

@@ -142,7 +142,10 @@
         if (CollectionUtils.isEmpty(response)) {
             return;
         }
+        EnumSet<DomainDetails> details = getDetails();
+        if (details.contains(DomainDetails.all) || details.contains(DomainDetails.resource)) {
             _resourceLimitService.updateTaggedResourceLimitsAndCountsForDomains(response, getTag());
+        }
         if (!getShowIcon()) {
             return;
         }


@@ -157,7 +157,10 @@
         if (CollectionUtils.isEmpty(response)) {
             return;
         }
+        EnumSet<DomainDetails> details = getDetails();
+        if (details.contains(DomainDetails.all) || details.contains(DomainDetails.resource)) {
             _resourceLimitService.updateTaggedResourceLimitsAndCountsForAccounts(response, getTag());
+        }
         if (!getShowIcon()) {
             return;
         }


@@ -39,7 +39,7 @@
     long getId();
     boolean isOutOfBandManagementEnabled(Host host);
     void submitBackgroundPowerSyncTask(Host host);
-    boolean transitionPowerStateToDisabled(List<? extends Host> hosts);
+    boolean transitionPowerStateToDisabled(List<Long> hostIds);
     OutOfBandManagementResponse enableOutOfBandManagement(DataCenter zone);
     OutOfBandManagementResponse enableOutOfBandManagement(Cluster cluster);


@@ -18,6 +18,7 @@ package org.apache.cloudstack.api.command.admin.domain;
 import java.util.List;

+import org.apache.cloudstack.api.ApiConstants;
 import org.apache.cloudstack.api.response.DomainResponse;
 import org.junit.Assert;
 import org.junit.Test;

@@ -71,7 +72,17 @@
         cmd._resourceLimitService = resourceLimitService;
         ReflectionTestUtils.setField(cmd, "tag", "abc");
         cmd.updateDomainResponse(List.of(Mockito.mock(DomainResponse.class)));
-        Mockito.verify(resourceLimitService, Mockito.times(1)).updateTaggedResourceLimitsAndCountsForDomains(Mockito.any(), Mockito.any());
+        Mockito.verify(resourceLimitService).updateTaggedResourceLimitsAndCountsForDomains(Mockito.any(), Mockito.any());
     }
+
+    @Test
+    public void testUpdateDomainResponseWithDomainsMinDetails() {
+        ListDomainsCmd cmd = new ListDomainsCmd();
+        ReflectionTestUtils.setField(cmd, "viewDetails", List.of(ApiConstants.DomainDetails.min.toString()));
+        cmd._resourceLimitService = resourceLimitService;
+        ReflectionTestUtils.setField(cmd, "tag", "abc");
+        cmd.updateDomainResponse(List.of(Mockito.mock(DomainResponse.class)));
+        Mockito.verify(resourceLimitService, Mockito.never()).updateTaggedResourceLimitsAndCountsForDomains(Mockito.any(), Mockito.any());
+    }
 }


@@ -18,6 +18,7 @@ package org.apache.cloudstack.api.command.user.account;
 import java.util.List;

+import org.apache.cloudstack.api.ApiConstants;
 import org.apache.cloudstack.api.response.AccountResponse;
 import org.junit.Assert;
 import org.junit.Test;

@@ -58,7 +59,7 @@
     }

     @Test
-    public void testUpdateDomainResponseNoDomains() {
+    public void testUpdateAccountResponseNoAccounts() {
         ListAccountsCmd cmd = new ListAccountsCmd();
         cmd._resourceLimitService = resourceLimitService;
         cmd.updateAccountResponse(null);

@@ -66,11 +67,21 @@
     }

     @Test
-    public void testUpdateDomainResponseWithDomains() {
+    public void testUpdateDomainResponseWithAccounts() {
         ListAccountsCmd cmd = new ListAccountsCmd();
         cmd._resourceLimitService = resourceLimitService;
         ReflectionTestUtils.setField(cmd, "tag", "abc");
         cmd.updateAccountResponse(List.of(Mockito.mock(AccountResponse.class)));
         Mockito.verify(resourceLimitService, Mockito.times(1)).updateTaggedResourceLimitsAndCountsForAccounts(Mockito.any(), Mockito.any());
     }
+
+    @Test
+    public void testUpdateDomainResponseWithAccountsMinDetails() {
+        ListAccountsCmd cmd = new ListAccountsCmd();
+        ReflectionTestUtils.setField(cmd, "viewDetails", List.of(ApiConstants.DomainDetails.min.toString()));
+        cmd._resourceLimitService = resourceLimitService;
+        ReflectionTestUtils.setField(cmd, "tag", "abc");
+        cmd.updateAccountResponse(List.of(Mockito.mock(AccountResponse.class)));
+        Mockito.verify(resourceLimitService, Mockito.never()).updateTaggedResourceLimitsAndCountsForAccounts(Mockito.any(), Mockito.any());
+    }
 }


@@ -78,4 +78,12 @@ public interface ServerResource extends Manager {
     void setAgentControl(IAgentControl agentControl);
+
+    default boolean isExitOnFailures() {
+        return true;
+    }
+
+    default boolean isAppendAgentNameToLogs() {
+        return false;
+    }
 }


@@ -22,7 +22,6 @@ import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Map;
-import com.cloud.exception.ResourceAllocationException;
 import org.apache.cloudstack.context.CallContext;
 import org.apache.cloudstack.framework.config.ConfigKey;

@@ -38,6 +37,7 @@ import com.cloud.exception.ConcurrentOperationException;
 import com.cloud.exception.InsufficientCapacityException;
 import com.cloud.exception.InsufficientServerCapacityException;
 import com.cloud.exception.OperationTimedoutException;
+import com.cloud.exception.ResourceAllocationException;
 import com.cloud.exception.ResourceUnavailableException;
 import com.cloud.host.Host;
 import com.cloud.hypervisor.Hypervisor.HypervisorType;

@@ -101,6 +101,10 @@
             "refer documentation",
             true, ConfigKey.Scope.Zone);

+    ConfigKey<Boolean> VmSyncPowerStateTransitioning = new ConfigKey<>("Advanced", Boolean.class, "vm.sync.power.state.transitioning", "true",
+            "Whether to sync power states of the transitioning and stalled VMs while processing VM power reports.", false);
+
     interface Topics {
         String VM_POWER_STATE = "vm.powerstate";
     }

@@ -286,24 +290,22 @@
     /**
      * Obtains statistics for a list of VMs; CPU and network utilization
-     * @param hostId ID of the host
-     * @param hostName name of the host
+     * @param host host
      * @param vmIds list of VM IDs
      * @return map of VM ID and stats entry for the VM
      */
-    HashMap<Long, ? extends VmStats> getVirtualMachineStatistics(long hostId, String hostName, List<Long> vmIds);
+    HashMap<Long, ? extends VmStats> getVirtualMachineStatistics(Host host, List<Long> vmIds);

     /**
      * Obtains statistics for a list of VMs; CPU and network utilization
-     * @param hostId ID of the host
-     * @param hostName name of the host
-     * @param vmMap map of VM IDs and the corresponding VirtualMachine object
+     * @param host host
+     * @param vmMap map of VM instanceName and its ID
      * @return map of VM ID and stats entry for the VM
      */
-    HashMap<Long, ? extends VmStats> getVirtualMachineStatistics(long hostId, String hostName, Map<Long, ? extends VirtualMachine> vmMap);
+    HashMap<Long, ? extends VmStats> getVirtualMachineStatistics(Host host, Map<String, Long> vmMap);

-    HashMap<Long, List<? extends VmDiskStats>> getVmDiskStatistics(long hostId, String hostName, Map<Long, ? extends VirtualMachine> vmMap);
+    HashMap<Long, List<? extends VmDiskStats>> getVmDiskStatistics(Host host, Map<String, Long> vmInstanceNameIdMap);

-    HashMap<Long, List<? extends VmNetworkStats>> getVmNetworkStatistics(long hostId, String hostName, Map<Long, ? extends VirtualMachine> vmMap);
+    HashMap<Long, List<? extends VmNetworkStats>> getVmNetworkStatistics(Host host, Map<String, Long> vmInstanceNameIdMap);

     Map<Long, Boolean> getDiskOfferingSuitabilityForVm(long vmId, List<Long> diskOfferingIds);


@@ -16,14 +16,11 @@
 // under the License.
 package com.cloud.capacity;

-import java.util.Map;
 import org.apache.cloudstack.framework.config.ConfigKey;
 import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;

 import com.cloud.host.Host;
 import com.cloud.offering.ServiceOffering;
-import com.cloud.service.ServiceOfferingVO;
 import com.cloud.storage.VMTemplateVO;
 import com.cloud.utils.Pair;
 import com.cloud.vm.VirtualMachine;

@@ -130,6 +127,10 @@
             true,
             ConfigKey.Scope.Zone);

+    ConfigKey<Integer> CapacityCalculateWorkers = new ConfigKey<>(ConfigKey.CATEGORY_ADVANCED, Integer.class,
+            "capacity.calculate.workers", "1",
+            "Number of worker threads to be used for capacities calculation", true);
+
     public boolean releaseVmCapacity(VirtualMachine vm, boolean moveFromReserved, boolean moveToReservered, Long hostId);

     void allocateVmCapacity(VirtualMachine vm, boolean fromLastHost);

@@ -145,8 +146,6 @@
     void updateCapacityForHost(Host host);

-    void updateCapacityForHost(Host host, Map<Long, ServiceOfferingVO> offeringsMap);
-
     /**
      * @param pool storage pool
      * @param templateForVmCreation template that will be used for vm creation

@@ -163,12 +162,12 @@
     /**
      * Check if specified host has capability to support cpu cores and speed freq
-     * @param hostId the host to be checked
+     * @param host the host to be checked
      * @param cpuNum cpu number to check
      * @param cpuSpeed cpu Speed to check
      * @return true if the count of host's running VMs >= hypervisor limit
      */
-    boolean checkIfHostHasCpuCapability(long hostId, Integer cpuNum, Integer cpuSpeed);
+    boolean checkIfHostHasCpuCapability(Host host, Integer cpuNum, Integer cpuSpeed);

     /**
      * Check if cluster will cross threshold if the cpu/memory requested are accommodated


@@ -138,13 +138,13 @@
     public List<HostVO> listAllHostsInOneZoneNotInClusterByHypervisors(List<HypervisorType> types, long dcId, long clusterId);

-    public List<HypervisorType> listAvailHypervisorInZone(Long hostId, Long zoneId);
+    public List<HypervisorType> listAvailHypervisorInZone(Long zoneId);

     public HostVO findHostByGuid(String guid);

     public HostVO findHostByName(String name);

-    HostStats getHostStatistics(long hostId);
+    HostStats getHostStatistics(Host host);

     Long getGuestOSCategoryId(long hostId);


@@ -22,6 +22,7 @@ import java.util.Map;
 import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
 import org.apache.cloudstack.engine.subsystem.api.storage.HypervisorHostListener;
+import org.apache.cloudstack.engine.subsystem.api.storage.Scope;
 import org.apache.cloudstack.framework.config.ConfigKey;
 import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;

@@ -42,6 +43,7 @@ import com.cloud.offering.DiskOffering;
 import com.cloud.offering.ServiceOffering;
 import com.cloud.storage.Storage.ImageFormat;
 import com.cloud.utils.Pair;
+import com.cloud.utils.exception.CloudRuntimeException;
 import com.cloud.vm.DiskProfile;
 import com.cloud.vm.VMInstanceVO;

@@ -214,6 +216,10 @@
             "when resize a volume upto resize capacity disable threshold (pool.storage.allocated.resize.capacity.disablethreshold)",
             true, ConfigKey.Scope.Zone);

+    ConfigKey<Integer> StoragePoolHostConnectWorkers = new ConfigKey<>("Storage", Integer.class,
+            "storage.pool.host.connect.workers", "1",
+            "Number of worker threads to be used to connect hosts to a primary storage", true);
+
     /**
      * should we execute in sequence not involving any storages?
      * @return tru if commands should execute in sequence

@@ -365,6 +371,9 @@
     String getStoragePoolMountFailureReason(String error);

+    void connectHostsToPool(DataStore primaryStore, List<Long> hostIds, Scope scope,
+            boolean handleStorageConflictException, boolean errorOnNoUpHost) throws CloudRuntimeException;
+
     boolean connectHostToSharedPool(Host host, long poolId) throws StorageUnavailableException, StorageConflictException;

     void disconnectHostFromSharedPool(Host host, StoragePool pool) throws StorageUnavailableException, StorageConflictException;


@@ -18,6 +18,7 @@ package com.cloud.agent.manager;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
+import java.net.SocketAddress;
 import java.nio.channels.ClosedChannelException;
 import java.util.ArrayList;
 import java.util.Arrays;

@@ -25,23 +26,20 @@ import java.util.Date;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
 import java.util.concurrent.LinkedBlockingQueue;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.ScheduledThreadPoolExecutor;
 import java.util.concurrent.ThreadPoolExecutor;
 import java.util.concurrent.TimeUnit;
-import java.util.concurrent.locks.Lock;
-import java.util.concurrent.locks.ReentrantLock;
+import java.util.stream.Collectors;

 import javax.inject.Inject;
 import javax.naming.ConfigurationException;

-import com.cloud.configuration.Config;
-import com.cloud.org.Cluster;
-import com.cloud.utils.NumbersUtil;
-import com.cloud.utils.db.GlobalLock;
 import org.apache.cloudstack.agent.lb.IndirectAgentLB;
 import org.apache.cloudstack.ca.CAManager;
 import org.apache.cloudstack.engine.orchestration.service.NetworkOrchestrationService;

@@ -56,6 +54,8 @@ import org.apache.cloudstack.utils.identity.ManagementServerNode;
 import org.apache.commons.collections.MapUtils;
 import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
 import org.apache.commons.lang3.BooleanUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.logging.log4j.ThreadContext;

 import com.cloud.agent.AgentManager;
 import com.cloud.agent.Listener;

@@ -82,6 +82,7 @@ import com.cloud.agent.api.UnsupportedAnswer;
 import com.cloud.agent.transport.Request;
 import com.cloud.agent.transport.Response;
 import com.cloud.alert.AlertManager;
+import com.cloud.configuration.Config;
 import com.cloud.configuration.ManagementServiceConfiguration;
 import com.cloud.dc.ClusterVO;
 import com.cloud.dc.DataCenterVO;

@@ -101,15 +102,18 @@ import com.cloud.host.Status.Event;
 import com.cloud.host.dao.HostDao;
 import com.cloud.hypervisor.Hypervisor.HypervisorType;
 import com.cloud.hypervisor.HypervisorGuruManager;
+import com.cloud.org.Cluster;
 import com.cloud.resource.Discoverer;
 import com.cloud.resource.ResourceManager;
 import com.cloud.resource.ResourceState;
 import com.cloud.resource.ServerResource;
+import com.cloud.utils.NumbersUtil;
 import com.cloud.utils.Pair;
 import com.cloud.utils.component.ManagerBase;
 import com.cloud.utils.concurrency.NamedThreadFactory;
 import com.cloud.utils.db.DB;
 import com.cloud.utils.db.EntityManager;
+import com.cloud.utils.db.GlobalLock;
 import com.cloud.utils.db.QueryBuilder;
 import com.cloud.utils.db.SearchCriteria.Op;
 import com.cloud.utils.db.TransactionLegacy;

@@ -124,8 +128,6 @@ import com.cloud.utils.nio.Link;
 import com.cloud.utils.nio.NioServer;
 import com.cloud.utils.nio.Task;
 import com.cloud.utils.time.InaccurateClock;
-import org.apache.commons.lang3.StringUtils;
-import org.apache.logging.log4j.ThreadContext;

 /**
  * Implementation of the Agent Manager. This class controls the connection to the agents.

@@ -143,7 +145,6 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
     protected List<Long> _loadingAgents = new ArrayList<Long>();
     protected Map<String, Integer> _commandTimeouts = new HashMap<>();
     private int _monitorId = 0;
-    private final Lock _agentStatusLock = new ReentrantLock();

     @Inject
     protected CAManager caService;

@@ -189,6 +190,9 @@
     protected StateMachine2<Status, Status.Event, Host> _statusStateMachine = Status.getStateMachine();
     private final ConcurrentHashMap<Long, Long> _pingMap = new ConcurrentHashMap<Long, Long>(10007);

+    private int maxConcurrentNewAgentConnections;
+    private final ConcurrentHashMap<String, Long> newAgentConnections = new ConcurrentHashMap<>();
+    protected ScheduledExecutorService newAgentConnectionsMonitor;

     @Inject
     ResourceManager _resourceMgr;

@@ -198,6 +202,14 @@
     protected final ConfigKey<Integer> Workers = new ConfigKey<Integer>("Advanced", Integer.class, "workers", "5",
             "Number of worker threads handling remote agent connections.", false);
     protected final ConfigKey<Integer> Port = new ConfigKey<Integer>("Advanced", Integer.class, "port", "8250", "Port to listen on for remote agent connections.", false);
+    protected final ConfigKey<Integer> RemoteAgentSslHandshakeTimeout = new ConfigKey<>("Advanced",
+            Integer.class, "agent.ssl.handshake.timeout", "30",
+            "Seconds after which SSL handshake times out during remote agent connections.", false);
+    protected final ConfigKey<Integer> RemoteAgentMaxConcurrentNewConnections = new ConfigKey<>("Advanced",
+            Integer.class, "agent.max.concurrent.new.connections", "0",
+            "Number of maximum concurrent new connections server allows for remote agents. " +
+            "If set to zero (default value) then no limit will be enforced on concurrent new connections",
+            false);
     protected final ConfigKey<Integer> AlertWait = new ConfigKey<Integer>("Advanced", Integer.class, "alert.wait", "1800",
             "Seconds to wait before alerting on a disconnected agent", true);
     protected final ConfigKey<Integer> DirectAgentLoadSize = new ConfigKey<Integer>("Advanced", Integer.class, "direct.agent.load.size", "16",

@@ -214,8 +226,6 @@
         logger.info("Ping Timeout is {}.", mgmtServiceConf.getPingTimeout());

-        final int threads = DirectAgentLoadSize.value();
-
         _nodeId = ManagementServerNode.getManagementServerId();
         logger.info("Configuring AgentManagerImpl. management server node id(msid): {}.", _nodeId);

@@ -226,24 +236,31 @@
         registerForHostEvents(new SetHostParamsListener(), true, true, false);

-        _executor = new ThreadPoolExecutor(threads, threads, 60l, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), new NamedThreadFactory("AgentTaskPool"));
+        final int agentTaskThreads = DirectAgentLoadSize.value();
+        _executor = new ThreadPoolExecutor(agentTaskThreads, agentTaskThreads, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), new NamedThreadFactory("AgentTaskPool"));

         _connectExecutor = new ThreadPoolExecutor(100, 500, 60l, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), new NamedThreadFactory("AgentConnectTaskPool"));
         // allow core threads to time out even when there are no items in the queue
         _connectExecutor.allowCoreThreadTimeOut(true);

-        _connection = new NioServer("AgentManager", Port.value(), Workers.value() + 10, this, caService);
+        maxConcurrentNewAgentConnections = RemoteAgentMaxConcurrentNewConnections.value();
+
+        _connection = new NioServer("AgentManager", Port.value(), Workers.value() + 10,
+                this, caService, RemoteAgentSslHandshakeTimeout.value());
         logger.info("Listening on {} with {} workers.", Port.value(), Workers.value());

+        final int directAgentPoolSize = DirectAgentPoolSize.value();
         // executes all agent commands other than cron and ping
-        _directAgentExecutor = new ScheduledThreadPoolExecutor(DirectAgentPoolSize.value(), new NamedThreadFactory("DirectAgent"));
+        _directAgentExecutor = new ScheduledThreadPoolExecutor(directAgentPoolSize, new NamedThreadFactory("DirectAgent"));
         // executes cron and ping agent commands
-        _cronJobExecutor = new ScheduledThreadPoolExecutor(DirectAgentPoolSize.value(), new NamedThreadFactory("DirectAgentCronJob"));
+        _cronJobExecutor = new ScheduledThreadPoolExecutor(directAgentPoolSize, new NamedThreadFactory("DirectAgentCronJob"));
-        logger.debug("Created DirectAgentAttache pool with size: {}.", DirectAgentPoolSize.value());
+        logger.debug("Created DirectAgentAttache pool with size: {}.", directAgentPoolSize);
-        _directAgentThreadCap = Math.round(DirectAgentPoolSize.value() * DirectAgentThreadCap.value()) + 1; // add 1 to always make the value > 0
+        _directAgentThreadCap = Math.round(directAgentPoolSize * DirectAgentThreadCap.value()) + 1; // add 1 to always make the value > 0

         _monitorExecutor = new ScheduledThreadPoolExecutor(1, new NamedThreadFactory("AgentMonitor"));
+        newAgentConnectionsMonitor = Executors.newScheduledThreadPool(1, new NamedThreadFactory("NewAgentConnectionsMonitor"));

         initializeCommandTimeouts();
         return true;

@@ -254,6 +271,28 @@
         return new AgentHandler(type, link, data);
     }

+    @Override
+    public int getMaxConcurrentNewConnectionsCount() {
+        return maxConcurrentNewAgentConnections;
+    }
+
+    @Override
+    public int getNewConnectionsCount() {
+        return newAgentConnections.size();
+    }
+
+    @Override
+    public void registerNewConnection(SocketAddress address) {
+        logger.trace(String.format("Adding new agent connection from %s", address.toString()));
+        newAgentConnections.putIfAbsent(address.toString(), System.currentTimeMillis());
+    }
+
+    @Override
+    public void unregisterNewConnection(SocketAddress address) {
+        logger.trace(String.format("Removing new agent connection for %s", address.toString()));
+        newAgentConnections.remove(address.toString());
+    }
+
     @Override
     public int registerForHostEvents(final Listener listener, final boolean connections, final boolean commands, final boolean priority) {
         synchronized (_hostMonitors) {

@@ -687,6 +726,10 @@
         _monitorExecutor.scheduleWithFixedDelay(new MonitorTask(), mgmtServiceConf.getPingInterval(), mgmtServiceConf.getPingInterval(), TimeUnit.SECONDS);

+        final int cleanupTime = Wait.value();
+        newAgentConnectionsMonitor.scheduleAtFixedRate(new AgentNewConnectionsMonitorTask(), cleanupTime,
+                cleanupTime, TimeUnit.MINUTES);
+
         return true;
     }

@@ -844,6 +887,7 @@
         _connectExecutor.shutdownNow();
         _monitorExecutor.shutdownNow();
+        newAgentConnectionsMonitor.shutdownNow();
         return true;
     }

@@ -1312,6 +1356,7 @@
             if (attache == null) {
                 logger.warn("Unable to create attache for agent: {}", _request);
             }
+            unregisterNewConnection(_link.getSocketAddress());
         }
     }

@@ -1594,8 +1639,6 @@
     @Override
     public boolean agentStatusTransitTo(final HostVO host, final Status.Event e, final long msId) {
-        try {
-            _agentStatusLock.lock();
         logger.debug("[Resource state = {}, Agent event = , Host = {}]",
                 host.getResourceState(), e.toString(), host);

@@ -1607,9 +1650,6 @@
                 throw new CloudRuntimeException(String.format(
                         "Cannot transit agent status with event %s for host %s, management server id is %d, %s", e, host, msId, e1.getMessage()));
             }
-        } finally {
-            _agentStatusLock.unlock();
-        }
     }

     public boolean disconnectAgent(final HostVO host, final Status.Event e, final long msId) {

@@ -1813,6 +1853,35 @@
         }
     }

+    protected class AgentNewConnectionsMonitorTask extends ManagedContextRunnable {
+        @Override
+        protected void runInContext() {
+            logger.trace("Agent New Connections Monitor is started.");
+            final int cleanupTime = Wait.value();
+            Set<Map.Entry<String, Long>> entrySet = newAgentConnections.entrySet();
+            long cutOff = System.currentTimeMillis() - (cleanupTime * 60 * 1000L);
+            if (logger.isDebugEnabled()) {
+                List<String> expiredConnections = newAgentConnections.entrySet()
+                        .stream()
+                        .filter(e -> e.getValue() <= cutOff)
+                        .map(Map.Entry::getKey)
+                        .collect(Collectors.toList());
+                logger.debug(String.format("Currently %d active new connections, of which %d have expired - %s",
+                        entrySet.size(),
+                        expiredConnections.size(),
+                        StringUtils.join(expiredConnections)));
+            }
+            for (Map.Entry<String, Long> entry : entrySet) {
+                if (entry.getValue() <= cutOff) {
+                    if (logger.isTraceEnabled()) {
+                        logger.trace(String.format("Cleaning up new agent connection for %s", entry.getKey()));
+                    }
+                    newAgentConnections.remove(entry.getKey());
+                }
+            }
+        }
+    }
+
     protected class BehindOnPingListener implements Listener {
         @Override
         public boolean isRecurring() {

@@ -1888,7 +1957,8 @@
     @Override
     public ConfigKey<?>[] getConfigKeys() {
         return new ConfigKey<?>[] { CheckTxnBeforeSending, Workers, Port, Wait, AlertWait, DirectAgentLoadSize,
-                DirectAgentPoolSize, DirectAgentThreadCap, EnableKVMAutoEnableDisable, ReadyCommandWait, GranularWaitTimeForCommands };
+                DirectAgentPoolSize, DirectAgentThreadCap, EnableKVMAutoEnableDisable, ReadyCommandWait,
+                GranularWaitTimeForCommands, RemoteAgentSslHandshakeTimeout, RemoteAgentMaxConcurrentNewConnections };
     }

     protected class SetHostParamsListener implements Listener {


@@ -49,11 +49,12 @@ import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
 import org.apache.cloudstack.ha.dao.HAConfigDao;
 import org.apache.cloudstack.managed.context.ManagedContextRunnable;
 import org.apache.cloudstack.managed.context.ManagedContextTimerTask;
+import org.apache.cloudstack.management.ManagementServerHost;
 import org.apache.cloudstack.outofbandmanagement.dao.OutOfBandManagementDao;
 import org.apache.cloudstack.shutdown.ShutdownManager;
+import org.apache.cloudstack.shutdown.command.BaseShutdownManagementServerHostCommand;
 import org.apache.cloudstack.shutdown.command.CancelShutdownManagementServerHostCommand;
 import org.apache.cloudstack.shutdown.command.PrepareForShutdownManagementServerHostCommand;
-import org.apache.cloudstack.shutdown.command.BaseShutdownManagementServerHostCommand;
 import org.apache.cloudstack.shutdown.command.TriggerShutdownManagementServerHostCommand;
 import org.apache.cloudstack.utils.identity.ManagementServerNode;
 import org.apache.cloudstack.utils.security.SSLUtils;

@@ -73,7 +74,6 @@ import com.cloud.cluster.ClusterManager;
 import com.cloud.cluster.ClusterManagerListener;
 import com.cloud.cluster.ClusterServicePdu;
 import com.cloud.cluster.ClusteredAgentRebalanceService;
-import org.apache.cloudstack.management.ManagementServerHost;
 import com.cloud.cluster.ManagementServerHostVO;
 import com.cloud.cluster.agentlb.AgentLoadBalancerPlanner;
 import com.cloud.cluster.agentlb.HostTransferMapVO;

@@ -215,12 +215,10 @@
                         continue;
                     }
                 }
-                logger.debug("Loading directly connected {}", host);
+                logger.debug("Loading directly connected host {}", host);
                 loadDirectlyConnectedHost(host, false);
             } catch (final Throwable e) {
-                logger.warn(" can not load directly connected host {}({}) due to ",
-                        host, e);
+                logger.warn(" can not load directly connected {} due to ", host, e);
             }
         }
     }

@@ -250,8 +248,8 @@
         final AgentAttache attache = new ClusteredAgentAttache(this, id, host.getUuid(), host.getName());
         AgentAttache old = null;
         synchronized (_agents) {
-            old = _agents.get(id);
-            _agents.put(id, attache);
+            old = _agents.get(host.getId());
+            _agents.put(host.getId(), attache);
         }
         if (old != null) {
             logger.debug("Remove stale agent attache from current management server");

@@ -550,13 +548,13 @@
         AgentAttache agent = findAttache(hostId);
         if (agent == null || !agent.forForward()) {
             if (isHostOwnerSwitched(host)) {
-                logger.debug("Host {} has switched to another management server, need to update agent map with a forwarding agent attache", host);
+                logger.debug("{} has switched to another management server, need to update agent map with a forwarding agent attache", host);
agent = createAttache(host); agent = createAttache(host);
} }
} }
if (agent == null) { if (agent == null) {
final AgentUnavailableException ex = new AgentUnavailableException("Host with specified id is not in the right state: " + host.getStatus(), hostId); final AgentUnavailableException ex = new AgentUnavailableException("Host with specified id is not in the right state: " + host.getStatus(), hostId);
ex.addProxyObject(_entityMgr.findById(Host.class, hostId).getUuid()); ex.addProxyObject(host.getUuid());
throw ex; throw ex;
} }
@ -1034,7 +1032,7 @@ public class ClusteredAgentManagerImpl extends AgentManagerImpl implements Clust
} else if (futureOwnerId == _nodeId) { } else if (futureOwnerId == _nodeId) {
final HostVO host = _hostDao.findById(hostId); final HostVO host = _hostDao.findById(hostId);
try { try {
logger.debug("Disconnecting host {} as a part of rebalance process without notification", host); logger.debug("Disconnecting {} as a part of rebalance process without notification", host);
final AgentAttache attache = findAttache(hostId); final AgentAttache attache = findAttache(hostId);
if (attache != null) { if (attache != null) {
@ -1042,7 +1040,7 @@ public class ClusteredAgentManagerImpl extends AgentManagerImpl implements Clust
} }
if (result) { if (result) {
logger.debug("Loading directly connected host {} to the management server {} as a part of rebalance process", host, _nodeId); logger.debug("Loading directly connected {} to the management server {} as a part of rebalance process", host, _nodeId);
result = loadDirectlyConnectedHost(host, true); result = loadDirectlyConnectedHost(host, true);
} else { } else {
logger.warn("Failed to disconnect {} as a part of rebalance process without notification", host); logger.warn("Failed to disconnect {} as a part of rebalance process without notification", host);
@ -1054,9 +1052,9 @@ public class ClusteredAgentManagerImpl extends AgentManagerImpl implements Clust
} }
if (result) { if (result) {
logger.debug("Successfully loaded directly connected host {} to the management server {} a part of rebalance process without notification", host, _nodeId); logger.debug("Successfully loaded directly connected {} to the management server {} a part of rebalance process without notification", host, _nodeId);
} else { } else {
logger.warn("Failed to load directly connected host {} to the management server {} a part of rebalance process without notification", host, _nodeId); logger.warn("Failed to load directly connected {} to the management server {} a part of rebalance process without notification", host, _nodeId);
} }
} }


@ -85,6 +85,7 @@ import org.apache.cloudstack.resource.ResourceCleanupService;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao; import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO; import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.cloudstack.storage.to.VolumeObjectTO; import org.apache.cloudstack.storage.to.VolumeObjectTO;
import org.apache.cloudstack.utils.cache.SingleCache;
import org.apache.cloudstack.utils.identity.ManagementServerNode; import org.apache.cloudstack.utils.identity.ManagementServerNode;
import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils; import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
import org.apache.cloudstack.vm.UnmanagedVMsManager; import org.apache.cloudstack.vm.UnmanagedVMsManager;
@ -406,6 +407,10 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
private DomainDao domainDao; private DomainDao domainDao;
@Inject @Inject
ResourceCleanupService resourceCleanupService; ResourceCleanupService resourceCleanupService;
@Inject
VmWorkJobDao vmWorkJobDao;
private SingleCache<List<Long>> vmIdsInProgressCache;
VmWorkJobHandlerProxy _jobHandlerProxy = new VmWorkJobHandlerProxy(this); VmWorkJobHandlerProxy _jobHandlerProxy = new VmWorkJobHandlerProxy(this);
@ -450,6 +455,8 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
Long.class, "systemvm.root.disk.size", "-1", Long.class, "systemvm.root.disk.size", "-1",
"Size of root volume (in GB) of system VMs and virtual routers", true); "Size of root volume (in GB) of system VMs and virtual routers", true);
private boolean syncTransitioningVmPowerState;
ScheduledExecutorService _executor = null; ScheduledExecutorService _executor = null;
private long _nodeId; private long _nodeId;
@ -816,6 +823,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
@Override @Override
public boolean start() { public boolean start() {
vmIdsInProgressCache = new SingleCache<>(10, vmWorkJobDao::listVmIdsWithPendingJob);
_executor.scheduleAtFixedRate(new CleanupTask(), 5, VmJobStateReportInterval.value(), TimeUnit.SECONDS); _executor.scheduleAtFixedRate(new CleanupTask(), 5, VmJobStateReportInterval.value(), TimeUnit.SECONDS);
_executor.scheduleAtFixedRate(new TransitionTask(), VmOpCleanupInterval.value(), VmOpCleanupInterval.value(), TimeUnit.SECONDS); _executor.scheduleAtFixedRate(new TransitionTask(), VmOpCleanupInterval.value(), VmOpCleanupInterval.value(), TimeUnit.SECONDS);
cancelWorkItems(_nodeId); cancelWorkItems(_nodeId);
@ -843,6 +851,8 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
_messageBus.subscribe(VirtualMachineManager.Topics.VM_POWER_STATE, MessageDispatcher.getDispatcher(this)); _messageBus.subscribe(VirtualMachineManager.Topics.VM_POWER_STATE, MessageDispatcher.getDispatcher(this));
syncTransitioningVmPowerState = Boolean.TRUE.equals(VmSyncPowerStateTransitioning.value());
return true; return true;
} }
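Note that `start()` now builds `vmIdsInProgressCache` from `vmWorkJobDao::listVmIdsWithPendingJob`, so the stalled-VM scan triggered by PingRoutingCommands no longer queries `vm_work_job`/`async_job` on every ping. The `SingleCache` class itself is not part of this excerpt; a minimal sketch of a single-value cache with the same shape (an expiry interval, presumably in seconds, plus a supplier that reloads the value), built on the Caffeine library this PR introduces, could look like the following — the real `org.apache.cloudstack.utils.cache.SingleCache` may differ:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

// Illustrative single-value cache: one Caffeine entry, reloaded lazily by the supplier
// once the expiry interval has passed.
public class SingleValueCache<T> {
    private static final Object KEY = new Object();
    private final LoadingCache<Object, T> cache;

    public SingleValueCache(long expireSeconds, Supplier<T> loader) {
        this.cache = Caffeine.newBuilder()
                .expireAfterWrite(expireSeconds, TimeUnit.SECONDS)
                .build(k -> loader.get());
    }

    public T get() {
        return cache.get(KEY);
    }

    public void invalidate() {
        cache.invalidateAll();
    }
}
```

With this shape, repeated calls to `vmIdsInProgressCache.get()` return the same list for the configured interval, trading a small staleness window for far fewer database round trips on busy ping paths.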
@ -3506,7 +3516,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
if (MIGRATE_VM_ACROSS_CLUSTERS.valueIn(host.getDataCenterId()) && if (MIGRATE_VM_ACROSS_CLUSTERS.valueIn(host.getDataCenterId()) &&
(HypervisorType.VMware.equals(host.getHypervisorType()) || !checkIfVmHasClusterWideVolumes(vm.getId()))) { (HypervisorType.VMware.equals(host.getHypervisorType()) || !checkIfVmHasClusterWideVolumes(vm.getId()))) {
logger.info("Searching for hosts in the zone for vm migration"); logger.info("Searching for hosts in the zone for vm migration");
List<Long> clustersToExclude = _clusterDao.listAllClusters(host.getDataCenterId()); List<Long> clustersToExclude = _clusterDao.listAllClusterIds(host.getDataCenterId());
List<ClusterVO> clusterList = _clusterDao.listByDcHyType(host.getDataCenterId(), host.getHypervisorType().toString()); List<ClusterVO> clusterList = _clusterDao.listByDcHyType(host.getDataCenterId(), host.getHypervisorType().toString());
for (ClusterVO cluster : clusterList) { for (ClusterVO cluster : clusterList) {
clustersToExclude.remove(cluster.getId()); clustersToExclude.remove(cluster.getId());
@ -3800,7 +3810,6 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
if (ping.getHostVmStateReport() != null) { if (ping.getHostVmStateReport() != null) {
_syncMgr.processHostVmStatePingReport(agentId, ping.getHostVmStateReport(), ping.getOutOfBand()); _syncMgr.processHostVmStatePingReport(agentId, ping.getHostVmStateReport(), ping.getOutOfBand());
} }
scanStalledVMInTransitionStateOnUpHost(agentId); scanStalledVMInTransitionStateOnUpHost(agentId);
processed = true; processed = true;
} }
@ -4757,7 +4766,8 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
VmOpLockStateRetry, VmOpWaitInterval, ExecuteInSequence, VmJobCheckInterval, VmJobTimeout, VmJobStateReportInterval, VmOpLockStateRetry, VmOpWaitInterval, ExecuteInSequence, VmJobCheckInterval, VmJobTimeout, VmJobStateReportInterval,
VmConfigDriveLabel, VmConfigDriveOnPrimaryPool, VmConfigDriveForceHostCacheUse, VmConfigDriveUseHostCacheOnUnsupportedPool, VmConfigDriveLabel, VmConfigDriveOnPrimaryPool, VmConfigDriveForceHostCacheUse, VmConfigDriveUseHostCacheOnUnsupportedPool,
HaVmRestartHostUp, ResourceCountRunningVMsonly, AllowExposeHypervisorHostname, AllowExposeHypervisorHostnameAccountLevel, SystemVmRootDiskSize, HaVmRestartHostUp, ResourceCountRunningVMsonly, AllowExposeHypervisorHostname, AllowExposeHypervisorHostnameAccountLevel, SystemVmRootDiskSize,
AllowExposeDomainInMetadata, MetadataCustomCloudName, VmMetadataManufacturer, VmMetadataProductName AllowExposeDomainInMetadata, MetadataCustomCloudName, VmMetadataManufacturer, VmMetadataProductName,
VmSyncPowerStateTransitioning
}; };
} }
@ -4955,20 +4965,46 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
} }
} }
/**
* Scans stalled VMs in transition states on an UP host and processes them accordingly.
*
* <p>This method is executed only when the {@code syncTransitioningVmPowerState} flag is enabled. It identifies
* VMs stuck in specific states (e.g., Starting, Stopping, Migrating) on a host that is UP, except for those
* in the Expunging state, which require special handling.</p>
*
* <p>The following conditions are checked during the scan:
* <ul>
* <li>No pending {@code VmWork} job exists for the VM.</li>
* <li>The VM is associated with the given {@code hostId}, and the host is UP.</li>
* </ul>
* </p>
*
* <p>When a host is UP, a state report for the VMs will typically be received. However, certain scenarios
* (e.g., out-of-band changes or behavior specific to hypervisors like XenServer or KVM) might result in
* missing reports, preventing the state-sync logic from running. To address this, the method scans VMs
* based on their last update timestamp. If a VM remains stalled without a status update while its host is UP,
* it is assumed to be powered off, which is generally a safe assumption.</p>
*
* @param hostId the ID of the host to scan for stalled VMs in transition states.
*/
private void scanStalledVMInTransitionStateOnUpHost(final long hostId) { private void scanStalledVMInTransitionStateOnUpHost(final long hostId) {
final long stallThresholdInMs = VmJobStateReportInterval.value() + (VmJobStateReportInterval.value() >> 1); if (!syncTransitioningVmPowerState) {
final Date cutTime = new Date(DateUtil.currentGMTTime().getTime() - stallThresholdInMs); return;
final List<Long> mostlikelyStoppedVMs = listStalledVMInTransitionStateOnUpHost(hostId, cutTime); }
for (final Long vmId : mostlikelyStoppedVMs) { if (!_hostDao.isHostUp(hostId)) {
final VMInstanceVO vm = _vmDao.findById(vmId); return;
assert vm != null; }
final long stallThresholdInMs = VmJobStateReportInterval.value() * 2;
final long cutTime = new Date(DateUtil.currentGMTTime().getTime() - stallThresholdInMs).getTime();
final List<VMInstanceVO> hostTransitionVms = _vmDao.listByHostAndState(hostId, State.Starting, State.Stopping, State.Migrating);
final List<VMInstanceVO> mostLikelyStoppedVMs = listStalledVMInTransitionStateOnUpHost(hostTransitionVms, cutTime);
for (final VMInstanceVO vm : mostLikelyStoppedVMs) {
handlePowerOffReportWithNoPendingJobsOnVM(vm); handlePowerOffReportWithNoPendingJobsOnVM(vm);
} }
final List<Long> vmsWithRecentReport = listVMInTransitionStateWithRecentReportOnUpHost(hostId, cutTime); final List<VMInstanceVO> vmsWithRecentReport = listVMInTransitionStateWithRecentReportOnUpHost(hostTransitionVms, cutTime);
for (final Long vmId : vmsWithRecentReport) { for (final VMInstanceVO vm : vmsWithRecentReport) {
final VMInstanceVO vm = _vmDao.findById(vmId);
assert vm != null;
if (vm.getPowerState() == PowerState.PowerOn) { if (vm.getPowerState() == PowerState.PowerOn) {
handlePowerOnReportWithNoPendingJobsOnVM(vm); handlePowerOnReportWithNoPendingJobsOnVM(vm);
} else { } else {
@ -4977,6 +5013,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
} }
} }
private void scanStalledVMInTransitionStateOnDisconnectedHosts() { private void scanStalledVMInTransitionStateOnDisconnectedHosts() {
final Date cutTime = new Date(DateUtil.currentGMTTime().getTime() - VmOpWaitInterval.value() * 1000); final Date cutTime = new Date(DateUtil.currentGMTTime().getTime() - VmOpWaitInterval.value() * 1000);
final List<Long> stuckAndUncontrollableVMs = listStalledVMInTransitionStateOnDisconnectedHosts(cutTime); final List<Long> stuckAndUncontrollableVMs = listStalledVMInTransitionStateOnDisconnectedHosts(cutTime);
@ -4989,72 +5026,42 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
} }
} }
private List<Long> listStalledVMInTransitionStateOnUpHost(final long hostId, final Date cutTime) { private List<VMInstanceVO> listStalledVMInTransitionStateOnUpHost(
final String sql = "SELECT i.* FROM vm_instance as i, host as h WHERE h.status = 'UP' " + final List<VMInstanceVO> transitioningVms, final long cutTime) {
"AND h.id = ? AND i.power_state_update_time < ? AND i.host_id = h.id " + if (CollectionUtils.isEmpty(transitioningVms)) {
"AND (i.state ='Starting' OR i.state='Stopping' OR i.state='Migrating') " + return transitioningVms;
"AND i.id NOT IN (SELECT w.vm_instance_id FROM vm_work_job AS w JOIN async_job AS j ON w.id = j.id WHERE j.job_status = ?)" +
"AND i.removed IS NULL";
final List<Long> l = new ArrayList<>();
try (TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.CLOUD_DB)) {
String cutTimeStr = DateUtil.getDateDisplayString(TimeZone.getTimeZone("GMT"), cutTime);
try {
PreparedStatement pstmt = txn.prepareAutoCloseStatement(sql);
pstmt.setLong(1, hostId);
pstmt.setString(2, cutTimeStr);
pstmt.setInt(3, JobInfo.Status.IN_PROGRESS.ordinal());
final ResultSet rs = pstmt.executeQuery();
while (rs.next()) {
l.add(rs.getLong(1));
} }
} catch (SQLException e) { List<Long> vmIdsInProgress = vmIdsInProgressCache.get();
logger.error("Unable to execute SQL [{}] with params {\"h.id\": {}, \"i.power_state_update_time\": \"{}\"} due to [{}].", sql, hostId, cutTimeStr, e.getMessage(), e); return transitioningVms.stream()
} .filter(v -> v.getPowerStateUpdateTime().getTime() < cutTime && !vmIdsInProgress.contains(v.getId()))
} .collect(Collectors.toList());
return l;
} }
private List<Long> listVMInTransitionStateWithRecentReportOnUpHost(final long hostId, final Date cutTime) { private List<VMInstanceVO> listVMInTransitionStateWithRecentReportOnUpHost(
final String sql = "SELECT i.* FROM vm_instance as i, host as h WHERE h.status = 'UP' " + final List<VMInstanceVO> transitioningVms, final long cutTime) {
"AND h.id = ? AND i.power_state_update_time > ? AND i.host_id = h.id " + if (CollectionUtils.isEmpty(transitioningVms)) {
"AND (i.state ='Starting' OR i.state='Stopping' OR i.state='Migrating') " + return transitioningVms;
"AND i.id NOT IN (SELECT w.vm_instance_id FROM vm_work_job AS w JOIN async_job AS j ON w.id = j.id WHERE j.job_status = ?)" +
"AND i.removed IS NULL";
final List<Long> l = new ArrayList<>();
try (TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.CLOUD_DB)) {
String cutTimeStr = DateUtil.getDateDisplayString(TimeZone.getTimeZone("GMT"), cutTime);
int jobStatusInProgress = JobInfo.Status.IN_PROGRESS.ordinal();
try {
PreparedStatement pstmt = txn.prepareAutoCloseStatement(sql);
pstmt.setLong(1, hostId);
pstmt.setString(2, cutTimeStr);
pstmt.setInt(3, jobStatusInProgress);
final ResultSet rs = pstmt.executeQuery();
while (rs.next()) {
l.add(rs.getLong(1));
}
} catch (final SQLException e) {
logger.error("Unable to execute SQL [{}] with params {\"h.id\": {}, \"i.power_state_update_time\": \"{}\", \"j.job_status\": {}} due to [{}].", sql, hostId, cutTimeStr, jobStatusInProgress, e.getMessage(), e);
}
return l;
} }
List<Long> vmIdsInProgress = vmIdsInProgressCache.get();
return transitioningVms.stream()
.filter(v -> v.getPowerStateUpdateTime().getTime() > cutTime && !vmIdsInProgress.contains(v.getId()))
.collect(Collectors.toList());
} }
private List<Long> listStalledVMInTransitionStateOnDisconnectedHosts(final Date cutTime) { private List<Long> listStalledVMInTransitionStateOnDisconnectedHosts(final Date cutTime) {
final String sql = "SELECT i.* FROM vm_instance as i, host as h WHERE h.status != 'UP' " + final String sql = "SELECT i.* " +
"AND i.power_state_update_time < ? AND i.host_id = h.id " + "FROM vm_instance AS i " +
"AND (i.state ='Starting' OR i.state='Stopping' OR i.state='Migrating') " + "INNER JOIN host AS h ON i.host_id = h.id " +
"AND i.id NOT IN (SELECT w.vm_instance_id FROM vm_work_job AS w JOIN async_job AS j ON w.id = j.id WHERE j.job_status = ?)" + "WHERE h.status != 'UP' " +
"AND i.removed IS NULL"; " AND i.power_state_update_time < ? " +
" AND i.state IN ('Starting', 'Stopping', 'Migrating') " +
" AND i.id NOT IN (SELECT vm_instance_id FROM vm_work_job AS w " +
" INNER JOIN async_job AS j ON w.id = j.id " +
" WHERE j.job_status = ?) " +
" AND i.removed IS NULL";
final List<Long> l = new ArrayList<>(); final List<Long> l = new ArrayList<>();
try (TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.CLOUD_DB)) { TransactionLegacy txn = TransactionLegacy.currentTxn();
String cutTimeStr = DateUtil.getDateDisplayString(TimeZone.getTimeZone("GMT"), cutTime); String cutTimeStr = DateUtil.getDateDisplayString(TimeZone.getTimeZone("GMT"), cutTime);
int jobStatusInProgress = JobInfo.Status.IN_PROGRESS.ordinal(); int jobStatusInProgress = JobInfo.Status.IN_PROGRESS.ordinal();
@ -5072,7 +5079,6 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
} }
return l; return l;
} }
}
public class VmStateSyncOutcome extends OutcomeImpl<VirtualMachine> { public class VmStateSyncOutcome extends OutcomeImpl<VirtualMachine> {
private long _vmId; private long _vmId;
@ -5953,29 +5959,23 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
} }
@Override @Override
public HashMap<Long, ? extends VmStats> getVirtualMachineStatistics(long hostId, String hostName, List<Long> vmIds) { public HashMap<Long, ? extends VmStats> getVirtualMachineStatistics(Host host, List<Long> vmIds) {
HashMap<Long, VmStatsEntry> vmStatsById = new HashMap<>(); HashMap<Long, VmStatsEntry> vmStatsById = new HashMap<>();
if (CollectionUtils.isEmpty(vmIds)) { if (CollectionUtils.isEmpty(vmIds)) {
return vmStatsById; return vmStatsById;
} }
Map<Long, VMInstanceVO> vmMap = new HashMap<>(); Map<String, Long> vmMap = _vmDao.getNameIdMapForVmIds(vmIds);
for (Long vmId : vmIds) { return getVirtualMachineStatistics(host, vmMap);
vmMap.put(vmId, _vmDao.findById(vmId));
}
return getVirtualMachineStatistics(hostId, hostName, vmMap);
} }
@Override @Override
public HashMap<Long, ? extends VmStats> getVirtualMachineStatistics(long hostId, String hostName, Map<Long, ? extends VirtualMachine> vmMap) { public HashMap<Long, ? extends VmStats> getVirtualMachineStatistics(Host host, Map<String, Long> vmInstanceNameIdMap) {
HashMap<Long, VmStatsEntry> vmStatsById = new HashMap<>(); HashMap<Long, VmStatsEntry> vmStatsById = new HashMap<>();
if (MapUtils.isEmpty(vmMap)) { if (MapUtils.isEmpty(vmInstanceNameIdMap)) {
return vmStatsById; return vmStatsById;
} }
Map<String, Long> vmNames = new HashMap<>(); Answer answer = _agentMgr.easySend(host.getId(), new GetVmStatsCommand(
for (Map.Entry<Long, ? extends VirtualMachine> vmEntry : vmMap.entrySet()) { new ArrayList<>(vmInstanceNameIdMap.keySet()), host.getGuid(), host.getName()));
vmNames.put(vmEntry.getValue().getInstanceName(), vmEntry.getKey());
}
Answer answer = _agentMgr.easySend(hostId, new GetVmStatsCommand(new ArrayList<>(vmNames.keySet()), _hostDao.findById(hostId).getGuid(), hostName));
if (answer == null || !answer.getResult()) { if (answer == null || !answer.getResult()) {
logger.warn("Unable to obtain VM statistics."); logger.warn("Unable to obtain VM statistics.");
return vmStatsById; return vmStatsById;
@ -5986,23 +5986,20 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
return vmStatsById; return vmStatsById;
} }
for (Map.Entry<String, VmStatsEntry> entry : vmStatsByName.entrySet()) { for (Map.Entry<String, VmStatsEntry> entry : vmStatsByName.entrySet()) {
vmStatsById.put(vmNames.get(entry.getKey()), entry.getValue()); vmStatsById.put(vmInstanceNameIdMap.get(entry.getKey()), entry.getValue());
} }
} }
return vmStatsById; return vmStatsById;
} }
@Override @Override
public HashMap<Long, List<? extends VmDiskStats>> getVmDiskStatistics(long hostId, String hostName, Map<Long, ? extends VirtualMachine> vmMap) { public HashMap<Long, List<? extends VmDiskStats>> getVmDiskStatistics(Host host, Map<String, Long> vmInstanceNameIdMap) {
HashMap<Long, List<? extends VmDiskStats>> vmDiskStatsById = new HashMap<>(); HashMap<Long, List<? extends VmDiskStats>> vmDiskStatsById = new HashMap<>();
if (MapUtils.isEmpty(vmMap)) { if (MapUtils.isEmpty(vmInstanceNameIdMap)) {
return vmDiskStatsById; return vmDiskStatsById;
} }
Map<String, Long> vmNames = new HashMap<>(); Answer answer = _agentMgr.easySend(host.getId(), new GetVmDiskStatsCommand(
for (Map.Entry<Long, ? extends VirtualMachine> vmEntry : vmMap.entrySet()) { new ArrayList<>(vmInstanceNameIdMap.keySet()), host.getGuid(), host.getName()));
vmNames.put(vmEntry.getValue().getInstanceName(), vmEntry.getKey());
}
Answer answer = _agentMgr.easySend(hostId, new GetVmDiskStatsCommand(new ArrayList<>(vmNames.keySet()), _hostDao.findById(hostId).getGuid(), hostName));
if (answer == null || !answer.getResult()) { if (answer == null || !answer.getResult()) {
logger.warn("Unable to obtain VM disk statistics."); logger.warn("Unable to obtain VM disk statistics.");
return vmDiskStatsById; return vmDiskStatsById;
@ -6013,23 +6010,20 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
return vmDiskStatsById; return vmDiskStatsById;
} }
for (Map.Entry<String, List<VmDiskStatsEntry>> entry: vmDiskStatsByName.entrySet()) { for (Map.Entry<String, List<VmDiskStatsEntry>> entry: vmDiskStatsByName.entrySet()) {
vmDiskStatsById.put(vmNames.get(entry.getKey()), entry.getValue()); vmDiskStatsById.put(vmInstanceNameIdMap.get(entry.getKey()), entry.getValue());
} }
} }
return vmDiskStatsById; return vmDiskStatsById;
} }
@Override @Override
public HashMap<Long, List<? extends VmNetworkStats>> getVmNetworkStatistics(long hostId, String hostName, Map<Long, ? extends VirtualMachine> vmMap) { public HashMap<Long, List<? extends VmNetworkStats>> getVmNetworkStatistics(Host host, Map<String, Long> vmInstanceNameIdMap) {
HashMap<Long, List<? extends VmNetworkStats>> vmNetworkStatsById = new HashMap<>(); HashMap<Long, List<? extends VmNetworkStats>> vmNetworkStatsById = new HashMap<>();
if (MapUtils.isEmpty(vmMap)) { if (MapUtils.isEmpty(vmInstanceNameIdMap)) {
return vmNetworkStatsById; return vmNetworkStatsById;
} }
Map<String, Long> vmNames = new HashMap<>(); Answer answer = _agentMgr.easySend(host.getId(), new GetVmNetworkStatsCommand(
for (Map.Entry<Long, ? extends VirtualMachine> vmEntry : vmMap.entrySet()) { new ArrayList<>(vmInstanceNameIdMap.keySet()), host.getGuid(), host.getName()));
vmNames.put(vmEntry.getValue().getInstanceName(), vmEntry.getKey());
}
Answer answer = _agentMgr.easySend(hostId, new GetVmNetworkStatsCommand(new ArrayList<>(vmNames.keySet()), _hostDao.findById(hostId).getGuid(), hostName));
if (answer == null || !answer.getResult()) { if (answer == null || !answer.getResult()) {
logger.warn("Unable to obtain VM network statistics."); logger.warn("Unable to obtain VM network statistics.");
return vmNetworkStatsById; return vmNetworkStatsById;
@ -6040,7 +6034,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
return vmNetworkStatsById; return vmNetworkStatsById;
} }
for (Map.Entry<String, List<VmNetworkStatsEntry>> entry: vmNetworkStatsByName.entrySet()) { for (Map.Entry<String, List<VmNetworkStatsEntry>> entry: vmNetworkStatsByName.entrySet()) {
vmNetworkStatsById.put(vmNames.get(entry.getKey()), entry.getValue()); vmNetworkStatsById.put(vmInstanceNameIdMap.get(entry.getKey()), entry.getValue());
} }
} }
return vmNetworkStatsById; return vmNetworkStatsById;


@ -16,27 +16,29 @@
// under the License. // under the License.
package com.cloud.vm; package com.cloud.vm;
import java.text.SimpleDateFormat;
import java.util.Date; import java.util.Date;
import java.util.HashMap; import java.util.HashMap;
import java.util.List; import java.util.List;
import java.util.Map; import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import javax.inject.Inject; import javax.inject.Inject;
import com.cloud.host.Host;
import com.cloud.host.HostVO;
import com.cloud.host.dao.HostDao;
import com.cloud.utils.Pair;
import org.apache.cloudstack.framework.messagebus.MessageBus; import org.apache.cloudstack.framework.messagebus.MessageBus;
import org.apache.cloudstack.framework.messagebus.PublishScope; import org.apache.cloudstack.framework.messagebus.PublishScope;
import org.apache.logging.log4j.Logger; import org.apache.cloudstack.utils.cache.LazyCache;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.collections.MapUtils;
import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import com.cloud.agent.api.HostVmStateReportEntry; import com.cloud.agent.api.HostVmStateReportEntry;
import com.cloud.configuration.ManagementServiceConfiguration; import com.cloud.configuration.ManagementServiceConfiguration;
import com.cloud.host.Host;
import com.cloud.host.HostVO;
import com.cloud.host.dao.HostDao;
import com.cloud.utils.DateUtil; import com.cloud.utils.DateUtil;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.vm.dao.VMInstanceDao; import com.cloud.vm.dao.VMInstanceDao;
public class VirtualMachinePowerStateSyncImpl implements VirtualMachinePowerStateSync { public class VirtualMachinePowerStateSyncImpl implements VirtualMachinePowerStateSync {
@ -47,7 +49,12 @@ public class VirtualMachinePowerStateSyncImpl implements VirtualMachinePowerStat
@Inject HostDao hostDao; @Inject HostDao hostDao;
@Inject ManagementServiceConfiguration mgmtServiceConf; @Inject ManagementServiceConfiguration mgmtServiceConf;
private LazyCache<Long, VMInstanceVO> vmCache;
private LazyCache<Long, HostVO> hostCache;
public VirtualMachinePowerStateSyncImpl() { public VirtualMachinePowerStateSyncImpl() {
vmCache = new LazyCache<>(16, 10, this::getVmFromId);
hostCache = new LazyCache<>(16, 10, this::getHostFromId);
} }
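The constructor above wires two `LazyCache` instances so the host and VM objects referenced by log statements are fetched at most once per short window instead of once per report entry. Assuming the two numeric arguments of `new LazyCache<>(16, 10, this::getVmFromId)` are a maximum size and an expiry in seconds (an inference, not confirmed by this excerpt), a keyed counterpart to the single-value sketch shown earlier might look like:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

// Illustrative keyed lazy-loading cache: bounded size, short expiry, values loaded on
// demand by the supplied function (e.g. hostDao::findById).
public class LazyLoadingCache<K, V> {
    private final LoadingCache<K, V> cache;

    public LazyLoadingCache(long maximumSize, long expireSeconds, Function<K, V> loader) {
        this.cache = Caffeine.newBuilder()
                .maximumSize(maximumSize)
                .expireAfterWrite(expireSeconds, TimeUnit.SECONDS)
                .build(loader::apply);
    }

    public V get(K key) {
        return cache.get(key);
    }
}
```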
@Override @Override
@ -58,130 +65,141 @@ public class VirtualMachinePowerStateSyncImpl implements VirtualMachinePowerStat
@Override @Override
public void processHostVmStateReport(long hostId, Map<String, HostVmStateReportEntry> report) { public void processHostVmStateReport(long hostId, Map<String, HostVmStateReportEntry> report) {
HostVO host = hostDao.findById(hostId); logger.debug("Process host VM state report. host: {}", hostCache.get(hostId));
logger.debug("Process host VM state report. host: {}", host); Map<Long, VirtualMachine.PowerState> translatedInfo = convertVmStateReport(report);
processReport(hostId, translatedInfo, false);
Map<Long, Pair<VirtualMachine.PowerState, VMInstanceVO>> translatedInfo = convertVmStateReport(report);
processReport(host, translatedInfo, false);
} }
@Override @Override
public void processHostVmStatePingReport(long hostId, Map<String, HostVmStateReportEntry> report, boolean force) { public void processHostVmStatePingReport(long hostId, Map<String, HostVmStateReportEntry> report, boolean force) {
HostVO host = hostDao.findById(hostId); logger.debug("Process host VM state report from ping process. host: {}", hostCache.get(hostId));
logger.debug("Process host VM state report from ping process. host: {}", host); Map<Long, VirtualMachine.PowerState> translatedInfo = convertVmStateReport(report);
processReport(hostId, translatedInfo, force);
Map<Long, Pair<VirtualMachine.PowerState, VMInstanceVO>> translatedInfo = convertVmStateReport(report);
processReport(host, translatedInfo, force);
} }
private void processReport(HostVO host, Map<Long, Pair<VirtualMachine.PowerState, VMInstanceVO>> translatedInfo, boolean force) { private void updateAndPublishVmPowerStates(long hostId, Map<Long, VirtualMachine.PowerState> instancePowerStates,
Date updateTime) {
logger.debug("Process VM state report. host: {}, number of records in report: {}.", host, translatedInfo.size()); if (instancePowerStates.isEmpty()) {
return;
for (Map.Entry<Long, Pair<VirtualMachine.PowerState, VMInstanceVO>> entry : translatedInfo.entrySet()) { }
Set<Long> vmIds = instancePowerStates.keySet();
logger.debug("VM state report. host: {}, vm: {}, power state: {}", host, entry.getValue().second(), entry.getValue().first()); Map<Long, VirtualMachine.PowerState> notUpdated = _instanceDao.updatePowerState(instancePowerStates, hostId,
updateTime);
if (_instanceDao.updatePowerState(entry.getKey(), host.getId(), entry.getValue().first(), DateUtil.currentGMTTime())) { if (notUpdated.size() > vmIds.size()) {
logger.debug("VM state report is updated. host: {}, vm: {}, power state: {}", host, entry.getValue().second(), entry.getValue().first()); return;
}
_messageBus.publish(null, VirtualMachineManager.Topics.VM_POWER_STATE, PublishScope.GLOBAL, entry.getKey()); for (Long vmId : vmIds) {
} else { if (!notUpdated.isEmpty() && !notUpdated.containsKey(vmId)) {
logger.trace("VM power state does not change, skip DB writing. vm: {}", entry.getValue().second()); logger.debug("VM state report is updated. {}, {}, power state: {}",
() -> hostCache.get(hostId), () -> vmCache.get(vmId), () -> instancePowerStates.get(vmId));
_messageBus.publish(null, VirtualMachineManager.Topics.VM_POWER_STATE,
PublishScope.GLOBAL, vmId);
continue;
}
logger.trace("VM power state does not change, skip DB writing. {}", () -> vmCache.get(vmId));
} }
} }
private List<VMInstanceVO> filterOutdatedFromMissingVmReport(List<VMInstanceVO> vmsThatAreMissingReport) {
List<Long> outdatedVms = vmsThatAreMissingReport.stream()
.filter(v -> !_instanceDao.isPowerStateUpToDate(v))
.map(VMInstanceVO::getId)
.collect(Collectors.toList());
if (CollectionUtils.isEmpty(outdatedVms)) {
return vmsThatAreMissingReport;
}
_instanceDao.resetVmPowerStateTracking(outdatedVms);
return vmsThatAreMissingReport.stream()
.filter(v -> !outdatedVms.contains(v.getId()))
.collect(Collectors.toList());
}
private void processMissingVmReport(long hostId, Set<Long> vmIds, boolean force) {
// any state outdates should be checked against the time before this list was retrieved // any state outdates should be checked against the time before this list was retrieved
Date startTime = DateUtil.currentGMTTime(); Date startTime = DateUtil.currentGMTTime();
// for all running/stopping VMs, we provide monitoring of missing report // for all running/stopping VMs, we provide monitoring of missing report
List<VMInstanceVO> vmsThatAreMissingReport = _instanceDao.findByHostInStates(host.getId(), VirtualMachine.State.Running, List<VMInstanceVO> vmsThatAreMissingReport = _instanceDao.findByHostInStatesExcluding(hostId, vmIds,
VirtualMachine.State.Stopping, VirtualMachine.State.Starting); VirtualMachine.State.Running, VirtualMachine.State.Stopping, VirtualMachine.State.Starting);
java.util.Iterator<VMInstanceVO> it = vmsThatAreMissingReport.iterator();
while (it.hasNext()) {
VMInstanceVO instance = it.next();
if (translatedInfo.get(instance.getId()) != null)
it.remove();
}
// here we need to be wary of out of band migration as opposed to other, more unexpected state changes // here we need to be wary of out of band migration as opposed to other, more unexpected state changes
if (vmsThatAreMissingReport.size() > 0) { if (vmsThatAreMissingReport.isEmpty()) {
return;
}
Date currentTime = DateUtil.currentGMTTime(); Date currentTime = DateUtil.currentGMTTime();
logger.debug("Run missing VM report for host {}. current time: {}", host, currentTime.getTime()); logger.debug("Run missing VM report. current time: {}", currentTime.getTime());
if (!force) {
vmsThatAreMissingReport = filterOutdatedFromMissingVmReport(vmsThatAreMissingReport);
}
// 2 times of sync-update interval for graceful period // 2 times of sync-update interval for graceful period
long milliSecondsGracefullPeriod = mgmtServiceConf.getPingInterval() * 2000L; long milliSecondsGracefulPeriod = mgmtServiceConf.getPingInterval() * 2000L;
Map<Long, VirtualMachine.PowerState> instancePowerStates = new HashMap<>();
for (VMInstanceVO instance : vmsThatAreMissingReport) { for (VMInstanceVO instance : vmsThatAreMissingReport) {
// Make sure powerState is up to date for missing VMs
try {
if (!force && !_instanceDao.isPowerStateUpToDate(instance.getId())) {
logger.warn("Detected missing VM but power state is outdated, wait for another process report run for VM: {}", instance);
_instanceDao.resetVmPowerStateTracking(instance.getId());
continue;
}
} catch (CloudRuntimeException e) {
logger.warn("Checked for missing powerstate of a none existing vm {}", instance, e);
continue;
}
Date vmStateUpdateTime = instance.getPowerStateUpdateTime(); Date vmStateUpdateTime = instance.getPowerStateUpdateTime();
if (vmStateUpdateTime == null) { if (vmStateUpdateTime == null) {
logger.warn("VM power state update time is null, falling back to update time for vm: {}", instance); logger.warn("VM power state update time is null, falling back to update time for {}", instance);
vmStateUpdateTime = instance.getUpdateTime(); vmStateUpdateTime = instance.getUpdateTime();
if (vmStateUpdateTime == null) { if (vmStateUpdateTime == null) {
logger.warn("VM update time is null, falling back to creation time for vm: {}", instance); logger.warn("VM update time is null, falling back to creation time for {}", instance);
vmStateUpdateTime = instance.getCreated(); vmStateUpdateTime = instance.getCreated();
} }
} }
logger.debug("Detected missing VM. host: {}, vm id: {}({}), power state: {}, last state update: {}",
String lastTime = new SimpleDateFormat("yyyy/MM/dd'T'HH:mm:ss.SSS'Z'").format(vmStateUpdateTime); hostId,
logger.debug("Detected missing VM. host: {}, vm: {}, power state: {}, last state update: {}", instance.getId(),
host, instance, VirtualMachine.PowerState.PowerReportMissing, lastTime); instance.getUuid(),
VirtualMachine.PowerState.PowerReportMissing,
DateUtil.getOutputString(vmStateUpdateTime));
long milliSecondsSinceLastStateUpdate = currentTime.getTime() - vmStateUpdateTime.getTime(); long milliSecondsSinceLastStateUpdate = currentTime.getTime() - vmStateUpdateTime.getTime();
if (force || (milliSecondsSinceLastStateUpdate > milliSecondsGracefulPeriod)) {
if (force || milliSecondsSinceLastStateUpdate > milliSecondsGracefullPeriod) { logger.debug("vm id: {} - time since last state update({} ms) has passed graceful period",
logger.debug("vm: {} - time since last state update({}ms) has passed graceful period", instance, milliSecondsSinceLastStateUpdate); instance.getId(), milliSecondsSinceLastStateUpdate);
// this is where a race condition might have happened if we don't re-fetch the instance;
// this is were a race condition might have happened if we don't re-fetch the instance;
// between the start time of this job and the currentTime of this missing-branch
// an update might have occurred that we should not override in case of out of band migration // an update might have occurred that we should not override in case of out of band migration
if (_instanceDao.updatePowerState(instance.getId(), host.getId(), VirtualMachine.PowerState.PowerReportMissing, startTime)) { instancePowerStates.put(instance.getId(), VirtualMachine.PowerState.PowerReportMissing);
logger.debug("VM state report is updated. host: {}, vm: {}, power state: PowerReportMissing ", host, instance);
_messageBus.publish(null, VirtualMachineManager.Topics.VM_POWER_STATE, PublishScope.GLOBAL, instance.getId());
} else { } else {
logger.debug("VM power state does not change, skip DB writing. vm: {}", instance); logger.debug("vm id: {} - time since last state update({} ms) has not passed graceful period yet",
} instance.getId(), milliSecondsSinceLastStateUpdate);
} else {
logger.debug("vm: {} - time since last state update({} ms) has not passed graceful period yet", instance, milliSecondsSinceLastStateUpdate);
} }
} }
updateAndPublishVmPowerStates(hostId, instancePowerStates, startTime);
} }
logger.debug("Done with process of VM state report. host: {}", host); private void processReport(long hostId, Map<Long, VirtualMachine.PowerState> translatedInfo, boolean force) {
logger.debug("Process VM state report. {}, number of records in report: {}. VMs: [{}]",
() -> hostCache.get(hostId),
translatedInfo::size,
() -> translatedInfo.entrySet().stream().map(entry -> entry.getKey() + ":" + entry.getValue())
.collect(Collectors.joining(", ")));
updateAndPublishVmPowerStates(hostId, translatedInfo, DateUtil.currentGMTTime());
processMissingVmReport(hostId, translatedInfo.keySet(), force);
logger.debug("Done with process of VM state report. host: {}", () -> hostCache.get(hostId));
} }
public Map<Long, Pair<VirtualMachine.PowerState, VMInstanceVO>> convertVmStateReport(Map<String, HostVmStateReportEntry> states) { public Map<Long, VirtualMachine.PowerState> convertVmStateReport(Map<String, HostVmStateReportEntry> states) {
final HashMap<Long, Pair<VirtualMachine.PowerState, VMInstanceVO>> map = new HashMap<>(); final HashMap<Long, VirtualMachine.PowerState> map = new HashMap<>();
if (states == null) { if (MapUtils.isEmpty(states)) {
return map; return map;
} }
Map<String, Long> nameIdMap = _instanceDao.getNameIdMapForVmInstanceNames(states.keySet());
for (Map.Entry<String, HostVmStateReportEntry> entry : states.entrySet()) { for (Map.Entry<String, HostVmStateReportEntry> entry : states.entrySet()) {
VMInstanceVO vm = findVM(entry.getKey()); Long id = nameIdMap.get(entry.getKey());
if (vm != null) { if (id != null) {
map.put(vm.getId(), new Pair<>(entry.getValue().getState(), vm)); map.put(id, entry.getValue().getState());
} else { } else {
logger.debug("Unable to find matched VM in CloudStack DB. name: {} powerstate: {}", entry.getKey(), entry.getValue()); logger.debug("Unable to find matched VM in CloudStack DB. name: {} powerstate: {}", entry.getKey(), entry.getValue());
} }
} }
return map; return map;
} }
private VMInstanceVO findVM(String vmName) { protected VMInstanceVO getVmFromId(long vmId) {
return _instanceDao.findVMByInstanceName(vmName); return _instanceDao.findById(vmId);
}
protected HostVO getHostFromId(long hostId) {
return hostDao.findById(hostId);
} }
} }
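A side note on the logging style used throughout this file: the new debug/trace calls pass lambdas such as `() -> hostCache.get(hostId)` instead of values. With log4j2's Supplier-based overloads, the cache lookups and the `Collectors.joining` over the report map only run when the corresponding log level is enabled, so hot ping paths do not pay for message construction. A small self-contained illustration (class name and message text are made up):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LazyLoggingExample {
    private static final Logger logger = LogManager.getLogger(LazyLoggingExample.class);

    // Stand-in for an expensive argument such as hostCache.get(hostId) or a join over a
    // large VM state report.
    static String expensiveLookup() {
        return "Host {id: 42, name: kvm-host-01}";
    }

    public static void main(String[] args) {
        // Argument is built eagerly, even when DEBUG is disabled:
        logger.debug("Processing report for " + expensiveLookup());

        // Argument is built lazily, only when DEBUG is enabled (log4j2 Supplier overload):
        logger.debug("Processing report for {}", () -> expensiveLookup());
    }
}
```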


@ -28,6 +28,8 @@ import com.cloud.utils.db.GenericDao;
public interface CapacityDao extends GenericDao<CapacityVO, Long> { public interface CapacityDao extends GenericDao<CapacityVO, Long> {
CapacityVO findByHostIdType(Long hostId, short capacityType); CapacityVO findByHostIdType(Long hostId, short capacityType);
List<CapacityVO> listByHostIdTypes(Long hostId, List<Short> capacityTypes);
List<Long> listClustersInZoneOrPodByHostCapacities(long id, long vmId, int requiredCpu, long requiredRam, short capacityTypeForOrdering, boolean isZone); List<Long> listClustersInZoneOrPodByHostCapacities(long id, long vmId, int requiredCpu, long requiredRam, short capacityTypeForOrdering, boolean isZone);
List<Long> listHostsWithEnoughCapacity(int requiredCpu, long requiredRam, Long clusterId, String hostType); List<Long> listHostsWithEnoughCapacity(int requiredCpu, long requiredRam, Long clusterId, String hostType);


@ -671,6 +671,18 @@ public class CapacityDaoImpl extends GenericDaoBase<CapacityVO, Long> implements
return findOneBy(sc); return findOneBy(sc);
} }
@Override
public List<CapacityVO> listByHostIdTypes(Long hostId, List<Short> capacityTypes) {
SearchBuilder<CapacityVO> sb = createSearchBuilder();
sb.and("hostId", sb.entity().getHostOrPoolId(), SearchCriteria.Op.EQ);
sb.and("type", sb.entity().getCapacityType(), SearchCriteria.Op.IN);
sb.done();
SearchCriteria<CapacityVO> sc = sb.create();
sc.setParameters("hostId", hostId);
sc.setParameters("type", capacityTypes.toArray());
return listBy(sc);
}
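A hypothetical caller-side example of the new batch accessor: fetching the CPU and memory capacity rows for a host with a single IN query instead of two `findByHostIdType()` round trips. The helper class and method are invented for illustration; the capacity-type constants are the existing ones from `com.cloud.capacity.Capacity`:

```java
import java.util.Arrays;
import java.util.List;

import com.cloud.capacity.Capacity;
import com.cloud.capacity.CapacityVO;
import com.cloud.capacity.dao.CapacityDao;

// Illustrative helper: one query for both capacity rows of a host.
public class HostCapacityFetch {
    public static List<CapacityVO> cpuAndMemoryCapacities(CapacityDao capacityDao, long hostId) {
        return capacityDao.listByHostIdTypes(hostId,
                Arrays.asList(Capacity.CAPACITY_TYPE_CPU, Capacity.CAPACITY_TYPE_MEMORY));
    }
}
```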
@Override @Override
public List<Long> listClustersInZoneOrPodByHostCapacities(long id, long vmId, int requiredCpu, long requiredRam, short capacityTypeForOrdering, boolean isZone) { public List<Long> listClustersInZoneOrPodByHostCapacities(long id, long vmId, int requiredCpu, long requiredRam, short capacityTypeForOrdering, boolean isZone) {
TransactionLegacy txn = TransactionLegacy.currentTxn(); TransactionLegacy txn = TransactionLegacy.currentTxn();


@ -16,6 +16,7 @@
// under the License. // under the License.
package com.cloud.dc; package com.cloud.dc;
import java.util.Collection;
import java.util.Map; import java.util.Map;
import com.cloud.utils.db.GenericDao; import com.cloud.utils.db.GenericDao;
@ -29,6 +30,8 @@ public interface ClusterDetailsDao extends GenericDao<ClusterDetailsVO, Long> {
ClusterDetailsVO findDetail(long clusterId, String name); ClusterDetailsVO findDetail(long clusterId, String name);
Map<String, String> findDetails(long clusterId, Collection<String> names);
void deleteDetails(long clusterId); void deleteDetails(long clusterId);
String getVmwareDcName(Long clusterId); String getVmwareDcName(Long clusterId);


@ -16,13 +16,16 @@
// under the License. // under the License.
package com.cloud.dc; package com.cloud.dc;
import java.util.Collection;
import java.util.HashMap; import java.util.HashMap;
import java.util.List; import java.util.List;
import java.util.Map; import java.util.Map;
import java.util.stream.Collectors;
import org.apache.cloudstack.framework.config.ConfigKey; import org.apache.cloudstack.framework.config.ConfigKey;
import org.apache.cloudstack.framework.config.ConfigKey.Scope; import org.apache.cloudstack.framework.config.ConfigKey.Scope;
import org.apache.cloudstack.framework.config.ScopedConfigStorage; import org.apache.cloudstack.framework.config.ScopedConfigStorage;
import org.apache.commons.collections.CollectionUtils;
import com.cloud.utils.crypt.DBEncryptionUtil; import com.cloud.utils.crypt.DBEncryptionUtil;
import com.cloud.utils.db.GenericDaoBase; import com.cloud.utils.db.GenericDaoBase;
@ -82,6 +85,23 @@ public class ClusterDetailsDaoImpl extends GenericDaoBase<ClusterDetailsVO, Long
return details; return details;
} }
@Override
public Map<String, String> findDetails(long clusterId, Collection<String> names) {
if (CollectionUtils.isEmpty(names)) {
return new HashMap<>();
}
SearchBuilder<ClusterDetailsVO> sb = createSearchBuilder();
sb.and("clusterId", sb.entity().getClusterId(), SearchCriteria.Op.EQ);
sb.and("name", sb.entity().getName(), SearchCriteria.Op.IN);
sb.done();
SearchCriteria<ClusterDetailsVO> sc = sb.create();
sc.setParameters("clusterId", clusterId);
sc.setParameters("name", names.toArray());
List<ClusterDetailsVO> results = search(sc, null);
return results.stream()
.collect(Collectors.toMap(ClusterDetailsVO::getName, ClusterDetailsVO::getValue));
}
@Override @Override
public void deleteDetails(long clusterId) { public void deleteDetails(long clusterId) {
SearchCriteria<ClusterDetailsVO> sc = ClusterSearch.create(); SearchCriteria<ClusterDetailsVO> sc = ClusterSearch.create();


@ -16,15 +16,15 @@
// under the License. // under the License.
package com.cloud.dc.dao; package com.cloud.dc.dao;
import java.util.List;
import java.util.Map;
import java.util.Set;
import com.cloud.cpu.CPU; import com.cloud.cpu.CPU;
import com.cloud.dc.ClusterVO; import com.cloud.dc.ClusterVO;
import com.cloud.hypervisor.Hypervisor.HypervisorType; import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.utils.db.GenericDao; import com.cloud.utils.db.GenericDao;
import java.util.List;
import java.util.Map;
import java.util.Set;
public interface ClusterDao extends GenericDao<ClusterVO, Long> { public interface ClusterDao extends GenericDao<ClusterVO, Long> {
List<ClusterVO> listByPodId(long podId); List<ClusterVO> listByPodId(long podId);
@ -36,7 +36,7 @@ public interface ClusterDao extends GenericDao<ClusterVO, Long> {
List<HypervisorType> getAvailableHypervisorInZone(Long zoneId); List<HypervisorType> getAvailableHypervisorInZone(Long zoneId);
Set<HypervisorType> getDistictAvailableHypervisorsAcrossClusters(); Set<HypervisorType> getDistinctAvailableHypervisorsAcrossClusters();
List<ClusterVO> listByDcHyType(long dcId, String hyType); List<ClusterVO> listByDcHyType(long dcId, String hyType);
@ -46,9 +46,13 @@ public interface ClusterDao extends GenericDao<ClusterVO, Long> {
List<Long> listClustersWithDisabledPods(long zoneId); List<Long> listClustersWithDisabledPods(long zoneId);
Integer countAllByDcId(long zoneId);
Integer countAllManagedAndEnabledByDcId(long zoneId);
List<ClusterVO> listClustersByDcId(long zoneId); List<ClusterVO> listClustersByDcId(long zoneId);
List<Long> listAllClusters(Long zoneId); List<Long> listAllClusterIds(Long zoneId);
boolean getSupportsResigning(long clusterId); boolean getSupportsResigning(long clusterId);


@ -16,25 +16,6 @@
// under the License. // under the License.
package com.cloud.dc.dao; package com.cloud.dc.dao;
import com.cloud.cpu.CPU;
import com.cloud.dc.ClusterDetailsDao;
import com.cloud.dc.ClusterDetailsVO;
import com.cloud.dc.ClusterVO;
import com.cloud.dc.HostPodVO;
import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.org.Grouping;
import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.GenericSearchBuilder;
import com.cloud.utils.db.JoinBuilder;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.SearchCriteria.Func;
import com.cloud.utils.db.SearchCriteria.Op;
import com.cloud.utils.db.TransactionLegacy;
import com.cloud.utils.exception.CloudRuntimeException;
import org.springframework.stereotype.Component;
import javax.inject.Inject;
import java.sql.PreparedStatement; import java.sql.PreparedStatement;
import java.sql.ResultSet; import java.sql.ResultSet;
import java.sql.SQLException; import java.sql.SQLException;
@ -46,6 +27,28 @@ import java.util.Map;
import java.util.Set; import java.util.Set;
import java.util.stream.Collectors; import java.util.stream.Collectors;
import javax.inject.Inject;
import org.springframework.stereotype.Component;
import com.cloud.cpu.CPU;
import com.cloud.dc.ClusterDetailsDao;
import com.cloud.dc.ClusterDetailsVO;
import com.cloud.dc.ClusterVO;
import com.cloud.dc.HostPodVO;
import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.org.Grouping;
import com.cloud.org.Managed;
import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.GenericSearchBuilder;
import com.cloud.utils.db.JoinBuilder;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.SearchCriteria.Func;
import com.cloud.utils.db.SearchCriteria.Op;
import com.cloud.utils.db.TransactionLegacy;
import com.cloud.utils.exception.CloudRuntimeException;
@Component @Component
public class ClusterDaoImpl extends GenericDaoBase<ClusterVO, Long> implements ClusterDao { public class ClusterDaoImpl extends GenericDaoBase<ClusterVO, Long> implements ClusterDao {
@ -58,7 +61,6 @@ public class ClusterDaoImpl extends GenericDaoBase<ClusterVO, Long> implements C
protected final SearchBuilder<ClusterVO> ClusterSearch; protected final SearchBuilder<ClusterVO> ClusterSearch;
protected final SearchBuilder<ClusterVO> ClusterDistinctArchSearch; protected final SearchBuilder<ClusterVO> ClusterDistinctArchSearch;
protected final SearchBuilder<ClusterVO> ClusterArchSearch; protected final SearchBuilder<ClusterVO> ClusterArchSearch;
protected GenericSearchBuilder<ClusterVO, Long> ClusterIdSearch; protected GenericSearchBuilder<ClusterVO, Long> ClusterIdSearch;
private static final String GET_POD_CLUSTER_MAP_PREFIX = "SELECT pod_id, id FROM cloud.cluster WHERE cluster.id IN( "; private static final String GET_POD_CLUSTER_MAP_PREFIX = "SELECT pod_id, id FROM cloud.cluster WHERE cluster.id IN( ";
@ -98,6 +100,8 @@ public class ClusterDaoImpl extends GenericDaoBase<ClusterVO, Long> implements C
ZoneClusterSearch = createSearchBuilder(); ZoneClusterSearch = createSearchBuilder();
ZoneClusterSearch.and("dataCenterId", ZoneClusterSearch.entity().getDataCenterId(), SearchCriteria.Op.EQ); ZoneClusterSearch.and("dataCenterId", ZoneClusterSearch.entity().getDataCenterId(), SearchCriteria.Op.EQ);
ZoneClusterSearch.and("allocationState", ZoneClusterSearch.entity().getAllocationState(), Op.EQ);
ZoneClusterSearch.and("managedState", ZoneClusterSearch.entity().getManagedState(), Op.EQ);
ZoneClusterSearch.done(); ZoneClusterSearch.done();
ClusterIdSearch = createSearchBuilder(Long.class); ClusterIdSearch = createSearchBuilder(Long.class);
@ -167,23 +171,15 @@ public class ClusterDaoImpl extends GenericDaoBase<ClusterVO, Long> implements C
sc.setParameters("zoneId", zoneId); sc.setParameters("zoneId", zoneId);
} }
List<ClusterVO> clusters = listBy(sc); List<ClusterVO> clusters = listBy(sc);
List<HypervisorType> hypers = new ArrayList<HypervisorType>(4); return clusters.stream()
for (ClusterVO cluster : clusters) { .map(ClusterVO::getHypervisorType)
hypers.add(cluster.getHypervisorType()); .distinct()
} .collect(Collectors.toList());
return hypers;
} }
@Override @Override
public Set<HypervisorType> getDistictAvailableHypervisorsAcrossClusters() { public Set<HypervisorType> getDistinctAvailableHypervisorsAcrossClusters() {
SearchCriteria<ClusterVO> sc = ClusterSearch.create(); return new HashSet<>(getAvailableHypervisorInZone(null));
List<ClusterVO> clusters = listBy(sc);
Set<HypervisorType> hypers = new HashSet<>();
for (ClusterVO cluster : clusters) {
hypers.add(cluster.getHypervisorType());
}
return hypers;
} }
@Override @Override
@ -266,6 +262,23 @@ public class ClusterDaoImpl extends GenericDaoBase<ClusterVO, Long> implements C
return customSearch(sc, null); return customSearch(sc, null);
} }
@Override
public Integer countAllByDcId(long zoneId) {
SearchCriteria<ClusterVO> sc = ZoneClusterSearch.create();
sc.setParameters("dataCenterId", zoneId);
return getCount(sc);
}
@Override
public Integer countAllManagedAndEnabledByDcId(long zoneId) {
SearchCriteria<ClusterVO> sc = ZoneClusterSearch.create();
sc.setParameters("dataCenterId", zoneId);
sc.setParameters("allocationState", Grouping.AllocationState.Enabled);
sc.setParameters("managedState", Managed.ManagedState.Managed);
return getCount(sc);
}
@Override @Override
public List<ClusterVO> listClustersByDcId(long zoneId) { public List<ClusterVO> listClustersByDcId(long zoneId) {
SearchCriteria<ClusterVO> sc = ZoneClusterSearch.create(); SearchCriteria<ClusterVO> sc = ZoneClusterSearch.create();
@ -289,7 +302,7 @@ public class ClusterDaoImpl extends GenericDaoBase<ClusterVO, Long> implements C
} }
@Override @Override
public List<Long> listAllClusters(Long zoneId) { public List<Long> listAllClusterIds(Long zoneId) {
SearchCriteria<Long> sc = ClusterIdSearch.create(); SearchCriteria<Long> sc = ClusterIdSearch.create();
if (zoneId != null) { if (zoneId != null) {
sc.setParameters("dataCenterId", zoneId); sc.setParameters("dataCenterId", zoneId);


@ -294,8 +294,7 @@ public class DataCenterIpAddressDaoImpl extends GenericDaoBase<DataCenterIpAddre
sc.addAnd("podId", SearchCriteria.Op.EQ, podId); sc.addAnd("podId", SearchCriteria.Op.EQ, podId);
sc.addAnd("dataCenterId", SearchCriteria.Op.EQ, dcId); sc.addAnd("dataCenterId", SearchCriteria.Op.EQ, dcId);
List<DataCenterIpAddressVO> result = listBy(sc); return getCount(sc);
return result.size();
} }
public DataCenterIpAddressDaoImpl() { public DataCenterIpAddressDaoImpl() {


@ -81,7 +81,7 @@ public class DataCenterVnetDaoImpl extends GenericDaoBase<DataCenterVnetVO, Long
public int countAllocatedVnets(long physicalNetworkId) { public int countAllocatedVnets(long physicalNetworkId) {
SearchCriteria<DataCenterVnetVO> sc = DcSearchAllocated.create(); SearchCriteria<DataCenterVnetVO> sc = DcSearchAllocated.create();
sc.setParameters("physicalNetworkId", physicalNetworkId); sc.setParameters("physicalNetworkId", physicalNetworkId);
return listBy(sc).size(); return getCount(sc);
} }
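This change, and the DataCenterIpAddressDaoImpl one above it, apply the same idea: let the database count matching rows instead of hydrating every VO just to call `size()`. A plain-JDBC sketch of what `getCount(sc)` effectively does for this search (table and column names only approximate the real schema):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Toy illustration, not the CloudStack DAO layer: the database returns a single integer
// instead of transferring and materializing every allocated-vnet row.
public class CountVsListExample {
    static int countAllocatedVnets(Connection conn, long physicalNetworkId) throws SQLException {
        String sql = "SELECT COUNT(*) FROM op_dc_vnet_alloc WHERE physical_network_id = ? AND taken IS NOT NULL";
        try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
            pstmt.setLong(1, physicalNetworkId);
            try (ResultSet rs = pstmt.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        }
    }
}
```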
@Override @Override


@ -27,6 +27,7 @@ import com.cloud.hypervisor.Hypervisor;
import com.cloud.hypervisor.Hypervisor.HypervisorType; import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.info.RunningHostCountInfo; import com.cloud.info.RunningHostCountInfo;
import com.cloud.resource.ResourceState; import com.cloud.resource.ResourceState;
import com.cloud.utils.Pair;
import com.cloud.utils.db.GenericDao; import com.cloud.utils.db.GenericDao;
import com.cloud.utils.fsm.StateDao; import com.cloud.utils.fsm.StateDao;
@@ -39,8 +40,14 @@ public interface HostDao extends GenericDao<HostVO, Long>, StateDao<Status, Stat
Integer countAllByType(final Host.Type type);
Integer countAllInClusterByTypeAndStates(Long clusterId, final Host.Type type, List<Status> status);
Integer countAllByTypeInZone(long zoneId, final Host.Type type);
Integer countUpAndEnabledHostsInZone(long zoneId);
Pair<Integer, Integer> countAllHostsAndCPUSocketsByType(Type type);
/**
* Mark all hosts associated with a certain management server
* as disconnected.
@@ -75,32 +82,41 @@ public interface HostDao extends GenericDao<HostVO, Long>, StateDao<Status, Stat
List<HostVO> findHypervisorHostInCluster(long clusterId);
HostVO findAnyStateHypervisorHostInCluster(long clusterId);
HostVO findOldestExistentHypervisorHostInCluster(long clusterId);
List<HostVO> listAllUpAndEnabledNonHAHosts(Type type, Long clusterId, Long podId, long dcId, String haTag);
List<HostVO> findByDataCenterId(Long zoneId);
List<Long> listIdsByDataCenterId(Long zoneId);
List<HostVO> findByPodId(Long podId);
List<Long> listIdsByPodId(Long podId);
List<HostVO> findByClusterId(Long clusterId);
List<Long> listIdsByClusterId(Long clusterId);
List<Long> listIdsForUpRouting(Long zoneId, Long podId, Long clusterId);
List<Long> listIdsByType(Type type);
List<Long> listIdsForUpEnabledByZoneAndHypervisor(Long zoneId, HypervisorType hypervisorType);
List<HostVO> findByClusterIdAndEncryptionSupport(Long clusterId);
/**
- * Returns hosts that are 'Up' and 'Enabled' from the given Data Center/Zone
* Returns host Ids that are 'Up' and 'Enabled' from the given Data Center/Zone
*/
- List<HostVO> listByDataCenterId(long id);
List<Long> listEnabledIdsByDataCenterId(long id);
/**
- * Returns hosts that are from the given Data Center/Zone and at a given state (e.g. Creating, Enabled, Disabled, etc).
* Returns host Ids that are 'Up' and 'Disabled' from the given Data Center/Zone
*/
- List<HostVO> listByDataCenterIdAndState(long id, ResourceState state);
List<Long> listDisabledIdsByDataCenterId(long id);
- /**
- * Returns hosts that are 'Up' and 'Disabled' from the given Data Center/Zone
- */
- List<HostVO> listDisabledByDataCenterId(long id);
List<HostVO> listByDataCenterIdAndHypervisorType(long zoneId, Hypervisor.HypervisorType hypervisorType);
@@ -110,8 +126,6 @@ public interface HostDao extends GenericDao<HostVO, Long>, StateDao<Status, Stat
List<HostVO> listAllHostsThatHaveNoRuleTag(Host.Type type, Long clusterId, Long podId, Long dcId);
- List<HostVO> listAllHostsByType(Host.Type type);
HostVO findByPublicIp(String publicIp);
List<Long> listClustersByHostTag(String hostTagOnOffering);
@@ -171,4 +185,14 @@ public interface HostDao extends GenericDao<HostVO, Long>, StateDao<Status, Stat
List<Long> findClustersThatMatchHostTagRule(String computeOfferingTags);
List<Long> listSsvmHostsWithPendingMigrateJobsOrderedByJobCount();
boolean isHostUp(long hostId);
List<Long> findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(final Long zoneId, final Long clusterId,
final List<ResourceState> resourceStates, final List<Type> types,
final List<Hypervisor.HypervisorType> hypervisorTypes);
List<HypervisorType> listDistinctHypervisorTypes(final Long zoneId);
List<HostVO> listByIds(final List<Long> ids);
}
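The new `listIds*` and `isHostUp` methods return only host IDs or a single status instead of full HostVO rows, which is what the background tasks touched by this PR mostly need. A rough usage sketch; the scanner class is hypothetical, the DAO calls are the ones declared above:

    import java.util.List;
    import javax.inject.Inject;
    import com.cloud.host.dao.HostDao;

    public class RoutingHostScanner {
        @Inject
        private HostDao hostDao;

        public void scanZone(long zoneId) {
            // Only IDs are fetched here; full HostVO rows are loaded later, only for hosts that need work.
            List<Long> hostIds = hostDao.listIdsForUpRouting(zoneId, null, null);
            for (Long hostId : hostIds) {
                if (!hostDao.isHostUp(hostId)) {
                    continue; // state changed since the listing
                }
                // per-host processing would go here
            }
        }
    }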


@@ -20,6 +20,7 @@ import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
@@ -45,8 +46,8 @@ import com.cloud.dc.ClusterVO;
import com.cloud.dc.dao.ClusterDao;
import com.cloud.gpu.dao.HostGpuGroupsDao;
import com.cloud.gpu.dao.VGPUTypesDao;
- import com.cloud.host.Host;
import com.cloud.host.DetailVO;
import com.cloud.host.Host;
import com.cloud.host.Host.Type;
import com.cloud.host.HostTagVO;
import com.cloud.host.HostVO;
@@ -59,6 +60,7 @@ import com.cloud.org.Grouping;
import com.cloud.org.Managed;
import com.cloud.resource.ResourceState;
import com.cloud.utils.DateUtil;
import com.cloud.utils.Pair;
import com.cloud.utils.db.Attribute;
import com.cloud.utils.db.DB;
import com.cloud.utils.db.Filter;
@@ -74,8 +76,6 @@ import com.cloud.utils.db.TransactionLegacy;
import com.cloud.utils.db.UpdateBuilder;
import com.cloud.utils.exception.CloudRuntimeException;
- import java.util.Arrays;
@DB
@TableGenerator(name = "host_req_sq", table = "op_host", pkColumnName = "id", valueColumnName = "sequence", allocationSize = 1)
public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao { //FIXME: , ExternalIdDao {
@@ -98,6 +98,7 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
protected SearchBuilder<HostVO> TypePodDcStatusSearch;
protected SearchBuilder<HostVO> IdsSearch;
protected SearchBuilder<HostVO> IdStatusSearch;
protected SearchBuilder<HostVO> TypeDcSearch;
protected SearchBuilder<HostVO> TypeDcStatusSearch;
@@ -124,6 +125,7 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
protected SearchBuilder<HostVO> UnmanagedApplianceSearch;
protected SearchBuilder<HostVO> MaintenanceCountSearch;
protected SearchBuilder<HostVO> HostTypeCountSearch;
protected SearchBuilder<HostVO> HostTypeClusterCountSearch;
protected SearchBuilder<HostVO> ResponsibleMsCountSearch;
protected SearchBuilder<HostVO> HostTypeZoneCountSearch;
protected SearchBuilder<HostVO> ClusterStatusSearch;
@@ -136,8 +138,7 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
protected SearchBuilder<HostVO> ManagedRoutingServersSearch;
protected SearchBuilder<HostVO> SecondaryStorageVMSearch;
- protected GenericSearchBuilder<HostVO, Long> HostIdSearch;
- protected GenericSearchBuilder<HostVO, Long> HostsInStatusSearch;
protected GenericSearchBuilder<HostVO, Long> HostsInStatusesSearch;
protected GenericSearchBuilder<HostVO, Long> CountRoutingByDc;
protected SearchBuilder<HostTransferMapVO> HostTransferSearch;
protected SearchBuilder<ClusterVO> ClusterManagedSearch;
@@ -187,12 +188,21 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
HostTypeCountSearch = createSearchBuilder();
HostTypeCountSearch.and("type", HostTypeCountSearch.entity().getType(), SearchCriteria.Op.EQ);
HostTypeCountSearch.and("zoneId", HostTypeCountSearch.entity().getDataCenterId(), SearchCriteria.Op.EQ);
HostTypeCountSearch.and("resourceState", HostTypeCountSearch.entity().getResourceState(), SearchCriteria.Op.EQ);
HostTypeCountSearch.done();
ResponsibleMsCountSearch = createSearchBuilder();
ResponsibleMsCountSearch.and("managementServerId", ResponsibleMsCountSearch.entity().getManagementServerId(), SearchCriteria.Op.EQ);
ResponsibleMsCountSearch.done();
HostTypeClusterCountSearch = createSearchBuilder();
HostTypeClusterCountSearch.and("cluster", HostTypeClusterCountSearch.entity().getClusterId(), SearchCriteria.Op.EQ);
HostTypeClusterCountSearch.and("type", HostTypeClusterCountSearch.entity().getType(), SearchCriteria.Op.EQ);
HostTypeClusterCountSearch.and("status", HostTypeClusterCountSearch.entity().getStatus(), SearchCriteria.Op.IN);
HostTypeClusterCountSearch.and("removed", HostTypeClusterCountSearch.entity().getRemoved(), SearchCriteria.Op.NULL);
HostTypeClusterCountSearch.done();
HostTypeZoneCountSearch = createSearchBuilder();
HostTypeZoneCountSearch.and("type", HostTypeZoneCountSearch.entity().getType(), SearchCriteria.Op.EQ);
HostTypeZoneCountSearch.and("dc", HostTypeZoneCountSearch.entity().getDataCenterId(), SearchCriteria.Op.EQ);
@@ -240,6 +250,10 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
TypeClusterStatusSearch.and("resourceState", TypeClusterStatusSearch.entity().getResourceState(), SearchCriteria.Op.EQ);
TypeClusterStatusSearch.done();
IdsSearch = createSearchBuilder();
IdsSearch.and("id", IdsSearch.entity().getId(), SearchCriteria.Op.IN);
IdsSearch.done();
IdStatusSearch = createSearchBuilder();
IdStatusSearch.and("id", IdStatusSearch.entity().getId(), SearchCriteria.Op.EQ);
IdStatusSearch.and("states", IdStatusSearch.entity().getStatus(), SearchCriteria.Op.IN);
@@ -386,14 +400,14 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
AvailHypevisorInZone.groupBy(AvailHypevisorInZone.entity().getHypervisorType());
AvailHypevisorInZone.done();
- HostsInStatusSearch = createSearchBuilder(Long.class);
- HostsInStatusSearch.selectFields(HostsInStatusSearch.entity().getId());
- HostsInStatusSearch.and("dc", HostsInStatusSearch.entity().getDataCenterId(), Op.EQ);
- HostsInStatusSearch.and("pod", HostsInStatusSearch.entity().getPodId(), Op.EQ);
- HostsInStatusSearch.and("cluster", HostsInStatusSearch.entity().getClusterId(), Op.EQ);
- HostsInStatusSearch.and("type", HostsInStatusSearch.entity().getType(), Op.EQ);
- HostsInStatusSearch.and("statuses", HostsInStatusSearch.entity().getStatus(), Op.IN);
- HostsInStatusSearch.done();
HostsInStatusesSearch = createSearchBuilder(Long.class);
HostsInStatusesSearch.selectFields(HostsInStatusesSearch.entity().getId());
HostsInStatusesSearch.and("dc", HostsInStatusesSearch.entity().getDataCenterId(), Op.EQ);
HostsInStatusesSearch.and("pod", HostsInStatusesSearch.entity().getPodId(), Op.EQ);
HostsInStatusesSearch.and("cluster", HostsInStatusesSearch.entity().getClusterId(), Op.EQ);
HostsInStatusesSearch.and("type", HostsInStatusesSearch.entity().getType(), Op.EQ);
HostsInStatusesSearch.and("statuses", HostsInStatusesSearch.entity().getStatus(), Op.IN);
HostsInStatusesSearch.done();
CountRoutingByDc = createSearchBuilder(Long.class);
CountRoutingByDc.select(null, Func.COUNT, null);
@@ -456,11 +470,6 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
HostsInClusterSearch.and("server", HostsInClusterSearch.entity().getManagementServerId(), SearchCriteria.Op.NNULL);
HostsInClusterSearch.done();
- HostIdSearch = createSearchBuilder(Long.class);
- HostIdSearch.selectFields(HostIdSearch.entity().getId());
- HostIdSearch.and("dataCenterId", HostIdSearch.entity().getDataCenterId(), Op.EQ);
- HostIdSearch.done();
searchBuilderFindByRuleTag = _hostTagsDao.createSearchBuilder();
searchBuilderFindByRuleTag.and("is_tag_a_rule", searchBuilderFindByRuleTag.entity().getIsTagARule(), Op.EQ);
searchBuilderFindByRuleTag.or("tagDoesNotExist", searchBuilderFindByRuleTag.entity().getIsTagARule(), Op.NULL);
@@ -492,8 +501,7 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
sc.setParameters("resourceState", (Object[])states);
sc.setParameters("cluster", clusterId);
- List<HostVO> hosts = listBy(sc);
- return hosts.size();
return getCount(sc);
}
@Override
@@ -504,36 +512,62 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
}
@Override
- public Integer countAllByTypeInZone(long zoneId, Type type) {
- SearchCriteria<HostVO> sc = HostTypeCountSearch.create();
- sc.setParameters("type", type);
- sc.setParameters("dc", zoneId);
- return getCount(sc);
- }
public Integer countAllInClusterByTypeAndStates(Long clusterId, final Host.Type type, List<Status> status) {
SearchCriteria<HostVO> sc = HostTypeClusterCountSearch.create();
if (clusterId != null) {
sc.setParameters("cluster", clusterId);
}
if (type != null) {
sc.setParameters("type", type);
}
if (status != null) {
sc.setParameters("status", status.toArray());
}
return getCount(sc);
}
@Override
- public List<HostVO> listByDataCenterId(long id) {
- return listByDataCenterIdAndState(id, ResourceState.Enabled);
- }
public Integer countAllByTypeInZone(long zoneId, Type type) {
SearchCriteria<HostVO> sc = HostTypeCountSearch.create();
sc.setParameters("type", type);
sc.setParameters("zoneId", zoneId);
return getCount(sc);
}
@Override
- public List<HostVO> listByDataCenterIdAndState(long id, ResourceState state) {
- SearchCriteria<HostVO> sc = scHostsFromZoneUpRouting(id);
- sc.setParameters("resourceState", state);
- return listBy(sc);
- }
public Integer countUpAndEnabledHostsInZone(long zoneId) {
SearchCriteria<HostVO> sc = HostTypeCountSearch.create();
sc.setParameters("type", Type.Routing);
sc.setParameters("resourceState", ResourceState.Enabled);
sc.setParameters("zoneId", zoneId);
return getCount(sc);
}
@Override
- public List<HostVO> listDisabledByDataCenterId(long id) {
- return listByDataCenterIdAndState(id, ResourceState.Disabled);
- }
public Pair<Integer, Integer> countAllHostsAndCPUSocketsByType(Type type) {
GenericSearchBuilder<HostVO, SumCount> sb = createSearchBuilder(SumCount.class);
sb.select("sum", Func.SUM, sb.entity().getCpuSockets());
sb.select("count", Func.COUNT, null);
sb.and("type", sb.entity().getType(), SearchCriteria.Op.EQ);
sb.done();
SearchCriteria<SumCount> sc = sb.create();
sc.setParameters("type", type);
SumCount result = customSearch(sc, null).get(0);
return new Pair<>((int)result.count, (int)result.sum);
}
- private SearchCriteria<HostVO> scHostsFromZoneUpRouting(long id) {
- SearchCriteria<HostVO> sc = DcSearch.create();
- sc.setParameters("dc", id);
- sc.setParameters("status", Status.Up);
- sc.setParameters("type", Host.Type.Routing);
- return sc;
- }
private List<Long> listIdsForRoutingByZoneIdAndResourceState(long zoneId, ResourceState state) {
return listIdsBy(Type.Routing, Status.Up, state, null, zoneId, null, null);
}
@Override
public List<Long> listEnabledIdsByDataCenterId(long id) {
return listIdsForRoutingByZoneIdAndResourceState(id, ResourceState.Enabled);
}
@Override
public List<Long> listDisabledIdsByDataCenterId(long id) {
return listIdsForRoutingByZoneIdAndResourceState(id, ResourceState.Disabled);
}
@Override
@@ -1178,6 +1212,11 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
return listBy(sc);
}
@Override
public List<Long> listIdsByDataCenterId(Long zoneId) {
return listIdsBy(Type.Routing, null, null, null, zoneId, null, null);
}
@Override
public List<HostVO> findByPodId(Long podId) {
SearchCriteria<HostVO> sc = PodSearch.create();
@@ -1185,6 +1224,11 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
return listBy(sc);
}
@Override
public List<Long> listIdsByPodId(Long podId) {
return listIdsBy(null, null, null, null, null, podId, null);
}
@Override
public List<HostVO> findByClusterId(Long clusterId) {
SearchCriteria<HostVO> sc = ClusterSearch.create();
@@ -1192,6 +1236,63 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
return listBy(sc);
}
protected List<Long> listIdsBy(Host.Type type, Status status, ResourceState resourceState,
HypervisorType hypervisorType, Long zoneId, Long podId, Long clusterId) {
GenericSearchBuilder<HostVO, Long> sb = createSearchBuilder(Long.class);
sb.selectFields(sb.entity().getId());
sb.and("type", sb.entity().getType(), SearchCriteria.Op.EQ);
sb.and("status", sb.entity().getStatus(), SearchCriteria.Op.EQ);
sb.and("resourceState", sb.entity().getResourceState(), SearchCriteria.Op.EQ);
sb.and("hypervisorType", sb.entity().getHypervisorType(), SearchCriteria.Op.EQ);
sb.and("zoneId", sb.entity().getDataCenterId(), SearchCriteria.Op.EQ);
sb.and("podId", sb.entity().getPodId(), SearchCriteria.Op.EQ);
sb.and("clusterId", sb.entity().getClusterId(), SearchCriteria.Op.EQ);
sb.done();
SearchCriteria<Long> sc = sb.create();
if (type != null) {
sc.setParameters("type", type);
}
if (status != null) {
sc.setParameters("status", status);
}
if (resourceState != null) {
sc.setParameters("resourceState", resourceState);
}
if (hypervisorType != null) {
sc.setParameters("hypervisorType", hypervisorType);
}
if (zoneId != null) {
sc.setParameters("zoneId", zoneId);
}
if (podId != null) {
sc.setParameters("podId", podId);
}
if (clusterId != null) {
sc.setParameters("clusterId", clusterId);
}
return customSearch(sc, null);
}
@Override
public List<Long> listIdsByClusterId(Long clusterId) {
return listIdsBy(null, null, null, null, null, null, clusterId);
}
@Override
public List<Long> listIdsForUpRouting(Long zoneId, Long podId, Long clusterId) {
return listIdsBy(Type.Routing, Status.Up, null, null, zoneId, podId, clusterId);
}
@Override
public List<Long> listIdsByType(Type type) {
return listIdsBy(type, null, null, null, null, null, null);
}
@Override
public List<Long> listIdsForUpEnabledByZoneAndHypervisor(Long zoneId, HypervisorType hypervisorType) {
return listIdsBy(null, Status.Up, ResourceState.Enabled, hypervisorType, zoneId, null, null);
}
@Override
public List<HostVO> findByClusterIdAndEncryptionSupport(Long clusterId) {
SearchBuilder<DetailVO> hostCapabilitySearch = _detailsDao.createSearchBuilder();
@@ -1244,6 +1345,15 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
return listBy(sc);
}
@Override
public HostVO findAnyStateHypervisorHostInCluster(long clusterId) {
SearchCriteria<HostVO> sc = TypeClusterStatusSearch.create();
sc.setParameters("type", Host.Type.Routing);
sc.setParameters("cluster", clusterId);
List<HostVO> list = listBy(sc, new Filter(1));
return list.isEmpty() ? null : list.get(0);
}
@Override
public HostVO findOldestExistentHypervisorHostInCluster(long clusterId) {
SearchCriteria<HostVO> sc = TypeClusterStatusSearch.create();
@@ -1263,9 +1373,7 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
@Override
public List<Long> listAllHosts(long zoneId) {
- SearchCriteria<Long> sc = HostIdSearch.create();
- sc.addAnd("dataCenterId", SearchCriteria.Op.EQ, zoneId);
- return customSearch(sc, null);
return listIdsBy(null, null, null, null, zoneId, null, null);
}
@Override
@@ -1449,15 +1557,6 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
return result;
}
- @Override
- public List<HostVO> listAllHostsByType(Host.Type type) {
- SearchCriteria<HostVO> sc = TypeSearch.create();
- sc.setParameters("type", type);
- sc.setParameters("resourceState", ResourceState.Enabled);
- return listBy(sc);
- }
@Override
public List<HostVO> listByType(Host.Type type) {
SearchCriteria<HostVO> sc = TypeSearch.create();
@@ -1602,4 +1701,71 @@ public class HostDaoImpl extends GenericDaoBase<HostVO, Long> implements HostDao
}
return String.format(sqlFindHostInZoneToExecuteCommand, hostResourceStatus);
}
@Override
public boolean isHostUp(long hostId) {
GenericSearchBuilder<HostVO, Status> sb = createSearchBuilder(Status.class);
sb.and("id", sb.entity().getId(), Op.EQ);
sb.selectFields(sb.entity().getStatus());
SearchCriteria<Status> sc = sb.create();
sc.setParameters("id", hostId);
List<Status> statuses = customSearch(sc, null);
return CollectionUtils.isNotEmpty(statuses) && Status.Up.equals(statuses.get(0));
}
@Override
public List<Long> findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(final Long zoneId, final Long clusterId,
final List<ResourceState> resourceStates, final List<Type> types,
final List<Hypervisor.HypervisorType> hypervisorTypes) {
GenericSearchBuilder<HostVO, Long> sb = createSearchBuilder(Long.class);
sb.selectFields(sb.entity().getId());
sb.and("zoneId", sb.entity().getDataCenterId(), SearchCriteria.Op.EQ);
sb.and("clusterId", sb.entity().getClusterId(), SearchCriteria.Op.EQ);
sb.and("resourceState", sb.entity().getResourceState(), SearchCriteria.Op.IN);
sb.and("type", sb.entity().getType(), SearchCriteria.Op.IN);
if (CollectionUtils.isNotEmpty(hypervisorTypes)) {
sb.and().op(sb.entity().getHypervisorType(), SearchCriteria.Op.NULL);
sb.or("hypervisorTypes", sb.entity().getHypervisorType(), SearchCriteria.Op.IN);
sb.cp();
}
sb.done();
SearchCriteria<Long> sc = sb.create();
if (zoneId != null) {
sc.setParameters("zoneId", zoneId);
}
if (clusterId != null) {
sc.setParameters("clusterId", clusterId);
}
if (CollectionUtils.isNotEmpty(hypervisorTypes)) {
sc.setParameters("hypervisorTypes", hypervisorTypes.toArray());
}
sc.setParameters("resourceState", resourceStates.toArray());
sc.setParameters("type", types.toArray());
return customSearch(sc, null);
}
@Override
public List<HypervisorType> listDistinctHypervisorTypes(final Long zoneId) {
GenericSearchBuilder<HostVO, HypervisorType> sb = createSearchBuilder(HypervisorType.class);
sb.and("zoneId", sb.entity().getDataCenterId(), SearchCriteria.Op.EQ);
sb.and("type", sb.entity().getType(), SearchCriteria.Op.EQ);
sb.select(null, Func.DISTINCT, sb.entity().getHypervisorType());
sb.done();
SearchCriteria<HypervisorType> sc = sb.create();
if (zoneId != null) {
sc.setParameters("zoneId", zoneId);
}
sc.setParameters("type", Type.Routing);
return customSearch(sc, null);
}
@Override
public List<HostVO> listByIds(List<Long> ids) {
if (CollectionUtils.isEmpty(ids)) {
return new ArrayList<>();
}
SearchCriteria<HostVO> sc = IdsSearch.create();
sc.setParameters("id", ids.toArray());
return search(sc, null);
}
}
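countAllHostsAndCPUSocketsByType collapses what used to be a full host listing into one aggregate query returning a count plus a CPU-socket sum. An illustrative caller; the reporter class is hypothetical, the DAO call and Pair accessors are CloudStack's:

    import javax.inject.Inject;
    import com.cloud.host.Host;
    import com.cloud.host.dao.HostDao;
    import com.cloud.utils.Pair;

    public class HostCapacityReporter {
        @Inject
        private HostDao hostDao;

        public String describeRoutingHosts() {
            // first = number of routing hosts, second = total CPU sockets, both computed in SQL
            Pair<Integer, Integer> p = hostDao.countAllHostsAndCPUSocketsByType(Host.Type.Routing);
            return p.first() + " hosts / " + p.second() + " CPU sockets";
        }
    }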


@@ -421,7 +421,7 @@ public class IPAddressDaoImpl extends GenericDaoBase<IPAddressVO, Long> implemen
public long countFreeIpsInVlan(long vlanDbId) {
SearchCriteria<IPAddressVO> sc = VlanDbIdSearchUnallocated.create();
sc.setParameters("vlanDbId", vlanDbId);
- return listBy(sc).size();
return getCount(sc);
}
@Override


@@ -415,8 +415,7 @@ public class NetworkDaoImpl extends GenericDaoBase<NetworkVO, Long>implements Ne
sc.setParameters("broadcastUri", broadcastURI);
sc.setParameters("guestType", guestTypes);
sc.setJoinParameters("persistent", "persistent", isPersistent);
- List<NetworkVO> persistentNetworks = search(sc, null);
- return persistentNetworks.size();
return getCount(sc);
}
@Override


@@ -55,8 +55,7 @@ public class CommandExecLogDaoImpl extends GenericDaoBase<CommandExecLogVO, Long
SearchCriteria<CommandExecLogVO> sc = CommandSearch.create();
sc.setParameters("host_id", id);
sc.setParameters("command_name", "CopyCommand");
- List<CommandExecLogVO> copyCmds = customSearch(sc, null);
- return copyCmds.size();
return getCount(sc);
}
@Override


@@ -54,7 +54,7 @@ public interface ServiceOfferingDao extends GenericDao<ServiceOfferingVO, Long>
List<ServiceOfferingVO> listPublicByCpuAndMemory(Integer cpus, Integer memory);
- List<ServiceOfferingVO> listByHostTag(String tag);
ServiceOfferingVO findServiceOfferingByComputeOnlyDiskOffering(long diskOfferingId, boolean includingRemoved);
List<Long> listIdsByHostTag(String tag);
}


@@ -34,6 +34,7 @@ import com.cloud.service.ServiceOfferingVO;
import com.cloud.storage.Storage.ProvisioningType;
import com.cloud.utils.db.DB;
import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.GenericSearchBuilder;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.exception.CloudRuntimeException;
@@ -293,8 +294,9 @@ public class ServiceOfferingDaoImpl extends GenericDaoBase<ServiceOfferingVO, Lo
}
@Override
- public List<ServiceOfferingVO> listByHostTag(String tag) {
public List<Long> listIdsByHostTag(String tag) {
- SearchBuilder<ServiceOfferingVO> sb = createSearchBuilder();
GenericSearchBuilder<ServiceOfferingVO, Long> sb = createSearchBuilder(Long.class);
sb.selectFields(sb.entity().getId());
sb.and("tagNotNull", sb.entity().getHostTag(), SearchCriteria.Op.NNULL);
sb.and().op("tagEq", sb.entity().getHostTag(), SearchCriteria.Op.EQ);
sb.or("tagStartLike", sb.entity().getHostTag(), SearchCriteria.Op.LIKE);
@@ -302,11 +304,12 @@ public class ServiceOfferingDaoImpl extends GenericDaoBase<ServiceOfferingVO, Lo
sb.or("tagEndLike", sb.entity().getHostTag(), SearchCriteria.Op.LIKE);
sb.cp();
sb.done();
- SearchCriteria<ServiceOfferingVO> sc = sb.create();
SearchCriteria<Long> sc = sb.create();
sc.setParameters("tagEq", tag);
sc.setParameters("tagStartLike", tag + ",%");
sc.setParameters("tagMidLike", "%," + tag + ",%");
sc.setParameters("tagEndLike", "%," + tag);
- return listBy(sc);
return customSearch(sc, null);
}
}
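listIdsByHostTag shows the recurring refactor in this PR: a SearchBuilder over the full entity becomes a GenericSearchBuilder that selects only the id column and is executed with customSearch. A condensed sketch of that projection idiom; the entity and column names are placeholders, not from this diff:

    // Inside a GenericDaoBase<SomeVO, Long> subclass; SomeVO and "name" are placeholders.
    public List<Long> listIdsByName(String name) {
        GenericSearchBuilder<SomeVO, Long> sb = createSearchBuilder(Long.class);
        sb.selectFields(sb.entity().getId());                 // project only the id column
        sb.and("name", sb.entity().getName(), SearchCriteria.Op.EQ);
        sb.done();
        SearchCriteria<Long> sc = sb.create();
        sc.setParameters("name", name);
        return customSearch(sc, null);                        // returns List<Long>, not full rows
    }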


@@ -34,7 +34,7 @@ public interface StoragePoolHostDao extends GenericDao<StoragePoolHostVO, Long>
List<Long> findHostsConnectedToPools(List<Long> poolIds);
- List<Pair<Long, Integer>> getDatacenterStoragePoolHostInfo(long dcId, boolean sharedOnly);
boolean hasDatacenterStoragePoolHostInfo(long dcId, boolean sharedOnly);
public void deletePrimaryRecordsForHost(long hostId);


@@ -55,11 +55,11 @@ public class StoragePoolHostDaoImpl extends GenericDaoBase<StoragePoolHostVO, Lo
protected static final String HOSTS_FOR_POOLS_SEARCH = "SELECT DISTINCT(ph.host_id) FROM storage_pool_host_ref ph, host h WHERE ph.host_id = h.id AND h.status = 'Up' AND resource_state = 'Enabled' AND ph.pool_id IN (?)";
- protected static final String STORAGE_POOL_HOST_INFO = "SELECT p.data_center_id, count(ph.host_id) " + " FROM storage_pool p, storage_pool_host_ref ph "
- + " WHERE p.id = ph.pool_id AND p.data_center_id = ? " + " GROUP by p.data_center_id";
protected static final String STORAGE_POOL_HOST_INFO = "SELECT (SELECT id FROM storage_pool_host_ref ph WHERE " +
"ph.pool_id=p.id limit 1) AS sphr FROM storage_pool p WHERE p.data_center_id = ?";
- protected static final String SHARED_STORAGE_POOL_HOST_INFO = "SELECT p.data_center_id, count(ph.host_id) " + " FROM storage_pool p, storage_pool_host_ref ph "
- + " WHERE p.id = ph.pool_id AND p.data_center_id = ? " + " AND p.pool_type NOT IN ('LVM', 'Filesystem')" + " GROUP by p.data_center_id";
protected static final String SHARED_STORAGE_POOL_HOST_INFO = "SELECT (SELECT id FROM storage_pool_host_ref ph " +
"WHERE ph.pool_id=p.id limit 1) AS sphr FROM storage_pool p WHERE p.data_center_id = ? AND p.pool_type NOT IN ('LVM', 'Filesystem')";
protected static final String DELETE_PRIMARY_RECORDS = "DELETE " + "FROM storage_pool_host_ref " + "WHERE host_id = ?";
@@ -169,23 +169,23 @@ public class StoragePoolHostDaoImpl extends GenericDaoBase<StoragePoolHostVO, Lo
}
@Override
- public List<Pair<Long, Integer>> getDatacenterStoragePoolHostInfo(long dcId, boolean sharedOnly) {
public boolean hasDatacenterStoragePoolHostInfo(long dcId, boolean sharedOnly) {
- ArrayList<Pair<Long, Integer>> l = new ArrayList<Pair<Long, Integer>>();
Long poolCount = 0L;
String sql = sharedOnly ? SHARED_STORAGE_POOL_HOST_INFO : STORAGE_POOL_HOST_INFO;
TransactionLegacy txn = TransactionLegacy.currentTxn();
- PreparedStatement pstmt = null;
- try {
- pstmt = txn.prepareAutoCloseStatement(sql);
try (PreparedStatement pstmt = txn.prepareAutoCloseStatement(sql)) {
pstmt.setLong(1, dcId);
ResultSet rs = pstmt.executeQuery();
while (rs.next()) {
- l.add(new Pair<Long, Integer>(rs.getLong(1), rs.getInt(2)));
poolCount = rs.getLong(1);
if (poolCount > 0) {
return true;
}
}
} catch (SQLException e) {
logger.debug("SQLException: ", e);
}
- return l;
return false;
}
/**

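The old getDatacenterStoragePoolHostInfo built a per-zone count list that callers only checked for emptiness; the new boolean form stops at the first pool that has a host entry. A hedged sketch of how a caller is expected to use it; the readiness-check class is hypothetical:

    import javax.inject.Inject;
    import com.cloud.storage.dao.StoragePoolHostDao;

    public class ZoneStorageReadinessCheck {
        @Inject
        private StoragePoolHostDao storagePoolHostDao;

        // True when at least one shared storage pool in the zone has a host entry;
        // the DAO returns as soon as it finds one instead of aggregating counts per zone.
        public boolean sharedStorageReady(long dcId) {
            return storagePoolHostDao.hasDatacenterStoragePoolHostInfo(dcId, true);
        }
    }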

@@ -67,6 +67,8 @@ public interface VMTemplateDao extends GenericDao<VMTemplateVO, Long>, StateDao<
public List<VMTemplateVO> userIsoSearch(boolean listRemoved);
List<VMTemplateVO> listAllReadySystemVMTemplates(Long zoneId);
VMTemplateVO findSystemVMTemplate(long zoneId);
VMTemplateVO findSystemVMReadyTemplate(long zoneId, HypervisorType hypervisorType);
@@ -91,6 +93,5 @@ public interface VMTemplateDao extends GenericDao<VMTemplateVO, Long>, StateDao<
List<VMTemplateVO> listByIds(List<Long> ids);
- List<VMTemplateVO> listByTemplateTag(String tag);
List<Long> listIdsByTemplateTag(String tag);
}


@@ -344,19 +344,12 @@ public class VMTemplateDaoImpl extends GenericDaoBase<VMTemplateVO, Long> implem
readySystemTemplateSearch = createSearchBuilder();
readySystemTemplateSearch.and("state", readySystemTemplateSearch.entity().getState(), SearchCriteria.Op.EQ);
readySystemTemplateSearch.and("templateType", readySystemTemplateSearch.entity().getTemplateType(), SearchCriteria.Op.EQ);
readySystemTemplateSearch.and("hypervisorType", readySystemTemplateSearch.entity().getHypervisorType(), SearchCriteria.Op.IN);
SearchBuilder<TemplateDataStoreVO> templateDownloadSearch = _templateDataStoreDao.createSearchBuilder();
templateDownloadSearch.and("downloadState", templateDownloadSearch.entity().getDownloadState(), SearchCriteria.Op.IN);
readySystemTemplateSearch.join("vmTemplateJoinTemplateStoreRef", templateDownloadSearch, templateDownloadSearch.entity().getTemplateId(),
readySystemTemplateSearch.entity().getId(), JoinBuilder.JoinType.INNER);
- SearchBuilder<HostVO> hostHyperSearch2 = _hostDao.createSearchBuilder();
- hostHyperSearch2.and("type", hostHyperSearch2.entity().getType(), SearchCriteria.Op.EQ);
- hostHyperSearch2.and("zoneId", hostHyperSearch2.entity().getDataCenterId(), SearchCriteria.Op.EQ);
- hostHyperSearch2.and("removed", hostHyperSearch2.entity().getRemoved(), SearchCriteria.Op.NULL);
- hostHyperSearch2.groupBy(hostHyperSearch2.entity().getHypervisorType());
- readySystemTemplateSearch.join("tmplHyper", hostHyperSearch2, hostHyperSearch2.entity().getHypervisorType(), readySystemTemplateSearch.entity()
- .getHypervisorType(), JoinBuilder.JoinType.INNER);
- hostHyperSearch2.done();
readySystemTemplateSearch.groupBy(readySystemTemplateSearch.entity().getId());
readySystemTemplateSearch.done();
tmpltTypeHyperSearch2 = createSearchBuilder();
@@ -556,30 +549,36 @@ public class VMTemplateDaoImpl extends GenericDaoBase<VMTemplateVO, Long> implem
}
@Override
- public VMTemplateVO findSystemVMReadyTemplate(long zoneId, HypervisorType hypervisorType) {
public List<VMTemplateVO> listAllReadySystemVMTemplates(Long zoneId) {
List<HypervisorType> availableHypervisors = _hostDao.listDistinctHypervisorTypes(zoneId);
if (CollectionUtils.isEmpty(availableHypervisors)) {
return Collections.emptyList();
}
SearchCriteria<VMTemplateVO> sc = readySystemTemplateSearch.create();
sc.setParameters("templateType", Storage.TemplateType.SYSTEM);
sc.setParameters("state", VirtualMachineTemplate.State.Active);
- sc.setJoinParameters("tmplHyper", "type", Host.Type.Routing);
- sc.setJoinParameters("tmplHyper", "zoneId", zoneId);
- sc.setJoinParameters("vmTemplateJoinTemplateStoreRef", "downloadState", new VMTemplateStorageResourceAssoc.Status[] {VMTemplateStorageResourceAssoc.Status.DOWNLOADED, VMTemplateStorageResourceAssoc.Status.BYPASSED});
sc.setParameters("hypervisorType", availableHypervisors.toArray());
sc.setJoinParameters("vmTemplateJoinTemplateStoreRef", "downloadState",
List.of(VMTemplateStorageResourceAssoc.Status.DOWNLOADED,
VMTemplateStorageResourceAssoc.Status.BYPASSED).toArray());
// order by descending order of id
- List<VMTemplateVO> tmplts = listBy(sc, new Filter(VMTemplateVO.class, "id", false, null, null));
- if (tmplts.size() > 0) {
- if (hypervisorType == HypervisorType.Any) {
- return tmplts.get(0);
- }
- for (VMTemplateVO tmplt : tmplts) {
- if (tmplt.getHypervisorType() == hypervisorType) {
- return tmplt;
- }
- }
- }
- return null;
return listBy(sc, new Filter(VMTemplateVO.class, "id", false, null, null));
}
@Override
public VMTemplateVO findSystemVMReadyTemplate(long zoneId, HypervisorType hypervisorType) {
List<VMTemplateVO> templates = listAllReadySystemVMTemplates(zoneId);
if (CollectionUtils.isEmpty(templates)) {
return null;
}
if (hypervisorType == HypervisorType.Any) {
return templates.get(0);
}
return templates.stream()
.filter(t -> t.getHypervisorType() == hypervisorType)
.findFirst()
.orElse(null);
}
@Override
public VMTemplateVO findRoutingTemplate(HypervisorType hType, String templateName) {
@@ -687,13 +686,14 @@ public class VMTemplateDaoImpl extends GenericDaoBase<VMTemplateVO, Long> implem
}
@Override
- public List<VMTemplateVO> listByTemplateTag(String tag) {
public List<Long> listIdsByTemplateTag(String tag) {
- SearchBuilder<VMTemplateVO> sb = createSearchBuilder();
GenericSearchBuilder<VMTemplateVO, Long> sb = createSearchBuilder(Long.class);
sb.selectFields(sb.entity().getId());
sb.and("tag", sb.entity().getTemplateTag(), SearchCriteria.Op.EQ);
sb.done();
- SearchCriteria<VMTemplateVO> sc = sb.create();
SearchCriteria<Long> sc = sb.create();
sc.setParameters("tag", tag);
- return listIncludingRemovedBy(sc);
return customSearchIncludingRemoved(sc, null);
}
@Override

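findSystemVMReadyTemplate is now a thin filter over listAllReadySystemVMTemplates, which itself only considers hypervisors actually present in the zone (via HostDao.listDistinctHypervisorTypes) instead of joining the host table per lookup. A sketch of the resulting call pattern when preparing a system VM; the caller class is hypothetical:

    import javax.inject.Inject;
    import com.cloud.hypervisor.Hypervisor.HypervisorType;
    import com.cloud.storage.VMTemplateVO;
    import com.cloud.storage.dao.VMTemplateDao;

    public class SystemVmTemplatePicker {
        @Inject
        private VMTemplateDao templateDao;

        // Returns the newest ready SYSTEM template for the zone that matches the requested
        // hypervisor, or the newest of any hypervisor when HypervisorType.Any is passed.
        public VMTemplateVO pick(long zoneId, HypervisorType type) {
            return templateDao.findSystemVMReadyTemplate(zoneId, type);
        }
    }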

@@ -571,14 +571,6 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
}
}
- public static class SumCount {
- public long sum;
- public long count;
- public SumCount() {
- }
- }
@Override
public List<VolumeVO> listVolumesToBeDestroyed() {
SearchCriteria<VolumeVO> sc = AllFieldsSearch.create();


@@ -870,7 +870,7 @@ public class SystemVmTemplateRegistration {
public void doInTransactionWithoutResult(final TransactionStatus status) {
Set<Hypervisor.HypervisorType> hypervisorsListInUse = new HashSet<Hypervisor.HypervisorType>();
try {
- hypervisorsListInUse = clusterDao.getDistictAvailableHypervisorsAcrossClusters();
hypervisorsListInUse = clusterDao.getDistinctAvailableHypervisorsAcrossClusters();
} catch (final Exception e) {
LOGGER.error("updateSystemVmTemplates: Exception caught while getting hypervisor types from clusters: " + e.getMessage());


@@ -114,6 +114,17 @@ public class DatabaseAccessObject {
}
}
public void renameIndex(Connection conn, String tableName, String oldName, String newName) {
String stmt = String.format("ALTER TABLE %s RENAME INDEX %s TO %s", tableName, oldName, newName);
logger.debug("Statement: {}", stmt);
try (PreparedStatement pstmt = conn.prepareStatement(stmt)) {
pstmt.execute();
logger.debug("Renamed index {} to {}", oldName, newName);
} catch (SQLException e) {
logger.warn("Unable to rename index {} to {}", oldName, newName, e);
}
}
protected void closePreparedStatement(PreparedStatement pstmt, String errorMessage) {
try {
if (pstmt != null) {


@@ -31,6 +31,12 @@ public class DbUpgradeUtils {
}
}
public static void renameIndexIfNeeded(Connection conn, String tableName, String oldName, String newName) {
if (!dao.indexExists(conn, tableName, oldName)) {
dao.renameIndex(conn, tableName, oldName, newName);
}
}
public static void addForeignKey(Connection conn, String tableName, String tableColumn, String foreignTableName, String foreignColumnName) {
dao.addForeignKey(conn, tableName, tableColumn, foreignTableName, foreignColumnName);
}


@@ -53,6 +53,7 @@ public class Upgrade42000to42010 extends DbUpgradeAbstractImpl implements DbUpgr
@Override
public void performDataMigration(Connection conn) {
addIndexes(conn);
}
@Override
@@ -80,4 +81,42 @@ public class Upgrade42000to42010 extends DbUpgradeAbstractImpl implements DbUpgr
throw new CloudRuntimeException("Failed to find / register SystemVM template(s)");
}
}
private void addIndexes(Connection conn) {
DbUpgradeUtils.addIndexIfNeeded(conn, "host", "mgmt_server_id");
DbUpgradeUtils.addIndexIfNeeded(conn, "host", "resource");
DbUpgradeUtils.addIndexIfNeeded(conn, "host", "resource_state");
DbUpgradeUtils.addIndexIfNeeded(conn, "host", "type");
DbUpgradeUtils.renameIndexIfNeeded(conn, "user_ip_address", "public_ip_address", "uk_public_ip_address");
DbUpgradeUtils.addIndexIfNeeded(conn, "user_ip_address", "public_ip_address");
DbUpgradeUtils.addIndexIfNeeded(conn, "user_ip_address", "data_center_id");
DbUpgradeUtils.addIndexIfNeeded(conn, "user_ip_address", "vlan_db_id");
DbUpgradeUtils.addIndexIfNeeded(conn, "user_ip_address", "removed");
DbUpgradeUtils.addIndexIfNeeded(conn, "vlan", "vlan_type");
DbUpgradeUtils.addIndexIfNeeded(conn, "vlan", "data_center_id");
DbUpgradeUtils.addIndexIfNeeded(conn, "vlan", "removed");
DbUpgradeUtils.addIndexIfNeeded(conn, "network_offering_details", "name");
DbUpgradeUtils.addIndexIfNeeded(conn, "network_offering_details", "resource_id", "resource_type");
DbUpgradeUtils.addIndexIfNeeded(conn, "service_offering", "cpu");
DbUpgradeUtils.addIndexIfNeeded(conn, "service_offering", "speed");
DbUpgradeUtils.addIndexIfNeeded(conn, "service_offering", "ram_size");
DbUpgradeUtils.addIndexIfNeeded(conn, "op_host_planner_reservation", "resource_usage");
DbUpgradeUtils.addIndexIfNeeded(conn, "storage_pool", "pool_type");
DbUpgradeUtils.addIndexIfNeeded(conn, "storage_pool", "data_center_id", "status", "scope", "hypervisor");
DbUpgradeUtils.addIndexIfNeeded(conn, "router_network_ref", "guest_type");
DbUpgradeUtils.addIndexIfNeeded(conn, "domain_router", "role");
DbUpgradeUtils.addIndexIfNeeded(conn, "async_job", "instance_type", "job_status");
DbUpgradeUtils.addIndexIfNeeded(conn, "cluster", "managed_state");
}
}


@@ -45,7 +45,7 @@ public interface ConsoleProxyDao extends GenericDao<ConsoleProxyVO, Long> {
public List<ConsoleProxyLoadInfo> getDatacenterSessionLoadMatrix();
- public List<Pair<Long, Integer>> getDatacenterStoragePoolHostInfo(long dcId, boolean countAllPoolTypes);
public boolean hasDatacenterStoragePoolHostInfo(long dcId, boolean sharedOnly);
public List<Pair<Long, Integer>> getProxyLoadMatrix();


@@ -23,7 +23,6 @@ import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import org.springframework.stereotype.Component;
import com.cloud.info.ConsoleProxyLoadInfo;
@@ -76,11 +75,11 @@ public class ConsoleProxyDaoImpl extends GenericDaoBase<ConsoleProxyVO, Long> im
private static final String GET_PROXY_ACTIVE_LOAD = "SELECT active_session AS count" + " FROM console_proxy" + " WHERE id=?";
- private static final String STORAGE_POOL_HOST_INFO = "SELECT p.data_center_id, count(ph.host_id) " + " FROM storage_pool p, storage_pool_host_ref ph "
- + " WHERE p.id = ph.pool_id AND p.data_center_id = ? " + " GROUP by p.data_center_id";
protected static final String STORAGE_POOL_HOST_INFO = "SELECT (SELECT id FROM storage_pool_host_ref ph WHERE " +
"ph.pool_id=p.id limit 1) AS sphr FROM storage_pool p WHERE p.data_center_id = ?";
- private static final String SHARED_STORAGE_POOL_HOST_INFO = "SELECT p.data_center_id, count(ph.host_id) " + " FROM storage_pool p, storage_pool_host_ref ph "
- + " WHERE p.pool_type <> 'LVM' AND p.id = ph.pool_id AND p.data_center_id = ? " + " GROUP by p.data_center_id";
protected static final String SHARED_STORAGE_POOL_HOST_INFO = "SELECT (SELECT id FROM storage_pool_host_ref ph " +
"WHERE ph.pool_id=p.id limit 1) AS sphr FROM storage_pool p WHERE p.data_center_id = ? AND p.pool_type NOT IN ('LVM', 'Filesystem')";
protected SearchBuilder<ConsoleProxyVO> DataCenterStatusSearch;
protected SearchBuilder<ConsoleProxyVO> StateSearch;
@@ -219,28 +218,23 @@ public class ConsoleProxyDaoImpl extends GenericDaoBase<ConsoleProxyVO, Long> im
}
@Override
- public List<Pair<Long, Integer>> getDatacenterStoragePoolHostInfo(long dcId, boolean countAllPoolTypes) {
public boolean hasDatacenterStoragePoolHostInfo(long dcId, boolean sharedOnly) {
- ArrayList<Pair<Long, Integer>> l = new ArrayList<Pair<Long, Integer>>();
Long poolCount = 0L;
String sql = sharedOnly ? SHARED_STORAGE_POOL_HOST_INFO : STORAGE_POOL_HOST_INFO;
TransactionLegacy txn = TransactionLegacy.currentTxn();
- ;
- PreparedStatement pstmt = null;
- try {
- if (countAllPoolTypes) {
- pstmt = txn.prepareAutoCloseStatement(STORAGE_POOL_HOST_INFO);
- } else {
- pstmt = txn.prepareAutoCloseStatement(SHARED_STORAGE_POOL_HOST_INFO);
- }
try (PreparedStatement pstmt = txn.prepareAutoCloseStatement(sql)) {
pstmt.setLong(1, dcId);
ResultSet rs = pstmt.executeQuery();
while (rs.next()) {
- l.add(new Pair<Long, Integer>(rs.getLong(1), rs.getInt(2)));
poolCount = rs.getLong(1);
if (poolCount > 0) {
return true;
}
}
} catch (SQLException e) {
logger.debug("Caught SQLException: ", e);
}
- return l;
return false;
}
@Override


@@ -170,8 +170,7 @@ public class NicIpAliasDaoImpl extends GenericDaoBase<NicIpAliasVO, Long> implem
public Integer countAliasIps(long id) {
SearchCriteria<NicIpAliasVO> sc = AllFieldsSearch.create();
sc.setParameters("instanceId", id);
- List<NicIpAliasVO> list = listBy(sc);
- return list.size();
return getCount(sc);
}
@Override


@@ -16,6 +16,7 @@
// under the License.
package com.cloud.vm.dao;
import java.util.Collection;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
@@ -81,7 +82,7 @@ public interface VMInstanceDao extends GenericDao<VMInstanceVO, Long>, StateDao<
List<VMInstanceVO> listByHostAndState(long hostId, State... states);
- List<VMInstanceVO> listByTypes(VirtualMachine.Type... types);
int countByTypes(VirtualMachine.Type... types);
VMInstanceVO findByIdTypes(long id, VirtualMachine.Type... types);
@@ -144,21 +145,28 @@ public interface VMInstanceDao extends GenericDao<VMInstanceVO, Long>, StateDao<
*/
List<String> listDistinctHostNames(long networkId, VirtualMachine.Type... types);
List<VMInstanceVO> findByHostInStatesExcluding(Long hostId, Collection<Long> excludingIds, State... states);
List<VMInstanceVO> findByHostInStates(Long hostId, State... states);
List<VMInstanceVO> listStartingWithNoHostId();
boolean updatePowerState(long instanceId, long powerHostId, VirtualMachine.PowerState powerState, Date wisdomEra);
Map<Long, VirtualMachine.PowerState> updatePowerState(Map<Long, VirtualMachine.PowerState> instancePowerStates,
long powerHostId, Date wisdomEra);
void resetVmPowerStateTracking(long instanceId);
void resetVmPowerStateTracking(List<Long> instanceId);
void resetHostPowerStateTracking(long hostId);
HashMap<String, Long> countVgpuVMs(Long dcId, Long podId, Long clusterId);
VMInstanceVO findVMByHostNameInZone(String hostName, long zoneId);
- boolean isPowerStateUpToDate(long instanceId);
boolean isPowerStateUpToDate(VMInstanceVO instance);
List<VMInstanceVO> listNonMigratingVmsByHostEqualsLastHost(long hostId);
@@ -170,4 +178,13 @@ public interface VMInstanceDao extends GenericDao<VMInstanceVO, Long>, StateDao<
List<Long> skippedVmIds);
Pair<List<VMInstanceVO>, Integer> listByVmsNotInClusterUsingPool(long clusterId, long poolId);
List<VMInstanceVO> listIdServiceOfferingForUpVmsByHostId(Long hostId);
List<VMInstanceVO> listIdServiceOfferingForVmsMigratingFromHost(Long hostId);
Map<String, Long> getNameIdMapForVmInstanceNames(Collection<String> names);
Map<String, Long> getNameIdMapForVmIds(Collection<Long> ids);
}
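The new Map-based updatePowerState and findByHostInStatesExcluding overloads support the batched PingRoutingCommand handling described in this PR: one DAO call per host report instead of one per VM. A rough sketch of how a power-state sync loop could drive them; the sync class is hypothetical and the return-value semantics are an assumption, not spelled out in this diff:

    import java.util.Date;
    import java.util.Map;
    import javax.inject.Inject;
    import com.cloud.vm.VirtualMachine;
    import com.cloud.vm.dao.VMInstanceDao;

    public class HostPowerStateSync {
        @Inject
        private VMInstanceDao vmInstanceDao;

        // vmPowerStates comes from a host's ping report: VM id -> reported power state.
        public void processReport(long hostId, Map<Long, VirtualMachine.PowerState> vmPowerStates) {
            // One batched update; the returned map is assumed to hold the entries that actually changed.
            Map<Long, VirtualMachine.PowerState> updated =
                    vmInstanceDao.updatePowerState(vmPowerStates, hostId, new Date());
            // follow-up handling (e.g. out-of-band state processing) would consume "updated"
        }
    }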


@@ -20,6 +20,7 @@ import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
@@ -75,6 +76,7 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
protected SearchBuilder<VMInstanceVO> LHVMClusterSearch;
protected SearchBuilder<VMInstanceVO> IdStatesSearch;
protected SearchBuilder<VMInstanceVO> AllFieldsSearch;
protected SearchBuilder<VMInstanceVO> IdServiceOfferingIdSelectSearch;
protected SearchBuilder<VMInstanceVO> ZoneTemplateNonExpungedSearch;
protected SearchBuilder<VMInstanceVO> TemplateNonExpungedSearch;
protected SearchBuilder<VMInstanceVO> NameLikeSearch;
@@ -101,6 +103,7 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
protected SearchBuilder<VMInstanceVO> BackupSearch;
protected SearchBuilder<VMInstanceVO> LastHostAndStatesSearch;
protected SearchBuilder<VMInstanceVO> VmsNotInClusterUsingPool;
protected SearchBuilder<VMInstanceVO> IdsPowerStateSelectSearch;
@Inject
ResourceTagDao tagsDao;
@@ -175,6 +178,14 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
AllFieldsSearch.and("account", AllFieldsSearch.entity().getAccountId(), Op.EQ);
AllFieldsSearch.done();
IdServiceOfferingIdSelectSearch = createSearchBuilder();
IdServiceOfferingIdSelectSearch.and("host", IdServiceOfferingIdSelectSearch.entity().getHostId(), Op.EQ);
IdServiceOfferingIdSelectSearch.and("lastHost", IdServiceOfferingIdSelectSearch.entity().getLastHostId(), Op.EQ);
IdServiceOfferingIdSelectSearch.and("state", IdServiceOfferingIdSelectSearch.entity().getState(), Op.EQ);
IdServiceOfferingIdSelectSearch.and("states", IdServiceOfferingIdSelectSearch.entity().getState(), Op.IN);
IdServiceOfferingIdSelectSearch.selectFields(IdServiceOfferingIdSelectSearch.entity().getId(), IdServiceOfferingIdSelectSearch.entity().getServiceOfferingId());
IdServiceOfferingIdSelectSearch.done();
ZoneTemplateNonExpungedSearch = createSearchBuilder();
ZoneTemplateNonExpungedSearch.and("zone", ZoneTemplateNonExpungedSearch.entity().getDataCenterId(), Op.EQ);
ZoneTemplateNonExpungedSearch.and("template", ZoneTemplateNonExpungedSearch.entity().getTemplateId(), Op.EQ);
@@ -274,6 +285,7 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
HostAndStateSearch = createSearchBuilder();
HostAndStateSearch.and("host", HostAndStateSearch.entity().getHostId(), Op.EQ);
HostAndStateSearch.and("states", HostAndStateSearch.entity().getState(), Op.IN);
HostAndStateSearch.and("idsNotIn", HostAndStateSearch.entity().getId(), Op.NIN);
HostAndStateSearch.done();
StartingWithNoHostSearch = createSearchBuilder();
@@ -323,6 +335,15 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
VmsNotInClusterUsingPool.join("hostSearch2", hostSearch2, hostSearch2.entity().getId(), VmsNotInClusterUsingPool.entity().getHostId(), JoinType.INNER);
VmsNotInClusterUsingPool.and("vmStates", VmsNotInClusterUsingPool.entity().getState(), Op.IN);
VmsNotInClusterUsingPool.done();
IdsPowerStateSelectSearch = createSearchBuilder();
IdsPowerStateSelectSearch.and("id", IdsPowerStateSelectSearch.entity().getId(), Op.IN);
IdsPowerStateSelectSearch.selectFields(IdsPowerStateSelectSearch.entity().getId(),
IdsPowerStateSelectSearch.entity().getPowerHostId(),
IdsPowerStateSelectSearch.entity().getPowerState(),
IdsPowerStateSelectSearch.entity().getPowerStateUpdateCount(),
IdsPowerStateSelectSearch.entity().getPowerStateUpdateTime());
IdsPowerStateSelectSearch.done();
} }
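
The two search builders added above follow the partial-select pattern used throughout this change: declare the needed columns with selectFields() and run the query through customSearch(). The sketch below is illustrative only (the DAO class name and method are made up for this example); it shows the pattern on VMInstanceVO and assumes the builder is initialised once at construction time, as in the code above.

import java.util.List;

import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.SearchCriteria.Op;
import com.cloud.vm.VMInstanceVO;

public class PartialSelectExampleDao extends GenericDaoBase<VMInstanceVO, Long> {
    protected SearchBuilder<VMInstanceVO> IdNameSelectSearch;

    protected void initSearches() {
        IdNameSelectSearch = createSearchBuilder();
        IdNameSelectSearch.and("host", IdNameSelectSearch.entity().getHostId(), Op.EQ);
        // Fetch only the id and instance name columns instead of hydrating full rows.
        IdNameSelectSearch.selectFields(IdNameSelectSearch.entity().getId(),
                IdNameSelectSearch.entity().getInstanceName());
        IdNameSelectSearch.done();
    }

    public List<VMInstanceVO> listIdNameByHost(Long hostId) {
        SearchCriteria<VMInstanceVO> sc = IdNameSelectSearch.create();
        sc.setParameters("host", hostId);
        // customSearch() returns VOs populated only with the selected fields.
        return customSearch(sc, null);
    }
}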
@Override @Override
@ -458,10 +479,10 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
} }
@Override @Override
public List<VMInstanceVO> listByTypes(Type... types) { public int countByTypes(Type... types) {
SearchCriteria<VMInstanceVO> sc = TypesSearch.create(); SearchCriteria<VMInstanceVO> sc = TypesSearch.create();
sc.setParameters("types", (Object[])types); sc.setParameters("types", (Object[])types);
return listBy(sc); return getCount(sc);
} }
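
The countByTypes() change above, like the ManagementServerHostPeerDaoImpl change later in this diff, replaces a full row retrieval with a count query. A minimal, hypothetical sketch of the pattern:

import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.vm.VMInstanceVO;

public class CountOnlyExampleDao extends GenericDaoBase<VMInstanceVO, Long> {
    public int countMatching(SearchCriteria<VMInstanceVO> sc) {
        // Before this change: return listBy(sc).size();  -- hydrates every matching row just to count it.
        // getCount(sc) issues a single SELECT COUNT(*) on the database side instead.
        return getCount(sc);
    }
}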
@Override @Override
@ -897,6 +918,17 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
return result; return result;
} }
@Override
public List<VMInstanceVO> findByHostInStatesExcluding(Long hostId, Collection<Long> excludingIds, State... states) {
SearchCriteria<VMInstanceVO> sc = HostAndStateSearch.create();
sc.setParameters("host", hostId);
if (excludingIds != null && !excludingIds.isEmpty()) {
sc.setParameters("idsNotIn", excludingIds.toArray());
}
sc.setParameters("states", (Object[])states);
return listBy(sc);
}
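
A hedged sketch of how a caller might combine the new findByHostInStatesExcluding() with VmWorkJobDao.listVmIdsWithPendingJob() (added further down in this diff) when scanning a host for stalled VMs. The helper class is illustrative; only the two DAO signatures come from this change.

import java.util.List;

import org.apache.cloudstack.framework.jobs.dao.VmWorkJobDao;

import com.cloud.vm.VMInstanceVO;
import com.cloud.vm.VirtualMachine.State;
import com.cloud.vm.dao.VMInstanceDao;

public class StalledVmScanExample {
    public static List<VMInstanceVO> findStalledCandidates(VMInstanceDao vmInstanceDao,
            VmWorkJobDao vmWorkJobDao, long hostId) {
        // VMs that already have an in-progress work job are being handled elsewhere and can be skipped.
        List<Long> busyVmIds = vmWorkJobDao.listVmIdsWithPendingJob();
        return vmInstanceDao.findByHostInStatesExcluding(hostId, busyVmIds,
                State.Starting, State.Stopping, State.Migrating);
    }
}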
@Override @Override
public List<VMInstanceVO> findByHostInStates(Long hostId, State... states) { public List<VMInstanceVO> findByHostInStates(Long hostId, State... states) {
SearchCriteria<VMInstanceVO> sc = HostAndStateSearch.create(); SearchCriteria<VMInstanceVO> sc = HostAndStateSearch.create();
@ -912,42 +944,109 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
return listBy(sc); return listBy(sc);
} }
// removed:
    @Override
    public boolean updatePowerState(final long instanceId, final long powerHostId, final VirtualMachine.PowerState powerState, Date wisdomEra) {
        return Transaction.execute(new TransactionCallback<>() {
            @Override
            public Boolean doInTransaction(TransactionStatus status) {
                boolean needToUpdate = false;
                VMInstanceVO instance = findById(instanceId);
                if (instance != null
                        && (null == instance.getPowerStateUpdateTime()
                        || instance.getPowerStateUpdateTime().before(wisdomEra))) {
                    Long savedPowerHostId = instance.getPowerHostId();
                    if (instance.getPowerState() != powerState
                            || savedPowerHostId == null
                            || savedPowerHostId != powerHostId
                            || !isPowerStateInSyncWithInstanceState(powerState, powerHostId, instance)) {
                        instance.setPowerState(powerState);
                        instance.setPowerHostId(powerHostId);
                        instance.setPowerStateUpdateCount(1);
                        instance.setPowerStateUpdateTime(DateUtil.currentGMTTime());
                        needToUpdate = true;
                        update(instanceId, instance);
                    } else {
                        // to reduce DB updates, consecutive same state update for more than 3 times
                        if (instance.getPowerStateUpdateCount() < MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT) {
                            instance.setPowerStateUpdateCount(instance.getPowerStateUpdateCount() + 1);
                            instance.setPowerStateUpdateTime(DateUtil.currentGMTTime());
                            needToUpdate = true;
                            update(instanceId, instance);
                        }
                    }
                }
                return needToUpdate;
            }
        });
    }
// replaced with:
    protected List<VMInstanceVO> listSelectPowerStateByIds(final List<Long> ids) {
        if (CollectionUtils.isEmpty(ids)) {
            return new ArrayList<>();
        }
        SearchCriteria<VMInstanceVO> sc = IdsPowerStateSelectSearch.create();
        sc.setParameters("id", ids.toArray());
        return customSearch(sc, null);
    }

    protected Integer getPowerUpdateCount(final VMInstanceVO instance, final long powerHostId,
            final VirtualMachine.PowerState powerState, Date wisdomEra) {
        if (instance.getPowerStateUpdateTime() == null || instance.getPowerStateUpdateTime().before(wisdomEra)) {
            Long savedPowerHostId = instance.getPowerHostId();
            boolean isStateMismatch = instance.getPowerState() != powerState
                    || savedPowerHostId == null
                    || !savedPowerHostId.equals(powerHostId)
                    || !isPowerStateInSyncWithInstanceState(powerState, powerHostId, instance);
            if (isStateMismatch) {
                return 1;
            } else if (instance.getPowerStateUpdateCount() < MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT) {
                return instance.getPowerStateUpdateCount() + 1;
            }
        }
        return null;
    }

    @Override
    public boolean updatePowerState(final long instanceId, final long powerHostId,
            final VirtualMachine.PowerState powerState, Date wisdomEra) {
        return Transaction.execute((TransactionCallback<Boolean>) status -> {
            VMInstanceVO instance = findById(instanceId);
            if (instance == null) {
                return false;
            }
            // Check if we need to update based on powerStateUpdateTime
            if (instance.getPowerStateUpdateTime() == null || instance.getPowerStateUpdateTime().before(wisdomEra)) {
                Long savedPowerHostId = instance.getPowerHostId();
                boolean isStateMismatch = instance.getPowerState() != powerState
                        || savedPowerHostId == null
                        || !savedPowerHostId.equals(powerHostId)
                        || !isPowerStateInSyncWithInstanceState(powerState, powerHostId, instance);
                if (isStateMismatch) {
                    instance.setPowerState(powerState);
                    instance.setPowerHostId(powerHostId);
                    instance.setPowerStateUpdateCount(1);
                } else if (instance.getPowerStateUpdateCount() < MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT) {
                    instance.setPowerStateUpdateCount(instance.getPowerStateUpdateCount() + 1);
                } else {
                    // No need to update if power state is already in sync and count exceeded
                    return false;
                }
                instance.setPowerStateUpdateTime(DateUtil.currentGMTTime());
                update(instanceId, instance);
                return true; // Return true since an update occurred
            }
            return false;
        });
    }
@Override
public Map<Long, VirtualMachine.PowerState> updatePowerState(
final Map<Long, VirtualMachine.PowerState> instancePowerStates, long powerHostId, Date wisdomEra) {
Map<Long, VirtualMachine.PowerState> notUpdated = new HashMap<>();
List<VMInstanceVO> instances = listSelectPowerStateByIds(new ArrayList<>(instancePowerStates.keySet()));
Map<Long, Integer> updateCounts = new HashMap<>();
for (VMInstanceVO instance : instances) {
VirtualMachine.PowerState powerState = instancePowerStates.get(instance.getId());
Integer count = getPowerUpdateCount(instance, powerHostId, powerState, wisdomEra);
if (count != null) {
updateCounts.put(instance.getId(), count);
} else {
notUpdated.put(instance.getId(), powerState);
}
}
if (updateCounts.isEmpty()) {
return notUpdated;
}
StringBuilder sql = new StringBuilder("UPDATE `cloud`.`vm_instance` SET " +
"`power_host` = ?, `power_state_update_time` = now(), `power_state` = CASE ");
updateCounts.keySet().forEach(key -> {
sql.append("WHEN id = ").append(key).append(" THEN '").append(instancePowerStates.get(key)).append("' ");
});
sql.append("END, `power_state_update_count` = CASE ");
StringBuilder idList = new StringBuilder();
updateCounts.forEach((key, value) -> {
sql.append("WHEN `id` = ").append(key).append(" THEN ").append(value).append(" ");
idList.append(key).append(",");
});
idList.setLength(idList.length() - 1);
sql.append("END WHERE `id` IN (").append(idList).append(")");
TransactionLegacy txn = TransactionLegacy.currentTxn();
try (PreparedStatement pstmt = txn.prepareAutoCloseStatement(sql.toString())) {
pstmt.setLong(1, powerHostId);
pstmt.executeUpdate();
} catch (SQLException e) {
logger.error("Unable to execute update power states SQL from VMs {} due to: {}",
idList, e.getMessage(), e);
return instancePowerStates;
}
return notUpdated;
}
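
A hedged sketch of a batch caller for the map-based updatePowerState() above: one multi-row UPDATE covers every VM whose tracked power state actually changed, and the returned map holds the VMs that were left untouched (already in sync past the max same-state count, or updated more recently than the cut-off). The surrounding sync code is assumed; only the DAO signature comes from this change.

import java.util.Date;
import java.util.Map;

import com.cloud.utils.DateUtil;
import com.cloud.vm.VirtualMachine;
import com.cloud.vm.dao.VMInstanceDao;

public class PowerStateBatchSyncExample {
    public static Map<Long, VirtualMachine.PowerState> syncHostReport(VMInstanceDao vmInstanceDao,
            long hostId, Map<Long, VirtualMachine.PowerState> reportedStates) {
        // Records whose power_state_update_time is older than this cut-off are eligible for update.
        Date cutoff = DateUtil.currentGMTTime();
        // Entries in the returned map were not updated by the single batched UPDATE statement.
        return vmInstanceDao.updatePowerState(reportedStates, hostId, cutoff);
    }
}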
private boolean isPowerStateInSyncWithInstanceState(final VirtualMachine.PowerState powerState, final long powerHostId, final VMInstanceVO instance) { private boolean isPowerStateInSyncWithInstanceState(final VirtualMachine.PowerState powerState, final long powerHostId, final VMInstanceVO instance) {
State instanceState = instance.getState(); State instanceState = instance.getState();
if ((powerState == VirtualMachine.PowerState.PowerOff && instanceState == State.Running) if ((powerState == VirtualMachine.PowerState.PowerOff && instanceState == State.Running)
@ -962,11 +1061,7 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
} }
@Override @Override
public boolean isPowerStateUpToDate(final long instanceId) { public boolean isPowerStateUpToDate(final VMInstanceVO instance) {
VMInstanceVO instance = findById(instanceId);
if(instance == null) {
throw new CloudRuntimeException("checking power state update count on non existing instance " + instanceId);
}
return instance.getPowerStateUpdateCount() < MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT; return instance.getPowerStateUpdateCount() < MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT;
} }
@ -985,6 +1080,25 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
}); });
} }
@Override
public void resetVmPowerStateTracking(List<Long> instanceIds) {
if (CollectionUtils.isEmpty(instanceIds)) {
return;
}
Transaction.execute(new TransactionCallbackNoReturn() {
@Override
public void doInTransactionWithoutResult(TransactionStatus status) {
SearchCriteria<VMInstanceVO> sc = IdsPowerStateSelectSearch.create();
sc.setParameters("id", instanceIds.toArray());
VMInstanceVO vm = createForUpdate();
vm.setPowerStateUpdateCount(0);
vm.setPowerStateUpdateTime(DateUtil.currentGMTTime());
UpdateBuilder ub = getUpdateBuilder(vm);
update(ub, sc, null);
}
});
}
@Override @DB @Override @DB
public void resetHostPowerStateTracking(final long hostId) { public void resetHostPowerStateTracking(final long hostId) {
Transaction.execute(new TransactionCallbackNoReturn() { Transaction.execute(new TransactionCallbackNoReturn() {
@ -1060,6 +1174,7 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
return searchIncludingRemoved(sc, filter, null, false); return searchIncludingRemoved(sc, filter, null, false);
} }
@Override
public Pair<List<VMInstanceVO>, Integer> listByVmsNotInClusterUsingPool(long clusterId, long poolId) { public Pair<List<VMInstanceVO>, Integer> listByVmsNotInClusterUsingPool(long clusterId, long poolId) {
SearchCriteria<VMInstanceVO> sc = VmsNotInClusterUsingPool.create(); SearchCriteria<VMInstanceVO> sc = VmsNotInClusterUsingPool.create();
sc.setParameters("vmStates", State.Starting, State.Running, State.Stopping, State.Migrating, State.Restoring); sc.setParameters("vmStates", State.Starting, State.Running, State.Stopping, State.Migrating, State.Restoring);
@ -1069,4 +1184,44 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
List<VMInstanceVO> uniqueVms = vms.stream().distinct().collect(Collectors.toList()); List<VMInstanceVO> uniqueVms = vms.stream().distinct().collect(Collectors.toList());
return new Pair<>(uniqueVms, uniqueVms.size()); return new Pair<>(uniqueVms, uniqueVms.size());
} }
@Override
public List<VMInstanceVO> listIdServiceOfferingForUpVmsByHostId(Long hostId) {
SearchCriteria<VMInstanceVO> sc = IdServiceOfferingIdSelectSearch.create();
sc.setParameters("host", hostId);
sc.setParameters("states", new Object[] {State.Starting, State.Running, State.Stopping, State.Migrating});
return customSearch(sc, null);
}
@Override
public List<VMInstanceVO> listIdServiceOfferingForVmsMigratingFromHost(Long hostId) {
SearchCriteria<VMInstanceVO> sc = IdServiceOfferingIdSelectSearch.create();
sc.setParameters("lastHost", hostId);
sc.setParameters("state", State.Migrating);
return customSearch(sc, null);
}
@Override
public Map<String, Long> getNameIdMapForVmInstanceNames(Collection<String> names) {
SearchBuilder<VMInstanceVO> sb = createSearchBuilder();
sb.and("name", sb.entity().getInstanceName(), Op.IN);
sb.selectFields(sb.entity().getId(), sb.entity().getInstanceName());
SearchCriteria<VMInstanceVO> sc = sb.create();
sc.setParameters("name", names.toArray());
List<VMInstanceVO> vms = customSearch(sc, null);
return vms.stream()
.collect(Collectors.toMap(VMInstanceVO::getInstanceName, VMInstanceVO::getId));
}
@Override
public Map<String, Long> getNameIdMapForVmIds(Collection<Long> ids) {
SearchBuilder<VMInstanceVO> sb = createSearchBuilder();
sb.and("id", sb.entity().getId(), Op.IN);
sb.selectFields(sb.entity().getId(), sb.entity().getInstanceName());
SearchCriteria<VMInstanceVO> sc = sb.create();
sc.setParameters("id", ids.toArray());
List<VMInstanceVO> vms = customSearch(sc, null);
return vms.stream()
.collect(Collectors.toMap(VMInstanceVO::getInstanceName, VMInstanceVO::getId));
}
} }
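
A hedged sketch of how the id/service-offering helpers above can feed host capacity calculation without loading full VM rows; the caller below is illustrative only.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.cloud.vm.VMInstanceVO;
import com.cloud.vm.dao.VMInstanceDao;

public class HostCapacityExample {
    public static Map<Long, Long> serviceOfferingIdByVmId(VMInstanceDao vmInstanceDao, long hostId) {
        Map<Long, Long> result = new HashMap<>();
        // Rows carry only id and serviceOfferingId, as selected by IdServiceOfferingIdSelectSearch.
        List<VMInstanceVO> vms = vmInstanceDao.listIdServiceOfferingForUpVmsByHostId(hostId);
        for (VMInstanceVO vm : vms) {
            result.put(vm.getId(), vm.getServiceOfferingId());
        }
        return result;
    }
}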


@ -88,6 +88,8 @@ public interface ResourceDetailsDao<R extends ResourceDetail> extends GenericDao
public Map<String, String> listDetailsKeyPairs(long resourceId); public Map<String, String> listDetailsKeyPairs(long resourceId);
Map<String, String> listDetailsKeyPairs(long resourceId, List<String> keys);
public Map<String, String> listDetailsKeyPairs(long resourceId, boolean forDisplay); public Map<String, String> listDetailsKeyPairs(long resourceId, boolean forDisplay);
Map<String, Boolean> listDetailsVisibility(long resourceId); Map<String, Boolean> listDetailsVisibility(long resourceId);


@ -19,6 +19,7 @@ package org.apache.cloudstack.resourcedetail;
import java.util.HashMap; import java.util.HashMap;
import java.util.List; import java.util.List;
import java.util.Map; import java.util.Map;
import java.util.stream.Collectors;
import org.apache.cloudstack.api.ResourceDetail; import org.apache.cloudstack.api.ResourceDetail;
import org.apache.commons.collections.CollectionUtils; import org.apache.commons.collections.CollectionUtils;
@ -91,6 +92,20 @@ public abstract class ResourceDetailsDaoBase<R extends ResourceDetail> extends G
return details; return details;
} }
@Override
public Map<String, String> listDetailsKeyPairs(long resourceId, List<String> keys) {
SearchBuilder<R> sb = createSearchBuilder();
sb.and("resourceId", sb.entity().getResourceId(), SearchCriteria.Op.EQ);
sb.and("name", sb.entity().getName(), SearchCriteria.Op.IN);
sb.done();
SearchCriteria<R> sc = sb.create();
sc.setParameters("resourceId", resourceId);
sc.setParameters("name", keys.toArray());
List<R> results = search(sc, null);
return results.stream().collect(Collectors.toMap(R::getName, R::getValue));
}
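
A hedged usage sketch for the new keyed listDetailsKeyPairs() overload: only the named detail rows are fetched, and keys without a row are simply absent from the returned map. The detail key names below are illustrative.

import java.util.Arrays;
import java.util.Map;

import org.apache.cloudstack.resourcedetail.ResourceDetailsDaoBase;

public class DetailLookupExample {
    public static Map<String, String> loadSelectedDetails(ResourceDetailsDaoBase<?> detailsDao, long resourceId) {
        // Fetches only the 'domainid' and 'zoneid' detail rows for this resource.
        return detailsDao.listDetailsKeyPairs(resourceId, Arrays.asList("domainid", "zoneid"));
    }
}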
public Map<String, Boolean> listDetailsVisibility(long resourceId) { public Map<String, Boolean> listDetailsVisibility(long resourceId) {
SearchCriteria<R> sc = AllFieldsSearch.create(); SearchCriteria<R> sc = AllFieldsSearch.create();
sc.setParameters("resourceId", resourceId); sc.setParameters("resourceId", resourceId);


@ -28,20 +28,20 @@ import java.util.stream.Collectors;
import javax.inject.Inject; import javax.inject.Inject;
import javax.naming.ConfigurationException; import javax.naming.ConfigurationException;
import com.cloud.storage.Storage;
import com.cloud.utils.Pair;
import com.cloud.utils.db.Filter;
import org.apache.commons.collections.CollectionUtils; import org.apache.commons.collections.CollectionUtils;
import com.cloud.host.Status; import com.cloud.host.Status;
import com.cloud.hypervisor.Hypervisor.HypervisorType; import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.storage.ScopeType; import com.cloud.storage.ScopeType;
import com.cloud.storage.Storage;
import com.cloud.storage.StoragePoolHostVO; import com.cloud.storage.StoragePoolHostVO;
import com.cloud.storage.StoragePoolStatus; import com.cloud.storage.StoragePoolStatus;
import com.cloud.storage.StoragePoolTagVO; import com.cloud.storage.StoragePoolTagVO;
import com.cloud.storage.dao.StoragePoolHostDao; import com.cloud.storage.dao.StoragePoolHostDao;
import com.cloud.storage.dao.StoragePoolTagsDao; import com.cloud.storage.dao.StoragePoolTagsDao;
import com.cloud.utils.Pair;
import com.cloud.utils.db.DB; import com.cloud.utils.db.DB;
import com.cloud.utils.db.Filter;
import com.cloud.utils.db.GenericDaoBase; import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.GenericSearchBuilder; import com.cloud.utils.db.GenericSearchBuilder;
import com.cloud.utils.db.JoinBuilder; import com.cloud.utils.db.JoinBuilder;


@ -76,13 +76,9 @@ SELECT
FROM FROM
`cloud`.`network_offerings` `cloud`.`network_offerings`
LEFT JOIN LEFT JOIN
-- removed:
    `cloud`.`network_offering_details` AS `domain_details` ON `domain_details`.`network_offering_id` = `network_offerings`.`id` AND `domain_details`.`name`='domainid'
        LEFT JOIN
    `cloud`.`domain` AS `domain` ON FIND_IN_SET(`domain`.`id`, `domain_details`.`value`)
        LEFT JOIN
    `cloud`.`network_offering_details` AS `zone_details` ON `zone_details`.`network_offering_id` = `network_offerings`.`id` AND `zone_details`.`name`='zoneid'
        LEFT JOIN
    `cloud`.`data_center` AS `zone` ON FIND_IN_SET(`zone`.`id`, `zone_details`.`value`)
-- replaced with:
    `cloud`.`domain` AS `domain` ON `domain`.id IN (SELECT value from `network_offering_details` where `name` = 'domainid' and `network_offering_id` = `network_offerings`.`id`)
        LEFT JOIN
    `cloud`.`data_center` AS `zone` ON `zone`.`id` IN (SELECT value from `network_offering_details` where `name` = 'zoneid' and `network_offering_id` = `network_offerings`.`id`)
LEFT JOIN LEFT JOIN
`cloud`.`network_offering_details` AS `offering_details` ON `offering_details`.`network_offering_id` = `network_offerings`.`id` AND `offering_details`.`name`='internetProtocol' `cloud`.`network_offering_details` AS `offering_details` ON `offering_details`.`network_offering_id` = `network_offerings`.`id` AND `offering_details`.`name`='internetProtocol'
GROUP BY GROUP BY


@ -0,0 +1,99 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.capacity.dao;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertSame;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mockito;
import org.mockito.Spy;
import org.mockito.junit.MockitoJUnitRunner;
import com.cloud.capacity.CapacityVO;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
@RunWith(MockitoJUnitRunner.class)
public class CapacityDaoImplTest {
@Spy
@InjectMocks
CapacityDaoImpl capacityDao = new CapacityDaoImpl();
private SearchBuilder<CapacityVO> searchBuilder;
private SearchCriteria<CapacityVO> searchCriteria;
@Before
public void setUp() {
searchBuilder = mock(SearchBuilder.class);
CapacityVO capacityVO = mock(CapacityVO.class);
when(searchBuilder.entity()).thenReturn(capacityVO);
searchCriteria = mock(SearchCriteria.class);
doReturn(searchBuilder).when(capacityDao).createSearchBuilder();
when(searchBuilder.create()).thenReturn(searchCriteria);
}
@Test
public void testListByHostIdTypes() {
// Prepare inputs
Long hostId = 1L;
List<Short> capacityTypes = Arrays.asList((short)1, (short)2);
CapacityVO capacity1 = new CapacityVO();
CapacityVO capacity2 = new CapacityVO();
List<CapacityVO> mockResult = Arrays.asList(capacity1, capacity2);
doReturn(mockResult).when(capacityDao).listBy(any(SearchCriteria.class));
List<CapacityVO> result = capacityDao.listByHostIdTypes(hostId, capacityTypes);
verify(searchBuilder).and(eq("hostId"), any(), eq(SearchCriteria.Op.EQ));
verify(searchBuilder).and(eq("type"), any(), eq(SearchCriteria.Op.IN));
verify(searchBuilder).done();
verify(searchCriteria).setParameters("hostId", hostId);
verify(searchCriteria).setParameters("type", capacityTypes.toArray());
verify(capacityDao).listBy(searchCriteria);
assertEquals(2, result.size());
assertSame(capacity1, result.get(0));
assertSame(capacity2, result.get(1));
}
@Test
public void testListByHostIdTypesEmptyResult() {
Long hostId = 1L;
List<Short> capacityTypes = Arrays.asList((short)1, (short)2);
doReturn(Collections.emptyList()).when(capacityDao).listBy(any(SearchCriteria.class));
List<CapacityVO> result = capacityDao.listByHostIdTypes(hostId, capacityTypes);
verify(searchBuilder).and(Mockito.eq("hostId"), any(), eq(SearchCriteria.Op.EQ));
verify(searchBuilder).and(eq("type"), any(), eq(SearchCriteria.Op.IN));
verify(searchBuilder).done();
verify(searchCriteria).setParameters("hostId", hostId);
verify(searchCriteria).setParameters("type", capacityTypes.toArray());
verify(capacityDao).listBy(searchCriteria);
assertTrue(result.isEmpty());
}
}


@ -0,0 +1,78 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.dc.dao;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.isNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Spy;
import org.mockito.junit.MockitoJUnitRunner;
import com.cloud.dc.ClusterVO;
import com.cloud.utils.db.GenericSearchBuilder;
import com.cloud.utils.db.SearchBuilder;
@RunWith(MockitoJUnitRunner.class)
public class ClusterDaoImplTest {
@Spy
@InjectMocks
ClusterDaoImpl clusterDao = new ClusterDaoImpl();
private GenericSearchBuilder<ClusterVO, Long> genericSearchBuilder;
@Before
public void setUp() {
genericSearchBuilder = mock(SearchBuilder.class);
ClusterVO entityVO = mock(ClusterVO.class);
when(genericSearchBuilder.entity()).thenReturn(entityVO);
doReturn(genericSearchBuilder).when(clusterDao).createSearchBuilder(Long.class);
}
@Test
public void testListAllIds() {
List<Long> mockIds = Arrays.asList(1L, 2L, 3L);
doReturn(mockIds).when(clusterDao).customSearch(any(), isNull());
List<Long> result = clusterDao.listAllIds();
verify(clusterDao).customSearch(genericSearchBuilder.create(), null);
assertEquals(3, result.size());
assertEquals(Long.valueOf(1L), result.get(0));
assertEquals(Long.valueOf(2L), result.get(1));
assertEquals(Long.valueOf(3L), result.get(2));
}
@Test
public void testListAllIdsEmptyResult() {
doReturn(Collections.emptyList()).when(clusterDao).customSearch(any(), isNull());
List<Long> result = clusterDao.listAllIds();
verify(clusterDao).customSearch(genericSearchBuilder.create(), null);
assertTrue(result.isEmpty());
}
}


@ -0,0 +1,184 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.host.dao;
import java.util.List;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.Spy;
import org.mockito.junit.MockitoJUnitRunner;
import com.cloud.host.Host;
import com.cloud.host.HostVO;
import com.cloud.host.Status;
import com.cloud.hypervisor.Hypervisor;
import com.cloud.resource.ResourceState;
import com.cloud.utils.Pair;
import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.GenericSearchBuilder;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
@RunWith(MockitoJUnitRunner.class)
public class HostDaoImplTest {
@Spy
HostDaoImpl hostDao = new HostDaoImpl();
@Mock
private SearchBuilder<HostVO> mockSearchBuilder;
@Mock
private SearchCriteria<HostVO> mockSearchCriteria;
@Test
public void testCountUpAndEnabledHostsInZone() {
long testZoneId = 100L;
hostDao.HostTypeCountSearch = mockSearchBuilder;
Mockito.when(mockSearchBuilder.create()).thenReturn(mockSearchCriteria);
Mockito.doNothing().when(mockSearchCriteria).setParameters(Mockito.anyString(), Mockito.any());
int expected = 5;
Mockito.doReturn(expected).when(hostDao).getCount(mockSearchCriteria);
Integer count = hostDao.countUpAndEnabledHostsInZone(testZoneId);
Assert.assertSame(expected, count);
Mockito.verify(mockSearchCriteria).setParameters("type", Host.Type.Routing);
Mockito.verify(mockSearchCriteria).setParameters("resourceState", ResourceState.Enabled);
Mockito.verify(mockSearchCriteria).setParameters("zoneId", testZoneId);
Mockito.verify(hostDao).getCount(mockSearchCriteria);
}
@Test
public void testCountAllHostsAndCPUSocketsByType() {
Host.Type type = Host.Type.Routing;
GenericDaoBase.SumCount mockSumCount = new GenericDaoBase.SumCount();
mockSumCount.count = 10;
mockSumCount.sum = 20;
HostVO host = Mockito.mock(HostVO.class);
GenericSearchBuilder<HostVO, GenericDaoBase.SumCount> sb = Mockito.mock(GenericSearchBuilder.class);
Mockito.when(sb.entity()).thenReturn(host);
Mockito.doReturn(sb).when(hostDao).createSearchBuilder(GenericDaoBase.SumCount.class);
SearchCriteria<GenericDaoBase.SumCount> sc = Mockito.mock(SearchCriteria.class);
Mockito.when(sb.create()).thenReturn(sc);
Mockito.doReturn(List.of(mockSumCount)).when(hostDao).customSearch(Mockito.any(SearchCriteria.class), Mockito.any());
Pair<Integer, Integer> result = hostDao.countAllHostsAndCPUSocketsByType(type);
Assert.assertEquals(10, result.first().intValue());
Assert.assertEquals(20, result.second().intValue());
Mockito.verify(sc).setParameters("type", type);
}
@Test
public void testIsHostUp() {
long testHostId = 101L;
List<Status> statuses = List.of(Status.Up);
HostVO host = Mockito.mock(HostVO.class);
GenericSearchBuilder<HostVO, Status> sb = Mockito.mock(GenericSearchBuilder.class);
Mockito.when(sb.entity()).thenReturn(host);
SearchCriteria<Status> sc = Mockito.mock(SearchCriteria.class);
Mockito.when(sb.create()).thenReturn(sc);
Mockito.doReturn(sb).when(hostDao).createSearchBuilder(Status.class);
Mockito.doReturn(statuses).when(hostDao).customSearch(Mockito.any(SearchCriteria.class), Mockito.any());
boolean result = hostDao.isHostUp(testHostId);
Assert.assertTrue("Host should be up", result);
Mockito.verify(sc).setParameters("id", testHostId);
Mockito.verify(hostDao).customSearch(sc, null);
}
@Test
public void testFindHostIdsByZoneClusterResourceStateTypeAndHypervisorType() {
Long zoneId = 1L;
Long clusterId = 2L;
List<ResourceState> resourceStates = List.of(ResourceState.Enabled);
List<Host.Type> types = List.of(Host.Type.Routing);
List<Hypervisor.HypervisorType> hypervisorTypes = List.of(Hypervisor.HypervisorType.KVM);
List<Long> mockResults = List.of(1001L, 1002L); // Mocked result
HostVO host = Mockito.mock(HostVO.class);
GenericSearchBuilder<HostVO, Long> sb = Mockito.mock(GenericSearchBuilder.class);
Mockito.when(sb.entity()).thenReturn(host);
SearchCriteria<Long> sc = Mockito.mock(SearchCriteria.class);
Mockito.when(sb.create()).thenReturn(sc);
Mockito.when(sb.and()).thenReturn(sb);
Mockito.doReturn(sb).when(hostDao).createSearchBuilder(Long.class);
Mockito.doReturn(mockResults).when(hostDao).customSearch(Mockito.any(SearchCriteria.class), Mockito.any());
List<Long> hostIds = hostDao.findHostIdsByZoneClusterResourceStateTypeAndHypervisorType(
zoneId, clusterId, resourceStates, types, hypervisorTypes);
Assert.assertEquals(mockResults, hostIds);
Mockito.verify(sc).setParameters("zoneId", zoneId);
Mockito.verify(sc).setParameters("clusterId", clusterId);
Mockito.verify(sc).setParameters("resourceState", resourceStates.toArray());
Mockito.verify(sc).setParameters("type", types.toArray());
Mockito.verify(sc).setParameters("hypervisorTypes", hypervisorTypes.toArray());
}
@Test
public void testListDistinctHypervisorTypes() {
Long zoneId = 1L;
List<Hypervisor.HypervisorType> mockResults = List.of(Hypervisor.HypervisorType.KVM, Hypervisor.HypervisorType.XenServer);
HostVO host = Mockito.mock(HostVO.class);
GenericSearchBuilder<HostVO, Hypervisor.HypervisorType> sb = Mockito.mock(GenericSearchBuilder.class);
Mockito.when(sb.entity()).thenReturn(host);
SearchCriteria<Hypervisor.HypervisorType> sc = Mockito.mock(SearchCriteria.class);
Mockito.when(sb.create()).thenReturn(sc);
Mockito.doReturn(sb).when(hostDao).createSearchBuilder(Hypervisor.HypervisorType.class);
Mockito.doReturn(mockResults).when(hostDao).customSearch(Mockito.any(SearchCriteria.class), Mockito.any());
List<Hypervisor.HypervisorType> hypervisorTypes = hostDao.listDistinctHypervisorTypes(zoneId);
Assert.assertEquals(mockResults, hypervisorTypes);
Mockito.verify(sc).setParameters("zoneId", zoneId);
Mockito.verify(sc).setParameters("type", Host.Type.Routing);
}
@Test
public void testListByIds() {
List<Long> ids = List.of(101L, 102L);
List<HostVO> mockResults = List.of(Mockito.mock(HostVO.class), Mockito.mock(HostVO.class));
hostDao.IdsSearch = mockSearchBuilder;
Mockito.when(mockSearchBuilder.create()).thenReturn(mockSearchCriteria);
Mockito.doReturn(mockResults).when(hostDao).search(Mockito.any(SearchCriteria.class), Mockito.any());
List<HostVO> hosts = hostDao.listByIds(ids);
Assert.assertEquals(mockResults, hosts);
Mockito.verify(mockSearchCriteria).setParameters("id", ids.toArray());
Mockito.verify(hostDao).search(mockSearchCriteria, null);
}
@Test
public void testListIdsBy() {
Host.Type type = Host.Type.Routing;
Status status = Status.Up;
ResourceState resourceState = ResourceState.Enabled;
Hypervisor.HypervisorType hypervisorType = Hypervisor.HypervisorType.KVM;
Long zoneId = 1L, podId = 2L, clusterId = 3L;
List<Long> mockResults = List.of(1001L, 1002L);
HostVO host = Mockito.mock(HostVO.class);
GenericSearchBuilder<HostVO, Long> sb = Mockito.mock(GenericSearchBuilder.class);
Mockito.when(sb.entity()).thenReturn(host);
SearchCriteria<Long> sc = Mockito.mock(SearchCriteria.class);
Mockito.when(sb.create()).thenReturn(sc);
Mockito.doReturn(sb).when(hostDao).createSearchBuilder(Long.class);
Mockito.doReturn(mockResults).when(hostDao).customSearch(Mockito.any(SearchCriteria.class), Mockito.any());
List<Long> hostIds = hostDao.listIdsBy(type, status, resourceState, hypervisorType, zoneId, podId, clusterId);
Assert.assertEquals(mockResults, hostIds);
Mockito.verify(sc).setParameters("type", type);
Mockito.verify(sc).setParameters("status", status);
Mockito.verify(sc).setParameters("resourceState", resourceState);
Mockito.verify(sc).setParameters("hypervisorType", hypervisorType);
Mockito.verify(sc).setParameters("zoneId", zoneId);
Mockito.verify(sc).setParameters("podId", podId);
Mockito.verify(sc).setParameters("clusterId", clusterId);
}
}


@ -23,12 +23,9 @@ import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when; import static org.mockito.Mockito.when;
import java.sql.PreparedStatement; import java.sql.PreparedStatement;
import com.cloud.utils.DateUtil;
import com.cloud.utils.db.TransactionLegacy;
import java.util.Date; import java.util.Date;
import java.util.TimeZone; import java.util.TimeZone;
import com.cloud.usage.UsageStorageVO;
import org.junit.Test; import org.junit.Test;
import org.junit.runner.RunWith; import org.junit.runner.RunWith;
import org.mockito.Mock; import org.mockito.Mock;
@ -36,6 +33,10 @@ import org.mockito.MockedStatic;
import org.mockito.Mockito; import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner; import org.mockito.junit.MockitoJUnitRunner;
import com.cloud.usage.UsageStorageVO;
import com.cloud.utils.DateUtil;
import com.cloud.utils.db.TransactionLegacy;
@RunWith(MockitoJUnitRunner.class) @RunWith(MockitoJUnitRunner.class)
public class UsageStorageDaoImplTest { public class UsageStorageDaoImplTest {


@ -0,0 +1,181 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.resourcedetail;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.mockito.ArgumentMatchers.isNull;
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import org.apache.cloudstack.api.ResourceDetail;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Spy;
import org.mockito.junit.MockitoJUnitRunner;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
@RunWith(MockitoJUnitRunner.class)
public class ResourceDetailsDaoBaseTest {
@Spy
@InjectMocks
TestDetailsDao testDetailsDao = new TestDetailsDao();
private SearchBuilder<TestDetailVO> searchBuilder;
private SearchCriteria<TestDetailVO> searchCriteria;
@Before
public void setUp() {
searchBuilder = mock(SearchBuilder.class);
searchCriteria = mock(SearchCriteria.class);
TestDetailVO entityVO = mock(TestDetailVO.class);
when(searchBuilder.entity()).thenReturn(entityVO);
searchCriteria = mock(SearchCriteria.class);
doReturn(searchBuilder).when(testDetailsDao).createSearchBuilder();
when(searchBuilder.create()).thenReturn(searchCriteria);
}
@Test
public void testListDetailsKeyPairs() {
long resourceId = 1L;
List<String> keys = Arrays.asList("key1", "key2");
TestDetailVO result1 = mock(TestDetailVO.class);
when(result1.getName()).thenReturn("key1");
when(result1.getValue()).thenReturn("value1");
TestDetailVO result2 = mock(TestDetailVO.class);
when(result2.getName()).thenReturn("key2");
when(result2.getValue()).thenReturn("value2");
List<TestDetailVO> mockResults = Arrays.asList(result1, result2);
doReturn(mockResults).when(testDetailsDao).search(any(SearchCriteria.class), isNull());
Map<String, String> result = testDetailsDao.listDetailsKeyPairs(resourceId, keys);
verify(searchBuilder).and(eq("resourceId"), any(), eq(SearchCriteria.Op.EQ));
verify(searchBuilder).and(eq("name"), any(), eq(SearchCriteria.Op.IN));
verify(searchBuilder).done();
verify(searchCriteria).setParameters("resourceId", resourceId);
verify(searchCriteria).setParameters("name", keys.toArray());
verify(testDetailsDao).search(searchCriteria, null);
assertEquals(2, result.size());
assertEquals("value1", result.get("key1"));
assertEquals("value2", result.get("key2"));
}
@Test
public void testListDetailsKeyPairsEmptyResult() {
long resourceId = 1L;
List<String> keys = Arrays.asList("key1", "key2");
doReturn(Collections.emptyList()).when(testDetailsDao).search(any(SearchCriteria.class), isNull());
Map<String, String> result = testDetailsDao.listDetailsKeyPairs(resourceId, keys);
verify(searchBuilder).and(eq("resourceId"), any(), eq(SearchCriteria.Op.EQ));
verify(searchBuilder).and(eq("name"), any(), eq(SearchCriteria.Op.IN));
verify(searchBuilder).done();
verify(searchCriteria).setParameters("resourceId", resourceId);
verify(searchCriteria).setParameters("name", keys.toArray());
verify(testDetailsDao).search(searchCriteria, null);
assertTrue(result.isEmpty());
}
protected static class TestDetailsDao extends ResourceDetailsDaoBase<TestDetailVO> {
@Override
public void addDetail(long resourceId, String key, String value, boolean display) {
super.addDetail(new TestDetailVO(resourceId, key, value, display));
}
}
@Entity
@Table(name = "test_details")
protected static class TestDetailVO implements ResourceDetail {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "id")
private long id;
@Column(name = "resource_id")
private long resourceId;
@Column(name = "name")
private String name;
@Column(name = "value")
private String value;
@Column(name = "display")
private boolean display = true;
public TestDetailVO() {
}
public TestDetailVO(long resourceId, String name, String value, boolean display) {
this.resourceId = resourceId;
this.name = name;
this.value = value;
this.display = display;
}
@Override
public long getId() {
return id;
}
@Override
public String getName() {
return name;
}
@Override
public String getValue() {
return value;
}
@Override
public long getResourceId() {
return resourceId;
}
@Override
public boolean isDisplay() {
return display;
}
public void setName(String name) {
this.name = name;
}
public void setValue(String value) {
this.value = value;
}
}
}


@ -17,12 +17,17 @@
package org.apache.cloudstack.storage.datastore.db; package org.apache.cloudstack.storage.datastore.db;
import static org.mockito.ArgumentMatchers.nullable; import static org.mockito.ArgumentMatchers.nullable;
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.doReturn; import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.isNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify; import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import java.io.IOException; import java.io.IOException;
import java.sql.SQLException; import java.sql.SQLException;
import java.util.Arrays; import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap; import java.util.HashMap;
import java.util.List; import java.util.List;
import java.util.Map; import java.util.Map;
@ -34,13 +39,15 @@ import org.junit.runner.RunWith;
import org.mockito.InjectMocks; import org.mockito.InjectMocks;
import org.mockito.Mock; import org.mockito.Mock;
import org.mockito.Spy; import org.mockito.Spy;
import org.mockito.junit.MockitoJUnitRunner;
import com.cloud.storage.ScopeType; import com.cloud.storage.ScopeType;
import com.cloud.storage.dao.StoragePoolHostDao; import com.cloud.storage.dao.StoragePoolHostDao;
import com.cloud.storage.dao.StoragePoolTagsDao; import com.cloud.storage.dao.StoragePoolTagsDao;
import com.cloud.utils.db.GenericSearchBuilder;
import com.cloud.utils.db.SearchBuilder;
import junit.framework.TestCase; import junit.framework.TestCase;
import org.mockito.junit.MockitoJUnitRunner;
@RunWith(MockitoJUnitRunner.class) @RunWith(MockitoJUnitRunner.class)
public class PrimaryDataStoreDaoImplTest extends TestCase { public class PrimaryDataStoreDaoImplTest extends TestCase {
@ -59,6 +66,8 @@ public class PrimaryDataStoreDaoImplTest extends TestCase {
@Mock @Mock
StoragePoolVO storagePoolVO; StoragePoolVO storagePoolVO;
private GenericSearchBuilder<StoragePoolVO, Long> genericSearchBuilder;
private static final String STORAGE_TAG_1 = "NFS-A"; private static final String STORAGE_TAG_1 = "NFS-A";
private static final String STORAGE_TAG_2 = "NFS-B"; private static final String STORAGE_TAG_2 = "NFS-B";
private static final String[] STORAGE_TAGS_ARRAY = {STORAGE_TAG_1, STORAGE_TAG_2}; private static final String[] STORAGE_TAGS_ARRAY = {STORAGE_TAG_1, STORAGE_TAG_2};
@ -155,4 +164,32 @@ public class PrimaryDataStoreDaoImplTest extends TestCase {
String expectedSql = primaryDataStoreDao.DetailsSqlPrefix + SQL_VALUES + primaryDataStoreDao.DetailsSqlSuffix; String expectedSql = primaryDataStoreDao.DetailsSqlPrefix + SQL_VALUES + primaryDataStoreDao.DetailsSqlSuffix;
verify(primaryDataStoreDao).searchStoragePoolsPreparedStatement(expectedSql, DATACENTER_ID, POD_ID, CLUSTER_ID, SCOPE, STORAGE_POOL_DETAILS.size()); verify(primaryDataStoreDao).searchStoragePoolsPreparedStatement(expectedSql, DATACENTER_ID, POD_ID, CLUSTER_ID, SCOPE, STORAGE_POOL_DETAILS.size());
} }
@Test
public void testListAllIds() {
GenericSearchBuilder<StoragePoolVO, Long> genericSearchBuilder = mock(SearchBuilder.class);
StoragePoolVO entityVO = mock(StoragePoolVO.class);
when(genericSearchBuilder.entity()).thenReturn(entityVO);
doReturn(genericSearchBuilder).when(primaryDataStoreDao).createSearchBuilder(Long.class);
List<Long> mockIds = Arrays.asList(1L, 2L, 3L);
doReturn(mockIds).when(primaryDataStoreDao).customSearch(any(), isNull());
List<Long> result = primaryDataStoreDao.listAllIds();
verify(primaryDataStoreDao).customSearch(genericSearchBuilder.create(), null);
assertEquals(3, result.size());
assertEquals(Long.valueOf(1L), result.get(0));
assertEquals(Long.valueOf(2L), result.get(1));
assertEquals(Long.valueOf(3L), result.get(2));
}
@Test
public void testListAllIdsEmptyResult() {
GenericSearchBuilder<StoragePoolVO, Long> genericSearchBuilder = mock(SearchBuilder.class);
StoragePoolVO entityVO = mock(StoragePoolVO.class);
when(genericSearchBuilder.entity()).thenReturn(entityVO);
doReturn(genericSearchBuilder).when(primaryDataStoreDao).createSearchBuilder(Long.class);
doReturn(Collections.emptyList()).when(primaryDataStoreDao).customSearch(any(), isNull());
List<Long> result = primaryDataStoreDao.listAllIds();
verify(primaryDataStoreDao).customSearch(genericSearchBuilder.create(), null);
assertTrue(result.isEmpty());
}
} }


@ -42,4 +42,8 @@ public interface IndirectAgentLBAlgorithm {
* @return true if the lists are equal, false if not * @return true if the lists are equal, false if not
*/ */
boolean compare(final List<String> msList, final List<String> receivedMsList); boolean compare(final List<String> msList, final List<String> receivedMsList);
default boolean isHostListNeeded() {
return false;
}
} }
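
A hedged caller-side sketch for the new isHostListNeeded() default: the indirect agent LB service can skip building a potentially large host id list unless the active algorithm opts in. The caller below is illustrative; hostDao.listAllIds() is the id-only retrieval added to GenericDao later in this diff.

import java.util.Collections;
import java.util.List;

import org.apache.cloudstack.agent.lb.IndirectAgentLBAlgorithm;

import com.cloud.host.dao.HostDao;

public class AgentLbHostListExample {
    public static List<Long> hostIdsIfNeeded(IndirectAgentLBAlgorithm algorithm, HostDao hostDao) {
        // Algorithms that keep the default (false) avoid the host id query entirely.
        return algorithm.isHostListNeeded() ? hostDao.listAllIds() : Collections.<Long>emptyList();
    }
}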


@ -107,8 +107,7 @@ public class ManagementServerHostPeerDaoImpl extends GenericDaoBase<ManagementSe
sc.setParameters("peerRunid", runid); sc.setParameters("peerRunid", runid);
sc.setParameters("peerState", state); sc.setParameters("peerState", state);
List<ManagementServerHostPeerVO> l = listBy(sc); return getCount(sc);
return l.size();
} }
@Override @Override


@ -23,7 +23,6 @@ import java.util.HashMap;
import java.util.HashSet; import java.util.HashSet;
import java.util.List; import java.util.List;
import java.util.Set; import java.util.Set;
import java.util.concurrent.TimeUnit;
import javax.annotation.PostConstruct; import javax.annotation.PostConstruct;
import javax.inject.Inject; import javax.inject.Inject;
@ -36,6 +35,7 @@ import org.apache.cloudstack.framework.config.ScopedConfigStorage;
import org.apache.cloudstack.framework.config.dao.ConfigurationDao; import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
import org.apache.cloudstack.framework.config.dao.ConfigurationGroupDao; import org.apache.cloudstack.framework.config.dao.ConfigurationGroupDao;
import org.apache.cloudstack.framework.config.dao.ConfigurationSubGroupDao; import org.apache.cloudstack.framework.config.dao.ConfigurationSubGroupDao;
import org.apache.cloudstack.utils.cache.LazyCache;
import org.apache.commons.lang.ObjectUtils; import org.apache.commons.lang.ObjectUtils;
import org.apache.commons.lang3.StringUtils; import org.apache.commons.lang3.StringUtils;
import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.LogManager;
@ -44,8 +44,6 @@ import org.apache.logging.log4j.Logger;
import com.cloud.utils.Pair; import com.cloud.utils.Pair;
import com.cloud.utils.Ternary; import com.cloud.utils.Ternary;
import com.cloud.utils.exception.CloudRuntimeException; import com.cloud.utils.exception.CloudRuntimeException;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
/** /**
* ConfigDepotImpl implements the ConfigDepot and ConfigDepotAdmin interface. * ConfigDepotImpl implements the ConfigDepot and ConfigDepotAdmin interface.
@ -87,17 +85,15 @@ public class ConfigDepotImpl implements ConfigDepot, ConfigDepotAdmin {
List<ScopedConfigStorage> _scopedStorages; List<ScopedConfigStorage> _scopedStorages;
Set<Configurable> _configured = Collections.synchronizedSet(new HashSet<Configurable>()); Set<Configurable> _configured = Collections.synchronizedSet(new HashSet<Configurable>());
Set<String> newConfigs = Collections.synchronizedSet(new HashSet<>()); Set<String> newConfigs = Collections.synchronizedSet(new HashSet<>());
Cache<String, String> configCache; LazyCache<String, String> configCache;
private HashMap<String, Pair<String, ConfigKey<?>>> _allKeys = new HashMap<String, Pair<String, ConfigKey<?>>>(1007); private HashMap<String, Pair<String, ConfigKey<?>>> _allKeys = new HashMap<String, Pair<String, ConfigKey<?>>>(1007);
HashMap<ConfigKey.Scope, Set<ConfigKey<?>>> _scopeLevelConfigsMap = new HashMap<ConfigKey.Scope, Set<ConfigKey<?>>>(); HashMap<ConfigKey.Scope, Set<ConfigKey<?>>> _scopeLevelConfigsMap = new HashMap<ConfigKey.Scope, Set<ConfigKey<?>>>();
public ConfigDepotImpl() { public ConfigDepotImpl() {
// removed:
    configCache = Caffeine.newBuilder()
            .maximumSize(512)
            .expireAfterWrite(CONFIG_CACHE_EXPIRE_SECONDS, TimeUnit.SECONDS)
            .build();
// replaced with:
    configCache = new LazyCache<>(512,
            CONFIG_CACHE_EXPIRE_SECONDS, this::getConfigStringValueInternal);
ConfigKey.init(this); ConfigKey.init(this);
createEmptyScopeLevelMappings(); createEmptyScopeLevelMappings();
} }
@ -311,7 +307,7 @@ public class ConfigDepotImpl implements ConfigDepot, ConfigDepotAdmin {
@Override @Override
public String getConfigStringValue(String key, ConfigKey.Scope scope, Long scopeId) { public String getConfigStringValue(String key, ConfigKey.Scope scope, Long scopeId) {
return configCache.get(getConfigCacheKey(key, scope, scopeId), this::getConfigStringValueInternal); return configCache.get(getConfigCacheKey(key, scope, scopeId));
} }
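
A minimal, hedged sketch of the LazyCache pattern that replaces the direct Caffeine cache above: the loader is supplied once at construction and get() computes or refreshes entries on demand, so call sites no longer pass a mapping function per lookup. The constructor argument meanings (capacity, expiry in seconds, loader) are inferred from the usage above, and the lookup below is a stand-in.

import org.apache.cloudstack.utils.cache.LazyCache;

public class ConfigCacheExample {
    // Capacity 512, 30-second expiry, loader invoked on miss or expiry (argument order inferred from above).
    private final LazyCache<String, String> cache =
            new LazyCache<>(512, 30, ConfigCacheExample::expensiveLookup);

    public String getValue(String key) {
        // Returns the cached value, loading it via expensiveLookup() when absent or expired.
        return cache.get(key);
    }

    private static String expensiveLookup(String key) {
        return "value-for-" + key; // stand-in for a database read
    }
}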
@Override @Override


@ -148,6 +148,11 @@ public interface GenericDao<T, ID extends Serializable> {
*/ */
List<T> listAll(Filter filter); List<T> listAll(Filter filter);
/**
* Lists the IDs of all active rows.
*/
List<ID> listAllIds();
/** /**
* Search for the entity beans * Search for the entity beans
* @param sc * @param sc


@ -1218,6 +1218,35 @@ public abstract class GenericDaoBase<T, ID extends Serializable> extends Compone
return executeList(sql.toString()); return executeList(sql.toString());
} }
private Object getIdObject() {
T entity = (T)_searchEnhancer.create();
try {
Method m = _entityBeanType.getMethod("getId");
return m.invoke(entity);
} catch (NoSuchMethodException | InvocationTargetException | IllegalAccessException ignored) {
logger.warn("Unable to get ID object for entity: {}", _entityBeanType.getSimpleName());
}
return null;
}
@Override
public List<ID> listAllIds() {
Object idObj = getIdObject();
if (idObj == null) {
return Collections.emptyList();
}
Class<ID> clazz = (Class<ID>)idObj.getClass();
GenericSearchBuilder<T, ID> sb = createSearchBuilder(clazz);
try {
Method m = sb.entity().getClass().getMethod("getId");
sb.selectFields(m.invoke(sb.entity()));
} catch (NoSuchMethodException | InvocationTargetException | IllegalAccessException ignored) {
return Collections.emptyList();
}
sb.done();
return customSearch(sb.create(), null);
}
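
A hedged usage sketch for the new listAllIds(): background tasks that only need identifiers (clusters, storage pools, hosts and so on) can iterate ids instead of loading and mapping full entities. The caller below is illustrative.

import java.util.List;

import com.cloud.dc.dao.ClusterDao;

public class IdOnlyRetrievalExample {
    public static List<Long> clusterIdsForBackgroundScan(ClusterDao clusterDao) {
        // Issues a SELECT of the id column only, via the generated search builder above.
        return clusterDao.listAllIds();
    }
}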
@Override @Override
public boolean expunge(final ID id) { public boolean expunge(final ID id) {
final TransactionLegacy txn = TransactionLegacy.currentTxn(); final TransactionLegacy txn = TransactionLegacy.currentTxn();
@ -2445,4 +2474,11 @@ public abstract class GenericDaoBase<T, ID extends Serializable> extends Compone
} }
} }
public static class SumCount {
public long sum;
public long count;
public SumCount() {
}
}
} }


@ -40,4 +40,5 @@ public interface VmWorkJobDao extends GenericDao<VmWorkJobVO, Long> {
void expungeLeftoverWorkJobs(long msid); void expungeLeftoverWorkJobs(long msid);
int expungeByVmList(List<Long> vmIds, Long batchSize); int expungeByVmList(List<Long> vmIds, Long batchSize);
List<Long> listVmIdsWithPendingJob();
} }


@ -24,6 +24,7 @@ import java.util.List;
import javax.annotation.PostConstruct; import javax.annotation.PostConstruct;
import javax.inject.Inject; import javax.inject.Inject;
import org.apache.cloudstack.framework.jobs.impl.AsyncJobVO;
import org.apache.cloudstack.framework.jobs.impl.VmWorkJobVO; import org.apache.cloudstack.framework.jobs.impl.VmWorkJobVO;
import org.apache.cloudstack.framework.jobs.impl.VmWorkJobVO.Step; import org.apache.cloudstack.framework.jobs.impl.VmWorkJobVO.Step;
import org.apache.cloudstack.jobs.JobInfo; import org.apache.cloudstack.jobs.JobInfo;
@ -32,6 +33,8 @@ import org.apache.commons.collections.CollectionUtils;
import com.cloud.utils.DateUtil; import com.cloud.utils.DateUtil;
import com.cloud.utils.db.Filter; import com.cloud.utils.db.Filter;
import com.cloud.utils.db.GenericDaoBase; import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.GenericSearchBuilder;
import com.cloud.utils.db.JoinBuilder;
import com.cloud.utils.db.SearchBuilder; import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria; import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.SearchCriteria.Op; import com.cloud.utils.db.SearchCriteria.Op;
@ -224,4 +227,17 @@ public class VmWorkJobDaoImpl extends GenericDaoBase<VmWorkJobVO, Long> implemen
sc.setParameters("vmIds", vmIds.toArray()); sc.setParameters("vmIds", vmIds.toArray());
return batchExpunge(sc, batchSize); return batchExpunge(sc, batchSize);
} }
@Override
public List<Long> listVmIdsWithPendingJob() {
GenericSearchBuilder<VmWorkJobVO, Long> sb = createSearchBuilder(Long.class);
SearchBuilder<AsyncJobVO> asyncJobSearch = _baseJobDao.createSearchBuilder();
asyncJobSearch.and("status", asyncJobSearch.entity().getStatus(), SearchCriteria.Op.EQ);
sb.join("asyncJobSearch", asyncJobSearch, sb.entity().getId(), asyncJobSearch.entity().getId(), JoinBuilder.JoinType.INNER);
sb.and("removed", sb.entity().getRemoved(), Op.NULL);
sb.selectFields(sb.entity().getVmInstanceId());
SearchCriteria<Long> sc = sb.create();
sc.setJoinParameters("asyncJobSearch", "status", JobInfo.Status.IN_PROGRESS);
return customSearch(sc, null);
}
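
A hedged sketch of the intended use of listVmIdsWithPendingJob(): fetch the ids of VMs that already have an in-progress work job in one joined query so that periodic scans can skip them without a per-VM job lookup. The filtering helper below is illustrative.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.cloudstack.framework.jobs.dao.VmWorkJobDao;

public class PendingWorkFilterExample {
    public static List<Long> filterOutBusyVms(VmWorkJobDao vmWorkJobDao, List<Long> candidateVmIds) {
        // One query for every VM id that currently has an IN_PROGRESS work job.
        Set<Long> busy = new HashSet<>(vmWorkJobDao.listVmIdsWithPendingJob());
        List<Long> idle = new ArrayList<>();
        for (Long vmId : candidateVmIds) {
            if (!busy.contains(vmId)) {
                idle.add(vmId); // only VMs with no pending work job need further checking
            }
        }
        return idle;
    }
}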
} }


@ -16,27 +16,69 @@
// under the License. // under the License.
package org.apache.cloudstack.framework.jobs.dao; package org.apache.cloudstack.framework.jobs.dao;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.anyLong;
+import static org.mockito.Mockito.doAnswer;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.eq;
+import static org.mockito.Mockito.isNull;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
 import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
 import java.util.List;
+import org.apache.cloudstack.framework.jobs.impl.AsyncJobVO;
 import org.apache.cloudstack.framework.jobs.impl.VmWorkJobVO;
+import org.apache.cloudstack.jobs.JobInfo;
 import org.junit.Assert;
+import org.junit.Before;
 import org.junit.Test;
 import org.junit.runner.RunWith;
-import org.mockito.Mockito;
+import org.mockito.InjectMocks;
+import org.mockito.Mock;
 import org.mockito.Spy;
 import org.mockito.junit.MockitoJUnitRunner;
 import org.mockito.stubbing.Answer;
+import com.cloud.utils.db.GenericSearchBuilder;
+import com.cloud.utils.db.JoinBuilder;
 import com.cloud.utils.db.SearchBuilder;
 import com.cloud.utils.db.SearchCriteria;

 @RunWith(MockitoJUnitRunner.class)
 public class VmWorkJobDaoImplTest {
+    @Mock
+    AsyncJobDao asyncJobDao;
     @Spy
+    @InjectMocks
     VmWorkJobDaoImpl vmWorkJobDaoImpl;

+    private GenericSearchBuilder<VmWorkJobVO, Long> genericVmWorkJobSearchBuilder;
+    private SearchBuilder<AsyncJobVO> asyncJobSearchBuilder;
+    private SearchCriteria<Long> searchCriteria;
+
+    @Before
+    public void setUp() {
+        genericVmWorkJobSearchBuilder = mock(GenericSearchBuilder.class);
+        VmWorkJobVO entityVO = mock(VmWorkJobVO.class);
+        when(genericVmWorkJobSearchBuilder.entity()).thenReturn(entityVO);
+        asyncJobSearchBuilder = mock(SearchBuilder.class);
+        AsyncJobVO asyncJobVO = mock(AsyncJobVO.class);
+        when(asyncJobSearchBuilder.entity()).thenReturn(asyncJobVO);
+        searchCriteria = mock(SearchCriteria.class);
+        when(vmWorkJobDaoImpl.createSearchBuilder(Long.class)).thenReturn(genericVmWorkJobSearchBuilder);
+        when(asyncJobDao.createSearchBuilder()).thenReturn(asyncJobSearchBuilder);
+        when(genericVmWorkJobSearchBuilder.create()).thenReturn(searchCriteria);
+    }

     @Test
     public void testExpungeByVmListNoVms() {
         Assert.assertEquals(0, vmWorkJobDaoImpl.expungeByVmList(
@@ -47,22 +89,52 @@ public class VmWorkJobDaoImplTest {
     @Test
     public void testExpungeByVmList() {
-        SearchBuilder<VmWorkJobVO> sb = Mockito.mock(SearchBuilder.class);
-        SearchCriteria<VmWorkJobVO> sc = Mockito.mock(SearchCriteria.class);
-        Mockito.when(sb.create()).thenReturn(sc);
-        Mockito.doAnswer((Answer<Integer>) invocationOnMock -> {
+        SearchBuilder<VmWorkJobVO> sb = mock(SearchBuilder.class);
+        SearchCriteria<VmWorkJobVO> sc = mock(SearchCriteria.class);
+        when(sb.create()).thenReturn(sc);
+        doAnswer((Answer<Integer>) invocationOnMock -> {
             Long batchSize = (Long)invocationOnMock.getArguments()[1];
             return batchSize == null ? 0 : batchSize.intValue();
-        }).when(vmWorkJobDaoImpl).batchExpunge(Mockito.any(SearchCriteria.class), Mockito.anyLong());
-        Mockito.when(vmWorkJobDaoImpl.createSearchBuilder()).thenReturn(sb);
-        final VmWorkJobVO mockedVO = Mockito.mock(VmWorkJobVO.class);
-        Mockito.when(sb.entity()).thenReturn(mockedVO);
+        }).when(vmWorkJobDaoImpl).batchExpunge(any(SearchCriteria.class), anyLong());
+        when(vmWorkJobDaoImpl.createSearchBuilder()).thenReturn(sb);
+        final VmWorkJobVO mockedVO = mock(VmWorkJobVO.class);
+        when(sb.entity()).thenReturn(mockedVO);
         List<Long> vmIds = List.of(1L, 2L);
         Object[] array = vmIds.toArray();
         Long batchSize = 50L;
         Assert.assertEquals(batchSize.intValue(), vmWorkJobDaoImpl.expungeByVmList(List.of(1L, 2L), batchSize));
-        Mockito.verify(sc).setParameters("vmIds", array);
-        Mockito.verify(vmWorkJobDaoImpl, Mockito.times(1))
+        verify(sc).setParameters("vmIds", array);
+        verify(vmWorkJobDaoImpl, times(1))
                 .batchExpunge(sc, batchSize);
     }

+    @Test
+    public void testListVmIdsWithPendingJob() {
+        List<Long> mockVmIds = Arrays.asList(101L, 102L, 103L);
+        doReturn(mockVmIds).when(vmWorkJobDaoImpl).customSearch(any(SearchCriteria.class), isNull());
+        List<Long> result = vmWorkJobDaoImpl.listVmIdsWithPendingJob();
+        verify(genericVmWorkJobSearchBuilder).join(eq("asyncJobSearch"), eq(asyncJobSearchBuilder), any(), any(), eq(JoinBuilder.JoinType.INNER));
+        verify(genericVmWorkJobSearchBuilder).and(eq("removed"), any(), eq(SearchCriteria.Op.NULL));
+        verify(genericVmWorkJobSearchBuilder).create();
+        verify(asyncJobSearchBuilder).and(eq("status"), any(), eq(SearchCriteria.Op.EQ));
+        verify(searchCriteria).setJoinParameters(eq("asyncJobSearch"), eq("status"), eq(JobInfo.Status.IN_PROGRESS));
+        verify(vmWorkJobDaoImpl).customSearch(searchCriteria, null);
+        assertEquals(3, result.size());
+        assertEquals(Long.valueOf(101L), result.get(0));
+        assertEquals(Long.valueOf(102L), result.get(1));
+        assertEquals(Long.valueOf(103L), result.get(2));
+    }
+
+    @Test
+    public void testListVmIdsWithPendingJobEmptyResult() {
+        doReturn(Collections.emptyList()).when(vmWorkJobDaoImpl).customSearch(any(SearchCriteria.class), isNull());
+        List<Long> result = vmWorkJobDaoImpl.listVmIdsWithPendingJob();
+        verify(genericVmWorkJobSearchBuilder).join(eq("asyncJobSearch"), eq(asyncJobSearchBuilder), any(), any(), eq(JoinBuilder.JoinType.INNER));
+        verify(genericVmWorkJobSearchBuilder).and(eq("removed"), any(), eq(SearchCriteria.Op.NULL));
+        verify(genericVmWorkJobSearchBuilder).create();
+        verify(asyncJobSearchBuilder).and(eq("status"), any(), eq(SearchCriteria.Op.EQ));
+        verify(searchCriteria).setJoinParameters(eq("asyncJobSearch"), eq("status"), eq(JobInfo.Status.IN_PROGRESS));
+        verify(vmWorkJobDaoImpl).customSearch(searchCriteria, null);
+        assertTrue(result.isEmpty());
+    }
 }


@@ -26,20 +26,21 @@ import java.util.Set;
 import javax.inject.Inject;
 import javax.naming.ConfigurationException;
-import org.apache.cloudstack.api.APICommand;
 import org.apache.cloudstack.acl.RolePermissionEntity.Permission;
+import org.apache.cloudstack.api.APICommand;
+import org.apache.cloudstack.utils.cache.LazyCache;
+import org.apache.commons.lang3.StringUtils;
 import com.cloud.exception.PermissionDeniedException;
 import com.cloud.exception.UnavailableCommandException;
 import com.cloud.user.Account;
 import com.cloud.user.AccountService;
 import com.cloud.user.User;
+import com.cloud.utils.Pair;
 import com.cloud.utils.component.AdapterBase;
 import com.cloud.utils.component.PluggableService;
-import org.apache.commons.lang3.StringUtils;

 public class DynamicRoleBasedAPIAccessChecker extends AdapterBase implements APIAclChecker {
     @Inject
     private AccountService accountService;
     @Inject
@@ -48,6 +49,9 @@ public class DynamicRoleBasedAPIAccessChecker extends AdapterBase implements API
     private List<PluggableService> services;
     private Map<RoleType, Set<String>> annotationRoleBasedApisMap = new HashMap<RoleType, Set<String>>();
+    private LazyCache<Long, Account> accountCache;
+    private LazyCache<Long, Pair<Role, List<RolePermission>>> rolePermissionsCache;
+    private int cachePeriod;

     protected DynamicRoleBasedAPIAccessChecker() {
         super();
@@ -99,23 +103,66 @@ public class DynamicRoleBasedAPIAccessChecker extends AdapterBase implements API
                 annotationRoleBasedApisMap.get(role.getRoleType()).contains(apiName);
     }

+    protected Account getAccountFromId(long accountId) {
+        return accountService.getAccount(accountId);
+    }
+
+    protected Pair<Role, List<RolePermission>> getRolePermissions(long roleId) {
+        final Role accountRole = roleService.findRole(roleId);
+        if (accountRole == null || accountRole.getId() < 1L) {
+            return new Pair<>(null, null);
+        }
+        if (accountRole.getRoleType() == RoleType.Admin && accountRole.getId() == RoleType.Admin.getId()) {
+            return new Pair<>(accountRole, null);
+        }
+        return new Pair<>(accountRole, roleService.findAllPermissionsBy(accountRole.getId()));
+    }
+
+    protected Pair<Role, List<RolePermission>> getRolePermissionsUsingCache(long roleId) {
+        if (cachePeriod > 0) {
+            return rolePermissionsCache.get(roleId);
+        }
+        return getRolePermissions(roleId);
+    }
+
+    protected Account getAccountFromIdUsingCache(long accountId) {
+        if (cachePeriod > 0) {
+            return accountCache.get(accountId);
+        }
+        return getAccountFromId(accountId);
+    }

     @Override
     public boolean checkAccess(User user, String commandName) throws PermissionDeniedException {
         if (!isEnabled()) {
             return true;
         }
-        Account account = accountService.getAccount(user.getAccountId());
+        Account account = getAccountFromIdUsingCache(user.getAccountId());
         if (account == null) {
-            throw new PermissionDeniedException(String.format("The account id [%s] for user id [%s] is null.", user.getAccountId(), user.getUuid()));
+            throw new PermissionDeniedException(String.format("Account for user id [%s] cannot be found", user.getUuid()));
         }
-        return checkAccess(account, commandName);
+        Pair<Role, List<RolePermission>> roleAndPermissions = getRolePermissionsUsingCache(account.getRoleId());
+        final Role accountRole = roleAndPermissions.first();
+        if (accountRole == null) {
+            throw new PermissionDeniedException(String.format("Account role for user id [%s] cannot be found.", user.getUuid()));
+        }
+        if (accountRole.getRoleType() == RoleType.Admin && accountRole.getId() == RoleType.Admin.getId()) {
+            logger.info("Account for user id {} is Root Admin or Domain Admin, all APIs are allowed.", user.getUuid());
+            return true;
+        }
+        List<RolePermission> allPermissions = roleAndPermissions.second();
+        if (checkApiPermissionByRole(accountRole, commandName, allPermissions)) {
+            return true;
+        }
+        throw new UnavailableCommandException(String.format("The API [%s] does not exist or is not available for the account for user id [%s].", commandName, user.getUuid()));
     }

     public boolean checkAccess(Account account, String commandName) {
-        final Role accountRole = roleService.findRole(account.getRoleId());
-        if (accountRole == null || accountRole.getId() < 1L) {
+        Pair<Role, List<RolePermission>> roleAndPermissions = getRolePermissionsUsingCache(account.getRoleId());
+        final Role accountRole = roleAndPermissions.first();
+        if (accountRole == null) {
             throw new PermissionDeniedException(String.format("The account [%s] has role null or unknown.", account));
         }
@@ -160,6 +207,9 @@ public class DynamicRoleBasedAPIAccessChecker extends AdapterBase implements API
     @Override
     public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
         super.configure(name, params);
+        cachePeriod = Math.max(0, RoleService.DynamicApiCheckerCachePeriod.value());
+        accountCache = new LazyCache<>(32, cachePeriod, this::getAccountFromId);
+        rolePermissionsCache = new LazyCache<>(32, cachePeriod, this::getRolePermissions);
         return true;
     }
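Note on the caching above: configure() wires two LazyCache instances (account by id, role plus its permissions by role id), and both lookup helpers bypass the cache entirely whenever RoleService.DynamicApiCheckerCachePeriod is zero, so the behaviour stays opt-in. The LazyCache implementation itself is not part of this hunk; the class below is only an illustrative sketch of how such a loader-backed cache could be assembled on top of Caffeine, mirroring the new LazyCache<>(32, cachePeriod, this::getAccountFromId) call sites. The class name, constructor shape and Caffeine wiring here are assumptions, not the actual utility shipped with this change.

import java.time.Duration;
import java.util.function.Function;

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

// Illustrative sketch only; not the LazyCache class referenced by the checker above.
public class LoaderBackedCacheSketch<K, V> {

    private final LoadingCache<K, V> cache;

    public LoaderBackedCacheSketch(int initialCapacity, long expireAfterWriteSeconds, Function<K, V> loader) {
        this.cache = Caffeine.newBuilder()
                .initialCapacity(initialCapacity)
                .expireAfterWrite(Duration.ofSeconds(expireAfterWriteSeconds))
                .build(key -> loader.apply(key));
    }

    public V get(K key) {
        // A miss runs the loader (for example a DB lookup); repeat calls within the
        // expiry window are served from memory without touching the database.
        return cache.get(key);
    }
}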


@@ -321,13 +321,13 @@ public class ExplicitDedicationProcessor extends AffinityProcessorBase implement
                 }
             }
             //add all hosts inside this in includeList
-            List<HostVO> hostList = _hostDao.listByDataCenterId(dr.getDataCenterId());
-            for (HostVO host : hostList) {
-                DedicatedResourceVO dHost = _dedicatedDao.findByHostId(host.getId());
+            List<Long> hostList = _hostDao.listEnabledIdsByDataCenterId(dr.getDataCenterId());
+            for (Long hostId : hostList) {
+                DedicatedResourceVO dHost = _dedicatedDao.findByHostId(hostId);
                 if (dHost != null && !dedicatedResources.contains(dHost)) {
-                    avoidList.addHost(host.getId());
+                    avoidList.addHost(hostId);
                 } else {
-                    includeList.addHost(host.getId());
+                    includeList.addHost(hostId);
                 }
             }
         }
@@ -337,7 +337,7 @@ public class ExplicitDedicationProcessor extends AffinityProcessorBase implement
         List<HostPodVO> pods = _podDao.listByDataCenterId(dc.getId());
         List<ClusterVO> clusters = _clusterDao.listClustersByDcId(dc.getId());
-        List<HostVO> hosts = _hostDao.listByDataCenterId(dc.getId());
+        List<Long> hostIds = _hostDao.listEnabledIdsByDataCenterId(dc.getId());
         Set<Long> podsInIncludeList = includeList.getPodsToAvoid();
         Set<Long> clustersInIncludeList = includeList.getClustersToAvoid();
         Set<Long> hostsInIncludeList = includeList.getHostsToAvoid();
@@ -357,9 +357,9 @@ public class ExplicitDedicationProcessor extends AffinityProcessorBase implement
             }
         }
-        for (HostVO host : hosts) {
-            if (hostsInIncludeList != null && !hostsInIncludeList.contains(host.getId())) {
-                avoidList.addHost(host.getId());
+        for (Long hostId : hostIds) {
+            if (hostsInIncludeList != null && !hostsInIncludeList.contains(hostId)) {
+                avoidList.addHost(hostId);
             }
         }
         return avoidList;


@@ -23,7 +23,6 @@ import java.util.Map;
 import javax.inject.Inject;
 import javax.naming.ConfigurationException;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.cloudstack.affinity.AffinityGroup;
 import org.apache.cloudstack.affinity.AffinityGroupService;
 import org.apache.cloudstack.affinity.dao.AffinityGroupDao;
@@ -45,8 +44,9 @@ import org.apache.cloudstack.api.response.DedicatePodResponse;
 import org.apache.cloudstack.api.response.DedicateZoneResponse;
 import org.apache.cloudstack.context.CallContext;
 import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
-import org.apache.logging.log4j.Logger;
+import org.apache.commons.lang3.StringUtils;
 import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
 import org.springframework.stereotype.Component;

 import com.cloud.configuration.Config;
@@ -126,7 +126,7 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
     @ActionEvent(eventType = EventTypes.EVENT_DEDICATE_RESOURCE, eventDescription = "dedicating a Zone")
     public List<DedicatedResourceVO> dedicateZone(final Long zoneId, final Long domainId, final String accountName) {
         Long accountId = null;
-        List<HostVO> hosts = null;
+        List<Long> hostIds = null;
         if (accountName != null) {
             Account caller = CallContext.current().getCallingAccount();
             Account owner = _accountMgr.finalizeOwner(caller, accountName, domainId, null);
@@ -203,18 +203,20 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
                     releaseDedicatedResource(null, null, dr.getClusterId(), null);
                 }
-                hosts = _hostDao.listByDataCenterId(dc.getId());
-                for (HostVO host : hosts) {
-                    DedicatedResourceVO dHost = _dedicatedDao.findByHostId(host.getId());
+                hostIds = _hostDao.listEnabledIdsByDataCenterId(dc.getId());
+                for (Long hostId : hostIds) {
+                    DedicatedResourceVO dHost = _dedicatedDao.findByHostId(hostId);
                     if (dHost != null) {
                         if (!(childDomainIds.contains(dHost.getDomainId()))) {
+                            HostVO host = _hostDao.findById(hostId);
                             throw new CloudRuntimeException("Host " + host.getName() + " under this Zone " + dc.getName() + " is dedicated to different account/domain");
                         }
                         if (accountId != null) {
                             if (dHost.getAccountId().equals(accountId)) {
                                 hostsToRelease.add(dHost);
                             } else {
-                                logger.error(String.format("Host %s under this Zone %s is dedicated to different account/domain", host, dc));
+                                HostVO host = _hostDao.findById(hostId);
+                                logger.error("{} under {} is dedicated to different account/domain", host, dc);
                                 throw new CloudRuntimeException("Host " + host.getName() + " under this Zone " + dc.getName() + " is dedicated to different account/domain");
                             }
                         } else {
@@ -230,7 +232,7 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
                 }
             }
-            checkHostsSuitabilityForExplicitDedication(accountId, childDomainIds, hosts);
+            checkHostsSuitabilityForExplicitDedication(accountId, childDomainIds, hostIds);
             final Long accountIdFinal = accountId;
             return Transaction.execute(new TransactionCallback<List<DedicatedResourceVO>>() {
@@ -284,7 +286,7 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
         childDomainIds.add(domainId);
         checkAccountAndDomain(accountId, domainId);
         HostPodVO pod = _podDao.findById(podId);
-        List<HostVO> hosts = null;
+        List<Long> hostIds = null;
         if (pod == null) {
             throw new InvalidParameterValueException("Unable to find pod by id " + podId);
         } else {
@@ -339,18 +341,20 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
                     releaseDedicatedResource(null, null, dr.getClusterId(), null);
                 }
-                hosts = _hostDao.findByPodId(pod.getId());
-                for (HostVO host : hosts) {
-                    DedicatedResourceVO dHost = _dedicatedDao.findByHostId(host.getId());
+                hostIds = _hostDao.listIdsByPodId(pod.getId());
+                for (Long hostId : hostIds) {
+                    DedicatedResourceVO dHost = _dedicatedDao.findByHostId(hostId);
                     if (dHost != null) {
                         if (!(getDomainChildIds(domainId).contains(dHost.getDomainId()))) {
+                            HostVO host = _hostDao.findById(hostId);
                             throw new CloudRuntimeException("Host " + host.getName() + " under this Pod " + pod.getName() + " is dedicated to different account/domain");
                         }
                         if (accountId != null) {
                             if (dHost.getAccountId().equals(accountId)) {
                                 hostsToRelease.add(dHost);
                             } else {
-                                logger.error(String.format("Host %s under this Pod %s is dedicated to different account/domain", host, pod));
+                                HostVO host = _hostDao.findById(hostId);
+                                logger.error("{} under this {} is dedicated to different account/domain", host, pod);
                                 throw new CloudRuntimeException("Host " + host.getName() + " under this Pod " + pod.getName() + " is dedicated to different account/domain");
                             }
                         } else {
@@ -366,7 +370,7 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
                 }
             }
-            checkHostsSuitabilityForExplicitDedication(accountId, childDomainIds, hosts);
+            checkHostsSuitabilityForExplicitDedication(accountId, childDomainIds, hostIds);
             final Long accountIdFinal = accountId;
             return Transaction.execute(new TransactionCallback<List<DedicatedResourceVO>>() {
@@ -402,7 +406,7 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
     @ActionEvent(eventType = EventTypes.EVENT_DEDICATE_RESOURCE, eventDescription = "dedicating a Cluster")
     public List<DedicatedResourceVO> dedicateCluster(final Long clusterId, final Long domainId, final String accountName) {
         Long accountId = null;
-        List<HostVO> hosts = null;
+        List<Long> hostIds = null;
         if (accountName != null) {
             Account caller = CallContext.current().getCallingAccount();
             Account owner = _accountMgr.finalizeOwner(caller, accountName, domainId, null);
@@ -448,12 +452,13 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
             }
             //check if any resource under this cluster is dedicated to different account or sub-domain
-            hosts = _hostDao.findByClusterId(cluster.getId());
+            hostIds = _hostDao.listIdsByClusterId(cluster.getId());
             List<DedicatedResourceVO> hostsToRelease = new ArrayList<DedicatedResourceVO>();
-            for (HostVO host : hosts) {
-                DedicatedResourceVO dHost = _dedicatedDao.findByHostId(host.getId());
+            for (Long hostId : hostIds) {
+                DedicatedResourceVO dHost = _dedicatedDao.findByHostId(hostId);
                 if (dHost != null) {
                     if (!(childDomainIds.contains(dHost.getDomainId()))) {
+                        HostVO host = _hostDao.findById(hostId);
                         throw new CloudRuntimeException("Host " + host.getName() + " under this Cluster " + cluster.getName() +
                             " is dedicated to different account/domain");
                     }
@@ -479,7 +484,7 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
                 }
             }
-            checkHostsSuitabilityForExplicitDedication(accountId, childDomainIds, hosts);
+            checkHostsSuitabilityForExplicitDedication(accountId, childDomainIds, hostIds);
             final Long accountIdFinal = accountId;
             return Transaction.execute(new TransactionCallback<List<DedicatedResourceVO>>() {
@@ -576,7 +581,7 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
         List<Long> childDomainIds = getDomainChildIds(domainId);
         childDomainIds.add(domainId);
-        checkHostSuitabilityForExplicitDedication(accountId, childDomainIds, host);
+        checkHostSuitabilityForExplicitDedication(accountId, childDomainIds, host.getId());
         final Long accountIdFinal = accountId;
         return Transaction.execute(new TransactionCallback<List<DedicatedResourceVO>>() {
@@ -662,13 +667,14 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
         return vms;
     }

-    private boolean checkHostSuitabilityForExplicitDedication(Long accountId, List<Long> domainIds, Host host) {
+    private boolean checkHostSuitabilityForExplicitDedication(Long accountId, List<Long> domainIds, long hostId) {
         boolean suitable = true;
-        List<UserVmVO> allVmsOnHost = getVmsOnHost(host.getId());
+        List<UserVmVO> allVmsOnHost = getVmsOnHost(hostId);
         if (accountId != null) {
             for (UserVmVO vm : allVmsOnHost) {
                 if (vm.getAccountId() != accountId) {
-                    logger.info(String.format("Host %s found to be unsuitable for explicit dedication as it is running instances of another account", host));
+                    Host host = _hostDao.findById(hostId);
+                    logger.info("{} found to be unsuitable for explicit dedication as it is running instances of another account", host);
                     throw new CloudRuntimeException("Host " + host.getUuid() + " found to be unsuitable for explicit dedication as it is " +
                         "running instances of another account");
                 }
@@ -676,7 +682,8 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
         } else {
             for (UserVmVO vm : allVmsOnHost) {
                 if (!domainIds.contains(vm.getDomainId())) {
-                    logger.info(String.format("Host %s found to be unsuitable for explicit dedication as it is running instances of another domain", host));
+                    Host host = _hostDao.findById(hostId);
+                    logger.info("{} found to be unsuitable for explicit dedication as it is running instances of another domain", host);
                     throw new CloudRuntimeException("Host " + host.getUuid() + " found to be unsuitable for explicit dedication as it is " +
                         "running instances of another domain");
                 }
@@ -685,10 +692,10 @@ public class DedicatedResourceManagerImpl implements DedicatedService {
         return suitable;
     }

-    private boolean checkHostsSuitabilityForExplicitDedication(Long accountId, List<Long> domainIds, List<HostVO> hosts) {
+    private boolean checkHostsSuitabilityForExplicitDedication(Long accountId, List<Long> domainIds, List<Long> hostIds) {
         boolean suitable = true;
-        for (HostVO host : hosts) {
-            checkHostSuitabilityForExplicitDedication(accountId, domainIds, host);
+        for (Long hostId : hostIds) {
+            checkHostSuitabilityForExplicitDedication(accountId, domainIds, hostId);
         }
         return suitable;
     }
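The dedication hunks above all follow the same shape: iterate over plain host ids from the new listEnabledIdsByDataCenterId/listIdsByPodId/listIdsByClusterId DAO calls and fetch the full host row with _hostDao.findById(hostId) only on the rare error paths that need the host's name. A standalone, non-CloudStack sketch of that lazy-lookup pattern (the maps below stand in for the DAOs and are purely illustrative):

import java.util.List;
import java.util.Map;

// Illustrative only; plain collections stand in for HostDao and DedicatedResourceDao.
public class LazyHostLookupExample {

    record Host(long id, String name) {}

    public static void main(String[] args) {
        Map<Long, Host> hostTable = Map.of(
                1L, new Host(1L, "host-1"),
                2L, new Host(2L, "host-2"));
        // Host 2 is dedicated to domain 42; host 1 is not dedicated at all.
        Map<Long, Long> dedicatedDomainByHostId = Map.of(2L, 42L);
        List<Long> allowedDomainIds = List.of(7L);

        for (Long hostId : List.of(1L, 2L)) {
            Long dedicatedDomain = dedicatedDomainByHostId.get(hostId);
            if (dedicatedDomain == null || allowedDomainIds.contains(dedicatedDomain)) {
                continue; // common path: the full host record is never loaded
            }
            Host host = hostTable.get(hostId); // rare path: load details only for the message
            System.out.println("Host " + host.name() + " is dedicated to another account/domain");
        }
    }
}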


@@ -21,14 +21,15 @@ import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import java.util.stream.Collectors;

 import javax.inject.Inject;
 import javax.naming.ConfigurationException;

+import org.apache.commons.collections.CollectionUtils;

 import com.cloud.configuration.Config;
 import com.cloud.exception.InsufficientServerCapacityException;
-import com.cloud.host.HostVO;
 import com.cloud.resource.ResourceManager;
 import com.cloud.service.ServiceOfferingVO;
 import com.cloud.service.dao.ServiceOfferingDao;
@@ -38,7 +39,6 @@ import com.cloud.utils.DateUtil;
 import com.cloud.utils.NumbersUtil;
 import com.cloud.vm.VMInstanceVO;
 import com.cloud.vm.VirtualMachineProfile;
-import org.springframework.util.CollectionUtils;

 public class ImplicitDedicationPlanner extends FirstFitPlanner implements DeploymentClusterPlanner {
@@ -73,12 +73,11 @@ public class ImplicitDedicationPlanner extends FirstFitPlanner implements Deploy
         boolean preferred = isServiceOfferingUsingPlannerInPreferredMode(vmProfile.getServiceOfferingId());

         // Get the list of all the hosts in the given clusters
-        List<Long> allHosts = new ArrayList<Long>();
-        for (Long cluster : clusterList) {
-            List<HostVO> hostsInCluster = resourceMgr.listAllHostsInCluster(cluster);
-            for (HostVO hostVO : hostsInCluster) {
-                allHosts.add(hostVO.getId());
-            }
+        List<Long> allHosts = new ArrayList<>();
+        if (CollectionUtils.isNotEmpty(clusterList)) {
+            allHosts = clusterList.stream()
+                    .flatMap(cluster -> hostDao.listIdsByClusterId(cluster).stream())
+                    .collect(Collectors.toList());
         }

         // Go over all the hosts in the cluster and get a list of
@@ -224,20 +223,15 @@ public class ImplicitDedicationPlanner extends FirstFitPlanner implements Deploy
     }

     private List<Long> getUpdatedClusterList(List<Long> clusterList, Set<Long> hostsSet) {
-        List<Long> updatedClusterList = new ArrayList<Long>();
-        for (Long cluster : clusterList) {
-            List<HostVO> hosts = resourceMgr.listAllHostsInCluster(cluster);
-            Set<Long> hostsInClusterSet = new HashSet<Long>();
-            for (HostVO host : hosts) {
-                hostsInClusterSet.add(host.getId());
+        if (CollectionUtils.isEmpty(clusterList)) {
+            return new ArrayList<>();
         }
-            if (!hostsSet.containsAll(hostsInClusterSet)) {
-                updatedClusterList.add(cluster);
-            }
-        }
-        return updatedClusterList;
+        return clusterList.stream()
+                .filter(cluster -> {
+                    Set<Long> hostsInClusterSet = new HashSet<>(hostDao.listIdsByClusterId(cluster));
+                    return !hostsSet.containsAll(hostsInClusterSet);
+                })
+                .collect(Collectors.toList());
     }

     @Override
@@ -257,15 +251,11 @@ public class ImplicitDedicationPlanner extends FirstFitPlanner implements Deploy
         Account account = vmProfile.getOwner();

         // Get the list of all the hosts in the given clusters
-        List<Long> allHosts = new ArrayList<Long>();
-        if (!CollectionUtils.isEmpty(clusterList)) {
-            for (Long cluster : clusterList) {
-                List<HostVO> hostsInCluster = resourceMgr.listAllHostsInCluster(cluster);
-                for (HostVO hostVO : hostsInCluster) {
-                    allHosts.add(hostVO.getId());
-                }
-            }
+        List<Long> allHosts = new ArrayList<>();
+        if (CollectionUtils.isNotEmpty(clusterList)) {
+            allHosts = clusterList.stream()
+                    .flatMap(cluster -> hostDao.listIdsByClusterId(cluster).stream())
+                    .collect(Collectors.toList());
         }

         // Go over all the hosts in the cluster and get a list of
         // 1. All empty hosts, not running any vms.
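The planner refactor above swaps the nested HostVO loops for a stream that flat-maps each cluster id straight to its host ids via hostDao.listIdsByClusterId. A standalone, non-CloudStack illustration of that flattening pattern (the map below plays the role of the DAO call and is an assumption of this note, not project code):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only; the map stands in for hostDao.listIdsByClusterId(clusterId).
public class ClusterHostIdFlattening {

    public static void main(String[] args) {
        Map<Long, List<Long>> hostIdsByCluster = Map.of(
                1L, List.of(5L),
                2L, List.of(6L),
                3L, List.of(7L));
        List<Long> clusterList = List.of(1L, 2L, 3L);

        List<Long> allHosts = clusterList.stream()
                .flatMap(cluster -> hostIdsByCluster.getOrDefault(cluster, List.of()).stream())
                .collect(Collectors.toList());

        System.out.println(allHosts); // prints [5, 6, 7]
    }
}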


@@ -16,11 +16,11 @@
 // under the License.
 package org.apache.cloudstack.implicitplanner;

+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.hamcrest.Matchers.equalTo;
+import static org.hamcrest.Matchers.everyItem;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
-import static org.hamcrest.MatcherAssert.assertThat;
-import static org.hamcrest.Matchers.everyItem;
-import static org.hamcrest.Matchers.equalTo;
 import static org.mockito.ArgumentMatchers.anyString;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
@@ -36,7 +36,11 @@ import java.util.UUID;
 import javax.inject.Inject;

-import com.cloud.user.User;
+import org.apache.cloudstack.context.CallContext;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
+import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
+import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
+import org.apache.cloudstack.test.utils.SpringUtils;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -54,12 +58,6 @@ import org.springframework.test.context.ContextConfiguration;
 import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
 import org.springframework.test.context.support.AnnotationConfigContextLoader;

-import org.apache.cloudstack.context.CallContext;
-import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
-import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
-import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
-import org.apache.cloudstack.test.utils.SpringUtils;
 import com.cloud.capacity.Capacity;
 import com.cloud.capacity.CapacityManager;
 import com.cloud.capacity.dao.CapacityDao;
@@ -73,7 +71,6 @@ import com.cloud.deploy.DeploymentPlanner.ExcludeList;
 import com.cloud.deploy.ImplicitDedicationPlanner;
 import com.cloud.exception.InsufficientServerCapacityException;
 import com.cloud.gpu.dao.HostGpuGroupsDao;
-import com.cloud.host.HostVO;
 import com.cloud.host.dao.HostDao;
 import com.cloud.host.dao.HostDetailsDao;
 import com.cloud.host.dao.HostTagsDao;
@@ -90,6 +87,7 @@ import com.cloud.storage.dao.VolumeDao;
 import com.cloud.user.Account;
 import com.cloud.user.AccountManager;
 import com.cloud.user.AccountVO;
+import com.cloud.user.User;
 import com.cloud.user.UserVO;
 import com.cloud.utils.Pair;
 import com.cloud.utils.component.ComponentContext;
@@ -387,21 +385,9 @@ public class ImplicitPlannerTest {
         when(serviceOfferingDetailsDao.listDetailsKeyPairs(offeringId)).thenReturn(details);

         // Initialize hosts in clusters
-        HostVO host1 = mock(HostVO.class);
-        when(host1.getId()).thenReturn(5L);
-        HostVO host2 = mock(HostVO.class);
-        when(host2.getId()).thenReturn(6L);
-        HostVO host3 = mock(HostVO.class);
-        when(host3.getId()).thenReturn(7L);
-        List<HostVO> hostsInCluster1 = new ArrayList<HostVO>();
-        List<HostVO> hostsInCluster2 = new ArrayList<HostVO>();
-        List<HostVO> hostsInCluster3 = new ArrayList<HostVO>();
-        hostsInCluster1.add(host1);
-        hostsInCluster2.add(host2);
-        hostsInCluster3.add(host3);
-        when(resourceMgr.listAllHostsInCluster(1)).thenReturn(hostsInCluster1);
-        when(resourceMgr.listAllHostsInCluster(2)).thenReturn(hostsInCluster2);
-        when(resourceMgr.listAllHostsInCluster(3)).thenReturn(hostsInCluster3);
+        when(hostDao.listIdsByClusterId(1L)).thenReturn(List.of(5L));
+        when(hostDao.listIdsByClusterId(2L)).thenReturn(List.of(6L));
+        when(hostDao.listIdsByClusterId(3L)).thenReturn(List.of(7L));

         // Mock vms on each host.
         long offeringIdForVmsOfThisAccount = 15L;


@@ -109,7 +109,7 @@ public class AgentRoutingResource extends AgentStorageResource {
     public PingCommand getCurrentStatus(long id) {
         TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.SIMULATOR_DB);
         try {
-            MockConfigurationVO config = _simMgr.getMockConfigurationDao().findByNameBottomUP(agentHost.getDataCenterId(), agentHost.getPodId(), agentHost.getClusterId(), agentHost.getId(), "PingCommand");
+            MockConfigurationVO config = null;
             if (config != null) {
                 Map<String, String> configParameters = config.getParameters();
                 for (Map.Entry<String, String> entry : configParameters.entrySet()) {
@@ -122,7 +122,7 @@ public class AgentRoutingResource extends AgentStorageResource {
                 }
             }
-            config = _simMgr.getMockConfigurationDao().findByNameBottomUP(agentHost.getDataCenterId(), agentHost.getPodId(), agentHost.getClusterId(), agentHost.getId(), "PingRoutingWithNwGroupsCommand");
+            config = null;
             if (config != null) {
                 String message = config.getJsonResponse();
                 if (message != null) {


@@ -31,6 +31,7 @@ import javax.naming.ConfigurationException;
 import javax.persistence.EntityExistsException;

 import org.apache.cloudstack.hypervisor.xenserver.XenserverConfigs;
+import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.maven.artifact.versioning.ComparableVersion;
 import org.apache.xmlrpc.XmlRpcException;
@@ -144,8 +145,8 @@ public class XcpServerDiscoverer extends DiscovererBase implements Discoverer, L
             sc.and(sc.entity().getGuid(), Op.EQ, guid);
             List<ClusterVO> clusters = sc.list();
             ClusterVO clu = clusters.get(0);
-            List<HostVO> clusterHosts = _resourceMgr.listAllHostsInCluster(clu.getId());
-            if (clusterHosts == null || clusterHosts.size() == 0) {
+            List<Long> clusterHostIds = _hostDao.listIdsByClusterId(clu.getId());
+            if (CollectionUtils.isEmpty(clusterHostIds)) {
                 clu.setGuid(null);
                 _clusterDao.update(clu.getId(), clu);
                 _clusterDao.update(cluster.getId(), cluster);
@@ -245,8 +246,8 @@ public class XcpServerDiscoverer extends DiscovererBase implements Discoverer, L
             if (clu.getGuid() == null) {
                 setClusterGuid(clu, poolUuid);
             } else {
-                List<HostVO> clusterHosts = _resourceMgr.listAllHostsInCluster(clusterId);
-                if (clusterHosts != null && clusterHosts.size() > 0) {
+                List<Long> clusterHostIds = _hostDao.listIdsByClusterId(clusterId);
+                if (CollectionUtils.isNotEmpty(clusterHostIds)) {
                     if (!clu.getGuid().equals(poolUuid)) {
                         String msg = "Please join the host " + hostIp + " to XS pool "
                             + clu.getGuid() + " through XC/XS before adding it through CS UI";


@@ -298,8 +298,8 @@ public class PrometheusExporterImpl extends ManagerBase implements PrometheusExp
             metricsList.add(new ItemHostMemory(zoneName, zoneUuid, null, null, null, null, ALLOCATED, allocatedCapacityByTag.third(), 0, tag));
         });

-        List<HostTagVO> allHostTagVOS = hostDao.listAll().stream()
-                .flatMap( h -> _hostTagsDao.getHostTags(h.getId()).stream())
+        List<HostTagVO> allHostTagVOS = hostDao.listAllIds().stream()
+                .flatMap( h -> _hostTagsDao.getHostTags(h).stream())
                 .distinct()
                 .collect(Collectors.toList());
         List<String> allHostTags = new ArrayList<>();


@@ -30,6 +30,7 @@ import org.apache.cloudstack.api.response.StoragePoolResponse;
 import org.apache.cloudstack.api.response.UserVmResponse;
 import org.apache.cloudstack.api.response.VolumeResponse;
 import org.apache.cloudstack.api.response.ZoneResponse;
+import org.apache.cloudstack.framework.config.ConfigKey;
 import org.apache.cloudstack.response.ClusterMetricsResponse;
 import org.apache.cloudstack.response.DbMetricsResponse;
 import org.apache.cloudstack.response.HostMetricsResponse;
@@ -47,6 +48,11 @@ import com.cloud.utils.Pair;
 import com.cloud.utils.component.PluggableService;

 public interface MetricsService extends PluggableService {
+    ConfigKey<Boolean> AllowListMetricsComputation = new ConfigKey<>("Advanced", Boolean.class, "allow.list.metrics.computation", "true",
+            "Whether the list zones and cluster metrics APIs are allowed to compute metrics. Large environments may disable this.",
+            true, ConfigKey.Scope.Global);
+
     InfrastructureResponse listInfrastructure();

     ListResponse<VmMetricsStatsResponse> searchForVmMetricsStats(ListVMsUsageHistoryCmd cmd);
@@ -56,10 +62,10 @@ public interface MetricsService extends PluggableService {
     List<VmMetricsResponse> listVmMetrics(List<UserVmResponse> vmResponses);
     List<StoragePoolMetricsResponse> listStoragePoolMetrics(List<StoragePoolResponse> poolResponses);
     List<HostMetricsResponse> listHostMetrics(List<HostResponse> poolResponses);
-    List<ManagementServerMetricsResponse> listManagementServerMetrics(List<ManagementServerResponse> poolResponses);
     List<ClusterMetricsResponse> listClusterMetrics(Pair<List<ClusterResponse>, Integer> clusterResponses);
     List<ZoneMetricsResponse> listZoneMetrics(List<ZoneResponse> poolResponses);
+    List<ManagementServerMetricsResponse> listManagementServerMetrics(List<ManagementServerResponse> poolResponses);
     UsageServerMetricsResponse listUsageServerMetrics();
     DbMetricsResponse listDbMetrics();
 }


@@ -61,6 +61,8 @@ import org.apache.cloudstack.api.response.VolumeResponse;
 import org.apache.cloudstack.api.response.ZoneResponse;
 import org.apache.cloudstack.cluster.ClusterDrsAlgorithm;
 import org.apache.cloudstack.context.CallContext;
+import org.apache.cloudstack.framework.config.ConfigKey;
+import org.apache.cloudstack.framework.config.Configurable;
 import org.apache.cloudstack.management.ManagementServerHost.State;
 import org.apache.cloudstack.response.ClusterMetricsResponse;
 import org.apache.cloudstack.response.DbMetricsResponse;
@@ -110,8 +112,6 @@ import com.cloud.host.Status;
 import com.cloud.host.dao.HostDao;
 import com.cloud.network.router.VirtualRouter;
 import com.cloud.org.Cluster;
-import com.cloud.org.Grouping;
-import com.cloud.org.Managed;
 import com.cloud.server.DbStatsCollection;
 import com.cloud.server.ManagementServerHostStats;
 import com.cloud.server.StatsCollector;
@@ -141,8 +141,7 @@ import com.cloud.vm.dao.VMInstanceDao;
 import com.cloud.vm.dao.VmStatsDao;
 import com.google.gson.Gson;

-public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements MetricsService {
+public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements MetricsService, Configurable {
     @Inject
     private DataCenterDao dataCenterDao;
     @Inject
@@ -197,7 +196,6 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
     }

     private void updateHostMetrics(final HostMetrics hostMetrics, final HostJoinVO host) {
-        hostMetrics.incrTotalHosts();
         hostMetrics.addCpuAllocated(host.getCpuReservedCapacity() + host.getCpuUsedCapacity());
         hostMetrics.addMemoryAllocated(host.getMemReservedCapacity() + host.getMemUsedCapacity());
         final HostStats hostStats = ApiDBUtils.getHostStatistics(host.getId());
@@ -561,22 +559,17 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
         response.setZones(dataCenterDao.countAll());
         response.setPods(podDao.countAll());
         response.setClusters(clusterDao.countAll());
-        response.setHosts(hostDao.countAllByType(Host.Type.Routing));
+        Pair<Integer, Integer> hostCountAndCpuSockets = hostDao.countAllHostsAndCPUSocketsByType(Host.Type.Routing);
+        response.setHosts(hostCountAndCpuSockets.first());
         response.setStoragePools(storagePoolDao.countAll());
         response.setImageStores(imageStoreDao.countAllImageStores());
         response.setObjectStores(objectStoreDao.countAllObjectStores());
-        response.setSystemvms(vmInstanceDao.listByTypes(VirtualMachine.Type.ConsoleProxy, VirtualMachine.Type.SecondaryStorageVm).size());
+        response.setSystemvms(vmInstanceDao.countByTypes(VirtualMachine.Type.ConsoleProxy, VirtualMachine.Type.SecondaryStorageVm));
         response.setRouters(domainRouterDao.countAllByRole(VirtualRouter.Role.VIRTUAL_ROUTER));
         response.setInternalLbs(domainRouterDao.countAllByRole(VirtualRouter.Role.INTERNAL_LB_VM));
         response.setAlerts(alertDao.countAll());
-        int cpuSockets = 0;
-        for (final Host host : hostDao.listByType(Host.Type.Routing)) {
-            if (host.getCpuSockets() != null) {
-                cpuSockets += host.getCpuSockets();
-            }
-        }
-        response.setCpuSockets(cpuSockets);
-        response.setManagementServers(managementServerHostDao.listAll().size());
+        response.setCpuSockets(hostCountAndCpuSockets.second());
+        response.setManagementServers(managementServerHostDao.countAll());
         return response;
     }
@@ -764,38 +757,44 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
             final CapacityDaoImpl.SummedCapacity cpuCapacity = getCapacity(Capacity.CAPACITY_TYPE_CPU, null, clusterId);
             final CapacityDaoImpl.SummedCapacity memoryCapacity = getCapacity(Capacity.CAPACITY_TYPE_MEMORY, null, clusterId);
             final HostMetrics hostMetrics = new HostMetrics(cpuCapacity, memoryCapacity);
+            hostMetrics.setUpResources(Long.valueOf(hostDao.countAllInClusterByTypeAndStates(clusterId, Host.Type.Routing, List.of(Status.Up))));
+            hostMetrics.setTotalResources(Long.valueOf(hostDao.countAllInClusterByTypeAndStates(clusterId, Host.Type.Routing, null)));
+            hostMetrics.setTotalHosts(hostMetrics.getTotalResources());
+            if (AllowListMetricsComputation.value()) {
                 List<Ternary<Long, Long, Long>> cpuList = new ArrayList<>();
                 List<Ternary<Long, Long, Long>> memoryList = new ArrayList<>();
-            for (final Host host : hostDao.findByClusterId(clusterId)) {
+                for (final Host host: hostDao.findByClusterId(clusterId)) {
                     if (host == null || host.getType() != Host.Type.Routing) {
                         continue;
                     }
-                if (host.getStatus() == Status.Up) {
-                    hostMetrics.incrUpResources();
-                }
-                hostMetrics.incrTotalResources();
+                    updateHostMetrics(hostMetrics, hostJoinDao.findById(host.getId()));
                     HostJoinVO hostJoin = hostJoinDao.findById(host.getId());
-                updateHostMetrics(hostMetrics, hostJoin);
                     cpuList.add(new Ternary<>(hostJoin.getCpuUsedCapacity(), hostJoin.getCpuReservedCapacity(), hostJoin.getCpus() * hostJoin.getSpeed()));
                     memoryList.add(new Ternary<>(hostJoin.getMemUsedCapacity(), hostJoin.getMemReservedCapacity(), hostJoin.getTotalMemory()));
                 }
                 try {
                     Double imbalance = ClusterDrsAlgorithm.getClusterImbalance(clusterId, cpuList, memoryList, null);
                     metricsResponse.setDrsImbalance(imbalance.isNaN() ? null : 100.0 * imbalance);
                 } catch (ConfigurationException e) {
-                logger.warn("Failed to get cluster imbalance for cluster " + clusterId, e);
+                    logger.warn("Failed to get cluster imbalance for cluster {}", clusterId, e);
+                }
+            } else {
+                if (cpuCapacity != null) {
+                    hostMetrics.setCpuAllocated(cpuCapacity.getAllocatedCapacity());
+                }
+                if (memoryCapacity != null) {
+                    hostMetrics.setMemoryAllocated(memoryCapacity.getAllocatedCapacity());
+                }
             }
+            metricsResponse.setState(clusterResponse.getAllocationState(), clusterResponse.getManagedState());
+            metricsResponse.setResources(hostMetrics.getUpResources(), hostMetrics.getTotalResources());
             addHostCpuMetricsToResponse(metricsResponse, clusterId, hostMetrics);
             addHostMemoryMetricsToResponse(metricsResponse, clusterId, hostMetrics);
             metricsResponse.setHasAnnotation(clusterResponse.hasAnnotation());
-            metricsResponse.setState(clusterResponse.getAllocationState(), clusterResponse.getManagedState());
-            metricsResponse.setResources(hostMetrics.getUpResources(), hostMetrics.getTotalResources());
             metricsResponses.add(metricsResponse);
         }
         return metricsResponses;
@@ -942,17 +941,15 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
             final CapacityDaoImpl.SummedCapacity cpuCapacity = getCapacity((int) Capacity.CAPACITY_TYPE_CPU, zoneId, null);
             final CapacityDaoImpl.SummedCapacity memoryCapacity = getCapacity((int) Capacity.CAPACITY_TYPE_MEMORY, zoneId, null);
             final HostMetrics hostMetrics = new HostMetrics(cpuCapacity, memoryCapacity);
+            hostMetrics.setUpResources(Long.valueOf(clusterDao.countAllManagedAndEnabledByDcId(zoneId)));
+            hostMetrics.setTotalResources(Long.valueOf(clusterDao.countAllByDcId(zoneId)));
+            hostMetrics.setTotalHosts(Long.valueOf(hostDao.countAllByTypeInZone(zoneId, Host.Type.Routing)));
+            if (AllowListMetricsComputation.value()) {
                 for (final Cluster cluster : clusterDao.listClustersByDcId(zoneId)) {
                     if (cluster == null) {
                         continue;
                     }
-                hostMetrics.incrTotalResources();
-                if (cluster.getAllocationState() == Grouping.AllocationState.Enabled
-                        && cluster.getManagedState() == Managed.ManagedState.Managed) {
-                    hostMetrics.incrUpResources();
-                }
                     for (final Host host: hostDao.findByClusterId(cluster.getId())) {
                         if (host == null || host.getType() != Host.Type.Routing) {
                             continue;
@@ -960,17 +957,22 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
                         updateHostMetrics(hostMetrics, hostJoinDao.findById(host.getId()));
                     }
                 }
+            } else {
+                if (cpuCapacity != null) {
+                    hostMetrics.setCpuAllocated(cpuCapacity.getAllocatedCapacity());
+                }
+                if (memoryCapacity != null) {
+                    hostMetrics.setMemoryAllocated(memoryCapacity.getAllocatedCapacity());
+                }
+            }
+            addHostCpuMetricsToResponse(metricsResponse, null, hostMetrics);
+            addHostMemoryMetricsToResponse(metricsResponse, null, hostMetrics);
             metricsResponse.setHasAnnotation(zoneResponse.hasAnnotation());
             metricsResponse.setState(zoneResponse.getAllocationState());
             metricsResponse.setResource(hostMetrics.getUpResources(), hostMetrics.getTotalResources());
-            final Long totalHosts = hostMetrics.getTotalHosts();
-            // CPU
-            addHostCpuMetricsToResponse(metricsResponse, null, hostMetrics);
-            // Memory
-            addHostMemoryMetricsToResponse(metricsResponse, null, hostMetrics);
             metricsResponses.add(metricsResponse);
         }
         return metricsResponses;
@@ -1028,12 +1030,14 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
     private void getQueryHistory(DbMetricsResponse response) {
         Map<String, Object> dbStats = ApiDBUtils.getDbStatistics();
-        if (dbStats != null) {
-            response.setQueries((Long)dbStats.get(DbStatsCollection.queries));
-            response.setUptime((Long)dbStats.get(DbStatsCollection.uptime));
+        if (dbStats == null) {
+            return;
         }
-        List<Double> loadHistory = (List<Double>) dbStats.get(DbStatsCollection.loadAvarages);
+        response.setQueries((Long)dbStats.getOrDefault(DbStatsCollection.queries, -1L));
+        response.setUptime((Long)dbStats.getOrDefault(DbStatsCollection.uptime, -1L));
+        List<Double> loadHistory = (List<Double>) dbStats.getOrDefault(DbStatsCollection.loadAvarages, new ArrayList<Double>());
         double[] loadAverages = new double[loadHistory.size()];
         int index = 0;
@@ -1108,6 +1112,16 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
         return cmdList;
     }

+    @Override
+    public String getConfigComponentName() {
+        return MetricsService.class.getSimpleName();
+    }
+
+    @Override
+    public ConfigKey<?>[] getConfigKeys() {
+        return new ConfigKey<?>[] {AllowListMetricsComputation};
+    }
+
     private class HostMetrics {
         // CPU metrics
         private Long totalCpu = 0L;
@@ -1133,6 +1147,14 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
             }
         }

+        public void setCpuAllocated(Long cpuAllocated) {
+            this.cpuAllocated = cpuAllocated;
+        }
+
+        public void setMemoryAllocated(Long memoryAllocated) {
+            this.memoryAllocated = memoryAllocated;
+        }
+
         public void addCpuAllocated(Long cpuAllocated) {
             this.cpuAllocated += cpuAllocated;
         }
@@ -1161,16 +1183,16 @@ public class MetricsServiceImpl extends MutualExclusiveIdsManagerBase implements
             }
         }

-        public void incrTotalHosts() {
-            this.totalHosts++;
+        public void setTotalHosts(Long totalHosts) {
+            this.totalHosts = totalHosts;
         }

-        public void incrTotalResources() {
-            this.totalResources++;
+        public void setTotalResources(Long totalResources) {
+            this.totalResources = totalResources;
         }

-        public void incrUpResources() {
-            this.upResources++;
+        public void setUpResources(Long upResources) {
+            this.upResources = upResources;
         }

         public Long getTotalCpu() {


@@ -18,6 +18,26 @@
  */
 package org.apache.cloudstack.storage.datastore.lifecycle;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import javax.inject.Inject;
+import org.apache.cloudstack.engine.subsystem.api.storage.ClusterScope;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
+import org.apache.cloudstack.engine.subsystem.api.storage.HostScope;
+import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreLifeCycle;
+import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreParameters;
+import org.apache.cloudstack.engine.subsystem.api.storage.ZoneScope;
+import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
+import org.apache.cloudstack.storage.volume.datastore.PrimaryDataStoreHelper;
 import com.cloud.agent.AgentManager;
 import com.cloud.agent.api.Answer;
 import com.cloud.agent.api.CreateStoragePoolCommand;
@@ -48,6 +68,7 @@ import com.cloud.storage.dao.StoragePoolWorkDao;
 import com.cloud.storage.dao.VolumeDao;
 import com.cloud.user.dao.UserDao;
 import com.cloud.utils.NumbersUtil;
+import com.cloud.utils.Pair;
 import com.cloud.utils.db.DB;
 import com.cloud.utils.exception.CloudRuntimeException;
 import com.cloud.vm.VirtualMachineManager;
@@ -56,23 +77,6 @@ import com.cloud.vm.dao.DomainRouterDao;
 import com.cloud.vm.dao.SecondaryStorageVmDao;
 import com.cloud.vm.dao.UserVmDao;
 import com.cloud.vm.dao.VMInstanceDao;
-import org.apache.cloudstack.engine.subsystem.api.storage.ClusterScope;
-import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
-import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
-import org.apache.cloudstack.engine.subsystem.api.storage.HostScope;
-import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreInfo;
-import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreLifeCycle;
-import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreParameters;
-import org.apache.cloudstack.engine.subsystem.api.storage.ZoneScope;
-import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
-import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
-import org.apache.cloudstack.storage.volume.datastore.PrimaryDataStoreHelper;
-import javax.inject.Inject;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-import java.util.UUID;
 public class CloudStackPrimaryDataStoreLifeCycleImpl extends BasePrimaryDataStoreLifeCycleImpl implements PrimaryDataStoreLifeCycle {
     @Inject
@@ -326,18 +330,14 @@ public class CloudStackPrimaryDataStoreLifeCycleImpl extends BasePrimaryDataStor
    }
    private void validateVcenterDetails(Long zoneId, Long podId, Long clusterId, String storageHost) {
-       List<HostVO> allHosts =
-               _resourceMgr.listAllUpHosts(Host.Type.Routing, clusterId, podId, zoneId);
-       if (allHosts.isEmpty()) {
+       List<Long> allHostIds = _hostDao.listIdsForUpRouting(zoneId, podId, clusterId);
+       if (allHostIds.isEmpty()) {
            throw new CloudRuntimeException(String.format("No host up to associate a storage pool with in zone: %s pod: %s cluster: %s",
                    zoneDao.findById(zoneId), podDao.findById(podId), clusterDao.findById(clusterId)));
        }
-       boolean success = false;
-       for (HostVO h : allHosts) {
+       for (Long hId : allHostIds) {
            ValidateVcenterDetailsCommand cmd = new ValidateVcenterDetailsCommand(storageHost);
-           final Answer answer = agentMgr.easySend(h.getId(), cmd);
+           final Answer answer = agentMgr.easySend(hId, cmd);
            if (answer != null && answer.getResult()) {
                logger.info("Successfully validated vCenter details provided");
                return;
@@ -346,7 +346,7 @@ public class CloudStackPrimaryDataStoreLifeCycleImpl extends BasePrimaryDataStor
                throw new InvalidParameterValueException(String.format("Provided vCenter server details does not match with the existing vCenter in zone: %s",
                        zoneDao.findById(zoneId)));
            } else {
-               logger.warn("Can not validate vCenter through host {} due to ValidateVcenterDetailsCommand returns null", h);
+               logger.warn("Can not validate vCenter through host {} due to ValidateVcenterDetailsCommand returns null", hostDao.findById(hId));
            }
        }
    }
@@ -385,85 +385,57 @@ public class CloudStackPrimaryDataStoreLifeCycleImpl extends BasePrimaryDataStor
        }
    }
+   private Pair<List<Long>, Boolean> prepareOcfs2NodesIfNeeded(PrimaryDataStoreInfo primaryStore) {
+       if (!StoragePoolType.OCFS2.equals(primaryStore.getPoolType())) {
+           return new Pair<>(_hostDao.listIdsForUpRouting(primaryStore.getDataCenterId(),
+                   primaryStore.getPodId(), primaryStore.getClusterId()), true);
+       }
+       List<HostVO> allHosts = _resourceMgr.listAllUpHosts(Host.Type.Routing, primaryStore.getClusterId(),
+               primaryStore.getPodId(), primaryStore.getDataCenterId());
+       if (allHosts.isEmpty()) {
+           return new Pair<>(Collections.emptyList(), true);
+       }
+       List<Long> hostIds = allHosts.stream().map(HostVO::getId).collect(Collectors.toList());
+       if (!_ocfs2Mgr.prepareNodes(allHosts, primaryStore)) {
+           return new Pair<>(hostIds, false);
+       }
+       return new Pair<>(hostIds, true);
+   }
    @Override
    public boolean attachCluster(DataStore store, ClusterScope scope) {
-       PrimaryDataStoreInfo primarystore = (PrimaryDataStoreInfo)store;
-       // Check if there is host up in this cluster
-       List<HostVO> allHosts =
-               _resourceMgr.listAllUpHosts(Host.Type.Routing, primarystore.getClusterId(), primarystore.getPodId(), primarystore.getDataCenterId());
-       if (allHosts.isEmpty()) {
-           primaryDataStoreDao.expunge(primarystore.getId());
-           throw new CloudRuntimeException(String.format("No host up to associate a storage pool with in cluster %s", clusterDao.findById(primarystore.getClusterId())));
+       PrimaryDataStoreInfo primaryStore = (PrimaryDataStoreInfo)store;
+       Pair<List<Long>, Boolean> result = prepareOcfs2NodesIfNeeded(primaryStore);
+       List<Long> hostIds = result.first();
+       if (hostIds.isEmpty()) {
+           primaryDataStoreDao.expunge(primaryStore.getId());
+           throw new CloudRuntimeException("No host up to associate a storage pool with in cluster: " +
+                   clusterDao.findById(primaryStore.getClusterId()));
        }
-       if (primarystore.getPoolType() == StoragePoolType.OCFS2 && !_ocfs2Mgr.prepareNodes(allHosts, primarystore)) {
-           logger.warn("Can not create storage pool {} on cluster {}", primarystore::toString, () -> clusterDao.findById(primarystore.getClusterId()));
-           primaryDataStoreDao.expunge(primarystore.getId());
+       if (!result.second()) {
+           logger.warn("Can not create storage pool {} on {}", primaryStore,
+                   clusterDao.findById(primaryStore.getClusterId()));
+           primaryDataStoreDao.expunge(primaryStore.getId());
            return false;
        }
-       boolean success = false;
-       for (HostVO h : allHosts) {
-           success = createStoragePool(h, primarystore);
-           if (success) {
+       for (Long hId : hostIds) {
+           HostVO host = _hostDao.findById(hId);
+           if (createStoragePool(host, primaryStore)) {
                break;
            }
        }
        logger.debug("In createPool Adding the pool to each of the hosts");
-       List<HostVO> poolHosts = new ArrayList<HostVO>();
-       for (HostVO h : allHosts) {
-           try {
-               storageMgr.connectHostToSharedPool(h, primarystore.getId());
-               poolHosts.add(h);
-           } catch (StorageConflictException se) {
-               primaryDataStoreDao.expunge(primarystore.getId());
-               throw new CloudRuntimeException("Storage has already been added as local storage");
-           } catch (Exception e) {
-               logger.warn("Unable to establish a connection between " + h + " and " + primarystore, e);
-               String reason = storageMgr.getStoragePoolMountFailureReason(e.getMessage());
-               if (reason != null) {
-                   throw new CloudRuntimeException(reason);
-               }
-           }
-       }
-       if (poolHosts.isEmpty()) {
-           logger.warn("No host can access storage pool {} on cluster {}", primarystore::toString, () -> clusterDao.findById(primarystore.getClusterId()));
-           primaryDataStoreDao.expunge(primarystore.getId());
-           throw new CloudRuntimeException("Failed to access storage pool");
-       }
+       storageMgr.connectHostsToPool(store, hostIds, scope, true, true);
        dataStoreHelper.attachCluster(store);
        return true;
    }
    @Override
-   public boolean attachZone(DataStore dataStore, ZoneScope scope, HypervisorType hypervisorType) {
-       List<HostVO> hosts = _resourceMgr.listAllUpHostsInOneZoneByHypervisor(hypervisorType, scope.getScopeId());
+   public boolean attachZone(DataStore store, ZoneScope scope, HypervisorType hypervisorType) {
+       List<Long> hostIds = _hostDao.listIdsForUpEnabledByZoneAndHypervisor(scope.getScopeId(), hypervisorType);
        logger.debug("In createPool. Attaching the pool to each of the hosts.");
-       List<HostVO> poolHosts = new ArrayList<HostVO>();
-       for (HostVO host : hosts) {
-           try {
-               storageMgr.connectHostToSharedPool(host, dataStore.getId());
-               poolHosts.add(host);
-           } catch (StorageConflictException se) {
-               primaryDataStoreDao.expunge(dataStore.getId());
-               throw new CloudRuntimeException(String.format("Storage has already been added as local storage to host: %s", host));
-           } catch (Exception e) {
-               logger.warn("Unable to establish a connection between " + host + " and " + dataStore, e);
-               String reason = storageMgr.getStoragePoolMountFailureReason(e.getMessage());
-               if (reason != null) {
-                   throw new CloudRuntimeException(reason);
-               }
-           }
-       }
-       if (poolHosts.isEmpty()) {
-           logger.warn("No host can access storage pool " + dataStore + " in this zone.");
-           primaryDataStoreDao.expunge(dataStore.getId());
-           throw new CloudRuntimeException("Failed to create storage pool as it is not accessible to hosts.");
-       }
-       dataStoreHelper.attachZone(dataStore, hypervisorType);
+       storageMgr.connectHostsToPool(store, hostIds, scope, true, true);
+       dataStoreHelper.attachZone(store, hypervisorType);
        return true;
    }
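The attach paths above now hand the whole batch of host IDs to `storageMgr.connectHostsToPool(...)` instead of looping over hosts inline; per the PR description, the number of connect workers is governed by `storage.pool.host.connect.workers`. The following is only a rough sketch of that fan-out idea under stated assumptions: the `HostConnector` interface and class name are invented for illustration and are not CloudStack APIs, and real error handling in StorageManager may differ.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolHostConnectSketch {

    @FunctionalInterface
    interface HostConnector {
        // Stand-in for a per-host connect call such as connectHostToSharedPool(hostId, poolId).
        void connect(long hostId, long poolId) throws Exception;
    }

    // Connects every host ID to the pool using at most `workers` threads,
    // analogous to sizing a pool from storage.pool.host.connect.workers.
    static List<Long> connectHostsToPool(long poolId, List<Long> hostIds, int workers, HostConnector connector)
            throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(Math.max(1, Math.min(workers, hostIds.size())));
        List<Future<Long>> futures = new ArrayList<>();
        for (Long hostId : hostIds) {
            futures.add(executor.submit(() -> {
                connector.connect(hostId, poolId); // may throw; handled per host below
                return hostId;
            }));
        }
        List<Long> connected = new ArrayList<>();
        for (Future<Long> f : futures) {
            try {
                connected.add(f.get()); // collect hosts that connected successfully
            } catch (Exception e) {
                // simplified: a failing host is skipped here; the real code decides how to surface errors
            }
        }
        executor.shutdown();
        return connected;
    }
}
```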

View File

@ -19,23 +19,14 @@
package org.apache.cloudstack.storage.datastore.lifecycle; package org.apache.cloudstack.storage.datastore.lifecycle;
import com.cloud.agent.AgentManager; import static org.mockito.ArgumentMatchers.anyLong;
import com.cloud.agent.api.ModifyStoragePoolAnswer; import static org.mockito.ArgumentMatchers.anyString;
import com.cloud.agent.api.ModifyStoragePoolCommand; import static org.mockito.ArgumentMatchers.eq;
import com.cloud.agent.api.StoragePoolInfo; import static org.mockito.Mockito.mock;
import com.cloud.exception.StorageConflictException; import static org.mockito.Mockito.when;
import com.cloud.host.Host;
import com.cloud.host.HostVO; import java.util.List;
import com.cloud.host.Status;
import com.cloud.resource.ResourceManager;
import com.cloud.resource.ResourceState;
import com.cloud.storage.DataStoreRole;
import com.cloud.storage.Storage;
import com.cloud.storage.StorageManager;
import com.cloud.storage.StorageManagerImpl;
import com.cloud.storage.dao.StoragePoolHostDao;
import com.cloud.utils.exception.CloudRuntimeException;
import junit.framework.TestCase;
import org.apache.cloudstack.engine.subsystem.api.storage.ClusterScope; import org.apache.cloudstack.engine.subsystem.api.storage.ClusterScope;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore; import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager; import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
@ -58,14 +49,23 @@ import org.mockito.Mockito;
import org.mockito.MockitoAnnotations; import org.mockito.MockitoAnnotations;
import org.mockito.junit.MockitoJUnitRunner; import org.mockito.junit.MockitoJUnitRunner;
import org.springframework.test.util.ReflectionTestUtils; import org.springframework.test.util.ReflectionTestUtils;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import static org.mockito.ArgumentMatchers.anyLong; import com.cloud.agent.AgentManager;
import static org.mockito.ArgumentMatchers.anyString; import com.cloud.agent.api.ModifyStoragePoolAnswer;
import static org.mockito.ArgumentMatchers.eq; import com.cloud.agent.api.ModifyStoragePoolCommand;
import static org.mockito.Mockito.when; import com.cloud.agent.api.StoragePoolInfo;
import com.cloud.exception.StorageConflictException;
import com.cloud.host.HostVO;
import com.cloud.host.dao.HostDao;
import com.cloud.resource.ResourceManager;
import com.cloud.storage.DataStoreRole;
import com.cloud.storage.Storage;
import com.cloud.storage.StorageManager;
import com.cloud.storage.StorageManagerImpl;
import com.cloud.storage.dao.StoragePoolHostDao;
import com.cloud.utils.exception.CloudRuntimeException;
import junit.framework.TestCase;
/** /**
* Created by ajna123 on 9/22/2015. * Created by ajna123 on 9/22/2015.
@@ -118,6 +118,9 @@ public class CloudStackPrimaryDataStoreLifeCycleImplTest extends TestCase {
    @Mock
    PrimaryDataStoreHelper primaryDataStoreHelper;
+   @Mock
+   HostDao hostDao;
    AutoCloseable closeable;
    @Before
@@ -129,17 +132,6 @@ public class CloudStackPrimaryDataStoreLifeCycleImplTest extends TestCase {
        ReflectionTestUtils.setField(storageMgr, "_dataStoreMgr", _dataStoreMgr);
        ReflectionTestUtils.setField(_cloudStackPrimaryDataStoreLifeCycle, "storageMgr", storageMgr);
-       List<HostVO> hostList = new ArrayList<HostVO>();
-       HostVO host1 = new HostVO(1L, "aa01", Host.Type.Routing, "192.168.1.1", "255.255.255.0", null, null, null, null, null, null, null, null, null, null,
-               UUID.randomUUID().toString(), Status.Up, "1.0", null, null, 1L, null, 0, 0, "aa", 0, Storage.StoragePoolType.NetworkFilesystem);
-       HostVO host2 = new HostVO(1L, "aa02", Host.Type.Routing, "192.168.1.1", "255.255.255.0", null, null, null, null, null, null, null, null, null, null,
-               UUID.randomUUID().toString(), Status.Up, "1.0", null, null, 1L, null, 0, 0, "aa", 0, Storage.StoragePoolType.NetworkFilesystem);
-       host1.setResourceState(ResourceState.Enabled);
-       host2.setResourceState(ResourceState.Disabled);
-       hostList.add(host1);
-       hostList.add(host2);
        when(_dataStoreMgr.getDataStore(anyLong(), eq(DataStoreRole.Primary))).thenReturn(store);
        when(store.getPoolType()).thenReturn(Storage.StoragePoolType.NetworkFilesystem);
        when(store.isShared()).thenReturn(true);
@@ -152,7 +144,9 @@ public class CloudStackPrimaryDataStoreLifeCycleImplTest extends TestCase {
        storageMgr.registerHostListener("default", hostListener);
-       when(_resourceMgr.listAllUpHosts(eq(Host.Type.Routing), anyLong(), anyLong(), anyLong())).thenReturn(hostList);
+       when(hostDao.listIdsForUpRouting(anyLong(), anyLong(), anyLong()))
+               .thenReturn(List.of(1L, 2L));
+       when(hostDao.findById(anyLong())).thenReturn(mock(HostVO.class));
        when(agentMgr.easySend(anyLong(), Mockito.any(ModifyStoragePoolCommand.class))).thenReturn(answer);
        when(answer.getResult()).thenReturn(true);
@@ -171,18 +165,17 @@ public class CloudStackPrimaryDataStoreLifeCycleImplTest extends TestCase {
    }
    @Test
-   public void testAttachClusterException() throws Exception {
-       String exceptionString = "Mount failed due to incorrect mount options.";
+   public void testAttachClusterException() {
        String mountFailureReason = "Incorrect mount option specified.";
-       CloudRuntimeException exception = new CloudRuntimeException(exceptionString);
+       ClusterScope scope = new ClusterScope(1L, 1L, 1L);
+       CloudRuntimeException exception = new CloudRuntimeException(mountFailureReason);
        StorageManager storageManager = Mockito.mock(StorageManager.class);
-       Mockito.when(storageManager.connectHostToSharedPool(Mockito.any(), Mockito.anyLong())).thenThrow(exception);
-       Mockito.when(storageManager.getStoragePoolMountFailureReason(exceptionString)).thenReturn(mountFailureReason);
+       Mockito.doThrow(exception).when(storageManager).connectHostsToPool(Mockito.eq(store), Mockito.anyList(), Mockito.eq(scope), Mockito.eq(true), Mockito.eq(true));
        ReflectionTestUtils.setField(_cloudStackPrimaryDataStoreLifeCycle, "storageMgr", storageManager);
        try {
-           _cloudStackPrimaryDataStoreLifeCycle.attachCluster(store, new ClusterScope(1L, 1L, 1L));
+           _cloudStackPrimaryDataStoreLifeCycle.attachCluster(store, scope);
            Assert.fail();
        } catch (Exception e) {
            Assert.assertEquals(e.getMessage(), mountFailureReason);
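One detail worth noting in the updated test: the stub for `connectHostsToPool` switches from `when(...).thenThrow(...)` to the `doThrow(...).when(...)` form, which works regardless of the method's return type and is the form Mockito requires for void methods. A generic illustration; the `Service` interface below is a placeholder, not CloudStack code:

```java
import static org.mockito.ArgumentMatchers.anyInt;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class StubbingStyles {

    interface Service {
        String fetch();     // value-returning method
        void doWork(int n); // void method
    }

    void example() {
        Service service = mock(Service.class);
        // Fine for non-void methods: the call can appear inside when(...).
        when(service.fetch()).thenThrow(new IllegalStateException("boom"));
        // Required for void methods: configure the throw first, then name the call.
        doThrow(new IllegalStateException("boom")).when(service).doWork(anyInt());
    }
}
```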

View File

@ -24,17 +24,12 @@ import java.net.URISyntaxException;
import java.net.URLDecoder; import java.net.URLDecoder;
import java.security.KeyManagementException; import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException; import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List; import java.util.List;
import java.util.Map; import java.util.Map;
import java.util.UUID; import java.util.UUID;
import javax.inject.Inject; import javax.inject.Inject;
import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClientConnectionPool;
import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
import org.apache.commons.collections.CollectionUtils;
import org.apache.cloudstack.engine.subsystem.api.storage.ClusterScope; import org.apache.cloudstack.engine.subsystem.api.storage.ClusterScope;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore; import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.HostScope; import org.apache.cloudstack.engine.subsystem.api.storage.HostScope;
@ -44,9 +39,13 @@ import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreParame
import org.apache.cloudstack.engine.subsystem.api.storage.ZoneScope; import org.apache.cloudstack.engine.subsystem.api.storage.ZoneScope;
import org.apache.cloudstack.storage.datastore.api.StoragePoolStatistics; import org.apache.cloudstack.storage.datastore.api.StoragePoolStatistics;
import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClient; import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClient;
import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClientConnectionPool;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao; import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO; import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
import org.apache.cloudstack.storage.volume.datastore.PrimaryDataStoreHelper; import org.apache.cloudstack.storage.volume.datastore.PrimaryDataStoreHelper;
import org.apache.commons.collections.CollectionUtils;
import com.cloud.agent.AgentManager; import com.cloud.agent.AgentManager;
import com.cloud.agent.api.Answer; import com.cloud.agent.api.Answer;
@ -55,9 +54,9 @@ import com.cloud.agent.api.StoragePoolInfo;
import com.cloud.capacity.CapacityManager; import com.cloud.capacity.CapacityManager;
import com.cloud.dc.ClusterVO; import com.cloud.dc.ClusterVO;
import com.cloud.dc.dao.ClusterDao; import com.cloud.dc.dao.ClusterDao;
import com.cloud.dc.dao.DataCenterDao;
import com.cloud.exception.InvalidParameterValueException; import com.cloud.exception.InvalidParameterValueException;
import com.cloud.host.Host; import com.cloud.host.dao.HostDao;
import com.cloud.host.HostVO;
import com.cloud.hypervisor.Hypervisor; import com.cloud.hypervisor.Hypervisor;
import com.cloud.resource.ResourceManager; import com.cloud.resource.ResourceManager;
import com.cloud.storage.Storage; import com.cloud.storage.Storage;
@ -74,9 +73,13 @@ import com.cloud.utils.crypt.DBEncryptionUtil;
import com.cloud.utils.exception.CloudRuntimeException; import com.cloud.utils.exception.CloudRuntimeException;
public class ScaleIOPrimaryDataStoreLifeCycle extends BasePrimaryDataStoreLifeCycleImpl implements PrimaryDataStoreLifeCycle { public class ScaleIOPrimaryDataStoreLifeCycle extends BasePrimaryDataStoreLifeCycleImpl implements PrimaryDataStoreLifeCycle {
@Inject
DataCenterDao dataCenterDao;
@Inject @Inject
private ClusterDao clusterDao; private ClusterDao clusterDao;
@Inject @Inject
private HostDao hostDao;
@Inject
private PrimaryDataStoreDao primaryDataStoreDao; private PrimaryDataStoreDao primaryDataStoreDao;
@Inject @Inject
private StoragePoolDetailsDao storagePoolDetailsDao; private StoragePoolDetailsDao storagePoolDetailsDao;
@@ -258,28 +261,15 @@ public class ScaleIOPrimaryDataStoreLifeCycle extends BasePrimaryDataStoreLifeCy
        }
        PrimaryDataStoreInfo primaryDataStoreInfo = (PrimaryDataStoreInfo) dataStore;
-       List<HostVO> hostsInCluster = resourceManager.listAllUpAndEnabledHosts(Host.Type.Routing, primaryDataStoreInfo.getClusterId(),
-               primaryDataStoreInfo.getPodId(), primaryDataStoreInfo.getDataCenterId());
-       if (hostsInCluster.isEmpty()) {
+       List<Long> hostIds = hostDao.listIdsForUpRouting(primaryDataStoreInfo.getDataCenterId(),
+               primaryDataStoreInfo.getPodId(), primaryDataStoreInfo.getClusterId());
+       if (hostIds.isEmpty()) {
            primaryDataStoreDao.expunge(primaryDataStoreInfo.getId());
            throw new CloudRuntimeException("No hosts are Up to associate a storage pool with in cluster: " + cluster);
        }
-       logger.debug("Attaching the pool to each of the hosts in the cluster: {}", cluster);
-       List<HostVO> poolHosts = new ArrayList<HostVO>();
-       for (HostVO host : hostsInCluster) {
-           try {
-               if (storageMgr.connectHostToSharedPool(host, primaryDataStoreInfo.getId())) {
-                   poolHosts.add(host);
-               }
-           } catch (Exception e) {
-               logger.warn(String.format("Unable to establish a connection between host: %s and pool: %s on the cluster: %s", host, dataStore, cluster), e);
-           }
-       }
-       if (poolHosts.isEmpty()) {
-           logger.warn("No host can access storage pool '{}' on cluster '{}'.", primaryDataStoreInfo, cluster);
-       }
+       logger.debug("Attaching the pool to each of the hosts in the {}", cluster);
+       storageMgr.connectHostsToPool(dataStore, hostIds, scope, false, false);
        dataStoreHelper.attachCluster(dataStore);
        return true;
@@ -296,21 +286,10 @@ public class ScaleIOPrimaryDataStoreLifeCycle extends BasePrimaryDataStoreLifeCy
            throw new CloudRuntimeException("Unsupported hypervisor type: " + hypervisorType.toString());
        }
-       logger.debug("Attaching the pool to each of the hosts in the zone: " + scope.getScopeId());
-       List<HostVO> hosts = resourceManager.listAllUpAndEnabledHostsInOneZoneByHypervisor(hypervisorType, scope.getScopeId());
-       List<HostVO> poolHosts = new ArrayList<HostVO>();
-       for (HostVO host : hosts) {
-           try {
-               if (storageMgr.connectHostToSharedPool(host, dataStore.getId())) {
-                   poolHosts.add(host);
-               }
-           } catch (Exception e) {
-               logger.warn("Unable to establish a connection between host: " + host + " and pool: " + dataStore + "in the zone: " + scope.getScopeId(), e);
-           }
-       }
-       if (poolHosts.isEmpty()) {
-           logger.warn("No host can access storage pool " + dataStore + " in the zone: " + scope.getScopeId());
-       }
+       logger.debug("Attaching the pool to each of the hosts in the {}",
+               dataCenterDao.findById(scope.getScopeId()));
+       List<Long> hostIds = hostDao.listIdsForUpEnabledByZoneAndHypervisor(scope.getScopeId(), hypervisorType);
+       storageMgr.connectHostsToPool(dataStore, hostIds, scope, false, false);
        dataStoreHelper.attachZone(dataStore);
        return true;

View File

@ -30,7 +30,6 @@ import static org.mockito.Mockito.when;
import java.util.ArrayList; import java.util.ArrayList;
import java.util.List; import java.util.List;
import java.util.UUID;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore; import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager; import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
@ -56,15 +55,13 @@ import org.mockito.MockedStatic;
import org.mockito.Mockito; import org.mockito.Mockito;
import org.mockito.MockitoAnnotations; import org.mockito.MockitoAnnotations;
import org.mockito.junit.MockitoJUnitRunner; import org.mockito.junit.MockitoJUnitRunner;
import org.springframework.test.util.ReflectionTestUtils;
import com.cloud.host.Host; import com.cloud.dc.DataCenterVO;
import com.cloud.host.HostVO; import com.cloud.dc.dao.DataCenterDao;
import com.cloud.host.Status; import com.cloud.host.dao.HostDao;
import com.cloud.hypervisor.Hypervisor; import com.cloud.hypervisor.Hypervisor;
import com.cloud.resource.ResourceManager;
import com.cloud.resource.ResourceState;
import com.cloud.storage.DataStoreRole; import com.cloud.storage.DataStoreRole;
import com.cloud.storage.Storage;
import com.cloud.storage.StorageManager; import com.cloud.storage.StorageManager;
import com.cloud.storage.StorageManagerImpl; import com.cloud.storage.StorageManagerImpl;
import com.cloud.storage.StoragePoolAutomation; import com.cloud.storage.StoragePoolAutomation;
@ -73,7 +70,6 @@ import com.cloud.storage.VMTemplateStoragePoolVO;
import com.cloud.storage.dao.StoragePoolHostDao; import com.cloud.storage.dao.StoragePoolHostDao;
import com.cloud.template.TemplateManager; import com.cloud.template.TemplateManager;
import com.cloud.utils.exception.CloudRuntimeException; import com.cloud.utils.exception.CloudRuntimeException;
import org.springframework.test.util.ReflectionTestUtils;
@RunWith(MockitoJUnitRunner.class) @RunWith(MockitoJUnitRunner.class)
public class ScaleIOPrimaryDataStoreLifeCycleTest { public class ScaleIOPrimaryDataStoreLifeCycleTest {
@ -85,8 +81,6 @@ public class ScaleIOPrimaryDataStoreLifeCycleTest {
@Mock @Mock
private PrimaryDataStoreHelper dataStoreHelper; private PrimaryDataStoreHelper dataStoreHelper;
@Mock @Mock
private ResourceManager resourceManager;
@Mock
private StoragePoolAutomation storagePoolAutomation; private StoragePoolAutomation storagePoolAutomation;
@Mock @Mock
private StoragePoolHostDao storagePoolHostDao; private StoragePoolHostDao storagePoolHostDao;
@ -100,6 +94,10 @@ public class ScaleIOPrimaryDataStoreLifeCycleTest {
private PrimaryDataStore store; private PrimaryDataStore store;
@Mock @Mock
private TemplateManager templateMgr; private TemplateManager templateMgr;
@Mock
HostDao hostDao;
@Mock
DataCenterDao dataCenterDao;
@InjectMocks @InjectMocks
private StorageManager storageMgr = new StorageManagerImpl(); private StorageManager storageMgr = new StorageManagerImpl();
@ -115,6 +113,7 @@ public class ScaleIOPrimaryDataStoreLifeCycleTest {
public void setUp() { public void setUp() {
closeable = MockitoAnnotations.openMocks(this); closeable = MockitoAnnotations.openMocks(this);
ReflectionTestUtils.setField(scaleIOPrimaryDataStoreLifeCycleTest, "storageMgr", storageMgr); ReflectionTestUtils.setField(scaleIOPrimaryDataStoreLifeCycleTest, "storageMgr", storageMgr);
when(dataCenterDao.findById(anyLong())).thenReturn(mock(DataCenterVO.class));
} }
@After @After
@@ -137,17 +136,8 @@ public class ScaleIOPrimaryDataStoreLifeCycleTest {
        final ZoneScope scope = new ZoneScope(1L);
-       List<HostVO> hostList = new ArrayList<HostVO>();
-       HostVO host1 = new HostVO(1L, "host01", Host.Type.Routing, "192.168.1.1", "255.255.255.0", null, null, null, null, null, null, null, null, null, null,
-               UUID.randomUUID().toString(), Status.Up, "1.0", null, null, 1L, null, 0, 0, "aa", 0, Storage.StoragePoolType.PowerFlex);
-       HostVO host2 = new HostVO(2L, "host02", Host.Type.Routing, "192.168.1.2", "255.255.255.0", null, null, null, null, null, null, null, null, null, null,
-               UUID.randomUUID().toString(), Status.Up, "1.0", null, null, 1L, null, 0, 0, "aa", 0, Storage.StoragePoolType.PowerFlex);
-       host1.setResourceState(ResourceState.Enabled);
-       host2.setResourceState(ResourceState.Enabled);
-       hostList.add(host1);
-       hostList.add(host2);
-       when(resourceManager.listAllUpAndEnabledHostsInOneZoneByHypervisor(Hypervisor.HypervisorType.KVM, 1L)).thenReturn(hostList);
+       when(hostDao.listIdsForUpEnabledByZoneAndHypervisor(scope.getScopeId(), Hypervisor.HypervisorType.KVM))
+               .thenReturn(List.of(1L, 2L));
        when(dataStoreMgr.getDataStore(anyLong(), eq(DataStoreRole.Primary))).thenReturn(store);
        when(store.isShared()).thenReturn(true);

View File

@@ -219,17 +219,17 @@ public class StorPoolHelper {
    }
    public static Long findClusterIdByGlobalId(String globalId, ClusterDao clusterDao) {
-       List<ClusterVO> clusterVo = clusterDao.listAll();
-       if (clusterVo.size() == 1) {
+       List<Long> clusterIds = clusterDao.listAllIds();
+       if (clusterIds.size() == 1) {
            StorPoolUtil.spLog("There is only one cluster, sending backup to secondary command");
            return null;
        }
-       for (ClusterVO clusterVO2 : clusterVo) {
-           if (globalId != null && StorPoolConfigurationManager.StorPoolClusterId.valueIn(clusterVO2.getId()) != null
-                   && globalId.contains(StorPoolConfigurationManager.StorPoolClusterId.valueIn(clusterVO2.getId()).toString())) {
-               StorPoolUtil.spLog("Found cluster with id=%s for object with globalId=%s", clusterVO2.getId(),
+       for (Long clusterId : clusterIds) {
+           if (globalId != null && StorPoolConfigurationManager.StorPoolClusterId.valueIn(clusterId) != null
+                   && globalId.contains(StorPoolConfigurationManager.StorPoolClusterId.valueIn(clusterId))) {
+               StorPoolUtil.spLog("Found cluster with id=%s for object with globalId=%s", clusterId,
                        globalId);
-               return clusterVO2.getId();
+               return clusterId;
            }
        }
        throw new CloudRuntimeException(

View File

@ -26,8 +26,11 @@ import java.util.List;
import java.util.Map; import java.util.Map;
import java.util.Set; import java.util.Set;
import java.util.Timer; import java.util.Timer;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService; import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors; import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.inject.Inject; import javax.inject.Inject;
import javax.mail.MessagingException; import javax.mail.MessagingException;
@ -75,12 +78,11 @@ import com.cloud.event.AlertGenerator;
import com.cloud.event.EventTypes; import com.cloud.event.EventTypes;
import com.cloud.host.Host; import com.cloud.host.Host;
import com.cloud.host.HostVO; import com.cloud.host.HostVO;
import com.cloud.host.dao.HostDao;
import com.cloud.network.Ipv6Service; import com.cloud.network.Ipv6Service;
import com.cloud.network.dao.IPAddressDao; import com.cloud.network.dao.IPAddressDao;
import com.cloud.org.Grouping.AllocationState; import com.cloud.org.Grouping.AllocationState;
import com.cloud.resource.ResourceManager; import com.cloud.resource.ResourceManager;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.service.dao.ServiceOfferingDao;
import com.cloud.storage.StorageManager; import com.cloud.storage.StorageManager;
import com.cloud.utils.Pair; import com.cloud.utils.Pair;
import com.cloud.utils.component.ManagerBase; import com.cloud.utils.component.ManagerBase;
@ -124,9 +126,9 @@ public class AlertManagerImpl extends ManagerBase implements AlertManager, Confi
@Inject @Inject
protected ConfigDepot _configDepot; protected ConfigDepot _configDepot;
@Inject @Inject
ServiceOfferingDao _offeringsDao;
@Inject
Ipv6Service ipv6Service; Ipv6Service ipv6Service;
@Inject
HostDao hostDao;
private Timer _timer = null; private Timer _timer = null;
private long _capacityCheckPeriod = 60L * 60L * 1000L; // One hour by default. private long _capacityCheckPeriod = 60L * 60L * 1000L; // One hour by default.
@ -260,6 +262,66 @@ public class AlertManagerImpl extends ManagerBase implements AlertManager, Confi
} }
} }
/**
* Recalculates the capacities of hosts, including CPU and RAM.
*/
protected void recalculateHostCapacities() {
List<Long> hostIds = hostDao.listIdsByType(Host.Type.Routing);
if (hostIds.isEmpty()) {
return;
}
ConcurrentHashMap<Long, Future<Void>> futures = new ConcurrentHashMap<>();
ExecutorService executorService = Executors.newFixedThreadPool(Math.max(1,
Math.min(CapacityManager.CapacityCalculateWorkers.value(), hostIds.size())));
for (Long hostId : hostIds) {
futures.put(hostId, executorService.submit(() -> {
final HostVO host = hostDao.findById(hostId);
_capacityMgr.updateCapacityForHost(host);
return null;
}));
}
for (Map.Entry<Long, Future<Void>> entry: futures.entrySet()) {
try {
entry.getValue().get();
} catch (InterruptedException | ExecutionException e) {
logger.error(String.format("Error during capacity calculation for host: %d due to : %s",
entry.getKey(), e.getMessage()), e);
}
}
executorService.shutdown();
}
protected void recalculateStorageCapacities() {
List<Long> storagePoolIds = _storagePoolDao.listAllIds();
if (storagePoolIds.isEmpty()) {
return;
}
ConcurrentHashMap<Long, Future<Void>> futures = new ConcurrentHashMap<>();
ExecutorService executorService = Executors.newFixedThreadPool(Math.max(1,
Math.min(CapacityManager.CapacityCalculateWorkers.value(), storagePoolIds.size())));
for (Long poolId: storagePoolIds) {
futures.put(poolId, executorService.submit(() -> {
final StoragePoolVO pool = _storagePoolDao.findById(poolId);
long disk = _capacityMgr.getAllocatedPoolCapacity(pool, null);
if (pool.isShared()) {
_storageMgr.createCapacityEntry(pool, Capacity.CAPACITY_TYPE_STORAGE_ALLOCATED, disk);
} else {
_storageMgr.createCapacityEntry(pool, Capacity.CAPACITY_TYPE_LOCAL_STORAGE, disk);
}
return null;
}));
}
for (Map.Entry<Long, Future<Void>> entry: futures.entrySet()) {
try {
entry.getValue().get();
} catch (InterruptedException | ExecutionException e) {
logger.error(String.format("Error during capacity calculation for storage pool: %d due to : %s",
entry.getKey(), e.getMessage()), e);
}
}
executorService.shutdown();
}
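Both new helpers follow the same pattern: size a fixed thread pool from `capacity.calculate.workers` (capped by the number of items), submit one task per ID, then block on every future before shutting the pool down. A condensed, generic sketch of that pattern, with illustrative names rather than the actual AlertManagerImpl members:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

final class BoundedFanOut {

    // Runs `task` for every ID with at most `configuredWorkers` threads and
    // returns only after every submitted task has completed or failed.
    static void forEachId(List<Long> ids, int configuredWorkers, Consumer<Long> task) {
        if (ids.isEmpty()) {
            return;
        }
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, Math.min(configuredWorkers, ids.size())));
        Map<Long, Future<?>> futures = new ConcurrentHashMap<>();
        for (Long id : ids) {
            futures.put(id, pool.submit(() -> task.accept(id)));
        }
        for (Map.Entry<Long, Future<?>> entry : futures.entrySet()) {
            try {
                entry.getValue().get(); // wait for completion; log and continue on failure
            } catch (InterruptedException | ExecutionException e) {
                System.err.printf("capacity task failed for id %d: %s%n", entry.getKey(), e.getMessage());
            }
        }
        pool.shutdown();
    }
}
```

Sizing the pool by `min(workers, items)` keeps small zones from spawning idle threads while large deployments stay bounded by the configured value.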
@Override @Override
public void recalculateCapacity() { public void recalculateCapacity() {
// FIXME: the right way to do this is to register a listener (see RouterStatsListener, VMSyncListener) // FIXME: the right way to do this is to register a listener (see RouterStatsListener, VMSyncListener)
@@ -275,36 +337,14 @@ public class AlertManagerImpl extends ManagerBase implements AlertManager, Confi
            logger.debug("recalculating system capacity");
            logger.debug("Executing cpu/ram capacity update");
        }
        // Calculate CPU and RAM capacities
-       // get all hosts...even if they are not in 'UP' state
-       List<HostVO> hosts = _resourceMgr.listAllNotInMaintenanceHostsInOneZone(Host.Type.Routing, null);
-       if (hosts != null) {
-           // prepare the service offerings
-           List<ServiceOfferingVO> offerings = _offeringsDao.listAllIncludingRemoved();
-           Map<Long, ServiceOfferingVO> offeringsMap = new HashMap<Long, ServiceOfferingVO>();
-           for (ServiceOfferingVO offering : offerings) {
-               offeringsMap.put(offering.getId(), offering);
-           }
-           for (HostVO host : hosts) {
-               _capacityMgr.updateCapacityForHost(host, offeringsMap);
-           }
-       }
+       recalculateHostCapacities();
        if (logger.isDebugEnabled()) {
            logger.debug("Done executing cpu/ram capacity update");
            logger.debug("Executing storage capacity update");
        }
        // Calculate storage pool capacity
-       List<StoragePoolVO> storagePools = _storagePoolDao.listAll();
-       for (StoragePoolVO pool : storagePools) {
-           long disk = _capacityMgr.getAllocatedPoolCapacity(pool, null);
-           if (pool.isShared()) {
-               _storageMgr.createCapacityEntry(pool, Capacity.CAPACITY_TYPE_STORAGE_ALLOCATED, disk);
-           } else {
-               _storageMgr.createCapacityEntry(pool, Capacity.CAPACITY_TYPE_LOCAL_STORAGE, disk);
-           }
-       }
+       recalculateStorageCapacities();
        if (logger.isDebugEnabled()) {
            logger.debug("Done executing storage capacity update");
            logger.debug("Executing capacity updates for public ip and Vlans");

View File

@@ -2355,7 +2355,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
        // ids
        hostSearchBuilder.and("id", hostSearchBuilder.entity().getId(), SearchCriteria.Op.EQ);
        hostSearchBuilder.and("name", hostSearchBuilder.entity().getName(), SearchCriteria.Op.EQ);
-       hostSearchBuilder.and("type", hostSearchBuilder.entity().getType(), SearchCriteria.Op.LIKE);
+       hostSearchBuilder.and("type", hostSearchBuilder.entity().getType(), SearchCriteria.Op.EQ);
        hostSearchBuilder.and("status", hostSearchBuilder.entity().getStatus(), SearchCriteria.Op.EQ);
        hostSearchBuilder.and("dataCenterId", hostSearchBuilder.entity().getDataCenterId(), SearchCriteria.Op.EQ);
        hostSearchBuilder.and("podId", hostSearchBuilder.entity().getPodId(), SearchCriteria.Op.EQ);
@@ -2407,7 +2407,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
            sc.setParameters("name", name);
        }
        if (type != null) {
-           sc.setParameters("type", "%" + type);
+           sc.setParameters("type", type);
        }
        if (state != null) {
            sc.setParameters("status", state);
@@ -4557,7 +4557,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
        // check if zone is configured, if not, just return empty list
        List<HypervisorType> hypers = null;
        if (!isIso) {
-           hypers = _resourceMgr.listAvailHypervisorInZone(null, null);
+           hypers = _resourceMgr.listAvailHypervisorInZone(null);
            if (hypers == null || hypers.isEmpty()) {
                return new Pair<List<TemplateJoinVO>, Integer>(new ArrayList<TemplateJoinVO>(), 0);
            }

View File

@@ -44,6 +44,6 @@ public interface UserVmJoinDao extends GenericDao<UserVmJoinVO, Long> {
    List<UserVmJoinVO> listActiveByIsoId(Long isoId);
-   List<UserVmJoinVO> listByAccountServiceOfferingTemplateAndNotInState(long accountId, List<VirtualMachine.State> states,
-           List<Long> offeringIds, List<Long> templateIds);
+   List<UserVmJoinVO> listByAccountServiceOfferingTemplateAndNotInState(long accountId,
+           List<VirtualMachine.State> states, List<Long> offeringIds, List<Long> templateIds);
}

View File

@@ -693,6 +693,8 @@ public class UserVmJoinDaoImpl extends GenericDaoBaseWithTagInformation<UserVmJo
    public List<UserVmJoinVO> listByAccountServiceOfferingTemplateAndNotInState(long accountId, List<State> states,
            List<Long> offeringIds, List<Long> templateIds) {
        SearchBuilder<UserVmJoinVO> userVmSearch = createSearchBuilder();
+       userVmSearch.selectFields(userVmSearch.entity().getId(), userVmSearch.entity().getCpu(),
+               userVmSearch.entity().getRamSize());
        userVmSearch.and("accountId", userVmSearch.entity().getAccountId(), Op.EQ);
        userVmSearch.and("serviceOfferingId", userVmSearch.entity().getServiceOfferingId(), Op.IN);
        userVmSearch.and("templateId", userVmSearch.entity().getTemplateId(), Op.IN);
@@ -713,6 +715,6 @@ public class UserVmJoinDaoImpl extends GenericDaoBaseWithTagInformation<UserVmJo
            sc.setParameters("state", states.toArray());
        }
        sc.setParameters("displayVm", 1);
-       return listBy(sc);
+       return customSearch(sc, null);
    }
}
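The DAO change pairs `selectFields(...)` with `customSearch(...)` so only the id, cpu and ramSize columns are read for the resource-limit calculation, instead of hydrating full `UserVmJoinVO` rows via `listBy(...)`. A hedged sketch of the same pattern as it might look for another query inside this DAO; the method name and filter are hypothetical and only reuse calls already visible in the diff (`createSearchBuilder`, `selectFields`, `customSearch`):

```java
// Assumed to live inside UserVmJoinDaoImpl, which inherits createSearchBuilder()/customSearch()
// from the generic DAO base class.
public List<UserVmJoinVO> listIdCpuRamByAccount(long accountId) {
    SearchBuilder<UserVmJoinVO> sb = createSearchBuilder();
    // Partial select: only these columns are populated on the returned objects.
    sb.selectFields(sb.entity().getId(), sb.entity().getCpu(), sb.entity().getRamSize());
    sb.and("accountId", sb.entity().getAccountId(), Op.EQ);
    sb.done();
    SearchCriteria<UserVmJoinVO> sc = sb.create();
    sc.setParameters("accountId", accountId);
    // customSearch honours the partial select, unlike listBy which loads full rows.
    return customSearch(sc, null);
}
```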

View File

@ -23,6 +23,7 @@ import java.util.HashMap;
import java.util.List; import java.util.List;
import java.util.Map; import java.util.Map;
import java.util.Optional; import java.util.Optional;
import java.util.stream.Collectors;
import javax.inject.Inject; import javax.inject.Inject;
import javax.naming.ConfigurationException; import javax.naming.ConfigurationException;
@ -37,6 +38,10 @@ import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
import org.apache.cloudstack.framework.messagebus.MessageBus; import org.apache.cloudstack.framework.messagebus.MessageBus;
import org.apache.cloudstack.framework.messagebus.PublishScope; import org.apache.cloudstack.framework.messagebus.PublishScope;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO; import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.cloudstack.utils.cache.LazyCache;
import org.apache.cloudstack.utils.cache.SingleCache;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.ObjectUtils;
import com.cloud.agent.AgentManager; import com.cloud.agent.AgentManager;
import com.cloud.agent.Listener; import com.cloud.agent.Listener;
@ -50,7 +55,6 @@ import com.cloud.capacity.dao.CapacityDao;
import com.cloud.configuration.Config; import com.cloud.configuration.Config;
import com.cloud.dc.ClusterDetailsDao; import com.cloud.dc.ClusterDetailsDao;
import com.cloud.dc.ClusterDetailsVO; import com.cloud.dc.ClusterDetailsVO;
import com.cloud.dc.ClusterVO;
import com.cloud.dc.dao.ClusterDao; import com.cloud.dc.dao.ClusterDao;
import com.cloud.deploy.DeploymentClusterPlanner; import com.cloud.deploy.DeploymentClusterPlanner;
import com.cloud.event.UsageEventVO; import com.cloud.event.UsageEventVO;
@ -62,7 +66,6 @@ import com.cloud.host.dao.HostDao;
import com.cloud.hypervisor.Hypervisor.HypervisorType; import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.hypervisor.dao.HypervisorCapabilitiesDao; import com.cloud.hypervisor.dao.HypervisorCapabilitiesDao;
import com.cloud.offering.ServiceOffering; import com.cloud.offering.ServiceOffering;
import com.cloud.org.Cluster;
import com.cloud.resource.ResourceListener; import com.cloud.resource.ResourceListener;
import com.cloud.resource.ResourceManager; import com.cloud.resource.ResourceManager;
import com.cloud.resource.ResourceState; import com.cloud.resource.ResourceState;
@ -141,6 +144,9 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
@Inject @Inject
MessageBus _messageBus; MessageBus _messageBus;
private LazyCache<Long, Pair<String, String>> clusterValuesCache;
private SingleCache<Map<Long, ServiceOfferingVO>> serviceOfferingsCache;
@Override @Override
public boolean configure(String name, Map<String, Object> params) throws ConfigurationException { public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
_vmCapacityReleaseInterval = NumbersUtil.parseInt(_configDao.getValue(Config.CapacitySkipcountingHours.key()), 3600); _vmCapacityReleaseInterval = NumbersUtil.parseInt(_configDao.getValue(Config.CapacitySkipcountingHours.key()), 3600);
@ -156,6 +162,8 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
public boolean start() { public boolean start() {
_resourceMgr.registerResourceEvent(ResourceListener.EVENT_PREPARE_MAINTENANCE_AFTER, this); _resourceMgr.registerResourceEvent(ResourceListener.EVENT_PREPARE_MAINTENANCE_AFTER, this);
_resourceMgr.registerResourceEvent(ResourceListener.EVENT_CANCEL_MAINTENANCE_AFTER, this); _resourceMgr.registerResourceEvent(ResourceListener.EVENT_CANCEL_MAINTENANCE_AFTER, this);
clusterValuesCache = new LazyCache<>(128, 60, this::getClusterValues);
serviceOfferingsCache = new SingleCache<>(60, this::getServiceOfferingsMap);
return true; return true;
} }
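`CapacityManagerImpl` wires up two small caches in `start()`: a keyed `LazyCache` for per-cluster overcommit details and a `SingleCache` for the service-offering map. Their implementations are not shown in this excerpt; the sketch below is only an assumption of how such wrappers could be built on the Caffeine library named in the PR description, shaped to match the constructor and `get()`/`invalidate()` call sites above, and is not the actual `org.apache.cloudstack.utils.cache` code:

```java
import java.time.Duration;
import java.util.function.Function;
import java.util.function.Supplier;

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

// Keyed cache: each key is recomputed at most once per expiry window.
class LazyCacheSketch<K, V> {
    private final LoadingCache<K, V> cache;

    LazyCacheSketch(long maxSize, long expireSeconds, Function<K, V> loader) {
        this.cache = Caffeine.newBuilder()
                .maximumSize(maxSize)
                .expireAfterWrite(Duration.ofSeconds(expireSeconds))
                .build(loader::apply);
    }

    V get(K key) {
        return cache.get(key);
    }
}

// Single-value cache, e.g. the whole service-offering map.
class SingleCacheSketch<V> {
    private static final Object KEY = new Object();
    private final LoadingCache<Object, V> cache;

    SingleCacheSketch(long expireSeconds, Supplier<V> supplier) {
        this.cache = Caffeine.newBuilder()
                .maximumSize(1)
                .expireAfterWrite(Duration.ofSeconds(expireSeconds))
                .build(key -> supplier.get());
    }

    V get() {
        return cache.get(KEY);
    }

    void invalidate() {
        cache.invalidateAll(); // force a reload on the next get()
    }
}
```

With this shape, the `getServiceOffering(...)` helper added below can fall back to the DAO on a cache miss and call `invalidate()` so the next `get()` rebuilds the cached map.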
@ -209,8 +217,8 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
long reservedMem = capacityMemory.getReservedCapacity(); long reservedMem = capacityMemory.getReservedCapacity();
long reservedCpuCore = capacityCpuCore.getReservedCapacity(); long reservedCpuCore = capacityCpuCore.getReservedCapacity();
long actualTotalCpu = capacityCpu.getTotalCapacity(); long actualTotalCpu = capacityCpu.getTotalCapacity();
float cpuOvercommitRatio = Float.parseFloat(_clusterDetailsDao.findDetail(clusterIdFinal, "cpuOvercommitRatio").getValue()); float cpuOvercommitRatio = Float.parseFloat(_clusterDetailsDao.findDetail(clusterIdFinal, VmDetailConstants.CPU_OVER_COMMIT_RATIO).getValue());
float memoryOvercommitRatio = Float.parseFloat(_clusterDetailsDao.findDetail(clusterIdFinal, "memoryOvercommitRatio").getValue()); float memoryOvercommitRatio = Float.parseFloat(_clusterDetailsDao.findDetail(clusterIdFinal, VmDetailConstants.MEMORY_OVER_COMMIT_RATIO).getValue());
int vmCPU = svo.getCpu() * svo.getSpeed(); int vmCPU = svo.getCpu() * svo.getSpeed();
int vmCPUCore = svo.getCpu(); int vmCPUCore = svo.getCpu();
long vmMem = svo.getRamSize() * 1024L * 1024L; long vmMem = svo.getRamSize() * 1024L * 1024L;
@ -283,8 +291,8 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
final long hostId = vm.getHostId(); final long hostId = vm.getHostId();
final HostVO host = _hostDao.findById(hostId); final HostVO host = _hostDao.findById(hostId);
final long clusterId = host.getClusterId(); final long clusterId = host.getClusterId();
final float cpuOvercommitRatio = Float.parseFloat(_clusterDetailsDao.findDetail(clusterId, "cpuOvercommitRatio").getValue()); final float cpuOvercommitRatio = Float.parseFloat(_clusterDetailsDao.findDetail(clusterId, VmDetailConstants.CPU_OVER_COMMIT_RATIO).getValue());
final float memoryOvercommitRatio = Float.parseFloat(_clusterDetailsDao.findDetail(clusterId, "memoryOvercommitRatio").getValue()); final float memoryOvercommitRatio = Float.parseFloat(_clusterDetailsDao.findDetail(clusterId, VmDetailConstants.MEMORY_OVER_COMMIT_RATIO).getValue());
final ServiceOfferingVO svo = _offeringsDao.findById(vm.getId(), vm.getServiceOfferingId()); final ServiceOfferingVO svo = _offeringsDao.findById(vm.getId(), vm.getServiceOfferingId());
@ -376,13 +384,13 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
toHumanReadableSize(capacityMem.getReservedCapacity()), toHumanReadableSize(ram), fromLastHost); toHumanReadableSize(capacityMem.getReservedCapacity()), toHumanReadableSize(ram), fromLastHost);
long cluster_id = host.getClusterId(); long cluster_id = host.getClusterId();
ClusterDetailsVO cluster_detail_cpu = _clusterDetailsDao.findDetail(cluster_id, "cpuOvercommitRatio"); ClusterDetailsVO cluster_detail_cpu = _clusterDetailsDao.findDetail(cluster_id, VmDetailConstants.CPU_OVER_COMMIT_RATIO);
ClusterDetailsVO cluster_detail_ram = _clusterDetailsDao.findDetail(cluster_id, "memoryOvercommitRatio"); ClusterDetailsVO cluster_detail_ram = _clusterDetailsDao.findDetail(cluster_id, VmDetailConstants.MEMORY_OVER_COMMIT_RATIO);
Float cpuOvercommitRatio = Float.parseFloat(cluster_detail_cpu.getValue()); Float cpuOvercommitRatio = Float.parseFloat(cluster_detail_cpu.getValue());
Float memoryOvercommitRatio = Float.parseFloat(cluster_detail_ram.getValue()); Float memoryOvercommitRatio = Float.parseFloat(cluster_detail_ram.getValue());
boolean hostHasCpuCapability, hostHasCapacity = false; boolean hostHasCpuCapability, hostHasCapacity = false;
hostHasCpuCapability = checkIfHostHasCpuCapability(host.getId(), cpucore, cpuspeed); hostHasCpuCapability = checkIfHostHasCpuCapability(host, cpucore, cpuspeed);
if (hostHasCpuCapability) { if (hostHasCpuCapability) {
// first check from reserved capacity // first check from reserved capacity
@ -412,25 +420,16 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
} }
@Override @Override
public boolean checkIfHostHasCpuCapability(long hostId, Integer cpuNum, Integer cpuSpeed) { public boolean checkIfHostHasCpuCapability(Host host, Integer cpuNum, Integer cpuSpeed) {
// Check host can support the Cpu Number and Speed. // Check host can support the Cpu Number and Speed.
Host host = _hostDao.findById(hostId);
boolean isCpuNumGood = host.getCpus().intValue() >= cpuNum; boolean isCpuNumGood = host.getCpus().intValue() >= cpuNum;
boolean isCpuSpeedGood = host.getSpeed().intValue() >= cpuSpeed; boolean isCpuSpeedGood = host.getSpeed().intValue() >= cpuSpeed;
if (isCpuNumGood && isCpuSpeedGood) { boolean hasCpuCapability = isCpuNumGood && isCpuSpeedGood;
if (logger.isDebugEnabled()) {
logger.debug("Host: {} has cpu capability (cpu:{}, speed:{}) " + logger.debug("{} {} cpu capability (cpu: {}, speed: {} ) to support requested CPU: {} and requested speed: {}",
"to support requested CPU: {} and requested speed: {}", host, host.getCpus(), host.getSpeed(), cpuNum, cpuSpeed); host, hasCpuCapability ? "has" : "doesn't have" ,host.getCpus(), host.getSpeed(), cpuNum, cpuSpeed);
}
return true; return hasCpuCapability;
} else {
if (logger.isDebugEnabled()) {
logger.debug("Host: {} doesn't have cpu capability (cpu:{}, speed:{})" +
" to support requested CPU: {} and requested speed: {}", host, host.getCpus(), host.getSpeed(), cpuNum, cpuSpeed);
}
return false;
}
} }
@Override @Override
@ -628,21 +627,50 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
return totalAllocatedSize; return totalAllocatedSize;
} }
@DB protected Pair<String, String> getClusterValues(long clusterId) {
@Override Map<String, String> map = _clusterDetailsDao.findDetails(clusterId,
public void updateCapacityForHost(final Host host) { List.of(VmDetailConstants.CPU_OVER_COMMIT_RATIO, VmDetailConstants.MEMORY_OVER_COMMIT_RATIO));
// prepare the service offerings return new Pair<>(map.get(VmDetailConstants.CPU_OVER_COMMIT_RATIO),
List<ServiceOfferingVO> offerings = _offeringsDao.listAllIncludingRemoved(); map.get(VmDetailConstants.MEMORY_OVER_COMMIT_RATIO));
Map<Long, ServiceOfferingVO> offeringsMap = new HashMap<Long, ServiceOfferingVO>();
for (ServiceOfferingVO offering : offerings) {
offeringsMap.put(offering.getId(), offering);
} }
updateCapacityForHost(host, offeringsMap);
protected Map<Long, ServiceOfferingVO> getServiceOfferingsMap() {
List<ServiceOfferingVO> serviceOfferings = _offeringsDao.listAllIncludingRemoved();
if (CollectionUtils.isEmpty(serviceOfferings)) {
return new HashMap<>();
}
return serviceOfferings.stream()
.collect(Collectors.toMap(
ServiceOfferingVO::getId,
offering -> offering
));
}
protected ServiceOfferingVO getServiceOffering(long id) {
Map <Long, ServiceOfferingVO> map = serviceOfferingsCache.get();
if (map.containsKey(id)) {
return map.get(id);
}
ServiceOfferingVO serviceOfferingVO = _offeringsDao.findByIdIncludingRemoved(id);
if (serviceOfferingVO != null) {
serviceOfferingsCache.invalidate();
}
return serviceOfferingVO;
}
protected Map<String, String> getVmDetailsForCapacityCalculation(long vmId) {
return _userVmDetailsDao.listDetailsKeyPairs(vmId,
List.of(VmDetailConstants.CPU_OVER_COMMIT_RATIO,
VmDetailConstants.MEMORY_OVER_COMMIT_RATIO,
UsageEventVO.DynamicParameters.memory.name(),
UsageEventVO.DynamicParameters.cpuNumber.name(),
UsageEventVO.DynamicParameters.cpuSpeed.name()));
    }
    @DB
    @Override
-    public void updateCapacityForHost(final Host host, final Map<Long, ServiceOfferingVO> offeringsMap) {
+    public void updateCapacityForHost(final Host host) {
        long usedCpuCore = 0;
        long reservedCpuCore = 0;
        long usedCpu = 0;
@@ -651,32 +679,27 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
        long reservedCpu = 0;
        final CapacityState capacityState = (host.getResourceState() == ResourceState.Enabled) ? CapacityState.Enabled : CapacityState.Disabled;
-        List<VMInstanceVO> vms = _vmDao.listUpByHostId(host.getId());
-        if (logger.isDebugEnabled()) {
-            logger.debug("Found {} VMs on host {}", vms.size(), host);
-        }
+        List<VMInstanceVO> vms = _vmDao.listIdServiceOfferingForUpVmsByHostId(host.getId());
+        logger.debug("Found {} VMs on {}", vms.size(), host);
-        final List<VMInstanceVO> vosMigrating = _vmDao.listVmsMigratingFromHost(host.getId());
-        if (logger.isDebugEnabled()) {
-            logger.debug("Found {} VMs are Migrating from host {}", vosMigrating.size(), host);
-        }
+        final List<VMInstanceVO> vosMigrating = _vmDao.listIdServiceOfferingForVmsMigratingFromHost(host.getId());
+        logger.debug("Found {} VMs are Migrating from {}", vosMigrating.size(), host);
        vms.addAll(vosMigrating);
-        ClusterVO cluster = _clusterDao.findById(host.getClusterId());
-        ClusterDetailsVO clusterDetailCpu = _clusterDetailsDao.findDetail(cluster.getId(), "cpuOvercommitRatio");
-        ClusterDetailsVO clusterDetailRam = _clusterDetailsDao.findDetail(cluster.getId(), "memoryOvercommitRatio");
-        Float clusterCpuOvercommitRatio = Float.parseFloat(clusterDetailCpu.getValue());
-        Float clusterRamOvercommitRatio = Float.parseFloat(clusterDetailRam.getValue());
+        Pair<String, String> clusterValues =
+                clusterValuesCache.get(host.getClusterId());
+        Float clusterCpuOvercommitRatio = Float.parseFloat(clusterValues.first());
+        Float clusterRamOvercommitRatio = Float.parseFloat(clusterValues.second());
        for (VMInstanceVO vm : vms) {
            Float cpuOvercommitRatio = 1.0f;
            Float ramOvercommitRatio = 1.0f;
-            Map<String, String> vmDetails = _userVmDetailsDao.listDetailsKeyPairs(vm.getId());
-            String vmDetailCpu = vmDetails.get("cpuOvercommitRatio");
-            String vmDetailRam = vmDetails.get("memoryOvercommitRatio");
+            Map<String, String> vmDetails = getVmDetailsForCapacityCalculation(vm.getId());
+            String vmDetailCpu = vmDetails.get(VmDetailConstants.CPU_OVER_COMMIT_RATIO);
+            String vmDetailRam = vmDetails.get(VmDetailConstants.MEMORY_OVER_COMMIT_RATIO);
            // if vmDetailCpu or vmDetailRam is not null it means it is running in a overcommitted cluster.
            cpuOvercommitRatio = (vmDetailCpu != null) ? Float.parseFloat(vmDetailCpu) : clusterCpuOvercommitRatio;
            ramOvercommitRatio = (vmDetailRam != null) ? Float.parseFloat(vmDetailRam) : clusterRamOvercommitRatio;
-            ServiceOffering so = offeringsMap.get(vm.getServiceOfferingId());
+            ServiceOffering so = getServiceOffering(vm.getServiceOfferingId());
            if (so == null) {
                so = _offeringsDao.findByIdIncludingRemoved(vm.getServiceOfferingId());
            }
@@ -702,26 +725,25 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
        }
        List<VMInstanceVO> vmsByLastHostId = _vmDao.listByLastHostId(host.getId());
-        if (logger.isDebugEnabled()) {
-            logger.debug("Found {} VM, not running on host {}", vmsByLastHostId.size(), host);
-        }
+        logger.debug("Found {} VM, not running on {}", vmsByLastHostId.size(), host);
        for (VMInstanceVO vm : vmsByLastHostId) {
            Float cpuOvercommitRatio = 1.0f;
            Float ramOvercommitRatio = 1.0f;
            long lastModificationTime = Optional.ofNullable(vm.getUpdateTime()).orElse(vm.getCreated()).getTime();
            long secondsSinceLastUpdate = (DateUtil.currentGMTTime().getTime() - lastModificationTime) / 1000;
            if (secondsSinceLastUpdate < _vmCapacityReleaseInterval) {
-                UserVmDetailVO vmDetailCpu = _userVmDetailsDao.findDetail(vm.getId(), VmDetailConstants.CPU_OVER_COMMIT_RATIO);
-                UserVmDetailVO vmDetailRam = _userVmDetailsDao.findDetail(vm.getId(), VmDetailConstants.MEMORY_OVER_COMMIT_RATIO);
+                Map<String, String> vmDetails = getVmDetailsForCapacityCalculation(vm.getId());
+                String vmDetailCpu = vmDetails.get(VmDetailConstants.CPU_OVER_COMMIT_RATIO);
+                String vmDetailRam = vmDetails.get(VmDetailConstants.MEMORY_OVER_COMMIT_RATIO);
                if (vmDetailCpu != null) {
                    //if vmDetail_cpu is not null it means it is running in a overcommited cluster.
-                    cpuOvercommitRatio = Float.parseFloat(vmDetailCpu.getValue());
+                    cpuOvercommitRatio = Float.parseFloat(vmDetailCpu);
                }
                if (vmDetailRam != null) {
-                    ramOvercommitRatio = Float.parseFloat(vmDetailRam.getValue());
+                    ramOvercommitRatio = Float.parseFloat(vmDetailRam);
                }
-                ServiceOffering so = offeringsMap.get(vm.getServiceOfferingId());
-                Map<String, String> vmDetails = _userVmDetailsDao.listDetailsKeyPairs(vm.getId());
+                ServiceOffering so = getServiceOffering(vm.getServiceOfferingId());
                if (so == null) {
                    so = _offeringsDao.findByIdIncludingRemoved(vm.getServiceOfferingId());
                }
@@ -761,9 +783,24 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
            }
        }
-        CapacityVO cpuCap = _capacityDao.findByHostIdType(host.getId(), Capacity.CAPACITY_TYPE_CPU);
-        CapacityVO memCap = _capacityDao.findByHostIdType(host.getId(), Capacity.CAPACITY_TYPE_MEMORY);
-        CapacityVO cpuCoreCap = _capacityDao.findByHostIdType(host.getId(), CapacityVO.CAPACITY_TYPE_CPU_CORE);
+        List<CapacityVO> capacities = _capacityDao.listByHostIdTypes(host.getId(), List.of(Capacity.CAPACITY_TYPE_CPU,
+                Capacity.CAPACITY_TYPE_MEMORY,
+                CapacityVO.CAPACITY_TYPE_CPU_CORE));
+        CapacityVO cpuCap = null;
+        CapacityVO memCap = null;
+        CapacityVO cpuCoreCap = null;
+        for (CapacityVO c : capacities) {
+            if (c.getCapacityType() == Capacity.CAPACITY_TYPE_CPU) {
+                cpuCap = c;
+            } else if (c.getCapacityType() == Capacity.CAPACITY_TYPE_MEMORY) {
+                memCap = c;
+            } else if (c.getCapacityType() == Capacity.CAPACITY_TYPE_CPU_CORE) {
+                cpuCoreCap = c;
+            }
+            if (ObjectUtils.allNotNull(cpuCap, memCap, cpuCoreCap)) {
+                break;
+            }
+        }
        if (cpuCoreCap != null) {
            long hostTotalCpuCore = host.getCpus().longValue();
@@ -995,8 +1032,8 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
        capacityCPU.addAnd("podId", SearchCriteria.Op.EQ, server.getPodId());
        capacityCPU.addAnd("capacityType", SearchCriteria.Op.EQ, Capacity.CAPACITY_TYPE_CPU);
        List<CapacityVO> capacityVOCpus = _capacityDao.search(capacitySC, null);
-        Float cpuovercommitratio = Float.parseFloat(_clusterDetailsDao.findDetail(server.getClusterId(), "cpuOvercommitRatio").getValue());
-        Float memoryOvercommitRatio = Float.parseFloat(_clusterDetailsDao.findDetail(server.getClusterId(), "memoryOvercommitRatio").getValue());
+        Float cpuovercommitratio = Float.parseFloat(_clusterDetailsDao.findDetail(server.getClusterId(), VmDetailConstants.CPU_OVER_COMMIT_RATIO).getValue());
+        Float memoryOvercommitRatio = Float.parseFloat(_clusterDetailsDao.findDetail(server.getClusterId(), VmDetailConstants.MEMORY_OVER_COMMIT_RATIO).getValue());
        if (capacityVOCpus != null && !capacityVOCpus.isEmpty()) {
            CapacityVO CapacityVOCpu = capacityVOCpus.get(0);
@@ -1053,9 +1090,9 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
        String capacityOverProvisioningName = "";
        if (capacityType == Capacity.CAPACITY_TYPE_CPU) {
-            capacityOverProvisioningName = "cpuOvercommitRatio";
+            capacityOverProvisioningName = VmDetailConstants.CPU_OVER_COMMIT_RATIO;
        } else if (capacityType == Capacity.CAPACITY_TYPE_MEMORY) {
-            capacityOverProvisioningName = "memoryOvercommitRatio";
+            capacityOverProvisioningName = VmDetailConstants.MEMORY_OVER_COMMIT_RATIO;
        } else {
            throw new CloudRuntimeException("Invalid capacityType - " + capacityType);
        }
@@ -1093,13 +1130,11 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
    public Pair<Boolean, Boolean> checkIfHostHasCpuCapabilityAndCapacity(Host host, ServiceOffering offering, boolean considerReservedCapacity) {
        int cpu_requested = offering.getCpu() * offering.getSpeed();
        long ram_requested = offering.getRamSize() * 1024L * 1024L;
-        Cluster cluster = _clusterDao.findById(host.getClusterId());
-        ClusterDetailsVO clusterDetailsCpuOvercommit = _clusterDetailsDao.findDetail(cluster.getId(), "cpuOvercommitRatio");
-        ClusterDetailsVO clusterDetailsRamOvercommmt = _clusterDetailsDao.findDetail(cluster.getId(), "memoryOvercommitRatio");
-        Float cpuOvercommitRatio = Float.parseFloat(clusterDetailsCpuOvercommit.getValue());
-        Float memoryOvercommitRatio = Float.parseFloat(clusterDetailsRamOvercommmt.getValue());
-        boolean hostHasCpuCapability = checkIfHostHasCpuCapability(host.getId(), offering.getCpu(), offering.getSpeed());
+        Pair<String, String> clusterDetails = getClusterValues(host.getClusterId());
+        Float cpuOvercommitRatio = Float.parseFloat(clusterDetails.first());
+        Float memoryOvercommitRatio = Float.parseFloat(clusterDetails.second());
+        boolean hostHasCpuCapability = checkIfHostHasCpuCapability(host, offering.getCpu(), offering.getSpeed());
        boolean hostHasCapacity = checkIfHostHasCapacity(host, cpu_requested, ram_requested, false, cpuOvercommitRatio, memoryOvercommitRatio,
                considerReservedCapacity);
@@ -1241,6 +1276,6 @@ public class CapacityManagerImpl extends ManagerBase implements CapacityManager,
    public ConfigKey<?>[] getConfigKeys() {
        return new ConfigKey<?>[] {CpuOverprovisioningFactor, MemOverprovisioningFactor, StorageCapacityDisableThreshold, StorageOverprovisioningFactor,
                StorageAllocatedCapacityDisableThreshold, StorageOperationsExcludeCluster, ImageStoreNFSVersion, SecondaryStorageCapacityThreshold,
-                StorageAllocatedCapacityDisableThresholdForVolumeSize };
+                StorageAllocatedCapacityDisableThresholdForVolumeSize, CapacityCalculateWorkers };
    }
}
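The serviceOfferingsCache and clusterValuesCache referenced in this file come from the new Caffeine-based caching this PR introduces, but their construction is not visible in the hunks shown here. Purely as an illustration of the pattern, below is a minimal sketch of a single-entry, expiring "list all" cache with the same invalidate-on-miss behaviour as getServiceOffering() above; all names, types and the 60-second window are assumptions, not the PR's actual wiring.

```java
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

public class OfferingsCacheSketch {
    private static final String ALL = "all";

    // One logical entry ("all"), reloaded at most once per minute.
    private final LoadingCache<String, Map<Long, String>> offeringsCache = Caffeine.newBuilder()
            .maximumSize(1)
            .expireAfterWrite(Duration.ofSeconds(60))
            .build(key -> loadOfferingsFromDb());

    // Stand-in for a DAO call that lists all offerings including removed ones.
    private Map<Long, String> loadOfferingsFromDb() {
        Map<Long, String> map = new ConcurrentHashMap<>();
        map.put(1L, "small");
        map.put(2L, "medium");
        return map;
    }

    public String getOffering(long id) {
        Map<Long, String> map = offeringsCache.get(ALL);
        if (map.containsKey(id)) {
            return map.get(id);
        }
        // Cache is stale (offering created after the last load): drop it so the
        // next call repopulates, mirroring the invalidate-on-miss seen above.
        offeringsCache.invalidateAll();
        return null;
    }
}
```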


@@ -16,6 +16,11 @@
// under the License.
package com.cloud.configuration;
+import static com.cloud.configuration.Config.SecStorageAllowedInternalDownloadSites;
+import static com.cloud.offering.NetworkOffering.RoutingMode.Dynamic;
+import static com.cloud.offering.NetworkOffering.RoutingMode.Static;
+import static org.apache.cloudstack.framework.config.ConfigKey.CATEGORY_SYSTEM;
import java.io.UnsupportedEncodingException;
import java.net.URI;
import java.net.URISyntaxException;
@@ -308,11 +313,6 @@ import com.google.common.collect.Sets;
import com.googlecode.ipv6.IPv6Address;
import com.googlecode.ipv6.IPv6Network;
-import static com.cloud.configuration.Config.SecStorageAllowedInternalDownloadSites;
-import static com.cloud.offering.NetworkOffering.RoutingMode.Dynamic;
-import static com.cloud.offering.NetworkOffering.RoutingMode.Static;
-import static org.apache.cloudstack.framework.config.ConfigKey.CATEGORY_SYSTEM;
public class ConfigurationManagerImpl extends ManagerBase implements ConfigurationManager, ConfigurationService, Configurable {
    public static final String PERACCOUNT = "peraccount";
    public static final String PERZONE = "perzone";
@@ -2521,7 +2521,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
        // Check if there are any non-removed hosts in the zone.
-        if (!_hostDao.listByDataCenterId(zoneId).isEmpty()) {
+        if (!_hostDao.listEnabledIdsByDataCenterId(zoneId).isEmpty()) {
            throw new CloudRuntimeException(errorMsg + "there are servers in this zone.");
        }
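listEnabledIdsByDataCenterId() is one of several new DAO methods that return only IDs where the caller never needs full entities. Its implementation is not part of the hunks shown here; as a rough, self-contained illustration of the idea, here is a plain-JDBC sketch with assumed table and column names rather than CloudStack's own search framework.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public final class IdOnlyQuerySketch {
    // Fetch only the ids the caller needs instead of hydrating full host rows.
    public static List<Long> listEnabledHostIdsInZone(Connection conn, long zoneId) throws SQLException {
        String sql = "SELECT id FROM host WHERE data_center_id = ? "
                + "AND resource_state = 'Enabled' AND removed IS NULL";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, zoneId);
            try (ResultSet rs = ps.executeQuery()) {
                List<Long> ids = new ArrayList<>();
                while (rs.next()) {
                    ids.add(rs.getLong(1));
                }
                return ids;
            }
        }
    }
}
```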


@@ -869,11 +869,9 @@ public class ConsoleProxyManagerImpl extends ManagerBase implements ConsoleProxy
    }
    public boolean isZoneReady(Map<Long, ZoneHostInfo> zoneHostInfoMap, DataCenter dataCenter) {
-        List <HostVO> hosts = hostDao.listByDataCenterId(dataCenter.getId());
-        if (CollectionUtils.isEmpty(hosts)) {
-            if (logger.isDebugEnabled()) {
-                logger.debug("Zone {} has no host available which is enabled and in Up state", dataCenter);
-            }
+        Integer totalUpAndEnabledHosts = hostDao.countUpAndEnabledHostsInZone(dataCenter.getId());
+        if (totalUpAndEnabledHosts != null && totalUpAndEnabledHosts < 1) {
+            logger.debug("{} has no host available which is enabled and in Up state", dataCenter);
            return false;
        }
        ZoneHostInfo zoneHostInfo = zoneHostInfoMap.get(dataCenter.getId());
@@ -894,8 +892,8 @@ public class ConsoleProxyManagerImpl extends ManagerBase implements ConsoleProxy
        if (templateHostRef != null) {
            Boolean useLocalStorage = BooleanUtils.toBoolean(ConfigurationManagerImpl.SystemVMUseLocalStorage.valueIn(dataCenter.getId()));
-            List<Pair<Long, Integer>> l = consoleProxyDao.getDatacenterStoragePoolHostInfo(dataCenter.getId(), useLocalStorage);
-            if (CollectionUtils.isNotEmpty(l) && l.get(0).second() > 0) {
+            boolean hasDatacenterStoragePoolHostInfo = consoleProxyDao.hasDatacenterStoragePoolHostInfo(dataCenter.getId(), !useLocalStorage);
+            if (hasDatacenterStoragePoolHostInfo) {
                return true;
            } else {
                if (logger.isDebugEnabled()) {
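Both changes in this file replace "load a list, then inspect it" with a count or boolean computed by the database (countUpAndEnabledHostsInZone, hasDatacenterStoragePoolHostInfo). Their implementations are not part of this diff; the following is only a hedged sketch of the general existence-check shape, again with assumed SQL rather than CloudStack's search framework.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class ExistenceCheckSketch {
    // Ask the database whether at least one matching row exists instead of
    // pulling the whole host list into memory just to test emptiness.
    public static boolean zoneHasUpAndEnabledHost(Connection conn, long zoneId) throws SQLException {
        String sql = "SELECT EXISTS (SELECT 1 FROM host WHERE data_center_id = ? "
                + "AND status = 'Up' AND resource_state = 'Enabled' AND removed IS NULL)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, zoneId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() && rs.getBoolean(1);
            }
        }
    }
}
```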


@ -36,22 +36,7 @@ import java.util.stream.Collectors;
import javax.inject.Inject; import javax.inject.Inject;
import javax.naming.ConfigurationException; import javax.naming.ConfigurationException;
import com.cloud.cpu.CPU;
import com.cloud.vm.UserVmManager;
import org.apache.cloudstack.affinity.AffinityGroupDomainMapVO; import org.apache.cloudstack.affinity.AffinityGroupDomainMapVO;
import com.cloud.storage.VMTemplateVO;
import com.cloud.storage.dao.VMTemplateDao;
import com.cloud.user.AccountVO;
import com.cloud.user.dao.AccountDao;
import com.cloud.exception.StorageUnavailableException;
import com.cloud.utils.db.Filter;
import com.cloud.utils.fsm.StateMachine2;
import org.apache.cloudstack.framework.config.ConfigKey;
import org.apache.cloudstack.framework.config.Configurable;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.collections.MapUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.cloudstack.affinity.AffinityGroupProcessor; import org.apache.cloudstack.affinity.AffinityGroupProcessor;
import org.apache.cloudstack.affinity.AffinityGroupService; import org.apache.cloudstack.affinity.AffinityGroupService;
import org.apache.cloudstack.affinity.AffinityGroupVMMapVO; import org.apache.cloudstack.affinity.AffinityGroupVMMapVO;
@ -64,6 +49,8 @@ import org.apache.cloudstack.engine.cloud.entity.api.db.dao.VMReservationDao;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore; import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager; import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
import org.apache.cloudstack.engine.subsystem.api.storage.StoragePoolAllocator; import org.apache.cloudstack.engine.subsystem.api.storage.StoragePoolAllocator;
import org.apache.cloudstack.framework.config.ConfigKey;
import org.apache.cloudstack.framework.config.Configurable;
import org.apache.cloudstack.framework.config.dao.ConfigurationDao; import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
import org.apache.cloudstack.framework.messagebus.MessageBus; import org.apache.cloudstack.framework.messagebus.MessageBus;
import org.apache.cloudstack.framework.messagebus.MessageSubscriber; import org.apache.cloudstack.framework.messagebus.MessageSubscriber;
@ -71,6 +58,9 @@ import org.apache.cloudstack.managed.context.ManagedContextTimerTask;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao; import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO; import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.cloudstack.utils.identity.ManagementServerNode; import org.apache.cloudstack.utils.identity.ManagementServerNode;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.collections.MapUtils;
import org.apache.commons.lang3.StringUtils;
import com.cloud.agent.AgentManager; import com.cloud.agent.AgentManager;
import com.cloud.agent.Listener; import com.cloud.agent.Listener;
@ -85,6 +75,7 @@ import com.cloud.capacity.CapacityManager;
import com.cloud.capacity.dao.CapacityDao; import com.cloud.capacity.dao.CapacityDao;
import com.cloud.configuration.Config; import com.cloud.configuration.Config;
import com.cloud.configuration.ConfigurationManagerImpl; import com.cloud.configuration.ConfigurationManagerImpl;
import com.cloud.cpu.CPU;
import com.cloud.dc.ClusterDetailsDao; import com.cloud.dc.ClusterDetailsDao;
import com.cloud.dc.ClusterDetailsVO; import com.cloud.dc.ClusterDetailsVO;
import com.cloud.dc.ClusterVO; import com.cloud.dc.ClusterVO;
@ -102,6 +93,7 @@ import com.cloud.deploy.dao.PlannerHostReservationDao;
import com.cloud.exception.AffinityConflictException; import com.cloud.exception.AffinityConflictException;
import com.cloud.exception.ConnectionException; import com.cloud.exception.ConnectionException;
import com.cloud.exception.InsufficientServerCapacityException; import com.cloud.exception.InsufficientServerCapacityException;
import com.cloud.exception.StorageUnavailableException;
import com.cloud.gpu.GPU; import com.cloud.gpu.GPU;
import com.cloud.host.DetailVO; import com.cloud.host.DetailVO;
import com.cloud.host.Host; import com.cloud.host.Host;
@ -122,15 +114,19 @@ import com.cloud.storage.ScopeType;
import com.cloud.storage.StorageManager; import com.cloud.storage.StorageManager;
import com.cloud.storage.StoragePool; import com.cloud.storage.StoragePool;
import com.cloud.storage.StoragePoolHostVO; import com.cloud.storage.StoragePoolHostVO;
import com.cloud.storage.VMTemplateVO;
import com.cloud.storage.Volume; import com.cloud.storage.Volume;
import com.cloud.storage.VolumeVO; import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.DiskOfferingDao; import com.cloud.storage.dao.DiskOfferingDao;
import com.cloud.storage.dao.GuestOSCategoryDao; import com.cloud.storage.dao.GuestOSCategoryDao;
import com.cloud.storage.dao.GuestOSDao; import com.cloud.storage.dao.GuestOSDao;
import com.cloud.storage.dao.StoragePoolHostDao; import com.cloud.storage.dao.StoragePoolHostDao;
import com.cloud.storage.dao.VMTemplateDao;
import com.cloud.storage.dao.VolumeDao; import com.cloud.storage.dao.VolumeDao;
import com.cloud.template.VirtualMachineTemplate; import com.cloud.template.VirtualMachineTemplate;
import com.cloud.user.AccountManager; import com.cloud.user.AccountManager;
import com.cloud.user.AccountVO;
import com.cloud.user.dao.AccountDao;
import com.cloud.utils.DateUtil; import com.cloud.utils.DateUtil;
import com.cloud.utils.LogUtils; import com.cloud.utils.LogUtils;
import com.cloud.utils.NumbersUtil; import com.cloud.utils.NumbersUtil;
@ -138,13 +134,16 @@ import com.cloud.utils.Pair;
import com.cloud.utils.component.Manager; import com.cloud.utils.component.Manager;
import com.cloud.utils.component.ManagerBase; import com.cloud.utils.component.ManagerBase;
import com.cloud.utils.db.DB; import com.cloud.utils.db.DB;
import com.cloud.utils.db.Filter;
import com.cloud.utils.db.SearchCriteria; import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.Transaction; import com.cloud.utils.db.Transaction;
import com.cloud.utils.db.TransactionCallback; import com.cloud.utils.db.TransactionCallback;
import com.cloud.utils.db.TransactionStatus; import com.cloud.utils.db.TransactionStatus;
import com.cloud.utils.exception.CloudRuntimeException; import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.fsm.StateListener; import com.cloud.utils.fsm.StateListener;
import com.cloud.utils.fsm.StateMachine2;
import com.cloud.vm.DiskProfile; import com.cloud.vm.DiskProfile;
import com.cloud.vm.UserVmManager;
import com.cloud.vm.VMInstanceVO; import com.cloud.vm.VMInstanceVO;
import com.cloud.vm.VirtualMachine; import com.cloud.vm.VirtualMachine;
import com.cloud.vm.VirtualMachine.Event; import com.cloud.vm.VirtualMachine.Event;
@@ -295,8 +294,9 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
            return;
        }
        final Long lastHostClusterId = lastHost.getClusterId();
-        logger.warn("VM last host ID: {} belongs to zone ID: {} for which config - {} is false and storage migration would be needed for inter-cluster migration, therefore, adding all other clusters except ID: {} from this zone to avoid list", lastHost, vm.getDataCenterId(), ConfigurationManagerImpl.MIGRATE_VM_ACROSS_CLUSTERS.key(), lastHostClusterId);
-        List<Long> clusterIds = _clusterDao.listAllClusters(lastHost.getDataCenterId());
+        logger.warn(String.format("VM last host ID: %d belongs to zone ID: %s for which config - %s is false and storage migration would be needed for inter-cluster migration, therefore, adding all other clusters except ID: %d from this zone to avoid list",
+                lastHost.getId(), vm.getDataCenterId(), ConfigurationManagerImpl.MIGRATE_VM_ACROSS_CLUSTERS.key(), lastHostClusterId));
+        List<Long> clusterIds = _clusterDao.listAllClusterIds(lastHost.getDataCenterId());
        Set<Long> existingAvoidedClusters = avoids.getClustersToAvoid();
        clusterIds = clusterIds.stream().filter(x -> !Objects.equals(x, lastHostClusterId) && (existingAvoidedClusters == null || !existingAvoidedClusters.contains(x))).collect(Collectors.toList());
        avoids.addClusterList(clusterIds);
@@ -492,7 +492,7 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
        float memoryOvercommitRatio = Float.parseFloat(cluster_detail_ram.getValue());
        boolean hostHasCpuCapability, hostHasCapacity = false;
-        hostHasCpuCapability = _capacityMgr.checkIfHostHasCpuCapability(host.getId(), offering.getCpu(), offering.getSpeed());
+        hostHasCpuCapability = _capacityMgr.checkIfHostHasCpuCapability(host, offering.getCpu(), offering.getSpeed());
        if (hostHasCpuCapability) {
            // first check from reserved capacity
@@ -736,12 +736,10 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
     * Adds disabled Hosts to the ExcludeList in order to avoid them at the deployment planner.
     */
    protected void avoidDisabledHosts(DataCenter dc, ExcludeList avoids) {
-        List<HostVO> disabledHosts = _hostDao.listDisabledByDataCenterId(dc.getId());
-        logger.debug("Adding hosts [{}] of datacenter [{}] to the avoid set, because these hosts are in the Disabled state.",
-                disabledHosts.stream().map(HostVO::getUuid).collect(Collectors.joining(", ")), dc);
-        for (HostVO host : disabledHosts) {
-            avoids.addHost(host.getId());
-        }
+        List<Long> disabledHostIds = _hostDao.listDisabledIdsByDataCenterId(dc.getId());
+        logger.debug("Adding hosts {} of datacenter [{}] to the avoid set, because these hosts are in the Disabled state.",
+                StringUtils.join(disabledHostIds), dc.getUuid());
+        disabledHostIds.forEach(avoids::addHost);
    }
    /**
@@ -860,7 +858,7 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
        List<Long> allDedicatedPods = _dedicatedDao.listAllPods();
        allPodsInDc.retainAll(allDedicatedPods);
-        List<Long> allClustersInDc = _clusterDao.listAllClusters(dc.getId());
+        List<Long> allClustersInDc = _clusterDao.listAllClusterIds(dc.getId());
        List<Long> allDedicatedClusters = _dedicatedDao.listAllClusters();
        allClustersInDc.retainAll(allDedicatedClusters);
@@ -1147,9 +1145,11 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
    private void checkHostReservations() {
        List<PlannerHostReservationVO> reservedHosts = _plannerHostReserveDao.listAllReservedHosts();
-        for (PlannerHostReservationVO hostReservation : reservedHosts) {
-            HostVO host = _hostDao.findById(hostReservation.getHostId());
+        List<HostVO> hosts = _hostDao.listByIds(reservedHosts
+                .stream()
+                .map(PlannerHostReservationVO::getHostId)
+                .collect(Collectors.toList()));
+        for (HostVO host : hosts) {
            if (host != null && host.getManagementServerId() != null && host.getManagementServerId() == _nodeId) {
                checkHostReservationRelease(host);
            }
@@ -1338,7 +1338,7 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
            Pair<Host, Map<Volume, StoragePool>> potentialResources = findPotentialDeploymentResources(suitableHosts, suitableVolumeStoragePools, avoid,
                    resourceUsageRequired, readyAndReusedVolumes, plan.getPreferredHosts(), vmProfile.getVirtualMachine());
            if (potentialResources != null) {
-                Host host = _hostDao.findById(potentialResources.first().getId());
+                Host host = potentialResources.first();
                Map<Volume, StoragePool> storageVolMap = potentialResources.second();
                // remove the reused vol<->pool from destination, since
                // we don't have to prepare this volume.
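The checkHostReservations() change above is a typical N+1 fix: collect the IDs first, then issue a single listByIds() call. A generic before/after sketch of that refactor, using stand-in types rather than CloudStack classes:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public final class BatchLookupSketch {
    interface HostRepository {
        Host findById(long id);              // one query per call
        List<Host> listByIds(Set<Long> ids); // single query for the whole batch
    }

    record Host(long id, Long managementServerId) { }
    record Reservation(long hostId) { }

    // Before: one database round trip per reservation.
    static void releaseEach(HostRepository repo, List<Reservation> reservations, long nodeId) {
        for (Reservation r : reservations) {
            Host host = repo.findById(r.hostId());
            if (host != null && Long.valueOf(nodeId).equals(host.managementServerId())) {
                // release the reservation...
            }
        }
    }

    // After: collect ids, fetch once, then iterate in memory.
    static void releaseBatched(HostRepository repo, List<Reservation> reservations, long nodeId) {
        Set<Long> ids = reservations.stream().map(Reservation::hostId).collect(Collectors.toSet());
        for (Host host : repo.listByIds(ids)) {
            if (Long.valueOf(nodeId).equals(host.managementServerId())) {
                // release the reservation...
            }
        }
    }
}
```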


@ -16,6 +16,29 @@
// under the License. // under the License.
package com.cloud.hypervisor.kvm.discoverer; package com.cloud.hypervisor.kvm.discoverer;
import static com.cloud.configuration.ConfigurationManagerImpl.ADD_HOST_ON_SERVICE_RESTART_KVM;
import java.net.InetAddress;
import java.net.URI;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import javax.inject.Inject;
import javax.naming.ConfigurationException;
import org.apache.cloudstack.agent.lb.IndirectAgentLB;
import org.apache.cloudstack.ca.CAManager;
import org.apache.cloudstack.ca.SetupCertificateCommand;
import org.apache.cloudstack.direct.download.DirectDownloadManager;
import org.apache.cloudstack.framework.ca.Certificate;
import org.apache.cloudstack.utils.cache.LazyCache;
import org.apache.cloudstack.utils.security.KeyStoreUtils;
import com.cloud.agent.AgentManager; import com.cloud.agent.AgentManager;
import com.cloud.agent.Listener; import com.cloud.agent.Listener;
import com.cloud.agent.api.AgentControlAnswer; import com.cloud.agent.api.AgentControlAnswer;
@ -32,6 +55,7 @@ import com.cloud.exception.DiscoveredWithErrorException;
import com.cloud.exception.DiscoveryException; import com.cloud.exception.DiscoveryException;
import com.cloud.exception.OperationTimedoutException; import com.cloud.exception.OperationTimedoutException;
import com.cloud.host.Host; import com.cloud.host.Host;
import com.cloud.host.HostInfo;
import com.cloud.host.HostVO; import com.cloud.host.HostVO;
import com.cloud.host.Status; import com.cloud.host.Status;
import com.cloud.host.dao.HostDao; import com.cloud.host.dao.HostDao;
@ -48,26 +72,7 @@ import com.cloud.utils.StringUtils;
import com.cloud.utils.exception.CloudRuntimeException; import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.ssh.SSHCmdHelper; import com.cloud.utils.ssh.SSHCmdHelper;
import com.trilead.ssh2.Connection; import com.trilead.ssh2.Connection;
import org.apache.cloudstack.agent.lb.IndirectAgentLB;
import org.apache.cloudstack.ca.CAManager;
import org.apache.cloudstack.ca.SetupCertificateCommand;
import org.apache.cloudstack.direct.download.DirectDownloadManager;
import org.apache.cloudstack.framework.ca.Certificate;
import org.apache.cloudstack.utils.security.KeyStoreUtils;
import javax.inject.Inject;
import javax.naming.ConfigurationException;
import java.net.InetAddress;
import java.net.URI;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import static com.cloud.configuration.ConfigurationManagerImpl.ADD_HOST_ON_SERVICE_RESTART_KVM;
public abstract class LibvirtServerDiscoverer extends DiscovererBase implements Discoverer, Listener, ResourceStateAdapter {
    private final int _waitTime = 5; /* wait for 5 minutes */
@@ -89,6 +94,16 @@ public abstract class LibvirtServerDiscoverer extends DiscovererBase implements
    @Inject
    private HostDao hostDao;
+    private LazyCache<Long, HostVO> clusterExistingHostCache;
+    private HostVO getExistingHostForCluster(long clusterId) {
+        HostVO existingHostInCluster = _hostDao.findAnyStateHypervisorHostInCluster(clusterId);
+        if (existingHostInCluster != null) {
+            _hostDao.loadDetails(existingHostInCluster);
+        }
+        return existingHostInCluster;
+    }
    @Override
    public abstract Hypervisor.HypervisorType getHypervisorType();
@@ -425,6 +440,9 @@ public abstract class LibvirtServerDiscoverer extends DiscovererBase implements
            _kvmGuestNic = _kvmPrivateNic;
        }
+        clusterExistingHostCache = new LazyCache<>(32, 30,
+                this::getExistingHostForCluster);
        agentMgr.registerForHostEvents(this, true, false, false);
        _resourceMgr.registerResourceStateAdapter(this.getClass().getSimpleName(), this);
        return true;
@@ -467,12 +485,10 @@ public abstract class LibvirtServerDiscoverer extends DiscovererBase implements
            throw new IllegalArgumentException("cannot add host, due to can't find cluster: " + host.getClusterId());
        }
-        List<HostVO> hostsInCluster = _resourceMgr.listAllHostsInCluster(clusterVO.getId());
-        if (!hostsInCluster.isEmpty()) {
-            HostVO oneHost = hostsInCluster.get(0);
-            _hostDao.loadDetails(oneHost);
-            String hostOsInCluster = oneHost.getDetail("Host.OS");
-            String hostOs = ssCmd.getHostDetails().get("Host.OS");
+        HostVO existingHostInCluster = clusterExistingHostCache.get(clusterVO.getId());
+        if (existingHostInCluster != null) {
+            String hostOsInCluster = existingHostInCluster.getDetail(HostInfo.HOST_OS);
+            String hostOs = ssCmd.getHostDetails().get(HostInfo.HOST_OS);
            if (!isHostOsCompatibleWithOtherHost(hostOsInCluster, hostOs)) {
                String msg = String.format("host: %s with hostOS, \"%s\"into a cluster, in which there are \"%s\" hosts added", firstCmd.getPrivateIpAddress(), hostOs, hostOsInCluster);
                if (hostOs != null && hostOs.startsWith(hostOsInCluster)) {
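LazyCache comes from the new org.apache.cloudstack.utils.cache package and is only used here; its definition is not part of this diff. Matching the new LazyCache<>(32, 30, this::getExistingHostForCluster) call above (maximum size, expire-after-write seconds, per-key loader), one plausible shape for such a wrapper over Caffeine would be the sketch below; the real implementation may differ.

```java
import java.time.Duration;
import java.util.function.Function;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class LazyCacheSketch<K, V> {
    private final Cache<K, V> cache;
    private final Function<K, V> loader;

    public LazyCacheSketch(long maximumSize, long expireSeconds, Function<K, V> loader) {
        this.cache = Caffeine.newBuilder()
                .maximumSize(maximumSize)
                .expireAfterWrite(Duration.ofSeconds(expireSeconds))
                .build();
        this.loader = loader;
    }

    // Load on first access, then serve from memory until the entry expires.
    // Note: Caffeine does not cache null results, so a cluster with no host
    // would be looked up again on every call.
    public V get(K key) {
        return cache.get(key, loader);
    }

    public void invalidate(K key) {
        cache.invalidate(key);
    }
}
```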


@ -40,16 +40,6 @@ import java.util.UUID;
import javax.inject.Inject; import javax.inject.Inject;
import javax.naming.ConfigurationException; import javax.naming.ConfigurationException;
import com.cloud.bgp.BGPService;
import com.cloud.dc.VlanDetailsVO;
import com.cloud.dc.dao.VlanDetailsDao;
import com.cloud.network.dao.NsxProviderDao;
import com.cloud.network.dao.PublicIpQuarantineDao;
import com.cloud.network.dao.VirtualRouterProviderDao;
import com.cloud.network.element.NsxProviderVO;
import com.cloud.network.element.VirtualRouterProviderVO;
import com.cloud.offering.ServiceOffering;
import com.cloud.service.dao.ServiceOfferingDao;
import org.apache.cloudstack.acl.ControlledEntity.ACLType; import org.apache.cloudstack.acl.ControlledEntity.ACLType;
import org.apache.cloudstack.acl.SecurityChecker.AccessType; import org.apache.cloudstack.acl.SecurityChecker.AccessType;
import org.apache.cloudstack.alert.AlertService; import org.apache.cloudstack.alert.AlertService;
@ -104,6 +94,7 @@ import com.cloud.alert.AlertManager;
import com.cloud.api.ApiDBUtils; import com.cloud.api.ApiDBUtils;
import com.cloud.api.query.dao.DomainRouterJoinDao; import com.cloud.api.query.dao.DomainRouterJoinDao;
import com.cloud.api.query.vo.DomainRouterJoinVO; import com.cloud.api.query.vo.DomainRouterJoinVO;
import com.cloud.bgp.BGPService;
import com.cloud.configuration.Config; import com.cloud.configuration.Config;
import com.cloud.configuration.ConfigurationManager; import com.cloud.configuration.ConfigurationManager;
import com.cloud.configuration.Resource; import com.cloud.configuration.Resource;
@ -114,12 +105,14 @@ import com.cloud.dc.DataCenterVO;
import com.cloud.dc.DataCenterVnetVO; import com.cloud.dc.DataCenterVnetVO;
import com.cloud.dc.DomainVlanMapVO; import com.cloud.dc.DomainVlanMapVO;
import com.cloud.dc.Vlan.VlanType; import com.cloud.dc.Vlan.VlanType;
import com.cloud.dc.VlanDetailsVO;
import com.cloud.dc.VlanVO; import com.cloud.dc.VlanVO;
import com.cloud.dc.dao.AccountVlanMapDao; import com.cloud.dc.dao.AccountVlanMapDao;
import com.cloud.dc.dao.DataCenterDao; import com.cloud.dc.dao.DataCenterDao;
import com.cloud.dc.dao.DataCenterVnetDao; import com.cloud.dc.dao.DataCenterVnetDao;
import com.cloud.dc.dao.DomainVlanMapDao; import com.cloud.dc.dao.DomainVlanMapDao;
import com.cloud.dc.dao.VlanDao; import com.cloud.dc.dao.VlanDao;
import com.cloud.dc.dao.VlanDetailsDao;
import com.cloud.deploy.DeployDestination; import com.cloud.deploy.DeployDestination;
import com.cloud.domain.Domain; import com.cloud.domain.Domain;
import com.cloud.domain.DomainVO; import com.cloud.domain.DomainVO;
@ -165,6 +158,7 @@ import com.cloud.network.dao.NetworkDomainDao;
import com.cloud.network.dao.NetworkDomainVO; import com.cloud.network.dao.NetworkDomainVO;
import com.cloud.network.dao.NetworkServiceMapDao; import com.cloud.network.dao.NetworkServiceMapDao;
import com.cloud.network.dao.NetworkVO; import com.cloud.network.dao.NetworkVO;
import com.cloud.network.dao.NsxProviderDao;
import com.cloud.network.dao.OvsProviderDao; import com.cloud.network.dao.OvsProviderDao;
import com.cloud.network.dao.PhysicalNetworkDao; import com.cloud.network.dao.PhysicalNetworkDao;
import com.cloud.network.dao.PhysicalNetworkServiceProviderDao; import com.cloud.network.dao.PhysicalNetworkServiceProviderDao;
@ -172,9 +166,13 @@ import com.cloud.network.dao.PhysicalNetworkServiceProviderVO;
import com.cloud.network.dao.PhysicalNetworkTrafficTypeDao; import com.cloud.network.dao.PhysicalNetworkTrafficTypeDao;
import com.cloud.network.dao.PhysicalNetworkTrafficTypeVO; import com.cloud.network.dao.PhysicalNetworkTrafficTypeVO;
import com.cloud.network.dao.PhysicalNetworkVO; import com.cloud.network.dao.PhysicalNetworkVO;
import com.cloud.network.dao.PublicIpQuarantineDao;
import com.cloud.network.dao.VirtualRouterProviderDao;
import com.cloud.network.element.NetworkElement; import com.cloud.network.element.NetworkElement;
import com.cloud.network.element.NsxProviderVO;
import com.cloud.network.element.OvsProviderVO; import com.cloud.network.element.OvsProviderVO;
import com.cloud.network.element.VirtualRouterElement; import com.cloud.network.element.VirtualRouterElement;
import com.cloud.network.element.VirtualRouterProviderVO;
import com.cloud.network.element.VpcVirtualRouterElement; import com.cloud.network.element.VpcVirtualRouterElement;
import com.cloud.network.guru.GuestNetworkGuru; import com.cloud.network.guru.GuestNetworkGuru;
import com.cloud.network.guru.NetworkGuru; import com.cloud.network.guru.NetworkGuru;
@ -198,6 +196,7 @@ import com.cloud.network.vpc.dao.VpcDao;
import com.cloud.network.vpc.dao.VpcGatewayDao; import com.cloud.network.vpc.dao.VpcGatewayDao;
import com.cloud.network.vpc.dao.VpcOfferingDao; import com.cloud.network.vpc.dao.VpcOfferingDao;
import com.cloud.offering.NetworkOffering; import com.cloud.offering.NetworkOffering;
import com.cloud.offering.ServiceOffering;
import com.cloud.offerings.NetworkOfferingVO; import com.cloud.offerings.NetworkOfferingVO;
import com.cloud.offerings.dao.NetworkOfferingDao; import com.cloud.offerings.dao.NetworkOfferingDao;
import com.cloud.offerings.dao.NetworkOfferingServiceMapDao; import com.cloud.offerings.dao.NetworkOfferingServiceMapDao;
@ -207,6 +206,7 @@ import com.cloud.projects.ProjectManager;
import com.cloud.server.ResourceTag; import com.cloud.server.ResourceTag;
import com.cloud.server.ResourceTag.ResourceObjectType; import com.cloud.server.ResourceTag.ResourceObjectType;
import com.cloud.service.ServiceOfferingVO; import com.cloud.service.ServiceOfferingVO;
import com.cloud.service.dao.ServiceOfferingDao;
import com.cloud.tags.ResourceTagVO; import com.cloud.tags.ResourceTagVO;
import com.cloud.tags.dao.ResourceTagDao; import com.cloud.tags.dao.ResourceTagDao;
import com.cloud.user.Account; import com.cloud.user.Account;


@@ -2715,7 +2715,7 @@ public class AutoScaleManagerImpl extends ManagerBase implements AutoScaleManage
            return vmStatsById;
        }
        try {
-            vmStatsById = virtualMachineManager.getVirtualMachineStatistics(host.getId(), host.getName(), vmIds);
+            vmStatsById = virtualMachineManager.getVirtualMachineStatistics(host, vmIds);
            if (MapUtils.isEmpty(vmStatsById)) {
                logger.warn("Got empty result for virtual machine statistics from host: " + host);
            }
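getVirtualMachineStatistics(host, vmIds) is one of several signatures in this PR that now take the entity the caller already holds instead of its ID and name, so the callee no longer re-loads it from the database. A generic sketch of that refactor, with stand-in interfaces rather than the CloudStack types:

```java
import java.util.List;
import java.util.Map;

public final class PassEntitySketch {
    interface Host { long getId(); String getName(); }
    interface HostDao { Host findById(long id); }

    // Before: the callee hits the database again just to rebuild what the caller had.
    static Map<Long, Object> statsById(HostDao dao, long hostId, List<Long> vmIds) {
        Host host = dao.findById(hostId); // extra retrieval
        return queryAgent(host, vmIds);
    }

    // After: accept the entity itself; no extra lookup.
    static Map<Long, Object> stats(Host host, List<Long> vmIds) {
        return queryAgent(host, vmIds);
    }

    private static Map<Long, Object> queryAgent(Host host, List<Long> vmIds) {
        return Map.of(); // placeholder for the agent call
    }
}
```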


@@ -22,8 +22,8 @@ import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
-import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
import com.cloud.agent.AgentManager;
import com.cloud.agent.Listener;


@@ -32,6 +32,7 @@ import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Random;
+import java.util.Set;
import java.util.stream.Collectors;
import javax.inject.Inject;
@@ -547,8 +548,8 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
            details.put("ovm3pool", allParams.get("ovm3pool"));
            details.put("ovm3cluster", allParams.get("ovm3cluster"));
        }
-        details.put("cpuOvercommitRatio", CapacityManager.CpuOverprovisioningFactor.value().toString());
-        details.put("memoryOvercommitRatio", CapacityManager.MemOverprovisioningFactor.value().toString());
+        details.put(VmDetailConstants.CPU_OVER_COMMIT_RATIO, CapacityManager.CpuOverprovisioningFactor.value().toString());
+        details.put(VmDetailConstants.MEMORY_OVER_COMMIT_RATIO, CapacityManager.MemOverprovisioningFactor.value().toString());
        _clusterDetailsDao.persist(cluster.getId(), details);
        return result;
    }
@@ -558,8 +559,8 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
        details.put("url", url);
        details.put("username", StringUtils.defaultString(username));
        details.put("password", StringUtils.defaultString(password));
-        details.put("cpuOvercommitRatio", CapacityManager.CpuOverprovisioningFactor.value().toString());
-        details.put("memoryOvercommitRatio", CapacityManager.MemOverprovisioningFactor.value().toString());
+        details.put(VmDetailConstants.CPU_OVER_COMMIT_RATIO, CapacityManager.CpuOverprovisioningFactor.value().toString());
+        details.put(VmDetailConstants.MEMORY_OVER_COMMIT_RATIO, CapacityManager.MemOverprovisioningFactor.value().toString());
        _clusterDetailsDao.persist(cluster.getId(), details);
        boolean success = false;
@@ -643,8 +644,8 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
                throw ex;
            } else {
                if (cluster.getGuid() == null) {
-                    final List<HostVO> hosts = listAllHostsInCluster(clusterId);
-                    if (!hosts.isEmpty()) {
+                    final List<Long> hostIds = _hostDao.listIdsByClusterId(clusterId);
+                    if (!hostIds.isEmpty()) {
                        final CloudRuntimeException ex =
                                new CloudRuntimeException("Guid is not updated for cluster with specified cluster id; need to wait for hosts in this cluster to come up");
                        ex.addProxyObject(cluster.getUuid(), "clusterId");
@@ -780,9 +781,9 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
            }
        }
        clusterId = cluster.getId();
-        if (_clusterDetailsDao.findDetail(clusterId, "cpuOvercommitRatio") == null) {
-            final ClusterDetailsVO cluster_cpu_detail = new ClusterDetailsVO(clusterId, "cpuOvercommitRatio", "1");
-            final ClusterDetailsVO cluster_memory_detail = new ClusterDetailsVO(clusterId, "memoryOvercommitRatio", "1");
+        if (_clusterDetailsDao.findDetail(clusterId, VmDetailConstants.CPU_OVER_COMMIT_RATIO) == null) {
+            final ClusterDetailsVO cluster_cpu_detail = new ClusterDetailsVO(clusterId, VmDetailConstants.CPU_OVER_COMMIT_RATIO, "1");
+            final ClusterDetailsVO cluster_memory_detail = new ClusterDetailsVO(clusterId, VmDetailConstants.MEMORY_OVER_COMMIT_RATIO, "1");
            _clusterDetailsDao.persist(cluster_cpu_detail);
            _clusterDetailsDao.persist(cluster_memory_detail);
        }
@@ -964,8 +965,8 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
        Host hostRemoved = _hostDao.findById(hostId);
        _hostDao.remove(hostId);
        if (clusterId != null) {
-            final List<HostVO> hosts = listAllHostsInCluster(clusterId);
-            if (hosts.size() == 0) {
+            final List<Long> hostIds = _hostDao.listIdsByClusterId(clusterId);
+            if (CollectionUtils.isEmpty(hostIds)) {
                final ClusterVO cluster = _clusterDao.findById(clusterId);
                cluster.setGuid(null);
                _clusterDao.update(clusterId, cluster);
@@ -1089,21 +1090,17 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
        final Hypervisor.HypervisorType hypervisorType = cluster.getHypervisorType();
-        final List<HostVO> hosts = listAllHostsInCluster(cmd.getId());
-        if (hosts.size() > 0) {
-            if (logger.isDebugEnabled()) {
-                logger.debug("Cluster: {} still has hosts, can't remove", cluster);
-            }
-            throw new CloudRuntimeException(String.format("Cluster: %s cannot be removed. Cluster still has hosts", cluster));
+        final List<Long> hostIds = _hostDao.listIdsByClusterId(cmd.getId());
+        if (!hostIds.isEmpty()) {
+            logger.debug("{} still has hosts, can't remove", cluster);
+            throw new CloudRuntimeException("Cluster: " + cmd.getId() + " cannot be removed. Cluster still has hosts");
        }
        // don't allow to remove the cluster if it has non-removed storage
        // pools
        final List<StoragePoolVO> storagePools = _storagePoolDao.listPoolsByCluster(cmd.getId());
        if (storagePools.size() > 0) {
-            if (logger.isDebugEnabled()) {
-                logger.debug("Cluster: {} still has storage pools, can't remove", cluster);
-            }
+            logger.debug("{} still has storage pools, can't remove", cluster);
            throw new CloudRuntimeException(String.format("Cluster: %s cannot be removed. Cluster still has storage pools", cluster));
        }
@@ -2437,10 +2434,10 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
        boolean clusterSupportsResigning = true;
-        List<HostVO> hostVOs = _hostDao.findByClusterId(host.getClusterId());
-        for (HostVO hostVO : hostVOs) {
-            DetailVO hostDetailVO = _hostDetailsDao.findDetail(hostVO.getId(), name);
+        List<Long> hostIds = _hostDao.listIdsByClusterId(host.getClusterId());
+        for (Long hostId : hostIds) {
+            DetailVO hostDetailVO = _hostDetailsDao.findDetail(hostId, name);
            if (hostDetailVO == null || Boolean.parseBoolean(hostDetailVO.getValue()) == false) {
                clusterSupportsResigning = false;
@@ -3054,10 +3051,10 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
    public boolean updateClusterPassword(final UpdateHostPasswordCmd command) {
        final boolean shouldUpdateHostPasswd = command.getUpdatePasswdOnHost();
        // get agents for the cluster
-        final List<HostVO> hosts = listAllHostsInCluster(command.getClusterId());
-        for (final HostVO host : hosts) {
+        final List<Long> hostIds = _hostDao.listIdsByClusterId(command.getClusterId());
+        for (final Long hostId : hostIds) {
            try {
-                final Boolean result = propagateResourceEvent(host.getId(), ResourceState.Event.UpdatePassword);
+                final Boolean result = propagateResourceEvent(hostId, ResourceState.Event.UpdatePassword);
                if (result != null) {
                    return result;
                }
@@ -3066,8 +3063,9 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
            }
            if (shouldUpdateHostPasswd) {
-                final boolean isUpdated = doUpdateHostPassword(host.getId());
+                final boolean isUpdated = doUpdateHostPassword(hostId);
                if (!isUpdated) {
+                    HostVO host = _hostDao.findById(hostId);
                    throw new CloudRuntimeException(
                            String.format("CloudStack failed to update the password of %s. Please make sure you are still able to connect to your hosts.", host));
                }
@@ -3281,26 +3279,13 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
    }
    @Override
-    public List<HypervisorType> listAvailHypervisorInZone(final Long hostId, final Long zoneId) {
-        final SearchCriteria<String> sc = _hypervisorsInDC.create();
-        if (zoneId != null) {
-            sc.setParameters("dataCenter", zoneId);
-        }
-        if (hostId != null) {
-            // exclude the given host, since we want to check what hypervisor is already handled
-            // in adding this new host
-            sc.setParameters("id", hostId);
-        }
-        sc.setParameters("type", Host.Type.Routing);
-        // The search is not able to return list of enums, so getting
-        // list of hypervisors as strings and then converting them to enum
-        final List<String> hvs = _hostDao.customSearch(sc, null);
-        final List<HypervisorType> hypervisors = new ArrayList<HypervisorType>();
-        for (final String hv : hvs) {
-            hypervisors.add(HypervisorType.getType(hv));
-        }
-        return hypervisors;
+    public List<HypervisorType> listAvailHypervisorInZone(final Long zoneId) {
+        List<VMTemplateVO> systemVMTemplates = _templateDao.listAllReadySystemVMTemplates(zoneId);
+        final Set<HypervisorType> hypervisors = new HashSet<>();
+        for (final VMTemplateVO systemVMTemplate : systemVMTemplates) {
+            hypervisors.add(systemVMTemplate.getHypervisorType());
+        }
+        return new ArrayList<>(hypervisors);
    }
    @Override
@@ -3318,17 +3303,15 @@ public class ResourceManagerImpl extends ManagerBase implements ResourceManager,
    }
    @Override
-    public HostStats getHostStatistics(final long hostId) {
-        HostVO host = _hostDao.findById(hostId);
-        final Answer answer = _agentMgr.easySend(hostId, new GetHostStatsCommand(host.getGuid(), host.getName(), hostId));
+    public HostStats getHostStatistics(final Host host) {
+        final Answer answer = _agentMgr.easySend(host.getId(), new GetHostStatsCommand(host.getGuid(), host.getName(), host.getId()));
        if (answer != null && answer instanceof UnsupportedAnswer) {
            return null;
        }
        if (answer == null || !answer.getResult()) {
-            final String msg = String.format("Unable to obtain host %s statistics. ", host);
-            logger.warn(msg);
+            logger.warn("Unable to obtain {} statistics.", host);
            return null;
        } else {
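The rewritten listAvailHypervisorInZone() derives a zone's hypervisors from its ready system VM templates instead of searching host rows. For reference only, the loop above is equivalent to a short stream pipeline; the types below are illustrative stand-ins, not the CloudStack classes.

```java
import java.util.List;
import java.util.stream.Collectors;

public final class ZoneHypervisorsSketch {
    enum HypervisorType { KVM, VMware, XenServer }
    record TemplateRecord(HypervisorType hypervisorType) { }

    static List<HypervisorType> hypervisorsFromReadyTemplates(List<TemplateRecord> readySystemTemplates) {
        return readySystemTemplates.stream()
                .map(TemplateRecord::hypervisorType)
                .distinct()                 // same de-duplication the Set gives in the loop
                .collect(Collectors.toList());
    }
}
```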


@@ -652,7 +652,7 @@ public class RollingMaintenanceManagerImpl extends ManagerBase implements Rollin
                continue;
            }
            boolean maxGuestLimit = capacityManager.checkIfHostReachMaxGuestLimit(host);
-            boolean hostHasCPUCapacity = capacityManager.checkIfHostHasCpuCapability(hostInCluster.getId(), cpu, speed);
+            boolean hostHasCPUCapacity = capacityManager.checkIfHostHasCpuCapability(hostInCluster, cpu, speed);
            int cpuRequested = cpu * speed;
            long ramRequested = ramSize * 1024L * 1024L;
            ClusterDetailsVO clusterDetailsCpuOvercommit = clusterDetailsDao.findDetail(cluster.getId(), "cpuOvercommitRatio");


@ -93,7 +93,6 @@ import com.cloud.projects.Project;
import com.cloud.projects.ProjectAccount.Role; import com.cloud.projects.ProjectAccount.Role;
import com.cloud.projects.dao.ProjectAccountDao; import com.cloud.projects.dao.ProjectAccountDao;
import com.cloud.projects.dao.ProjectDao; import com.cloud.projects.dao.ProjectDao;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.service.dao.ServiceOfferingDao; import com.cloud.service.dao.ServiceOfferingDao;
import com.cloud.storage.DataStoreRole; import com.cloud.storage.DataStoreRole;
import com.cloud.storage.DiskOfferingVO; import com.cloud.storage.DiskOfferingVO;
@ -105,7 +104,6 @@ import com.cloud.storage.dao.DiskOfferingDao;
import com.cloud.storage.dao.SnapshotDao; import com.cloud.storage.dao.SnapshotDao;
import com.cloud.storage.dao.VMTemplateDao; import com.cloud.storage.dao.VMTemplateDao;
import com.cloud.storage.dao.VolumeDao; import com.cloud.storage.dao.VolumeDao;
import com.cloud.storage.dao.VolumeDaoImpl.SumCount;
import com.cloud.template.VirtualMachineTemplate; import com.cloud.template.VirtualMachineTemplate;
import com.cloud.user.Account; import com.cloud.user.Account;
import com.cloud.user.AccountManager; import com.cloud.user.AccountManager;
@ -118,6 +116,7 @@ import com.cloud.utils.concurrency.NamedThreadFactory;
import com.cloud.utils.db.DB; import com.cloud.utils.db.DB;
import com.cloud.utils.db.EntityManager; import com.cloud.utils.db.EntityManager;
import com.cloud.utils.db.Filter; import com.cloud.utils.db.Filter;
import com.cloud.utils.db.GenericDaoBase.SumCount;
import com.cloud.utils.db.GenericSearchBuilder; import com.cloud.utils.db.GenericSearchBuilder;
import com.cloud.utils.db.GlobalLock; import com.cloud.utils.db.GlobalLock;
import com.cloud.utils.db.JoinBuilder; import com.cloud.utils.db.JoinBuilder;
@@ -1290,16 +1289,14 @@ public class ResourceLimitManagerImpl extends ManagerBase implements ResourceLim
        if (StringUtils.isEmpty(tag)) {
            return _userVmJoinDao.listByAccountServiceOfferingTemplateAndNotInState(accountId, states, null, null);
        }
-        List<ServiceOfferingVO> offerings = serviceOfferingDao.listByHostTag(tag);
-        List<VMTemplateVO> templates = _vmTemplateDao.listByTemplateTag(tag);
+        List<Long> offerings = serviceOfferingDao.listIdsByHostTag(tag);
+        List<Long> templates = _vmTemplateDao.listIdsByTemplateTag(tag);
        if (CollectionUtils.isEmpty(offerings) && CollectionUtils.isEmpty(templates)) {
            return new ArrayList<>();
        }
        return _userVmJoinDao.listByAccountServiceOfferingTemplateAndNotInState(accountId, states,
-                offerings.stream().map(ServiceOfferingVO::getId).collect(Collectors.toList()),
-                templates.stream().map(VMTemplateVO::getId).collect(Collectors.toList())
-        );
+                offerings, templates);
    }
    protected List<UserVmJoinVO> getVmsWithAccount(long accountId) {
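Same ID-only theme as the host DAO changes, seen from the caller: listIdsByHostTag()/listIdsByTemplateTag() let the join DAO receive ID lists directly instead of full entities mapped down to IDs afterwards. A small sketch of the two call patterns, with a stand-in DAO interface rather than the CloudStack one:

```java
import java.util.List;
import java.util.stream.Collectors;

public final class IdsByTagSketch {
    record Offering(long id, String hostTag) { }

    interface OfferingDao {
        List<Offering> listByHostTag(String tag); // old: full rows
        List<Long> listIdsByHostTag(String tag);  // new: ids only, lighter query and no mapping
    }

    // Old call pattern: load entities, then throw away everything but the id.
    static List<Long> idsViaEntities(OfferingDao dao, String tag) {
        return dao.listByHostTag(tag).stream().map(Offering::id).collect(Collectors.toList());
    }

    // New call pattern: the database already returns exactly what is needed.
    static List<Long> idsDirect(OfferingDao dao, String tag) {
        return dao.listIdsByHostTag(tag);
    }
}
```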

Some files were not shown because too many files have changed in this diff.