storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)

Added support for a PowerFlex/ScaleIO (v3.5 onwards) storage pool as primary storage in CloudStack (for the KVM hypervisor) and enabled VM/volume operations on that pool (using pool tags).
More details are in the functional spec (FS) here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169


Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in the host cache for KVM
	=> Changed the scope of the configuration "vm.configdrive.primarypool.enabled" from Global to Zone level
	=> Introduced a new zone-level configuration "vm.configdrive.force.host.cache.use" (default: false) to force the use of the host cache for config drives
	=> Introduced a new zone-level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use the host cache for config drives when the storage pool doesn't support config drives
	=> Added a new parameter "host.cache.location" (default: /var/cache/cloud) in the KVM agent.properties for specifying the host cache path; config drives are created under the "/config" directory on the host cache path
	=> Persist the config drive location and use it when required for any config drive operation (migrate, delete)
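Taken together, these settings decide where a config drive ends up. A minimal sketch of how they could interact, assuming a hypothetical helper (the setting names are from this change; the decision order is illustrative, not the actual orchestration code):

```java
// Hypothetical helper illustrating the interplay of the config drive settings.
// The enum mirrors NetworkElement.Location introduced in this change.
public class ConfigDriveLocationChooser {
    public enum Location { SECONDARY, PRIMARY, HOST }

    // primaryPoolEnabled         -> "vm.configdrive.primarypool.enabled" (now zone scope)
    // forceHostCache             -> "vm.configdrive.force.host.cache.use" (default false)
    // hostCacheOnUnsupportedPool -> "vm.configdrive.use.host.cache.on.unsupported.pool" (default true)
    public static Location choose(boolean primaryPoolEnabled,
                                  boolean forceHostCache,
                                  boolean poolSupportsConfigDrive,
                                  boolean hostCacheOnUnsupportedPool) {
        if (forceHostCache) {
            return Location.HOST;      // host cache always wins when forced
        }
        if (!primaryPoolEnabled) {
            return Location.SECONDARY; // legacy behaviour: secondary storage
        }
        if (poolSupportsConfigDrive) {
            return Location.PRIMARY;   // the pool can host the config drive ISO
        }
        // pool cannot host config drives: fall back to the host cache if allowed
        return hostCacheOnUnsupportedPool ? Location.HOST : Location.SECONDARY;
    }
}
```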

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
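The virtual size of a qcow2 image sits in its fixed header (magic "QFI\xfb", then a 64-bit big-endian size at offset 24), so it can be detected from just the first bytes fetched from the template URL. A sketch under that assumption (the class name is illustrative; here it parses an in-memory header rather than performing the HTTP range read):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Sketch: extract the virtual size from a qcow2 header.
public class Qcow2HeaderReader {
    private static final int QCOW2_MAGIC = 0x514649FB; // 'Q','F','I',0xFB

    public static long virtualSize(byte[] header) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(header));
        if (in.readInt() != QCOW2_MAGIC) {
            throw new IOException("Not a qcow2 image");
        }
        in.readInt();         // version            (offset 4)
        in.readLong();        // backing file offset (offset 8)
        in.readInt();         // backing file size   (offset 16)
        in.readInt();         // cluster bits        (offset 20)
        return in.readLong(); // virtual size in bytes, big-endian (offset 24)
    }
}
```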

- Updated the full deployment destination when preparing the network(s) on VM start

- Propagate the uploaded direct download certificates to newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering the template, size discovery is performed using any available host, picked at random from one of the available zones

- Release the VM resources when a VM is synced to the Stopped state on PowerReportMissing (after the grace period)

- Retry VM deployment/start when the host cannot grant access to volume/template
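A hedged sketch of the retry idea: when granting host access to the volume/template fails (a StorageAccessException in this change), move on to the next candidate host, up to a bounded number of attempts. The loop and its helpers are illustrative, not the actual deployment orchestration:

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative retry loop: try candidate hosts until one can grant access
// to the volume/template, up to a fixed number of attempts.
public class DeployRetrySketch {
    public static String deployWithRetry(List<String> candidateHosts,
                                         Predicate<String> canGrantAccess,
                                         int maxAttempts) {
        int attempts = 0;
        for (String host : candidateHosts) {
            if (attempts++ >= maxAttempts) {
                break;
            }
            if (canGrantAccess.test(host)) {
                return host; // deployment proceeds on this host
            }
            // access could not be granted (StorageAccessException in the real
            // code); fall through and retry on the next candidate host
        }
        throw new RuntimeException("Unable to grant access to volume/template on any host");
    }
}
```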

- Mark never-used or never-downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger a DeleteCommand for such templates, as they don't exist on the datastore and so cannot be deleted from it

- Check whether the router filesystem is writable before performing health checks
	=> Introduced a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are on different partitions, so the test runs at both locations
	=> Added a new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable
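The check itself is simple: attempt to create (and remove) a small probe file in each location. The router script does this in Python; the sketch below shows the same idea in Java, with the class name and probe scheme assumed rather than taken from the commit:

```java
import java.io.File;
import java.io.IOException;

// Sketch of the writable test: try to create and remove a probe file in
// each directory; a read-only partition makes createNewFile() throw.
public class FilesystemWritableCheck {
    public static boolean isWritable(File dir) {
        File probe = new File(dir, ".rw_probe_" + System.nanoTime());
        try {
            boolean created = probe.createNewFile();
            return created && probe.delete();
        } catch (IOException e) {
            return false; // e.g. "Read-only file system"
        }
    }

    // Both health-check locations must pass, since they sit on
    // different partitions on the router.
    public static boolean allWritable(File... dirs) {
        for (File d : dirs) {
            if (!isWritable(d)) {
                return false;
            }
        }
        return true;
    }
}
```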

- Fixed an NPE where the template is null for DATA disks: copy the template to the target storage only for the ROOT disk (which has a template id) and skip the DATA disk(s)

* Addressed some issues in a few operations on the PowerFlex storage pool:

- Updated the migrate volume operation to sync the status and wait for the migration to complete.

- Updated VM snapshot naming so that ScaleIO volume names stay unique when more than one volume exists in the VM.

- Added a sync lock while spooling the managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added a PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients; it uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
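A minimal sketch of the pooling/renewal idea: one cached client per storage pool, re-created (i.e. re-authenticated) once its token is past the validity window. The types and fields are illustrative, not the plugin's actual classes, and the idle timeout is ignored in this sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative connection pool: a single gateway client per storage pool,
// renewed when the session token has expired.
public class GatewayClientPool {
    static class Client {
        final long createdAtMillis;
        Client(long now) { this.createdAtMillis = now; }
        boolean isExpired(long now, long validityMillis) {
            return now - createdAtMillis >= validityMillis;
        }
    }

    // Token valid for up to 8 hours from creation.
    static final long TOKEN_VALIDITY_MILLIS = 8L * 60 * 60 * 1000;

    private final Map<Long, Client> clients = new ConcurrentHashMap<>();

    public synchronized Client getClient(long storagePoolId, long nowMillis) {
        Client c = clients.get(storagePoolId);
        if (c == null || c.isExpired(nowMillis, TOKEN_VALIDITY_MILLIS)) {
            c = new Client(nowMillis); // re-authenticate against the gateway
            clients.put(storagePoolId, c);
        }
        return c;
    }
}
```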

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. its Resource State is not Enabled or its Status is not Up)

- Use the physical file size of the template to check free space availability on the host while downloading direct download templates.
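The point of that check is that the physical (on-disk) size, not the virtual size, is what gets transferred; a small sketch with assumed names:

```java
import java.io.File;

// Sketch: compare the template's physical file size against the usable
// space at the download location; a qcow2's virtual size can far exceed
// the bytes actually transferred, so it would over-reject downloads.
public class FreeSpaceCheck {
    public static boolean hasEnoughSpace(long physicalSizeBytes, long usableSpaceBytes) {
        return usableSpaceBytes >= physicalSizeBytes;
    }

    public static boolean canDownloadTo(File location, long physicalSizeBytes) {
        return hasEnoughSpace(physicalSizeBytes, location.getUsableSpace());
    }
}
```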

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- The PowerFlex pool URL is generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" primary storage
- Updated the protocol to "custom" for the PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

- Minor improvements in the PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- The findStoragePoolsForMigration API returns PowerFlex pool(s) of a different instance as suitable pool(s) for volume(s) on a PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to a different PowerFlex instance.
- Volume(s) of a running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported.
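The restrictions above can be sketched as a single eligibility check (types are simplified booleans; the real checks live in the findStoragePoolsForMigration path and the migration code itself):

```java
// Illustrative eligibility check encoding the migration rules listed above.
public class PowerFlexMigrationRules {
    public static boolean canMigrate(boolean srcIsPowerFlex,
                                     boolean destIsPowerFlex,
                                     boolean sameInstance,
                                     boolean volumeHasSnapshots,
                                     boolean vmRunning) {
        if (srcIsPowerFlex != destIsPowerFlex) {
            return false; // PowerFlex <-> non-PowerFlex is not supported
        }
        if (!srcIsPowerFlex) {
            return true;  // non-PowerFlex pools: outside these rules
        }
        if (!sameInstance && volumeHasSnapshots) {
            return false; // snapshots cannot cross PowerFlex instances
        }
        if (vmRunning) {
            return false; // running-VM volumes stay on their current pool
        }
        return true;
    }
}
```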

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added a new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option when taking a snapshot of volume(s) with storage snapshot support.

* Fixed duplicate zone-wide pools being listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed a PowerFlex/ScaleIO volume name inconsistency in the volume path after migration, caused by a rename failure
sureshanaparti 2021-02-24 14:58:33 +05:30 committed by GitHub
parent 90885730ad
commit eba186aa40
161 changed files with 10232 additions and 588 deletions


@ -143,6 +143,9 @@ hypervisor.type=kvm
# This parameter specifies a directory on the host local storage for temporary storing direct download templates
#direct.download.temporary.download.location=/var/lib/libvirt/images
# This parameter specifies a directory on the host local storage for creating and hosting the config drives
#host.cache.location=/var/cache/cloud
# set the rolling maintenance hook scripts directory
#rolling.maintenance.hooks.dir=/etc/cloudstack/agent/hooks.d


@ -20,6 +20,7 @@ import java.util.List;
import java.util.Map;
import java.util.HashMap;
import com.cloud.network.element.NetworkElement;
import com.cloud.template.VirtualMachineTemplate.BootloaderType;
import com.cloud.vm.VirtualMachine;
import com.cloud.vm.VirtualMachine.Type;
@ -73,6 +74,7 @@ public class VirtualMachineTO {
String configDriveLabel = null;
String configDriveIsoRootFolder = null;
String configDriveIsoFile = null;
NetworkElement.Location configDriveLocation = NetworkElement.Location.SECONDARY;
Double cpuQuotaPercentage = null;
@ -349,6 +351,18 @@ public class VirtualMachineTO {
this.configDriveIsoFile = configDriveIsoFile;
}
public boolean isConfigDriveOnHostCache() {
return (this.configDriveLocation == NetworkElement.Location.HOST);
}
public NetworkElement.Location getConfigDriveLocation() {
return configDriveLocation;
}
public void setConfigDriveLocation(NetworkElement.Location configDriveLocation) {
this.configDriveLocation = configDriveLocation;
}
public Map<String, String> getGuestOsDetails() {
return guestOsDetails;
}


@ -0,0 +1,32 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
/**
 * Thrown when a storage pool is not accessible on the host.
 */
public class StorageAccessException extends RuntimeException {
private static final long serialVersionUID = SerialVersionUID.StorageAccessException;
public StorageAccessException(String message) {
super(message);
}
}


@ -26,6 +26,7 @@ import com.cloud.exception.InsufficientCapacityException;
import com.cloud.exception.ResourceUnavailableException;
import com.cloud.network.router.VirtualRouter;
import com.cloud.user.Account;
import com.cloud.utils.Pair;
public interface VirtualNetworkApplianceService {
/**
@ -73,5 +74,5 @@ public interface VirtualNetworkApplianceService {
* @param routerId id of the router
* @return
*/
boolean performRouterHealthChecks(long routerId);
Pair<Boolean, String> performRouterHealthChecks(long routerId);
}


@ -39,6 +39,10 @@ import com.cloud.vm.VirtualMachineProfile;
*/
public interface NetworkElement extends Adapter {
enum Location {
SECONDARY, PRIMARY, HOST
}
Map<Service, Map<Capability, String>> getCapabilities();
/**


@ -135,6 +135,7 @@ public class Storage {
OCFS2(true, false),
SMB(true, false),
Gluster(true, false),
PowerFlex(true, true), // Dell EMC PowerFlex/ScaleIO (formerly VxFlexOS)
ManagedNFS(true, false),
DatastoreCluster(true, true); // for VMware, to abstract pool of clusters


@ -29,6 +29,11 @@ import com.cloud.utils.fsm.StateMachine2;
import com.cloud.utils.fsm.StateObject;
public interface Volume extends ControlledEntity, Identity, InternalIdentity, BasedOn, StateObject<Volume.State>, Displayable {
// Managed storage volume parameters (specified in the compute/disk offering for PowerFlex)
String BANDWIDTH_LIMIT_IN_MBPS = "bandwidthLimitInMbps";
String IOPS_LIMIT = "iopsLimit";
enum Type {
UNKNOWN, ROOT, SWAP, DATADISK, ISO
};
@ -79,6 +84,7 @@ public interface Volume extends ControlledEntity, Identity, InternalIdentity, Ba
s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Creating, Event.OperationSucceeded, Ready, null));
s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Creating, Event.DestroyRequested, Destroy, null));
s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Creating, Event.CreateRequested, Creating, null));
s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Ready, Event.CreateRequested, Creating, null));
s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Ready, Event.ResizeRequested, Resizing, null));
s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Resizing, Event.OperationSucceeded, Ready, Arrays.asList(new StateMachine2.Transition.Impact[]{StateMachine2.Transition.Impact.USAGE})));
s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Resizing, Event.OperationFailed, Ready, null));


@ -20,7 +20,9 @@ import java.util.List;
import java.util.Map;
import com.cloud.agent.api.to.DiskTO;
import com.cloud.host.Host;
import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.network.element.NetworkElement;
import com.cloud.offering.ServiceOffering;
import com.cloud.template.VirtualMachineTemplate;
import com.cloud.template.VirtualMachineTemplate.BootloaderType;
@ -54,6 +56,10 @@ public interface VirtualMachineProfile {
void setConfigDriveIsoFile(String isoFile);
NetworkElement.Location getConfigDriveLocation();
void setConfigDriveLocation(NetworkElement.Location location);
public static class Param {
public static final Param VmPassword = new Param("VmPassword");
@ -100,6 +106,10 @@ public interface VirtualMachineProfile {
}
}
Long getHostId();
void setHost(Host host);
String getHostName();
String getInstanceName();


@ -56,6 +56,8 @@ public interface VmDetailConstants {
String PASSWORD = "password";
String ENCRYPTED_PASSWORD = "Encrypted.Password";
String CONFIG_DRIVE_LOCATION = "configDriveLocation";
// VM import with nic, disk and custom params for custom compute offering
String NIC = "nic";
String NETWORK = "network";


@ -16,12 +16,12 @@
// under the License.
package org.apache.cloudstack.alert;
import com.cloud.capacity.Capacity;
import com.cloud.exception.InvalidParameterValueException;
import java.util.HashSet;
import java.util.Set;
import com.cloud.capacity.Capacity;
import com.cloud.exception.InvalidParameterValueException;
public interface AlertService {
public static class AlertType {
private static Set<AlertType> defaultAlertTypes = new HashSet<AlertType>();
@ -69,6 +69,7 @@ public interface AlertService {
public static final AlertType ALERT_TYPE_OOBM_AUTH_ERROR = new AlertType((short)29, "ALERT.OOBM.AUTHERROR", true);
public static final AlertType ALERT_TYPE_HA_ACTION = new AlertType((short)30, "ALERT.HA.ACTION", true);
public static final AlertType ALERT_TYPE_CA_CERT = new AlertType((short)31, "ALERT.CA.CERT", true);
public static final AlertType ALERT_TYPE_VM_SNAPSHOT = new AlertType((short)32, "ALERT.VM.SNAPSHOT", true);
public short getType() {
return type;


@ -339,6 +339,7 @@ public class ApiConstants {
public static final String SNAPSHOT_POLICY_ID = "snapshotpolicyid";
public static final String SNAPSHOT_TYPE = "snapshottype";
public static final String SNAPSHOT_QUIESCEVM = "quiescevm";
public static final String SUPPORTS_STORAGE_SNAPSHOT = "supportsstoragesnapshot";
public static final String SOURCE_ZONE_ID = "sourcezoneid";
public static final String START_DATE = "startdate";
public static final String START_ID = "startid";
@ -834,6 +835,8 @@ public class ApiConstants {
public static final String TEMPLATETYPE = "templatetype";
public static final String SOURCETEMPLATEID = "sourcetemplateid";
public static final String POOL_TYPE ="pooltype";
public enum BootType {
UEFI, BIOS;


@ -16,8 +16,11 @@
// under the License.
package org.apache.cloudstack.api.command.admin.offering;
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.cloudstack.api.APICommand;
@ -31,6 +34,7 @@ import org.apache.cloudstack.api.response.DomainResponse;
import org.apache.cloudstack.api.response.VsphereStoragePoliciesResponse;
import org.apache.cloudstack.api.response.ZoneResponse;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.collections.MapUtils;
import org.apache.log4j.Logger;
import com.cloud.offering.DiskOffering;
@ -155,7 +159,10 @@ public class CreateDiskOfferingCmd extends BaseCmd {
@Parameter(name = ApiConstants.STORAGE_POLICY, type = CommandType.UUID, entityType = VsphereStoragePoliciesResponse.class,required = false, description = "Name of the storage policy defined at vCenter, this is applicable only for VMware", since = "4.15")
private Long storagePolicy;
/////////////////////////////////////////////////////
@Parameter(name = ApiConstants.DETAILS, type = CommandType.MAP, description = "details to specify disk offering parameters", since = "4.16")
private Map details;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
@ -277,6 +284,20 @@ public class CreateDiskOfferingCmd extends BaseCmd {
return cacheMode;
}
public Map<String, String> getDetails() {
Map<String, String> detailsMap = new HashMap<>();
if (MapUtils.isNotEmpty(details)) {
Collection<?> props = details.values();
for (Object prop : props) {
HashMap<String, String> detail = (HashMap<String, String>) prop;
for (Map.Entry<String, String> entry: detail.entrySet()) {
detailsMap.put(entry.getKey(),entry.getValue());
}
}
}
return detailsMap;
}
public Long getStoragePolicy() {
return storagePolicy;
}


@ -321,7 +321,15 @@ public class CreateServiceOfferingCmd extends BaseCmd {
Collection<?> props = details.values();
for (Object prop : props) {
HashMap<String, String> detail = (HashMap<String, String>) prop;
detailsMap.put(detail.get("key"), detail.get("value"));
// Compatibility with key and value pairs input from API cmd for details map parameter
if (!Strings.isNullOrEmpty(detail.get("key")) && !Strings.isNullOrEmpty(detail.get("value"))) {
detailsMap.put(detail.get("key"), detail.get("value"));
continue;
}
for (Map.Entry<String, String> entry: detail.entrySet()) {
detailsMap.put(entry.getKey(),entry.getValue());
}
}
}
return detailsMap;


@ -111,7 +111,7 @@ public class GetRouterHealthCheckResultsCmd extends BaseCmd {
setResponseObject(routerResponse);
} catch (CloudRuntimeException ex){
ex.printStackTrace();
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to execute command due to exception: " + ex.getLocalizedMessage());
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to get health check results due to: " + ex.getLocalizedMessage());
}
}
}


@ -310,6 +310,10 @@ public class UserVmResponse extends BaseResponseWithTagInformation implements Co
@Param(description = "Guest vm Boot Type")
private String bootType;
@SerializedName(ApiConstants.POOL_TYPE)
@Param(description = "the pool type of the virtual machine", since = "4.16")
private String poolType;
public UserVmResponse() {
securityGroupList = new LinkedHashSet<SecurityGroupResponse>();
nics = new LinkedHashSet<NicResponse>();
@ -901,4 +905,8 @@ public class UserVmResponse extends BaseResponseWithTagInformation implements Co
public String getBootMode() { return bootMode; }
public void setBootMode(String bootMode) { this.bootMode = bootMode; }
public String getPoolType() { return poolType; }
public void setPoolType(String poolType) { this.poolType = poolType; }
}


@ -248,8 +248,12 @@ public class VolumeResponse extends BaseResponseWithTagInformation implements Co
@Param(description = "need quiesce vm or not when taking snapshot", since = "4.3")
private boolean needQuiescevm;
@SerializedName(ApiConstants.SUPPORTS_STORAGE_SNAPSHOT)
@Param(description = "true if storage snapshot is supported for the volume, false otherwise", since = "4.16")
private boolean supportsStorageSnapshot;
@SerializedName(ApiConstants.PHYSICAL_SIZE)
@Param(description = "the bytes alloaated")
@Param(description = "the bytes allocated")
private Long physicalsize;
@SerializedName(ApiConstants.VIRTUAL_SIZE)
@ -538,6 +542,14 @@ public class VolumeResponse extends BaseResponseWithTagInformation implements Co
return this.needQuiescevm;
}
public void setSupportsStorageSnapshot(boolean supportsStorageSnapshot) {
this.supportsStorageSnapshot = supportsStorageSnapshot;
}
public boolean getSupportsStorageSnapshot() {
return this.supportsStorageSnapshot;
}
public String getIsoId() {
return isoId;
}


@ -16,11 +16,12 @@
// under the License.
package com.cloud.storage;
import com.cloud.storage.Storage.StoragePoolType;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import com.cloud.storage.Storage.StoragePoolType;
public class StorageTest {
@Before
public void setUp() {
@ -37,6 +38,7 @@ public class StorageTest {
Assert.assertFalse(StoragePoolType.LVM.isShared());
Assert.assertTrue(StoragePoolType.CLVM.isShared());
Assert.assertTrue(StoragePoolType.RBD.isShared());
Assert.assertTrue(StoragePoolType.PowerFlex.isShared());
Assert.assertTrue(StoragePoolType.SharedMountPoint.isShared());
Assert.assertTrue(StoragePoolType.VMFS.isShared());
Assert.assertTrue(StoragePoolType.PreSetup.isShared());
@ -59,6 +61,7 @@ public class StorageTest {
Assert.assertFalse(StoragePoolType.LVM.supportsOverProvisioning());
Assert.assertFalse(StoragePoolType.CLVM.supportsOverProvisioning());
Assert.assertTrue(StoragePoolType.RBD.supportsOverProvisioning());
Assert.assertTrue(StoragePoolType.PowerFlex.supportsOverProvisioning());
Assert.assertFalse(StoragePoolType.SharedMountPoint.supportsOverProvisioning());
Assert.assertTrue(StoragePoolType.VMFS.supportsOverProvisioning());
Assert.assertTrue(StoragePoolType.PreSetup.supportsOverProvisioning());


@ -87,6 +87,11 @@
<artifactId>cloud-plugin-storage-volume-datera</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.apache.cloudstack</groupId>
<artifactId>cloud-plugin-storage-volume-scaleio</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.apache.cloudstack</groupId>
<artifactId>cloud-server</artifactId>


@ -0,0 +1,55 @@
//
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
//
package com.cloud.agent.api;
import com.cloud.network.element.NetworkElement;
import com.cloud.utils.exception.ExceptionUtil;
public class HandleConfigDriveIsoAnswer extends Answer {
@LogLevel(LogLevel.Log4jLevel.Off)
private NetworkElement.Location location = NetworkElement.Location.SECONDARY;
public HandleConfigDriveIsoAnswer(final HandleConfigDriveIsoCommand cmd) {
super(cmd);
}
public HandleConfigDriveIsoAnswer(final HandleConfigDriveIsoCommand cmd, final NetworkElement.Location location) {
super(cmd);
this.location = location;
}
public HandleConfigDriveIsoAnswer(final HandleConfigDriveIsoCommand cmd, final NetworkElement.Location location, final String details) {
super(cmd, true, details);
this.location = location;
}
public HandleConfigDriveIsoAnswer(final HandleConfigDriveIsoCommand cmd, final String details) {
super(cmd, false, details);
}
public HandleConfigDriveIsoAnswer(final HandleConfigDriveIsoCommand cmd, final Exception e) {
this(cmd, ExceptionUtil.toString(e));
}
public NetworkElement.Location getConfigDriveLocation() {
return location;
}
}


@ -25,16 +25,19 @@ public class HandleConfigDriveIsoCommand extends Command {
@LogLevel(LogLevel.Log4jLevel.Off)
private String isoData;
private String isoFile;
private boolean create = false;
private DataStoreTO destStore;
private boolean useHostCacheOnUnsupportedPool = false;
private boolean preferHostCache = false;
public HandleConfigDriveIsoCommand(String isoFile, String isoData, DataStoreTO destStore, boolean create) {
public HandleConfigDriveIsoCommand(String isoFile, String isoData, DataStoreTO destStore, boolean useHostCacheOnUnsupportedPool, boolean preferHostCache, boolean create) {
this.isoFile = isoFile;
this.isoData = isoData;
this.destStore = destStore;
this.create = create;
this.useHostCacheOnUnsupportedPool = useHostCacheOnUnsupportedPool;
this.preferHostCache = preferHostCache;
}
@Override
@ -57,4 +60,12 @@ public class HandleConfigDriveIsoCommand extends Command {
public String getIsoFile() {
return isoFile;
}
public boolean isHostCachePreferred() {
return preferHostCache;
}
public boolean getUseHostCacheOnUnsupportedPool() {
return useHostCacheOnUnsupportedPool;
}
}


@ -19,12 +19,14 @@ package com.cloud.agent.api.routing;
public class GetRouterMonitorResultsCommand extends NetworkElementCommand {
private boolean performFreshChecks;
private boolean validateBasicTestsOnly;
protected GetRouterMonitorResultsCommand() {
}
public GetRouterMonitorResultsCommand(boolean performFreshChecks) {
public GetRouterMonitorResultsCommand(boolean performFreshChecks, boolean validateBasicTestsOnly) {
this.performFreshChecks = performFreshChecks;
this.validateBasicTestsOnly = validateBasicTestsOnly;
}
@Override
@ -35,4 +37,8 @@ public class GetRouterMonitorResultsCommand extends NetworkElementCommand {
public boolean shouldPerformFreshChecks() {
return performFreshChecks;
}
public boolean shouldValidateBasicTestsOnly() {
return validateBasicTestsOnly;
}
}


@ -75,4 +75,6 @@ public class VRScripts {
public static final String DIAGNOSTICS = "diagnostics.py";
public static final String RETRIEVE_DIAGNOSTICS = "get_diagnostics_files.py";
public static final String VR_FILE_CLEANUP = "cleanup.sh";
public static final String ROUTER_FILESYSTEM_WRITABLE_CHECK = "filesystem_writable_check.py";
}


@ -44,6 +44,7 @@ import org.apache.cloudstack.diagnostics.DiagnosticsCommand;
import org.apache.cloudstack.diagnostics.PrepareFilesAnswer;
import org.apache.cloudstack.diagnostics.PrepareFilesCommand;
import org.apache.cloudstack.utils.security.KeyStoreUtils;
import org.apache.commons.lang.StringUtils;
import org.apache.log4j.Logger;
import org.joda.time.Duration;
@ -65,6 +66,7 @@ import com.cloud.agent.api.routing.NetworkElementCommand;
import com.cloud.agent.resource.virtualnetwork.facade.AbstractConfigItemFacade;
import com.cloud.utils.ExecutionResult;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.Pair;
import com.cloud.utils.exception.CloudRuntimeException;
/**
@ -310,6 +312,16 @@ public class VirtualRoutingResource {
private GetRouterMonitorResultsAnswer execute(GetRouterMonitorResultsCommand cmd) {
String routerIp = cmd.getAccessDetail(NetworkElementCommand.ROUTER_IP);
Pair<Boolean, String> fileSystemTestResult = checkRouterFileSystem(routerIp);
if (!fileSystemTestResult.first()) {
return new GetRouterMonitorResultsAnswer(cmd, false, null, fileSystemTestResult.second());
}
if (cmd.shouldValidateBasicTestsOnly()) {
// Basic tests (connectivity and file system checks) are already validated
return new GetRouterMonitorResultsAnswer(cmd, true, null, "success");
}
String args = cmd.shouldPerformFreshChecks() ? "true" : "false";
s_logger.info("Fetching health check result for " + routerIp + " and executing fresh checks: " + args);
ExecutionResult result = _vrDeployer.executeInVR(routerIp, VRScripts.ROUTER_MONITOR_RESULTS, args);
@ -327,6 +339,27 @@ public class VirtualRoutingResource {
return parseLinesForHealthChecks(cmd, result.getDetails());
}
private Pair<Boolean, String> checkRouterFileSystem(String routerIp) {
ExecutionResult fileSystemWritableTestResult = _vrDeployer.executeInVR(routerIp, VRScripts.ROUTER_FILESYSTEM_WRITABLE_CHECK, null);
if (fileSystemWritableTestResult.isSuccess()) {
s_logger.debug("Router connectivity and file system writable check passed");
return new Pair<Boolean, String>(true, "success");
}
String resultDetails = fileSystemWritableTestResult.getDetails();
s_logger.warn("File system writable check failed with details: " + resultDetails);
if (StringUtils.isNotBlank(resultDetails)) {
final String readOnlyFileSystemError = "Read-only file system";
if (resultDetails.contains(readOnlyFileSystemError)) {
resultDetails = "Read-only file system";
}
} else {
resultDetails = "No results available";
}
return new Pair<Boolean, String>(false, resultDetails);
}
private GetRouterAlertsAnswer execute(GetRouterAlertsCommand cmd) {
String routerIp = cmd.getAccessDetail(NetworkElementCommand.ROUTER_IP);


@ -99,10 +99,13 @@ public class StorageSubsystemCommandHandlerBase implements StorageSubsystemComma
//copy volume from image cache to primary
return processor.copyVolumeFromImageCacheToPrimary(cmd);
} else if (srcData.getObjectType() == DataObjectType.VOLUME && srcData.getDataStore().getRole() == DataStoreRole.Primary) {
if (destData.getObjectType() == DataObjectType.VOLUME && srcData instanceof VolumeObjectTO && ((VolumeObjectTO)srcData).isDirectDownload()) {
return processor.copyVolumeFromPrimaryToPrimary(cmd);
} else if (destData.getObjectType() == DataObjectType.VOLUME) {
return processor.copyVolumeFromPrimaryToSecondary(cmd);
if (destData.getObjectType() == DataObjectType.VOLUME) {
if ((srcData instanceof VolumeObjectTO && ((VolumeObjectTO)srcData).isDirectDownload()) ||
destData.getDataStore().getRole() == DataStoreRole.Primary) {
return processor.copyVolumeFromPrimaryToPrimary(cmd);
} else {
return processor.copyVolumeFromPrimaryToSecondary(cmd);
}
} else if (destData.getObjectType() == DataObjectType.TEMPLATE) {
return processor.createTemplateFromVolume(cmd);
}


@ -23,14 +23,20 @@ import com.cloud.agent.api.Command;
public class CheckUrlCommand extends Command {
private String format;
private String url;
public String getFormat() {
return format;
}
public String getUrl() {
return url;
}
public CheckUrlCommand(final String url) {
public CheckUrlCommand(final String format,final String url) {
super();
this.format = format;
this.url = url;
}


@ -23,6 +23,9 @@ import java.util.Map;
import org.apache.cloudstack.storage.command.StorageSubSystemCommand;
import org.apache.cloudstack.storage.to.PrimaryDataStoreTO;
import org.apache.cloudstack.storage.to.TemplateObjectTO;
import com.cloud.storage.Storage;
public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
@ -32,6 +35,7 @@ public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
private String url;
private Long templateId;
private TemplateObjectTO destData;
private PrimaryDataStoreTO destPool;
private String checksum;
private Map<String, String> headers;
@ -39,11 +43,12 @@ public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
private Integer soTimeout;
private Integer connectionRequestTimeout;
private Long templateSize;
private boolean iso;
private Storage.ImageFormat format;
protected DirectDownloadCommand (final String url, final Long templateId, final PrimaryDataStoreTO destPool, final String checksum, final Map<String, String> headers, final Integer connectTimeout, final Integer soTimeout, final Integer connectionRequestTimeout) {
this.url = url;
this.templateId = templateId;
this.destData = destData;
this.destPool = destPool;
this.checksum = checksum;
this.headers = headers;
@ -60,6 +65,14 @@ public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
return templateId;
}
public TemplateObjectTO getDestData() {
return destData;
}
public void setDestData(TemplateObjectTO destData) {
this.destData = destData;
}
public PrimaryDataStoreTO getDestPool() {
return destPool;
}
@ -104,12 +117,12 @@ public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
this.templateSize = templateSize;
}
public boolean isIso() {
return iso;
public Storage.ImageFormat getFormat() {
return format;
}
public void setIso(boolean iso) {
this.iso = iso;
public void setFormat(Storage.ImageFormat format) {
this.format = format;
}
@Override
@ -120,4 +133,8 @@ public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
public boolean executeInSequence() {
return false;
}
public int getWaitInMillSeconds() {
return getWait() * 1000;
}
}


@ -19,12 +19,13 @@
package org.apache.cloudstack.storage.to;
import java.util.Map;
import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStore;
import com.cloud.agent.api.to.DataStoreTO;
import com.cloud.storage.DataStoreRole;
import com.cloud.storage.Storage.StoragePoolType;
import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStore;
import java.util.Map;
public class PrimaryDataStoreTO implements DataStoreTO {
public static final String MANAGED = PrimaryDataStore.MANAGED;


@@ -43,6 +43,7 @@ public class VolumeObjectTO implements DataTO {
private String chainInfo;
private Storage.ImageFormat format;
private Storage.ProvisioningType provisioningType;
private Long poolId;
private long id;
private Long deviceId;
@@ -89,6 +90,7 @@ public class VolumeObjectTO implements DataTO {
setId(volume.getId());
format = volume.getFormat();
provisioningType = volume.getProvisioningType();
poolId = volume.getPoolId();
bytesReadRate = volume.getBytesReadRate();
bytesReadRateMax = volume.getBytesReadRateMax();
bytesReadRateMaxLength = volume.getBytesReadRateMaxLength();
@@ -227,6 +229,14 @@ public class VolumeObjectTO implements DataTO {
this.provisioningType = provisioningType;
}
public Long getPoolId(){
return poolId;
}
public void setPoolId(Long poolId){
this.poolId = poolId;
}
@Override
public String toString() {
return new StringBuilder("volumeTO[uuid=").append(uuid).append("|path=").append(path).append("|datastore=").append(dataStore).append("]").toString();


@@ -58,7 +58,13 @@ public interface VirtualMachineManager extends Manager {
"The default label name for the config drive", false);
ConfigKey<Boolean> VmConfigDriveOnPrimaryPool = new ConfigKey<>("Advanced", Boolean.class, "vm.configdrive.primarypool.enabled", "false",
"If config drive need to be created and hosted on primary storage pool. Currently only supported for KVM.", true);
"If config drive need to be created and hosted on primary storage pool. Currently only supported for KVM.", true, ConfigKey.Scope.Zone);
ConfigKey<Boolean> VmConfigDriveUseHostCacheOnUnsupportedPool = new ConfigKey<>("Advanced", Boolean.class, "vm.configdrive.use.host.cache.on.unsupported.pool", "true",
"If true, config drive is created on the host cache storage when vm.configdrive.primarypool.enabled is true and the primary pool type doesn't support config drive.", true, ConfigKey.Scope.Zone);
ConfigKey<Boolean> VmConfigDriveForceHostCacheUse = new ConfigKey<>("Advanced", Boolean.class, "vm.configdrive.force.host.cache.use", "false",
"If true, config drive is forced to create on the host cache storage. Currently only supported for KVM.", true, ConfigKey.Scope.Zone);
ConfigKey<Boolean> ResoureCountRunningVMsonly = new ConfigKey<Boolean>("Advanced", Boolean.class, "resource.count.running.vms.only", "false",
"Count the resources of only running VMs in resource limitation.", true);
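Changing `vm.configdrive.primarypool.enabled` (and adding the two new keys) with `ConfigKey.Scope.Zone` means a per-zone setting, when present, overrides the global default; in CloudStack the lookup is `ConfigKey.valueIn(zoneId)`. A self-contained sketch of that resolution rule, using a simplified stand-in class rather than the real config framework:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for a zone-scoped ConfigKey (illustration only).
class ZoneScopedConfig {
    private final boolean defaultValue;
    private final Map<Long, Boolean> zoneOverrides = new HashMap<>();

    ZoneScopedConfig(boolean defaultValue) { this.defaultValue = defaultValue; }

    void setZoneValue(long zoneId, boolean value) { zoneOverrides.put(zoneId, value); }

    // Mirrors ConfigKey.valueIn(zoneId): the zone override wins, else the default applies.
    boolean valueIn(long zoneId) {
        return zoneOverrides.getOrDefault(zoneId, defaultValue);
    }
}

public class ConfigScopeDemo {
    public static void main(String[] args) {
        // vm.configdrive.force.host.cache.use defaults to false ...
        ZoneScopedConfig forceHostCache = new ZoneScopedConfig(false);
        // ... but an operator can enable it for zone 1 only.
        forceHostCache.setZoneValue(1L, true);
        System.out.println(forceHostCache.valueIn(1L) + " " + forceHostCache.valueIn(2L));
    }
}
```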


@@ -33,6 +33,7 @@ import com.cloud.dc.Pod;
import com.cloud.deploy.DeployDestination;
import com.cloud.exception.ConcurrentOperationException;
import com.cloud.exception.InsufficientStorageCapacityException;
import com.cloud.exception.StorageAccessException;
import com.cloud.exception.StorageUnavailableException;
import com.cloud.host.Host;
import com.cloud.hypervisor.Hypervisor.HypervisorType;
@@ -104,6 +105,8 @@ public interface VolumeOrchestrationService {
void release(VirtualMachineProfile profile);
void release(long vmId, long hostId);
void cleanupVolumes(long vmId) throws ConcurrentOperationException;
void revokeAccess(DataObject dataObject, Host host, DataStore dataStore);
@@ -116,7 +119,7 @@
void prepareForMigration(VirtualMachineProfile vm, DeployDestination dest);
void prepare(VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, InsufficientStorageCapacityException, ConcurrentOperationException;
void prepare(VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, InsufficientStorageCapacityException, ConcurrentOperationException, StorageAccessException;
boolean canVmRestartOnAnotherServer(long vmId);


@@ -25,6 +25,7 @@ import org.apache.cloudstack.storage.command.CommandResult;
import com.cloud.agent.api.to.DataStoreTO;
import com.cloud.agent.api.to.DataTO;
import com.cloud.host.Host;
public interface DataStoreDriver {
Map<String, String> getCapabilities();
@@ -37,7 +38,9 @@ public interface DataStoreDriver {
void deleteAsync(DataStore store, DataObject data, AsyncCompletionCallback<CommandResult> callback);
void copyAsync(DataObject srcdata, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback);
void copyAsync(DataObject srcData, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback);
void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback);
boolean canCopy(DataObject srcData, DataObject destData);


@@ -23,6 +23,7 @@ import org.apache.cloudstack.storage.command.CommandResult;
import com.cloud.host.Host;
import com.cloud.storage.StoragePool;
import com.cloud.utils.Pair;
public interface PrimaryDataStoreDriver extends DataStoreDriver {
enum QualityOfServiceState { MIGRATION, NO_MIGRATION }
@@ -72,4 +73,34 @@ public interface PrimaryDataStoreDriver extends DataStoreDriver {
void revertSnapshot(SnapshotInfo snapshotOnImageStore, SnapshotInfo snapshotOnPrimaryStore, AsyncCompletionCallback<CommandResult> callback);
void handleQualityOfServiceForVolumeMigration(VolumeInfo volumeInfo, QualityOfServiceState qualityOfServiceState);
/**
* intended for managed storage
* returns true if the storage can provide the stats (capacity and used bytes)
*/
boolean canProvideStorageStats();
/**
* intended for managed storage
* returns the total capacity and used size in bytes
*/
Pair<Long, Long> getStorageStats(StoragePool storagePool);
/**
* intended for managed storage
* returns true if the storage can provide the volume stats (physical and virtual size)
*/
boolean canProvideVolumeStats();
/**
* intended for managed storage
* returns the volume's physical and virtual size in bytes
*/
Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId);
/**
* intended for managed storage
* returns true if the host can access the storage pool
*/
boolean canHostAccessStoragePool(Host host, StoragePool pool);
}
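The new `canProvideStorageStats`/`getStorageStats` hooks let a managed-storage driver (such as the PowerFlex one this PR adds) report capacity straight from the backend instead of having CloudStack compute it. A minimal self-contained sketch of that pattern, using simplified stand-in types (`PoolStats`, `ManagedDriverSketch`) rather than the real CloudStack interfaces and `Pair` class:

```java
import java.util.AbstractMap.SimpleEntry;

// Simplified stand-in for stats returned by a storage backend (illustration only).
class PoolStats {
    final long capacityBytes;
    final long usedBytes;
    PoolStats(long capacityBytes, long usedBytes) {
        this.capacityBytes = capacityBytes;
        this.usedBytes = usedBytes;
    }
}

// Hypothetical driver following the new PrimaryDataStoreDriver contract:
// advertise that stats are available, then return a <capacity, used> pair.
class ManagedDriverSketch {
    private final PoolStats backend;
    ManagedDriverSketch(PoolStats backend) { this.backend = backend; }

    boolean canProvideStorageStats() {
        return true; // managed storage: stats come from the array's API
    }

    SimpleEntry<Long, Long> getStorageStats() {
        return new SimpleEntry<>(backend.capacityBytes, backend.usedBytes);
    }
}

public class DriverStatsDemo {
    public static void main(String[] args) {
        // 1 TiB capacity, 1 GiB used (made-up numbers for the demo).
        ManagedDriverSketch driver = new ManagedDriverSketch(new PoolStats(1L << 40, 1L << 30));
        if (driver.canProvideStorageStats()) {
            SimpleEntry<Long, Long> stats = driver.getStorageStats();
            System.out.println(stats.getKey() + " " + stats.getValue());
        }
    }
}
```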


@@ -23,6 +23,8 @@ import java.util.List;
import com.cloud.storage.DataStoreRole;
public interface TemplateDataFactory {
TemplateInfo getTemplate(long templateId);
TemplateInfo getTemplate(long templateId, DataStore store);
TemplateInfo getReadyTemplateOnImageStore(long templateId, Long zoneId);
@@ -39,6 +41,8 @@ public interface TemplateDataFactory {
TemplateInfo getReadyBypassedTemplateOnPrimaryStore(long templateId, Long poolId, Long hostId);
TemplateInfo getReadyBypassedTemplateOnManagedStorage(long templateId, TemplateInfo templateOnPrimary, Long poolId, Long hostId);
boolean isTemplateMarkedForDirectDownload(long templateId);
TemplateInfo getTemplateOnPrimaryStorage(long templateId, DataStore store, String configuration);


@@ -28,6 +28,8 @@ public interface TemplateInfo extends DataObject, VirtualMachineTemplate {
boolean isDirectDownload();
boolean canBeDeletedFromDataStore();
boolean isDeployAsIs();
String getDeployAsIsConfiguration();


@@ -22,6 +22,7 @@ import com.cloud.agent.api.Answer;
import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.offering.DiskOffering.DiskCacheMode;
import com.cloud.storage.MigrationOptions;
import com.cloud.storage.Storage;
import com.cloud.storage.Volume;
import com.cloud.vm.VirtualMachine;
@@ -35,6 +36,8 @@ public interface VolumeInfo extends DataObject, Volume {
HypervisorType getHypervisorType();
Storage.StoragePoolType getStoragePoolType();
Long getLastPoolId();
String getAttachedVmName();


@@ -25,6 +25,7 @@ import org.apache.cloudstack.framework.async.AsyncCallFuture;
import org.apache.cloudstack.storage.command.CommandResult;
import com.cloud.agent.api.to.VirtualMachineTO;
import com.cloud.exception.StorageAccessException;
import com.cloud.host.Host;
import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.offering.DiskOffering;
@@ -62,13 +63,17 @@ public interface VolumeService {
*/
AsyncCallFuture<VolumeApiResult> expungeVolumeAsync(VolumeInfo volume);
void ensureVolumeIsExpungeReady(long volumeId);
boolean cloneVolume(long volumeId, long baseVolId);
AsyncCallFuture<VolumeApiResult> createVolumeFromSnapshot(VolumeInfo volume, DataStore store, SnapshotInfo snapshot);
VolumeEntity getVolumeEntity(long volumeId);
AsyncCallFuture<VolumeApiResult> createManagedStorageVolumeFromTemplateAsync(VolumeInfo volumeInfo, long destDataStoreId, TemplateInfo srcTemplateInfo, long destHostId);
TemplateInfo createManagedStorageTemplate(long srcTemplateId, long destDataStoreId, long destHostId) throws StorageAccessException;
AsyncCallFuture<VolumeApiResult> createManagedStorageVolumeFromTemplateAsync(VolumeInfo volumeInfo, long destDataStoreId, TemplateInfo srcTemplateInfo, long destHostId) throws StorageAccessException;
AsyncCallFuture<VolumeApiResult> createVolumeFromTemplateAsync(VolumeInfo volume, long dataStoreId, TemplateInfo template);


@@ -21,6 +21,9 @@ import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.cloudstack.framework.config.ConfigKey;
import org.apache.cloudstack.framework.config.Configurable;
import com.cloud.agent.api.StartupCommand;
import com.cloud.agent.api.StartupRoutingCommand;
import com.cloud.agent.api.VgpuTypesInfo;
@@ -38,8 +41,6 @@ import com.cloud.host.Status;
import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.resource.ResourceState.Event;
import com.cloud.utils.fsm.NoTransitionException;
import org.apache.cloudstack.framework.config.ConfigKey;
import org.apache.cloudstack.framework.config.Configurable;
/**
* ResourceManager manages how physical resources are organized within the
@@ -204,7 +205,7 @@ public interface ResourceManager extends ResourceService, Configurable {
*/
HashMap<String, HashMap<String, VgpuTypesInfo>> getGPUStatistics(HostVO host);
HostVO findOneRandomRunningHostByHypervisor(HypervisorType type);
HostVO findOneRandomRunningHostByHypervisor(HypervisorType type, Long dcId);
boolean cancelMaintenance(final long hostId);
}


@@ -109,6 +109,24 @@ public interface StorageManager extends StorageService {
ConfigKey.Scope.Cluster,
null);
ConfigKey<Integer> STORAGE_POOL_DISK_WAIT = new ConfigKey<>(Integer.class,
"storage.pool.disk.wait",
"Storage",
"60",
"Timeout (in secs) for the storage pool disk (of managed pool) to become available in the host. Currently only supported for PowerFlex.",
true,
ConfigKey.Scope.StoragePool,
null);
ConfigKey<Integer> STORAGE_POOL_CLIENT_TIMEOUT = new ConfigKey<>(Integer.class,
"storage.pool.client.timeout",
"Storage",
"60",
"Timeout (in secs) for the storage pool client connection timeout (for managed pools). Currently only supported for PowerFlex.",
true,
ConfigKey.Scope.StoragePool,
null);
ConfigKey<Integer> PRIMARY_STORAGE_DOWNLOAD_WAIT = new ConfigKey<Integer>("Storage", Integer.class, "primary.storage.download.wait", "10800",
"In second, timeout for download template to primary storage", false);
@@ -144,6 +162,8 @@ public interface StorageManager extends StorageService {
Pair<Long, Answer> sendToPool(StoragePool pool, long[] hostIdsToTryFirst, List<Long> hostIdsToAvoid, Command cmd) throws StorageUnavailableException;
public Answer getVolumeStats(StoragePool pool, Command cmd);
/**
* Checks if a host has running VMs that are using its local storage pool.
* @return true if local storage is active on the host
@@ -172,6 +192,14 @@ public interface StorageManager extends StorageService {
StoragePoolVO findLocalStorageOnHost(long hostId);
Host findUpAndEnabledHostWithAccessToStoragePools(List<Long> poolIds);
List<StoragePoolHostVO> findStoragePoolsConnectedToHost(long hostId);
boolean canHostAccessStoragePool(Host host, StoragePool pool);
Host getHost(long hostId);
Host updateSecondaryStorage(long secStorageId, String newUrl);
void removeStoragePoolFromCluster(long hostId, String iScsiName, StoragePool storagePool);
@@ -210,7 +238,9 @@ public interface StorageManager extends StorageService {
*/
boolean storagePoolHasEnoughSpace(List<Volume> volume, StoragePool pool, Long clusterId);
boolean storagePoolHasEnoughSpaceForResize(StoragePool pool, long currentSize, long newSiz);
boolean storagePoolHasEnoughSpaceForResize(StoragePool pool, long currentSize, long newSize);
boolean storagePoolCompatibleWithVolumePool(StoragePool pool, Volume volume);
boolean isStoragePoolComplaintWithStoragePolicy(List<Volume> volumes, StoragePool pool) throws StorageUnavailableException;
@@ -218,6 +248,8 @@ public interface StorageManager extends StorageService {
void connectHostToSharedPool(long hostId, long poolId) throws StorageUnavailableException, StorageConflictException;
void disconnectHostFromSharedPool(long hostId, long poolId) throws StorageUnavailableException, StorageConflictException;
void createCapacityEntry(long poolId);
DataStore createLocalStorage(Host host, StoragePoolInfo poolInfo) throws ConnectionException;


@@ -16,6 +16,14 @@
// under the License.
package com.cloud.storage;
import java.util.List;
import javax.inject.Inject;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.commons.collections.CollectionUtils;
import com.cloud.dc.ClusterVO;
import com.cloud.dc.dao.ClusterDao;
import com.cloud.host.HostVO;
@@ -25,13 +33,6 @@ import com.cloud.storage.dao.VolumeDao;
import com.cloud.vm.VMInstanceVO;
import com.cloud.vm.dao.VMInstanceDao;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.commons.collections.CollectionUtils;
import java.util.List;
import javax.inject.Inject;
public class StorageUtil {
@Inject private ClusterDao clusterDao;
@Inject private HostDao hostDao;


@@ -22,7 +22,9 @@ import java.util.List;
import java.util.Map;
import com.cloud.agent.api.to.DiskTO;
import com.cloud.host.Host;
import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.network.element.NetworkElement;
import com.cloud.offering.ServiceOffering;
import com.cloud.service.ServiceOfferingVO;
import com.cloud.template.VirtualMachineTemplate;
@@ -49,6 +51,8 @@ public class VirtualMachineProfileImpl implements VirtualMachineProfile {
Float cpuOvercommitRatio = 1.0f;
Float memoryOvercommitRatio = 1.0f;
Host _host = null;
VirtualMachine.Type _type;
List<String[]> vmData = null;
@@ -57,6 +61,7 @@ public class VirtualMachineProfileImpl implements VirtualMachineProfile {
String configDriveIsoBaseLocation = "/tmp/";
String configDriveIsoRootFolder = null;
String configDriveIsoFile = null;
NetworkElement.Location configDriveLocation = NetworkElement.Location.SECONDARY;
public VirtualMachineProfileImpl(VirtualMachine vm, VirtualMachineTemplate template, ServiceOffering offering, Account owner, Map<Param, Object> params) {
_vm = vm;
@@ -219,6 +224,19 @@ public class VirtualMachineProfileImpl implements VirtualMachineProfile {
return _params.get(name);
}
@Override
public Long getHostId() {
if (_host != null) {
return _host.getId();
}
return _vm.getHostId();
}
@Override
public void setHost(Host host) {
this._host = host;
}
@Override
public String getHostName() {
return _vm.getHostName();
@@ -311,4 +329,14 @@ public class VirtualMachineProfileImpl implements VirtualMachineProfile {
public void setConfigDriveIsoFile(String isoFile) {
this.configDriveIsoFile = isoFile;
}
@Override
public NetworkElement.Location getConfigDriveLocation() {
return configDriveLocation;
}
@Override
public void setConfigDriveLocation(NetworkElement.Location location) {
this.configDriveLocation = location;
}
}


@@ -156,6 +156,7 @@ import com.cloud.exception.InsufficientServerCapacityException;
import com.cloud.exception.InvalidParameterValueException;
import com.cloud.exception.OperationTimedoutException;
import com.cloud.exception.ResourceUnavailableException;
import com.cloud.exception.StorageAccessException;
import com.cloud.exception.StorageUnavailableException;
import com.cloud.ha.HighAvailabilityManager;
import com.cloud.ha.HighAvailabilityManager.WorkType;
@@ -743,12 +744,11 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
} catch (final InsufficientCapacityException e) {
throw new CloudRuntimeException("Unable to start a VM due to insufficient capacity", e).add(VirtualMachine.class, vmUuid);
} catch (final ResourceUnavailableException e) {
if(e.getScope() != null && e.getScope().equals(VirtualRouter.class)){
if (e.getScope() != null && e.getScope().equals(VirtualRouter.class)){
throw new CloudRuntimeException("Network is unavailable. Please contact administrator", e).add(VirtualMachine.class, vmUuid);
}
throw new CloudRuntimeException("Unable to start a VM due to unavailable resources", e).add(VirtualMachine.class, vmUuid);
}
}
protected boolean checkWorkItems(final VMInstanceVO vm, final State state) throws ConcurrentOperationException {
@@ -1036,6 +1036,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
int retry = StartRetry.value();
while (retry-- != 0) { // It's != so that it can match -1.
s_logger.debug("VM start attempt #" + (StartRetry.value() - retry));
if (reuseVolume) {
// edit plan if this vm's ROOT volume is in READY state already
@@ -1115,7 +1116,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
if (dest != null) {
avoids.addHost(dest.getHost().getId());
if (!template.isDeployAsIs()) {
journal.record("Deployment found ", vmProfile, dest);
journal.record("Deployment found - Attempt #" + (StartRetry.value() - retry), vmProfile, dest);
}
}
@@ -1148,7 +1149,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
try {
resetVmNicsDeviceId(vm.getId());
_networkMgr.prepare(vmProfile, new DeployDestination(dest.getDataCenter(), dest.getPod(), null, null, dest.getStorageForDisks(), dest.isDisplayStorage()), ctx);
_networkMgr.prepare(vmProfile, dest, ctx);
if (vm.getHypervisorType() != HypervisorType.BareMetal) {
volumeMgr.prepare(vmProfile, dest);
}
@@ -1305,6 +1306,8 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
} catch (final NoTransitionException e) {
s_logger.error("Failed to start instance " + vm, e);
throw new AgentUnavailableException("Unable to start instance due to " + e.getMessage(), destHostId, e);
} catch (final StorageAccessException e) {
s_logger.warn("Unable to access storage on host", e);
} finally {
if (startedVm == null && canRetry) {
final Step prevStep = work.getStep();
@@ -1632,6 +1635,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
info.put(DiskTO.STORAGE_HOST, storagePool.getHostAddress());
info.put(DiskTO.STORAGE_PORT, String.valueOf(storagePool.getPort()));
info.put(DiskTO.IQN, volume.get_iScsiName());
info.put(DiskTO.PROTOCOL_TYPE, (volume.getPoolType() != null) ? volume.getPoolType().toString() : null);
volumesToDisconnect.add(info);
}
@@ -1762,20 +1766,34 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
}
}
} finally {
try {
_networkMgr.release(profile, cleanUpEvenIfUnableToStop);
s_logger.debug("Successfully released network resources for the vm " + vm);
} catch (final Exception e) {
s_logger.warn("Unable to release some network resources.", e);
}
volumeMgr.release(profile);
s_logger.debug(String.format("Successfully cleaned up resources for the VM %s in %s state", vm, state));
releaseVmResources(profile, cleanUpEvenIfUnableToStop);
}
return true;
}
protected void releaseVmResources(final VirtualMachineProfile profile, final boolean forced) {
final VirtualMachine vm = profile.getVirtualMachine();
final State state = vm.getState();
try {
_networkMgr.release(profile, forced);
s_logger.debug(String.format("Successfully released network resources for the VM %s in %s state", vm, state));
} catch (final Exception e) {
s_logger.warn(String.format("Unable to release some network resources for the VM %s in %s state", vm, state), e);
}
try {
if (vm.getHypervisorType() != HypervisorType.BareMetal) {
volumeMgr.release(profile);
s_logger.debug(String.format("Successfully released storage resources for the VM %s in %s state", vm, state));
}
} catch (final Exception e) {
s_logger.warn(String.format("Unable to release storage resources for the VM %s in %s state", vm, state), e);
}
s_logger.debug(String.format("Successfully cleaned up resources for the VM %s in %s state", vm, state));
}
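The consolidated `releaseVmResources` above wraps each release step in its own try/catch, so a failed network release cannot prevent the storage release from running. A self-contained sketch of that isolation pattern, with hypothetical resource names standing in for the network and volume managers:

```java
import java.util.ArrayList;
import java.util.List;

public class IsolatedCleanupDemo {
    static List<String> released = new ArrayList<>();

    // Simulates a network release that fails.
    static void releaseNetwork() {
        throw new RuntimeException("simulated network release failure");
    }

    // Simulates a storage release that succeeds.
    static void releaseStorage() {
        released.add("storage");
    }

    public static void main(String[] args) {
        // Each step gets its own try/catch, mirroring releaseVmResources:
        // a failure in one step is logged and the remaining steps still run.
        try {
            releaseNetwork();
        } catch (Exception e) {
            System.out.println("warn: " + e.getMessage());
        }
        try {
            releaseStorage();
        } catch (Exception e) {
            System.out.println("warn: " + e.getMessage());
        }
        System.out.println("released=" + released);
    }
}
```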
@Override
public void advanceStop(final String vmUuid, final boolean cleanUpEvenIfUnableToStop)
throws AgentUnavailableException, OperationTimedoutException, ConcurrentOperationException {
@@ -1985,21 +2003,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
s_logger.debug(vm + " is stopped on the host. Proceeding to release resource held.");
}
try {
_networkMgr.release(profile, cleanUpEvenIfUnableToStop);
s_logger.debug("Successfully released network resources for the vm " + vm);
} catch (final Exception e) {
s_logger.warn("Unable to release some network resources.", e);
}
try {
if (vm.getHypervisorType() != HypervisorType.BareMetal) {
volumeMgr.release(profile);
s_logger.debug("Successfully released storage resources for the vm " + vm);
}
} catch (final Exception e) {
s_logger.warn("Unable to release storage resources.", e);
}
releaseVmResources(profile, cleanUpEvenIfUnableToStop);
try {
if (work != null) {
@@ -2603,11 +2607,14 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
}
final VirtualMachineProfile vmSrc = new VirtualMachineProfileImpl(vm);
vmSrc.setHost(fromHost);
for (final NicProfile nic : _networkMgr.getNicProfiles(vm)) {
vmSrc.addNic(nic);
}
final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm, null, _offeringDao.findById(vm.getId(), vm.getServiceOfferingId()), null, null);
profile.setHost(dest.getHost());
_networkMgr.prepareNicForMigration(profile, dest);
volumeMgr.prepareForMigration(profile, dest);
profile.setConfigDriveLabel(VmConfigDriveLabel.value());
@@ -2635,6 +2642,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
} finally {
if (pfma == null) {
_networkMgr.rollbackNicForMigration(vmSrc, profile);
volumeMgr.release(vm.getId(), dstHostId);
work.setStep(Step.Done);
_workDao.update(work.getId(), work);
}
@@ -2644,15 +2652,21 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
try {
if (vm == null || vm.getHostId() == null || vm.getHostId() != srcHostId || !changeState(vm, Event.MigrationRequested, dstHostId, work, Step.Migrating)) {
_networkMgr.rollbackNicForMigration(vmSrc, profile);
if (vm != null) {
volumeMgr.release(vm.getId(), dstHostId);
}
s_logger.info("Migration cancelled because state has changed: " + vm);
throw new ConcurrentOperationException("Migration cancelled because state has changed: " + vm);
}
} catch (final NoTransitionException e1) {
_networkMgr.rollbackNicForMigration(vmSrc, profile);
volumeMgr.release(vm.getId(), dstHostId);
s_logger.info("Migration cancelled because " + e1.getMessage());
throw new ConcurrentOperationException("Migration cancelled because " + e1.getMessage());
} catch (final CloudRuntimeException e2) {
_networkMgr.rollbackNicForMigration(vmSrc, profile);
volumeMgr.release(vm.getId(), dstHostId);
s_logger.info("Migration cancelled because " + e2.getMessage());
work.setStep(Step.Done);
_workDao.update(work.getId(), work);
@@ -2720,6 +2734,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
if (!migrated) {
s_logger.info("Migration was unsuccessful. Cleaning up: " + vm);
_networkMgr.rollbackNicForMigration(vmSrc, profile);
volumeMgr.release(vm.getId(), dstHostId);
_alertMgr.sendAlert(alertType, fromHost.getDataCenterId(), fromHost.getPodId(),
"Unable to migrate vm " + vm.getInstanceName() + " from host " + fromHost.getName() + " in zone " + dest.getDataCenter().getName() + " and pod " +
@@ -2737,6 +2752,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
}
} else {
_networkMgr.commitNicForMigration(vmSrc, profile);
volumeMgr.release(vm.getId(), srcHostId);
_networkMgr.setHypervisorHostname(profile, dest, true);
}
@@ -3026,8 +3042,16 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
final Cluster cluster = _clusterDao.findById(destHost.getClusterId());
final DeployDestination destination = new DeployDestination(dc, pod, cluster, destHost);
final VirtualMachineProfile vmSrc = new VirtualMachineProfileImpl(vm);
vmSrc.setHost(srcHost);
for (final NicProfile nic : _networkMgr.getNicProfiles(vm)) {
vmSrc.addNic(nic);
}
final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm, null, _offeringDao.findById(vm.getId(), vm.getServiceOfferingId()), null, null);
profile.setHost(destHost);
// Create a map of which volume should go in which storage pool.
final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
final Map<Volume, StoragePool> volumeToPoolMap = createMappingVolumeAndStoragePool(profile, destHost, volumeToPool);
// If none of the volumes have to be migrated, fail the call. Administrator needs to make a call for migrating
@@ -3055,7 +3079,6 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
work.setResourceId(destHostId);
work = _workDao.persist(work);
// Put the vm in migrating state.
vm.setLastHostId(srcHostId);
vm.setPodIdToDeployIn(destHost.getPodId());
@@ -3127,6 +3150,9 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
} finally {
if (!migrated) {
s_logger.info("Migration was unsuccessful. Cleaning up: " + vm);
_networkMgr.rollbackNicForMigration(vmSrc, profile);
volumeMgr.release(vm.getId(), destHostId);
_alertMgr.sendAlert(alertType, srcHost.getDataCenterId(), srcHost.getPodId(),
"Unable to migrate vm " + vm.getInstanceName() + " from host " + srcHost.getName() + " in zone " + dc.getName() + " and pod " + dc.getName(),
"Migrate Command failed. Please check logs.");
@@ -3141,6 +3167,8 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
}
_networkMgr.setHypervisorHostname(profile, destination, false);
} else {
_networkMgr.commitNicForMigration(vmSrc, profile);
volumeMgr.release(vm.getId(), srcHostId);
_networkMgr.setHypervisorHostname(profile, destination, true);
}
@@ -3415,7 +3443,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
ResourceUnavailableException {
final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
// if there are active vm snapshots task, state change is not allowed
if(_vmSnapshotMgr.hasActiveVMSnapshotTasks(vm.getId())){
if (_vmSnapshotMgr.hasActiveVMSnapshotTasks(vm.getId())) {
s_logger.error("Unable to reboot VM " + vm + " due to: " + vm.getInstanceName() + " has active VM snapshots tasks");
throw new CloudRuntimeException("Unable to reboot VM " + vm + " due to: " + vm.getInstanceName() + " has active VM snapshots tasks");
}
@@ -4623,11 +4651,11 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
@Override
public ConfigKey<?>[] getConfigKeys() {
return new ConfigKey<?>[] {ClusterDeltaSyncInterval, StartRetry, VmDestroyForcestop, VmOpCancelInterval, VmOpCleanupInterval, VmOpCleanupWait,
VmOpLockStateRetry,
VmOpWaitInterval, ExecuteInSequence, VmJobCheckInterval, VmJobTimeout, VmJobStateReportInterval, VmConfigDriveLabel, VmConfigDriveOnPrimaryPool, HaVmRestartHostUp,
ResoureCountRunningVMsonly, AllowExposeHypervisorHostname, AllowExposeHypervisorHostnameAccountLevel,
VmServiceOfferingMaxCPUCores, VmServiceOfferingMaxRAMSize };
return new ConfigKey<?>[] { ClusterDeltaSyncInterval, StartRetry, VmDestroyForcestop, VmOpCancelInterval, VmOpCleanupInterval, VmOpCleanupWait,
VmOpLockStateRetry, VmOpWaitInterval, ExecuteInSequence, VmJobCheckInterval, VmJobTimeout, VmJobStateReportInterval,
VmConfigDriveLabel, VmConfigDriveOnPrimaryPool, VmConfigDriveForceHostCacheUse, VmConfigDriveUseHostCacheOnUnsupportedPool,
HaVmRestartHostUp, ResoureCountRunningVMsonly, AllowExposeHypervisorHostname, AllowExposeHypervisorHostnameAccountLevel,
VmServiceOfferingMaxCPUCores, VmServiceOfferingMaxRAMSize };
}
public List<StoragePoolAllocator> getStoragePoolAllocators() {
@@ -4777,12 +4805,12 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
String.format("VM %s is at %s and we received a %s report while there is no pending jobs on it"
, vm.getInstanceName(), vm.getState(), vm.getPowerState()));
}
if(vm.isHaEnabled() && vm.getState() == State.Running
if (vm.isHaEnabled() && vm.getState() == State.Running
&& HaVmRestartHostUp.value()
&& vm.getHypervisorType() != HypervisorType.VMware
&& vm.getHypervisorType() != HypervisorType.Hyperv) {
s_logger.info("Detected out-of-band stop of a HA enabled VM " + vm.getInstanceName() + ", will schedule restart");
if(!_haMgr.hasPendingHaWork(vm.getId())) {
if (!_haMgr.hasPendingHaWork(vm.getId())) {
_haMgr.scheduleRestart(vm, true);
} else {
s_logger.info("VM " + vm.getInstanceName() + " already has an pending HA task working on it");
@@ -4791,13 +4819,20 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
}
// not when report is missing
if(PowerState.PowerOff.equals(vm.getPowerState())) {
if (PowerState.PowerOff.equals(vm.getPowerState())) {
final VirtualMachineGuru vmGuru = getVmGuru(vm);
final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
if (!sendStop(vmGuru, profile, true, true)) {
// In case StopCommand fails, don't proceed further
return;
} else {
// Release resources on StopCommand success
releaseVmResources(profile, true);
}
} else if (PowerState.PowerReportMissing.equals(vm.getPowerState())) {
final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
// VM will be sync-ed to Stopped state, release the resources
releaseVmResources(profile, true);
}
try {
@@ -5574,10 +5609,9 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
s_logger.trace(String.format("orchestrating VM start for '%s' %s set to %s", vm.getInstanceName(), VirtualMachineProfile.Param.BootIntoSetup, enterSetup));
}
try{
try {
orchestrateStart(vm.getUuid(), work.getParams(), work.getPlan(), _dpMgr.getDeploymentPlannerByName(work.getDeploymentPlanner()));
}
catch (CloudRuntimeException e){
} catch (CloudRuntimeException e) {
e.printStackTrace();
s_logger.info("Caught CloudRuntimeException, returning job failed " + e);
CloudRuntimeException ex = new CloudRuntimeException("Unable to start VM instance");


@@ -35,8 +35,6 @@ import javax.inject.Inject;
import javax.naming.ConfigurationException;
import com.cloud.agent.api.to.DatadiskTO;
import com.cloud.storage.VolumeDetailVO;
import com.cloud.storage.dao.VMTemplateDetailsDao;
import com.cloud.utils.StringUtils;
import com.cloud.vm.SecondaryStorageVmVO;
import com.cloud.vm.UserVmDetailVO;
@@ -75,6 +73,8 @@ import org.apache.cloudstack.framework.config.ConfigKey;
import org.apache.cloudstack.framework.config.Configurable;
import org.apache.cloudstack.framework.jobs.AsyncJobManager;
import org.apache.cloudstack.framework.jobs.impl.AsyncJobVO;
import org.apache.cloudstack.resourcedetail.DiskOfferingDetailVO;
import org.apache.cloudstack.resourcedetail.dao.DiskOfferingDetailsDao;
import org.apache.cloudstack.storage.command.CommandResult;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
@@ -103,6 +103,7 @@ import com.cloud.event.UsageEventUtils;
import com.cloud.exception.ConcurrentOperationException;
import com.cloud.exception.InsufficientStorageCapacityException;
import com.cloud.exception.InvalidParameterValueException;
import com.cloud.exception.StorageAccessException;
import com.cloud.exception.StorageUnavailableException;
import com.cloud.host.Host;
import com.cloud.host.HostVO;
@ -122,8 +123,10 @@ import com.cloud.storage.VMTemplateStorageResourceAssoc;
import com.cloud.storage.Volume;
import com.cloud.storage.Volume.Type;
import com.cloud.storage.VolumeApiService;
import com.cloud.storage.VolumeDetailVO;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.SnapshotDao;
import com.cloud.storage.dao.VMTemplateDetailsDao;
import com.cloud.storage.dao.VolumeDao;
import com.cloud.storage.dao.VolumeDetailsDao;
import com.cloud.template.TemplateManager;
@ -185,6 +188,8 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
@Inject
protected ResourceLimitService _resourceLimitMgr;
@Inject
DiskOfferingDetailsDao _diskOfferingDetailDao;
@Inject
VolumeDetailsDao _volDetailDao;
@Inject
DataStoreManager dataStoreMgr;
@ -748,6 +753,19 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
vol.setFormat(getSupportedImageFormatForCluster(vm.getHypervisorType()));
vol = _volsDao.persist(vol);
List<VolumeDetailVO> volumeDetailsVO = new ArrayList<VolumeDetailVO>();
DiskOfferingDetailVO bandwidthLimitDetail = _diskOfferingDetailDao.findDetail(offering.getId(), Volume.BANDWIDTH_LIMIT_IN_MBPS);
if (bandwidthLimitDetail != null) {
volumeDetailsVO.add(new VolumeDetailVO(vol.getId(), Volume.BANDWIDTH_LIMIT_IN_MBPS, bandwidthLimitDetail.getValue(), false));
}
DiskOfferingDetailVO iopsLimitDetail = _diskOfferingDetailDao.findDetail(offering.getId(), Volume.IOPS_LIMIT);
if (iopsLimitDetail != null) {
volumeDetailsVO.add(new VolumeDetailVO(vol.getId(), Volume.IOPS_LIMIT, iopsLimitDetail.getValue(), false));
}
if (!volumeDetailsVO.isEmpty()) {
_volDetailDao.saveDetails(volumeDetailsVO);
}
// Save usage event and update resource count for user vm volumes
if (vm.getType() == VirtualMachine.Type.User) {
UsageEventUtils.publishUsageEvent(EventTypes.EVENT_VOLUME_CREATE, vol.getAccountId(), vol.getDataCenterId(), vol.getId(), vol.getName(), offering.getId(), null, size,
@ -801,6 +819,19 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
vol = _volsDao.persist(vol);
List<VolumeDetailVO> volumeDetailsVO = new ArrayList<VolumeDetailVO>();
DiskOfferingDetailVO bandwidthLimitDetail = _diskOfferingDetailDao.findDetail(offering.getId(), Volume.BANDWIDTH_LIMIT_IN_MBPS);
if (bandwidthLimitDetail != null) {
volumeDetailsVO.add(new VolumeDetailVO(vol.getId(), Volume.BANDWIDTH_LIMIT_IN_MBPS, bandwidthLimitDetail.getValue(), false));
}
DiskOfferingDetailVO iopsLimitDetail = _diskOfferingDetailDao.findDetail(offering.getId(), Volume.IOPS_LIMIT);
if (iopsLimitDetail != null) {
volumeDetailsVO.add(new VolumeDetailVO(vol.getId(), Volume.IOPS_LIMIT, iopsLimitDetail.getValue(), false));
}
if (!volumeDetailsVO.isEmpty()) {
_volDetailDao.saveDetails(volumeDetailsVO);
}
if (StringUtils.isNotBlank(configurationId)) {
VolumeDetailVO deployConfigurationDetail = new VolumeDetailVO(vol.getId(), VmDetailConstants.DEPLOY_AS_IS_CONFIGURATION, configurationId, false);
_volDetailDao.persist(deployConfigurationDetail);
@ -1010,8 +1041,39 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
}
@Override
public void release(VirtualMachineProfile vmProfile) {
Long hostId = vmProfile.getVirtualMachine().getHostId();
if (hostId != null) {
revokeAccess(vmProfile.getId(), hostId);
}
}
@Override
public void release(long vmId, long hostId) {
List<VolumeVO> volumesForVm = _volsDao.findUsableVolumesForInstance(vmId);
if (volumesForVm == null || volumesForVm.isEmpty()) {
return;
}
if (s_logger.isDebugEnabled()) {
s_logger.debug("Releasing " + volumesForVm.size() + " volumes for VM: " + vmId + " from host: " + hostId);
}
for (VolumeVO volumeForVm : volumesForVm) {
VolumeInfo volumeInfo = volFactory.getVolume(volumeForVm.getId());
// pool id can be null for the VM's volumes in Allocated state
if (volumeForVm.getPoolId() != null) {
DataStore dataStore = dataStoreMgr.getDataStore(volumeForVm.getPoolId(), DataStoreRole.Primary);
PrimaryDataStore primaryDataStore = (PrimaryDataStore)dataStore;
HostVO host = _hostDao.findById(hostId);
// This might impact other managed storages, revoke access for PowerFlex storage pool only
if (primaryDataStore.isManaged() && primaryDataStore.getPoolType() == Storage.StoragePoolType.PowerFlex) {
volService.revokeAccess(volumeInfo, host, dataStore);
}
}
}
}
@Override
@ -1118,6 +1180,9 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
VolumeApiResult result = future.get();
if (result.isFailed()) {
s_logger.error("Migrate volume failed:" + result.getResult());
if (result.getResult() != null && result.getResult().contains("[UNSUPPORTED]")) {
throw new CloudRuntimeException("Migrate volume failed: " + result.getResult());
}
throw new StorageUnavailableException("Migrate volume failed: " + result.getResult(), destPool.getId());
} else {
// update the volumeId for snapshots on secondary
@ -1243,6 +1308,12 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
disk.setDetails(getDetails(volumeInfo, dataStore));
PrimaryDataStore primaryDataStore = (PrimaryDataStore)dataStore;
// This might impact other managed storages, grant access for PowerFlex storage pool only
if (primaryDataStore.isManaged() && primaryDataStore.getPoolType() == Storage.StoragePoolType.PowerFlex) {
volService.grantAccess(volFactory.getVolume(vol.getId()), dest.getHost(), dataStore);
}
vm.addDisk(disk);
}
@ -1269,6 +1340,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
VolumeVO volume = _volumeDao.findById(volumeInfo.getId());
details.put(DiskTO.PROTOCOL_TYPE, (volume.getPoolType() != null) ? volume.getPoolType().toString() : null);
details.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(storagePool.getId())));
if (volume.getPoolId() != null) {
StoragePoolVO poolVO = _storagePoolDao.findById(volume.getPoolId());
@ -1386,7 +1458,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
return tasks;
}
private Pair<VolumeVO, DataStore> recreateVolume(VolumeVO vol, VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException {
private Pair<VolumeVO, DataStore> recreateVolume(VolumeVO vol, VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, StorageAccessException {
VolumeVO newVol;
boolean recreate = RecreatableSystemVmEnabled.value();
DataStore destPool = null;
@ -1430,19 +1502,28 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
future = volService.createVolumeAsync(volume, destPool);
} else {
TemplateInfo templ = tmplFactory.getReadyTemplateOnImageStore(templateId, dest.getDataCenter().getId());
PrimaryDataStore primaryDataStore = (PrimaryDataStore)destPool;
if (templ == null) {
if (tmplFactory.isTemplateMarkedForDirectDownload(templateId)) {
// Template is marked for direct download bypassing Secondary Storage
if (!primaryDataStore.isManaged()) {
templ = tmplFactory.getReadyBypassedTemplateOnPrimaryStore(templateId, destPool.getId(), dest.getHost().getId());
} else {
s_logger.debug("Direct download template: " + templateId + " on host: " + dest.getHost().getId() + " and copy to the managed storage pool: " + destPool.getId());
templ = volService.createManagedStorageTemplate(templateId, destPool.getId(), dest.getHost().getId());
}
if (templ == null) {
s_logger.debug("Failed to spool direct download template: " + templateId + " for data center " + dest.getDataCenter().getId());
throw new CloudRuntimeException("Failed to spool direct download template: " + templateId + " for data center " + dest.getDataCenter().getId());
}
} else {
s_logger.debug("can't find ready template: " + templateId + " for data center " + dest.getDataCenter().getId());
throw new CloudRuntimeException("can't find ready template: " + templateId + " for data center " + dest.getDataCenter().getId());
}
}
if (primaryDataStore.isManaged()) {
DiskOffering diskOffering = _entityMgr.findById(DiskOffering.class, volume.getDiskOfferingId());
HypervisorType hyperType = vm.getVirtualMachine().getHypervisorType();
@ -1476,11 +1557,17 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
long hostId = vm.getVirtualMachine().getHostId();
Host host = _hostDao.findById(hostId);
try {
volService.grantAccess(volFactory.getVolume(newVol.getId()), host, destPool);
} catch (Exception e) {
throw new StorageAccessException("Unable to grant access to volume: " + newVol.getId() + " on host: " + host.getId());
}
}
newVol = _volsDao.findById(newVol.getId());
break; //break out of template-redeploy retry loop
} catch (StorageAccessException e) {
throw e;
} catch (InterruptedException | ExecutionException e) {
s_logger.error("Unable to create " + newVol, e);
throw new StorageUnavailableException("Unable to create " + newVol + ":" + e.toString(), destPool.getId());
@ -1491,7 +1578,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
}
@Override
public void prepare(VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, InsufficientStorageCapacityException, ConcurrentOperationException {
public void prepare(VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, InsufficientStorageCapacityException, ConcurrentOperationException, StorageAccessException {
if (dest == null) {
if (s_logger.isDebugEnabled()) {
@ -1534,7 +1621,20 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
volService.revokeAccess(volFactory.getVolume(vol.getId()), lastHost, storagePool);
}
try {
volService.grantAccess(volFactory.getVolume(vol.getId()), host, (DataStore)pool);
} catch (Exception e) {
throw new StorageAccessException("Unable to grant access to volume: " + vol.getId() + " on host: " + host.getId());
}
} else {
// This might impact other managed storages, grant access for PowerFlex storage pool only
if (pool.getPoolType() == Storage.StoragePoolType.PowerFlex) {
try {
volService.grantAccess(volFactory.getVolume(vol.getId()), host, (DataStore)pool);
} catch (Exception e) {
throw new StorageAccessException("Unable to grant access to volume: " + vol.getId() + " on host: " + host.getId());
}
}
}
}
} else if (task.type == VolumeTaskType.MIGRATE) {
@ -1847,4 +1947,4 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
}
});
}
}
}

View File

@ -32,6 +32,8 @@ public interface StoragePoolHostDao extends GenericDao<StoragePoolHostVO, Long>
List<StoragePoolHostVO> listByHostStatus(long poolId, Status hostStatus);
List<Long> findHostsConnectedToPools(List<Long> poolIds);
List<Pair<Long, Integer>> getDatacenterStoragePoolHostInfo(long dcId, boolean sharedOnly);
public void deletePrimaryRecordsForHost(long hostId);

View File

@ -21,6 +21,7 @@ import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import org.apache.log4j.Logger;
@ -44,6 +45,8 @@ public class StoragePoolHostDaoImpl extends GenericDaoBase<StoragePoolHostVO, Lo
protected static final String HOST_FOR_POOL_SEARCH = "SELECT * FROM storage_pool_host_ref ph, host h where ph.host_id = h.id and ph.pool_id=? and h.status=? ";
protected static final String HOSTS_FOR_POOLS_SEARCH = "SELECT DISTINCT(ph.host_id) FROM storage_pool_host_ref ph, host h WHERE ph.host_id = h.id AND h.status = 'Up' AND resource_state = 'Enabled' AND ph.pool_id IN (?)";
protected static final String STORAGE_POOL_HOST_INFO = "SELECT p.data_center_id, count(ph.host_id) " + " FROM storage_pool p, storage_pool_host_ref ph "
+ " WHERE p.id = ph.pool_id AND p.data_center_id = ? " + " GROUP by p.data_center_id";
@ -121,6 +124,33 @@ public class StoragePoolHostDaoImpl extends GenericDaoBase<StoragePoolHostVO, Lo
return result;
}
@Override
public List<Long> findHostsConnectedToPools(List<Long> poolIds) {
List<Long> hosts = new ArrayList<Long>();
if (poolIds == null || poolIds.isEmpty()) {
return hosts;
}
String poolIdsInStr = poolIds.stream().map(poolId -> String.valueOf(poolId)).collect(Collectors.joining(",", "(", ")"));
String sql = HOSTS_FOR_POOLS_SEARCH.replace("(?)", poolIdsInStr);
TransactionLegacy txn = TransactionLegacy.currentTxn();
try (PreparedStatement pstmt = txn.prepareStatement(sql)) {
try (ResultSet rs = pstmt.executeQuery()) {
while (rs.next()) {
long hostId = rs.getLong(1); // host_id column
hosts.add(hostId);
}
} catch (SQLException e) {
s_logger.warn("findHostsConnectedToPools:Exception: ", e);
}
} catch (Exception e) {
s_logger.warn("findHostsConnectedToPools:Exception: ", e);
}
return hosts;
}
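The `findHostsConnectedToPools` query above builds its IN-clause by string substitution, because JDBC cannot bind a variable-length list to a single `?` placeholder. A minimal standalone sketch of that joining step (class and method names here are illustrative, not CloudStack API):

```java
import java.util.List;
import java.util.stream.Collectors;

public class InClauseDemo {
    // Render pool ids as "(1,2,3)" for direct substitution into the SQL text,
    // since a single "?" placeholder cannot take a list of values.
    static String toInClause(List<Long> poolIds) {
        return poolIds.stream()
                .map(String::valueOf)
                .collect(Collectors.joining(",", "(", ")"));
    }

    public static void main(String[] args) {
        String template = "SELECT DISTINCT(ph.host_id) FROM storage_pool_host_ref ph WHERE ph.pool_id IN (?)";
        System.out.println(template.replace("(?)", toInClause(List.of(11L, 12L))));
    }
}
```

This is safe only because pool ids are numeric and come from the database, not from user input; free-form values would need proper parameter binding instead.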
@Override
public List<Pair<Long, Integer>> getDatacenterStoragePoolHostInfo(long dcId, boolean sharedOnly) {
ArrayList<Pair<Long, Integer>> l = new ArrayList<Pair<Long, Integer>>();

View File

@ -61,10 +61,10 @@ public class DataMotionServiceImpl implements DataMotionService {
}
if (srcData.getDataStore().getDriver().canCopy(srcData, destData)) {
srcData.getDataStore().getDriver().copyAsync(srcData, destData, destHost, callback);
return;
} else if (destData.getDataStore().getDriver().canCopy(srcData, destData)) {
destData.getDataStore().getDriver().copyAsync(srcData, destData, destHost, callback);
return;
}

View File

@ -53,6 +53,7 @@ import com.cloud.storage.StorageManager;
import com.cloud.storage.StoragePool;
import com.cloud.storage.VMTemplateStoragePoolVO;
import com.cloud.storage.VMTemplateStorageResourceAssoc;
import com.cloud.storage.Volume;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.VMTemplatePoolDao;
import com.cloud.utils.exception.CloudRuntimeException;
@ -195,6 +196,10 @@ public class KvmNonManagedStorageDataMotionStrategy extends StorageSystemDataMot
@Override
protected void copyTemplateToTargetFilesystemStorageIfNeeded(VolumeInfo srcVolumeInfo, StoragePool srcStoragePool, DataStore destDataStore, StoragePool destStoragePool,
Host destHost) {
if (srcVolumeInfo.getVolumeType() != Volume.Type.ROOT || srcVolumeInfo.getTemplateId() == null) {
return;
}
VMTemplateStoragePoolVO sourceVolumeTemplateStoragePoolVO = vmTemplatePoolDao.findByPoolTemplate(destStoragePool.getId(), srcVolumeInfo.getTemplateId(), null);
if (sourceVolumeTemplateStoragePoolVO == null && destStoragePool.getPoolType() == StoragePoolType.Filesystem) {
DataStore sourceTemplateDataStore = dataStoreManagerImpl.getRandomImageStore(srcVolumeInfo.getDataCenterId());

View File

@ -574,6 +574,14 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
}
}
private void verifyFormatWithPoolType(ImageFormat imageFormat, StoragePoolType poolType) {
if (imageFormat != ImageFormat.VHD && imageFormat != ImageFormat.OVA && imageFormat != ImageFormat.QCOW2 &&
!(imageFormat == ImageFormat.RAW && StoragePoolType.PowerFlex == poolType)) {
throw new CloudRuntimeException("Only the following image types are currently supported: " +
ImageFormat.VHD.toString() + ", " + ImageFormat.OVA.toString() + ", " + ImageFormat.QCOW2.toString() + ", and " + ImageFormat.RAW.toString() + "(for PowerFlex)");
}
}
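The relaxed check above admits RAW only when the pool is PowerFlex; everything else keeps the pre-existing VHD/OVA/QCOW2 whitelist. A standalone sketch of that predicate (enums reduced to a few values for illustration):

```java
public class FormatCheckDemo {
    enum ImageFormat { VHD, OVA, QCOW2, RAW, ISO }
    enum StoragePoolType { NetworkFilesystem, PowerFlex }

    // Mirrors verifyFormatWithPoolType: RAW is allowed only on PowerFlex pools,
    // while VHD/OVA/QCOW2 remain allowed everywhere.
    static boolean isSupported(ImageFormat format, StoragePoolType poolType) {
        if (format == ImageFormat.VHD || format == ImageFormat.OVA || format == ImageFormat.QCOW2) {
            return true;
        }
        return format == ImageFormat.RAW && poolType == StoragePoolType.PowerFlex;
    }

    public static void main(String[] args) {
        System.out.println(isSupported(ImageFormat.RAW, StoragePoolType.PowerFlex));
        System.out.println(isSupported(ImageFormat.RAW, StoragePoolType.NetworkFilesystem));
    }
}
```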
private void verifyFormat(ImageFormat imageFormat) {
if (imageFormat != ImageFormat.VHD && imageFormat != ImageFormat.OVA && imageFormat != ImageFormat.QCOW2) {
throw new CloudRuntimeException("Only the following image types are currently supported: " +
@ -585,8 +593,9 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
long volumeId = snapshotInfo.getVolumeId();
VolumeVO volumeVO = _volumeDao.findByIdIncludingRemoved(volumeId);
StoragePoolVO storagePoolVO = _storagePoolDao.findById(volumeVO.getPoolId());
verifyFormatWithPoolType(volumeVO.getFormat(), storagePoolVO.getPoolType());
}
private boolean usingBackendSnapshotFor(SnapshotInfo snapshotInfo) {
@ -735,6 +744,7 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
details.put(DiskTO.MANAGED, Boolean.TRUE.toString());
details.put(DiskTO.IQN, destVolumeInfo.get_iScsiName());
details.put(DiskTO.STORAGE_HOST, destPool.getHostAddress());
details.put(DiskTO.PROTOCOL_TYPE, (destPool.getPoolType() != null) ? destPool.getPoolType().toString() : null);
command.setDestDetails(details);
@ -916,6 +926,11 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
boolean keepGrantedAccess = false;
DataStore srcDataStore = snapshotInfo.getDataStore();
StoragePoolVO storagePoolVO = _storagePoolDao.findById(srcDataStore.getId());
if (HypervisorType.KVM.equals(snapshotInfo.getHypervisorType()) && storagePoolVO.getPoolType() == StoragePoolType.PowerFlex) {
usingBackendSnapshot = false;
}
if (usingBackendSnapshot) {
createVolumeFromSnapshot(snapshotInfo);
@ -1309,7 +1324,13 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
Preconditions.checkArgument(volumeInfo != null, "Passing 'null' to volumeInfo of " +
"handleCreateVolumeFromTemplateBothOnStorageSystem is not supported.");
DataStore dataStore = volumeInfo.getDataStore();
if (dataStore.getRole() == DataStoreRole.Primary) {
StoragePoolVO storagePoolVO = _storagePoolDao.findById(dataStore.getId());
verifyFormatWithPoolType(templateInfo.getFormat(), storagePoolVO.getPoolType());
} else {
verifyFormat(templateInfo.getFormat());
}
HostVO hostVO = null;
@ -1786,6 +1807,11 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
StoragePoolVO destStoragePool = _storagePoolDao.findById(destDataStore.getId());
StoragePoolVO sourceStoragePool = _storagePoolDao.findById(srcVolumeInfo.getPoolId());
// do not initiate migration for the same PowerFlex/ScaleIO pool
if (sourceStoragePool.getId() == destStoragePool.getId() && sourceStoragePool.getPoolType() == Storage.StoragePoolType.PowerFlex) {
continue;
}
if (!shouldMigrateVolume(sourceStoragePool, destHost, destStoragePool)) {
continue;
}
@ -1894,13 +1920,11 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
throw new CloudRuntimeException(errMsg);
}
} catch (Exception ex) {
errMsg = "Copy operation failed in 'StorageSystemDataMotionStrategy.copyAsync': " + ex.getMessage();
LOGGER.error(errMsg, ex);
throw new CloudRuntimeException(errMsg);
} finally {
CopyCmdAnswer copyCmdAnswer = new CopyCmdAnswer(errMsg);
CopyCommandResult result = new CopyCommandResult(null, copyCmdAnswer);
@ -2197,10 +2221,6 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
throw new CloudRuntimeException("Volume with ID " + volumeInfo.getId() + " is not associated with a storage pool.");
}
if (srcStoragePoolVO.isManaged()) {
throw new CloudRuntimeException("Migrating a volume online with KVM from managed storage is not currently supported.");
}
DataStore dataStore = entry.getValue();
StoragePoolVO destStoragePoolVO = _storagePoolDao.findById(dataStore.getId());
@ -2208,6 +2228,10 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
throw new CloudRuntimeException("Destination storage pool with ID " + dataStore.getId() + " was not located.");
}
if (srcStoragePoolVO.isManaged() && srcStoragePoolVO.getId() != destStoragePoolVO.getId()) {
throw new CloudRuntimeException("Migrating a volume online with KVM from managed storage is not currently supported.");
}
if (storageTypeConsistency == null) {
storageTypeConsistency = destStoragePoolVO.isManaged();
} else if (storageTypeConsistency != destStoragePoolVO.isManaged()) {
@ -2301,7 +2325,9 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
CopyCmdAnswer copyCmdAnswer = null;
try {
StoragePoolVO storagePoolVO = _storagePoolDao.findById(volumeInfo.getPoolId());
if (!ImageFormat.QCOW2.equals(volumeInfo.getFormat()) && !(ImageFormat.RAW.equals(volumeInfo.getFormat()) && StoragePoolType.PowerFlex == storagePoolVO.getPoolType())) {
throw new CloudRuntimeException("When using managed storage, you can only create a template from a volume on KVM currently.");
}
@ -2317,7 +2343,7 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
try {
handleQualityOfServiceForVolumeMigration(volumeInfo, PrimaryDataStoreDriver.QualityOfServiceState.MIGRATION);
if (srcVolumeDetached || StoragePoolType.PowerFlex == storagePoolVO.getPoolType()) {
_volumeService.grantAccess(volumeInfo, hostVO, srcDataStore);
}
@ -2349,7 +2375,7 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
throw new CloudRuntimeException(msg + ex.getMessage(), ex);
}
finally {
if (srcVolumeDetached || StoragePoolType.PowerFlex == storagePoolVO.getPoolType()) {
try {
_volumeService.revokeAccess(volumeInfo, hostVO, srcDataStore);
}
@ -2415,6 +2441,8 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
volumeDetails.put(DiskTO.STORAGE_HOST, storagePoolVO.getHostAddress());
volumeDetails.put(DiskTO.STORAGE_PORT, String.valueOf(storagePoolVO.getPort()));
volumeDetails.put(DiskTO.IQN, volumeVO.get_iScsiName());
volumeDetails.put(DiskTO.PROTOCOL_TYPE, (volumeVO.getPoolType() != null) ? volumeVO.getPoolType().toString() : null);
volumeDetails.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(storagePoolVO.getId())));
volumeDetails.put(DiskTO.VOLUME_SIZE, String.valueOf(volumeVO.getSize()));
volumeDetails.put(DiskTO.SCSI_NAA_DEVICE_ID, getVolumeProperty(volumeInfo.getId(), DiskTO.SCSI_NAA_DEVICE_ID));
@ -2442,7 +2470,12 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
long snapshotId = snapshotInfo.getId();
if (storagePoolVO.getPoolType() == StoragePoolType.PowerFlex) {
snapshotDetails.put(DiskTO.IQN, snapshotInfo.getPath());
} else {
snapshotDetails.put(DiskTO.IQN, getSnapshotProperty(snapshotId, DiskTO.IQN));
}
snapshotDetails.put(DiskTO.VOLUME_SIZE, String.valueOf(snapshotInfo.getSize()));
snapshotDetails.put(DiskTO.SCSI_NAA_DEVICE_ID, getSnapshotProperty(snapshotId, DiskTO.SCSI_NAA_DEVICE_ID));

View File

@ -70,6 +70,7 @@ import com.cloud.storage.Storage.ImageFormat;
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.StoragePool;
import com.cloud.storage.VMTemplateStoragePoolVO;
import com.cloud.storage.Volume;
import com.cloud.storage.dao.DiskOfferingDao;
import com.cloud.storage.dao.VMTemplatePoolDao;
import com.cloud.utils.exception.CloudRuntimeException;
@ -327,6 +328,7 @@ public class KvmNonManagedStorageSystemDataMotionTest {
VolumeInfo srcVolumeInfo = Mockito.mock(VolumeInfo.class);
Mockito.when(srcVolumeInfo.getTemplateId()).thenReturn(0l);
Mockito.when(srcVolumeInfo.getVolumeType()).thenReturn(Volume.Type.ROOT);
StoragePool srcStoragePool = Mockito.mock(StoragePool.class);
@ -465,6 +467,8 @@ public class KvmNonManagedStorageSystemDataMotionTest {
@Test(expected = CloudRuntimeException.class)
public void testVerifyLiveMigrationMapForKVMMixedManagedUnmanagedStorage() {
when(pool1.isManaged()).thenReturn(true);
when(pool1.getId()).thenReturn(POOL_1_ID);
when(pool2.getId()).thenReturn(POOL_2_ID);
lenient().when(pool2.isManaged()).thenReturn(false);
kvmNonManagedStorageDataMotionStrategy.verifyLiveMigrationForKVM(migrationMap, host2);
}

View File

@ -44,6 +44,7 @@ import com.cloud.host.HostVO;
import com.cloud.host.dao.HostDao;
import com.cloud.storage.DataStoreRole;
import com.cloud.storage.VMTemplateStoragePoolVO;
import com.cloud.storage.VMTemplateStorageResourceAssoc;
import com.cloud.storage.VMTemplateVO;
import com.cloud.storage.dao.VMTemplateDao;
import com.cloud.storage.dao.VMTemplatePoolDao;
@ -79,6 +80,16 @@ public class TemplateDataFactoryImpl implements TemplateDataFactory {
return null;
}
@Override
public TemplateInfo getTemplate(long templateId) {
VMTemplateVO templ = imageDataDao.findById(templateId);
if (templ != null) {
TemplateObject tmpl = TemplateObject.getTemplate(templ, null, null);
return tmpl;
}
return null;
}
@Override
public TemplateInfo getTemplate(long templateId, DataStore store) {
VMTemplateVO templ = imageDataDao.findById(templateId);
@ -244,6 +255,33 @@ public class TemplateDataFactoryImpl implements TemplateDataFactory {
return this.getTemplate(templateId, store);
}
@Override
public TemplateInfo getReadyBypassedTemplateOnManagedStorage(long templateId, TemplateInfo templateOnPrimary, Long poolId, Long hostId) {
VMTemplateVO templateVO = imageDataDao.findById(templateId);
if (templateVO == null || !templateVO.isDirectDownload()) {
return null;
}
if (poolId == null) {
throw new CloudRuntimeException("No storage pool specified to download template: " + templateId);
}
StoragePoolVO poolVO = primaryDataStoreDao.findById(poolId);
if (poolVO == null || !poolVO.isManaged()) {
return null;
}
VMTemplateStoragePoolVO spoolRef = templatePoolDao.findByPoolTemplate(poolId, templateId, null);
if (spoolRef == null) {
throw new CloudRuntimeException("Template not created on managed storage pool: " + poolId + " to copy the download template: " + templateId);
} else if (spoolRef.getDownloadState() == VMTemplateStorageResourceAssoc.Status.NOT_DOWNLOADED) {
directDownloadManager.downloadTemplate(templateId, poolId, hostId);
}
DataStore store = storeMgr.getDataStore(poolId, DataStoreRole.Primary);
return this.getTemplate(templateId, store);
}
@Override
public boolean isTemplateMarkedForDirectDownload(long templateId) {
VMTemplateVO templateVO = imageDataDao.findById(templateId);

View File

@ -917,7 +917,14 @@ public class TemplateServiceImpl implements TemplateService {
TemplateOpContext<TemplateApiResult> context = new TemplateOpContext<TemplateApiResult>(null, to, future);
AsyncCallbackDispatcher<TemplateServiceImpl, CommandResult> caller = AsyncCallbackDispatcher.create(this);
caller.setCallback(caller.getTarget().deleteTemplateCallback(null, null)).setContext(context);
if (to.canBeDeletedFromDataStore()) {
to.getDataStore().getDriver().deleteAsync(to.getDataStore(), to, caller);
} else {
CommandResult result = new CommandResult();
caller.complete(result);
}
return future;
}

View File

@ -374,6 +374,35 @@ public class TemplateObject implements TemplateInfo {
return this.imageVO.isDirectDownload();
}
@Override
public boolean canBeDeletedFromDataStore() {
Status downloadStatus = Status.UNKNOWN;
int downloadPercent = -1;
if (getDataStore().getRole() == DataStoreRole.Primary) {
VMTemplateStoragePoolVO templatePoolRef = templatePoolDao.findByPoolTemplate(getDataStore().getId(), getId(), null);
if (templatePoolRef != null) {
downloadStatus = templatePoolRef.getDownloadState();
downloadPercent = templatePoolRef.getDownloadPercent();
}
} else if (dataStore.getRole() == DataStoreRole.Image || dataStore.getRole() == DataStoreRole.ImageCache) {
TemplateDataStoreVO templateStoreRef = templateStoreDao.findByStoreTemplate(dataStore.getId(), getId());
if (templateStoreRef != null) {
downloadStatus = templateStoreRef.getDownloadState();
downloadPercent = templateStoreRef.getDownloadPercent();
}
}
// Allow deletion for downloaded templates, but skip it for failed ones.
// Only templates that were never downloaded, or errored out before any bytes landed (no install path), cannot be deleted from the datastore; behavior for templates in other states is unchanged.
if (downloadStatus == null || downloadStatus == Status.NOT_DOWNLOADED || (downloadStatus == Status.DOWNLOAD_ERROR && downloadPercent == 0)) {
s_logger.debug("Template: " + getId() + " cannot be deleted from the store: " + getDataStore().getId());
return false;
}
return true;
}
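The eligibility rule above reduces to: a template record can be deleted from a store unless it was never downloaded, or errored out before any bytes landed. A minimal sketch of that decision (status enum abbreviated; names are illustrative):

```java
public class TemplateDeleteCheck {
    enum Status { UNKNOWN, NOT_DOWNLOADED, DOWNLOAD_IN_PROGRESS, DOWNLOADED, DOWNLOAD_ERROR }

    // Mirrors canBeDeletedFromDataStore: skip deletion for templates that never
    // reached the datastore, since there is no install path to clean up.
    static boolean canBeDeleted(Status status, int downloadPercent) {
        if (status == null || status == Status.NOT_DOWNLOADED) {
            return false;
        }
        return !(status == Status.DOWNLOAD_ERROR && downloadPercent == 0);
    }

    public static void main(String[] args) {
        System.out.println(canBeDeleted(Status.DOWNLOADED, 100));
        System.out.println(canBeDeleted(Status.DOWNLOAD_ERROR, 0));
    }
}
```

A failed download that made partial progress (`DOWNLOAD_ERROR` with a non-zero percent) still goes through deletion, because an install path may exist and should be cleaned up.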
@Override
public boolean isDeployAsIs() {
if (this.imageVO == null) {

View File

@ -50,6 +50,12 @@
<artifactId>cloud-engine-storage-volume</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.apache.cloudstack</groupId>
<artifactId>cloud-plugin-storage-volume-scaleio</artifactId>
<version>${project.version}</version>
<scope>compile</scope>
</dependency>
</dependencies>
<build>
<plugins>

View File

@ -0,0 +1,93 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.storage.snapshot;
import javax.inject.Inject;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.StrategyPriority;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreVO;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.log4j.Logger;
import com.cloud.storage.DataStoreRole;
import com.cloud.storage.Snapshot;
import com.cloud.storage.Storage;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.VolumeDao;
public class ScaleIOSnapshotStrategy extends StorageSystemSnapshotStrategy {
@Inject
private SnapshotDataStoreDao snapshotStoreDao;
@Inject
private PrimaryDataStoreDao primaryDataStoreDao;
@Inject
private VolumeDao volumeDao;
private static final Logger LOG = Logger.getLogger(ScaleIOSnapshotStrategy.class);
@Override
public StrategyPriority canHandle(Snapshot snapshot, SnapshotOperation op) {
long volumeId = snapshot.getVolumeId();
VolumeVO volumeVO = volumeDao.findByIdIncludingRemoved(volumeId);
boolean baseVolumeExists = volumeVO.getRemoved() == null;
if (!baseVolumeExists) {
return StrategyPriority.CANT_HANDLE;
}
if (!isSnapshotStoredOnScaleIOStoragePool(snapshot)) {
return StrategyPriority.CANT_HANDLE;
}
if (SnapshotOperation.REVERT.equals(op)) {
return StrategyPriority.HIGHEST;
}
if (SnapshotOperation.DELETE.equals(op)) {
return StrategyPriority.HIGHEST;
}
return StrategyPriority.CANT_HANDLE;
}
@Override
public boolean revertSnapshot(SnapshotInfo snapshotInfo) {
VolumeInfo volumeInfo = snapshotInfo.getBaseVolume();
Storage.ImageFormat imageFormat = volumeInfo.getFormat();
if (!Storage.ImageFormat.RAW.equals(imageFormat)) {
LOG.error(String.format("Reverting snapshots of image format [%s] is not supported on PowerFlex; only RAW snapshots can be reverted", imageFormat));
return false;
}
executeRevertSnapshot(snapshotInfo, volumeInfo);
return true;
}
protected boolean isSnapshotStoredOnScaleIOStoragePool(Snapshot snapshot) {
SnapshotDataStoreVO snapshotStore = snapshotStoreDao.findBySnapshot(snapshot.getId(), DataStoreRole.Primary);
if (snapshotStore == null) {
return false;
}
StoragePoolVO storagePoolVO = primaryDataStoreDao.findById(snapshotStore.getDataStoreId());
return storagePoolVO != null && storagePoolVO.getPoolType() == Storage.StoragePoolType.PowerFlex;
}
}
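The dispatch rules in `canHandle()` above can be sanity-checked in isolation. The following is a simplified, hypothetical re-implementation of that decision table (the types are stand-ins, not the CloudStack classes): the base volume must still exist, the snapshot must reside on a PowerFlex pool, and only REVERT and DELETE are claimed at HIGHEST priority.

```java
// Standalone sketch of the ScaleIOSnapshotStrategy.canHandle() decision table.
// All types here are hypothetical stand-ins for the CloudStack classes.
public class CanHandleSketch {
    public enum Priority { CANT_HANDLE, DEFAULT, HIGHEST }
    public enum Op { TAKE, REVERT, DELETE }

    // Mirrors the checks in canHandle(): base volume must still exist,
    // the snapshot must live on a PowerFlex pool, and only REVERT/DELETE
    // are claimed at HIGHEST priority.
    public static Priority canHandle(boolean baseVolumeExists, boolean onPowerFlexPool, Op op) {
        if (!baseVolumeExists || !onPowerFlexPool) {
            return Priority.CANT_HANDLE;
        }
        if (op == Op.REVERT || op == Op.DELETE) {
            return Priority.HIGHEST;
        }
        return Priority.CANT_HANDLE;
    }

    public static void main(String[] args) {
        System.out.println(canHandle(true, true, Op.REVERT));  // HIGHEST
        System.out.println(canHandle(true, true, Op.TAKE));    // CANT_HANDLE
        System.out.println(canHandle(false, true, Op.DELETE)); // CANT_HANDLE
    }
}
```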

@@ -16,6 +16,37 @@
// under the License.
package org.apache.cloudstack.storage.snapshot;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Random;
import java.util.UUID;
import javax.inject.Inject;
import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreCapabilities;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
import org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotDataFactory;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotResult;
import org.apache.cloudstack.engine.subsystem.api.storage.StrategyPriority;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeService;
import org.apache.cloudstack.storage.command.SnapshotAndCopyAnswer;
import org.apache.cloudstack.storage.command.SnapshotAndCopyCommand;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreVO;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.log4j.Logger;
import org.springframework.stereotype.Component;
import com.cloud.agent.AgentManager;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.ModifyTargetsCommand;
@@ -38,18 +69,18 @@ import com.cloud.storage.Snapshot;
import com.cloud.storage.SnapshotVO;
import com.cloud.storage.Storage.ImageFormat;
import com.cloud.storage.Volume;
import com.cloud.storage.VolumeDetailVO;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.SnapshotDao;
import com.cloud.storage.dao.SnapshotDetailsDao;
import com.cloud.storage.dao.SnapshotDetailsVO;
import com.cloud.storage.dao.VolumeDao;
import com.cloud.storage.dao.VolumeDetailsDao;
import com.cloud.storage.VolumeDetailVO;
import com.cloud.utils.db.DB;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.fsm.NoTransitionException;
import com.cloud.vm.VirtualMachine;
import com.cloud.vm.VMInstanceVO;
import com.cloud.vm.VirtualMachine;
import com.cloud.vm.dao.VMInstanceDao;
import com.cloud.vm.snapshot.VMSnapshot;
import com.cloud.vm.snapshot.VMSnapshotService;
@@ -57,37 +88,6 @@ import com.cloud.vm.snapshot.VMSnapshotVO;
import com.cloud.vm.snapshot.dao.VMSnapshotDao;
import com.google.common.base.Preconditions;
import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreCapabilities;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
import org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotDataFactory;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotResult;
import org.apache.cloudstack.engine.subsystem.api.storage.StrategyPriority;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeService;
import org.apache.cloudstack.storage.command.SnapshotAndCopyAnswer;
import org.apache.cloudstack.storage.command.SnapshotAndCopyCommand;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreVO;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.log4j.Logger;
import org.springframework.stereotype.Component;
import javax.inject.Inject;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Random;
import java.util.UUID;
@Component
public class StorageSystemSnapshotStrategy extends SnapshotStrategyBase {
private static final Logger s_logger = Logger.getLogger(StorageSystemSnapshotStrategy.class);
@@ -241,15 +241,16 @@ public class StorageSystemSnapshotStrategy extends SnapshotStrategyBase {
}
private boolean isAcceptableRevertFormat(VolumeVO volumeVO) {
return ImageFormat.VHD.equals(volumeVO.getFormat()) || ImageFormat.OVA.equals(volumeVO.getFormat()) || ImageFormat.QCOW2.equals(volumeVO.getFormat());
return ImageFormat.VHD.equals(volumeVO.getFormat()) || ImageFormat.OVA.equals(volumeVO.getFormat())
|| ImageFormat.QCOW2.equals(volumeVO.getFormat()) || ImageFormat.RAW.equals(volumeVO.getFormat());
}
private void verifyFormat(VolumeInfo volumeInfo) {
ImageFormat imageFormat = volumeInfo.getFormat();
if (imageFormat != ImageFormat.VHD && imageFormat != ImageFormat.OVA && imageFormat != ImageFormat.QCOW2) {
if (imageFormat != ImageFormat.VHD && imageFormat != ImageFormat.OVA && imageFormat != ImageFormat.QCOW2 && imageFormat != ImageFormat.RAW) {
throw new CloudRuntimeException("Only the following image types are currently supported: " +
ImageFormat.VHD.toString() + ", " + ImageFormat.OVA.toString() + ", and " + ImageFormat.QCOW2);
ImageFormat.VHD.toString() + ", " + ImageFormat.OVA.toString() + ", " + ImageFormat.QCOW2 + ", and " + ImageFormat.RAW);
}
}
@@ -456,7 +457,7 @@ public class StorageSystemSnapshotStrategy extends SnapshotStrategyBase {
computeClusterSupportsVolumeClone = clusterDao.getSupportsResigning(hostVO.getClusterId());
}
else if (volumeInfo.getFormat() == ImageFormat.OVA || volumeInfo.getFormat() == ImageFormat.QCOW2) {
else if (volumeInfo.getFormat() == ImageFormat.OVA || volumeInfo.getFormat() == ImageFormat.QCOW2 || volumeInfo.getFormat() == ImageFormat.RAW) {
computeClusterSupportsVolumeClone = true;
}
else {
@@ -760,6 +761,7 @@ public class StorageSystemSnapshotStrategy extends SnapshotStrategyBase {
sourceDetails.put(DiskTO.STORAGE_HOST, storagePoolVO.getHostAddress());
sourceDetails.put(DiskTO.STORAGE_PORT, String.valueOf(storagePoolVO.getPort()));
sourceDetails.put(DiskTO.IQN, volumeVO.get_iScsiName());
sourceDetails.put(DiskTO.PROTOCOL_TYPE, (storagePoolVO.getPoolType() != null) ? storagePoolVO.getPoolType().toString() : null);
ChapInfo chapInfo = volService.getChapInfo(volumeInfo, volumeInfo.getDataStore());
@@ -778,6 +780,7 @@ public class StorageSystemSnapshotStrategy extends SnapshotStrategyBase {
destDetails.put(DiskTO.STORAGE_HOST, storagePoolVO.getHostAddress());
destDetails.put(DiskTO.STORAGE_PORT, String.valueOf(storagePoolVO.getPort()));
destDetails.put(DiskTO.PROTOCOL_TYPE, (storagePoolVO.getPoolType() != null) ? storagePoolVO.getPoolType().toString() : null);
long snapshotId = snapshotInfo.getId();

@@ -0,0 +1,487 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.cloudstack.storage.vmsnapshot;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.inject.Inject;
import javax.naming.ConfigurationException;
import org.apache.cloudstack.engine.subsystem.api.storage.StrategyPriority;
import org.apache.cloudstack.engine.subsystem.api.storage.VMSnapshotStrategy;
import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
import org.apache.cloudstack.storage.datastore.api.SnapshotGroup;
import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClient;
import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClientConnectionPool;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
import org.apache.cloudstack.storage.to.VolumeObjectTO;
import org.apache.log4j.Logger;
import com.cloud.agent.api.VMSnapshotTO;
import com.cloud.alert.AlertManager;
import com.cloud.event.EventTypes;
import com.cloud.event.UsageEventUtils;
import com.cloud.event.UsageEventVO;
import com.cloud.server.ManagementServerImpl;
import com.cloud.storage.DiskOfferingVO;
import com.cloud.storage.Storage;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.DiskOfferingDao;
import com.cloud.storage.dao.VolumeDao;
import com.cloud.uservm.UserVm;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.component.ManagerBase;
import com.cloud.utils.db.DB;
import com.cloud.utils.db.Transaction;
import com.cloud.utils.db.TransactionCallbackWithExceptionNoReturn;
import com.cloud.utils.db.TransactionStatus;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.fsm.NoTransitionException;
import com.cloud.vm.UserVmVO;
import com.cloud.vm.dao.UserVmDao;
import com.cloud.vm.snapshot.VMSnapshot;
import com.cloud.vm.snapshot.VMSnapshotDetailsVO;
import com.cloud.vm.snapshot.VMSnapshotVO;
import com.cloud.vm.snapshot.dao.VMSnapshotDao;
import com.cloud.vm.snapshot.dao.VMSnapshotDetailsDao;
public class ScaleIOVMSnapshotStrategy extends ManagerBase implements VMSnapshotStrategy {
private static final Logger LOGGER = Logger.getLogger(ScaleIOVMSnapshotStrategy.class);
@Inject
VMSnapshotHelper vmSnapshotHelper;
@Inject
UserVmDao userVmDao;
@Inject
VMSnapshotDao vmSnapshotDao;
@Inject
protected VMSnapshotDetailsDao vmSnapshotDetailsDao;
int _wait;
@Inject
ConfigurationDao configurationDao;
@Inject
VolumeDao volumeDao;
@Inject
DiskOfferingDao diskOfferingDao;
@Inject
PrimaryDataStoreDao storagePoolDao;
@Inject
StoragePoolDetailsDao storagePoolDetailsDao;
@Inject
AlertManager alertManager;
@Override
public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
String value = configurationDao.getValue("vmsnapshot.create.wait");
_wait = NumbersUtil.parseInt(value, 1800);
return true;
}
@Override
public StrategyPriority canHandle(VMSnapshot vmSnapshot) {
List<VolumeObjectTO> volumeTOs = vmSnapshotHelper.getVolumeTOList(vmSnapshot.getVmId());
if (volumeTOs == null) {
throw new CloudRuntimeException("Failed to get the volumes for the vm snapshot: " + vmSnapshot.getUuid());
}
if (!volumeTOs.isEmpty()) {
for (VolumeObjectTO volumeTO: volumeTOs) {
Long poolId = volumeTO.getPoolId();
Storage.StoragePoolType poolType = vmSnapshotHelper.getStoragePoolType(poolId);
if (poolType != Storage.StoragePoolType.PowerFlex) {
return StrategyPriority.CANT_HANDLE;
}
}
}
return StrategyPriority.HIGHEST;
}
@Override
public VMSnapshot takeVMSnapshot(VMSnapshot vmSnapshot) {
UserVm userVm = userVmDao.findById(vmSnapshot.getVmId());
VMSnapshotVO vmSnapshotVO = (VMSnapshotVO)vmSnapshot;
try {
vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshotVO, VMSnapshot.Event.CreateRequested);
} catch (NoTransitionException e) {
throw new CloudRuntimeException(e.getMessage());
}
boolean result = false;
try {
Map<String, String> srcVolumeDestSnapshotMap = new HashMap<>();
List<VolumeObjectTO> volumeTOs = vmSnapshotHelper.getVolumeTOList(userVm.getId());
final Long storagePoolId = vmSnapshotHelper.getStoragePoolForVM(userVm.getId());
StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId);
long prev_chain_size = 0;
long virtual_size = 0;
for (VolumeObjectTO volume : volumeTOs) {
String volumeSnapshotName = String.format("%s-%s-%s-%s-%s", ScaleIOUtil.VMSNAPSHOT_PREFIX, vmSnapshotVO.getId(), volume.getId(),
storagePool.getUuid().split("-")[0].substring(4), ManagementServerImpl.customCsIdentifier.value());
srcVolumeDestSnapshotMap.put(ScaleIOUtil.getVolumePath(volume.getPath()), volumeSnapshotName);
virtual_size += volume.getSize();
VolumeVO volumeVO = volumeDao.findById(volume.getId());
prev_chain_size += volumeVO.getVmSnapshotChainSize() == null ? 0 : volumeVO.getVmSnapshotChainSize();
}
VMSnapshotTO current = null;
VMSnapshotVO currentSnapshot = vmSnapshotDao.findCurrentSnapshotByVmId(userVm.getId());
if (currentSnapshot != null) {
current = vmSnapshotHelper.getSnapshotWithParents(currentSnapshot);
}
if (current == null) {
vmSnapshotVO.setParent(null);
} else {
vmSnapshotVO.setParent(current.getId());
}
try {
final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
SnapshotGroup snapshotGroup = client.takeSnapshot(srcVolumeDestSnapshotMap);
if (snapshotGroup == null) {
throw new CloudRuntimeException("Failed to take VM snapshot on PowerFlex storage pool");
}
String snapshotGroupId = snapshotGroup.getSnapshotGroupId();
List<String> volumeIds = snapshotGroup.getVolumeIds();
if (volumeIds != null && !volumeIds.isEmpty()) {
List<VMSnapshotDetailsVO> vmSnapshotDetails = new ArrayList<VMSnapshotDetailsVO>();
vmSnapshotDetails.add(new VMSnapshotDetailsVO(vmSnapshot.getId(), "SnapshotGroupId", snapshotGroupId, false));
for (int index = 0; index < volumeIds.size(); index++) {
String volumeSnapshotName = srcVolumeDestSnapshotMap.get(ScaleIOUtil.getVolumePath(volumeTOs.get(index).getPath()));
String pathWithScaleIOVolumeName = ScaleIOUtil.updatedPathWithVolumeName(volumeIds.get(index), volumeSnapshotName);
vmSnapshotDetails.add(new VMSnapshotDetailsVO(vmSnapshot.getId(), "Vol_" + volumeTOs.get(index).getId() + "_Snapshot", pathWithScaleIOVolumeName, false));
}
vmSnapshotDetailsDao.saveDetails(vmSnapshotDetails);
}
finalizeCreate(vmSnapshotVO, volumeTOs);
result = true;
LOGGER.debug("Create vm snapshot " + vmSnapshot.getName() + " succeeded for vm: " + userVm.getInstanceName());
long new_chain_size = 0;
for (VolumeObjectTO volumeTo : volumeTOs) {
publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_CREATE, vmSnapshot, userVm, volumeTo);
new_chain_size += volumeTo.getSize();
}
publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_ON_PRIMARY, vmSnapshot, userVm, new_chain_size - prev_chain_size, virtual_size);
return vmSnapshot;
} catch (Exception e) {
String errMsg = "Unable to take vm snapshot due to: " + e.getMessage();
LOGGER.warn(errMsg, e);
throw new CloudRuntimeException(errMsg);
}
} finally {
if (!result) {
try {
vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.OperationFailed);
String subject = "Take snapshot failed for VM: " + userVm.getDisplayName();
String message = "Snapshot operation failed for VM: " + userVm.getDisplayName() + "; please check and delete any stale volumes created for VM snapshot id: " + vmSnapshot.getId();
alertManager.sendAlert(AlertManager.AlertType.ALERT_TYPE_VM_SNAPSHOT, userVm.getDataCenterId(), userVm.getPodIdToDeployIn(), subject, message);
} catch (NoTransitionException e1) {
LOGGER.error("Cannot set vm snapshot state due to: " + e1.getMessage());
}
}
}
}
@DB
protected void finalizeCreate(VMSnapshotVO vmSnapshot, List<VolumeObjectTO> volumeTOs) {
try {
Transaction.execute(new TransactionCallbackWithExceptionNoReturn<NoTransitionException>() {
@Override
public void doInTransactionWithoutResult(TransactionStatus status) throws NoTransitionException {
// update chain size for the volumes in the VM snapshot
for (VolumeObjectTO volume : volumeTOs) {
VolumeVO volumeVO = volumeDao.findById(volume.getId());
if (volumeVO != null) {
long vmSnapshotChainSize = volumeVO.getVmSnapshotChainSize() == null ? 0 : volumeVO.getVmSnapshotChainSize();
vmSnapshotChainSize += volumeVO.getSize();
volumeVO.setVmSnapshotChainSize(vmSnapshotChainSize);
volumeDao.persist(volumeVO);
}
}
vmSnapshot.setCurrent(true);
// change current snapshot
if (vmSnapshot.getParent() != null) {
VMSnapshotVO previousCurrent = vmSnapshotDao.findById(vmSnapshot.getParent());
previousCurrent.setCurrent(false);
vmSnapshotDao.persist(previousCurrent);
}
vmSnapshotDao.persist(vmSnapshot);
vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.OperationSucceeded);
}
});
} catch (Exception e) {
String errMsg = "Error while finalizing creation of vm snapshot: " + vmSnapshot.getName() + " due to " + e.getMessage();
LOGGER.error(errMsg, e);
throw new CloudRuntimeException(errMsg);
}
}
@Override
public boolean revertVMSnapshot(VMSnapshot vmSnapshot) {
VMSnapshotVO vmSnapshotVO = (VMSnapshotVO)vmSnapshot;
UserVmVO userVm = userVmDao.findById(vmSnapshot.getVmId());
try {
vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshotVO, VMSnapshot.Event.RevertRequested);
} catch (NoTransitionException e) {
throw new CloudRuntimeException(e.getMessage());
}
boolean result = false;
try {
List<VolumeObjectTO> volumeTOs = vmSnapshotHelper.getVolumeTOList(userVm.getId());
Long storagePoolId = vmSnapshotHelper.getStoragePoolForVM(userVm.getId());
Map<String, String> srcSnapshotDestVolumeMap = new HashMap<>();
for (VolumeObjectTO volume : volumeTOs) {
VMSnapshotDetailsVO vmSnapshotDetail = vmSnapshotDetailsDao.findDetail(vmSnapshotVO.getId(), "Vol_" + volume.getId() + "_Snapshot");
String srcSnapshotVolumeId = ScaleIOUtil.getVolumePath(vmSnapshotDetail.getValue());
String destVolumeId = ScaleIOUtil.getVolumePath(volume.getPath());
srcSnapshotDestVolumeMap.put(srcSnapshotVolumeId, destVolumeId);
}
String systemId = storagePoolDetailsDao.findDetail(storagePoolId, ScaleIOGatewayClient.STORAGE_POOL_SYSTEM_ID).getValue();
if (systemId == null) {
throw new CloudRuntimeException("Failed to get the system id for PowerFlex storage pool for reverting VM snapshot: " + vmSnapshot.getName());
}
final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
result = client.revertSnapshot(systemId, srcSnapshotDestVolumeMap);
if (!result) {
throw new CloudRuntimeException("Failed to revert VM snapshot on PowerFlex storage pool");
}
finalizeRevert(vmSnapshotVO, volumeTOs);
result = true;
} catch (Exception e) {
String errMsg = "Revert VM: " + userVm.getInstanceName() + " to snapshot: " + vmSnapshotVO.getName() + " failed due to " + e.getMessage();
LOGGER.error(errMsg, e);
throw new CloudRuntimeException(errMsg);
} finally {
if (!result) {
try {
vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.OperationFailed);
} catch (NoTransitionException e1) {
LOGGER.error("Cannot set vm snapshot state due to: " + e1.getMessage());
}
}
}
return result;
}
@DB
protected void finalizeRevert(VMSnapshotVO vmSnapshot, List<VolumeObjectTO> volumeToList) {
try {
Transaction.execute(new TransactionCallbackWithExceptionNoReturn<NoTransitionException>() {
@Override
public void doInTransactionWithoutResult(TransactionStatus status) throws NoTransitionException {
// update chain size for the volumes in the VM snapshot
for (VolumeObjectTO volume : volumeToList) {
VolumeVO volumeVO = volumeDao.findById(volume.getId());
if (volumeVO != null && volumeVO.getVmSnapshotChainSize() != null && volumeVO.getVmSnapshotChainSize() >= volumeVO.getSize()) {
long vmSnapshotChainSize = volumeVO.getVmSnapshotChainSize() - volumeVO.getSize();
volumeVO.setVmSnapshotChainSize(vmSnapshotChainSize);
volumeDao.persist(volumeVO);
}
}
// update current snapshot, current snapshot is the one reverted to
VMSnapshotVO previousCurrent = vmSnapshotDao.findCurrentSnapshotByVmId(vmSnapshot.getVmId());
if (previousCurrent != null) {
previousCurrent.setCurrent(false);
vmSnapshotDao.persist(previousCurrent);
}
vmSnapshot.setCurrent(true);
vmSnapshotDao.persist(vmSnapshot);
vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.OperationSucceeded);
}
});
} catch (Exception e) {
String errMsg = "Error while finalizing revert of vm snapshot: " + vmSnapshot.getName() + " due to " + e.getMessage();
LOGGER.error(errMsg, e);
throw new CloudRuntimeException(errMsg);
}
}
@Override
public boolean deleteVMSnapshot(VMSnapshot vmSnapshot) {
UserVmVO userVm = userVmDao.findById(vmSnapshot.getVmId());
VMSnapshotVO vmSnapshotVO = (VMSnapshotVO)vmSnapshot;
try {
vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.ExpungeRequested);
} catch (NoTransitionException e) {
LOGGER.debug("Failed to change vm snapshot state with event ExpungeRequested");
throw new CloudRuntimeException("Failed to change vm snapshot state with event ExpungeRequested: " + e.getMessage());
}
try {
List<VolumeObjectTO> volumeTOs = vmSnapshotHelper.getVolumeTOList(vmSnapshot.getVmId());
Long storagePoolId = vmSnapshotHelper.getStoragePoolForVM(userVm.getId());
String systemId = storagePoolDetailsDao.findDetail(storagePoolId, ScaleIOGatewayClient.STORAGE_POOL_SYSTEM_ID).getValue();
if (systemId == null) {
throw new CloudRuntimeException("Failed to get the system id for PowerFlex storage pool for deleting VM snapshot: " + vmSnapshot.getName());
}
VMSnapshotDetailsVO vmSnapshotDetailsVO = vmSnapshotDetailsDao.findDetail(vmSnapshot.getId(), "SnapshotGroupId");
if (vmSnapshotDetailsVO == null) {
throw new CloudRuntimeException("Failed to get snapshot group id for the VM snapshot: " + vmSnapshot.getName());
}
String snapshotGroupId = vmSnapshotDetailsVO.getValue();
final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
int volumesDeleted = client.deleteSnapshotGroup(systemId, snapshotGroupId);
if (volumesDeleted <= 0) {
throw new CloudRuntimeException("Failed to delete VM snapshot: " + vmSnapshot.getName());
} else if (volumesDeleted != volumeTOs.size()) {
LOGGER.warn("Unable to delete all volumes of the VM snapshot: " + vmSnapshot.getName());
}
finalizeDelete(vmSnapshotVO, volumeTOs);
long full_chain_size = 0;
for (VolumeObjectTO volumeTo : volumeTOs) {
publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_DELETE, vmSnapshot, userVm, volumeTo);
full_chain_size += volumeTo.getSize();
}
publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_OFF_PRIMARY, vmSnapshot, userVm, full_chain_size, 0L);
return true;
} catch (Exception e) {
String errMsg = "Unable to delete vm snapshot: " + vmSnapshot.getName() + " of vm " + userVm.getInstanceName() + " due to " + e.getMessage();
LOGGER.warn(errMsg, e);
throw new CloudRuntimeException(errMsg);
}
}
@DB
protected void finalizeDelete(VMSnapshotVO vmSnapshot, List<VolumeObjectTO> volumeTOs) {
try {
Transaction.execute(new TransactionCallbackWithExceptionNoReturn<NoTransitionException>() {
@Override
public void doInTransactionWithoutResult(TransactionStatus status) throws NoTransitionException {
// update chain size for the volumes in the VM snapshot
for (VolumeObjectTO volume : volumeTOs) {
VolumeVO volumeVO = volumeDao.findById(volume.getId());
if (volumeVO != null && volumeVO.getVmSnapshotChainSize() != null && volumeVO.getVmSnapshotChainSize() >= volumeVO.getSize()) {
long vmSnapshotChainSize = volumeVO.getVmSnapshotChainSize() - volumeVO.getSize();
volumeVO.setVmSnapshotChainSize(vmSnapshotChainSize);
volumeDao.persist(volumeVO);
}
}
// update children's parent snapshots
List<VMSnapshotVO> children = vmSnapshotDao.listByParent(vmSnapshot.getId());
for (VMSnapshotVO child : children) {
child.setParent(vmSnapshot.getParent());
vmSnapshotDao.persist(child);
}
// update current snapshot
VMSnapshotVO current = vmSnapshotDao.findCurrentSnapshotByVmId(vmSnapshot.getVmId());
if (current != null && current.getId() == vmSnapshot.getId() && vmSnapshot.getParent() != null) {
VMSnapshotVO parent = vmSnapshotDao.findById(vmSnapshot.getParent());
parent.setCurrent(true);
vmSnapshotDao.persist(parent);
}
vmSnapshot.setCurrent(false);
vmSnapshotDao.persist(vmSnapshot);
vmSnapshotDao.remove(vmSnapshot.getId());
}
});
} catch (Exception e) {
String errMsg = "Error while finalizing deletion of vm snapshot: " + vmSnapshot.getName() + " due to " + e.getMessage();
LOGGER.error(errMsg, e);
throw new CloudRuntimeException(errMsg);
}
}
@Override
public boolean deleteVMSnapshotFromDB(VMSnapshot vmSnapshot, boolean unmanage) {
try {
vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.ExpungeRequested);
} catch (NoTransitionException e) {
LOGGER.debug("Failed to change vm snapshot state with event ExpungeRequested");
throw new CloudRuntimeException("Failed to change vm snapshot state with event ExpungeRequested: " + e.getMessage());
}
UserVm userVm = userVmDao.findById(vmSnapshot.getVmId());
List<VolumeObjectTO> volumeTOs = vmSnapshotHelper.getVolumeTOList(userVm.getId());
long full_chain_size = 0;
for (VolumeObjectTO volumeTo: volumeTOs) {
volumeTo.setSize(0);
publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_DELETE, vmSnapshot, userVm, volumeTo);
full_chain_size += volumeTo.getSize();
}
if (unmanage) {
publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_OFF_PRIMARY, vmSnapshot, userVm, full_chain_size, 0L);
}
return vmSnapshotDao.remove(vmSnapshot.getId());
}
private void publishUsageEvent(String type, VMSnapshot vmSnapshot, UserVm userVm, VolumeObjectTO volumeTo) {
VolumeVO volume = volumeDao.findById(volumeTo.getId());
Long diskOfferingId = volume.getDiskOfferingId();
Long offeringId = null;
if (diskOfferingId != null) {
DiskOfferingVO offering = diskOfferingDao.findById(diskOfferingId);
if (offering != null && (offering.getType() == DiskOfferingVO.Type.Disk)) {
offeringId = offering.getId();
}
}
Map<String, String> details = new HashMap<>();
if (vmSnapshot != null) {
details.put(UsageEventVO.DynamicParameters.vmSnapshotId.name(), String.valueOf(vmSnapshot.getId()));
}
UsageEventUtils.publishUsageEvent(type, vmSnapshot.getAccountId(), userVm.getDataCenterId(), userVm.getId(), vmSnapshot.getName(), offeringId, volume.getId(), // save volume's id into templateId field
volumeTo.getSize(), VMSnapshot.class.getName(), vmSnapshot.getUuid(), details);
}
private void publishUsageEvent(String type, VMSnapshot vmSnapshot, UserVm userVm, Long vmSnapSize, Long virtualSize) {
try {
Map<String, String> details = new HashMap<>();
if (vmSnapshot != null) {
details.put(UsageEventVO.DynamicParameters.vmSnapshotId.name(), String.valueOf(vmSnapshot.getId()));
}
UsageEventUtils.publishUsageEvent(type, vmSnapshot.getAccountId(), userVm.getDataCenterId(), userVm.getId(), vmSnapshot.getName(), 0L, 0L, vmSnapSize, virtualSize,
VMSnapshot.class.getName(), vmSnapshot.getUuid(), details);
} catch (Exception e) {
LOGGER.error("Failed to publish usage event " + type, e);
}
}
private ScaleIOGatewayClient getScaleIOClient(final Long storagePoolId) throws Exception {
return ScaleIOGatewayClientConnectionPool.getInstance().getClient(storagePoolId, storagePoolDetailsDao);
}
}
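The `takeVMSnapshot()` method above derives a per-volume snapshot name from the VM snapshot id, volume id, a fragment of the pool UUID, and a CloudStack identifier. A minimal sketch of that naming scheme follows; the prefix and identifier values are hypothetical stand-ins for `ScaleIOUtil.VMSNAPSHOT_PREFIX` and `ManagementServerImpl.customCsIdentifier`.

```java
// Sketch of the volume-snapshot naming scheme used by takeVMSnapshot() above.
// The prefix and identifier values are hypothetical; in CloudStack they come
// from ScaleIOUtil.VMSNAPSHOT_PREFIX and ManagementServerImpl.customCsIdentifier.
public class SnapshotNameSketch {
    public static String snapshotName(String prefix, long vmSnapshotId, long volumeId,
                                      String poolUuid, String csIdentifier) {
        // The first UUID group is 8 hex chars; only its last 4 are kept,
        // which helps keep the name within PowerFlex's volume-name limits.
        String poolFragment = poolUuid.split("-")[0].substring(4);
        return String.format("%s-%s-%s-%s-%s", prefix, vmSnapshotId, volumeId, poolFragment, csIdentifier);
    }

    public static void main(String[] args) {
        String name = snapshotName("vmsnap", 12, 34, "a1b2c3d4-0000-0000-0000-000000000000", "acs");
        System.out.println(name); // vmsnap-12-34-c3d4-acs
    }
}
```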

@@ -36,7 +36,13 @@
<bean id="cephSnapshotStrategy"
class="org.apache.cloudstack.storage.snapshot.CephSnapshotStrategy" />
<bean id="scaleioSnapshotStrategy"
class="org.apache.cloudstack.storage.snapshot.ScaleIOSnapshotStrategy" />
<bean id="DefaultVMSnapshotStrategy"
class="org.apache.cloudstack.storage.vmsnapshot.DefaultVMSnapshotStrategy" />
<bean id="ScaleIOVMSnapshotStrategy"
class="org.apache.cloudstack.storage.vmsnapshot.ScaleIOVMSnapshotStrategy" />
</beans>
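The XML above registers both VM snapshot strategies as Spring beans; at runtime CloudStack asks each strategy `canHandle()` and dispatches to the one reporting the highest priority. The following is a hedged sketch of that selection logic, not the actual factory code — every type is a stand-in used only to illustrate how the `StrategyPriority` values decide the winner.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Simplified sketch of priority-based strategy dispatch. The real selection
// happens in CloudStack's strategy factory; every type below is a stand-in
// used only to illustrate how canHandle() priorities decide the winner.
public class StrategyPickerSketch {
    public enum Priority { CANT_HANDLE, DEFAULT, HYPERVISOR, PLUGIN, HIGHEST }

    public interface Strategy {
        String name();
        Priority canHandle(String poolType);
    }

    // Stand-in for ScaleIOVMSnapshotStrategy: claims PowerFlex pools only.
    public static final Strategy SCALEIO = new Strategy() {
        public String name() { return "ScaleIOVMSnapshotStrategy"; }
        public Priority canHandle(String poolType) {
            return "PowerFlex".equals(poolType) ? Priority.HIGHEST : Priority.CANT_HANDLE;
        }
    };

    // Stand-in for DefaultVMSnapshotStrategy: handles anything at DEFAULT priority.
    public static final Strategy FALLBACK = new Strategy() {
        public String name() { return "DefaultVMSnapshotStrategy"; }
        public Priority canHandle(String poolType) { return Priority.DEFAULT; }
    };

    // The strategy reporting the highest non-CANT_HANDLE priority wins.
    public static Optional<Strategy> pick(List<Strategy> strategies, String poolType) {
        return strategies.stream()
                .filter(s -> s.canHandle(poolType) != Priority.CANT_HANDLE)
                .max(Comparator.comparing(s -> s.canHandle(poolType)));
    }

    public static void main(String[] args) {
        List<Strategy> registered = Arrays.asList(FALLBACK, SCALEIO);
        System.out.println(pick(registered, "PowerFlex").get().name());         // ScaleIOVMSnapshotStrategy
        System.out.println(pick(registered, "NetworkFilesystem").get().name()); // DefaultVMSnapshotStrategy
    }
}
```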

@@ -28,13 +28,13 @@ import javax.naming.ConfigurationException;
import com.cloud.exception.StorageUnavailableException;
import com.cloud.storage.StoragePoolStatus;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.log4j.Logger;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
import org.apache.cloudstack.engine.subsystem.api.storage.StoragePoolAllocator;
import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.log4j.Logger;
import com.cloud.capacity.Capacity;
import com.cloud.capacity.dao.CapacityDao;
@@ -211,12 +211,16 @@ public abstract class AbstractStoragePoolAllocator extends AdapterBase implement
return false;
}
Volume volume = volumeDao.findById(dskCh.getVolumeId());
if(!storageMgr.storagePoolCompatibleWithVolumePool(pool, volume)) {
return false;
}
if (pool.isManaged() && !storageUtil.managedStoragePoolCanScale(pool, plan.getClusterId(), plan.getHostId())) {
return false;
}
// check capacity
Volume volume = volumeDao.findById(dskCh.getVolumeId());
List<Volume> requestVolumes = new ArrayList<>();
requestVolumes.add(volume);
if (dskCh.getHypervisorType() == HypervisorType.VMware) {

@@ -48,15 +48,10 @@ public class ZoneWideStoragePoolAllocator extends AbstractStoragePoolAllocator {
@Inject
private CapacityDao capacityDao;
@Override
protected List<StoragePool> select(DiskProfile dskCh, VirtualMachineProfile vmProfile, DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
LOGGER.debug("ZoneWideStoragePoolAllocator to find storage pool");
if (dskCh.useLocalStorage()) {
return null;
}
if (LOGGER.isTraceEnabled()) {
// Log the pools details that are ignored because they are in disabled state
List<StoragePoolVO> disabledPools = storagePoolDao.findDisabledPoolsByScope(plan.getDataCenterId(), null, null, ScopeType.ZONE);
@@ -92,7 +87,6 @@ public class ZoneWideStoragePoolAllocator extends AbstractStoragePoolAllocator {
avoid.addPool(pool.getId());
}
for (StoragePoolVO storage : storagePools) {
if (suitablePools.size() == returnUpTo) {
break;
@@ -114,7 +108,6 @@ public class ZoneWideStoragePoolAllocator extends AbstractStoragePoolAllocator {
return !ScopeType.ZONE.equals(storagePoolVO.getScope()) || !storagePoolVO.isManaged();
}
@Override
protected List<StoragePool> reorderPoolsByCapacity(DeploymentPlan plan,
List<StoragePool> pools) {

@@ -37,6 +37,7 @@ import com.cloud.exception.InvalidParameterValueException;
import com.cloud.host.Host;
import com.cloud.host.HostVO;
import com.cloud.host.dao.HostDao;
import com.cloud.storage.Storage;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.VolumeDao;
import com.cloud.utils.fsm.NoTransitionException;
@@ -148,4 +149,33 @@ public class VMSnapshotHelperImpl implements VMSnapshotHelper {
return result;
}
@Override
public Long getStoragePoolForVM(Long vmId) {
List<VolumeVO> rootVolumes = volumeDao.findReadyRootVolumesByInstance(vmId);
if (rootVolumes == null || rootVolumes.isEmpty()) {
throw new InvalidParameterValueException("Failed to find root volume for the user vm: " + vmId);
}
VolumeVO rootVolume = rootVolumes.get(0);
StoragePoolVO rootVolumePool = primaryDataStoreDao.findById(rootVolume.getPoolId());
if (rootVolumePool == null) {
throw new InvalidParameterValueException("Failed to find root volume storage pool for the user vm: " + vmId);
}
if (rootVolumePool.isInMaintenance()) {
throw new InvalidParameterValueException("Storage pool for the user vm: " + vmId + " is in maintenance");
}
return rootVolumePool.getId();
}
@Override
public Storage.StoragePoolType getStoragePoolType(Long poolId) {
StoragePoolVO storagePool = primaryDataStoreDao.findById(poolId);
if (storagePool == null) {
throw new InvalidParameterValueException("Storage pool not found with id: " + poolId);
}
return storagePool.getPoolType();
}
}

@@ -71,6 +71,7 @@ import com.cloud.alert.AlertManager;
import com.cloud.configuration.Config;
import com.cloud.exception.AgentUnavailableException;
import com.cloud.exception.OperationTimedoutException;
import com.cloud.host.Host;
import com.cloud.host.dao.HostDao;
import com.cloud.secstorage.CommandExecLogDao;
import com.cloud.secstorage.CommandExecLogVO;
@@ -363,6 +364,11 @@ public abstract class BaseImageStoreDriverImpl implements ImageStoreDriver {
}
}
@Override
public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
copyAsync(srcData, destData, callback);
}
private Answer sendToLeastBusyEndpoint(List<EndPoint> eps, CopyCommand cmd) {
Answer answer = null;
EndPoint endPoint = null;

View File

@@ -23,6 +23,7 @@ import java.util.List;
import org.apache.cloudstack.storage.to.VolumeObjectTO;
import com.cloud.agent.api.VMSnapshotTO;
import com.cloud.storage.Storage;
import com.cloud.utils.fsm.NoTransitionException;
import com.cloud.vm.snapshot.VMSnapshot;
import com.cloud.vm.snapshot.VMSnapshotVO;
@@ -35,4 +36,8 @@ public interface VMSnapshotHelper {
List<VolumeObjectTO> getVolumeTOList(Long vmId);
VMSnapshotTO getSnapshotWithParents(VMSnapshotVO snapshot);
Long getStoragePoolForVM(Long vmId);
Storage.StoragePoolType getStoragePoolType(Long poolId);
}

View File

@@ -203,8 +203,7 @@ public class PrimaryDataStoreImpl implements PrimaryDataStore {
@Override
public String getName() {
// TODO Auto-generated method stub
return null;
return pdsv.getName();
}
@Override

View File

@@ -29,11 +29,9 @@ import com.cloud.storage.VolumeDetailVO;
import com.cloud.storage.dao.VMTemplateDao;
import com.cloud.storage.dao.VolumeDetailsDao;
import com.cloud.vm.VmDetailConstants;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.resourcedetail.dao.DiskOfferingDetailsDao;
import org.apache.commons.lang.StringUtils;
import org.apache.log4j.Logger;
import org.apache.cloudstack.resourcedetail.dao.DiskOfferingDetailsDao;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.engine.subsystem.api.storage.DataObjectInStore;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine;
@@ -44,6 +42,8 @@ import org.apache.cloudstack.storage.datastore.ObjectInDataStoreManager;
import org.apache.cloudstack.storage.datastore.db.VolumeDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.VolumeDataStoreVO;
import org.apache.cloudstack.storage.to.VolumeObjectTO;
import org.apache.commons.lang.StringUtils;
import org.apache.log4j.Logger;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.storage.DownloadAnswer;
@@ -53,6 +53,7 @@ import com.cloud.hypervisor.Hypervisor.HypervisorType;
import com.cloud.offering.DiskOffering.DiskCacheMode;
import com.cloud.storage.DataStoreRole;
import com.cloud.storage.DiskOfferingVO;
import com.cloud.storage.Storage;
import com.cloud.storage.Storage.ImageFormat;
import com.cloud.storage.Storage.ProvisioningType;
import com.cloud.storage.Volume;
@@ -625,6 +626,11 @@ public class VolumeObject implements VolumeInfo {
return volumeDao.getHypervisorType(volumeVO.getId());
}
@Override
public Storage.StoragePoolType getStoragePoolType() {
return volumeVO.getPoolType();
}
@Override
public Long getLastPoolId() {
return volumeVO.getLastPoolId();

View File

@@ -31,6 +31,7 @@ import javax.inject.Inject;
import com.cloud.storage.VMTemplateVO;
import com.cloud.storage.dao.VMTemplateDao;
import org.apache.cloudstack.engine.cloud.entity.api.VolumeEntity;
import org.apache.cloudstack.engine.orchestration.service.VolumeOrchestrationService;
import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult;
import org.apache.cloudstack.engine.subsystem.api.storage.CreateCmdResult;
@@ -47,6 +48,7 @@ import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreDriver;
import org.apache.cloudstack.engine.subsystem.api.storage.Scope;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.TemplateDataFactory;
import org.apache.cloudstack.engine.subsystem.api.storage.TemplateInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeDataFactory;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
@@ -64,6 +66,8 @@ import org.apache.cloudstack.storage.datastore.PrimaryDataStoreProviderManager;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreVO;
import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailVO;
import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.cloudstack.storage.datastore.db.VolumeDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.VolumeDataStoreVO;
@@ -88,6 +92,7 @@ import com.cloud.dc.dao.ClusterDao;
import com.cloud.event.EventTypes;
import com.cloud.event.UsageEventUtils;
import com.cloud.exception.ResourceAllocationException;
import com.cloud.exception.StorageAccessException;
import com.cloud.host.Host;
import com.cloud.host.HostVO;
import com.cloud.host.dao.HostDao;
@@ -101,13 +106,16 @@ import com.cloud.server.ManagementService;
import com.cloud.storage.DataStoreRole;
import com.cloud.storage.RegisterVolumePayload;
import com.cloud.storage.ScopeType;
import com.cloud.storage.Storage;
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.StorageManager;
import com.cloud.storage.StoragePool;
import com.cloud.storage.VMTemplateStoragePoolVO;
import com.cloud.storage.VMTemplateStorageResourceAssoc;
import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
import com.cloud.storage.Volume;
import com.cloud.storage.Volume.State;
import com.cloud.storage.VolumeDetailVO;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.VMTemplatePoolDao;
import com.cloud.storage.dao.VolumeDao;
@@ -122,6 +130,7 @@ import com.cloud.utils.db.DB;
import com.cloud.utils.db.GlobalLock;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.vm.VirtualMachine;
import com.google.common.base.Strings;
import static com.cloud.storage.resource.StorageProcessor.REQUEST_TEMPLATE_RELOAD;
@@ -163,6 +172,8 @@ public class VolumeServiceImpl implements VolumeService {
@Inject
private PrimaryDataStoreDao storagePoolDao;
@Inject
private StoragePoolDetailsDao _storagePoolDetailsDao;
@Inject
private HostDetailsDao hostDetailsDao;
@Inject
private ManagementService mgr;
@@ -172,6 +183,12 @@ public class VolumeServiceImpl implements VolumeService {
private VolumeDetailsDao _volumeDetailsDao;
@Inject
private VMTemplateDao templateDao;
@Inject
private TemplateDataFactory tmplFactory;
@Inject
private VolumeOrchestrationService _volumeMgr;
@Inject
private StorageManager _storageMgr;
private final static String SNAPSHOT_ID = "SNAPSHOT_ID";
@@ -380,6 +397,14 @@ public class VolumeServiceImpl implements VolumeService {
return future;
}
public void ensureVolumeIsExpungeReady(long volumeId) {
VolumeVO volume = volDao.findById(volumeId);
if (volume != null && volume.getPodId() != null) {
volume.setPodId(null);
volDao.update(volumeId, volume);
}
}
private boolean volumeExistsOnPrimary(VolumeVO vol) {
Long poolId = vol.getPoolId();
@@ -794,6 +819,39 @@ public class VolumeServiceImpl implements VolumeService {
return null;
}
@DB
protected Void createVolumeFromBaseManagedImageCallBack(AsyncCallbackDispatcher<VolumeServiceImpl, CopyCommandResult> callback, CreateVolumeFromBaseImageContext<VolumeApiResult> context) {
CopyCommandResult result = callback.getResult();
DataObject vo = context.vo;
DataObject tmplOnPrimary = context.templateOnStore;
VolumeApiResult volResult = new VolumeApiResult((VolumeObject)vo);
if (result.isSuccess()) {
VolumeVO volume = volDao.findById(vo.getId());
CopyCmdAnswer answer = (CopyCmdAnswer)result.getAnswer();
VolumeObjectTO volumeObjectTo = (VolumeObjectTO)answer.getNewData();
volume.setPath(volumeObjectTo.getPath());
if (volumeObjectTo.getFormat() != null) {
volume.setFormat(volumeObjectTo.getFormat());
}
volDao.update(volume.getId(), volume);
vo.processEvent(Event.OperationSuccessed);
} else {
volResult.setResult(result.getResult());
try {
destroyAndReallocateManagedVolume((VolumeInfo) vo);
} catch (CloudRuntimeException ex) {
s_logger.warn("Couldn't destroy managed volume: " + vo.getId());
}
}
AsyncCallFuture<VolumeApiResult> future = context.getFuture();
future.complete(volResult);
return null;
}
/**
* Creates a template volume on managed storage, which will be used for creating ROOT volumes by cloning.
*
@@ -809,6 +867,9 @@ public class VolumeServiceImpl implements VolumeService {
if (templatePoolRef == null) {
throw new CloudRuntimeException("Failed to find template " + srcTemplateInfo.getUniqueName() + " in storage pool " + destPrimaryDataStore.getId());
} else if (templatePoolRef.getState() == ObjectInDataStoreStateMachine.State.Ready) {
// Template already exists
return templateOnPrimary;
}
// At this point, we have an entry in the DB that points to our cached template.
@@ -824,13 +885,6 @@ public class VolumeServiceImpl implements VolumeService {
throw new CloudRuntimeException("Unable to acquire lock on VMTemplateStoragePool: " + templatePoolRefId);
}
// Template already exists
if (templatePoolRef.getState() == ObjectInDataStoreStateMachine.State.Ready) {
_tmpltPoolDao.releaseFromLockTable(templatePoolRefId);
return templateOnPrimary;
}
try {
// create a cache volume on the back-end
@@ -875,27 +929,25 @@ public class VolumeServiceImpl implements VolumeService {
* @param destHost The host that we will use for the copy
*/
private void copyTemplateToManagedTemplateVolume(TemplateInfo srcTemplateInfo, TemplateInfo templateOnPrimary, VMTemplateStoragePoolVO templatePoolRef, PrimaryDataStore destPrimaryDataStore,
Host destHost) {
Host destHost) throws StorageAccessException {
AsyncCallFuture<VolumeApiResult> copyTemplateFuture = new AsyncCallFuture<>();
int storagePoolMaxWaitSeconds = NumbersUtil.parseInt(configDao.getValue(Config.StoragePoolMaxWaitSeconds.key()), 3600);
long templatePoolRefId = templatePoolRef.getId();
templatePoolRef = _tmpltPoolDao.acquireInLockTable(templatePoolRefId, storagePoolMaxWaitSeconds);
if (templatePoolRef == null) {
throw new CloudRuntimeException("Unable to acquire lock on VMTemplateStoragePool: " + templatePoolRefId);
}
if (templatePoolRef.getDownloadState() == Status.DOWNLOADED) {
// There can be cases where we acquired the lock, but the template
// was already copied by a previous thread. Just return in that case.
s_logger.debug("Template already downloaded, nothing to do");
return;
}
try {
templatePoolRef = _tmpltPoolDao.acquireInLockTable(templatePoolRefId, storagePoolMaxWaitSeconds);
if (templatePoolRef == null) {
throw new CloudRuntimeException("Unable to acquire lock on VMTemplateStoragePool: " + templatePoolRefId);
}
if (templatePoolRef.getDownloadState() == Status.DOWNLOADED) {
// There can be cases where we acquired the lock, but the template
// was already copied by a previous thread. Just return in that case.
s_logger.debug("Template already downloaded, nothing to do");
return;
}
// copy the template from sec storage to the created volume
CreateBaseImageContext<CreateCmdResult> copyContext = new CreateBaseImageContext<>(null, null, destPrimaryDataStore, srcTemplateInfo, copyTemplateFuture, templateOnPrimary,
templatePoolRefId);
@@ -913,6 +965,7 @@ public class VolumeServiceImpl implements VolumeService {
details.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, srcTemplateInfo.getUniqueName());
details.put(PrimaryDataStore.REMOVE_AFTER_COPY, Boolean.TRUE.toString());
details.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(templateOnPrimary.getSize()));
details.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(destPrimaryDataStore.getId())));
ChapInfo chapInfo = getChapInfo(templateOnPrimary, destPrimaryDataStore);
@@ -923,11 +976,15 @@ public class VolumeServiceImpl implements VolumeService {
details.put(PrimaryDataStore.CHAP_TARGET_SECRET, chapInfo.getTargetSecret());
}
templateOnPrimary.processEvent(Event.CopyingRequested);
destPrimaryDataStore.setDetails(details);
grantAccess(templateOnPrimary, destHost, destPrimaryDataStore);
try {
grantAccess(templateOnPrimary, destHost, destPrimaryDataStore);
} catch (Exception e) {
throw new StorageAccessException("Unable to grant access to template: " + templateOnPrimary.getId() + " on host: " + destHost.getId());
}
templateOnPrimary.processEvent(Event.CopyingRequested);
VolumeApiResult result;
@@ -955,6 +1012,8 @@ public class VolumeServiceImpl implements VolumeService {
// something weird happens to the volume (XenServer creates an SR, but the VDI copy can fail).
// For now, I just retry the copy.
}
} catch (StorageAccessException e) {
throw e;
} catch (Throwable e) {
s_logger.debug("Failed to create a template on primary storage", e);
@@ -1031,6 +1090,126 @@ public class VolumeServiceImpl implements VolumeService {
}
}
private void createManagedVolumeCopyManagedTemplateAsync(VolumeInfo volumeInfo, PrimaryDataStore destPrimaryDataStore, TemplateInfo srcTemplateOnPrimary, Host destHost, AsyncCallFuture<VolumeApiResult> future) throws StorageAccessException {
VMTemplateStoragePoolVO templatePoolRef = _tmpltPoolDao.findByPoolTemplate(destPrimaryDataStore.getId(), srcTemplateOnPrimary.getId(), null);
if (templatePoolRef == null) {
throw new CloudRuntimeException("Failed to find template " + srcTemplateOnPrimary.getUniqueName() + " in storage pool " + destPrimaryDataStore.getId());
}
if (templatePoolRef.getDownloadState() == Status.NOT_DOWNLOADED) {
throw new CloudRuntimeException("Template " + srcTemplateOnPrimary.getUniqueName() + " has not been downloaded to primary storage.");
}
String volumeDetailKey = "POOL_TEMPLATE_ID_COPY_ON_HOST_" + destHost.getId();
try {
try {
grantAccess(srcTemplateOnPrimary, destHost, destPrimaryDataStore);
} catch (Exception e) {
throw new StorageAccessException("Unable to grant access to src template: " + srcTemplateOnPrimary.getId() + " on host: " + destHost.getId());
}
_volumeDetailsDao.addDetail(volumeInfo.getId(), volumeDetailKey, String.valueOf(templatePoolRef.getId()), false);
// Create a volume on managed storage.
AsyncCallFuture<VolumeApiResult> createVolumeFuture = createVolumeAsync(volumeInfo, destPrimaryDataStore);
VolumeApiResult createVolumeResult = createVolumeFuture.get();
if (createVolumeResult.isFailed()) {
throw new CloudRuntimeException("Creation of a volume failed: " + createVolumeResult.getResult());
}
// Refresh the volume info from the DB.
volumeInfo = volFactory.getVolume(volumeInfo.getId(), destPrimaryDataStore);
volumeInfo.processEvent(Event.CreateRequested);
CreateVolumeFromBaseImageContext<VolumeApiResult> context = new CreateVolumeFromBaseImageContext<>(null, volumeInfo, destPrimaryDataStore, srcTemplateOnPrimary, future, null, null);
AsyncCallbackDispatcher<VolumeServiceImpl, CopyCommandResult> caller = AsyncCallbackDispatcher.create(this);
caller.setCallback(caller.getTarget().createVolumeFromBaseManagedImageCallBack(null, null));
caller.setContext(context);
Map<String, String> details = new HashMap<String, String>();
details.put(PrimaryDataStore.MANAGED, Boolean.TRUE.toString());
details.put(PrimaryDataStore.STORAGE_HOST, destPrimaryDataStore.getHostAddress());
details.put(PrimaryDataStore.STORAGE_PORT, String.valueOf(destPrimaryDataStore.getPort()));
details.put(PrimaryDataStore.MANAGED_STORE_TARGET, volumeInfo.get_iScsiName());
details.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, volumeInfo.getName());
details.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(volumeInfo.getSize()));
details.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(destPrimaryDataStore.getId())));
destPrimaryDataStore.setDetails(details);
grantAccess(volumeInfo, destHost, destPrimaryDataStore);
try {
motionSrv.copyAsync(srcTemplateOnPrimary, volumeInfo, destHost, caller);
} finally {
revokeAccess(volumeInfo, destHost, destPrimaryDataStore);
}
} catch (StorageAccessException e) {
throw e;
} catch (Throwable e) {
s_logger.debug("Failed to copy managed template on primary storage", e);
String errMsg = "Failed due to " + e.toString();
try {
destroyAndReallocateManagedVolume(volumeInfo);
} catch (CloudRuntimeException ex) {
s_logger.warn("Failed to destroy managed volume: " + volumeInfo.getId());
errMsg += " : " + ex.getMessage();
}
VolumeApiResult result = new VolumeApiResult(volumeInfo);
result.setResult(errMsg);
future.complete(result);
} finally {
_volumeDetailsDao.removeDetail(volumeInfo.getId(), volumeDetailKey);
List<VolumeDetailVO> volumeDetails = _volumeDetailsDao.findDetails(volumeDetailKey, String.valueOf(templatePoolRef.getId()), false);
if (volumeDetails == null || volumeDetails.isEmpty()) {
revokeAccess(srcTemplateOnPrimary, destHost, destPrimaryDataStore);
}
}
}
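`createManagedVolumeCopyManagedTemplateAsync` above grants the destination host access to the source template, records a `POOL_TEMPLATE_ID_COPY_ON_HOST_<hostId>` volume detail for the in-flight copy, and in the `finally` block revokes access only when no other copy still holds such a detail. A minimal sketch of that reference-counted grant/revoke idea (illustrative names, not CloudStack APIs):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the revoke-only-when-unused idea behind the
// POOL_TEMPLATE_ID_COPY_ON_HOST_<hostId> volume detail: access to a
// template on a host is revoked only when no other in-flight copy on
// that host still references it.
public class TemplateAccessTracker {
    private final ConcurrentMap<String, Integer> inFlight = new ConcurrentHashMap<>();

    private static String key(long templateId, long hostId) {
        return templateId + "@" + hostId;
    }

    /** Record one copy using the template on the host; true means access must be granted now. */
    public boolean acquire(long templateId, long hostId) {
        return inFlight.merge(key(templateId, hostId), 1, Integer::sum) == 1;
    }

    /** Drop one reference; true means access can now be revoked. */
    public boolean release(long templateId, long hostId) {
        return inFlight.compute(key(templateId, hostId),
                (k, v) -> (v == null || v <= 1) ? null : v - 1) == null;
    }
}
```

The real code implements the same idea with DB rows (volume details) instead of an in-memory map, so concurrent management servers see a consistent count.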
private void destroyAndReallocateManagedVolume(VolumeInfo volumeInfo) {
if (volumeInfo == null) {
return;
}
VolumeVO volume = volDao.findById(volumeInfo.getId());
if (volume == null) {
return;
}
if (volume.getState() == State.Allocated) { // Possible states here: Allocated, Ready & Creating
return;
}
volumeInfo.processEvent(Event.DestroyRequested);
Volume newVol = _volumeMgr.allocateDuplicateVolume(volume, null);
VolumeVO newVolume = (VolumeVO) newVol;
newVolume.set_iScsiName(null);
volDao.update(newVolume.getId(), newVolume);
s_logger.debug("Allocated new volume: " + newVolume.getId() + " for the VM: " + volume.getInstanceId());
try {
AsyncCallFuture<VolumeApiResult> expungeVolumeFuture = expungeVolumeAsync(volumeInfo);
VolumeApiResult expungeVolumeResult = expungeVolumeFuture.get();
if (expungeVolumeResult.isFailed()) {
s_logger.warn("Failed to expunge volume: " + volumeInfo.getId() + " that was created");
throw new CloudRuntimeException("Failed to expunge volume: " + volumeInfo.getId() + " that was created");
}
} catch (Exception ex) {
if (canVolumeBeRemoved(volumeInfo.getId())) {
volDao.remove(volumeInfo.getId());
}
s_logger.warn("Unable to expunge volume: " + volumeInfo.getId() + " due to: " + ex.getMessage());
throw new CloudRuntimeException("Unable to expunge volume: " + volumeInfo.getId() + " due to: " + ex.getMessage());
}
}
private void createManagedVolumeCopyTemplateAsync(VolumeInfo volumeInfo, PrimaryDataStore primaryDataStore, TemplateInfo srcTemplateInfo, Host destHost, AsyncCallFuture<VolumeApiResult> future) {
try {
// Create a volume on managed storage.
@@ -1061,6 +1240,7 @@ public class VolumeServiceImpl implements VolumeService {
details.put(PrimaryDataStore.MANAGED_STORE_TARGET, volumeInfo.get_iScsiName());
details.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, volumeInfo.getName());
details.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(volumeInfo.getSize()));
details.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(primaryDataStore.getId())));
ChapInfo chapInfo = getChapInfo(volumeInfo, primaryDataStore);
@@ -1106,7 +1286,109 @@ public class VolumeServiceImpl implements VolumeService {
}
@Override
public AsyncCallFuture<VolumeApiResult> createManagedStorageVolumeFromTemplateAsync(VolumeInfo volumeInfo, long destDataStoreId, TemplateInfo srcTemplateInfo, long destHostId) {
public TemplateInfo createManagedStorageTemplate(long srcTemplateId, long destDataStoreId, long destHostId) throws StorageAccessException {
Host destHost = _hostDao.findById(destHostId);
if (destHost == null) {
throw new CloudRuntimeException("Failed to find the destination host: " + destHostId);
}
TemplateInfo srcTemplateInfo = tmplFactory.getTemplate(srcTemplateId);
if (srcTemplateInfo == null) {
throw new CloudRuntimeException("Failed to get info of template: " + srcTemplateId);
}
if (Storage.ImageFormat.ISO.equals(srcTemplateInfo.getFormat())) {
throw new CloudRuntimeException("Unsupported format: " + Storage.ImageFormat.ISO.toString() + " for managed storage template");
}
GlobalLock lock = null;
TemplateInfo templateOnPrimary = null;
try {
String templateIdManagedPoolIdLockString = "templateId:" + srcTemplateId + "managedPoolId:" + destDataStoreId;
lock = GlobalLock.getInternLock(templateIdManagedPoolIdLockString);
if (lock == null) {
throw new CloudRuntimeException("Unable to create managed storage template, couldn't get global lock on " + templateIdManagedPoolIdLockString);
}
int storagePoolMaxWaitSeconds = NumbersUtil.parseInt(configDao.getValue(Config.StoragePoolMaxWaitSeconds.key()), 3600);
if (!lock.lock(storagePoolMaxWaitSeconds)) {
s_logger.debug("Unable to create managed storage template, couldn't lock on " + templateIdManagedPoolIdLockString);
throw new CloudRuntimeException("Unable to create managed storage template, couldn't lock on " + templateIdManagedPoolIdLockString);
}
PrimaryDataStore destPrimaryDataStore = dataStoreMgr.getPrimaryDataStore(destDataStoreId);
// Check if the template exists on the storage pool. If not, download and copy it to the managed storage pool.
VMTemplateStoragePoolVO templatePoolRef = _tmpltPoolDao.findByPoolTemplate(destDataStoreId, srcTemplateId, null);
if (templatePoolRef != null && templatePoolRef.getDownloadState() == Status.DOWNLOADED) {
return tmplFactory.getTemplate(srcTemplateId, destPrimaryDataStore);
}
templateOnPrimary = createManagedTemplateVolume(srcTemplateInfo, destPrimaryDataStore);
if (templateOnPrimary == null) {
throw new CloudRuntimeException("Failed to create template " + srcTemplateInfo.getUniqueName() + " on primary storage: " + destDataStoreId);
}
templatePoolRef = _tmpltPoolDao.findByPoolTemplate(destPrimaryDataStore.getId(), templateOnPrimary.getId(), null);
if (templatePoolRef == null) {
throw new CloudRuntimeException("Failed to find template " + srcTemplateInfo.getUniqueName() + " in storage pool " + destPrimaryDataStore.getId());
}
if (templatePoolRef.getDownloadState() == Status.NOT_DOWNLOADED) {
// Populate details which will be later read by the storage subsystem.
Map<String, String> details = new HashMap<>();
details.put(PrimaryDataStore.MANAGED, Boolean.TRUE.toString());
details.put(PrimaryDataStore.STORAGE_HOST, destPrimaryDataStore.getHostAddress());
details.put(PrimaryDataStore.STORAGE_PORT, String.valueOf(destPrimaryDataStore.getPort()));
details.put(PrimaryDataStore.MANAGED_STORE_TARGET, templateOnPrimary.getInstallPath());
details.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, srcTemplateInfo.getUniqueName());
details.put(PrimaryDataStore.REMOVE_AFTER_COPY, Boolean.TRUE.toString());
details.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(templateOnPrimary.getSize()));
details.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(destPrimaryDataStore.getId())));
destPrimaryDataStore.setDetails(details);
try {
grantAccess(templateOnPrimary, destHost, destPrimaryDataStore);
} catch (Exception e) {
throw new StorageAccessException("Unable to grant access to template: " + templateOnPrimary.getId() + " on host: " + destHost.getId());
}
templateOnPrimary.processEvent(Event.CopyingRequested);
try {
// Download and copy the template to the managed volume
TemplateInfo templateOnPrimaryNow = tmplFactory.getReadyBypassedTemplateOnManagedStorage(srcTemplateId, templateOnPrimary, destDataStoreId, destHostId);
if (templateOnPrimaryNow == null) {
s_logger.debug("Failed to prepare ready bypassed template: " + srcTemplateId + " on primary storage: " + templateOnPrimary.getId());
throw new CloudRuntimeException("Failed to prepare ready bypassed template: " + srcTemplateId + " on primary storage: " + templateOnPrimary.getId());
}
templateOnPrimary.processEvent(Event.OperationSuccessed);
return templateOnPrimaryNow;
} finally {
revokeAccess(templateOnPrimary, destHost, destPrimaryDataStore);
}
}
return null;
} catch (StorageAccessException e) {
throw e;
} catch (Throwable e) {
s_logger.debug("Failed to create template on managed primary storage", e);
if (templateOnPrimary != null) {
templateOnPrimary.processEvent(Event.OperationFailed);
}
throw new CloudRuntimeException(e.getMessage());
} finally {
if (lock != null) {
lock.unlock();
lock.releaseRef();
}
}
}
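`createManagedStorageTemplate` above serializes template seeding per (template, pool) pair with a named `GlobalLock` and repeats the "already downloaded?" check inside the critical section, so a thread that lost the race returns the existing copy instead of seeding a second one. A minimal sketch of that pattern using a plain `ReentrantLock` (hypothetical stand-ins for `GlobalLock` and the template DAO):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the per-(template, pool) serialization: a named lock keyed
// on the template and pool ids, with the existence check repeated after
// acquiring the lock (classic double-checked seeding).
public class TemplateSeeder {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();
    private final Map<String, String> seeded = new ConcurrentHashMap<>(); // key -> template path

    public String seed(long templateId, long poolId, long maxWaitSeconds) {
        String key = "templateId:" + templateId + ":managedPoolId:" + poolId;
        String existing = seeded.get(key);
        if (existing != null) {
            return existing; // fast path: already seeded, no lock needed
        }
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        try {
            if (!lock.tryLock(maxWaitSeconds, TimeUnit.SECONDS)) {
                throw new IllegalStateException("couldn't lock on " + key);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while locking " + key, e);
        }
        try {
            // Re-check after acquiring the lock: another thread may have
            // finished seeding while we were waiting.
            return seeded.computeIfAbsent(key, k -> "/primary/" + poolId + "/tmpl-" + templateId);
        } finally {
            lock.unlock();
        }
    }
}
```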
@Override
public AsyncCallFuture<VolumeApiResult> createManagedStorageVolumeFromTemplateAsync(VolumeInfo volumeInfo, long destDataStoreId, TemplateInfo srcTemplateInfo, long destHostId) throws StorageAccessException {
PrimaryDataStore destPrimaryDataStore = dataStoreMgr.getPrimaryDataStore(destDataStoreId);
Host destHost = _hostDao.findById(destHostId);
@@ -1123,31 +1405,59 @@ public class VolumeServiceImpl implements VolumeService {
if (storageCanCloneVolume && computeSupportsVolumeClone) {
s_logger.debug("Storage " + destDataStoreId + " can support cloning using a cached template and compute side is OK with volume cloning.");
TemplateInfo templateOnPrimary = destPrimaryDataStore.getTemplate(srcTemplateInfo.getId(), null);
GlobalLock lock = null;
TemplateInfo templateOnPrimary = null;
if (templateOnPrimary == null) {
templateOnPrimary = createManagedTemplateVolume(srcTemplateInfo, destPrimaryDataStore);
try {
String templateIdManagedPoolIdLockString = "templateId:" + srcTemplateInfo.getId() + "managedPoolId:" + destDataStoreId; // same lock key as createManagedStorageTemplate, so both seeding paths serialize on the same (template, pool) pair
lock = GlobalLock.getInternLock(templateIdManagedPoolIdLockString);
if (lock == null) {
throw new CloudRuntimeException("Unable to create managed storage template/volume, couldn't get global lock on " + templateIdManagedPoolIdLockString);
}
int storagePoolMaxWaitSeconds = NumbersUtil.parseInt(configDao.getValue(Config.StoragePoolMaxWaitSeconds.key()), 3600);
if (!lock.lock(storagePoolMaxWaitSeconds)) {
s_logger.debug("Unable to create managed storage template/volume, couldn't lock on " + templateIdManagedPoolIdLockString);
throw new CloudRuntimeException("Unable to create managed storage template/volume, couldn't lock on " + templateIdManagedPoolIdLockString);
}
templateOnPrimary = destPrimaryDataStore.getTemplate(srcTemplateInfo.getId(), null);
if (templateOnPrimary == null) {
throw new CloudRuntimeException("Failed to create template " + srcTemplateInfo.getUniqueName() + " on primary storage: " + destDataStoreId);
templateOnPrimary = createManagedTemplateVolume(srcTemplateInfo, destPrimaryDataStore);
if (templateOnPrimary == null) {
throw new CloudRuntimeException("Failed to create template " + srcTemplateInfo.getUniqueName() + " on primary storage: " + destDataStoreId);
}
}
// Copy the template to the template volume.
VMTemplateStoragePoolVO templatePoolRef = _tmpltPoolDao.findByPoolTemplate(destPrimaryDataStore.getId(), templateOnPrimary.getId(), null);
if (templatePoolRef == null) {
throw new CloudRuntimeException("Failed to find template " + srcTemplateInfo.getUniqueName() + " in storage pool " + destPrimaryDataStore.getId());
}
if (templatePoolRef.getDownloadState() == Status.NOT_DOWNLOADED) {
copyTemplateToManagedTemplateVolume(srcTemplateInfo, templateOnPrimary, templatePoolRef, destPrimaryDataStore, destHost);
}
} finally {
if (lock != null) {
lock.unlock();
lock.releaseRef();
}
}
// Copy the template to the template volume.
VMTemplateStoragePoolVO templatePoolRef = _tmpltPoolDao.findByPoolTemplate(destPrimaryDataStore.getId(), templateOnPrimary.getId(), null);
if (destPrimaryDataStore.getPoolType() != StoragePoolType.PowerFlex) {
// We have a template on primary storage. Clone it to new volume.
s_logger.debug("Creating a clone from template on primary storage " + destDataStoreId);
if (templatePoolRef == null) {
throw new CloudRuntimeException("Failed to find template " + srcTemplateInfo.getUniqueName() + " in storage pool " + destPrimaryDataStore.getId());
createManagedVolumeCloneTemplateAsync(volumeInfo, templateOnPrimary, destPrimaryDataStore, future);
} else {
// We have a template on PowerFlex primary storage. Create new volume and copy to it.
s_logger.debug("Copying the template to the volume on primary storage");
createManagedVolumeCopyManagedTemplateAsync(volumeInfo, destPrimaryDataStore, templateOnPrimary, destHost, future);
}
if (templatePoolRef.getDownloadState() == Status.NOT_DOWNLOADED) {
copyTemplateToManagedTemplateVolume(srcTemplateInfo, templateOnPrimary, templatePoolRef, destPrimaryDataStore, destHost);
}
// We have a template on primary storage. Clone it to new volume.
s_logger.debug("Creating a clone from template on primary storage " + destDataStoreId);
createManagedVolumeCloneTemplateAsync(volumeInfo, templateOnPrimary, destPrimaryDataStore, future);
} else {
s_logger.debug("Primary storage does not support cloning or no support for UUID resigning on the host side; copying the template normally");
@@ -1300,6 +1610,8 @@ public class VolumeServiceImpl implements VolumeService {
// part here to make sure the credentials do not get stored in the db unencrypted.
if (pool.getPoolType() == StoragePoolType.SMB && folder != null && folder.contains("?")) {
folder = folder.substring(0, folder.indexOf("?"));
} else if (pool.getPoolType() == StoragePoolType.PowerFlex) {
folder = volume.getFolder();
}
VolumeVO newVol = new VolumeVO(volume);
@@ -1309,6 +1621,7 @@ public class VolumeServiceImpl implements VolumeService {
newVol.setFolder(folder);
newVol.setPodId(pool.getPodId());
newVol.setPoolId(pool.getId());
newVol.setPoolType(pool.getPoolType());
newVol.setLastPoolId(lastPoolId);
newVol.setPodId(pool.getPodId());
return volDao.persist(newVol);
@@ -1325,7 +1638,6 @@ public class VolumeServiceImpl implements VolumeService {
this.destVolume = destVolume;
this.future = future;
}
}
protected AsyncCallFuture<VolumeApiResult> copyVolumeFromImageToPrimary(VolumeInfo srcVolume, DataStore destStore) {
@@ -1435,8 +1747,8 @@ public class VolumeServiceImpl implements VolumeService {
@Override
public AsyncCallFuture<VolumeApiResult> copyVolume(VolumeInfo srcVolume, DataStore destStore) {
DataStore srcStore = srcVolume.getDataStore();
if (s_logger.isDebugEnabled()) {
DataStore srcStore = srcVolume.getDataStore();
String srcRole = (srcStore != null && srcStore.getRole() != null ? srcVolume.getDataStore().getRole().toString() : "<unknown role>");
String msg = String.format("copying %s(id=%d, role=%s) to %s (id=%d, role=%s)"
@@ -1457,6 +1769,11 @@ public class VolumeServiceImpl implements VolumeService {
return copyVolumeFromPrimaryToImage(srcVolume, destStore);
}
if (srcStore.getRole() == DataStoreRole.Primary && destStore.getRole() == DataStoreRole.Primary && ((PrimaryDataStore) destStore).isManaged() &&
requiresNewManagedVolumeInDestStore((PrimaryDataStore) srcStore, (PrimaryDataStore) destStore)) {
return copyManagedVolume(srcVolume, destStore);
}
// OfflineVmwareMigration: aren't we missing secondary to secondary in this logic?
AsyncCallFuture<VolumeApiResult> future = new AsyncCallFuture<VolumeApiResult>();
@@ -1502,6 +1819,14 @@ public class VolumeServiceImpl implements VolumeService {
destVolume.processEvent(Event.MigrationCopyFailed);
srcVolume.processEvent(Event.OperationFailed);
destroyVolume(destVolume.getId());
if (destVolume.getStoragePoolType() == StoragePoolType.PowerFlex) {
s_logger.info("Dest volume " + destVolume.getId() + " can be removed");
destVolume.processEvent(Event.ExpungeRequested);
destVolume.processEvent(Event.OperationSuccessed);
volDao.remove(destVolume.getId());
future.complete(res);
return null;
}
destVolume = volFactory.getVolume(destVolume.getId());
AsyncCallFuture<VolumeApiResult> destroyFuture = expungeVolumeAsync(destVolume);
destroyFuture.get();
@@ -1512,6 +1837,14 @@ public class VolumeServiceImpl implements VolumeService {
volDao.updateUuid(srcVolume.getId(), destVolume.getId());
try {
destroyVolume(srcVolume.getId());
if (srcVolume.getStoragePoolType() == StoragePoolType.PowerFlex) {
s_logger.info("Src volume " + srcVolume.getId() + " can be removed");
srcVolume.processEvent(Event.ExpungeRequested);
srcVolume.processEvent(Event.OperationSuccessed);
volDao.remove(srcVolume.getId());
future.complete(res);
return null;
}
srcVolume = volFactory.getVolume(srcVolume.getId());
AsyncCallFuture<VolumeApiResult> destroyFuture = expungeVolumeAsync(srcVolume);
// If volume destroy fails, it could be because the VDI is still in use; wait and retry.
@@ -1534,6 +1867,213 @@ public class VolumeServiceImpl implements VolumeService {
return null;
}
private class CopyManagedVolumeContext<T> extends AsyncRpcContext<T> {
final VolumeInfo srcVolume;
final VolumeInfo destVolume;
final Host host;
final AsyncCallFuture<VolumeApiResult> future;
public CopyManagedVolumeContext(AsyncCompletionCallback<T> callback, AsyncCallFuture<VolumeApiResult> future, VolumeInfo srcVolume, VolumeInfo destVolume, Host host) {
super(callback);
this.srcVolume = srcVolume;
this.destVolume = destVolume;
this.host = host;
this.future = future;
}
}
private AsyncCallFuture<VolumeApiResult> copyManagedVolume(VolumeInfo srcVolume, DataStore destStore) {
AsyncCallFuture<VolumeApiResult> future = new AsyncCallFuture<VolumeApiResult>();
VolumeApiResult res = new VolumeApiResult(srcVolume);
try {
if (!snapshotMgr.canOperateOnVolume(srcVolume)) {
s_logger.debug("Snapshots are being created for this volume; cannot move it");
res.setResult("Snapshots are being created for this volume; cannot move it");
future.complete(res);
return future;
}
if (snapshotMgr.backedUpSnapshotsExistsForVolume(srcVolume)) {
s_logger.debug("There are backed up snapshots for this volume; cannot move it.");
res.setResult("[UNSUPPORTED] There are backed up snapshots for this volume; cannot move it. Please try again after removing them.");
future.complete(res);
return future;
}
List<Long> poolIds = new ArrayList<Long>();
poolIds.add(srcVolume.getPoolId());
poolIds.add(destStore.getId());
Host hostWithPoolsAccess = _storageMgr.findUpAndEnabledHostWithAccessToStoragePools(poolIds);
if (hostWithPoolsAccess == null) {
s_logger.debug("No host available with access to both storage pools; cannot move this volume");
res.setResult("No host available with access to both storage pools; cannot move this volume");
future.complete(res);
return future;
}
VolumeVO destVol = duplicateVolumeOnAnotherStorage(srcVolume, (StoragePool)destStore);
VolumeInfo destVolume = volFactory.getVolume(destVol.getId(), destStore);
// Create a volume on managed storage.
AsyncCallFuture<VolumeApiResult> createVolumeFuture = createVolumeAsync(destVolume, destStore);
VolumeApiResult createVolumeResult = createVolumeFuture.get();
if (createVolumeResult.isFailed()) {
throw new CloudRuntimeException("Creation of a dest volume failed: " + createVolumeResult.getResult());
}
// Refresh the volume info from the DB.
destVolume = volFactory.getVolume(destVolume.getId(), destStore);
PrimaryDataStore srcPrimaryDataStore = (PrimaryDataStore) srcVolume.getDataStore();
if (srcPrimaryDataStore.isManaged()) {
Map<String, String> srcPrimaryDataStoreDetails = new HashMap<String, String>();
srcPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED, Boolean.TRUE.toString());
srcPrimaryDataStoreDetails.put(PrimaryDataStore.STORAGE_HOST, srcPrimaryDataStore.getHostAddress());
srcPrimaryDataStoreDetails.put(PrimaryDataStore.STORAGE_PORT, String.valueOf(srcPrimaryDataStore.getPort()));
srcPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED_STORE_TARGET, srcVolume.get_iScsiName());
srcPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, srcVolume.getName());
srcPrimaryDataStoreDetails.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(srcVolume.getSize()));
srcPrimaryDataStoreDetails.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(srcPrimaryDataStore.getId())));
srcPrimaryDataStore.setDetails(srcPrimaryDataStoreDetails);
grantAccess(srcVolume, hostWithPoolsAccess, srcVolume.getDataStore());
}
PrimaryDataStore destPrimaryDataStore = (PrimaryDataStore) destStore;
Map<String, String> destPrimaryDataStoreDetails = new HashMap<String, String>();
destPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED, Boolean.TRUE.toString());
destPrimaryDataStoreDetails.put(PrimaryDataStore.STORAGE_HOST, destPrimaryDataStore.getHostAddress());
destPrimaryDataStoreDetails.put(PrimaryDataStore.STORAGE_PORT, String.valueOf(destPrimaryDataStore.getPort()));
destPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED_STORE_TARGET, destVolume.get_iScsiName());
destPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, destVolume.getName());
destPrimaryDataStoreDetails.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(destVolume.getSize()));
destPrimaryDataStoreDetails.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(destPrimaryDataStore.getId())));
destPrimaryDataStore.setDetails(destPrimaryDataStoreDetails);
grantAccess(destVolume, hostWithPoolsAccess, destStore);
destVolume.processEvent(Event.CreateRequested);
srcVolume.processEvent(Event.MigrationRequested);
CopyManagedVolumeContext<VolumeApiResult> context = new CopyManagedVolumeContext<VolumeApiResult>(null, future, srcVolume, destVolume, hostWithPoolsAccess);
AsyncCallbackDispatcher<VolumeServiceImpl, CopyCommandResult> caller = AsyncCallbackDispatcher.create(this);
caller.setCallback(caller.getTarget().copyManagedVolumeCallBack(null, null)).setContext(context);
motionSrv.copyAsync(srcVolume, destVolume, hostWithPoolsAccess, caller);
} catch (Exception e) {
s_logger.error("Copy to managed volume failed due to: " + e);
if(s_logger.isDebugEnabled()) {
s_logger.debug("Copy to managed volume failed.", e);
}
res.setResult(e.toString());
future.complete(res);
}
return future;
}
protected Void copyManagedVolumeCallBack(AsyncCallbackDispatcher<VolumeServiceImpl, CopyCommandResult> callback, CopyManagedVolumeContext<VolumeApiResult> context) {
VolumeInfo srcVolume = context.srcVolume;
VolumeInfo destVolume = context.destVolume;
Host host = context.host;
CopyCommandResult result = callback.getResult();
AsyncCallFuture<VolumeApiResult> future = context.future;
VolumeApiResult res = new VolumeApiResult(destVolume);
try {
if (srcVolume.getDataStore() != null && ((PrimaryDataStore) srcVolume.getDataStore()).isManaged()) {
revokeAccess(srcVolume, host, srcVolume.getDataStore());
}
revokeAccess(destVolume, host, destVolume.getDataStore());
if (result.isFailed()) {
res.setResult(result.getResult());
destVolume.processEvent(Event.MigrationCopyFailed);
srcVolume.processEvent(Event.OperationFailed);
try {
destroyVolume(destVolume.getId());
destVolume = volFactory.getVolume(destVolume.getId());
AsyncCallFuture<VolumeApiResult> destVolumeDestroyFuture = expungeVolumeAsync(destVolume);
destVolumeDestroyFuture.get();
// If dest managed volume destroy fails, wait and retry.
if (destVolumeDestroyFuture.get().isFailed()) {
Thread.sleep(5 * 1000);
destVolumeDestroyFuture = expungeVolumeAsync(destVolume);
destVolumeDestroyFuture.get();
}
future.complete(res);
} catch (Exception e) {
s_logger.debug("failed to clean up managed volume on storage", e);
}
} else {
srcVolume.processEvent(Event.OperationSuccessed);
destVolume.processEvent(Event.MigrationCopySucceeded, result.getAnswer());
volDao.updateUuid(srcVolume.getId(), destVolume.getId());
try {
destroyVolume(srcVolume.getId());
srcVolume = volFactory.getVolume(srcVolume.getId());
AsyncCallFuture<VolumeApiResult> srcVolumeDestroyFuture = expungeVolumeAsync(srcVolume);
// If src volume destroy fails, wait and retry.
if (srcVolumeDestroyFuture.get().isFailed()) {
Thread.sleep(5 * 1000);
srcVolumeDestroyFuture = expungeVolumeAsync(srcVolume);
srcVolumeDestroyFuture.get();
}
future.complete(res);
} catch (Exception e) {
s_logger.debug("failed to clean up volume on storage", e);
}
}
} catch (Exception e) {
s_logger.debug("Failed to process copy managed volume callback", e);
res.setResult(e.toString());
future.complete(res);
}
return null;
}
private boolean requiresNewManagedVolumeInDestStore(PrimaryDataStore srcDataStore, PrimaryDataStore destDataStore) {
if (srcDataStore == null || destDataStore == null) {
s_logger.warn("Unable to check for new volume, either src or dest pool is null");
return false;
}
if (srcDataStore.getPoolType() == StoragePoolType.PowerFlex && destDataStore.getPoolType() == StoragePoolType.PowerFlex) {
if (srcDataStore.getId() == destDataStore.getId()) {
return false;
}
final String STORAGE_POOL_SYSTEM_ID = "powerflex.storagepool.system.id";
String srcPoolSystemId = null;
StoragePoolDetailVO srcPoolSystemIdDetail = _storagePoolDetailsDao.findDetail(srcDataStore.getId(), STORAGE_POOL_SYSTEM_ID);
if (srcPoolSystemIdDetail != null) {
srcPoolSystemId = srcPoolSystemIdDetail.getValue();
}
String destPoolSystemId = null;
StoragePoolDetailVO destPoolSystemIdDetail = _storagePoolDetailsDao.findDetail(destDataStore.getId(), STORAGE_POOL_SYSTEM_ID);
if (destPoolSystemIdDetail != null) {
destPoolSystemId = destPoolSystemIdDetail.getValue();
}
if (Strings.isNullOrEmpty(srcPoolSystemId) || Strings.isNullOrEmpty(destPoolSystemId)) {
s_logger.warn("PowerFlex src pool: " + srcDataStore.getId() + " or dest pool: " + destDataStore.getId() +
" storage instance details are not available");
return false;
}
if (!srcPoolSystemId.equals(destPoolSystemId)) {
s_logger.debug("PowerFlex src pool: " + srcDataStore.getId() + " and dest pool: " + destDataStore.getId() +
" belong to different storage instances, create a new managed volume");
return true;
}
}
// No new volume is required in any other case (extend here for future cases)
return false;
}
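The decision above can be reduced to a small standalone predicate (names here are illustrative, not the CloudStack API): a new managed volume is required only when both pools are PowerFlex, the pools are distinct, and both report non-empty system IDs that differ.

```java
// Hypothetical standalone sketch of the requiresNewManagedVolumeInDestStore logic.
public class PowerFlexMigrationCheck {
    // Returns true only when src and dest are different PowerFlex pools
    // backed by different storage instances (different system IDs).
    static boolean requiresNewManagedVolume(long srcPoolId, String srcSystemId,
                                            long destPoolId, String destSystemId,
                                            boolean srcIsPowerFlex, boolean destIsPowerFlex) {
        if (!srcIsPowerFlex || !destIsPowerFlex) {
            return false;
        }
        if (srcPoolId == destPoolId) {
            return false; // same pool, nothing new to create
        }
        if (srcSystemId == null || srcSystemId.isEmpty()
                || destSystemId == null || destSystemId.isEmpty()) {
            return false; // storage instance details unavailable, fall back
        }
        return !srcSystemId.equals(destSystemId);
    }
}
```

Note the conservative default: whenever the system-id details are missing, the method returns false, so migration falls back to the regular copy path rather than provisioning a new managed volume.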
private class MigrateVolumeContext<T> extends AsyncRpcContext<T> {
final VolumeInfo srcVolume;
final VolumeInfo destVolume;
@@ -1569,7 +2109,7 @@ public class VolumeServiceImpl implements VolumeService {
caller.setCallback(caller.getTarget().migrateVolumeCallBack(null, null)).setContext(context);
motionSrv.copyAsync(srcVolume, destVolume, caller);
} catch (Exception e) {
s_logger.debug("Failed to copy volume", e);
s_logger.debug("Failed to migrate volume", e);
res.setResult(e.toString());
future.complete(res);
}
@@ -1588,6 +2128,10 @@ public class VolumeServiceImpl implements VolumeService {
future.complete(res);
} else {
srcVolume.processEvent(Event.OperationSuccessed);
if (srcVolume.getStoragePoolType() == StoragePoolType.PowerFlex) {
future.complete(res);
return null;
}
snapshotMgr.cleanupSnapshotsByVolume(srcVolume.getId());
future.complete(res);
}
@@ -2139,4 +2683,4 @@ public class VolumeServiceImpl implements VolumeService {
volDao.remove(vol.getId());
}
}
}
}


@@ -33,4 +33,9 @@ public interface DirectDownloadService {
* Upload a stored certificate on database with id 'certificateId' to host with id 'hostId'
*/
boolean uploadCertificate(long certificateId, long hostId);
/**
* Sync the stored certificates to host with id 'hostId'
*/
boolean syncCertificatesToHost(long hostId, long zoneId);
}


@@ -72,6 +72,12 @@
<artifactId>jna-platform</artifactId>
<version>${cs.jna.version}</version>
</dependency>
<dependency>
<groupId>org.apache.cloudstack</groupId>
<artifactId>cloud-plugin-storage-volume-scaleio</artifactId>
<version>${project.version}</version>
<scope>compile</scope>
</dependency>
</dependencies>
<build>
<plugins>


@@ -46,9 +46,7 @@ import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceAgentExecutor;
import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceExecutor;
import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceServiceExecutor;
import org.apache.cloudstack.storage.configdrive.ConfigDrive;
import org.apache.cloudstack.storage.to.PrimaryDataStoreTO;
import org.apache.cloudstack.storage.to.TemplateObjectTO;
import org.apache.cloudstack.storage.to.VolumeObjectTO;
@@ -91,6 +89,7 @@ import com.cloud.agent.api.HostVmStateReportEntry;
import com.cloud.agent.api.PingCommand;
import com.cloud.agent.api.PingRoutingCommand;
import com.cloud.agent.api.PingRoutingWithNwGroupsCommand;
import com.cloud.agent.api.SecurityGroupRulesCmd;
import com.cloud.agent.api.SetupGuestNetworkCommand;
import com.cloud.agent.api.StartupCommand;
import com.cloud.agent.api.StartupRoutingCommand;
@@ -113,7 +112,6 @@ import com.cloud.agent.dao.impl.PropertiesStorage;
import com.cloud.agent.resource.virtualnetwork.VRScripts;
import com.cloud.agent.resource.virtualnetwork.VirtualRouterDeployer;
import com.cloud.agent.resource.virtualnetwork.VirtualRoutingResource;
import com.cloud.agent.api.SecurityGroupRulesCmd;
import com.cloud.dc.Vlan;
import com.cloud.exception.InternalErrorException;
import com.cloud.host.Host.Type;
@@ -146,6 +144,9 @@ import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.VideoDef;
import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef;
import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef.WatchDogAction;
import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef.WatchDogModel;
import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceAgentExecutor;
import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceExecutor;
import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceServiceExecutor;
import com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper;
import com.cloud.hypervisor.kvm.resource.wrapper.LibvirtUtilitiesHelper;
import com.cloud.hypervisor.kvm.storage.IscsiStorageCleanupMonitor;
@@ -239,6 +240,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
public static final String SSHPUBKEYPATH = SSHKEYSPATH + File.separator + "id_rsa.pub.cloud";
public static final String DEFAULTDOMRSSHPORT = "3922";
public final static String HOST_CACHE_PATH_PARAMETER = "host.cache.location";
public final static String CONFIG_DIR = "config";
public static final String BASH_SCRIPT_PATH = "/bin/bash";
private String _mountPoint = "/mnt";
@@ -518,6 +522,14 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
return directDownloadTemporaryDownloadPath;
}
public String getConfigPath() {
return getCachePath() + "/" + CONFIG_DIR;
}
public String getCachePath() {
return cachePath;
}
public String getResizeVolumePath() {
return _resizeVolumePath;
}
@@ -570,6 +582,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
protected boolean dpdkSupport = false;
protected String dpdkOvsPath;
protected String directDownloadTemporaryDownloadPath;
protected String cachePath;
private String getEndIpFromStartIp(final String startIp, final int numIps) {
final String[] tokens = startIp.split("[.]");
@@ -621,6 +634,10 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
return "/var/lib/libvirt/images";
}
private String getDefaultCachePath() {
return "/var/cache/cloud";
}
protected String getDefaultNetworkScriptsDir() {
return "scripts/vm/network/vnet";
}
@@ -710,6 +727,11 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
directDownloadTemporaryDownloadPath = getDefaultDirectDownloadTemporaryPath();
}
cachePath = (String) params.get(HOST_CACHE_PATH_PARAMETER);
if (org.apache.commons.lang.StringUtils.isBlank(cachePath)) {
cachePath = getDefaultCachePath();
}
params.put("domr.scripts.dir", domrScriptsDir);
_virtRouterResource = new VirtualRoutingResource(this);
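The agent-side defaulting above (the new `host.cache.location` property falling back to `/var/cache/cloud`, with config drives under a `config` subdirectory) can be sketched in isolation as:

```java
// Minimal sketch of host cache path resolution on the KVM agent
// (helper names are illustrative, not the LibvirtComputingResource API).
public class HostCachePath {
    static final String DEFAULT_CACHE_PATH = "/var/cache/cloud";
    static final String CONFIG_DIR = "config";

    // Fall back to the default when host.cache.location is unset or blank.
    static String resolveCachePath(String configured) {
        return (configured == null || configured.trim().isEmpty()) ? DEFAULT_CACHE_PATH : configured;
    }

    // Config drives live in a "config" subdirectory of the cache path.
    static String configPath(String configured) {
        return resolveCachePath(configured) + "/" + CONFIG_DIR;
    }
}
```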
@@ -2461,11 +2483,21 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
public String getVolumePath(final Connect conn, final DiskTO volume) throws LibvirtException, URISyntaxException {
return getVolumePath(conn, volume, false);
}
public String getVolumePath(final Connect conn, final DiskTO volume, boolean diskOnHostCache) throws LibvirtException, URISyntaxException {
final DataTO data = volume.getData();
final DataStoreTO store = data.getDataStore();
if (volume.getType() == Volume.Type.ISO && data.getPath() != null && (store instanceof NfsTO ||
store instanceof PrimaryDataStoreTO && data instanceof TemplateObjectTO && !((TemplateObjectTO) data).isDirectDownload())) {
if (data.getPath().startsWith(ConfigDrive.CONFIGDRIVEDIR) && diskOnHostCache) {
String configDrivePath = getConfigPath() + "/" + data.getPath();
return configDrivePath;
}
final String isoPath = store.getUrl().split("\\?")[0] + File.separator + data.getPath();
final int index = isoPath.lastIndexOf("/");
final String path = isoPath.substring(0, index);
@@ -2503,7 +2535,11 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
if (volume.getType() == Volume.Type.ISO && data.getPath() != null) {
DataStoreTO dataStore = data.getDataStore();
String dataStoreUrl = null;
if (dataStore instanceof NfsTO) {
if (data.getPath().startsWith(ConfigDrive.CONFIGDRIVEDIR) && vmSpec.isConfigDriveOnHostCache() && data instanceof TemplateObjectTO) {
String configDrivePath = getConfigPath() + "/" + data.getPath();
physicalDisk = new KVMPhysicalDisk(configDrivePath, ((TemplateObjectTO) data).getUuid(), null);
physicalDisk.setFormat(PhysicalDiskFormat.FILE);
} else if (dataStore instanceof NfsTO) {
NfsTO nfsStore = (NfsTO)data.getDataStore();
dataStoreUrl = nfsStore.getUrl();
physicalDisk = getPhysicalDiskFromNfsStore(dataStoreUrl, data);
@@ -2592,6 +2628,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
*/
disk.defNetworkBasedDisk(physicalDisk.getPath().replace("rbd:", ""), pool.getSourceHost(), pool.getSourcePort(), pool.getAuthUserName(),
pool.getUuid(), devId, diskBusType, DiskProtocol.RBD, DiskDef.DiskFmtType.RAW);
} else if (pool.getType() == StoragePoolType.PowerFlex) {
disk.defBlockBasedDisk(physicalDisk.getPath(), devId, diskBusTypeData);
} else if (pool.getType() == StoragePoolType.Gluster) {
final String mountpoint = pool.getLocalPath();
final String path = physicalDisk.getPath();
@@ -2675,7 +2713,6 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
}
}
}
private KVMPhysicalDisk getPhysicalDiskPrimaryStore(PrimaryDataStoreTO primaryDataStoreTO, DataTO data) {
@@ -2837,6 +2874,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
if (attachingPool.getType() == StoragePoolType.RBD) {
diskdef.defNetworkBasedDisk(attachingDisk.getPath(), attachingPool.getSourceHost(), attachingPool.getSourcePort(), attachingPool.getAuthUserName(),
attachingPool.getUuid(), devId, busT, DiskProtocol.RBD, DiskDef.DiskFmtType.RAW);
} else if (attachingPool.getType() == StoragePoolType.PowerFlex) {
diskdef.defBlockBasedDisk(attachingDisk.getPath(), devId, busT);
} else if (attachingPool.getType() == StoragePoolType.Gluster) {
diskdef.defNetworkBasedDisk(attachingDisk.getPath(), attachingPool.getSourceHost(), attachingPool.getSourcePort(), null,
null, devId, busT, DiskProtocol.GLUSTER, DiskDef.DiskFmtType.QCOW2);


@@ -18,7 +18,7 @@ package com.cloud.hypervisor.kvm.resource;
public class LibvirtStoragePoolDef {
public enum PoolType {
ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"), DIR("dir"), RBD("rbd"), GLUSTERFS("glusterfs");
ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"), DIR("dir"), RBD("rbd"), GLUSTERFS("glusterfs"), POWERFLEX("powerflex");
String _poolType;
PoolType(String poolType) {
@@ -178,7 +178,7 @@ public class LibvirtStoragePoolDef {
storagePoolBuilder.append("'/>\n");
storagePoolBuilder.append("</source>\n");
}
if (_poolType != PoolType.RBD) {
if (_poolType != PoolType.RBD && _poolType != PoolType.POWERFLEX) {
storagePoolBuilder.append("<target>\n");
storagePoolBuilder.append("<path>" + _targetPath + "</path>\n");
storagePoolBuilder.append("</target>\n");
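The effect of the change above is that a powerflex-type pool definition, like an RBD one, omits the `<target>` element from the generated libvirt pool XML. A simplified sketch of that branch (not the real `toString()`):

```java
// Simplified sketch: only pool types with a local mount emit a <target> element;
// network-backed types (rbd, powerflex) carry no local target path.
public class PoolXmlSketch {
    static String targetSection(String poolType, String targetPath) {
        if ("rbd".equals(poolType) || "powerflex".equals(poolType)) {
            return "";
        }
        return "<target>\n<path>" + targetPath + "</path>\n</target>\n";
    }
}
```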


@@ -55,7 +55,7 @@ public class LibvirtStoragePoolXMLParser {
String host = getAttrValue("host", "name", source);
String format = getAttrValue("format", "type", source);
if (type.equalsIgnoreCase("rbd")) {
if (type.equalsIgnoreCase("rbd") || type.equalsIgnoreCase("powerflex")) {
int port = 0;
String xmlPort = getAttrValue("host", "port", source);
if (StringUtils.isNotBlank(xmlPort)) {


@@ -18,13 +18,15 @@
//
package com.cloud.hypervisor.kvm.resource.wrapper;
import org.apache.cloudstack.agent.directdownload.CheckUrlAnswer;
import org.apache.cloudstack.agent.directdownload.CheckUrlCommand;
import org.apache.log4j.Logger;
import com.cloud.hypervisor.kvm.resource.LibvirtComputingResource;
import com.cloud.resource.CommandWrapper;
import com.cloud.resource.ResourceWrapper;
import com.cloud.utils.UriUtils;
import org.apache.cloudstack.agent.directdownload.CheckUrlAnswer;
import org.apache.cloudstack.agent.directdownload.CheckUrlCommand;
import org.apache.log4j.Logger;
import com.cloud.utils.storage.QCOW2Utils;
@ResourceWrapper(handles = CheckUrlCommand.class)
public class LibvirtCheckUrlCommand extends CommandWrapper<CheckUrlCommand, CheckUrlAnswer, LibvirtComputingResource> {
@@ -39,7 +41,12 @@ public class LibvirtCheckUrlCommand extends CommandWrapper<CheckUrlCommand, Chec
Long remoteSize = null;
try {
UriUtils.checkUrlExistence(url);
remoteSize = UriUtils.getRemoteSize(url);
if ("qcow2".equalsIgnoreCase(cmd.getFormat())) {
remoteSize = QCOW2Utils.getVirtualSize(url);
} else {
remoteSize = UriUtils.getRemoteSize(url);
}
}
catch (IllegalArgumentException e) {
s_logger.warn(e.getMessage());
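For qcow2 templates the size check above reads the image's virtual size from the QCOW2 header instead of trusting the remote file length. In the QCOW2 format the virtual disk size is a 64-bit big-endian field at byte offset 24, after the magic (`QFI\xfb`) and version fields. A minimal sketch of that parse (not the actual `QCOW2Utils` implementation):

```java
import java.nio.ByteBuffer;

// Sketch: extract the virtual size field from the start of a qcow2 header.
public class Qcow2HeaderSketch {
    static final int QCOW2_MAGIC = 0x514649FB; // bytes "QFI\xfb"

    static long virtualSize(byte[] header) {
        ByteBuffer buf = ByteBuffer.wrap(header); // ByteBuffer defaults to big-endian
        if (buf.getInt(0) != QCOW2_MAGIC) {
            throw new IllegalArgumentException("not a qcow2 image");
        }
        return buf.getLong(24); // virtual disk size in bytes, offset 24
    }
}
```

In practice only the first few dozen bytes need to be fetched (e.g. with an HTTP range request), which is why this is cheap enough to do at template registration time.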


@@ -50,7 +50,12 @@ public final class LibvirtGetVolumeStatsCommandWrapper extends CommandWrapper<Ge
StoragePoolType poolType = cmd.getPoolType();
HashMap<String, VolumeStatsEntry> statEntry = new HashMap<String, VolumeStatsEntry>();
for (String volumeUuid : cmd.getVolumeUuids()) {
statEntry.put(volumeUuid, getVolumeStat(libvirtComputingResource, conn, volumeUuid, storeUuid, poolType));
VolumeStatsEntry volumeStatsEntry = getVolumeStat(libvirtComputingResource, conn, volumeUuid, storeUuid, poolType);
if (volumeStatsEntry == null) {
String msg = "Cannot get disk stats; pool or disk details are unavailable for volume: " + volumeUuid + " on the storage pool: " + storeUuid;
return new GetVolumeStatsAnswer(cmd, msg, null);
}
statEntry.put(volumeUuid, volumeStatsEntry);
}
return new GetVolumeStatsAnswer(cmd, "", statEntry);
} catch (LibvirtException | CloudRuntimeException e) {
@@ -58,10 +63,17 @@ public final class LibvirtGetVolumeStatsCommandWrapper extends CommandWrapper<Ge
}
}
private VolumeStatsEntry getVolumeStat(final LibvirtComputingResource libvirtComputingResource, final Connect conn, final String volumeUuid, final String storeUuid, final StoragePoolType poolType) throws LibvirtException {
KVMStoragePool sourceKVMPool = libvirtComputingResource.getStoragePoolMgr().getStoragePool(poolType, storeUuid);
if (sourceKVMPool == null) {
return null;
}
KVMPhysicalDisk sourceKVMVolume = sourceKVMPool.getPhysicalDisk(volumeUuid);
if (sourceKVMVolume == null) {
return null;
}
return new VolumeStatsEntry(volumeUuid, sourceKVMVolume.getSize(), sourceKVMVolume.getVirtualSize());
}
}


@@ -24,16 +24,21 @@ import java.nio.file.Path;
import java.nio.file.Paths;
import org.apache.cloudstack.storage.configdrive.ConfigDriveBuilder;
import org.apache.cloudstack.storage.to.PrimaryDataStoreTO;
import org.apache.log4j.Logger;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.HandleConfigDriveIsoAnswer;
import com.cloud.agent.api.HandleConfigDriveIsoCommand;
import com.cloud.agent.api.to.DataStoreTO;
import com.cloud.hypervisor.kvm.resource.LibvirtComputingResource;
import com.cloud.hypervisor.kvm.storage.KVMStoragePool;
import com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager;
import com.cloud.network.element.NetworkElement;
import com.cloud.resource.CommandWrapper;
import com.cloud.resource.ResourceWrapper;
import com.cloud.storage.Storage;
import com.cloud.utils.exception.CloudRuntimeException;
@ResourceWrapper(handles = HandleConfigDriveIsoCommand.class)
public final class LibvirtHandleConfigDriveCommandWrapper extends CommandWrapper<HandleConfigDriveIsoCommand, Answer, LibvirtComputingResource> {
@@ -41,38 +46,103 @@ public final class LibvirtHandleConfigDriveCommandWrapper extends CommandWrapper
@Override
public Answer execute(final HandleConfigDriveIsoCommand command, final LibvirtComputingResource libvirtComputingResource) {
final KVMStoragePoolManager storagePoolMgr = libvirtComputingResource.getStoragePoolMgr();
final KVMStoragePool pool = storagePoolMgr.getStoragePool(Storage.StoragePoolType.NetworkFilesystem, command.getDestStore().getUuid());
if (pool == null) {
return new Answer(command, false, "Pool not found, config drive for KVM is only supported for NFS");
}
String mountPoint = null;
try {
if (command.isCreate()) {
LOG.debug("Creating config drive: " + command.getIsoFile());
NetworkElement.Location location = NetworkElement.Location.PRIMARY;
if (command.isHostCachePreferred()) {
LOG.debug("Using the KVM host for config drive");
mountPoint = libvirtComputingResource.getConfigPath();
location = NetworkElement.Location.HOST;
} else {
final KVMStoragePoolManager storagePoolMgr = libvirtComputingResource.getStoragePoolMgr();
KVMStoragePool pool = null;
String poolUuid = null;
Storage.StoragePoolType poolType = null;
DataStoreTO dataStoreTO = command.getDestStore();
if (dataStoreTO != null) {
if (dataStoreTO instanceof PrimaryDataStoreTO) {
PrimaryDataStoreTO primaryDataStoreTO = (PrimaryDataStoreTO) dataStoreTO;
poolType = primaryDataStoreTO.getPoolType();
} else {
poolType = Storage.StoragePoolType.NetworkFilesystem;
}
poolUuid = command.getDestStore().getUuid();
pool = storagePoolMgr.getStoragePool(poolType, poolUuid);
}
if (pool == null || poolType == null) {
return new HandleConfigDriveIsoAnswer(command, "Unable to create config drive, Pool " + (poolUuid != null ? poolUuid : "") + " not found");
}
if (pool.supportsConfigDriveIso()) {
LOG.debug("Using the pool: " + poolUuid + " for config drive");
mountPoint = pool.getLocalPath();
} else if (command.getUseHostCacheOnUnsupportedPool()) {
LOG.debug("Config drive for KVM is not supported for pool type: " + poolType.toString() + ", using the KVM host");
mountPoint = libvirtComputingResource.getConfigPath();
location = NetworkElement.Location.HOST;
} else {
LOG.debug("Config drive for KVM is not supported for pool type: " + poolType.toString());
return new HandleConfigDriveIsoAnswer(command, "Config drive for KVM is not supported for pool type: " + poolType.toString());
}
}
Path isoPath = Paths.get(mountPoint, command.getIsoFile());
File isoFile = new File(mountPoint, command.getIsoFile());
if (command.getIsoData() == null) {
return new HandleConfigDriveIsoAnswer(command, "Invalid config drive ISO data received");
}
if (isoFile.exists()) {
LOG.debug("An old config drive iso already exists");
}
final String mountPoint = pool.getLocalPath();
final Path isoPath = Paths.get(mountPoint, command.getIsoFile());
final File isoFile = new File(mountPoint, command.getIsoFile());
if (command.isCreate()) {
LOG.debug("Creating config drive: " + command.getIsoFile());
if (command.getIsoData() == null) {
return new Answer(command, false, "Invalid config drive ISO data received");
}
if (isoFile.exists()) {
LOG.debug("An old config drive iso already exists");
}
try {
Files.createDirectories(isoPath.getParent());
ConfigDriveBuilder.base64StringToFile(command.getIsoData(), mountPoint, command.getIsoFile());
} catch (IOException e) {
return new Answer(command, false, "Failed due to exception: " + e.getMessage());
}
} else {
try {
Files.deleteIfExists(isoPath);
} catch (IOException e) {
LOG.warn("Failed to delete config drive: " + isoPath.toAbsolutePath().toString());
return new Answer(command, false, "Failed due to exception: " + e.getMessage());
}
}
return new Answer(command);
return new HandleConfigDriveIsoAnswer(command, location);
} else {
LOG.debug("Deleting config drive: " + command.getIsoFile());
Path configDrivePath = null;
if (command.isHostCachePreferred()) {
// Check and delete config drive in host storage if exists
mountPoint = libvirtComputingResource.getConfigPath();
configDrivePath = Paths.get(mountPoint, command.getIsoFile());
Files.deleteIfExists(configDrivePath);
} else {
final KVMStoragePoolManager storagePoolMgr = libvirtComputingResource.getStoragePoolMgr();
KVMStoragePool pool = null;
DataStoreTO dataStoreTO = command.getDestStore();
if (dataStoreTO != null) {
if (dataStoreTO instanceof PrimaryDataStoreTO) {
PrimaryDataStoreTO primaryDataStoreTO = (PrimaryDataStoreTO) dataStoreTO;
Storage.StoragePoolType poolType = primaryDataStoreTO.getPoolType();
pool = storagePoolMgr.getStoragePool(poolType, command.getDestStore().getUuid());
} else {
pool = storagePoolMgr.getStoragePool(Storage.StoragePoolType.NetworkFilesystem, command.getDestStore().getUuid());
}
}
if (pool != null && pool.supportsConfigDriveIso()) {
mountPoint = pool.getLocalPath();
configDrivePath = Paths.get(mountPoint, command.getIsoFile());
Files.deleteIfExists(configDrivePath);
}
}
return new HandleConfigDriveIsoAnswer(command);
}
} catch (final IOException e) {
LOG.debug("Failed to handle config drive due to " + e.getMessage(), e);
return new HandleConfigDriveIsoAnswer(command, "Failed due to exception: " + e.getMessage());
} catch (final CloudRuntimeException e) {
LOG.debug("Failed to handle config drive due to " + e.getMessage(), e);
return new HandleConfigDriveIsoAnswer(command, "Failed due to exception: " + e.toString());
}
}
}
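The create path above resolves the config drive location in a fixed order: host cache when explicitly preferred, else the destination pool if it supports config drive ISOs, else host cache when the `use.host.cache.on.unsupported.pool` fallback allows it, else fail. That precedence can be sketched compactly (illustrative names, not the wrapper's API):

```java
// Sketch of the config drive placement precedence in the create path.
public class ConfigDrivePlacement {
    enum Location { HOST, POOL, UNSUPPORTED }

    static Location choose(boolean hostCachePreferred,
                           boolean poolSupportsConfigDriveIso,
                           boolean useHostCacheOnUnsupportedPool) {
        if (hostCachePreferred) {
            return Location.HOST;      // forced onto the KVM host cache
        }
        if (poolSupportsConfigDriveIso) {
            return Location.POOL;      // pool (e.g. NFS) hosts the ISO itself
        }
        // pool can't host it; host cache only if the fallback setting allows
        return useHostCacheOnUnsupportedPool ? Location.HOST : Location.UNSUPPORTED;
    }
}
```

The delete path mirrors this: it checks the same location that would have been chosen at create time, which is why the answer carries a `NetworkElement.Location` back to the management server.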


@@ -19,11 +19,22 @@
package com.cloud.hypervisor.kvm.resource.wrapper;
import java.net.URISyntaxException;
import java.util.HashMap;
import java.util.Map;
import org.apache.cloudstack.storage.configdrive.ConfigDrive;
import org.apache.commons.collections.MapUtils;
import org.apache.log4j.Logger;
import org.libvirt.Connect;
import org.libvirt.LibvirtException;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.PrepareForMigrationAnswer;
import com.cloud.agent.api.PrepareForMigrationCommand;
import com.cloud.agent.api.to.DpdkTO;
import com.cloud.agent.api.to.DataTO;
import com.cloud.agent.api.to.DiskTO;
import com.cloud.agent.api.to.DpdkTO;
import com.cloud.agent.api.to.NicTO;
import com.cloud.agent.api.to.VirtualMachineTO;
import com.cloud.exception.InternalErrorException;
@@ -36,14 +47,6 @@ import com.cloud.resource.ResourceWrapper;
import com.cloud.storage.Volume;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.script.Script;
import org.apache.commons.collections.MapUtils;
import org.apache.log4j.Logger;
import org.libvirt.Connect;
import org.libvirt.LibvirtException;
import java.net.URISyntaxException;
import java.util.HashMap;
import java.util.Map;
@ResourceWrapper(handles = PrepareForMigrationCommand.class)
public final class LibvirtPrepareForMigrationCommandWrapper extends CommandWrapper<PrepareForMigrationCommand, Answer, LibvirtComputingResource> {
@@ -86,7 +89,12 @@ public final class LibvirtPrepareForMigrationCommandWrapper extends CommandWrapp
final DiskTO[] volumes = vm.getDisks();
for (final DiskTO volume : volumes) {
if (volume.getType() == Volume.Type.ISO) {
libvirtComputingResource.getVolumePath(conn, volume);
final DataTO data = volume.getData();
if (data != null && data.getPath() != null && data.getPath().startsWith(ConfigDrive.CONFIGDRIVEDIR)) {
libvirtComputingResource.getVolumePath(conn, volume, vm.isConfigDriveOnHostCache());
} else {
libvirtComputingResource.getVolumePath(conn, volume);
}
}
}


@@ -330,6 +330,12 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor {
@Override
public boolean disconnectPhysicalDisk(Map<String, String> volumeToDisconnect) {
String poolType = volumeToDisconnect.get(DiskTO.PROTOCOL_TYPE);
// Unsupported pool types
if (poolType != null && poolType.equalsIgnoreCase(StoragePoolType.PowerFlex.toString())) {
return false;
}
String host = volumeToDisconnect.get(DiskTO.STORAGE_HOST);
String port = volumeToDisconnect.get(DiskTO.STORAGE_PORT);
String path = volumeToDisconnect.get(DiskTO.IQN);
@ -447,7 +453,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor {
}
@Override
public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
return null;
}
}
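The guard above makes the iscsiadm adaptor skip disconnects for PowerFlex volumes, whose mappings are handled by the SDC rather than iSCSI. A minimal sketch of that predicate (hypothetical helper name; the real guard compares case-insensitively against `StoragePoolType.PowerFlex`):

```java
// Sketch: pool types managed outside iscsiadm should be skipped by the
// disconnect path, as the PowerFlex guard above does. A null pool type
// falls through to the normal iSCSI disconnect.
public class DisconnectGuard {
    static boolean shouldDisconnect(String poolType) {
        return poolType == null || !"PowerFlex".equalsIgnoreCase(poolType);
    }
}
```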

View File

@ -19,9 +19,9 @@ package com.cloud.hypervisor.kvm.storage;
import java.util.List;
import java.util.Map;
import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
import com.cloud.storage.Storage;
import com.cloud.storage.Storage.StoragePoolType;
public class IscsiAdmStoragePool implements KVMStoragePool {
@ -165,4 +165,9 @@ public class IscsiAdmStoragePool implements KVMStoragePool {
public String getLocalPath() {
return _localPath;
}
@Override
public boolean supportsConfigDriveIso() {
return false;
}
}

View File

@ -19,9 +19,9 @@ package com.cloud.hypervisor.kvm.storage;
import java.util.List;
import java.util.Map;
import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
import com.cloud.storage.Storage;
import com.cloud.storage.Storage.StoragePoolType;
public interface KVMStoragePool {
@ -70,4 +70,6 @@ public interface KVMStoragePool {
PhysicalDiskFormat getDefaultFormat();
public boolean createFolder(String path);
public boolean supportsConfigDriveIso();
}

View File

@ -22,15 +22,15 @@ import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.cloudstack.storage.to.PrimaryDataStoreTO;
import org.apache.cloudstack.storage.to.VolumeObjectTO;
import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
import org.apache.log4j.Logger;
import org.reflections.Reflections;
import com.cloud.agent.api.to.DiskTO;
import com.cloud.agent.api.to.VirtualMachineTO;
@ -44,8 +44,6 @@ import com.cloud.storage.Volume;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.vm.VirtualMachine;
public class KVMStoragePoolManager {
private static final Logger s_logger = Logger.getLogger(KVMStoragePoolManager.class);
@ -100,6 +98,7 @@ public class KVMStoragePoolManager {
// add other storage adaptors here
// this._storageMapper.put("newadaptor", new NewStorageAdaptor(storagelayer));
this._storageMapper.put(StoragePoolType.ManagedNFS.toString(), new ManagedNfsStorageAdaptor(storagelayer));
this._storageMapper.put(StoragePoolType.PowerFlex.toString(), new ScaleIOStorageAdaptor(storagelayer));
// add any adaptors that wish to register themselves via annotation
Reflections reflections = new Reflections("com.cloud.hypervisor.kvm.storage");
@ -253,7 +252,7 @@ public class KVMStoragePoolManager {
if (info != null) {
pool = createStoragePool(info.name, info.host, info.port, info.path, info.userInfo, info.poolType, info.type);
} else {
throw new CloudRuntimeException("Could not fetch storage pool " + uuid + " from libvirt due to " + e.getMessage());
}
}
return pool;
@ -286,36 +285,38 @@ public class KVMStoragePoolManager {
public KVMPhysicalDisk getPhysicalDisk(StoragePoolType type, String poolUuid, String volName) {
int cnt = 0;
int retries = 100;
KVMPhysicalDisk vol = null;
//Poll more frequently and return immediately once disk is found
String errMsg = "";
while (cnt < retries) {
try {
KVMStoragePool pool = getStoragePool(type, poolUuid);
vol = pool.getPhysicalDisk(volName);
if (vol != null) {
return vol;
}
} catch (Exception e) {
s_logger.debug("Failed to find volume:" + volName + " due to " + e.toString() + ", retry:" + cnt);
errMsg = e.toString();
}
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
s_logger.debug("[ignored] interrupted while trying to get storage pool.");
}
cnt++;
}
KVMStoragePool pool = getStoragePool(type, poolUuid);
vol = pool.getPhysicalDisk(volName);
if (vol == null) {
throw new CloudRuntimeException(errMsg);
} else {
return vol;
}
}
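The reworked loop above polls every 3 seconds for up to 100 tries, returns as soon as the disk appears, and remembers the last error for the final exception. The same pattern, sketched generically (illustrative names, not CloudStack API):

```java
import java.util.function.Supplier;

// Generic sketch of the poll-until-found pattern used above: try a lookup up
// to `retries` times with a fixed sleep, return the first non-null result,
// and surface the last failure if nothing is ever found.
public class Poller {
    static <T> T pollUntilFound(Supplier<T> lookup, int retries, long sleepMillis) {
        RuntimeException lastError = null;
        for (int cnt = 0; cnt < retries; cnt++) {
            try {
                T result = lookup.get();
                if (result != null) {
                    return result;   // return immediately once found
                }
            } catch (RuntimeException e) {
                lastError = e;       // remember the most recent failure
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        }
        throw lastError != null ? lastError : new IllegalStateException("not found after " + retries + " tries");
    }
}
```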
public KVMStoragePool createStoragePool(String name, String host, int port, String path, String userInfo, StoragePoolType type) {
@ -377,6 +378,10 @@ public class KVMStoragePoolManager {
return adaptor.createDiskFromTemplate(template, name,
PhysicalDiskFormat.DIR, provisioningType,
size, destPool, timeout);
} else if (destPool.getType() == StoragePoolType.PowerFlex) {
return adaptor.createDiskFromTemplate(template, name,
PhysicalDiskFormat.RAW, provisioningType,
size, destPool, timeout);
} else {
return adaptor.createDiskFromTemplate(template, name,
PhysicalDiskFormat.QCOW2, provisioningType,
@ -405,9 +410,9 @@ public class KVMStoragePoolManager {
return adaptor.createDiskFromTemplateBacking(template, name, format, size, destPool, timeout);
}
public KVMPhysicalDisk createPhysicalDiskFromDirectDownloadTemplate(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
StorageAdaptor adaptor = getStorageAdaptor(destPool.getType());
return adaptor.createTemplateFromDirectDownloadFile(templateFilePath, destTemplatePath, destPool, format, timeout);
}
}

View File

@ -37,7 +37,6 @@ import java.util.UUID;
import javax.naming.ConfigurationException;
import org.apache.cloudstack.agent.directdownload.DirectDownloadAnswer;
import org.apache.cloudstack.agent.directdownload.DirectDownloadCommand;
import org.apache.cloudstack.agent.directdownload.HttpDirectDownloadCommand;
@ -117,6 +116,8 @@ import com.cloud.storage.template.Processor.FormatInfo;
import com.cloud.storage.template.QCOW2Processor;
import com.cloud.storage.template.TemplateLocation;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.Pair;
import com.cloud.utils.UriUtils;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.script.Script;
import com.cloud.utils.storage.S3.S3Utils;
@ -255,11 +256,15 @@ public class KVMStorageProcessor implements StorageProcessor {
String path = details != null ? details.get("managedStoreTarget") : null;
if (!storagePoolMgr.connectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path, details)) {
s_logger.warn("Failed to connect physical disk at path: " + path + ", in storage pool id: " + primaryStore.getUuid());
}
primaryVol = storagePoolMgr.copyPhysicalDisk(tmplVol, path != null ? path : destTempl.getUuid(), primaryPool, cmd.getWaitInMillSeconds());
if (!storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path)) {
s_logger.warn("Failed to disconnect physical disk at path: " + path + ", in storage pool id: " + primaryStore.getUuid());
}
} else {
primaryVol = storagePoolMgr.copyPhysicalDisk(tmplVol, UUID.randomUUID().toString(), primaryPool, cmd.getWaitInMillSeconds());
}
@ -273,7 +278,7 @@ public class KVMStorageProcessor implements StorageProcessor {
final TemplateObjectTO newTemplate = new TemplateObjectTO();
newTemplate.setPath(primaryVol.getName());
newTemplate.setSize(primaryVol.getSize());
if (primaryPool.getType() == StoragePoolType.RBD || primaryPool.getType() == StoragePoolType.PowerFlex) {
newTemplate.setFormat(ImageFormat.RAW);
} else {
newTemplate.setFormat(ImageFormat.QCOW2);
@ -381,6 +386,27 @@ public class KVMStorageProcessor implements StorageProcessor {
if (primaryPool.getType() == StoragePoolType.CLVM) {
templatePath = ((NfsTO)imageStore).getUrl() + File.separator + templatePath;
vol = templateToPrimaryDownload(templatePath, primaryPool, volume.getUuid(), volume.getSize(), cmd.getWaitInMillSeconds());
} else if (primaryPool.getType() == StoragePoolType.PowerFlex) {
Map<String, String> details = primaryStore.getDetails();
String path = details != null ? details.get("managedStoreTarget") : null;
if (!storagePoolMgr.connectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), templatePath, details)) {
s_logger.warn("Failed to connect base template volume at path: " + templatePath + ", in storage pool id: " + primaryStore.getUuid());
}
BaseVol = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), templatePath);
if (BaseVol == null) {
s_logger.debug("Failed to get the physical disk for base template volume at path: " + templatePath);
throw new CloudRuntimeException("Failed to get the physical disk for base template volume at path: " + templatePath);
}
if (!storagePoolMgr.connectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path, details)) {
s_logger.warn("Failed to connect new volume at path: " + path + ", in storage pool id: " + primaryStore.getUuid());
}
vol = storagePoolMgr.copyPhysicalDisk(BaseVol, path != null ? path : volume.getUuid(), primaryPool, cmd.getWaitInMillSeconds());
storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path);
} else {
if (templatePath.contains("/mnt")) {
//upgrade issue: if the path contains the mount point, extract the volume uuid from the path
@ -1344,6 +1370,9 @@ public class KVMStorageProcessor implements StorageProcessor {
} catch (final InternalErrorException e) {
s_logger.debug("Failed to attach volume: " + vol.getPath() + ", due to ", e);
return new AttachAnswer(e.toString());
} catch (final CloudRuntimeException e) {
s_logger.debug("Failed to attach volume: " + vol.getPath() + ", due to ", e);
return new AttachAnswer(e.toString());
}
}
@ -1375,6 +1404,9 @@ public class KVMStorageProcessor implements StorageProcessor {
} catch (final InternalErrorException e) {
s_logger.debug("Failed to detach volume: " + vol.getPath() + ", due to ", e);
return new DettachAnswer(e.toString());
} catch (final CloudRuntimeException e) {
s_logger.debug("Failed to detach volume: " + vol.getPath() + ", due to ", e);
return new DettachAnswer(e.toString());
}
}
@ -1728,6 +1760,7 @@ public class KVMStorageProcessor implements StorageProcessor {
final PrimaryDataStoreTO pool = cmd.getDestPool();
DirectTemplateDownloader downloader;
KVMPhysicalDisk template;
KVMStoragePool destPool = null;
try {
s_logger.debug("Verifying temporary location for downloading the template exists on the host");
@ -1739,14 +1772,20 @@ public class KVMStorageProcessor implements StorageProcessor {
return new DirectDownloadAnswer(false, msg, true);
}
Long templateSize = null;
if (!org.apache.commons.lang.StringUtils.isBlank(cmd.getUrl())) {
String url = cmd.getUrl();
templateSize = UriUtils.getRemoteSize(url);
}
s_logger.debug("Checking for free space on the host for downloading the template with physical size: " + templateSize + " and virtual size: " + cmd.getTemplateSize());
if (!isEnoughSpaceForDownloadTemplateOnTemporaryLocation(templateSize)) {
String msg = "Not enough space on the defined temporary location to download the template " + cmd.getTemplateId();
s_logger.error(msg);
return new DirectDownloadAnswer(false, msg, true);
}
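The check above probes the template's physical size from the download URL and validates that against the temporary location, rather than trusting the advertised virtual size alone. A sketch of the size-selection logic (helper name and the fall-through for an unknown size are assumptions, not the CloudStack implementation):

```java
// Sketch: prefer the physical size probed from the URL when available,
// fall back to the advertised virtual size, then compare against the free
// bytes on the temporary download location.
public class SpaceCheck {
    static boolean isEnoughSpace(Long physicalSize, Long virtualSize, long freeBytes) {
        Long needed = physicalSize != null ? physicalSize : virtualSize;
        if (needed == null) {
            return true; // unknown size: proceed optimistically in this sketch
        }
        return needed <= freeBytes;
    }
}
```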
destPool = storagePoolMgr.getStoragePool(pool.getPoolType(), pool.getUuid());
downloader = getDirectTemplateDownloaderFromCommand(cmd, destPool, temporaryDownloadPath);
s_logger.debug("Trying to download template");
Pair<Boolean, String> result = downloader.downloadTemplate();
@ -1759,7 +1798,19 @@ public class KVMStorageProcessor implements StorageProcessor {
s_logger.warn("Couldn't validate template checksum");
return new DirectDownloadAnswer(false, "Checksum validation failed", false);
}
final TemplateObjectTO destTemplate = cmd.getDestData();
String destTemplatePath = (destTemplate != null) ? destTemplate.getPath() : null;
if (!storagePoolMgr.connectPhysicalDisk(pool.getPoolType(), pool.getUuid(), destTemplatePath, null)) {
s_logger.warn("Unable to connect physical disk at path: " + destTemplatePath + ", in storage pool id: " + pool.getUuid());
}
template = storagePoolMgr.createPhysicalDiskFromDirectDownloadTemplate(tempFilePath, destTemplatePath, destPool, cmd.getFormat(), cmd.getWaitInMillSeconds());
if (!storagePoolMgr.disconnectPhysicalDisk(pool.getPoolType(), pool.getUuid(), destTemplatePath)) {
s_logger.warn("Unable to disconnect physical disk at path: " + destTemplatePath + ", in storage pool id: " + pool.getUuid());
}
} catch (CloudRuntimeException e) {
s_logger.warn("Error downloading template " + cmd.getTemplateId() + " due to: " + e.getMessage());
return new DirectDownloadAnswer(false, "Unable to download template: " + e.getMessage(), true);
@ -1780,23 +1831,56 @@ public class KVMStorageProcessor implements StorageProcessor {
final ImageFormat destFormat = destVol.getFormat();
final DataStoreTO srcStore = srcData.getDataStore();
final DataStoreTO destStore = destData.getDataStore();
final PrimaryDataStoreTO srcPrimaryStore = (PrimaryDataStoreTO)srcStore;
final PrimaryDataStoreTO destPrimaryStore = (PrimaryDataStoreTO)destStore;
final String srcVolumePath = srcData.getPath();
final String destVolumePath = destData.getPath();
KVMStoragePool destPool = null;
try {
s_logger.debug("Copying src volume (id: " + srcVol.getId() + ", format: " + srcFormat + ", path: " + srcVolumePath + ", primary storage: [id: " + srcPrimaryStore.getId() + ", type: " + srcPrimaryStore.getPoolType() + "]) to dest volume (id: " +
destVol.getId() + ", format: " + destFormat + ", path: " + destVolumePath + ", primary storage: [id: " + destPrimaryStore.getId() + ", type: " + destPrimaryStore.getPoolType() + "]).");
if (srcPrimaryStore.isManaged()) {
if (!storagePoolMgr.connectPhysicalDisk(srcPrimaryStore.getPoolType(), srcPrimaryStore.getUuid(), srcVolumePath, srcPrimaryStore.getDetails())) {
s_logger.warn("Failed to connect src volume at path: " + srcVolumePath + ", in storage pool id: " + srcPrimaryStore.getUuid());
}
}
final KVMPhysicalDisk volume = storagePoolMgr.getPhysicalDisk(srcPrimaryStore.getPoolType(), srcPrimaryStore.getUuid(), srcVolumePath);
if (volume == null) {
s_logger.debug("Failed to get physical disk for volume: " + srcVolumePath);
throw new CloudRuntimeException("Failed to get physical disk for volume at path: " + srcVolumePath);
}
volume.setFormat(PhysicalDiskFormat.valueOf(srcFormat.toString()));
String destVolumeName = null;
if (destPrimaryStore.isManaged()) {
if (!storagePoolMgr.connectPhysicalDisk(destPrimaryStore.getPoolType(), destPrimaryStore.getUuid(), destVolumePath, destPrimaryStore.getDetails())) {
s_logger.warn("Failed to connect dest volume at path: " + destVolumePath + ", in storage pool id: " + destPrimaryStore.getUuid());
}
String managedStoreTarget = destPrimaryStore.getDetails() != null ? destPrimaryStore.getDetails().get("managedStoreTarget") : null;
destVolumeName = managedStoreTarget != null ? managedStoreTarget : destVolumePath;
} else {
final String volumeName = UUID.randomUUID().toString();
destVolumeName = volumeName + "." + destFormat.getFileExtension();
}
destPool = storagePoolMgr.getStoragePool(destPrimaryStore.getPoolType(), destPrimaryStore.getUuid());
storagePoolMgr.copyPhysicalDisk(volume, destVolumeName, destPool, cmd.getWaitInMillSeconds());
if (srcPrimaryStore.isManaged()) {
storagePoolMgr.disconnectPhysicalDisk(srcPrimaryStore.getPoolType(), srcPrimaryStore.getUuid(), srcVolumePath);
}
if (destPrimaryStore.isManaged()) {
storagePoolMgr.disconnectPhysicalDisk(destPrimaryStore.getPoolType(), destPrimaryStore.getUuid(), destVolumePath);
}
final VolumeObjectTO newVol = new VolumeObjectTO();
String path = destPrimaryStore.isManaged() ? destVolumeName : destVolumePath + File.separator + destVolumeName;
newVol.setPath(path);
newVol.setFormat(destFormat);
return new CopyCmdAnswer(newVol);
} catch (final CloudRuntimeException e) {

View File

@ -24,6 +24,10 @@ import java.util.List;
import java.util.Map;
import java.util.UUID;
import org.apache.cloudstack.utils.qemu.QemuImg;
import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
import org.apache.cloudstack.utils.qemu.QemuImgException;
import org.apache.cloudstack.utils.qemu.QemuImgFile;
import org.apache.commons.codec.binary.Base64;
import org.apache.log4j.Logger;
import org.libvirt.Connect;
@ -42,12 +46,6 @@ import com.ceph.rbd.RbdException;
import com.ceph.rbd.RbdImage;
import com.ceph.rbd.jna.RbdImageInfo;
import com.ceph.rbd.jna.RbdSnapInfo;
import com.cloud.exception.InternalErrorException;
import com.cloud.hypervisor.kvm.resource.LibvirtConnection;
import com.cloud.hypervisor.kvm.resource.LibvirtSecretDef;
@ -160,20 +158,20 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
}
@Override
public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
File sourceFile = new File(templateFilePath);
if (!sourceFile.exists()) {
throw new CloudRuntimeException("Direct download template file " + sourceFile + " does not exist on this host");
}
String templateUuid = UUID.randomUUID().toString();
if (Storage.ImageFormat.ISO.equals(format)) {
templateUuid += ".iso";
}
String destinationFile = destPool.getLocalPath() + File.separator + templateUuid;
if (destPool.getType() == StoragePoolType.NetworkFilesystem || destPool.getType() == StoragePoolType.Filesystem
|| destPool.getType() == StoragePoolType.SharedMountPoint) {
if (!Storage.ImageFormat.ISO.equals(format) && isTemplateExtractable(templateFilePath)) {
extractDownloadedTemplate(templateFilePath, destPool, destinationFile);
} else {
Script.runSimpleBashScript("mv " + templateFilePath + " " + destinationFile);
@ -451,11 +449,13 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
type = StoragePoolType.CLVM;
} else if (spd.getPoolType() == LibvirtStoragePoolDef.PoolType.GLUSTERFS) {
type = StoragePoolType.Gluster;
} else if (spd.getPoolType() == LibvirtStoragePoolDef.PoolType.POWERFLEX) {
type = StoragePoolType.PowerFlex;
}
LibvirtStoragePool pool = new LibvirtStoragePool(uuid, storage.getName(), type, this, storage);
if (pool.getType() != StoragePoolType.RBD)
if (pool.getType() != StoragePoolType.RBD && pool.getType() != StoragePoolType.PowerFlex)
pool.setLocalPath(spd.getTargetPath());
else
pool.setLocalPath("");
@ -545,7 +545,6 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
s_logger.debug("Failed to get physical disk:", e);
throw new CloudRuntimeException(e.toString());
}
}
@Override
@ -1022,7 +1021,6 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
}
}
return disk;
}

View File

@ -45,6 +45,7 @@ public class LibvirtStoragePool implements KVMStoragePool {
protected String authSecret;
protected String sourceHost;
protected int sourcePort;
protected String sourceDir;
public LibvirtStoragePool(String uuid, String name, StoragePoolType type, StorageAdaptor adaptor, StoragePool pool) {
@ -56,7 +57,6 @@ public class LibvirtStoragePool implements KVMStoragePool {
this.used = 0;
this.available = 0;
this._pool = pool;
}
public void setCapacity(long capacity) {
@ -101,7 +101,7 @@ public class LibvirtStoragePool implements KVMStoragePool {
@Override
public PhysicalDiskFormat getDefaultFormat() {
if (getStoragePoolType() == StoragePoolType.CLVM || getStoragePoolType() == StoragePoolType.RBD || getStoragePoolType() == StoragePoolType.PowerFlex) {
return PhysicalDiskFormat.RAW;
} else {
return PhysicalDiskFormat.QCOW2;
@ -271,4 +271,12 @@ public class LibvirtStoragePool implements KVMStoragePool {
public boolean createFolder(String path) {
return this._storageAdaptor.createFolder(this.uuid, path);
}
@Override
public boolean supportsConfigDriveIso() {
if (this.type == StoragePoolType.NetworkFilesystem) {
return true;
}
return false;
}
}

View File

@ -35,6 +35,7 @@ import com.cloud.hypervisor.kvm.resource.LibvirtStoragePoolDef;
import com.cloud.hypervisor.kvm.resource.LibvirtStoragePoolDef.PoolType;
import com.cloud.hypervisor.kvm.resource.LibvirtStorageVolumeDef;
import com.cloud.hypervisor.kvm.resource.LibvirtStorageVolumeXMLParser;
import com.cloud.storage.Storage;
import com.cloud.storage.Storage.ImageFormat;
import com.cloud.storage.Storage.ProvisioningType;
import com.cloud.storage.Storage.StoragePoolType;
@ -319,7 +320,7 @@ public class ManagedNfsStorageAdaptor implements StorageAdaptor {
}
@Override
public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
return null;
}

View File

@ -0,0 +1,393 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.hypervisor.kvm.storage;
import java.io.File;
import java.io.FileFilter;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
import org.apache.cloudstack.utils.qemu.QemuImg;
import org.apache.cloudstack.utils.qemu.QemuImgException;
import org.apache.cloudstack.utils.qemu.QemuImgFile;
import org.apache.commons.io.filefilter.WildcardFileFilter;
import org.apache.log4j.Logger;
import com.cloud.storage.Storage;
import com.cloud.storage.StorageLayer;
import com.cloud.storage.StorageManager;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.script.OutputInterpreter;
import com.cloud.utils.script.Script;
import com.google.common.base.Strings;
@StorageAdaptorInfo(storagePoolType= Storage.StoragePoolType.PowerFlex)
public class ScaleIOStorageAdaptor implements StorageAdaptor {
private static final Logger LOGGER = Logger.getLogger(ScaleIOStorageAdaptor.class);
private static final Map<String, KVMStoragePool> MapStorageUuidToStoragePool = new HashMap<>();
private static final int DEFAULT_DISK_WAIT_TIME_IN_SECS = 60;
private StorageLayer storageLayer;
public ScaleIOStorageAdaptor(StorageLayer storagelayer) {
storageLayer = storagelayer;
}
@Override
public KVMStoragePool getStoragePool(String uuid) {
KVMStoragePool pool = MapStorageUuidToStoragePool.get(uuid);
if (pool == null) {
LOGGER.error("Pool: " + uuid + " not found, probably sdc not connected on agent start");
throw new CloudRuntimeException("Pool: " + uuid + " not found, reconnect sdc and restart agent if sdc not connected on agent start");
}
return pool;
}
@Override
public KVMStoragePool getStoragePool(String uuid, boolean refreshInfo) {
return getStoragePool(uuid);
}
@Override
public KVMPhysicalDisk getPhysicalDisk(String volumePath, KVMStoragePool pool) {
if (Strings.isNullOrEmpty(volumePath) || pool == null) {
LOGGER.error("Unable to get physical disk, volume path or pool not specified");
return null;
}
String volumeId = ScaleIOUtil.getVolumePath(volumePath);
try {
String diskFilePath = null;
String systemId = ScaleIOUtil.getSystemIdForVolume(volumeId);
if (!Strings.isNullOrEmpty(systemId) && systemId.length() == ScaleIOUtil.IDENTIFIER_LENGTH) {
// Disk path format: /dev/disk/by-id/emc-vol-<SystemID>-<VolumeID>
final String diskFileName = ScaleIOUtil.DISK_NAME_PREFIX + systemId + "-" + volumeId;
diskFilePath = ScaleIOUtil.DISK_PATH + File.separator + diskFileName;
final File diskFile = new File(diskFilePath);
if (!diskFile.exists()) {
LOGGER.debug("Physical disk file: " + diskFilePath + " doesn't exist on the storage pool: " + pool.getUuid());
return null;
}
} else {
LOGGER.debug("Try with wildcard filter to get the physical disk: " + volumeId + " on the storage pool: " + pool.getUuid());
final File dir = new File(ScaleIOUtil.DISK_PATH);
final FileFilter fileFilter = new WildcardFileFilter(ScaleIOUtil.DISK_NAME_PREFIX_FILTER + volumeId);
final File[] files = dir.listFiles(fileFilter);
if (files != null && files.length == 1) {
diskFilePath = files[0].getAbsolutePath();
} else {
LOGGER.debug("Unable to find the physical disk: " + volumeId + " on the storage pool: " + pool.getUuid());
return null;
}
}
KVMPhysicalDisk disk = new KVMPhysicalDisk(diskFilePath, volumePath, pool);
disk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
long diskSize = getPhysicalDiskSize(diskFilePath);
disk.setSize(diskSize);
disk.setVirtualSize(diskSize);
return disk;
} catch (Exception e) {
LOGGER.error("Failed to get the physical disk: " + volumePath + " on the storage pool: " + pool.getUuid() + " due to " + e.getMessage());
throw new CloudRuntimeException("Failed to get the physical disk: " + volumePath + " on the storage pool: " + pool.getUuid());
}
}
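`getPhysicalDisk` above relies on the SDC exposing mapped volumes as `/dev/disk/by-id/emc-vol-<SystemID>-<VolumeID>`. A sketch of that path convention (the constants are stand-ins for the `ScaleIOUtil` fields, assumed here):

```java
// Sketch of the SDC device-path convention used above: a mapped PowerFlex
// volume shows up under /dev/disk/by-id as emc-vol-<SystemID>-<VolumeID>.
public class ScaleIODevicePath {
    static final String DISK_PATH = "/dev/disk/by-id";   // stand-in for ScaleIOUtil.DISK_PATH
    static final String DISK_NAME_PREFIX = "emc-vol-";   // stand-in for ScaleIOUtil.DISK_NAME_PREFIX

    static String devicePath(String systemId, String volumeId) {
        return DISK_PATH + "/" + DISK_NAME_PREFIX + systemId + "-" + volumeId;
    }
}
```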
@Override
public KVMStoragePool createStoragePool(String uuid, String host, int port, String path, String userInfo, Storage.StoragePoolType type) {
ScaleIOStoragePool storagePool = new ScaleIOStoragePool(uuid, host, port, path, type, this);
MapStorageUuidToStoragePool.put(uuid, storagePool);
return storagePool;
}
@Override
public boolean deleteStoragePool(String uuid) {
return MapStorageUuidToStoragePool.remove(uuid) != null;
}
@Override
public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) {
return null;
}
@Override
public boolean connectPhysicalDisk(String volumePath, KVMStoragePool pool, Map<String, String> details) {
if (Strings.isNullOrEmpty(volumePath) || pool == null) {
LOGGER.error("Unable to connect physical disk due to insufficient data");
throw new CloudRuntimeException("Unable to connect physical disk due to insufficient data");
}
volumePath = ScaleIOUtil.getVolumePath(volumePath);
int waitTimeInSec = DEFAULT_DISK_WAIT_TIME_IN_SECS;
if (details != null && details.containsKey(StorageManager.STORAGE_POOL_DISK_WAIT.toString())) {
String waitTime = details.get(StorageManager.STORAGE_POOL_DISK_WAIT.toString());
if (!Strings.isNullOrEmpty(waitTime)) {
waitTimeInSec = Integer.parseInt(waitTime);
}
}
return waitForDiskToBecomeAvailable(volumePath, pool, waitTimeInSec);
}
private boolean waitForDiskToBecomeAvailable(String volumePath, KVMStoragePool pool, int waitTimeInSec) {
LOGGER.debug("Waiting for the volume with id: " + volumePath + " of the storage pool: " + pool.getUuid() + " to become available for " + waitTimeInSec + " secs");
int timeBetweenTries = 1000; // Try more frequently (every sec) and return early if disk is found
KVMPhysicalDisk physicalDisk = null;
// Rescan before checking for the physical disk
ScaleIOUtil.rescanForNewVolumes();
while (waitTimeInSec > 0) {
physicalDisk = getPhysicalDisk(volumePath, pool);
if (physicalDisk != null && physicalDisk.getSize() > 0) {
LOGGER.debug("Found the volume with id: " + volumePath + " of the storage pool: " + pool.getUuid());
return true;
}
waitTimeInSec--;
try {
Thread.sleep(timeBetweenTries);
} catch (Exception ex) {
// don't do anything
}
}
physicalDisk = getPhysicalDisk(volumePath, pool);
if (physicalDisk != null && physicalDisk.getSize() > 0) {
LOGGER.debug("Found the volume using id: " + volumePath + " of the storage pool: " + pool.getUuid());
return true;
}
LOGGER.debug("Unable to find the volume with id: " + volumePath + " of the storage pool: " + pool.getUuid());
return false;
}
private long getPhysicalDiskSize(String diskPath) {
if (Strings.isNullOrEmpty(diskPath)) {
return 0;
}
Script diskCmd = new Script("blockdev", LOGGER);
diskCmd.add("--getsize64", diskPath);
OutputInterpreter.OneLineParser parser = new OutputInterpreter.OneLineParser();
String result = diskCmd.execute(parser);
if (result != null) {
LOGGER.warn("Unable to get the disk size at path: " + diskPath);
return 0;
} else {
LOGGER.info("Able to retrieve the disk size at path: " + diskPath);
}
return Long.parseLong(parser.getLine());
}
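`getPhysicalDiskSize` above shells out to `blockdev --getsize64`, which prints the device size in bytes on a single line. The parse step, isolated as a pure function (a sketch, not the CloudStack helper):

```java
// Sketch: parse the single-line byte count printed by
// `blockdev --getsize64 <path>`, tolerating surrounding whitespace and
// returning 0 for missing output, as the method above does on failure.
public class BlockdevParse {
    static long parseSize(String blockdevOutput) {
        if (blockdevOutput == null || blockdevOutput.trim().isEmpty()) {
            return 0;
        }
        return Long.parseLong(blockdevOutput.trim());
    }
}
```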
@Override
public boolean disconnectPhysicalDisk(String volumePath, KVMStoragePool pool) {
return true;
}
@Override
public boolean disconnectPhysicalDisk(Map<String, String> volumeToDisconnect) {
return true;
}
@Override
public boolean disconnectPhysicalDiskByPath(String localPath) {
return true;
}
@Override
public boolean deletePhysicalDisk(String uuid, KVMStoragePool pool, Storage.ImageFormat format) {
return true;
}
@Override
public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout) {
return null;
}
@Override
public KVMPhysicalDisk createTemplateFromDisk(KVMPhysicalDisk disk, String name, QemuImg.PhysicalDiskFormat format, long size, KVMStoragePool destPool) {
return null;
}
@Override
public List<KVMPhysicalDisk> listPhysicalDisks(String storagePoolUuid, KVMStoragePool pool) {
return null;
}
@Override
public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) {
if (Strings.isNullOrEmpty(name) || disk == null || destPool == null) {
LOGGER.error("Unable to copy physical disk due to insufficient data");
throw new CloudRuntimeException("Unable to copy physical disk due to insufficient data");
}
LOGGER.debug("Copy physical disk with size: " + disk.getSize() + ", virtual size: " + disk.getVirtualSize() + ", format: " + disk.getFormat());
KVMPhysicalDisk destDisk = destPool.getPhysicalDisk(name);
if (destDisk == null) {
LOGGER.error("Failed to find the disk: " + name + " of the storage pool: " + destPool.getUuid());
throw new CloudRuntimeException("Failed to find the disk: " + name + " of the storage pool: " + destPool.getUuid());
}
destDisk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
destDisk.setVirtualSize(disk.getVirtualSize());
destDisk.setSize(disk.getSize());
QemuImg qemu = new QemuImg(timeout);
QemuImgFile srcFile = null;
QemuImgFile destFile = null;
try {
srcFile = new QemuImgFile(disk.getPath(), disk.getFormat());
destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat());
LOGGER.debug("Starting copy from source image " + srcFile.getFileName() + " to PowerFlex volume: " + destDisk.getPath());
qemu.convert(srcFile, destFile);
LOGGER.debug("Successfully converted source image " + srcFile.getFileName() + " to PowerFlex volume: " + destDisk.getPath());
} catch (QemuImgException e) {
LOGGER.error("Failed to convert from " + srcFile.getFileName() + " to " + destFile.getFileName() + ", the error was: " + e.getMessage());
destDisk = null;
}
return destDisk;
}
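For reference, the `qemu.convert(srcFile, destFile)` call above corresponds, roughly and ignoring any extra options `QemuImg` may add, to a `qemu-img convert` invocation. A sketch of the argv it amounts to (the helper class name is illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class QemuConvertArgv {
    // Builds the qemu-img command line a src->dest conversion amounts to:
    //   qemu-img convert -f <srcFormat> -O <destFormat> <srcPath> <destPath>
    public static List<String> build(String srcPath, String srcFormat,
                                     String destPath, String destFormat) {
        return Arrays.asList("qemu-img", "convert",
                "-f", srcFormat, "-O", destFormat, srcPath, destPath);
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ",
                build("/tmp/src.qcow2", "qcow2", "/dev/disk/by-id/emc-vol-example", "raw")));
    }
}
```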
@Override
public KVMPhysicalDisk createDiskFromSnapshot(KVMPhysicalDisk snapshot, String snapshotName, String name, KVMStoragePool destPool, int timeout) {
return null;
}
@Override
public boolean refresh(KVMStoragePool pool) {
return true;
}
@Override
public boolean deleteStoragePool(KVMStoragePool pool) {
return deleteStoragePool(pool.getUuid());
}
@Override
public boolean createFolder(String uuid, String path) {
return true;
}
@Override
public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout) {
return null;
}
@Override
public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
if (Strings.isNullOrEmpty(templateFilePath) || Strings.isNullOrEmpty(destTemplatePath) || destPool == null) {
LOGGER.error("Unable to create template from direct download template file due to insufficient data");
throw new CloudRuntimeException("Unable to create template from direct download template file due to insufficient data");
}
LOGGER.debug("Create template from direct download template - file path: " + templateFilePath + ", dest path: " + destTemplatePath + ", format: " + format.toString());
File sourceFile = new File(templateFilePath);
if (!sourceFile.exists()) {
throw new CloudRuntimeException("Direct download template file " + templateFilePath + " does not exist on this host");
}
if (destPool.getType() != Storage.StoragePoolType.PowerFlex) {
throw new CloudRuntimeException("Unsupported storage pool type: " + destPool.getType().toString());
}
if (!(Storage.ImageFormat.RAW.equals(format) || Storage.ImageFormat.QCOW2.equals(format))) {
LOGGER.error("Failed to create template, unsupported template format: " + format.toString());
throw new CloudRuntimeException("Unsupported template format: " + format.toString());
}
String srcTemplateFilePath = templateFilePath;
KVMPhysicalDisk destDisk = null;
QemuImgFile srcFile = null;
QemuImgFile destFile = null;
try {
destDisk = destPool.getPhysicalDisk(destTemplatePath);
if (destDisk == null) {
LOGGER.error("Failed to find the disk: " + destTemplatePath + " of the storage pool: " + destPool.getUuid());
throw new CloudRuntimeException("Failed to find the disk: " + destTemplatePath + " of the storage pool: " + destPool.getUuid());
}
if (isTemplateExtractable(templateFilePath)) {
srcTemplateFilePath = sourceFile.getParent() + "/" + UUID.randomUUID().toString();
LOGGER.debug("Extract the downloaded template " + templateFilePath + " to " + srcTemplateFilePath);
String extractCommand = getExtractCommandForDownloadedFile(templateFilePath, srcTemplateFilePath);
Script.runSimpleBashScript(extractCommand);
Script.runSimpleBashScript("rm -f " + templateFilePath);
}
QemuImg.PhysicalDiskFormat srcFileFormat = QemuImg.PhysicalDiskFormat.RAW;
if (format == Storage.ImageFormat.RAW) {
srcFileFormat = QemuImg.PhysicalDiskFormat.RAW;
} else if (format == Storage.ImageFormat.QCOW2) {
srcFileFormat = QemuImg.PhysicalDiskFormat.QCOW2;
}
srcFile = new QemuImgFile(srcTemplateFilePath, srcFileFormat);
destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat());
LOGGER.debug("Starting copy from source downloaded template " + srcFile.getFileName() + " to PowerFlex template volume: " + destDisk.getPath());
QemuImg qemu = new QemuImg(timeout);
qemu.convert(srcFile, destFile);
LOGGER.debug("Successfully converted source downloaded template " + srcFile.getFileName() + " to PowerFlex template volume: " + destDisk.getPath());
} catch (QemuImgException e) {
LOGGER.error("Failed to convert from " + srcFile.getFileName() + " to " + destFile.getFileName() + ", the error was: " + e.getMessage());
destDisk = null;
} finally {
Script.runSimpleBashScript("rm -f " + srcTemplateFilePath);
}
return destDisk;
}
private boolean isTemplateExtractable(String templatePath) {
String type = Script.runSimpleBashScript("file " + templatePath + " | awk -F' ' '{print $2}'");
return type != null && (type.equalsIgnoreCase("bzip2") || type.equalsIgnoreCase("gzip") || type.equalsIgnoreCase("zip"));
}
private String getExtractCommandForDownloadedFile(String downloadedTemplateFile, String templateFile) {
if (downloadedTemplateFile.endsWith(".zip")) {
return "unzip -p " + downloadedTemplateFile + " | cat > " + templateFile;
} else if (downloadedTemplateFile.endsWith(".bz2")) {
return "bunzip2 -c " + downloadedTemplateFile + " > " + templateFile;
} else if (downloadedTemplateFile.endsWith(".gz")) {
return "gunzip -c " + downloadedTemplateFile + " > " + templateFile;
} else {
throw new CloudRuntimeException("Unable to extract template " + downloadedTemplateFile);
}
}
}
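The extraction helpers at the end of the adaptor dispatch purely on file extension. A standalone sketch of the same dispatch (class name illustrative, shell pipelines copied from the methods above):

```java
public class TemplateExtract {
    // Maps a compressed template's extension to the shell pipeline that
    // streams its contents into a plain file, as the adaptor does.
    public static String extractCommand(String downloaded, String target) {
        if (downloaded.endsWith(".zip")) {
            return "unzip -p " + downloaded + " | cat > " + target;
        } else if (downloaded.endsWith(".bz2")) {
            return "bunzip2 -c " + downloaded + " > " + target;
        } else if (downloaded.endsWith(".gz")) {
            return "gunzip -c " + downloaded + " > " + target;
        }
        throw new IllegalArgumentException("Unable to extract template " + downloaded);
    }

    public static void main(String[] args) {
        System.out.println(extractCommand("/tmp/t.qcow2.gz", "/tmp/t.qcow2"));
    }
}
```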

@@ -0,0 +1,181 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.hypervisor.kvm.storage;
import java.util.List;
import java.util.Map;
import org.apache.cloudstack.utils.qemu.QemuImg;
import com.cloud.storage.Storage;
public class ScaleIOStoragePool implements KVMStoragePool {
private String uuid;
private String sourceHost;
private int sourcePort;
private String sourceDir;
private Storage.StoragePoolType storagePoolType;
private StorageAdaptor storageAdaptor;
private long capacity;
private long used;
private long available;
public ScaleIOStoragePool(String uuid, String host, int port, String path, Storage.StoragePoolType poolType, StorageAdaptor adaptor) {
this.uuid = uuid;
sourceHost = host;
sourcePort = port;
sourceDir = path;
storagePoolType = poolType;
storageAdaptor = adaptor;
capacity = 0;
used = 0;
available = 0;
}
@Override
public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) {
return null;
}
@Override
public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size) {
return null;
}
@Override
public boolean connectPhysicalDisk(String volumeUuid, Map<String, String> details) {
return storageAdaptor.connectPhysicalDisk(volumeUuid, this, details);
}
@Override
public KVMPhysicalDisk getPhysicalDisk(String volumeId) {
return storageAdaptor.getPhysicalDisk(volumeId, this);
}
@Override
public boolean disconnectPhysicalDisk(String volumeUuid) {
return storageAdaptor.disconnectPhysicalDisk(volumeUuid, this);
}
@Override
public boolean deletePhysicalDisk(String volumeUuid, Storage.ImageFormat format) {
return true;
}
@Override
public List<KVMPhysicalDisk> listPhysicalDisks() {
return null;
}
@Override
public String getUuid() {
return uuid;
}
public void setCapacity(long capacity) {
this.capacity = capacity;
}
@Override
public long getCapacity() {
return this.capacity;
}
public void setUsed(long used) {
this.used = used;
}
@Override
public long getUsed() {
return this.used;
}
public void setAvailable(long available) {
this.available = available;
}
@Override
public long getAvailable() {
return this.available;
}
@Override
public boolean refresh() {
return false;
}
@Override
public boolean isExternalSnapshot() {
return true;
}
@Override
public String getLocalPath() {
return null;
}
@Override
public String getSourceHost() {
return this.sourceHost;
}
@Override
public String getSourceDir() {
return this.sourceDir;
}
@Override
public int getSourcePort() {
return this.sourcePort;
}
@Override
public String getAuthUserName() {
return null;
}
@Override
public String getAuthSecret() {
return null;
}
@Override
public Storage.StoragePoolType getType() {
return storagePoolType;
}
@Override
public boolean delete() {
return false;
}
@Override
public QemuImg.PhysicalDiskFormat getDefaultFormat() {
return QemuImg.PhysicalDiskFormat.RAW;
}
@Override
public boolean createFolder(String path) {
return false;
}
@Override
public boolean supportsConfigDriveIso() {
return false;
}
}

@@ -86,7 +86,8 @@ public interface StorageAdaptor {
* Create physical disk on Primary Storage from direct download template on the host (in temporary location)
* @param templateFilePath
* @param destTemplatePath
* @param destPool
* @param isIso
* @param format
* @param timeout
*/
KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, KVMStoragePool destPool, boolean isIso);
KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout);
}

@@ -0,0 +1,155 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.hypervisor.kvm.storage;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.when;
import java.io.File;
import java.io.FileFilter;
import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
import org.apache.cloudstack.utils.qemu.QemuImg;
import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.StorageLayer;
@PrepareForTest(ScaleIOUtil.class)
@RunWith(PowerMockRunner.class)
public class ScaleIOStoragePoolTest {
ScaleIOStoragePool pool;
StorageAdaptor adapter;
@Mock
StorageLayer storageLayer;
@Before
public void setUp() throws Exception {
final String uuid = "345fc603-2d7e-47d2-b719-a0110b3732e6";
final StoragePoolType type = StoragePoolType.PowerFlex;
adapter = spy(new ScaleIOStorageAdaptor(storageLayer));
pool = new ScaleIOStoragePool(uuid, "192.168.1.19", 443, "a519be2f00000000", type, adapter);
}
@After
public void tearDown() throws Exception {
}
@Test
public void testAttributes() {
assertEquals(pool.getCapacity(), 0);
assertEquals(pool.getUsed(), 0);
assertEquals(pool.getAvailable(), 0);
assertEquals(pool.getUuid(), "345fc603-2d7e-47d2-b719-a0110b3732e6");
assertEquals(pool.getSourceHost(), "192.168.1.19");
assertEquals(pool.getSourcePort(), 443);
assertEquals(pool.getSourceDir(), "a519be2f00000000");
assertEquals(pool.getType(), StoragePoolType.PowerFlex);
pool.setCapacity(131072);
pool.setUsed(24576);
pool.setAvailable(106496);
assertEquals(pool.getCapacity(), 131072);
assertEquals(pool.getUsed(), 24576);
assertEquals(pool.getAvailable(), 106496);
}
@Test
public void testDefaults() {
assertEquals(pool.getDefaultFormat(), PhysicalDiskFormat.RAW);
assertEquals(pool.getType(), StoragePoolType.PowerFlex);
assertNull(pool.getAuthUserName());
assertNull(pool.getAuthSecret());
Assert.assertFalse(pool.supportsConfigDriveIso());
assertTrue(pool.isExternalSnapshot());
}
@Test
public void testGetPhysicalDiskWithWildcardFileFilter() throws Exception {
final String volumePath = "6c3362b500000001:vol-139-3d2c-12f0";
final String systemId = "218ce1797566a00f";
File dir = PowerMockito.mock(File.class);
PowerMockito.whenNew(File.class).withAnyArguments().thenReturn(dir);
// TODO: Mock file in dir
File[] files = new File[1];
String volumeId = ScaleIOUtil.getVolumePath(volumePath);
String diskFilePath = ScaleIOUtil.DISK_PATH + File.separator + ScaleIOUtil.DISK_NAME_PREFIX + systemId + "-" + volumeId;
files[0] = new File(diskFilePath);
PowerMockito.when(dir.listFiles(any(FileFilter.class))).thenReturn(files);
KVMPhysicalDisk disk = adapter.getPhysicalDisk(volumePath, pool);
assertNull(disk);
}
@Test
public void testGetPhysicalDiskWithSystemId() throws Exception {
final String volumePath = "6c3362b500000001:vol-139-3d2c-12f0";
final String volumeId = ScaleIOUtil.getVolumePath(volumePath);
final String systemId = "218ce1797566a00f";
PowerMockito.mockStatic(ScaleIOUtil.class);
when(ScaleIOUtil.getSystemIdForVolume(volumeId)).thenReturn(systemId);
// TODO: Mock file exists
File file = PowerMockito.mock(File.class);
PowerMockito.whenNew(File.class).withAnyArguments().thenReturn(file);
PowerMockito.when(file.exists()).thenReturn(true);
KVMPhysicalDisk disk = adapter.getPhysicalDisk(volumePath, pool);
assertNull(disk);
}
@Test
public void testConnectPhysicalDisk() {
final String volumePath = "6c3362b500000001:vol-139-3d2c-12f0";
final String volumeId = ScaleIOUtil.getVolumePath(volumePath);
final String systemId = "218ce1797566a00f";
final String diskFilePath = ScaleIOUtil.DISK_PATH + File.separator + ScaleIOUtil.DISK_NAME_PREFIX + systemId + "-" + volumeId;
KVMPhysicalDisk disk = new KVMPhysicalDisk(diskFilePath, volumePath, pool);
disk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
disk.setSize(8192);
disk.setVirtualSize(8192);
assertEquals(disk.getPath(), "/dev/disk/by-id/emc-vol-218ce1797566a00f-6c3362b500000001");
when(adapter.getPhysicalDisk(volumeId, pool)).thenReturn(disk);
final boolean result = adapter.connectPhysicalDisk(volumePath, pool, null);
assertTrue(result);
}
}
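The tests above rely on PowerFlex exposing a mapped volume under a stable by-id device name built from the system id and the volume id (the assertion in `testConnectPhysicalDisk` implies `ScaleIOUtil.DISK_PATH` is `/dev/disk/by-id` and `ScaleIOUtil.DISK_NAME_PREFIX` is `emc-vol-`; both constants are assumptions here). A standalone sketch of that path derivation:

```java
public class ScaleIODiskPath {
    static final String DISK_PATH = "/dev/disk/by-id"; // assumed ScaleIOUtil.DISK_PATH
    static final String DISK_NAME_PREFIX = "emc-vol-"; // assumed ScaleIOUtil.DISK_NAME_PREFIX

    // A CloudStack PowerFlex volume path is "<volumeId>:<volumeName>"; only the
    // id before the colon is needed to locate the mapped device.
    public static String volumeId(String volumePath) {
        int sep = volumePath.indexOf(':');
        return sep < 0 ? volumePath : volumePath.substring(0, sep);
    }

    public static String devicePath(String systemId, String volumePath) {
        return DISK_PATH + "/" + DISK_NAME_PREFIX + systemId + "-" + volumeId(volumePath);
    }

    public static void main(String[] args) {
        System.out.println(devicePath("218ce1797566a00f", "6c3362b500000001:vol-139-3d2c-12f0"));
    }
}
```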

@@ -121,6 +121,7 @@
<module>storage/volume/nexenta</module>
<module>storage/volume/sample</module>
<module>storage/volume/solidfire</module>
<module>storage/volume/scaleio</module>
<module>storage-allocators/random</module>

@@ -48,6 +48,7 @@ import com.cloud.agent.api.Answer;
import com.cloud.agent.api.to.DataObjectType;
import com.cloud.agent.api.to.DataStoreTO;
import com.cloud.agent.api.to.DataTO;
import com.cloud.host.Host;
import com.cloud.storage.DiskOfferingVO;
import com.cloud.storage.ResizeVolumePayload;
import com.cloud.storage.Storage.StoragePoolType;
@@ -59,6 +60,7 @@ import com.cloud.storage.dao.DiskOfferingDao;
import com.cloud.storage.dao.VolumeDao;
import com.cloud.storage.dao.VolumeDetailsDao;
import com.cloud.user.AccountManager;
import com.cloud.utils.Pair;
import com.cloud.utils.exception.CloudRuntimeException;
/**
@@ -259,7 +261,11 @@ public class ElastistorPrimaryDataStoreDriver extends CloudStackPrimaryDataStore
@Override
public void copyAsync(DataObject srcdata, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback) {
throw new UnsupportedOperationException();
}
@Override
public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
throw new UnsupportedOperationException();
}
@Override
@@ -409,4 +415,28 @@ public class ElastistorPrimaryDataStoreDriver extends CloudStackPrimaryDataStore
return mapCapabilities;
}
@Override
public boolean canProvideStorageStats() {
return false;
}
@Override
public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
return null;
}
@Override
public boolean canProvideVolumeStats() {
return false;
}
@Override
public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId) {
return null;
}
@Override
public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
return true;
}
}

@@ -17,6 +17,37 @@
package org.apache.cloudstack.storage.datastore.driver;
import java.io.UnsupportedEncodingException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.inject.Inject;
import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult;
import org.apache.cloudstack.engine.subsystem.api.storage.CreateCmdResult;
import org.apache.cloudstack.engine.subsystem.api.storage.DataObject;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreCapabilities;
import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreDriver;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.TemplateInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeDataFactory;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
import org.apache.cloudstack.framework.async.AsyncCompletionCallback;
import org.apache.cloudstack.storage.command.CommandResult;
import org.apache.cloudstack.storage.command.CreateObjectAnswer;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailVO;
import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.cloudstack.storage.datastore.util.DateraObject;
import org.apache.cloudstack.storage.datastore.util.DateraUtil;
import org.apache.cloudstack.storage.to.SnapshotObjectTO;
import org.apache.log4j.Logger;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.to.DataObjectType;
import com.cloud.agent.api.to.DataStoreTO;
@@ -44,40 +75,12 @@ import com.cloud.storage.dao.SnapshotDetailsVO;
import com.cloud.storage.dao.VMTemplatePoolDao;
import com.cloud.storage.dao.VolumeDao;
import com.cloud.storage.dao.VolumeDetailsDao;
import com.cloud.utils.Pair;
import com.cloud.utils.StringUtils;
import com.cloud.utils.db.GlobalLock;
import com.cloud.utils.exception.CloudRuntimeException;
import com.google.common.base.Preconditions;
import com.google.common.primitives.Ints;
import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult;
import org.apache.cloudstack.engine.subsystem.api.storage.CreateCmdResult;
import org.apache.cloudstack.engine.subsystem.api.storage.DataObject;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreCapabilities;
import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreDriver;
import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.TemplateInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeDataFactory;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
import org.apache.cloudstack.framework.async.AsyncCompletionCallback;
import org.apache.cloudstack.storage.command.CommandResult;
import org.apache.cloudstack.storage.command.CreateObjectAnswer;
import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailVO;
import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.cloudstack.storage.datastore.util.DateraObject;
import org.apache.cloudstack.storage.datastore.util.DateraUtil;
import org.apache.cloudstack.storage.to.SnapshotObjectTO;
import org.apache.log4j.Logger;
import javax.inject.Inject;
import java.io.UnsupportedEncodingException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import static com.cloud.utils.NumbersUtil.toHumanReadableSize;
@@ -1254,6 +1257,12 @@ public class DateraPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
throw new UnsupportedOperationException();
}
@Override
public void copyAsync(DataObject srcData, DataObject destData,
Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
throw new UnsupportedOperationException();
}
@Override
public boolean canCopy(DataObject srcData, DataObject destData) {
return false;
@@ -1825,6 +1834,30 @@ public class DateraPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
@Override
public void handleQualityOfServiceForVolumeMigration(VolumeInfo volumeInfo,
QualityOfServiceState qualityOfServiceState) {
}
@Override
public boolean canProvideStorageStats() {
return false;
}
@Override
public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
return null;
}
@Override
public boolean canProvideVolumeStats() {
return false;
}
@Override
public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId) {
return null;
}
@Override
public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
return true;
}
}

@@ -76,6 +76,7 @@ import com.cloud.storage.dao.VMTemplateDao;
import com.cloud.storage.dao.VolumeDao;
import com.cloud.storage.snapshot.SnapshotManager;
import com.cloud.template.TemplateManager;
import com.cloud.utils.Pair;
import com.cloud.vm.dao.VMInstanceDao;
import static com.cloud.utils.NumbersUtil.toHumanReadableSize;
@@ -277,6 +278,11 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri
}
}
@Override
public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
copyAsync(srcData, destData, callback);
}
@Override
public boolean canCopy(DataObject srcData, DataObject destData) {
//BUG fix for CLOUDSTACK-4618
@@ -389,4 +395,29 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri
@Override
public void handleQualityOfServiceForVolumeMigration(VolumeInfo volumeInfo, QualityOfServiceState qualityOfServiceState) {}
@Override
public boolean canProvideStorageStats() {
return false;
}
@Override
public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
return null;
}
@Override
public boolean canProvideVolumeStats() {
return false;
}
@Override
public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId) {
return null;
}
@Override
public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
return true;
}
}

@@ -53,6 +53,7 @@ import com.cloud.storage.StoragePool;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.dao.VolumeDao;
import com.cloud.user.dao.AccountDao;
import com.cloud.utils.Pair;
public class NexentaPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
private static final Logger logger = Logger.getLogger(NexentaPrimaryDataStoreDriver.class);
@@ -199,6 +200,10 @@ public class NexentaPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
@Override
public void copyAsync(DataObject srcdata, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback) {}
@Override
public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
}
@Override
public boolean canCopy(DataObject srcData, DataObject destData) {
return false;
@@ -209,4 +214,29 @@
@Override
public void handleQualityOfServiceForVolumeMigration(VolumeInfo volumeInfo, QualityOfServiceState qualityOfServiceState) {}
@Override
public boolean canProvideStorageStats() {
return false;
}
@Override
public boolean canProvideVolumeStats() {
return false;
}
@Override
public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId) {
return null;
}
@Override
public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
return null;
}
@Override
public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
return true;
}
}

@@ -46,6 +46,7 @@ import com.cloud.agent.api.to.DataTO;
import com.cloud.host.Host;
import com.cloud.storage.StoragePool;
import com.cloud.storage.dao.StoragePoolHostDao;
import com.cloud.utils.Pair;
import com.cloud.utils.exception.CloudRuntimeException;
public class SamplePrimaryDataStoreDriverImpl implements PrimaryDataStoreDriver {
@@ -224,6 +225,10 @@ public class SamplePrimaryDataStoreDriverImpl implements PrimaryDataStoreDriver
public void copyAsync(DataObject srcdata, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback) {
}
@Override
public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
}
@Override
public void resize(DataObject data, AsyncCompletionCallback<CreateCmdResult> callback) {
}
@@ -236,4 +241,28 @@
public void takeSnapshot(SnapshotInfo snapshot, AsyncCompletionCallback<CreateCmdResult> callback) {
}
@Override
public boolean canProvideStorageStats() {
return false;
}
@Override
public boolean canProvideVolumeStats() {
return false;
}
@Override
public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId) {
return null;
}
@Override
public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
return null;
}
@Override
public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
return true;
}
}

@@ -0,0 +1,55 @@
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>cloud-plugin-storage-volume-scaleio</artifactId>
<name>Apache CloudStack Plugin - Storage Volume Dell-EMC ScaleIO/PowerFlex Provider</name>
<parent>
<groupId>org.apache.cloudstack</groupId>
<artifactId>cloudstack-plugins</artifactId>
<version>4.16.0.0-SNAPSHOT</version>
<relativePath>../../../pom.xml</relativePath>
</parent>
<dependencies>
<dependency>
<groupId>org.apache.cloudstack</groupId>
<artifactId>cloud-engine-storage-volume</artifactId>
<version>${project.version}</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<skipTests>true</skipTests>
</configuration>
<executions>
<execution>
<phase>integration-test</phase>
<goals>
<goal>test</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

@@ -0,0 +1,57 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.storage.datastore.api;
public class ProtectionDomain {
String id;
String name;
String protectionDomainState;
String systemId;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getProtectionDomainState() {
return protectionDomainState;
}
public void setProtectionDomainState(String protectionDomainState) {
this.protectionDomainState = protectionDomainState;
}
public String getSystemId() {
return systemId;
}
public void setSystemId(String systemId) {
this.systemId = systemId;
}
}

@@ -0,0 +1,138 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.storage.datastore.api;
public class Sdc {
    String id;
    String name;
    String mdmConnectionState;
    Boolean sdcApproved;
    String perfProfile;
    String sdcGuid;
    String sdcIp;
    String[] sdcIps;
    String systemId;
    String osType;
    String kernelVersion;
    String softwareVersionInfo;
    String versionInfo;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getMdmConnectionState() {
        return mdmConnectionState;
    }

    public void setMdmConnectionState(String mdmConnectionState) {
        this.mdmConnectionState = mdmConnectionState;
    }

    public Boolean getSdcApproved() {
        return sdcApproved;
    }

    public void setSdcApproved(Boolean sdcApproved) {
        this.sdcApproved = sdcApproved;
    }

    public String getPerfProfile() {
        return perfProfile;
    }

    public void setPerfProfile(String perfProfile) {
        this.perfProfile = perfProfile;
    }

    public String getSdcGuid() {
        return sdcGuid;
    }

    public void setSdcGuid(String sdcGuid) {
        this.sdcGuid = sdcGuid;
    }

    public String getSdcIp() {
        return sdcIp;
    }

    public void setSdcIp(String sdcIp) {
        this.sdcIp = sdcIp;
    }

    public String[] getSdcIps() {
        return sdcIps;
    }

    public void setSdcIps(String[] sdcIps) {
        this.sdcIps = sdcIps;
    }

    public String getSystemId() {
        return systemId;
    }

    public void setSystemId(String systemId) {
        this.systemId = systemId;
    }

    public String getOsType() {
        return osType;
    }

    public void setOsType(String osType) {
        this.osType = osType;
    }

    public String getKernelVersion() {
        return kernelVersion;
    }

    public void setKernelVersion(String kernelVersion) {
        this.kernelVersion = kernelVersion;
    }

    public String getSoftwareVersionInfo() {
        return softwareVersionInfo;
    }

    public void setSoftwareVersionInfo(String softwareVersionInfo) {
        this.softwareVersionInfo = softwareVersionInfo;
    }

    public String getVersionInfo() {
        return versionInfo;
    }

    public void setVersionInfo(String versionInfo) {
        this.versionInfo = versionInfo;
    }
}

@@ -0,0 +1,39 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.storage.datastore.api;
public class SdcMappingInfo {
    String sdcId;
    String sdcIp;

    public String getSdcId() {
        return sdcId;
    }

    public void setSdcId(String sdcId) {
        this.sdcId = sdcId;
    }

    public String getSdcIp() {
        return sdcIp;
    }

    public void setSdcIp(String sdcIp) {
        this.sdcIp = sdcIp;
    }
}

@@ -0,0 +1,48 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.storage.datastore.api;
public class SnapshotDef {
    String volumeId;
    String snapshotName;
    String allowOnExtManagedVol;

    public String getVolumeId() {
        return volumeId;
    }

    public void setVolumeId(String volumeId) {
        this.volumeId = volumeId;
    }

    public String getSnapshotName() {
        return snapshotName;
    }

    public void setSnapshotName(String snapshotName) {
        this.snapshotName = snapshotName;
    }

    public String getAllowOnExtManagedVol() {
        return allowOnExtManagedVol;
    }

    public void setAllowOnExtManagedVol(String allowOnExtManagedVol) {
        this.allowOnExtManagedVol = allowOnExtManagedVol;
    }
}

@@ -0,0 +1,30 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.storage.datastore.api;
public class SnapshotDefs {
    SnapshotDef[] snapshotDefs;

    public SnapshotDef[] getSnapshotDefs() {
        return snapshotDefs;
    }

    public void setSnapshotDefs(SnapshotDef[] snapshotDefs) {
        this.snapshotDefs = snapshotDefs;
    }
}
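For context, `SnapshotDefs` matches the wrapper object the PowerFlex REST API expects when snapshotting a set of volumes in one group (the `snapshotVolumes` action in the v3.x API). The endpoint name, JSON shape, and the sample volume id below are assumptions from the public PowerFlex REST documentation, not taken from this PR. A minimal, self-contained sketch of how such a request body is shaped, using simplified local stand-ins for the POJOs above:

```java
public class SnapshotPayloadSketch {
    // Simplified stand-in for the SnapshotDef POJO in this PR (volumeId + snapshotName only).
    public static class Def {
        public final String volumeId;
        public final String snapshotName;

        public Def(String volumeId, String snapshotName) {
            this.volumeId = volumeId;
            this.snapshotName = snapshotName;
        }
    }

    // Hand-rolled JSON for illustration only; the plugin itself would use a JSON library.
    public static String toRequestBody(Def... defs) {
        StringBuilder sb = new StringBuilder("{\"snapshotDefs\":[");
        for (int i = 0; i < defs.length; i++) {
            if (i > 0) {
                sb.append(',');
            }
            sb.append(String.format("{\"volumeId\":\"%s\",\"snapshotName\":\"%s\"}",
                    defs[i].volumeId, defs[i].snapshotName));
        }
        return sb.append("]}").toString();
    }

    public static void main(String[] args) {
        // "6e5e0a3300000001" is a hypothetical PowerFlex volume id, for illustration only.
        System.out.println(toRequestBody(new Def("6e5e0a3300000001", "vol-1-snap")));
    }
}
```

Snapshotting several volumes in one call is what makes the group consistent: PowerFlex takes them at the same point in time and reports one `snapshotGroupId` back.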

@@ -0,0 +1,46 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.storage.datastore.api;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SnapshotGroup {
    String snapshotGroupId;
    String[] volumeIdList;

    public String getSnapshotGroupId() {
        return snapshotGroupId;
    }

    public void setSnapshotGroupId(String snapshotGroupId) {
        this.snapshotGroupId = snapshotGroupId;
    }

    public List<String> getVolumeIds() {
        // Guard against a response with no volume list; Arrays.asList(null) would throw NPE
        if (volumeIdList == null) {
            return Collections.emptyList();
        }
        return Arrays.asList(volumeIdList);
    }

    public String[] getVolumeIdList() {
        return volumeIdList;
    }

    public void setVolumeIdList(String[] volumeIdList) {
        this.volumeIdList = volumeIdList;
    }
}

@@ -0,0 +1,75 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.storage.datastore.api;
public class StoragePool {
    String id;
    String name;
    String mediaType;
    String protectionDomainId;
    String systemId;
    StoragePoolStatistics statistics;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getMediaType() {
        return mediaType;
    }

    public void setMediaType(String mediaType) {
        this.mediaType = mediaType;
    }

    public String getProtectionDomainId() {
        return protectionDomainId;
    }

    public void setProtectionDomainId(String protectionDomainId) {
        this.protectionDomainId = protectionDomainId;
    }

    public String getSystemId() {
        return systemId;
    }

    public void setSystemId(String systemId) {
        this.systemId = systemId;
    }

    public StoragePoolStatistics getStatistics() {
        return statistics;
    }

    public void setStatistics(StoragePoolStatistics statistics) {
        this.statistics = statistics;
    }
}

@@ -0,0 +1,85 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.storage.datastore.api;
import com.google.common.base.Strings;
public class StoragePoolStatistics {
    String maxCapacityInKb;       // total capacity
    String spareCapacityInKb;     // spare capacity, space not used for volume creation/allocation
    String netCapacityInUseInKb;  // user data capacity in use
    String netUnusedCapacityInKb; // capacity available for volume creation (volume space to write)

    public Long getMaxCapacityInKb() {
        if (Strings.isNullOrEmpty(maxCapacityInKb)) {
            return Long.valueOf(0);
        }
        return Long.valueOf(maxCapacityInKb);
    }

    public void setMaxCapacityInKb(String maxCapacityInKb) {
        this.maxCapacityInKb = maxCapacityInKb;
    }

    public Long getSpareCapacityInKb() {
        if (Strings.isNullOrEmpty(spareCapacityInKb)) {
            return Long.valueOf(0);
        }
        return Long.valueOf(spareCapacityInKb);
    }

    public void setSpareCapacityInKb(String spareCapacityInKb) {
        this.spareCapacityInKb = spareCapacityInKb;
    }

    public Long getNetCapacityInUseInKb() {
        if (Strings.isNullOrEmpty(netCapacityInUseInKb)) {
            return Long.valueOf(0);
        }
        return Long.valueOf(netCapacityInUseInKb);
    }

    public void setNetCapacityInUseInKb(String netCapacityInUseInKb) {
        this.netCapacityInUseInKb = netCapacityInUseInKb;
    }

    public Long getNetUnusedCapacityInKb() {
        if (Strings.isNullOrEmpty(netUnusedCapacityInKb)) {
            return Long.valueOf(0);
        }
        return Long.valueOf(netUnusedCapacityInKb);
    }

    public Long getNetUnusedCapacityInBytes() {
        return (getNetUnusedCapacityInKb() * 1024);
    }

    public void setNetUnusedCapacityInKb(String netUnusedCapacityInKb) {
        this.netUnusedCapacityInKb = netUnusedCapacityInKb;
    }

    public Long getNetMaxCapacityInBytes() {
        // total usable capacity = ("maxCapacityInKb" - "spareCapacityInKb") / 2
        Long netMaxCapacityInKb = getMaxCapacityInKb() - getSpareCapacityInKb();
        return ((netMaxCapacityInKb / 2) * 1024);
    }

    public Long getNetUsedCapacityInBytes() {
        return (getNetMaxCapacityInBytes() - getNetUnusedCapacityInBytes());
    }
}
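The division by 2 in `getNetMaxCapacityInBytes()` reflects PowerFlex's two-copy mesh mirroring: every block is stored twice across the SDS nodes, so usable capacity is half of raw capacity minus the spare reserve. A standalone sketch of the same arithmetic (the sample figures below are illustrative, not from a real system):

```java
public class CapacityMathSketch {
    // Mirrors StoragePoolStatistics.getNetMaxCapacityInBytes():
    // usable bytes = ((maxKb - spareKb) / 2) * 1024, halved because PowerFlex
    // keeps two copies of every data block.
    public static long netMaxCapacityInBytes(long maxCapacityInKb, long spareCapacityInKb) {
        long netMaxCapacityInKb = maxCapacityInKb - spareCapacityInKb;
        return (netMaxCapacityInKb / 2) * 1024L;
    }

    // Mirrors getNetUsedCapacityInBytes(): usable minus what is still free for volume writes.
    public static long netUsedCapacityInBytes(long maxCapacityInKb, long spareCapacityInKb,
            long netUnusedCapacityInKb) {
        return netMaxCapacityInBytes(maxCapacityInKb, spareCapacityInKb)
                - netUnusedCapacityInKb * 1024L;
    }

    public static void main(String[] args) {
        long maxKb = 1024L * 1024 * 1024;      // 1 TiB of raw pool capacity, in KB
        long spareKb = maxKb / 10;             // 10% spare reserve
        long unusedKb = 300L * 1024 * 1024;    // 300 GiB still free for volume writes
        System.out.println(netMaxCapacityInBytes(maxKb, spareKb));
        System.out.println(netUsedCapacityInBytes(maxKb, spareKb, unusedKb));
    }
}
```

Note that `netUnusedCapacityInKb` as reported by PowerFlex is already net of mirroring, which is why it is not halved again.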
