kvm: volume encryption feature (#6522)

This PR introduces a feature that allows CloudStack to manage a generic volume encryption setting. The encryption is transparent to the guest OS and is intended to provide encryption of VM guest data at rest, and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.

In some cases cloud customers may still prefer to maintain their own guest-level volume encryption if they don't trust the cloud provider. For private clouds, however, this greatly simplifies the guest OS experience: guests get volume encryption without the user having to manage keys, run key servers, or make guest boot dependent on network connectivity to them (e.g. Tang). This is especially useful when users occasionally attach/detach data disks and move them between VMs.

The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.

This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.

NOTE: While not required, operations can be significantly sped up by installing the `rng-tools` package and running its service on the management server and hypervisor hosts. On EL hosts the service is `rngd`; on Debian it is `rng-tools`. In particular, generating volume passphrases with SecureRandom can be slow if there isn't a good source of entropy. This could affect testing and build environments; otherwise it only affects users who actually use the encryption feature. If you find tests or volume creation blocking on encryption, check this first.

### Management Server

##### API

* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is used in case there is ever any other need to encrypt something related to the guest configuration, such as the RAM of a VM. This has been refactored to account for the new internal separation of service offerings from disk offerings.
* listDiskOfferings shows encryption support on each offering, and accepts an `encrypt` boolean to list only offerings that do (or do not) support encryption
* listServiceOfferings shows encryption support on each offering, and accepts an `encryptroot` boolean to list only offerings that do (or do not) support root volume encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off; instead, the offering should be referenced. This follows the same pattern as other settings derived from the disk offering, such as the IOPS of the volume.

##### Volume functions

A decent effort has been made to ensure that the most common volume functions are either cleanly supported or blocked. However, for the first release it is advisable to mark this feature as *experimental*: the code base is complex and there are certainly edge cases to be found.

Many of these functions could eventually be supported, such as creating templates from encrypted volumes, but the effort and size of the change is already substantial.

Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume

Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).

##### Primary Storage Support

For storage developers, adding encryption support involves:

1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during storage allocation to match volumes that request encryption to storage that supports it.

2. Implementing the encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that request encryption. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume, or (as with the KVM storage types) as complex as managing volume details directly on the hypervisor host. The data objects passed to the storage driver will contain volume passphrases if encryption is requested.
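The capability matching in step 1 can be sketched as follows. This is an illustrative Python sketch of the logic, not the project's Java code; the `allocatable_pool_types` helper and the simplified `StoragePoolType` shape are hypothetical, though the capability flags mirror the real enum (shared, overprovisioning, and the new encryption flag).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StoragePoolType:
    """Simplified mirror of the CloudStack pool-type capability flags."""
    name: str
    shared: bool
    overprovisioning: bool
    encryption: bool  # the new capability advertised for encryption support

FILESYSTEM = StoragePoolType("Filesystem", False, True, True)
NETWORK_FILESYSTEM = StoragePoolType("NetworkFilesystem", True, True, True)
RBD = StoragePoolType("RBD", True, True, False)
POWERFLEX = StoragePoolType("PowerFlex", True, True, True)

def allocatable_pool_types(pool_types, requires_encryption):
    """A volume that requests encryption may only be allocated on pool types
    that advertise encryption support; other volumes can go anywhere."""
    if not requires_encryption:
        return list(pool_types)
    return [t for t in pool_types if t.encryption]
```

In the real code the equivalent check consults `StoragePoolType.supportsEncryption()` during allocation.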

##### Scheduling

For the KVM implementations above, we depend on the KVM hosts having volume encryption tools available. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is determined by a probe during agent startup that looks for a functioning `cryptsetup` and encryption support in `qemu-img`. The result is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.

The `EndPointSelector` interface and `DefaultEndpointSelector` have new methods that allow the caller to ask for endpoints that support encryption. Storage drivers can use these to find the proper hosts for storage commands that involve encryption. Not all volume activities require a host that supports encryption (for example, a snapshot backup is a simple file copy); this is why the interface was modified to let the storage driver decide, rather than just passing the data objects to the EndpointSelector and letting the implementation decide.

VM scheduling has also been modified: when a VM start is requested and any attached volume requires encryption, hosts that don't support encryption are filtered out.
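That host-filtering step can be sketched like this. Again, this is illustrative Python rather than the actual deployment-planner code; the `hosts_for_vm` helper and dict shapes are hypothetical, but the `host.volume.encryption` detail key matches the one the agent advertises.

```python
def hosts_for_vm(hosts, volumes):
    """Filter candidate hosts for a VM start: if any attached volume requires
    encryption, only hosts advertising encryption support remain eligible."""
    needs_encryption = any(v.get("requires_encryption") for v in volumes)
    if not needs_encryption:
        return list(hosts)
    return [h for h in hosts
            if h.get("details", {}).get("host.volume.encryption") == "true"]
```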

##### DB Changes

A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table and is encrypted using the CloudStack installation's standard configured DB encryption. A field referencing this passphrase has been added to the volumes table, with a foreign key to ensure referenced passphrases can't be removed from the database. The volumes table also gains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.
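For illustration, passphrase generation boils down to drawing from a cryptographically secure RNG. This Python sketch is a stand-in for the Java side (which uses SecureRandom, and hence can block when system entropy is scarce, per the rng-tools note above); `generate_passphrase` is a hypothetical name.

```python
import secrets

def generate_passphrase(num_bytes: int = 32) -> str:
    """Draw a random volume passphrase from the OS CSPRNG. The result is
    stored DB-encrypted in the passphrase table before the volume's first use."""
    return secrets.token_urlsafe(num_bytes)
```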

### KVM Agent

For the KVM storage pool types supported, the encryption has been implemented in Qemu itself, using its built-in LUKS storage support. This means the storage remains encrypted all the way to the VM process and is only decrypted where the block device becomes visible to the guest. This may not be necessary to implement encryption for *your* storage pool type; perhaps you have a kernel driver that decrypts before the block device appears on the system. However, it seemed like the simplest common place to terminate the encryption, and it provides the smallest surface area for decrypted guest data.

For qcow2-based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block-based storage (currently just ScaleIO), the `cryptsetup` utility is used to format the block device as LUKS for data disks, while `qemu-img` and its LUKS support are used for template copy.

Any volume that requires encryption is handed down to the KVM agent with its passphrase as a byte array. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, the passphrase is used in two ways:

1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.

2. In cases where `qemu-img` needs the passphrase for volume operations, it is written to a `KeyFile` on the CloudStack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable` that deletes the file when closed, which lets us wrap volume operations in try-with-resources and have the key file removed regardless of outcome.
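The `KeyFile` behavior (write the secret to a temporary file, delete it on close) maps naturally onto a context manager. Here is a Python sketch of the same idea; the real implementation is a Java `Closeable`, and this `KeyFile` class is only an illustration.

```python
import os
import tempfile

class KeyFile:
    """Holds a secret in a temporary file and deletes it on close, so callers
    can use a with-block and the key material is removed even on errors."""

    def __init__(self, secret: bytes, directory: str = None):
        # directory would be the agent's configured tmpfs in the real setup
        fd, self.path = tempfile.mkstemp(dir=directory)
        with os.fdopen(fd, "wb") as f:
            f.write(secret)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()

    def close(self):
        if os.path.exists(self.path):
            os.remove(self.path)
```

Usage mirrors Java's try-with-resources: `with KeyFile(secret) as kf: run_qemu_img(kf.path)`.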

In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags, modeled as `QemuObject` and `QemuImageOptions`. These `qemu-img` flags are designed to supersede some of the older flags in use today (such as those choosing file formats and paths), and an effort could be made to switch over to them wholesale. For now, however, we have kept the existing functions and added some wrapping for backward compatibility, so callers of `QemuImg` can use either style.
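For illustration, here is roughly what the assembled command lines look like for an encrypted qcow2 volume. The `--object secret,...`, `--image-opts`, and `encrypt.format=luks`/`encrypt.key-secret` syntax is standard `qemu-img`; the Python helpers below are only a sketch of what the `QemuObject`/`QemuImageOptions` modeling produces, with hypothetical function and file names.

```python
def qemu_img_create_encrypted(path, size, secret_file, secret_id="sec0"):
    """Build a qemu-img command that creates a LUKS-encrypted qcow2 image,
    referencing the passphrase through a named secret object."""
    return [
        "qemu-img", "create", "-f", "qcow2",
        "--object", f"secret,id={secret_id},file={secret_file}",
        "-o", f"encrypt.format=luks,encrypt.key-secret={secret_id}",
        path, size,
    ]

def qemu_img_info_encrypted(path, secret_file, secret_id="sec0"):
    """Build a qemu-img command that opens the same image via --image-opts,
    which is required once the image needs per-option encryption settings."""
    return [
        "qemu-img", "info",
        "--object", f"secret,id={secret_id},file={secret_file}",
        "--image-opts",
        f"driver=qcow2,file.filename={path},encrypt.key-secret={secret_id}",
    ]
```

In the real flow the `secret_file` would be the tmpfs `KeyFile` holding the volume passphrase.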

It should be noted that there are also a few different Enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, for supported encryption format strings the `cryptsetup` utility wrapper has `LuksType.LUKS`, while `QemuImg` has `QemuImg.PhysicalDiskFormat.LUKS`.

Some additional effort could be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details; these may require changes all the way up through the control plane. In practice, however, Libvirt and Qemu currently support only LUKS1. Additionally, the cipher details aren't required in order to use an encrypted volume: since they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the encryption format once at volume creation; it is persisted in the volumes table and available later as needed. In the future, when LUKS2 is standard and fully supported, we could make it the default; old volumes will still reference LUKS1 and have the headers on disk to ensure they remain usable. We could also eventually support an automatic upgrade of the headers, or a volume migration mechanism.

Every version of cryptsetup and qemu-img tested on EL7 and Ubuntu variants that support encryption uses the XTS-AES 256 cipher, a leading industry standard that is widely used today (e.g. by BitLocker and FileVault).

Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
Marcus Sorensen <mls@apple.com> committed 2022-09-26 22:50:59 -06:00 via GitHub
commit 697e12f8f7 (parent d4c6586546)
114 changed files with 4328 additions and 433 deletions

```diff
@@ -40,6 +40,7 @@ public class DiskTO {
     public static final String VMDK = "vmdk";
     public static final String EXPAND_DATASTORE = "expandDatastore";
     public static final String TEMPLATE_RESIGN = "templateResign";
+    public static final String SECRET_CONSUMER_DETAIL = "storageMigrateSecretConsumer";
 
     private DataTO data;
     private Long diskSeq;
```

```diff
@@ -16,6 +16,7 @@
 // under the License.
 package com.cloud.agent.api.to;
 
+import com.cloud.agent.api.LogLevel;
 import com.cloud.storage.Storage.StoragePoolType;
 import com.cloud.storage.StoragePool;
@@ -24,6 +25,7 @@ public class StorageFilerTO {
     String uuid;
     String host;
     String path;
+    @LogLevel(LogLevel.Log4jLevel.Off)
     String userInfo;
     int port;
     StoragePoolType type;
```

```diff
@@ -53,6 +53,7 @@ public interface Host extends StateObject<Status>, Identity, Partition, HAResour
         }
     }
 
     public static final String HOST_UEFI_ENABLE = "host.uefi.enable";
+    public static final String HOST_VOLUME_ENCRYPTION = "host.volume.encryption";
     /**
      * @return name of the machine.
```

```diff
@@ -149,4 +149,8 @@ public interface DiskOffering extends InfrastructureEntity, Identity, InternalId
     boolean isComputeOnly();
 
     boolean getDiskSizeStrictness();
+
+    boolean getEncrypt();
+
+    void setEncrypt(boolean encrypt);
 }
```

```diff
@@ -25,6 +25,7 @@ public class MigrationOptions implements Serializable {
     private String srcPoolUuid;
     private Storage.StoragePoolType srcPoolType;
     private Type type;
+    private ScopeType scopeType;
     private String srcBackingFilePath;
     private boolean copySrcTemplate;
     private String srcVolumeUuid;
@@ -37,18 +38,20 @@ public class MigrationOptions implements Serializable {
     public MigrationOptions() {
     }
 
-    public MigrationOptions(String srcPoolUuid, Storage.StoragePoolType srcPoolType, String srcBackingFilePath, boolean copySrcTemplate) {
+    public MigrationOptions(String srcPoolUuid, Storage.StoragePoolType srcPoolType, String srcBackingFilePath, boolean copySrcTemplate, ScopeType scopeType) {
         this.srcPoolUuid = srcPoolUuid;
         this.srcPoolType = srcPoolType;
         this.type = Type.LinkedClone;
+        this.scopeType = scopeType;
         this.srcBackingFilePath = srcBackingFilePath;
         this.copySrcTemplate = copySrcTemplate;
     }
 
-    public MigrationOptions(String srcPoolUuid, Storage.StoragePoolType srcPoolType, String srcVolumeUuid) {
+    public MigrationOptions(String srcPoolUuid, Storage.StoragePoolType srcPoolType, String srcVolumeUuid, ScopeType scopeType) {
         this.srcPoolUuid = srcPoolUuid;
         this.srcPoolType = srcPoolType;
         this.type = Type.FullClone;
+        this.scopeType = scopeType;
         this.srcVolumeUuid = srcVolumeUuid;
     }
@@ -60,6 +63,8 @@ public class MigrationOptions implements Serializable {
         return srcPoolType;
     }
 
+    public ScopeType getScopeType() { return scopeType; }
+
     public String getSrcBackingFilePath() {
         return srcBackingFilePath;
     }
```

```diff
@@ -130,33 +130,35 @@ public class Storage {
     }
 
     public static enum StoragePoolType {
-        Filesystem(false, true), // local directory
-        NetworkFilesystem(true, true), // NFS
-        IscsiLUN(true, false), // shared LUN, with a clusterfs overlay
-        Iscsi(true, false), // for e.g., ZFS Comstar
-        ISO(false, false), // for iso image
-        LVM(false, false), // XenServer local LVM SR
-        CLVM(true, false),
-        RBD(true, true), // http://libvirt.org/storage.html#StorageBackendRBD
-        SharedMountPoint(true, false),
-        VMFS(true, true), // VMware VMFS storage
-        PreSetup(true, true), // for XenServer, Storage Pool is set up by customers.
-        EXT(false, true), // XenServer local EXT SR
-        OCFS2(true, false),
-        SMB(true, false),
-        Gluster(true, false),
-        PowerFlex(true, true), // Dell EMC PowerFlex/ScaleIO (formerly VxFlexOS)
-        ManagedNFS(true, false),
-        Linstor(true, true),
-        DatastoreCluster(true, true), // for VMware, to abstract pool of clusters
-        StorPool(true, true);
+        Filesystem(false, true, true), // local directory
+        NetworkFilesystem(true, true, true), // NFS
+        IscsiLUN(true, false, false), // shared LUN, with a clusterfs overlay
+        Iscsi(true, false, false), // for e.g., ZFS Comstar
+        ISO(false, false, false), // for iso image
+        LVM(false, false, false), // XenServer local LVM SR
+        CLVM(true, false, false),
+        RBD(true, true, false), // http://libvirt.org/storage.html#StorageBackendRBD
+        SharedMountPoint(true, false, true),
+        VMFS(true, true, false), // VMware VMFS storage
+        PreSetup(true, true, false), // for XenServer, Storage Pool is set up by customers.
+        EXT(false, true, false), // XenServer local EXT SR
+        OCFS2(true, false, false),
+        SMB(true, false, false),
+        Gluster(true, false, false),
+        PowerFlex(true, true, true), // Dell EMC PowerFlex/ScaleIO (formerly VxFlexOS)
+        ManagedNFS(true, false, false),
+        Linstor(true, true, false),
+        DatastoreCluster(true, true, false), // for VMware, to abstract pool of clusters
+        StorPool(true, true, false);
 
         private final boolean shared;
         private final boolean overprovisioning;
+        private final boolean encryption;
 
-        StoragePoolType(boolean shared, boolean overprovisioning) {
+        StoragePoolType(boolean shared, boolean overprovisioning, boolean encryption) {
             this.shared = shared;
             this.overprovisioning = overprovisioning;
+            this.encryption = encryption;
         }
 
         public boolean isShared() {
@@ -166,6 +168,8 @@ public class Storage {
         public boolean supportsOverProvisioning() {
             return overprovisioning;
         }
+
+        public boolean supportsEncryption() { return encryption; }
     }
 
     public static List<StoragePoolType> getNonSharedStoragePoolTypes() {
```

```diff
@@ -247,4 +247,12 @@ public interface Volume extends ControlledEntity, Identity, InternalIdentity, Ba
     String getExternalUuid();
 
     void setExternalUuid(String externalUuid);
+
+    public Long getPassphraseId();
+
+    public void setPassphraseId(Long id);
+
+    public String getEncryptFormat();
+
+    public void setEncryptFormat(String encryptFormat);
 }
```

```diff
@@ -44,6 +44,7 @@ public class DiskProfile {
     private String cacheMode;
     private Long minIops;
     private Long maxIops;
+    private boolean requiresEncryption;
 
     private HypervisorType hyperType;
@@ -63,6 +64,12 @@ public class DiskProfile {
         this.volumeId = volumeId;
     }
 
+    public DiskProfile(long volumeId, Volume.Type type, String name, long diskOfferingId, long size, String[] tags, boolean useLocalStorage, boolean recreatable,
+            Long templateId, boolean requiresEncryption) {
+        this(volumeId, type, name, diskOfferingId, size, tags, useLocalStorage, recreatable, templateId);
+        this.requiresEncryption = requiresEncryption;
+    }
+
     public DiskProfile(Volume vol, DiskOffering offering, HypervisorType hyperType) {
         this(vol.getId(),
             vol.getVolumeType(),
@@ -75,6 +82,7 @@ public class DiskProfile {
             null);
         this.hyperType = hyperType;
         this.provisioningType = offering.getProvisioningType();
+        this.requiresEncryption = offering.getEncrypt() || vol.getPassphraseId() != null;
     }
 
     public DiskProfile(DiskProfile dp) {
@@ -230,7 +238,6 @@ public class DiskProfile {
         return cacheMode;
     }
 
-
     public Long getMinIops() {
         return minIops;
     }
@@ -247,4 +254,7 @@ public class DiskProfile {
         this.maxIops = maxIops;
     }
+
+    public boolean requiresEncryption() { return requiresEncryption; }
+    public void setEncryption(boolean encrypt) { this.requiresEncryption = encrypt; }
 }
```

```diff
@@ -109,6 +109,9 @@ public class ApiConstants {
     public static final String CUSTOM_JOB_ID = "customjobid";
     public static final String CURRENT_START_IP = "currentstartip";
     public static final String CURRENT_END_IP = "currentendip";
+    public static final String ENCRYPT = "encrypt";
+    public static final String ENCRYPT_ROOT = "encryptroot";
+    public static final String ENCRYPTION_SUPPORTED = "encryptionsupported";
     public static final String MIN_IOPS = "miniops";
     public static final String MAX_IOPS = "maxiops";
     public static final String HYPERVISOR_SNAPSHOT_RESERVE = "hypervisorsnapshotreserve";
```

```diff
@@ -163,9 +163,14 @@ public class CreateDiskOfferingCmd extends BaseCmd {
     @Parameter(name = ApiConstants.DISK_SIZE_STRICTNESS, type = CommandType.BOOLEAN, description = "To allow or disallow the resize operation on the disks created from this disk offering, if the flag is true then resize is not allowed", since = "4.17")
     private Boolean diskSizeStrictness;
 
+    @Parameter(name = ApiConstants.ENCRYPT, type = CommandType.BOOLEAN, required=false, description = "Volumes using this offering should be encrypted", since = "4.18")
+    private Boolean encrypt;
+
     @Parameter(name = ApiConstants.DETAILS, type = CommandType.MAP, description = "details to specify disk offering parameters", since = "4.16")
     private Map details;
 
     /////////////////////////////////////////////////////
     /////////////////// Accessors ///////////////////////
     /////////////////////////////////////////////////////
@@ -202,6 +207,13 @@ public class CreateDiskOfferingCmd extends BaseCmd {
         return maxIops;
     }
 
+    public boolean getEncrypt() {
+        if (encrypt == null) {
+            return false;
+        }
+        return encrypt;
+    }
+
     public List<Long> getDomainIds() {
         if (CollectionUtils.isNotEmpty(domainIds)) {
             Set<Long> set = new LinkedHashSet<>(domainIds);
```

```diff
@@ -242,6 +242,10 @@ public class CreateServiceOfferingCmd extends BaseCmd {
             since = "4.17")
     private Boolean diskOfferingStrictness;
 
+    @Parameter(name = ApiConstants.ENCRYPT_ROOT, type = CommandType.BOOLEAN, description = "VMs using this offering require root volume encryption", since="4.18")
+    private Boolean encryptRoot;
+
     /////////////////////////////////////////////////////
     /////////////////// Accessors ///////////////////////
     /////////////////////////////////////////////////////
@@ -472,6 +476,13 @@ public class CreateServiceOfferingCmd extends BaseCmd {
         return diskOfferingStrictness == null ? false : diskOfferingStrictness;
     }
 
+    public boolean getEncryptRoot() {
+        if (encryptRoot != null) {
+            return encryptRoot;
+        }
+        return false;
+    }
+
     /////////////////////////////////////////////////////
     /////////////// API Implementation///////////////////
     /////////////////////////////////////////////////////
```

```diff
@@ -58,6 +58,9 @@ public class ListDiskOfferingsCmd extends BaseListDomainResourcesCmd {
     @Parameter(name = ApiConstants.STORAGE_ID, type = CommandType.UUID, entityType = StoragePoolResponse.class, description = "The ID of the storage pool, tags of the storage pool are used to filter the offerings", since = "4.17")
     private Long storagePoolId;
 
+    @Parameter(name = ApiConstants.ENCRYPT, type = CommandType.BOOLEAN, description = "listed offerings support disk encryption", since = "4.18")
+    private Boolean encrypt;
+
     /////////////////////////////////////////////////////
     /////////////////// Accessors ///////////////////////
     /////////////////////////////////////////////////////
@@ -78,9 +81,9 @@ public class ListDiskOfferingsCmd extends BaseListDomainResourcesCmd {
         return volumeId;
     }
 
-    public Long getStoragePoolId() {
-        return storagePoolId;
-    }
+    public Long getStoragePoolId() { return storagePoolId; }
+
+    public Boolean getEncrypt() { return encrypt; }
 
     /////////////////////////////////////////////////////
     /////////////// API Implementation///////////////////
```

```diff
@@ -83,6 +83,12 @@ public class ListServiceOfferingsCmd extends BaseListDomainResourcesCmd {
             since = "4.15")
     private Integer cpuSpeed;
 
+    @Parameter(name = ApiConstants.ENCRYPT_ROOT,
+            type = CommandType.BOOLEAN,
+            description = "listed offerings support root disk encryption",
+            since = "4.18")
+    private Boolean encryptRoot;
+
     /////////////////////////////////////////////////////
     /////////////////// Accessors ///////////////////////
     /////////////////////////////////////////////////////
@@ -123,6 +129,8 @@ public class ListServiceOfferingsCmd extends BaseListDomainResourcesCmd {
         return cpuSpeed;
     }
 
+    public Boolean getEncryptRoot() { return encryptRoot; }
+
     /////////////////////////////////////////////////////
     /////////////// API Implementation///////////////////
     /////////////////////////////////////////////////////
```

```diff
@@ -226,6 +226,10 @@ public class CreateSnapshotCmd extends BaseAsyncCreateCmd {
                 throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, String.format("Snapshot from volume [%s] was not found in database.", getVolumeUuid()));
             }
         } catch (Exception e) {
+            if (e.getCause() instanceof UnsupportedOperationException) {
+                throw new ServerApiException(ApiErrorCode.UNSUPPORTED_ACTION_ERROR, String.format("Failed to create snapshot due to unsupported operation: %s", e.getCause().getMessage()));
+            }
             String errorMessage = "Failed to create snapshot due to an internal error creating snapshot for volume " + getVolumeUuid();
             s_logger.error(errorMessage, e);
             throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, errorMessage);
```

```diff
@@ -156,10 +156,15 @@ public class DiskOfferingResponse extends BaseResponseWithAnnotations {
     @Param(description = "the vsphere storage policy tagged to the disk offering in case of VMware", since = "4.15")
     private String vsphereStoragePolicy;
 
     @SerializedName(ApiConstants.DISK_SIZE_STRICTNESS)
     @Param(description = "To allow or disallow the resize operation on the disks created from this disk offering, if the flag is true then resize is not allowed", since = "4.17")
     private Boolean diskSizeStrictness;
 
+    @SerializedName(ApiConstants.ENCRYPT)
+    @Param(description = "Whether disks using this offering will be encrypted on primary storage", since = "4.18")
+    private Boolean encrypt;
+
     @SerializedName(ApiConstants.DETAILS)
     @Param(description = "additional key/value details tied with this disk offering", since = "4.17")
     private Map<String, String> details;
@@ -381,6 +386,8 @@ public class DiskOfferingResponse extends BaseResponseWithAnnotations {
         this.diskSizeStrictness = diskSizeStrictness;
     }
 
+    public void setEncrypt(Boolean encrypt) { this.encrypt = encrypt; }
+
     public void setDetails(Map<String, String> details) {
         this.details = details;
     }
```

```diff
@@ -270,6 +270,10 @@ public class HostResponse extends BaseResponseWithAnnotations {
     @Param(description = "true if the host has capability to support UEFI boot")
     private Boolean uefiCapabilty;
 
+    @SerializedName(ApiConstants.ENCRYPTION_SUPPORTED)
+    @Param(description = "true if the host supports encryption", since = "4.18")
+    private Boolean encryptionSupported;
+
     @Override
     public String getObjectId() {
         return this.getId();
@@ -533,6 +537,13 @@ public class HostResponse extends BaseResponseWithAnnotations {
         detailsCopy.remove("username");
         detailsCopy.remove("password");
 
+        if (detailsCopy.containsKey(Host.HOST_VOLUME_ENCRYPTION)) {
+            this.setEncryptionSupported(Boolean.parseBoolean((String) detailsCopy.get(Host.HOST_VOLUME_ENCRYPTION)));
+            detailsCopy.remove(Host.HOST_VOLUME_ENCRYPTION);
+        } else {
+            this.setEncryptionSupported(new Boolean(false)); // default
+        }
+
         this.details = detailsCopy;
     }
@@ -718,4 +729,8 @@ public class HostResponse extends BaseResponseWithAnnotations {
     public void setUefiCapabilty(Boolean hostCapability) {
         this.uefiCapabilty = hostCapability;
     }
+
+    public void setEncryptionSupported(Boolean encryptionSupported) {
+        this.encryptionSupported = encryptionSupported;
+    }
 }
```

```diff
@@ -226,6 +226,10 @@ public class ServiceOfferingResponse extends BaseResponseWithAnnotations {
     @Param(description = "the display text of the disk offering", since = "4.17")
     private String diskOfferingDisplayText;
 
+    @SerializedName(ApiConstants.ENCRYPT_ROOT)
+    @Param(description = "true if virtual machine root disk will be encrypted on storage", since = "4.18")
+    private Boolean encryptRoot;
+
     public ServiceOfferingResponse() {
     }
@@ -505,6 +509,7 @@ public class ServiceOfferingResponse extends BaseResponseWithAnnotations {
         this.dynamicScalingEnabled = dynamicScalingEnabled;
     }
 
     public Boolean getDiskOfferingStrictness() {
         return diskOfferingStrictness;
     }
@@ -536,4 +541,6 @@ public class ServiceOfferingResponse extends BaseResponseWithAnnotations {
     public String getDiskOfferingDisplayText() {
         return diskOfferingDisplayText;
     }
+
+    public void setEncryptRoot(Boolean encrypt) { this.encryptRoot = encrypt; }
 }
```


@@ -20,8 +20,11 @@
package com.cloud.agent.api.storage;
import com.cloud.agent.api.Command;
import com.cloud.agent.api.LogLevel;
import com.cloud.agent.api.to.StorageFilerTO;
import java.util.Arrays;
public class ResizeVolumeCommand extends Command {
private String path;
private StorageFilerTO pool;
@@ -35,6 +38,10 @@ public class ResizeVolumeCommand extends Command {
private boolean managed;
private String iScsiName;
@LogLevel(LogLevel.Log4jLevel.Off)
private byte[] passphrase;
private String encryptFormat;
protected ResizeVolumeCommand() {
}
@@ -48,6 +55,13 @@ public class ResizeVolumeCommand extends Command {
this.managed = false;
}
public ResizeVolumeCommand(String path, StorageFilerTO pool, Long currentSize, Long newSize, boolean shrinkOk, String vmInstance,
String chainInfo, byte[] passphrase, String encryptFormat) {
this(path, pool, currentSize, newSize, shrinkOk, vmInstance, chainInfo);
this.passphrase = passphrase;
this.encryptFormat = encryptFormat;
}
public ResizeVolumeCommand(String path, StorageFilerTO pool, Long currentSize, Long newSize, boolean shrinkOk, String vmInstance, String chainInfo) {
this(path, pool, currentSize, newSize, shrinkOk, vmInstance);
this.chainInfo = chainInfo;
@@ -89,6 +103,16 @@ public class ResizeVolumeCommand extends Command {
public String getChainInfo() { return chainInfo; }
public String getEncryptFormat() { return encryptFormat; }
public byte[] getPassphrase() { return passphrase; }
public void clearPassphrase() {
if (this.passphrase != null) {
Arrays.fill(this.passphrase, (byte) 0);
}
}
/**
* {@inheritDoc}
*/


@@ -19,6 +19,7 @@
package com.cloud.storage.resource;
import com.cloud.serializer.GsonHelper;
import org.apache.cloudstack.agent.directdownload.DirectDownloadCommand;
import org.apache.cloudstack.storage.to.VolumeObjectTO;
import org.apache.cloudstack.storage.command.CheckDataStoreStoragePolicyComplainceCommand;
@@ -48,6 +49,7 @@ import com.google.gson.Gson;
public class StorageSubsystemCommandHandlerBase implements StorageSubsystemCommandHandler {
private static final Logger s_logger = Logger.getLogger(StorageSubsystemCommandHandlerBase.class);
protected static final Gson s_gsonLogger = GsonHelper.getGsonLogger();
protected StorageProcessor processor;
public StorageSubsystemCommandHandlerBase(StorageProcessor processor) {
@@ -175,7 +177,7 @@ public class StorageSubsystemCommandHandlerBase implements StorageSubsystemCommandHandler {
private void logCommand(Command cmd) {
try {
s_logger.debug(String.format("Executing command %s: [%s].", cmd.getClass().getSimpleName(), s_gsonLogger.toJson(cmd)));
} catch (Exception e) {
s_logger.debug(String.format("Executing command %s.", cmd.getClass().getSimpleName()));
}


@@ -19,6 +19,7 @@
package org.apache.cloudstack.storage.to;
import com.cloud.agent.api.LogLevel;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
import com.cloud.agent.api.to.DataObjectType;
@@ -30,6 +31,8 @@ import com.cloud.storage.MigrationOptions;
import com.cloud.storage.Storage;
import com.cloud.storage.Volume;
import java.util.Arrays;
public class VolumeObjectTO implements DataTO {
private String uuid;
private Volume.Type volumeType;
@@ -68,6 +71,10 @@ public class VolumeObjectTO implements DataTO {
private String updatedDataStoreUUID;
private String vSphereStoragePolicyId;
@LogLevel(LogLevel.Log4jLevel.Off)
private byte[] passphrase;
private String encryptFormat;
public VolumeObjectTO() {
}
@@ -110,6 +117,8 @@ public class VolumeObjectTO implements DataTO {
this.directDownload = volume.isDirectDownload();
this.deployAsIs = volume.isDeployAsIs();
this.vSphereStoragePolicyId = volume.getvSphereStoragePolicyId();
this.passphrase = volume.getPassphrase();
this.encryptFormat = volume.getEncryptFormat();
}
public String getUuid() {
@@ -357,4 +366,22 @@ public class VolumeObjectTO implements DataTO {
public void setvSphereStoragePolicyId(String vSphereStoragePolicyId) {
this.vSphereStoragePolicyId = vSphereStoragePolicyId;
}
public String getEncryptFormat() { return encryptFormat; }
public void setEncryptFormat(String encryptFormat) { this.encryptFormat = encryptFormat; }
public byte[] getPassphrase() { return passphrase; }
public void setPassphrase(byte[] passphrase) { this.passphrase = passphrase; }
public void clearPassphrase() {
if (this.passphrase != null) {
Arrays.fill(this.passphrase, (byte) 0);
}
}
public boolean requiresEncryption() {
return passphrase != null && passphrase.length > 0;
}
}
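The transfer objects above deliberately hold the passphrase as a `byte[]` rather than a `String`, so the key material can be zeroed in place once the agent has consumed it. A minimal standalone sketch of that wipe-after-use pattern (class and field names here are illustrative, not CloudStack's):

```java
import java.util.Arrays;

// Sketch of the wipe-after-use pattern used by VolumeObjectTO and
// ResizeVolumeCommand: key material lives in a byte[] so it can be
// overwritten, unlike an immutable String.
public class PassphraseHolder {
    private byte[] passphrase;

    public PassphraseHolder(byte[] passphrase) {
        this.passphrase = passphrase;
    }

    public byte[] getPassphrase() {
        return passphrase;
    }

    public boolean requiresEncryption() {
        return passphrase != null && passphrase.length > 0;
    }

    // Overwrite the key material in place; the array reference stays valid
    // but every byte is zero afterwards.
    public void clearPassphrase() {
        if (passphrase != null) {
            Arrays.fill(passphrase, (byte) 0);
        }
    }

    public static void main(String[] args) {
        PassphraseHolder holder = new PassphraseHolder("s3cret".getBytes());
        System.out.println(holder.requiresEncryption()); // true
        holder.clearPassphrase();
        // The array length is unchanged, so requiresEncryption() still
        // returns true: callers must clear only after the command is done.
        System.out.println(holder.requiresEncryption());
    }
}
```

Note the subtlety: clearing zeroes the bytes but does not shrink the array, so a `requiresEncryption()`-style length check must happen before, not after, the wipe.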

debian/control

@@ -15,14 +15,14 @@ Description: A common package which contains files which are shared by several C
Package: cloudstack-management
Architecture: all
Depends: ${python3:Depends}, openjdk-11-jre-headless | java11-runtime-headless | java11-runtime | openjdk-11-jre-headless | zulu-11, cloudstack-common (= ${source:Version}), net-tools, sudo, python3-mysql.connector, augeas-tools, mysql-client | mariadb-client, adduser, bzip2, ipmitool, file, gawk, iproute2, qemu-utils, haveged, python3-dnspython, lsb-release, init-system-helpers (>= 1.14~), python3-setuptools
Conflicts: cloud-server, cloud-client, cloud-client-ui
Description: CloudStack server library
 The CloudStack management server
Package: cloudstack-agent
Architecture: all
Depends: ${python:Depends}, ${python3:Depends}, openjdk-11-jre-headless | java11-runtime-headless | java11-runtime | openjdk-11-jre-headless | zulu-11, cloudstack-common (= ${source:Version}), lsb-base (>= 9), openssh-client, qemu-kvm (>= 2.5) | qemu-system-x86 (>= 5.2), libvirt-bin (>= 1.3) | libvirt-daemon-system (>= 3.0), iproute2, ebtables, vlan, ipset, python3-libvirt, ethtool, iptables, cryptsetup, rng-tools, lsb-release, aria2, ufw, apparmor
Recommends: init-system-helpers
Conflicts: cloud-agent, cloud-agent-libs, cloud-agent-deps, cloud-agent-scripts
Description: CloudStack agent


@@ -23,14 +23,22 @@ import java.util.List;
public interface EndPointSelector {
EndPoint select(DataObject srcData, DataObject destData);
EndPoint select(DataObject srcData, DataObject destData, boolean encryptionSupportRequired);
EndPoint select(DataObject srcData, DataObject destData, StorageAction action);
EndPoint select(DataObject srcData, DataObject destData, StorageAction action, boolean encryptionSupportRequired);
EndPoint select(DataObject object);
EndPoint select(DataStore store);
EndPoint select(DataObject object, boolean encryptionSupportRequired);
EndPoint select(DataObject object, StorageAction action);
EndPoint select(DataObject object, StorageAction action, boolean encryptionSupportRequired);
List<EndPoint> selectAll(DataStore store);
List<EndPoint> findAllEndpointsForScope(DataStore store);


@@ -93,5 +93,7 @@ public interface VolumeInfo extends DataObject, Volume {
public String getvSphereStoragePolicyId();
public byte[] getPassphrase();
Volume getVolume();
}


@@ -38,6 +38,8 @@ import javax.inject.Inject;
import javax.naming.ConfigurationException;
import com.cloud.storage.StorageUtil;
import org.apache.cloudstack.secret.dao.PassphraseDao;
import org.apache.cloudstack.secret.PassphraseVO;
import org.apache.cloudstack.api.command.admin.vm.MigrateVMCmd;
import org.apache.cloudstack.api.command.admin.volume.MigrateVolumeCmdByAdmin;
import org.apache.cloudstack.api.command.user.volume.MigrateVolumeCmd;
@@ -232,6 +234,8 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
private SecondaryStorageVmDao secondaryStorageVmDao;
@Inject
VolumeApiService _volumeApiService;
@Inject
PassphraseDao passphraseDao;
@Inject
protected SnapshotHelper snapshotHelper;
@@ -271,7 +275,8 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
// Find a destination storage pool with the specified criteria
DiskOffering diskOffering = _entityMgr.findById(DiskOffering.class, volumeInfo.getDiskOfferingId());
DiskProfile dskCh = new DiskProfile(volumeInfo.getId(), volumeInfo.getVolumeType(), volumeInfo.getName(), diskOffering.getId(), diskOffering.getDiskSize(), diskOffering.getTagsArray(),
diskOffering.isUseLocalStorage(), diskOffering.isRecreatable(), null, (diskOffering.getEncrypt() || volumeInfo.getPassphraseId() != null));
dskCh.setHyperType(dataDiskHyperType);
storageMgr.setDiskProfileThrottling(dskCh, null, diskOffering);
@@ -305,6 +310,13 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
newVol.setInstanceId(oldVol.getInstanceId());
newVol.setRecreatable(oldVol.isRecreatable());
newVol.setFormat(oldVol.getFormat());
if (oldVol.getPassphraseId() != null) {
PassphraseVO passphrase = passphraseDao.persist(new PassphraseVO());
passphrase.clearPassphrase();
newVol.setPassphraseId(passphrase.getId());
}
return _volsDao.persist(newVol);
}
@@ -457,6 +469,10 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
Pair<Pod, Long> pod = null;
DiskOffering diskOffering = _entityMgr.findById(DiskOffering.class, volume.getDiskOfferingId());
if (diskOffering.getEncrypt()) {
VolumeVO vol = (VolumeVO) volume;
volume = setPassphraseForVolumeEncryption(vol);
}
DataCenter dc = _entityMgr.findById(DataCenter.class, volume.getDataCenterId());
DiskProfile dskCh = new DiskProfile(volume, diskOffering, snapshot.getHypervisorType());
@@ -581,21 +597,21 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
}
protected DiskProfile createDiskCharacteristics(VolumeInfo volumeInfo, VirtualMachineTemplate template, DataCenter dc, DiskOffering diskOffering) {
boolean requiresEncryption = diskOffering.getEncrypt() || volumeInfo.getPassphraseId() != null;
if (volumeInfo.getVolumeType() == Type.ROOT && Storage.ImageFormat.ISO != template.getFormat()) {
String templateToString = getReflectOnlySelectedFields(template);
String zoneToString = getReflectOnlySelectedFields(dc);
TemplateDataStoreVO ss = _vmTemplateStoreDao.findByTemplateZoneDownloadStatus(template.getId(), dc.getId(), VMTemplateStorageResourceAssoc.Status.DOWNLOADED);
if (ss == null) {
throw new CloudRuntimeException(String.format("Template [%s] has not been completely downloaded to the zone [%s].",
templateToString, zoneToString));
}
return new DiskProfile(volumeInfo.getId(), volumeInfo.getVolumeType(), volumeInfo.getName(), diskOffering.getId(), ss.getSize(), diskOffering.getTagsArray(), diskOffering.isUseLocalStorage(),
diskOffering.isRecreatable(), Storage.ImageFormat.ISO != template.getFormat() ? template.getId() : null, requiresEncryption);
} else {
return new DiskProfile(volumeInfo.getId(), volumeInfo.getVolumeType(), volumeInfo.getName(), diskOffering.getId(), diskOffering.getDiskSize(), diskOffering.getTagsArray(),
diskOffering.isUseLocalStorage(), diskOffering.isRecreatable(), null, requiresEncryption);
}
}
@@ -653,8 +669,16 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
storageMgr.setDiskProfileThrottling(dskCh, null, diskOffering);
}
if (diskOffering != null) {
if (diskOffering.isCustomized()) {
dskCh.setSize(size);
}
if (diskOffering.getEncrypt()) {
VolumeVO vol = _volsDao.findById(volumeInfo.getId());
setPassphraseForVolumeEncryption(vol);
volumeInfo = volFactory.getVolume(volumeInfo.getId());
}
}
dskCh.setHyperType(hyperType);
@@ -697,7 +721,6 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
throw new CloudRuntimeException(msg);
}
}
return result.getVolume();
} catch (InterruptedException | ExecutionException e) {
String msg = String.format("Failed to create volume [%s] due to [%s].", volumeToString, e.getMessage());
@@ -1598,6 +1621,10 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
destPool = dataStoreMgr.getDataStore(pool.getId(), DataStoreRole.Primary);
}
if (vol.getState() == Volume.State.Allocated || vol.getState() == Volume.State.Creating) {
DiskOffering diskOffering = _entityMgr.findById(DiskOffering.class, vol.getDiskOfferingId());
if (diskOffering.getEncrypt()) {
vol = setPassphraseForVolumeEncryption(vol);
}
newVol = vol;
} else {
newVol = switchVolume(vol, vm);
@@ -1715,6 +1742,20 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
return new Pair<VolumeVO, DataStore>(newVol, destPool);
}
private VolumeVO setPassphraseForVolumeEncryption(VolumeVO volume) {
if (volume.getPassphraseId() != null) {
return volume;
}
s_logger.debug("Creating passphrase for the volume: " + volume.getName());
long startTime = System.currentTimeMillis();
PassphraseVO passphrase = passphraseDao.persist(new PassphraseVO());
passphrase.clearPassphrase();
volume.setPassphraseId(passphrase.getId());
long finishTime = System.currentTimeMillis();
s_logger.debug("Creating and persisting passphrase took: " + (finishTime - startTime) + " ms for the volume: " + volume.toString());
return _volsDao.persist(volume);
}
@Override
public void prepare(VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, InsufficientStorageCapacityException, ConcurrentOperationException, StorageAccessException {
if (dest == null) {

@@ -129,6 +129,8 @@ public class DiskOfferingVO implements DiskOffering {
@Column(name = "iops_write_rate_max_length")
private Long iopsWriteRateMaxLength;
@Column(name = "encrypt")
private boolean encrypt;
@Column(name = "cache_mode", updatable = true, nullable = false)
@Enumerated(value = EnumType.STRING)
@@ -568,10 +570,17 @@ public class DiskOfferingVO implements DiskOffering {
return hypervisorSnapshotReserve;
}
@Override
public boolean getEncrypt() { return encrypt; }
@Override
public void setEncrypt(boolean encrypt) { this.encrypt = encrypt; }
public boolean isShared() {
return !useLocalStorage;
}
public boolean getDiskSizeStrictness() {
return diskSizeStrictness;
}


@@ -32,11 +32,12 @@ import javax.persistence.Temporal;
import javax.persistence.TemporalType;
import javax.persistence.Transient;
import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
import com.cloud.storage.Storage.ProvisioningType;
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.db.GenericDao;
@Entity
@Table(name = "volumes")
@@ -173,6 +174,12 @@ public class VolumeVO implements Volume {
@Transient
private boolean deployAsIs;
@Column(name = "passphrase_id")
private Long passphraseId;
@Column(name = "encrypt_format")
private String encryptFormat;
// Real Constructor
public VolumeVO(Type type, String name, long dcId, long domainId,
long accountId, long diskOfferingId, Storage.ProvisioningType provisioningType, long size,
@@ -500,7 +507,7 @@ public class VolumeVO implements Volume {
@Override
public String toString() {
return new StringBuilder("Vol[").append(id).append("|name=").append(name).append("|vm=").append(instanceId).append("|").append(volumeType).append("]").toString();
}
@Override
@@ -663,4 +670,11 @@ public class VolumeVO implements Volume {
this.externalUuid = externalUuid;
}
public Long getPassphraseId() { return passphraseId; }
public void setPassphraseId(Long id) { this.passphraseId = id; }
public String getEncryptFormat() { return encryptFormat; }
public void setEncryptFormat(String encryptFormat) { this.encryptFormat = encryptFormat; }
}
} }


@@ -102,6 +102,13 @@ public interface VolumeDao extends GenericDao<VolumeVO, Long>, StateDao<Volume.S
List<VolumeVO> findIncludingRemovedByZone(long zoneId);
/**
* Lists all volumes using a given passphrase ID
* @param passphraseId
* @return list of volumes
*/
List<VolumeVO> listVolumesByPassphraseId(long passphraseId);
/**
* Gets the Total Primary Storage space allocated for an account
*


@@ -382,6 +382,7 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements VolumeDao
AllFieldsSearch.and("updateTime", AllFieldsSearch.entity().getUpdated(), SearchCriteria.Op.LT);
AllFieldsSearch.and("updatedCount", AllFieldsSearch.entity().getUpdatedCount(), Op.EQ);
AllFieldsSearch.and("name", AllFieldsSearch.entity().getName(), Op.EQ);
AllFieldsSearch.and("passphraseId", AllFieldsSearch.entity().getPassphraseId(), Op.EQ);
AllFieldsSearch.done();
RootDiskStateSearch = createSearchBuilder();
@@ -669,16 +670,25 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements VolumeDao
}
}
@Override
public List<VolumeVO> listVolumesByPassphraseId(long passphraseId) {
SearchCriteria<VolumeVO> sc = AllFieldsSearch.create();
sc.setParameters("passphraseId", passphraseId);
return listBy(sc);
}
@Override
@DB
public boolean remove(Long id) {
TransactionLegacy txn = TransactionLegacy.currentTxn();
txn.start();
s_logger.debug(String.format("Removing volume %s from DB", id));
VolumeVO entry = findById(id);
if (entry != null) {
_tagsDao.removeByIdAndType(id, ResourceObjectType.Volume);
}
boolean result = super.remove(id);
txn.commit();
return result;
}
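The `listVolumesByPassphraseId` lookup exists (presumably) so a shared passphrase row is only deleted once no volume still references it, e.g. after a volume copy. A hypothetical, self-contained illustration of that guard, using a plain map in place of the DAO (all names here are ours, not CloudStack's):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: a passphrase row may be referenced by more than one
// volume, so cleanup must check for remaining references first -- the role
// listVolumesByPassphraseId() plays against the real volumes table.
public class PassphraseCleanup {
    // volumeId -> passphraseId (null value means the volume is unencrypted)
    static boolean canDeletePassphrase(Map<Long, Long> volumes, long passphraseId, long volumeBeingRemoved) {
        List<Long> stillUsing = volumes.entrySet().stream()
                .filter(e -> e.getKey() != volumeBeingRemoved)       // ignore the volume being deleted
                .filter(e -> e.getValue() != null && e.getValue() == passphraseId)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
        return stillUsing.isEmpty();                                 // safe only when nothing else uses it
    }
}
```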


@@ -0,0 +1,73 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.cloudstack.secret;
import com.cloud.utils.db.Encrypt;
import com.cloud.utils.exception.CloudRuntimeException;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;
@Entity
@Table(name = "passphrase")
public class PassphraseVO {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "id")
private Long id;
@Column(name = "passphrase")
@Encrypt
private byte[] passphrase;
public PassphraseVO() {
try {
SecureRandom random = SecureRandom.getInstanceStrong();
byte[] temporary = new byte[48]; // 48 byte random passphrase buffer
this.passphrase = new byte[64]; // 48 byte random passphrase as base64 for usability
random.nextBytes(temporary);
Base64.getEncoder().encode(temporary, this.passphrase);
Arrays.fill(temporary, (byte) 0); // clear passphrase from buffer
} catch (NoSuchAlgorithmException ex) {
throw new CloudRuntimeException("Volume encryption requested but the system is missing an algorithm for generating passphrases", ex);
}
}
public PassphraseVO(PassphraseVO existing) {
// defensive copy, so clearing one VO's passphrase cannot wipe the other's
this.passphrase = (existing.getPassphrase() == null) ? null : Arrays.copyOf(existing.getPassphrase(), existing.getPassphrase().length);
}
public void clearPassphrase() {
if (this.passphrase != null) {
Arrays.fill(this.passphrase, (byte) 0);
}
}
public byte[] getPassphrase() { return this.passphrase; }
public Long getId() { return this.id; }
}
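The generation scheme above has a convenient property: 48 random bytes Base64-encode to exactly 64 bytes (48 / 3 × 4), with no `=` padding, yielding a fixed-size printable passphrase. A standalone sketch of just that step (class and method names here are illustrative):

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

// Sketch of the passphrase generation above: 48 random bytes are
// Base64-encoded into a 64-byte printable buffer, and the raw bytes
// are wiped afterwards.
public class PassphraseDemo {
    public static byte[] generate() {
        try {
            // getInstanceStrong() can block without a good entropy source --
            // hence the rng-tools/haveged recommendation in this PR.
            SecureRandom random = SecureRandom.getInstanceStrong();
            byte[] raw = new byte[48];
            random.nextBytes(raw);
            byte[] encoded = new byte[64]; // 48 / 3 * 4, no '=' padding needed
            Base64.getEncoder().encode(raw, encoded);
            Arrays.fill(raw, (byte) 0);    // wipe the unencoded key material
            return encoded;
        } catch (NoSuchAlgorithmException ex) {
            throw new IllegalStateException("no strong SecureRandom available", ex);
        }
    }

    public static void main(String[] args) {
        byte[] p = generate();
        System.out.println(p.length); // 64
    }
}
```

Because 48 is a multiple of 3, the output never contains Base64 padding characters, which keeps the passphrase safe to pass through tools that treat `=` specially.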


@@ -0,0 +1,25 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.cloudstack.secret.dao;
import org.apache.cloudstack.secret.PassphraseVO;
import com.cloud.utils.db.GenericDao;
public interface PassphraseDao extends GenericDao<PassphraseVO, Long> {
}


@@ -0,0 +1,25 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.cloudstack.secret.dao;
import org.apache.cloudstack.secret.PassphraseVO;
import com.cloud.utils.db.GenericDaoBase;
public class PassphraseDaoImpl extends GenericDaoBase<PassphraseVO, Long> implements PassphraseDao {
}


@@ -302,4 +302,5 @@
<bean id="TemplateDeployAsIsDetailsDaoImpl" class="com.cloud.deployasis.dao.TemplateDeployAsIsDetailsDaoImpl" />
<bean id="UserVmDeployAsIsDetailsDaoImpl" class="com.cloud.deployasis.dao.UserVmDeployAsIsDetailsDaoImpl" />
<bean id="NetworkPermissionDaoImpl" class="org.apache.cloudstack.network.dao.NetworkPermissionDaoImpl" />
<bean id="PassphraseDaoImpl" class="org.apache.cloudstack.secret.dao.PassphraseDaoImpl" />
</beans>


@ -23,6 +23,202 @@ UPDATE `cloud`.`service_offering` so
SET so.limit_cpu_use = 1
WHERE so.default_use = 1 AND so.vm_type IN ('domainrouter', 'secondarystoragevm', 'consoleproxy', 'internalloadbalancervm', 'elasticloadbalancervm');
-- Idempotent ADD COLUMN
DROP PROCEDURE IF EXISTS `cloud`.`IDEMPOTENT_ADD_COLUMN`;
CREATE PROCEDURE `cloud`.`IDEMPOTENT_ADD_COLUMN` (
IN in_table_name VARCHAR(200)
, IN in_column_name VARCHAR(200)
, IN in_column_definition VARCHAR(1000)
)
BEGIN
    DECLARE CONTINUE HANDLER FOR 1060 BEGIN END;
    SET @ddl = CONCAT('ALTER TABLE ', in_table_name);
    SET @ddl = CONCAT(@ddl, ' ', 'ADD COLUMN');
    SET @ddl = CONCAT(@ddl, ' ', in_column_name);
    SET @ddl = CONCAT(@ddl, ' ', in_column_definition);
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END;
-- Add foreign key procedure to link volumes to passphrase table
DROP PROCEDURE IF EXISTS `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY`;
CREATE PROCEDURE `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY` (
IN in_table_name VARCHAR(200),
IN in_foreign_table_name VARCHAR(200),
IN in_foreign_column_name VARCHAR(200)
)
BEGIN
    DECLARE CONTINUE HANDLER FOR 1005 BEGIN END;
    SET @ddl = CONCAT('ALTER TABLE ', in_table_name);
    SET @ddl = CONCAT(@ddl, ' ', ' ADD CONSTRAINT ');
    SET @ddl = CONCAT(@ddl, 'fk_', in_foreign_table_name, '_', in_foreign_column_name);
    SET @ddl = CONCAT(@ddl, ' FOREIGN KEY (', in_foreign_table_name, '_', in_foreign_column_name, ')');
    SET @ddl = CONCAT(@ddl, ' REFERENCES ', in_foreign_table_name, '(', in_foreign_column_name, ')');
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END;
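Worth noting what the procedure above actually generates: both the constraint name and the referencing column name are derived from the foreign table and column. A plain-Java sketch (whitespace simplified; the class name is ours, not part of the PR) reproduces the assembled DDL:

```java
public class FkDdlSketch {
    // Rebuilds (modulo spacing) the statement IDEMPOTENT_ADD_FOREIGN_KEY prepares,
    // making the derived names visible: the constraint becomes fk_<table>_<column>
    // and the referencing column must already exist as <table>_<column>.
    static String buildDdl(String table, String foreignTable, String foreignColumn) {
        String suffix = foreignTable + "_" + foreignColumn;
        return "ALTER TABLE " + table
                + " ADD CONSTRAINT fk_" + suffix
                + " FOREIGN KEY (" + suffix + ")"
                + " REFERENCES " + foreignTable + "(" + foreignColumn + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildDdl("cloud.volumes", "passphrase", "id"));
    }
}
```

This is why the `CALL` for `cloud.volumes` further down depends on the `passphrase_id` column being added first.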
-- Add passphrase table
CREATE TABLE IF NOT EXISTS `cloud`.`passphrase` (
`id` bigint unsigned NOT NULL auto_increment,
`passphrase` varchar(64) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
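The passphrase generation code is not part of this hunk, but the column width lines up with hex-encoded 256-bit keys: 32 bytes of SecureRandom output encode to exactly 64 characters. A standalone illustration (ours, not the CloudStack implementation):

```java
import java.security.SecureRandom;

public class PassphraseWidthSketch {
    // Illustrative only: 32 random bytes (256 bits) hex-encode to the 64
    // characters the passphrase column above can hold.
    static String randomHexPassphrase() {
        byte[] raw = new byte[32];
        new SecureRandom().nextBytes(raw); // can block when entropy is scarce
        StringBuilder sb = new StringBuilder(64);
        for (byte b : raw) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(randomHexPassphrase().length()); // 64
    }
}
```

SecureRandom blocking on low entropy is exactly why the PR recommends installing `rng-tools` on the management server and hypervisors.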
-- Add passphrase column to volumes table
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.volumes', 'passphrase_id', 'bigint unsigned DEFAULT NULL COMMENT ''encryption passphrase id'' ');
CALL `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY`('cloud.volumes', 'passphrase', 'id');
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.volumes', 'encrypt_format', 'varchar(64) DEFAULT NULL COMMENT ''encryption format'' ');
-- Add encrypt column to disk_offering
CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.disk_offering', 'encrypt', 'tinyint(1) DEFAULT 0 COMMENT ''volume encrypt requested'' ');
-- add encryption support to disk offering view
DROP VIEW IF EXISTS `cloud`.`disk_offering_view`;
CREATE VIEW `cloud`.`disk_offering_view` AS
SELECT
`disk_offering`.`id` AS `id`,
`disk_offering`.`uuid` AS `uuid`,
`disk_offering`.`name` AS `name`,
`disk_offering`.`display_text` AS `display_text`,
`disk_offering`.`provisioning_type` AS `provisioning_type`,
`disk_offering`.`disk_size` AS `disk_size`,
`disk_offering`.`min_iops` AS `min_iops`,
`disk_offering`.`max_iops` AS `max_iops`,
`disk_offering`.`created` AS `created`,
`disk_offering`.`tags` AS `tags`,
`disk_offering`.`customized` AS `customized`,
`disk_offering`.`customized_iops` AS `customized_iops`,
`disk_offering`.`removed` AS `removed`,
`disk_offering`.`use_local_storage` AS `use_local_storage`,
`disk_offering`.`hv_ss_reserve` AS `hv_ss_reserve`,
`disk_offering`.`bytes_read_rate` AS `bytes_read_rate`,
`disk_offering`.`bytes_read_rate_max` AS `bytes_read_rate_max`,
`disk_offering`.`bytes_read_rate_max_length` AS `bytes_read_rate_max_length`,
`disk_offering`.`bytes_write_rate` AS `bytes_write_rate`,
`disk_offering`.`bytes_write_rate_max` AS `bytes_write_rate_max`,
`disk_offering`.`bytes_write_rate_max_length` AS `bytes_write_rate_max_length`,
`disk_offering`.`iops_read_rate` AS `iops_read_rate`,
`disk_offering`.`iops_read_rate_max` AS `iops_read_rate_max`,
`disk_offering`.`iops_read_rate_max_length` AS `iops_read_rate_max_length`,
`disk_offering`.`iops_write_rate` AS `iops_write_rate`,
`disk_offering`.`iops_write_rate_max` AS `iops_write_rate_max`,
`disk_offering`.`iops_write_rate_max_length` AS `iops_write_rate_max_length`,
`disk_offering`.`cache_mode` AS `cache_mode`,
`disk_offering`.`sort_key` AS `sort_key`,
`disk_offering`.`compute_only` AS `compute_only`,
`disk_offering`.`display_offering` AS `display_offering`,
`disk_offering`.`state` AS `state`,
`disk_offering`.`disk_size_strictness` AS `disk_size_strictness`,
`vsphere_storage_policy`.`value` AS `vsphere_storage_policy`,
`disk_offering`.`encrypt` AS `encrypt`,
GROUP_CONCAT(DISTINCT(domain.id)) AS domain_id,
GROUP_CONCAT(DISTINCT(domain.uuid)) AS domain_uuid,
GROUP_CONCAT(DISTINCT(domain.name)) AS domain_name,
GROUP_CONCAT(DISTINCT(domain.path)) AS domain_path,
GROUP_CONCAT(DISTINCT(zone.id)) AS zone_id,
GROUP_CONCAT(DISTINCT(zone.uuid)) AS zone_uuid,
GROUP_CONCAT(DISTINCT(zone.name)) AS zone_name
FROM
`cloud`.`disk_offering`
LEFT JOIN
`cloud`.`disk_offering_details` AS `domain_details` ON `domain_details`.`offering_id` = `disk_offering`.`id` AND `domain_details`.`name`='domainid'
LEFT JOIN
`cloud`.`domain` AS `domain` ON FIND_IN_SET(`domain`.`id`, `domain_details`.`value`)
LEFT JOIN
`cloud`.`disk_offering_details` AS `zone_details` ON `zone_details`.`offering_id` = `disk_offering`.`id` AND `zone_details`.`name`='zoneid'
LEFT JOIN
`cloud`.`data_center` AS `zone` ON FIND_IN_SET(`zone`.`id`, `zone_details`.`value`)
LEFT JOIN
`cloud`.`disk_offering_details` AS `vsphere_storage_policy` ON `vsphere_storage_policy`.`offering_id` = `disk_offering`.`id`
AND `vsphere_storage_policy`.`name` = 'storagepolicy'
WHERE
`disk_offering`.`state`='Active'
GROUP BY
`disk_offering`.`id`;
-- add encryption support to service offering view
DROP VIEW IF EXISTS `cloud`.`service_offering_view`;
CREATE VIEW `cloud`.`service_offering_view` AS
SELECT
`service_offering`.`id` AS `id`,
`service_offering`.`uuid` AS `uuid`,
`service_offering`.`name` AS `name`,
`service_offering`.`display_text` AS `display_text`,
`disk_offering`.`provisioning_type` AS `provisioning_type`,
`service_offering`.`created` AS `created`,
`disk_offering`.`tags` AS `tags`,
`service_offering`.`removed` AS `removed`,
`disk_offering`.`use_local_storage` AS `use_local_storage`,
`service_offering`.`system_use` AS `system_use`,
`disk_offering`.`id` AS `disk_offering_id`,
`disk_offering`.`name` AS `disk_offering_name`,
`disk_offering`.`uuid` AS `disk_offering_uuid`,
`disk_offering`.`display_text` AS `disk_offering_display_text`,
`disk_offering`.`customized_iops` AS `customized_iops`,
`disk_offering`.`min_iops` AS `min_iops`,
`disk_offering`.`max_iops` AS `max_iops`,
`disk_offering`.`hv_ss_reserve` AS `hv_ss_reserve`,
`disk_offering`.`bytes_read_rate` AS `bytes_read_rate`,
`disk_offering`.`bytes_read_rate_max` AS `bytes_read_rate_max`,
`disk_offering`.`bytes_read_rate_max_length` AS `bytes_read_rate_max_length`,
`disk_offering`.`bytes_write_rate` AS `bytes_write_rate`,
`disk_offering`.`bytes_write_rate_max` AS `bytes_write_rate_max`,
`disk_offering`.`bytes_write_rate_max_length` AS `bytes_write_rate_max_length`,
`disk_offering`.`iops_read_rate` AS `iops_read_rate`,
`disk_offering`.`iops_read_rate_max` AS `iops_read_rate_max`,
`disk_offering`.`iops_read_rate_max_length` AS `iops_read_rate_max_length`,
`disk_offering`.`iops_write_rate` AS `iops_write_rate`,
`disk_offering`.`iops_write_rate_max` AS `iops_write_rate_max`,
`disk_offering`.`iops_write_rate_max_length` AS `iops_write_rate_max_length`,
`disk_offering`.`cache_mode` AS `cache_mode`,
`disk_offering`.`disk_size` AS `root_disk_size`,
`disk_offering`.`encrypt` AS `encrypt_root`,
`service_offering`.`cpu` AS `cpu`,
`service_offering`.`speed` AS `speed`,
`service_offering`.`ram_size` AS `ram_size`,
`service_offering`.`nw_rate` AS `nw_rate`,
`service_offering`.`mc_rate` AS `mc_rate`,
`service_offering`.`ha_enabled` AS `ha_enabled`,
`service_offering`.`limit_cpu_use` AS `limit_cpu_use`,
`service_offering`.`host_tag` AS `host_tag`,
`service_offering`.`default_use` AS `default_use`,
`service_offering`.`vm_type` AS `vm_type`,
`service_offering`.`sort_key` AS `sort_key`,
`service_offering`.`is_volatile` AS `is_volatile`,
`service_offering`.`deployment_planner` AS `deployment_planner`,
`service_offering`.`dynamic_scaling_enabled` AS `dynamic_scaling_enabled`,
`service_offering`.`disk_offering_strictness` AS `disk_offering_strictness`,
`vsphere_storage_policy`.`value` AS `vsphere_storage_policy`,
GROUP_CONCAT(DISTINCT(domain.id)) AS domain_id,
GROUP_CONCAT(DISTINCT(domain.uuid)) AS domain_uuid,
GROUP_CONCAT(DISTINCT(domain.name)) AS domain_name,
GROUP_CONCAT(DISTINCT(domain.path)) AS domain_path,
GROUP_CONCAT(DISTINCT(zone.id)) AS zone_id,
GROUP_CONCAT(DISTINCT(zone.uuid)) AS zone_uuid,
GROUP_CONCAT(DISTINCT(zone.name)) AS zone_name,
IFNULL(`min_compute_details`.`value`, `cpu`) AS min_cpu,
IFNULL(`max_compute_details`.`value`, `cpu`) AS max_cpu,
IFNULL(`min_memory_details`.`value`, `ram_size`) AS min_memory,
IFNULL(`max_memory_details`.`value`, `ram_size`) AS max_memory
FROM
`cloud`.`service_offering`
INNER JOIN
`cloud`.`disk_offering_view` AS `disk_offering` ON service_offering.disk_offering_id = disk_offering.id
LEFT JOIN
`cloud`.`service_offering_details` AS `domain_details` ON `domain_details`.`service_offering_id` = `service_offering`.`id` AND `domain_details`.`name`='domainid'
LEFT JOIN
`cloud`.`domain` AS `domain` ON FIND_IN_SET(`domain`.`id`, `domain_details`.`value`)
LEFT JOIN
`cloud`.`service_offering_details` AS `zone_details` ON `zone_details`.`service_offering_id` = `service_offering`.`id` AND `zone_details`.`name`='zoneid'
LEFT JOIN
`cloud`.`data_center` AS `zone` ON FIND_IN_SET(`zone`.`id`, `zone_details`.`value`)
LEFT JOIN
`cloud`.`service_offering_details` AS `min_compute_details` ON `min_compute_details`.`service_offering_id` = `service_offering`.`id`
AND `min_compute_details`.`name` = 'mincpunumber'
LEFT JOIN
`cloud`.`service_offering_details` AS `max_compute_details` ON `max_compute_details`.`service_offering_id` = `service_offering`.`id`
AND `max_compute_details`.`name` = 'maxcpunumber'
LEFT JOIN
`cloud`.`service_offering_details` AS `min_memory_details` ON `min_memory_details`.`service_offering_id` = `service_offering`.`id`
AND `min_memory_details`.`name` = 'minmemory'
LEFT JOIN
`cloud`.`service_offering_details` AS `max_memory_details` ON `max_memory_details`.`service_offering_id` = `service_offering`.`id`
AND `max_memory_details`.`name` = 'maxmemory'
LEFT JOIN
`cloud`.`service_offering_details` AS `vsphere_storage_policy` ON `vsphere_storage_policy`.`service_offering_id` = `service_offering`.`id`
AND `vsphere_storage_policy`.`name` = 'storagepolicy'
WHERE
`service_offering`.`state`='Active'
GROUP BY
`service_offering`.`id`;
-- Add cidr_list column to load_balancing_rules
ALTER TABLE `cloud`.`load_balancing_rules`
ADD cidr_list VARCHAR(4096);


@ -80,6 +80,9 @@ import com.cloud.vm.VirtualMachineManager;
@Component
public class AncientDataMotionStrategy implements DataMotionStrategy {
private static final Logger s_logger = Logger.getLogger(AncientDataMotionStrategy.class);
private static final String NO_REMOTE_ENDPOINT_SSVM = "No remote endpoint to send command, check if host or ssvm is down?";
private static final String NO_REMOTE_ENDPOINT_WITH_ENCRYPTION = "No remote endpoint to send command, unable to find a valid endpoint. Requires encryption support: %s";
@Inject
EndPointSelector selector;
@Inject
@ -170,9 +173,8 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
VirtualMachineManager.ExecuteInSequence.value());
EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcForCopy, destData);
if (ep == null) {
s_logger.error(NO_REMOTE_ENDPOINT_SSVM);
answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM);
} else {
answer = ep.sendMessage(cmd);
}
@ -294,9 +296,8 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
Answer answer = null;
if (ep == null) {
s_logger.error(NO_REMOTE_ENDPOINT_SSVM);
answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM);
} else {
answer = ep.sendMessage(cmd);
}
@ -316,12 +317,11 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
protected Answer cloneVolume(DataObject template, DataObject volume) {
CopyCommand cmd = new CopyCommand(template.getTO(), addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest(volume.getTO()), 0, VirtualMachineManager.ExecuteInSequence.value());
try {
EndPoint ep = selector.select(volume, anyVolumeRequiresEncryption(volume));
Answer answer = null;
if (ep == null) {
s_logger.error(NO_REMOTE_ENDPOINT_SSVM);
answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM);
} else {
answer = ep.sendMessage(cmd);
}
@ -351,14 +351,15 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
if (srcData instanceof VolumeInfo && ((VolumeInfo)srcData).isDirectDownload()) {
bypassSecondaryStorage = true;
}
boolean encryptionRequired = anyVolumeRequiresEncryption(srcData, destData);
if (cacheStore == null) {
if (bypassSecondaryStorage) {
CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), _copyvolumewait, VirtualMachineManager.ExecuteInSequence.value());
EndPoint ep = selector.select(srcData, destData, encryptionRequired);
Answer answer = null;
if (ep == null) {
String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired);
s_logger.error(errMsg);
answer = new Answer(cmd, false, errMsg);
} else {
@ -395,9 +396,9 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
objOnImageStore.processEvent(Event.CopyingRequested);
CopyCommand cmd = new CopyCommand(objOnImageStore.getTO(), addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest(destData.getTO()), _copyvolumewait, VirtualMachineManager.ExecuteInSequence.value());
EndPoint ep = selector.select(objOnImageStore, destData, encryptionRequired);
if (ep == null) {
String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired);
s_logger.error(errMsg);
answer = new Answer(cmd, false, errMsg);
} else {
@ -427,10 +428,10 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
} else {
DataObject cacheData = cacheMgr.createCacheObject(srcData, destScope);
CopyCommand cmd = new CopyCommand(cacheData.getTO(), destData.getTO(), _copyvolumewait, VirtualMachineManager.ExecuteInSequence.value());
EndPoint ep = selector.select(cacheData, destData, encryptionRequired);
Answer answer = null;
if (ep == null) {
String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired);
s_logger.error(errMsg);
answer = new Answer(cmd, false, errMsg);
} else {
@ -457,10 +458,12 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
command.setContextParam(DiskTO.PROTOCOL_TYPE, Storage.StoragePoolType.DatastoreCluster.toString());
}
boolean encryptionRequired = anyVolumeRequiresEncryption(srcData, destData);
EndPoint ep = selector.select(srcData, StorageAction.MIGRATEVOLUME);
Answer answer = null;
if (ep == null) {
String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired);
s_logger.error(errMsg);
answer = new Answer(command, false, errMsg);
} else {
@ -556,9 +559,8 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
CopyCommand cmd = new CopyCommand(srcData.getTO(), addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest(destData.getTO()), _createprivatetemplatefromsnapshotwait, VirtualMachineManager.ExecuteInSequence.value());
Answer answer = null;
if (ep == null) {
s_logger.error(NO_REMOTE_ENDPOINT_SSVM);
answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM);
} else {
answer = ep.sendMessage(cmd);
}
@ -584,6 +586,8 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
Map<String, String> options = new HashMap<String, String>();
options.put("fullSnapshot", fullSnapshot.toString());
options.put(BackupSnapshotAfterTakingSnapshot.key(), String.valueOf(BackupSnapshotAfterTakingSnapshot.value()));
boolean encryptionRequired = anyVolumeRequiresEncryption(srcData, destData);
Answer answer = null;
try {
if (needCacheStorage(srcData, destData)) {
@ -593,11 +597,10 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
CopyCommand cmd = new CopyCommand(srcData.getTO(), addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest(destData.getTO()), _backupsnapshotwait, VirtualMachineManager.ExecuteInSequence.value());
cmd.setCacheTO(cacheData.getTO());
cmd.setOptions(options);
EndPoint ep = selector.select(srcData, destData, encryptionRequired);
if (ep == null) {
s_logger.error(NO_REMOTE_ENDPOINT_SSVM);
answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM);
} else {
answer = ep.sendMessage(cmd);
}
@ -605,11 +608,10 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest(destData.getTO());
CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), _backupsnapshotwait, VirtualMachineManager.ExecuteInSequence.value());
cmd.setOptions(options);
EndPoint ep = selector.select(srcData, destData, StorageAction.BACKUPSNAPSHOT, encryptionRequired);
if (ep == null) {
s_logger.error(NO_REMOTE_ENDPOINT_SSVM);
answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM);
} else {
answer = ep.sendMessage(cmd);
}
@ -636,4 +638,19 @@ public class AncientDataMotionStrategy implements DataMotionStrategy {
result.setResult("Unsupported operation requested for copying data.");
callback.complete(result);
}
/**
* Does any object require encryption support?
*/
private boolean anyVolumeRequiresEncryption(DataObject ... objects) {
for (DataObject o : objects) {
// this trips a code-smell rule by returning true from two branches, but it is more readable than combining all the tests into one statement
if (o instanceof VolumeInfo && ((VolumeInfo) o).getPassphraseId() != null) {
return true;
} else if (o instanceof SnapshotInfo && ((SnapshotInfo) o).getBaseVolume().getPassphraseId() != null) {
return true;
}
}
return false;
}
}


@ -33,6 +33,7 @@ import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
import org.apache.cloudstack.engine.subsystem.api.storage.StorageStrategyFactory;
import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
import org.apache.cloudstack.framework.async.AsyncCompletionCallback;
import org.apache.cloudstack.secret.dao.PassphraseDao;
import org.apache.cloudstack.storage.command.CopyCmdAnswer;
import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger;
@ -53,6 +54,8 @@ public class DataMotionServiceImpl implements DataMotionService {
StorageStrategyFactory storageStrategyFactory;
@Inject
VolumeDao volDao;
@Inject
PassphraseDao passphraseDao;
@Override
public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
@ -98,7 +101,14 @@ public class DataMotionServiceImpl implements DataMotionService {
volDao.update(sourceVO.getId(), sourceVO);
destinationVO.setState(Volume.State.Expunged);
destinationVO.setRemoved(new Date());
Long passphraseId = destinationVO.getPassphraseId();
destinationVO.setPassphraseId(null);
volDao.update(destinationVO.getId(), destinationVO);
if (passphraseId != null) {
passphraseDao.remove(passphraseId);
}
}
@Override


@ -1736,15 +1736,15 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
protected MigrationOptions createLinkedCloneMigrationOptions(VolumeInfo srcVolumeInfo, VolumeInfo destVolumeInfo, String srcVolumeBackingFile, String srcPoolUuid, Storage.StoragePoolType srcPoolType) {
VMTemplateStoragePoolVO ref = templatePoolDao.findByPoolTemplate(destVolumeInfo.getPoolId(), srcVolumeInfo.getTemplateId(), null);
boolean updateBackingFileReference = ref == null;
String backingFile = !updateBackingFileReference ? ref.getInstallPath() : srcVolumeBackingFile;
return new MigrationOptions(srcPoolUuid, srcPoolType, backingFile, updateBackingFileReference, srcVolumeInfo.getDataStore().getScope().getScopeType());
}
/**
* Return expected MigrationOptions for a full clone volume live storage migration
*/
protected MigrationOptions createFullCloneMigrationOptions(VolumeInfo srcVolumeInfo, VirtualMachineTO vmTO, Host srcHost, String srcPoolUuid, Storage.StoragePoolType srcPoolType) {
return new MigrationOptions(srcPoolUuid, srcPoolType, srcVolumeInfo.getPath(), srcVolumeInfo.getDataStore().getScope().getScopeType());
}
/** /**
@ -1874,6 +1874,7 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
migrateDiskInfo = configureMigrateDiskInfo(srcVolumeInfo, destPath);
migrateDiskInfo.setSourceDiskOnStorageFileSystem(isStoragePoolTypeOfFile(sourceStoragePool));
migrateDiskInfoList.add(migrateDiskInfo);
prepareDiskWithSecretConsumerDetail(vmTO, srcVolumeInfo, destVolumeInfo.getPath());
}
migrateStorage.put(srcVolumeInfo.getPath(), migrateDiskInfo);
@ -2123,6 +2124,11 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
newVol.setPoolId(storagePoolVO.getId());
newVol.setLastPoolId(lastPoolId);
if (volume.getPassphraseId() != null) {
newVol.setPassphraseId(volume.getPassphraseId());
newVol.setEncryptFormat(volume.getEncryptFormat());
}
return _volumeDao.persist(newVol);
}
@ -2206,6 +2212,22 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
}
}
/**
* Include destination volume info in the vmTO, required by PrepareForMigrationCommand processing
*/
protected void prepareDiskWithSecretConsumerDetail(VirtualMachineTO vmTO, VolumeInfo srcVolume, String destPath) {
if (vmTO.getDisks() != null) {
LOGGER.debug(String.format("Preparing VM TO '%s' disks with migration data", vmTO));
Arrays.stream(vmTO.getDisks()).filter(diskTO -> diskTO.getData().getId() == srcVolume.getId()).forEach(diskTO -> {
if (diskTO.getDetails() == null) {
diskTO.setDetails(new HashMap<>());
}
diskTO.getDetails().put(DiskTO.SECRET_CONSUMER_DETAIL, destPath);
});
}
}
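The helper above only touches the disk whose data id matches the migrating volume, creating the details map lazily. The same pattern can be exercised in isolation; the types below are simplified stand-ins for DiskTO, and the detail key value is a placeholder rather than the real `DiskTO.SECRET_CONSUMER_DETAIL` constant:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class SecretConsumerDetailSketch {
    // Simplified stand-in for DiskTO: just a data id and a details map.
    static class Disk {
        final long dataId;
        Map<String, String> details;
        Disk(long dataId) { this.dataId = dataId; }
    }

    static final String SECRET_CONSUMER_DETAIL = "secret.consumer"; // placeholder key

    // Mirrors prepareDiskWithSecretConsumerDetail: tag only the disk backed by
    // the migrating volume with the destination path.
    static void prepare(Disk[] disks, long srcVolumeId, String destPath) {
        if (disks == null) {
            return;
        }
        Arrays.stream(disks).filter(d -> d.dataId == srcVolumeId).forEach(d -> {
            if (d.details == null) {
                d.details = new HashMap<>();
            }
            d.details.put(SECRET_CONSUMER_DETAIL, destPath);
        });
    }

    public static void main(String[] args) {
        Disk[] disks = { new Disk(1L), new Disk(2L) };
        prepare(disks, 2L, "/mnt/pool/dest");
        System.out.println(disks[1].details);
    }
}
```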
/**
* At a high level: The source storage cannot be managed and
* the destination storages can be all managed or all not managed, not mixed.


@ -262,7 +262,6 @@ public abstract class AbstractStoragePoolAllocator extends AdapterBase implement
}
protected boolean filter(ExcludeList avoid, StoragePool pool, DiskProfile dskCh, DeploymentPlan plan) {
if (s_logger.isDebugEnabled()) {
if (s_logger.isDebugEnabled()) { if (s_logger.isDebugEnabled()) {
s_logger.debug("Checking if storage pool is suitable, name: " + pool.getName() + " ,poolId: " + pool.getId()); s_logger.debug("Checking if storage pool is suitable, name: " + pool.getName() + " ,poolId: " + pool.getId());
}
@ -273,6 +272,13 @@ public abstract class AbstractStoragePoolAllocator extends AdapterBase implement
return false; return false;
}
if (dskCh.requiresEncryption() && !pool.getPoolType().supportsEncryption()) {
if (s_logger.isDebugEnabled()) {
s_logger.debug(String.format("Storage pool type '%s' doesn't support encryption required for volume, skipping this pool", pool.getPoolType()));
}
return false;
}
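The new allocator check above skips pools whose type cannot provide encryption. Its effect can be sketched standalone; the pool types and flags below are illustrative stand-ins for `StoragePoolType.supportsEncryption()`, chosen to match the storage types this PR enables (NFS, Local, SharedMountPoint, ScaleIO), not the authoritative values:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class EncryptionPoolFilterSketch {
    // Stand-in for Storage.StoragePoolType#supportsEncryption(); flags are
    // illustrative, following the pool types this PR adds support for.
    enum PoolType {
        NetworkFilesystem(true), Filesystem(true), SharedMountPoint(true), PowerFlex(true), RBD(false);

        private final boolean encryption;
        PoolType(boolean encryption) { this.encryption = encryption; }
        boolean supportsEncryption() { return encryption; }
    }

    // Mirrors the allocator: a pool is only eligible if it can satisfy the
    // disk profile's encryption requirement.
    static List<PoolType> eligiblePools(List<PoolType> pools, boolean requiresEncryption) {
        return pools.stream()
                .filter(p -> !requiresEncryption || p.supportsEncryption())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(eligiblePools(Arrays.asList(PoolType.values()), true));
    }
}
```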
Long clusterId = pool.getClusterId();
if (clusterId != null) {
ClusterVO cluster = clusterDao.findById(clusterId);


@ -65,6 +65,8 @@ import com.cloud.utils.db.TransactionLegacy;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.vm.VirtualMachine;
import static com.cloud.host.Host.HOST_VOLUME_ENCRYPTION;
@Component
public class DefaultEndPointSelector implements EndPointSelector {
private static final Logger s_logger = Logger.getLogger(DefaultEndPointSelector.class);
@ -72,11 +74,14 @@ public class DefaultEndPointSelector implements EndPointSelector {
private HostDao hostDao;
@Inject
private DedicatedResourceDao dedicatedResourceDao;
private static final String VOL_ENCRYPT_COLUMN_NAME = "volume_encryption_support";
private final String findOneHostOnPrimaryStorage = "select t.id from "
+ "(select h.id, cd.value, hd.value as " + VOL_ENCRYPT_COLUMN_NAME + " "
+ "from host h join storage_pool_host_ref s on h.id = s.host_id "
+ "join cluster c on c.id=h.cluster_id "
+ "left join cluster_details cd on c.id=cd.cluster_id and cd.name='" + CapacityManager.StorageOperationsExcludeCluster.key() + "' "
+ "left join host_details hd on h.id=hd.host_id and hd.name='" + HOST_VOLUME_ENCRYPTION + "' "
+ "where h.status = 'Up' and h.type = 'Routing' and h.resource_state = 'Enabled' and s.pool_id = ? ";
private String findOneHypervisorHostInScopeByType = "select h.id from host h where h.status = 'Up' and h.hypervisor_type = ? ";
@ -118,8 +123,12 @@ public class DefaultEndPointSelector implements EndPointSelector {
} }
} }
@DB
protected EndPoint findEndPointInScope(Scope scope, String sqlBase, Long poolId) { protected EndPoint findEndPointInScope(Scope scope, String sqlBase, Long poolId) {
return findEndPointInScope(scope, sqlBase, poolId, false);
}
@DB
protected EndPoint findEndPointInScope(Scope scope, String sqlBase, Long poolId, boolean volumeEncryptionSupportRequired) {
StringBuilder sbuilder = new StringBuilder(); StringBuilder sbuilder = new StringBuilder();
sbuilder.append(sqlBase); sbuilder.append(sqlBase);
@ -142,8 +151,13 @@ public class DefaultEndPointSelector implements EndPointSelector {
dedicatedHosts = dedicatedResourceDao.listAllHosts(); dedicatedHosts = dedicatedResourceDao.listAllHosts();
} }
// TODO: order by rand() is slow if there are lot of hosts
sbuilder.append(") t where t.value<>'true' or t.value is null"); //Added for exclude cluster's subquery sbuilder.append(") t where t.value<>'true' or t.value is null"); //Added for exclude cluster's subquery
if (volumeEncryptionSupportRequired) {
sbuilder.append(String.format(" and t.%s='true'", VOL_ENCRYPT_COLUMN_NAME));
}
// TODO: order by rand() is slow if there are lot of hosts
sbuilder.append(" ORDER by "); sbuilder.append(" ORDER by ");
if (dedicatedHosts.size() > 0) { if (dedicatedHosts.size() > 0) {
moveDedicatedHostsToLowerPriority(sbuilder, dedicatedHosts); moveDedicatedHostsToLowerPriority(sbuilder, dedicatedHosts);
@ -208,7 +222,7 @@ public class DefaultEndPointSelector implements EndPointSelector {
} }
} }
protected EndPoint findEndPointForImageMove(DataStore srcStore, DataStore destStore) { protected EndPoint findEndPointForImageMove(DataStore srcStore, DataStore destStore, boolean volumeEncryptionSupportRequired) {
// find any xenserver/kvm host in the scope // find any xenserver/kvm host in the scope
Scope srcScope = srcStore.getScope(); Scope srcScope = srcStore.getScope();
Scope destScope = destStore.getScope(); Scope destScope = destStore.getScope();
@ -233,17 +247,22 @@ public class DefaultEndPointSelector implements EndPointSelector {
poolId = destStore.getId(); poolId = destStore.getId();
} }
} }
return findEndPointInScope(selectedScope, findOneHostOnPrimaryStorage, poolId); return findEndPointInScope(selectedScope, findOneHostOnPrimaryStorage, poolId, volumeEncryptionSupportRequired);
} }
@Override @Override
public EndPoint select(DataObject srcData, DataObject destData) { public EndPoint select(DataObject srcData, DataObject destData) {
return select( srcData, destData, false);
}
@Override
public EndPoint select(DataObject srcData, DataObject destData, boolean volumeEncryptionSupportRequired) {
DataStore srcStore = srcData.getDataStore(); DataStore srcStore = srcData.getDataStore();
DataStore destStore = destData.getDataStore(); DataStore destStore = destData.getDataStore();
if (moveBetweenPrimaryImage(srcStore, destStore)) { if (moveBetweenPrimaryImage(srcStore, destStore)) {
return findEndPointForImageMove(srcStore, destStore); return findEndPointForImageMove(srcStore, destStore, volumeEncryptionSupportRequired);
} else if (moveBetweenPrimaryDirectDownload(srcStore, destStore)) { } else if (moveBetweenPrimaryDirectDownload(srcStore, destStore)) {
return findEndPointForImageMove(srcStore, destStore); return findEndPointForImageMove(srcStore, destStore, volumeEncryptionSupportRequired);
} else if (moveBetweenCacheAndImage(srcStore, destStore)) { } else if (moveBetweenCacheAndImage(srcStore, destStore)) {
// pick ssvm based on image cache dc // pick ssvm based on image cache dc
DataStore selectedStore = null; DataStore selectedStore = null;
@ -274,6 +293,11 @@ public class DefaultEndPointSelector implements EndPointSelector {
@Override @Override
public EndPoint select(DataObject srcData, DataObject destData, StorageAction action) { public EndPoint select(DataObject srcData, DataObject destData, StorageAction action) {
return select(srcData, destData, action, false);
}
@Override
public EndPoint select(DataObject srcData, DataObject destData, StorageAction action, boolean encryptionRequired) {
s_logger.error("IR24 select BACKUPSNAPSHOT from primary to secondary " + srcData.getId() + " dest=" + destData.getId()); s_logger.error("IR24 select BACKUPSNAPSHOT from primary to secondary " + srcData.getId() + " dest=" + destData.getId());
if (action == StorageAction.BACKUPSNAPSHOT && srcData.getDataStore().getRole() == DataStoreRole.Primary) { if (action == StorageAction.BACKUPSNAPSHOT && srcData.getDataStore().getRole() == DataStoreRole.Primary) {
SnapshotInfo srcSnapshot = (SnapshotInfo)srcData; SnapshotInfo srcSnapshot = (SnapshotInfo)srcData;
@ -293,7 +317,7 @@ public class DefaultEndPointSelector implements EndPointSelector {
} }
} }
} }
return select(srcData, destData); return select(srcData, destData, encryptionRequired);
} }
protected EndPoint findEndpointForPrimaryStorage(DataStore store) { protected EndPoint findEndpointForPrimaryStorage(DataStore store) {
@ -350,6 +374,15 @@ public class DefaultEndPointSelector implements EndPointSelector {
return sc.list(); return sc.list();
} }
@Override
public EndPoint select(DataObject object, boolean encryptionSupportRequired) {
DataStore store = object.getDataStore();
if (store.getRole() == DataStoreRole.Primary) {
return findEndPointInScope(store.getScope(), findOneHostOnPrimaryStorage, store.getId(), encryptionSupportRequired);
}
throw new CloudRuntimeException(String.format("Storage role %s doesn't support encryption", store.getRole()));
}
@Override @Override
public EndPoint select(DataObject object) { public EndPoint select(DataObject object) {
DataStore store = object.getDataStore(); DataStore store = object.getDataStore();
@ -415,6 +448,11 @@ public class DefaultEndPointSelector implements EndPointSelector {
@Override @Override
public EndPoint select(DataObject object, StorageAction action) { public EndPoint select(DataObject object, StorageAction action) {
return select(object, action, false);
}
@Override
public EndPoint select(DataObject object, StorageAction action, boolean encryptionRequired) {
if (action == StorageAction.TAKESNAPSHOT) { if (action == StorageAction.TAKESNAPSHOT) {
SnapshotInfo snapshotInfo = (SnapshotInfo)object; SnapshotInfo snapshotInfo = (SnapshotInfo)object;
if (snapshotInfo.getHypervisorType() == Hypervisor.HypervisorType.KVM) { if (snapshotInfo.getHypervisorType() == Hypervisor.HypervisorType.KVM) {
@ -446,7 +484,7 @@ public class DefaultEndPointSelector implements EndPointSelector {
} }
} }
} }
return select(object); return select(object, encryptionRequired);
} }
@Override @Override

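The endpoint selector above narrows host selection with one extra SQL predicate: when the operation needs encryption, only hosts whose `host_details` row advertises `volume_encryption_support` qualify. A standalone sketch of just that string-building step (the `whereTail` helper is illustrative, not a method in the PR):

```java
public class EndpointQuerySketch {
    // Mirrors the selector's VOL_ENCRYPT_COLUMN_NAME constant
    static final String VOL_ENCRYPT_COLUMN = "volume_encryption_support";

    /** Builds the WHERE tail the selector appends; encryption support becomes one more predicate. */
    public static String whereTail(boolean volumeEncryptionSupportRequired) {
        StringBuilder sb = new StringBuilder(") t where t.value<>'true' or t.value is null");
        if (volumeEncryptionSupportRequired) {
            sb.append(String.format(" and t.%s='true'", VOL_ENCRYPT_COLUMN));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(whereTail(true));
    }
}
```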

```diff
@@ -23,6 +23,11 @@ import javax.inject.Inject;
 import com.cloud.configuration.Resource.ResourceType;
 import com.cloud.dc.VsphereStoragePolicyVO;
 import com.cloud.dc.dao.VsphereStoragePolicyDao;
+import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionCallbackNoReturn;
+import com.cloud.utils.db.TransactionStatus;
+import org.apache.cloudstack.secret.dao.PassphraseDao;
+import org.apache.cloudstack.secret.PassphraseVO;
 import com.cloud.service.dao.ServiceOfferingDetailsDao;
 import com.cloud.storage.MigrationOptions;
 import com.cloud.storage.VMTemplateVO;
@@ -105,6 +110,8 @@ public class VolumeObject implements VolumeInfo {
     DiskOfferingDetailsDao diskOfferingDetailsDao;
     @Inject
     VsphereStoragePolicyDao vsphereStoragePolicyDao;
+    @Inject
+    PassphraseDao passphraseDao;

     private Object payload;
     private MigrationOptions migrationOptions;
@@ -664,11 +671,13 @@ public class VolumeObject implements VolumeInfo {
     }

     protected void updateVolumeInfo(VolumeObjectTO newVolume, VolumeVO volumeVo, boolean setVolumeSize, boolean setFormat) {
-        String previousValues = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(volumeVo, "path", "size", "format", "poolId");
+        String previousValues = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(volumeVo, "path", "size", "format", "encryptFormat", "poolId");
         volumeVo.setPath(newVolume.getPath());
         Long newVolumeSize = newVolume.getSize();
+        volumeVo.setEncryptFormat(newVolume.getEncryptFormat());
         if (newVolumeSize != null && setVolumeSize) {
             volumeVo.setSize(newVolumeSize);
         }
@@ -678,7 +687,7 @@ public class VolumeObject implements VolumeInfo {
         volumeVo.setPoolId(getDataStore().getId());
         volumeDao.update(volumeVo.getId(), volumeVo);
-        String newValues = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(volumeVo, "path", "size", "format", "poolId");
+        String newValues = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(volumeVo, "path", "size", "format", "encryptFormat", "poolId");
         s_logger.debug(String.format("Updated %s from %s to %s ", volumeVo.getVolumeDescription(), previousValues, newValues));
     }
@@ -864,4 +873,61 @@ public class VolumeObject implements VolumeInfo {
     public void setExternalUuid(String externalUuid) {
         volumeVO.setExternalUuid(externalUuid);
     }
+
+    @Override
+    public Long getPassphraseId() {
+        return volumeVO.getPassphraseId();
+    }
+
+    @Override
+    public void setPassphraseId(Long id) {
+        volumeVO.setPassphraseId(id);
+    }
+
+    /**
+     * Removes passphrase reference from underlying volume. Also removes the associated passphrase entry if it is the last user.
+     */
+    public void deletePassphrase() {
+        Transaction.execute(new TransactionCallbackNoReturn() {
+            @Override
+            public void doInTransactionWithoutResult(TransactionStatus status) {
+                Long passphraseId = volumeVO.getPassphraseId();
+                if (passphraseId != null) {
+                    volumeVO.setPassphraseId(null);
+                    volumeDao.persist(volumeVO);
+
+                    s_logger.debug(String.format("Checking to see if we can delete passphrase id %s", passphraseId));
+                    List<VolumeVO> volumes = volumeDao.listVolumesByPassphraseId(passphraseId);
+                    if (volumes != null && !volumes.isEmpty()) {
+                        s_logger.debug("Other volumes use this passphrase, skipping deletion");
+                        return;
+                    }
+
+                    s_logger.debug(String.format("Deleting passphrase %s", passphraseId));
+                    passphraseDao.remove(passphraseId);
+                }
+            }
+        });
+    }
+
+    /**
+     * Looks up passphrase from underlying volume.
+     * @return passphrase as bytes
+     */
+    public byte[] getPassphrase() {
+        PassphraseVO passphrase = passphraseDao.findById(volumeVO.getPassphraseId());
+        if (passphrase != null) {
+            return passphrase.getPassphrase();
+        }
+        return new byte[0];
+    }
+
+    @Override
+    public String getEncryptFormat() {
+        return volumeVO.getEncryptFormat();
+    }
+
+    @Override
+    public void setEncryptFormat(String encryptFormat) {
+        volumeVO.setEncryptFormat(encryptFormat);
+    }
 }
```

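The `deletePassphrase()` path above is reference counting by lookup: inside one transaction the volume first drops its pointer, then the passphrase row is removed only if no other volume still references it, so disks attached to multiple volume records over their lifetime never lose a shared secret prematurely. That rule can be sketched with in-memory maps standing in for `volumeDao`/`passphraseDao` (names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class PassphraseCleanupSketch {
    // volumeId -> passphraseId (null when unencrypted); stands in for the volumes table
    final Map<Long, Long> volumePassphrase = new HashMap<>();
    // passphraseId -> secret bytes; stands in for the passphrase table
    final Map<Long, byte[]> passphrases = new HashMap<>();

    /** Detach the volume's passphrase and delete it only if this volume was the last user. */
    public void deletePassphrase(long volumeId) {
        Long passphraseId = volumePassphrase.get(volumeId);
        if (passphraseId == null) {
            return;
        }
        volumePassphrase.put(volumeId, null);
        boolean stillUsed = volumePassphrase.values().stream().anyMatch(passphraseId::equals);
        if (!stillUsed) {
            passphrases.remove(passphraseId);
        }
    }
}
```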

```diff
@@ -28,6 +28,7 @@ import java.util.Random;
 import javax.inject.Inject;

+import org.apache.cloudstack.secret.dao.PassphraseDao;
 import com.cloud.storage.VMTemplateVO;
 import com.cloud.storage.dao.VMTemplateDao;
 import org.apache.cloudstack.annotation.AnnotationService;
@@ -197,6 +198,8 @@ public class VolumeServiceImpl implements VolumeService {
     private AnnotationDao annotationDao;
     @Inject
     private SnapshotApiService snapshotApiService;
+    @Inject
+    private PassphraseDao passphraseDao;

     private final static String SNAPSHOT_ID = "SNAPSHOT_ID";
@@ -446,6 +449,11 @@ public class VolumeServiceImpl implements VolumeService {
         try {
             if (result.isSuccess()) {
                 vo.processEvent(Event.OperationSuccessed);
+                if (vo.getPassphraseId() != null) {
+                    vo.deletePassphrase();
+                }
+
                 if (canVolumeBeRemoved(vo.getId())) {
                     s_logger.info("Volume " + vo.getId() + " is not referred anywhere, remove it from volumes table");
                     volDao.remove(vo.getId());
```


```diff
@@ -83,6 +83,7 @@ Requires: ipmitool
 Requires: %{name}-common = %{_ver}
 Requires: iptables-services
 Requires: qemu-img
+Requires: haveged
 Requires: python3-pip
 Requires: python3-setuptools
 Group: System Environment/Libraries
@@ -117,6 +118,8 @@ Requires: perl
 Requires: python36-libvirt
 Requires: qemu-img
 Requires: qemu-kvm
+Requires: cryptsetup
+Requires: rng-tools
 Provides: cloud-agent
 Group: System Environment/Libraries
 %description agent
@@ -438,6 +441,7 @@ pip3 install %{_datadir}/%{name}-management/setup/wheel/six-1.15.0-py2.py3-none-
 pip3 install urllib3
 /usr/bin/systemctl enable cloudstack-management > /dev/null 2>&1 || true
+/usr/bin/systemctl enable --now haveged > /dev/null 2>&1 || true

 grep -s -q "db.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties"
 grep -s -q "db.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties"
@@ -495,9 +499,10 @@ if [ ! -d %{_sysconfdir}/libvirt/hooks ] ; then
 fi
 cp -a ${RPM_BUILD_ROOT}%{_datadir}/%{name}-agent/lib/libvirtqemuhook %{_sysconfdir}/libvirt/hooks/qemu
 mkdir -m 0755 -p /usr/share/cloudstack-agent/tmp
-/sbin/service libvirtd restart
-/sbin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true
-/sbin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true
+/usr/bin/systemctl restart libvirtd
+/usr/bin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true
+/usr/bin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true
+/usr/bin/systemctl enable --now rngd > /dev/null 2>&1 || true

 # if saved configs from upgrade exist, copy them over
 if [ -f "%{_sysconfdir}/cloud.rpmsave/agent/agent.properties" ]; then
```


```diff
@@ -78,6 +78,7 @@ Requires: ipmitool
 Requires: %{name}-common = %{_ver}
 Requires: iptables-services
 Requires: qemu-img
+Requires: haveged
 Requires: python3-pip
 Requires: python3-setuptools
 Requires: libgcrypt > 1.8.3
@@ -110,6 +111,8 @@ Requires: perl
 Requires: python3-libvirt
 Requires: qemu-img
 Requires: qemu-kvm
+Requires: cryptsetup
+Requires: rng-tools
 Requires: libgcrypt > 1.8.3
 Provides: cloud-agent
 Group: System Environment/Libraries
@@ -429,6 +432,7 @@ fi
 pip3 install %{_datadir}/%{name}-management/setup/wheel/six-1.15.0-py2.py3-none-any.whl %{_datadir}/%{name}-management/setup/wheel/setuptools-47.3.1-py3-none-any.whl %{_datadir}/%{name}-management/setup/wheel/protobuf-3.12.2-cp36-cp36m-manylinux1_x86_64.whl %{_datadir}/%{name}-management/setup/wheel/mysql_connector_python-8.0.20-cp36-cp36m-manylinux1_x86_64.whl
 /usr/bin/systemctl enable cloudstack-management > /dev/null 2>&1 || true
+/usr/bin/systemctl enable --now haveged > /dev/null 2>&1 || true

 grep -s -q "db.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties"
 grep -s -q "db.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties"
@@ -486,9 +490,10 @@ if [ ! -d %{_sysconfdir}/libvirt/hooks ] ; then
 fi
 cp -a ${RPM_BUILD_ROOT}%{_datadir}/%{name}-agent/lib/libvirtqemuhook %{_sysconfdir}/libvirt/hooks/qemu
 mkdir -m 0755 -p /usr/share/cloudstack-agent/tmp
-/sbin/service libvirtd restart
-/sbin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true
-/sbin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true
+/usr/bin/systemctl restart libvirtd
+/usr/bin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true
+/usr/bin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true
+/usr/bin/systemctl enable --now rngd > /dev/null 2>&1 || true

 # if saved configs from upgrade exist, copy them over
 if [ -f "%{_sysconfdir}/cloud.rpmsave/agent/agent.properties" ]; then
```


```diff
@@ -78,6 +78,7 @@ Requires: mkisofs
 Requires: ipmitool
 Requires: %{name}-common = %{_ver}
 Requires: qemu-tools
+Requires: haveged
 Requires: python3-pip
 Requires: python3-setuptools
 Requires: libgcrypt20
@@ -111,6 +112,8 @@ Requires: ipset
 Requires: perl
 Requires: python3-libvirt-python
 Requires: qemu-kvm
+Requires: cryptsetup
+Requires: rng-tools
 Requires: libgcrypt20
 Requires: qemu-tools
 Provides: cloud-agent
@@ -431,6 +434,7 @@ fi
 pip3 install %{_datadir}/%{name}-management/setup/wheel/six-1.15.0-py2.py3-none-any.whl %{_datadir}/%{name}-management/setup/wheel/setuptools-47.3.1-py3-none-any.whl %{_datadir}/%{name}-management/setup/wheel/protobuf-3.12.2-cp36-cp36m-manylinux1_x86_64.whl %{_datadir}/%{name}-management/setup/wheel/mysql_connector_python-8.0.20-cp36-cp36m-manylinux1_x86_64.whl
 /usr/bin/systemctl enable cloudstack-management > /dev/null 2>&1 || true
+/usr/bin/systemctl enable --now haveged > /dev/null 2>&1 || true

 grep -s -q "db.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties"
 grep -s -q "db.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties"
@@ -480,9 +484,10 @@ if [ ! -d %{_sysconfdir}/libvirt/hooks ] ; then
 fi
 cp -a ${RPM_BUILD_ROOT}%{_datadir}/%{name}-agent/lib/libvirtqemuhook %{_sysconfdir}/libvirt/hooks/qemu
 mkdir -m 0755 -p /usr/share/cloudstack-agent/tmp
-/sbin/service libvirtd restart
-/sbin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true
-/sbin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true
+/usr/bin/systemctl restart libvirtd
+/usr/bin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true
+/usr/bin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true
+/usr/bin/systemctl enable --now rngd > /dev/null 2>&1 || true

 # if saved configs from upgrade exist, copy them over
 if [ -f "%{_sysconfdir}/cloud.rpmsave/agent/agent.properties" ]; then
```


```diff
@@ -108,10 +108,53 @@
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
           <excludes>
-            <exclude>**/Qemu*.java</exclude>
+            <exclude>**/QemuImg*.java</exclude>
           </excludes>
         </configuration>
       </plugin>
     </plugins>
   </build>
+  <profiles>
+    <profile>
+      <!-- libvirt tests only build on Linux. The mocking fails before we can Assert OS name, so skip entirely -->
+      <id>skip.libvirt.tests</id>
+      <activation>
+        <property>
+          <name>skip.libvirt.tests</name>
+          <value>true</value>
+        </property>
+      </activation>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-dependency-plugin</artifactId>
+            <executions>
+              <execution>
+                <id>copy-dependencies</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>copy-dependencies</goal>
+                </goals>
+                <configuration>
+                  <outputDirectory>${project.build.directory}/dependencies</outputDirectory>
+                  <includeScope>runtime</includeScope>
+                </configuration>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-surefire-plugin</artifactId>
+            <configuration>
+              <excludes>
+                <exclude>**/QemuImg*.java</exclude>
+                <exclude>**/LibvirtComputingResourceTest.java</exclude>
+              </excludes>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
 </project>
```


```diff
@@ -50,6 +50,8 @@ import org.apache.cloudstack.storage.to.PrimaryDataStoreTO;
 import org.apache.cloudstack.storage.to.TemplateObjectTO;
 import org.apache.cloudstack.storage.to.VolumeObjectTO;
 import org.apache.cloudstack.utils.bytescale.ByteScaleUtils;
+import org.apache.cloudstack.utils.cryptsetup.CryptSetup;
 import org.apache.cloudstack.utils.hypervisor.HypervisorUtils;
 import org.apache.cloudstack.utils.linux.CPUStat;
 import org.apache.cloudstack.utils.linux.KVMHostInfo;
@@ -58,6 +60,7 @@ import org.apache.cloudstack.utils.qemu.QemuImg;
 import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
 import org.apache.cloudstack.utils.qemu.QemuImgException;
 import org.apache.cloudstack.utils.qemu.QemuImgFile;
+import org.apache.cloudstack.utils.qemu.QemuObject;
 import org.apache.cloudstack.utils.security.KeyStoreUtils;
 import org.apache.cloudstack.utils.security.ParserUtils;
 import org.apache.commons.collections.MapUtils;
@@ -67,6 +70,7 @@ import org.apache.commons.lang.BooleanUtils;
 import org.apache.commons.lang.math.NumberUtils;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.log4j.Logger;
+import org.apache.xerces.impl.xpath.regex.Match;
 import org.joda.time.Duration;
 import org.libvirt.Connect;
 import org.libvirt.Domain;
@@ -81,6 +85,7 @@ import org.libvirt.Network;
 import org.libvirt.SchedParameter;
 import org.libvirt.SchedUlongParameter;
 import org.libvirt.VcpuInfo;
+import org.libvirt.Secret;
 import org.w3c.dom.Document;
 import org.w3c.dom.Element;
 import org.w3c.dom.Node;
@@ -186,10 +191,13 @@ import com.cloud.utils.script.OutputInterpreter;
 import com.cloud.utils.script.OutputInterpreter.AllLinesParser;
 import com.cloud.utils.script.Script;
 import com.cloud.utils.ssh.SshHelper;
+import com.cloud.utils.UuidUtils;
 import com.cloud.vm.VirtualMachine;
 import com.cloud.vm.VirtualMachine.PowerState;
 import com.cloud.vm.VmDetailConstants;

+import static com.cloud.host.Host.HOST_VOLUME_ENCRYPTION;
+
 /**
  * LibvirtComputingResource execute requests on the computing/routing host using
  * the libvirt API
@@ -693,6 +701,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
     protected String dpdkOvsPath;
     protected String directDownloadTemporaryDownloadPath;
     protected String cachePath;
+    protected String javaTempDir = System.getProperty("java.io.tmpdir");

     private String getEndIpFromStartIp(final String startIp, final int numIps) {
         final String[] tokens = startIp.split("[.]");
@@ -2924,6 +2933,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
                 pool.getUuid(), devId, diskBusType, DiskProtocol.RBD, DiskDef.DiskFmtType.RAW);
         } else if (pool.getType() == StoragePoolType.PowerFlex) {
             disk.defBlockBasedDisk(physicalDisk.getPath(), devId, diskBusTypeData);
+            if (physicalDisk.getFormat().equals(PhysicalDiskFormat.QCOW2)) {
+                disk.setDiskFormatType(DiskDef.DiskFmtType.QCOW2);
+            }
         } else if (pool.getType() == StoragePoolType.Gluster) {
             final String mountpoint = pool.getLocalPath();
             final String path = physicalDisk.getPath();
@@ -2960,6 +2972,12 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
         if (volumeObjectTO.getCacheMode() != null) {
             disk.setCacheMode(DiskDef.DiskCacheMode.valueOf(volumeObjectTO.getCacheMode().toString().toUpperCase()));
         }
+
+        if (volumeObjectTO.requiresEncryption()) {
+            String secretUuid = createLibvirtVolumeSecret(conn, volumeObjectTO.getPath(), volumeObjectTO.getPassphrase());
+            DiskDef.LibvirtDiskEncryptDetails encryptDetails = new DiskDef.LibvirtDiskEncryptDetails(secretUuid, QemuObject.EncryptFormat.enumValue(volumeObjectTO.getEncryptFormat()));
+            disk.setLibvirtDiskEncryptDetails(encryptDetails);
+        }
     }
     if (vm.getDevices() == null) {
         s_logger.error("There is no devices for" + vm);
```
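When building the disk definition above, the volume's persisted `encrypt_format` string is mapped back to an enum via `QemuObject.EncryptFormat.enumValue(...)` before it is attached to the libvirt disk XML. A plausible sketch of such a tolerant string-to-enum mapping follows; the enum body (its constants and the LUKS default) is an assumption for illustration, not the PR's exact implementation:

```java
public class EncryptFormatSketch {
    public enum EncryptFormat {
        LUKS, QCOW;

        /** Case-insensitive lookup that falls back to LUKS when no value was persisted. */
        public static EncryptFormat enumValue(String s) {
            if (s == null || s.isEmpty()) {
                return LUKS;
            }
            return valueOf(s.toUpperCase());
        }
    }

    public static void main(String[] args) {
        System.out.println(EncryptFormat.enumValue("luks"));
    }
}
```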
@ -3137,7 +3155,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
public boolean cleanupDisk(final DiskDef disk) { public boolean cleanupDisk(final DiskDef disk) {
final String path = disk.getDiskPath(); final String path = disk.getDiskPath();
if (org.apache.commons.lang.StringUtils.isBlank(path)) { if (StringUtils.isBlank(path)) {
s_logger.debug("Unable to clean up disk with null path (perhaps empty cdrom drive):" + disk); s_logger.debug("Unable to clean up disk with null path (perhaps empty cdrom drive):" + disk);
return false; return false;
} }
@@ -3392,6 +3410,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
cmd.setCluster(_clusterId);
cmd.setGatewayIpAddress(_localGateway);
cmd.setIqn(getIqn());
cmd.getHostDetails().put(HOST_VOLUME_ENCRYPTION, String.valueOf(hostSupportsVolumeEncryption()));
if (cmd.getHostDetails().containsKey("Host.OS")) {
_hostDistro = cmd.getHostDetails().get("Host.OS");
@@ -4705,6 +4724,32 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
return true;
}
/**
* Tests whether this host supports volume encryption, by checking for qemu-img LUKS support and a working cryptsetup
* @return true if the host supports volume encryption
*/
public boolean hostSupportsVolumeEncryption() {
// test qemu-img
try {
QemuImg qemu = new QemuImg(0);
if (!qemu.supportsImageFormat(PhysicalDiskFormat.LUKS)) {
return false;
}
} catch (QemuImgException | LibvirtException ex) {
s_logger.info("Host's qemu install doesn't support encryption", ex);
return false;
}
// test cryptsetup
CryptSetup crypt = new CryptSetup();
if (!crypt.isSupported()) {
s_logger.info("Host can't run cryptsetup");
return false;
}
return true;
}
public boolean isSecureMode(String bootMode) {
if (StringUtils.isNotBlank(bootMode) && "secure".equalsIgnoreCase(bootMode)) {
return true;
@@ -4743,8 +4788,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
public void setBackingFileFormat(String volPath) {
final int timeout = 0;
QemuImgFile file = new QemuImgFile(volPath);
QemuImg qemu = new QemuImg(timeout);
try {
Map<String, String> info = qemu.info(file);
String backingFilePath = info.get(QemuImg.BACKING_FILE);
String backingFileFormat = info.get(QemuImg.BACKING_FILE_FORMAT);
@@ -4815,4 +4861,70 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
dm.setSchedulerParameters(params);
}
/**
* Set up a libvirt secret for a volume. If libvirt reports that a secret is already defined for this volume path, we reuse its UUID.
* The UUID of the secret must be deterministic, so that the same UUID can be registered on the target host during live migration
*
* @param conn libvirt connection
* @param consumer identifier for volume in secret
* @param data secret contents
* @return uuid of matching secret for volume
* @throws LibvirtException
*/
public String createLibvirtVolumeSecret(Connect conn, String consumer, byte[] data) throws LibvirtException {
String secretUuid = null;
LibvirtSecretDef secretDef = new LibvirtSecretDef(LibvirtSecretDef.Usage.VOLUME, generateSecretUUIDFromString(consumer));
secretDef.setVolumeVolume(consumer);
secretDef.setPrivate(true);
secretDef.setEphemeral(true);
try {
Secret secret = conn.secretDefineXML(secretDef.toString());
secret.setValue(data);
secretUuid = secret.getUUIDString();
secret.free();
} catch (LibvirtException ex) {
if (ex.getMessage().contains("already defined for use")) {
Match match = new Match();
if (UuidUtils.getUuidRegex().matches(ex.getMessage(), match)) {
secretUuid = match.getCapturedText(0);
s_logger.info(String.format("Reusing previously defined secret '%s' for volume '%s'", secretUuid, consumer));
} else {
throw ex;
}
} else {
throw ex;
}
}
return secretUuid;
}
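For reference, the ephemeral, private volume secret defined by `createLibvirtVolumeSecret` above corresponds roughly to libvirt secret XML like the following (the UUID and volume path are illustrative placeholders, not values from this PR):

```xml
<secret ephemeral='yes' private='yes'>
  <uuid>00000000-0000-0000-0000-000000000000</uuid>
  <usage type='volume'>
    <volume>/mnt/pool/vol-1234.qcow2</volume>
  </usage>
</secret>
```

Because the secret is marked private, its value can be set but not read back through the libvirt API, which is why migration re-registers the secret on the target host rather than copying it.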
public void removeLibvirtVolumeSecret(Connect conn, String secretUuid) throws LibvirtException {
try {
Secret secret = conn.secretLookupByUUIDString(secretUuid);
secret.undefine();
} catch (LibvirtException ex) {
if (ex.getMessage().contains("Secret not found")) {
s_logger.debug(String.format("Secret uuid %s doesn't exist", secretUuid));
return;
}
throw ex;
}
s_logger.debug(String.format("Undefined secret %s", secretUuid));
}
public void cleanOldSecretsByDiskDef(Connect conn, List<DiskDef> disks) throws LibvirtException {
for (DiskDef disk : disks) {
DiskDef.LibvirtDiskEncryptDetails encryptDetails = disk.getLibvirtDiskEncryptDetails();
if (encryptDetails != null) {
removeLibvirtVolumeSecret(conn, encryptDetails.getPassphraseUuid());
}
}
}
public static String generateSecretUUIDFromString(String seed) {
return UUID.nameUUIDFromBytes(seed.getBytes()).toString();
}
}
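Since `generateSecretUUIDFromString` derives a name-based (version 3) UUID from the volume identifier, the source and destination hosts of a live migration can compute the same secret UUID independently, without exchanging it. A minimal standalone sketch of that property (class name and paths are hypothetical):

```java
import java.util.UUID;

public class SecretUuidDemo {
    // Same derivation as generateSecretUUIDFromString: a name-based (v3)
    // UUID computed from the bytes of the volume identifier.
    static String secretUuidFor(String volumePath) {
        return UUID.nameUUIDFromBytes(volumePath.getBytes()).toString();
    }

    public static void main(String[] args) {
        String a = secretUuidFor("/mnt/pool/vol-1234.qcow2");
        String b = secretUuidFor("/mnt/pool/vol-1234.qcow2");
        String c = secretUuidFor("/mnt/pool/vol-5678.qcow2");
        // Deterministic: any host derives the same UUID for the same path.
        if (!a.equals(b)) {
            throw new AssertionError("expected a deterministic UUID");
        }
        // Distinct volumes map to distinct secret UUIDs.
        if (a.equals(c)) {
            throw new AssertionError("expected different UUIDs for different paths");
        }
        System.out.println("secret uuid: " + a);
    }
}
```

This determinism is also what lets `LibvirtMigrateCommandWrapper` rewrite the `<secret uuid='...'>` attribute in the migrated domain XML from nothing but the destination volume's file name.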


@@ -28,6 +28,7 @@ import javax.xml.parsers.ParserConfigurationException;
import org.apache.cloudstack.utils.security.ParserUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.cloudstack.utils.qemu.QemuObject;
import org.apache.log4j.Logger;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
@@ -193,6 +194,15 @@ public class LibvirtDomainXMLParser {
}
}
NodeList encryption = disk.getElementsByTagName("encryption");
if (encryption.getLength() != 0) {
Element encryptionElement = (Element) encryption.item(0);
String passphraseUuid = getAttrValue("secret", "uuid", encryptionElement);
QemuObject.EncryptFormat encryptFormat = QemuObject.EncryptFormat.enumValue(encryptionElement.getAttribute("format"));
DiskDef.LibvirtDiskEncryptDetails encryptDetails = new DiskDef.LibvirtDiskEncryptDetails(passphraseUuid, encryptFormat);
def.setLibvirtDiskEncryptDetails(encryptDetails);
}
diskDefs.add(def);
}


@@ -55,10 +55,14 @@ public class LibvirtSecretDef {
return _ephemeral;
}
public void setEphemeral(boolean ephemeral) { _ephemeral = ephemeral; }
public boolean getPrivate() {
return _private;
}
public void setPrivate(boolean isPrivate) { _private = isPrivate; }
public String getUuid() {
return _uuid;
}


@@ -22,6 +22,7 @@ import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.cloudstack.utils.qemu.QemuObject;
import org.apache.commons.lang.StringEscapeUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger;
@@ -559,6 +560,19 @@
}
public static class DiskDef {
public static class LibvirtDiskEncryptDetails {
String passphraseUuid;
QemuObject.EncryptFormat encryptFormat;
public LibvirtDiskEncryptDetails(String passphraseUuid, QemuObject.EncryptFormat encryptFormat) {
this.passphraseUuid = passphraseUuid;
this.encryptFormat = encryptFormat;
}
public String getPassphraseUuid() { return this.passphraseUuid; }
public QemuObject.EncryptFormat getEncryptFormat() { return this.encryptFormat; }
}
public enum DeviceType {
FLOPPY("floppy"), DISK("disk"), CDROM("cdrom"), LUN("lun");
String _type;
@@ -714,6 +728,7 @@
private boolean qemuDriver = true;
private DiscardType _discard = DiscardType.IGNORE;
private IoDriver ioDriver;
private LibvirtDiskEncryptDetails encryptDetails;
public DiscardType getDiscard() {
return _discard;
@@ -962,6 +977,8 @@
return _diskFmtType;
}
public void setDiskFormatType(DiskFmtType type) { _diskFmtType = type; }
public void setBytesReadRate(Long bytesReadRate) {
_bytesReadRate = bytesReadRate;
}
@@ -1026,6 +1043,10 @@
this._serial = serial;
}
public void setLibvirtDiskEncryptDetails(LibvirtDiskEncryptDetails details) { this.encryptDetails = details; }
public LibvirtDiskEncryptDetails getLibvirtDiskEncryptDetails() { return this.encryptDetails; }
@Override
public String toString() {
StringBuilder diskBuilder = new StringBuilder();
@@ -1093,7 +1114,13 @@
diskBuilder.append("/>\n");
if (_serial != null && !_serial.isEmpty() && _deviceType != DeviceType.LUN) {
diskBuilder.append("<serial>" + _serial + "</serial>\n");
}
if (encryptDetails != null) {
diskBuilder.append("<encryption format='" + encryptDetails.encryptFormat + "'>\n");
diskBuilder.append("<secret type='passphrase' uuid='" + encryptDetails.passphraseUuid + "' />\n");
diskBuilder.append("</encryption>\n");
}
if ((_deviceType != DeviceType.CDROM) &&
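Taken together, the `LibvirtDiskEncryptDetails` additions above make an encrypted disk render into domain XML along these lines (driver, source, and UUID values are illustrative placeholders):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/pool/vol-1234.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <encryption format='luks'>
    <secret type='passphrase' uuid='00000000-0000-0000-0000-000000000000'/>
  </encryption>
</disk>
```

libvirt resolves the referenced secret on the host and hands the passphrase to QEMU, so the guest never sees the key material.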


@@ -59,13 +59,13 @@ public final class LibvirtCreateCommandWrapper extends CommandWrapper<CreateComm
vol = libvirtComputingResource.templateToPrimaryDownload(command.getTemplateUrl(), primaryPool, dskch.getPath());
} else {
baseVol = primaryPool.getPhysicalDisk(command.getTemplateUrl());
vol = storagePoolMgr.createDiskFromTemplate(baseVol, dskch.getPath(), dskch.getProvisioningType(), primaryPool, baseVol.getSize(), 0, null);
}
if (vol == null) {
return new Answer(command, false, " Can't create storage volume on storage pool");
}
} else {
vol = primaryPool.createPhysicalDisk(dskch.getPath(), dskch.getProvisioningType(), dskch.getSize(), null);
if (vol == null) {
return new Answer(command, false, " Can't create Physical Disk");
}


@@ -118,8 +118,8 @@ public final class LibvirtCreatePrivateTemplateFromVolumeCommandWrapper extends
final QemuImgFile destFile = new QemuImgFile(tmpltPath + "/" + command.getUniqueName() + ".qcow2");
destFile.setFormat(PhysicalDiskFormat.QCOW2);
final QemuImg q = new QemuImg(0);
try {
q.convert(srcFile, destFile);
} catch (final QemuImgException | LibvirtException e) {
s_logger.error("Failed to create new template while converting " + srcFile.getFileName() + " to " + destFile.getFileName() + " the error was: " +


@@ -45,6 +45,7 @@ import javax.xml.transform.stream.StreamResult;
import org.apache.cloudstack.utils.security.ParserUtils;
import org.apache.commons.collections.MapUtils;
import org.apache.commons.io.FilenameUtils;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger;
@@ -299,6 +300,7 @@ public final class LibvirtMigrateCommandWrapper extends CommandWrapper<MigrateCo
s_logger.debug(String.format("Cleaning the disks of VM [%s] in the source pool after VM migration finished.", vmName));
}
deleteOrDisconnectDisksOnSourcePool(libvirtComputingResource, migrateDiskInfoList, disks);
libvirtComputingResource.cleanOldSecretsByDiskDef(conn, disks);
}
} catch (final LibvirtException e) {
@@ -573,6 +575,17 @@ public final class LibvirtMigrateCommandWrapper extends CommandWrapper<MigrateCo
diskNode.appendChild(newChildSourceNode);
} else if (migrateStorageManaged && "auth".equals(diskChildNode.getNodeName())) {
diskNode.removeChild(diskChildNode);
} else if ("encryption".equals(diskChildNode.getNodeName())) {
for (int s = 0; s < diskChildNode.getChildNodes().getLength(); s++) {
Node encryptionChild = diskChildNode.getChildNodes().item(s);
if ("secret".equals(encryptionChild.getNodeName())) {
NamedNodeMap secretAttributes = encryptionChild.getAttributes();
Node uuidAttribute = secretAttributes.getNamedItem("uuid");
String volumeFileName = FilenameUtils.getBaseName(migrateDiskInfo.getSourceText());
String newSecretUuid = LibvirtComputingResource.generateSecretUUIDFromString(volumeFileName);
uuidAttribute.setTextContent(newSecretUuid);
}
}
}
}
}


@@ -24,6 +24,7 @@ import java.util.HashMap;
import java.util.Map;
import org.apache.cloudstack.storage.configdrive.ConfigDrive;
import org.apache.cloudstack.storage.to.VolumeObjectTO;
import org.apache.commons.collections.MapUtils;
import org.apache.log4j.Logger;
import org.libvirt.Connect;
@@ -88,14 +89,31 @@ public final class LibvirtPrepareForMigrationCommandWrapper extends CommandWrapp
/* setup disks, e.g for iso */
final DiskTO[] volumes = vm.getDisks();
for (final DiskTO volume : volumes) {
final DataTO data = volume.getData();
if (volume.getType() == Volume.Type.ISO) {
if (data != null && data.getPath() != null && data.getPath().startsWith(ConfigDrive.CONFIGDRIVEDIR)) {
libvirtComputingResource.getVolumePath(conn, volume, vm.isConfigDriveOnHostCache());
} else {
libvirtComputingResource.getVolumePath(conn, volume);
}
}
if (data instanceof VolumeObjectTO) {
final VolumeObjectTO volumeObjectTO = (VolumeObjectTO)data;
if (volumeObjectTO.requiresEncryption()) {
String secretConsumer = volumeObjectTO.getPath();
if (volume.getDetails() != null && volume.getDetails().containsKey(DiskTO.SECRET_CONSUMER_DETAIL)) {
secretConsumer = volume.getDetails().get(DiskTO.SECRET_CONSUMER_DETAIL);
}
String secretUuid = libvirtComputingResource.createLibvirtVolumeSecret(conn, secretConsumer, volumeObjectTO.getPassphrase());
s_logger.debug(String.format("Created libvirt secret %s for disk %s", secretUuid, volumeObjectTO.getPath()));
volumeObjectTO.clearPassphrase();
} else {
s_logger.debug(String.format("disk %s has no passphrase or encryption", volumeObjectTO));
}
}
}
skipDisconnect = true;


@@ -19,9 +19,26 @@
package com.cloud.hypervisor.kvm.resource.wrapper;
import static com.cloud.utils.NumbersUtil.toHumanReadableSize;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import com.cloud.hypervisor.kvm.storage.ScaleIOStorageAdaptor;
import org.apache.cloudstack.utils.cryptsetup.KeyFile;
import org.apache.cloudstack.utils.qemu.QemuImageOptions;
import org.apache.cloudstack.utils.qemu.QemuImg;
import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
import org.apache.cloudstack.utils.qemu.QemuImgException;
import org.apache.cloudstack.utils.qemu.QemuImgFile;
import org.apache.cloudstack.utils.qemu.QemuObject;
import org.apache.log4j.Logger;
import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.DomainInfo;
import org.libvirt.LibvirtException;
import org.libvirt.StorageVol;
@@ -39,8 +56,6 @@ import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.script.Script;
/*
 * Uses a local script now, eventually support for virStorageVolResize() will maybe work on qcow2 and lvm and we can do this in libvirt calls
 */
@@ -51,8 +66,8 @@ public final class LibvirtResizeVolumeCommandWrapper extends CommandWrapper<Resi
@Override
public Answer execute(final ResizeVolumeCommand command, final LibvirtComputingResource libvirtComputingResource) {
final String volumeId = command.getPath();
long newSize = command.getNewSize();
final long currentSize = command.getCurrentSize();
final String vmInstanceName = command.getInstanceName();
final boolean shrinkOk = command.getShrinkOk();
@@ -69,11 +84,23 @@ public final class LibvirtResizeVolumeCommandWrapper extends CommandWrapper<Resi
final KVMStoragePoolManager storagePoolMgr = libvirtComputingResource.getStoragePoolMgr();
KVMStoragePool pool = storagePoolMgr.getStoragePool(spool.getType(), spool.getUuid());
if (spool.getType().equals(StoragePoolType.PowerFlex)) {
pool.connectPhysicalDisk(volumeId, null);
}
final KVMPhysicalDisk vol = pool.getPhysicalDisk(volumeId);
final String path = vol.getPath();
String type = notifyOnlyType;
if (spool.getType().equals(StoragePoolType.PowerFlex) && vol.getFormat().equals(PhysicalDiskFormat.QCOW2)) {
// PowerFlex QCOW2 sizing needs to consider overhead.
newSize = ScaleIOStorageAdaptor.getUsableBytesFromRawBytes(newSize);
} else if (spool.getType().equals(StoragePoolType.PowerFlex)) {
// PowerFlex RAW/LUKS is already resized, we just notify the domain based on new size (considering LUKS overhead)
newSize = getVirtualSizeFromFile(path);
}
if (pool.getType() != StoragePoolType.RBD && pool.getType() != StoragePoolType.Linstor && pool.getType() != StoragePoolType.PowerFlex) {
type = libvirtComputingResource.getResizeScriptType(pool, vol);
if (type.equals("QCOW2") && shrinkOk) {
return new ResizeVolumeAnswer(command, false, "Unable to shrink volumes of type " + type);
@@ -84,9 +111,9 @@ public final class LibvirtResizeVolumeCommandWrapper extends CommandWrapper<Resi
s_logger.debug("Resizing volume: " + path + ", from: " + toHumanReadableSize(currentSize) + ", to: " + toHumanReadableSize(newSize) + ", type: " + type + ", name: " + vmInstanceName + ", shrinkOk: " + shrinkOk);
/* libvirt doesn't support resizing (C)LVM devices, and corrupts QCOW2 in some scenarios, so we have to do these via qemu-img */
if (pool.getType() != StoragePoolType.CLVM && pool.getType() != StoragePoolType.Linstor && pool.getType() != StoragePoolType.PowerFlex
&& vol.getFormat() != PhysicalDiskFormat.QCOW2) {
s_logger.debug("Volume " + path + " can be resized by libvirt. Asking libvirt to resize the volume.");
try {
final LibvirtUtilitiesHelper libvirtUtilitiesHelper = libvirtComputingResource.getLibvirtUtilitiesHelper();
@@ -107,35 +134,95 @@ public final class LibvirtResizeVolumeCommandWrapper extends CommandWrapper<Resi
return new ResizeVolumeAnswer(command, false, e.toString());
}
}
boolean vmIsRunning = isVmRunning(vmInstanceName, libvirtComputingResource);
/* when VM is offline, we use qemu-img directly to resize encrypted volumes.
If VM is online, the existing resize script will call virsh blockresize which works
with both encrypted and non-encrypted volumes.
*/
if (!vmIsRunning && command.getPassphrase() != null && command.getPassphrase().length > 0) {
s_logger.debug("Invoking qemu-img to resize an offline, encrypted volume");
QemuObject.EncryptFormat encryptFormat = QemuObject.EncryptFormat.enumValue(command.getEncryptFormat());
resizeEncryptedQcowFile(vol, encryptFormat, newSize, command.getPassphrase(), libvirtComputingResource);
} else {
s_logger.debug("Invoking resize script to handle type " + type);
final Script resizecmd = new Script(libvirtComputingResource.getResizeVolumePath(), libvirtComputingResource.getCmdsTimeout(), s_logger);
resizecmd.add("-s", String.valueOf(newSize));
resizecmd.add("-c", String.valueOf(currentSize));
resizecmd.add("-p", path);
resizecmd.add("-t", type);
resizecmd.add("-r", String.valueOf(shrinkOk));
resizecmd.add("-v", vmInstanceName);
final String result = resizecmd.execute();
if (result != null) {
if (type.equals(notifyOnlyType)) {
return new ResizeVolumeAnswer(command, true, "Resize succeeded, but need reboot to notify guest");
} else {
return new ResizeVolumeAnswer(command, false, result);
}
}
}
/* fetch new size as seen from libvirt, don't want to assume anything */
pool = storagePoolMgr.getStoragePool(spool.getType(), spool.getUuid());
pool.refresh();
final long finalSize = pool.getPhysicalDisk(volumeId).getVirtualSize();
s_logger.debug("after resize, size reports as: " + toHumanReadableSize(finalSize) + ", requested: " + toHumanReadableSize(newSize));
return new ResizeVolumeAnswer(command, true, "success", finalSize);
} catch (final CloudRuntimeException e) {
final String error = "Failed to resize volume: " + e.getMessage();
s_logger.debug(error);
return new ResizeVolumeAnswer(command, false, error);
} finally {
command.clearPassphrase();
}
}
private boolean isVmRunning(final String vmName, final LibvirtComputingResource libvirtComputingResource) {
try {
final LibvirtUtilitiesHelper libvirtUtilitiesHelper = libvirtComputingResource.getLibvirtUtilitiesHelper();
Connect conn = libvirtUtilitiesHelper.getConnectionByVmName(vmName);
Domain dom = conn.domainLookupByName(vmName);
return (dom != null && dom.getInfo().state == DomainInfo.DomainState.VIR_DOMAIN_RUNNING);
} catch (LibvirtException ex) {
s_logger.info(String.format("Did not find a running VM '%s'", vmName));
}
return false;
}
private void resizeEncryptedQcowFile(final KVMPhysicalDisk vol, final QemuObject.EncryptFormat encryptFormat, long newSize,
byte[] passphrase, final LibvirtComputingResource libvirtComputingResource) throws CloudRuntimeException {
List<QemuObject> passphraseObjects = new ArrayList<>();
try (KeyFile keyFile = new KeyFile(passphrase)) {
passphraseObjects.add(
QemuObject.prepareSecretForQemuImg(vol.getFormat(), encryptFormat, keyFile.toString(), "sec0", null)
);
QemuImg q = new QemuImg(libvirtComputingResource.getCmdsTimeout());
QemuImageOptions imgOptions = new QemuImageOptions(vol.getFormat(), vol.getPath(), "sec0");
q.resize(imgOptions, passphraseObjects, newSize);
} catch (QemuImgException | LibvirtException ex) {
throw new CloudRuntimeException("Failed to run qemu-img for resize", ex);
} catch (IOException ex) {
throw new CloudRuntimeException("Failed to create keyfile for encrypted resize", ex);
} finally {
Arrays.fill(passphrase, (byte) 0);
}
}
private long getVirtualSizeFromFile(String path) {
try {
QemuImg qemu = new QemuImg(0);
QemuImgFile qemuFile = new QemuImgFile(path);
Map<String, String> info = qemu.info(qemuFile);
if (info.containsKey(QemuImg.VIRTUAL_SIZE)) {
return Long.parseLong(info.get(QemuImg.VIRTUAL_SIZE));
} else {
throw new CloudRuntimeException("Unable to determine virtual size of volume at path " + path);
}
} catch (QemuImgException | LibvirtException ex) {
throw new CloudRuntimeException("Error when inspecting volume at path " + path, ex);
}
}
}


@@ -99,6 +99,10 @@ public final class LibvirtStopCommandWrapper extends CommandWrapper<StopCommand,
if (disks != null && disks.size() > 0) {
for (final DiskDef disk : disks) {
libvirtComputingResource.cleanupDisk(disk);
DiskDef.LibvirtDiskEncryptDetails diskEncryptDetails = disk.getLibvirtDiskEncryptDetails();
if (diskEncryptDetails != null) {
libvirtComputingResource.removeLibvirtVolumeSecret(conn, diskEncryptDetails.getPassphraseUuid());
}
}
}
else {


@@ -21,12 +21,12 @@ import java.util.List;
import java.util.Map;
import org.apache.cloudstack.utils.qemu.QemuImg;
import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
import org.apache.cloudstack.utils.qemu.QemuImgException;
import org.apache.cloudstack.utils.qemu.QemuImgFile;
import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger;
import org.libvirt.LibvirtException;
import com.cloud.agent.api.to.DiskTO;
import com.cloud.storage.Storage;
@@ -35,7 +35,6 @@ import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.utils.exception.CloudRuntimeException;
import com.cloud.utils.script.OutputInterpreter;
import com.cloud.utils.script.Script;
@StorageAdaptorInfo(storagePoolType=StoragePoolType.Iscsi)
public class IscsiAdmStorageAdaptor implements StorageAdaptor {
@@ -75,7 +74,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor {
// called from LibvirtComputingResource.execute(CreateCommand)
// does not apply for iScsiAdmStorageAdaptor
@Override
public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, KVMStoragePool pool, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
throw new UnsupportedOperationException("Creating a physical disk is not supported.");
}
@@ -384,7 +383,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor {
@Override
public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, PhysicalDiskFormat format,
ProvisioningType provisioningType, long size,
KVMStoragePool destPool, int timeout, byte[] passphrase) {
throw new UnsupportedOperationException("Creating a disk from a template is not yet supported for this configuration.");
}
@@ -394,8 +393,12 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor {
}
@Override
public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) {
return copyPhysicalDisk(disk, name, destPool, timeout, null, null, null);
}
@Override
public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk srcDisk, String destVolumeUuid, KVMStoragePool destPool, int timeout, byte[] srcPassphrase, byte[] destPassphrase, ProvisioningType provisioningType) {
QemuImgFile srcFile;
@ -414,6 +417,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor {
QemuImgFile destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat()); QemuImgFile destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat());
try { try {
QemuImg q = new QemuImg(timeout);
q.convert(srcFile, destFile); q.convert(srcFile, destFile);
} catch (QemuImgException | LibvirtException ex) { } catch (QemuImgException | LibvirtException ex) {
String msg = "Failed to copy data from " + srcDisk.getPath() + " to " + String msg = "Failed to copy data from " + srcDisk.getPath() + " to " +
@ -443,7 +447,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor {
} }
@Override @Override
public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout) { public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) {
return null; return null;
} }
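The adaptor keeps the old four-argument `copyPhysicalDisk` as a convenience that delegates to the new passphrase-aware overload with nulls, so existing callers that don't use encryption are untouched. A minimal sketch of that delegation pattern, using hypothetical `Disk`/`Pool` stand-ins rather than the real CloudStack types:

```java
// Sketch of extending an API with optional encryption parameters without
// breaking existing callers; Disk and Pool are illustrative stand-ins for
// KVMPhysicalDisk and KVMStoragePool (not the real classes).
public class CopyDelegation {
    record Disk(String name) {}
    record Pool(String uuid) {}

    // Legacy entry point: always an unencrypted copy.
    static String copyPhysicalDisk(Disk disk, String name, Pool destPool, int timeout) {
        return copyPhysicalDisk(disk, name, destPool, timeout, null, null);
    }

    // New entry point: passphrases are optional; null means "no encryption".
    static String copyPhysicalDisk(Disk disk, String name, Pool destPool, int timeout,
                                   byte[] srcPassphrase, byte[] destPassphrase) {
        boolean srcEncrypted = srcPassphrase != null && srcPassphrase.length > 0;
        boolean destEncrypted = destPassphrase != null && destPassphrase.length > 0;
        return String.format("copy %s -> %s/%s (srcEncrypted=%b, destEncrypted=%b)",
                disk.name(), destPool.uuid(), name, srcEncrypted, destEncrypted);
    }

    public static void main(String[] args) {
        System.out.println(copyPhysicalDisk(new Disk("vol-1"), "vol-2", new Pool("pool-a"), 60));
    }
}
```

The same delegation shape appears in `KVMStoragePoolManager.copyPhysicalDisk` further down in this PR.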


@@ -89,7 +89,7 @@ public class IscsiAdmStoragePool implements KVMStoragePool {
     // from LibvirtComputingResource.createDiskFromTemplate(KVMPhysicalDisk, String, PhysicalDiskFormat, long, KVMStoragePool)
     // does not apply for iScsiAdmStoragePool
     @Override
-    public KVMPhysicalDisk createPhysicalDisk(String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) {
+    public KVMPhysicalDisk createPhysicalDisk(String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
         throw new UnsupportedOperationException("Creating a physical disk is not supported.");
     }
@@ -97,7 +97,7 @@ public class IscsiAdmStoragePool implements KVMStoragePool {
     // from KVMStorageProcessor.createVolume(CreateObjectCommand)
     // does not apply for iScsiAdmStoragePool
     @Override
-    public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size) {
+    public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
         throw new UnsupportedOperationException("Creating a physical disk is not supported.");
     }


@@ -17,11 +17,13 @@
 package com.cloud.hypervisor.kvm.storage;

 import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
+import org.apache.cloudstack.utils.qemu.QemuObject;

 public class KVMPhysicalDisk {
     private String path;
     private String name;
     private KVMStoragePool pool;
+    private boolean useAsTemplate;

     public static String RBDStringBuilder(String monHost, int monPort, String authUserName, String authSecret, String image) {
         String rbdOpts;
@@ -49,6 +51,7 @@ public class KVMPhysicalDisk {
     private PhysicalDiskFormat format;
     private long size;
     private long virtualSize;
+    private QemuObject.EncryptFormat qemuEncryptFormat;

     public KVMPhysicalDisk(String path, String name, KVMStoragePool pool) {
         this.path = path;
@@ -101,4 +104,15 @@ public class KVMPhysicalDisk {
         this.path = path;
     }

+    public QemuObject.EncryptFormat getQemuEncryptFormat() {
+        return this.qemuEncryptFormat;
+    }
+
+    public void setQemuEncryptFormat(QemuObject.EncryptFormat format) {
+        this.qemuEncryptFormat = format;
+    }
+
+    public void setUseAsTemplate() { this.useAsTemplate = true; }
+
+    public boolean useAsTemplate() { return this.useAsTemplate; }
 }
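`KVMPhysicalDisk` now carries a `QemuObject.EncryptFormat`, which the storage processor serializes with `toString()` and later reparses with `enumValue(String)` when rebuilding disk definitions. A minimal sketch of what such an enum could look like; the value set, string spellings, and the default-to-LUKS behavior here are assumptions for illustration, not the exact CloudStack implementation:

```java
// Illustrative stand-in for QemuObject.EncryptFormat: an enum whose
// toString()/enumValue() pair round-trips qemu encryption format names.
// The QCOW value and null-defaulting behavior are assumptions.
public class EncryptFormatSketch {
    public enum EncryptFormat {
        LUKS("luks"),
        QCOW("qcow");  // legacy qcow2 built-in AES encryption, assumed here

        private final String format;
        EncryptFormat(String format) { this.format = format; }

        @Override
        public String toString() { return format; }

        // Null-safe parser: an unset format defaults to LUKS in this sketch.
        public static EncryptFormat enumValue(String s) {
            if (s == null || s.isEmpty()) {
                return LUKS;
            }
            for (EncryptFormat f : values()) {
                if (f.format.equalsIgnoreCase(s)) {
                    return f;
                }
            }
            throw new IllegalArgumentException("Unknown encrypt format: " + s);
        }
    }

    public static void main(String[] args) {
        // Round-trip the way the diff passes it: enum -> String -> enum
        EncryptFormat f = EncryptFormat.enumValue("LUKS");
        System.out.println(f);
    }
}
```

Storing the format on the disk object lets `VolumeObjectTO.setEncryptFormat(...)` carry it back to the management server as a plain string.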


@@ -25,9 +25,9 @@ import com.cloud.storage.Storage;
 import com.cloud.storage.Storage.StoragePoolType;

 public interface KVMStoragePool {
-    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size);
+    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase);

-    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size);
+    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size, byte[] passphrase);

     public boolean connectPhysicalDisk(String volumeUuid, Map<String, String> details);


@@ -386,35 +386,35 @@ public class KVMStoragePoolManager {
     }

     public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, Storage.ProvisioningType provisioningType,
-            KVMStoragePool destPool, int timeout) {
-        return createDiskFromTemplate(template, name, provisioningType, destPool, template.getSize(), timeout);
+            KVMStoragePool destPool, int timeout, byte[] passphrase) {
+        return createDiskFromTemplate(template, name, provisioningType, destPool, template.getSize(), timeout, passphrase);
     }

     public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, Storage.ProvisioningType provisioningType,
-            KVMStoragePool destPool, long size, int timeout) {
+            KVMStoragePool destPool, long size, int timeout, byte[] passphrase) {
         StorageAdaptor adaptor = getStorageAdaptor(destPool.getType());

         // LibvirtStorageAdaptor-specific statement
         if (destPool.getType() == StoragePoolType.RBD) {
             return adaptor.createDiskFromTemplate(template, name,
                     PhysicalDiskFormat.RAW, provisioningType,
-                    size, destPool, timeout);
+                    size, destPool, timeout, passphrase);
         } else if (destPool.getType() == StoragePoolType.CLVM) {
             return adaptor.createDiskFromTemplate(template, name,
                     PhysicalDiskFormat.RAW, provisioningType,
-                    size, destPool, timeout);
+                    size, destPool, timeout, passphrase);
         } else if (template.getFormat() == PhysicalDiskFormat.DIR) {
             return adaptor.createDiskFromTemplate(template, name,
                     PhysicalDiskFormat.DIR, provisioningType,
-                    size, destPool, timeout);
+                    size, destPool, timeout, passphrase);
         } else if (destPool.getType() == StoragePoolType.PowerFlex || destPool.getType() == StoragePoolType.Linstor) {
             return adaptor.createDiskFromTemplate(template, name,
                     PhysicalDiskFormat.RAW, provisioningType,
-                    size, destPool, timeout);
+                    size, destPool, timeout, passphrase);
         } else {
             return adaptor.createDiskFromTemplate(template, name,
                     PhysicalDiskFormat.QCOW2, provisioningType,
-                    size, destPool, timeout);
+                    size, destPool, timeout, passphrase);
         }
     }
@@ -425,13 +425,18 @@ public class KVMStoragePoolManager {
     public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) {
         StorageAdaptor adaptor = getStorageAdaptor(destPool.getType());
-        return adaptor.copyPhysicalDisk(disk, name, destPool, timeout);
+        return adaptor.copyPhysicalDisk(disk, name, destPool, timeout, null, null, null);
+    }
+
+    public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout, byte[] srcPassphrase, byte[] dstPassphrase, Storage.ProvisioningType provisioningType) {
+        StorageAdaptor adaptor = getStorageAdaptor(destPool.getType());
+        return adaptor.copyPhysicalDisk(disk, name, destPool, timeout, srcPassphrase, dstPassphrase, provisioningType);
     }

     public KVMPhysicalDisk createDiskWithTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size,
-            KVMStoragePool destPool, int timeout) {
+            KVMStoragePool destPool, int timeout, byte[] passphrase) {
         StorageAdaptor adaptor = getStorageAdaptor(destPool.getType());
-        return adaptor.createDiskFromTemplateBacking(template, name, format, size, destPool, timeout);
+        return adaptor.createDiskFromTemplateBacking(template, name, format, size, destPool, timeout, passphrase);
     }

     public KVMPhysicalDisk createPhysicalDiskFromDirectDownloadTemplate(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
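In `createDiskFromTemplate` above, the passphrase is forwarded unchanged to every branch; the only thing that varies by pool type is the physical disk format. That choice can be read as a small pure function, sketched here with stand-in enums that mirror only the names used in the diff:

```java
// Pure-function view of the format dispatch in
// KVMStoragePoolManager.createDiskFromTemplate. The enums below are
// trimmed stand-ins, not the real CloudStack types.
public class FormatSelection {
    enum StoragePoolType { RBD, CLVM, PowerFlex, Linstor, NetworkFilesystem }
    enum PhysicalDiskFormat { RAW, DIR, QCOW2 }

    static PhysicalDiskFormat formatFor(StoragePoolType destPoolType, PhysicalDiskFormat templateFormat) {
        if (destPoolType == StoragePoolType.RBD) {
            return PhysicalDiskFormat.RAW;       // RBD images are raw
        }
        if (destPoolType == StoragePoolType.CLVM) {
            return PhysicalDiskFormat.RAW;       // logical volumes are raw
        }
        if (templateFormat == PhysicalDiskFormat.DIR) {
            return PhysicalDiskFormat.DIR;       // DIR templates stay DIR
        }
        if (destPoolType == StoragePoolType.PowerFlex || destPoolType == StoragePoolType.Linstor) {
            return PhysicalDiskFormat.RAW;       // block-backed pools take raw
        }
        return PhysicalDiskFormat.QCOW2;         // file-based pools default to qcow2
    }

    public static void main(String[] args) {
        System.out.println(formatFor(StoragePoolType.NetworkFilesystem, PhysicalDiskFormat.QCOW2));
    }
}
```

Keeping the branch order identical to the source matters: the template-format check sits between the CLVM and PowerFlex/Linstor checks.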


@@ -37,6 +37,7 @@ import java.util.UUID;

 import javax.naming.ConfigurationException;

+import com.cloud.storage.ScopeType;
 import org.apache.cloudstack.agent.directdownload.DirectDownloadAnswer;
 import org.apache.cloudstack.agent.directdownload.DirectDownloadCommand;
 import org.apache.cloudstack.agent.directdownload.HttpDirectDownloadCommand;
@@ -68,6 +69,7 @@ import org.apache.cloudstack.utils.qemu.QemuImg;
 import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
 import org.apache.cloudstack.utils.qemu.QemuImgException;
 import org.apache.cloudstack.utils.qemu.QemuImgFile;
+import org.apache.cloudstack.utils.qemu.QemuObject;
 import org.apache.commons.collections.MapUtils;
 import org.apache.commons.io.FileUtils;
@@ -148,7 +150,6 @@ public class KVMStorageProcessor implements StorageProcessor {
     private int _cmdsTimeout;

     private static final String MANAGE_SNAPSTHOT_CREATE_OPTION = "-c";
-    private static final String MANAGE_SNAPSTHOT_DESTROY_OPTION = "-d";
     private static final String NAME_OPTION = "-n";
     private static final String CEPH_MON_HOST = "mon_host";
     private static final String CEPH_AUTH_KEY = "key";
@ -250,6 +251,7 @@ public class KVMStorageProcessor implements StorageProcessor {
} }
/* Copy volume to primary storage */ /* Copy volume to primary storage */
tmplVol.setUseAsTemplate();
s_logger.debug("Copying template to primary storage, template format is " + tmplVol.getFormat() ); s_logger.debug("Copying template to primary storage, template format is " + tmplVol.getFormat() );
final KVMStoragePool primaryPool = storagePoolMgr.getStoragePool(primaryStore.getPoolType(), primaryStore.getUuid()); final KVMStoragePool primaryPool = storagePoolMgr.getStoragePool(primaryStore.getPoolType(), primaryStore.getUuid());
@ -422,7 +424,7 @@ public class KVMStorageProcessor implements StorageProcessor {
s_logger.warn("Failed to connect new volume at path: " + path + ", in storage pool id: " + primaryStore.getUuid()); s_logger.warn("Failed to connect new volume at path: " + path + ", in storage pool id: " + primaryStore.getUuid());
} }
vol = storagePoolMgr.copyPhysicalDisk(BaseVol, path != null ? path : volume.getUuid(), primaryPool, cmd.getWaitInMillSeconds()); vol = storagePoolMgr.copyPhysicalDisk(BaseVol, path != null ? path : volume.getUuid(), primaryPool, cmd.getWaitInMillSeconds(), null, volume.getPassphrase(), volume.getProvisioningType());
storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path); storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path);
} else { } else {
@ -432,7 +434,7 @@ public class KVMStorageProcessor implements StorageProcessor {
} }
BaseVol = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), templatePath); BaseVol = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), templatePath);
vol = storagePoolMgr.createDiskFromTemplate(BaseVol, volume.getUuid(), volume.getProvisioningType(), vol = storagePoolMgr.createDiskFromTemplate(BaseVol, volume.getUuid(), volume.getProvisioningType(),
BaseVol.getPool(), volume.getSize(), cmd.getWaitInMillSeconds()); BaseVol.getPool(), volume.getSize(), cmd.getWaitInMillSeconds(), volume.getPassphrase());
} }
if (vol == null) { if (vol == null) {
return new CopyCmdAnswer(" Can't create storage volume on storage pool"); return new CopyCmdAnswer(" Can't create storage volume on storage pool");
@@ -441,6 +443,9 @@ public class KVMStorageProcessor implements StorageProcessor {
             final VolumeObjectTO newVol = new VolumeObjectTO();
             newVol.setPath(vol.getName());
             newVol.setSize(volume.getSize());
+            if (vol.getQemuEncryptFormat() != null) {
+                newVol.setEncryptFormat(vol.getQemuEncryptFormat().toString());
+            }

             if (vol.getFormat() == PhysicalDiskFormat.RAW) {
                 newVol.setFormat(ImageFormat.RAW);
@ -454,6 +459,8 @@ public class KVMStorageProcessor implements StorageProcessor {
} catch (final CloudRuntimeException e) { } catch (final CloudRuntimeException e) {
s_logger.debug("Failed to create volume: ", e); s_logger.debug("Failed to create volume: ", e);
return new CopyCmdAnswer(e.toString()); return new CopyCmdAnswer(e.toString());
} finally {
volume.clearPassphrase();
} }
} }
@ -524,6 +531,7 @@ public class KVMStorageProcessor implements StorageProcessor {
return new CopyCmdAnswer(e.toString()); return new CopyCmdAnswer(e.toString());
} finally { } finally {
srcVol.clearPassphrase();
if (secondaryStoragePool != null) { if (secondaryStoragePool != null) {
storagePoolMgr.deleteStoragePool(secondaryStoragePool.getType(), secondaryStoragePool.getUuid()); storagePoolMgr.deleteStoragePool(secondaryStoragePool.getType(), secondaryStoragePool.getUuid());
} }
@@ -570,6 +578,8 @@ public class KVMStorageProcessor implements StorageProcessor {
             s_logger.debug("Failed to copyVolumeFromPrimaryToSecondary: ", e);
             return new CopyCmdAnswer(e.toString());
         } finally {
+            srcVol.clearPassphrase();
+            destVol.clearPassphrase();
             if (secondaryStoragePool != null) {
                 storagePoolMgr.deleteStoragePool(secondaryStoragePool.getType(), secondaryStoragePool.getUuid());
             }
@@ -697,6 +707,7 @@ public class KVMStorageProcessor implements StorageProcessor {
             s_logger.debug("Failed to createTemplateFromVolume: ", e);
             return new CopyCmdAnswer(e.toString());
         } finally {
+            volume.clearPassphrase();
             if (secondaryStorage != null) {
                 secondaryStorage.delete();
             }
@ -942,6 +953,8 @@ public class KVMStorageProcessor implements StorageProcessor {
Connect conn = null; Connect conn = null;
KVMPhysicalDisk snapshotDisk = null; KVMPhysicalDisk snapshotDisk = null;
KVMStoragePool primaryPool = null; KVMStoragePool primaryPool = null;
final VolumeObjectTO srcVolume = snapshot.getVolume();
try { try {
conn = LibvirtConnection.getConnectionByVmName(vmName); conn = LibvirtConnection.getConnectionByVmName(vmName);
@@ -1024,13 +1037,11 @@ public class KVMStorageProcessor implements StorageProcessor {
             newSnapshot.setPath(snapshotRelPath + File.separator + descName);
             newSnapshot.setPhysicalSize(size);
             return new CopyCmdAnswer(newSnapshot);
-        } catch (final LibvirtException e) {
-            s_logger.debug("Failed to backup snapshot: ", e);
-            return new CopyCmdAnswer(e.toString());
-        } catch (final CloudRuntimeException e) {
+        } catch (final LibvirtException | CloudRuntimeException e) {
             s_logger.debug("Failed to backup snapshot: ", e);
             return new CopyCmdAnswer(e.toString());
         } finally {
+            srcVolume.clearPassphrase();
             if (isCreatedFromVmSnapshot) {
                 s_logger.debug("Ignoring removal of vm snapshot on primary as this snapshot is created from vm snapshot");
             } else if (primaryPool.getType() != StoragePoolType.RBD) {
@@ -1058,16 +1069,6 @@ public class KVMStorageProcessor implements StorageProcessor {
         }
     }

-    private void deleteSnapshotViaManageSnapshotScript(final String snapshotName, KVMPhysicalDisk snapshotDisk) {
-        final Script command = new Script(_manageSnapshotPath, _cmdsTimeout, s_logger);
-        command.add(MANAGE_SNAPSTHOT_DESTROY_OPTION, snapshotDisk.getPath());
-        command.add(NAME_OPTION, snapshotName);
-        final String result = command.execute();
-        if (result != null) {
-            s_logger.debug("Failed to delete snapshot on primary: " + result);
-        }
-    }

     protected synchronized String attachOrDetachISO(final Connect conn, final String vmName, String isoPath, final boolean isAttach, Map<String, String> params) throws LibvirtException, URISyntaxException,
             InternalErrorException {
         String isoXml = null;
@ -1213,7 +1214,7 @@ public class KVMStorageProcessor implements StorageProcessor {
final Long bytesReadRate, final Long bytesReadRateMax, final Long bytesReadRateMaxLength, final Long bytesReadRate, final Long bytesReadRateMax, final Long bytesReadRateMaxLength,
final Long bytesWriteRate, final Long bytesWriteRateMax, final Long bytesWriteRateMaxLength, final Long bytesWriteRate, final Long bytesWriteRateMax, final Long bytesWriteRateMaxLength,
final Long iopsReadRate, final Long iopsReadRateMax, final Long iopsReadRateMaxLength, final Long iopsReadRate, final Long iopsReadRateMax, final Long iopsReadRateMaxLength,
final Long iopsWriteRate, final Long iopsWriteRateMax, final Long iopsWriteRateMaxLength, final String cacheMode) throws LibvirtException, InternalErrorException { final Long iopsWriteRate, final Long iopsWriteRateMax, final Long iopsWriteRateMaxLength, final String cacheMode, final DiskDef.LibvirtDiskEncryptDetails encryptDetails) throws LibvirtException, InternalErrorException {
List<DiskDef> disks = null; List<DiskDef> disks = null;
Domain dm = null; Domain dm = null;
DiskDef diskdef = null; DiskDef diskdef = null;
@ -1281,12 +1282,21 @@ public class KVMStorageProcessor implements StorageProcessor {
final String glusterVolume = attachingPool.getSourceDir().replace("/", ""); final String glusterVolume = attachingPool.getSourceDir().replace("/", "");
diskdef.defNetworkBasedDisk(glusterVolume + path.replace(mountpoint, ""), attachingPool.getSourceHost(), attachingPool.getSourcePort(), null, diskdef.defNetworkBasedDisk(glusterVolume + path.replace(mountpoint, ""), attachingPool.getSourceHost(), attachingPool.getSourcePort(), null,
null, devId, busT, DiskProtocol.GLUSTER, DiskDef.DiskFmtType.QCOW2); null, devId, busT, DiskProtocol.GLUSTER, DiskDef.DiskFmtType.QCOW2);
} else if (attachingPool.getType() == StoragePoolType.PowerFlex) {
diskdef.defBlockBasedDisk(attachingDisk.getPath(), devId, busT);
if (attachingDisk.getFormat() == PhysicalDiskFormat.QCOW2) {
diskdef.setDiskFormatType(DiskDef.DiskFmtType.QCOW2);
}
} else if (attachingDisk.getFormat() == PhysicalDiskFormat.QCOW2) { } else if (attachingDisk.getFormat() == PhysicalDiskFormat.QCOW2) {
diskdef.defFileBasedDisk(attachingDisk.getPath(), devId, busT, DiskDef.DiskFmtType.QCOW2); diskdef.defFileBasedDisk(attachingDisk.getPath(), devId, busT, DiskDef.DiskFmtType.QCOW2);
} else if (attachingDisk.getFormat() == PhysicalDiskFormat.RAW) { } else if (attachingDisk.getFormat() == PhysicalDiskFormat.RAW) {
diskdef.defBlockBasedDisk(attachingDisk.getPath(), devId, busT); diskdef.defBlockBasedDisk(attachingDisk.getPath(), devId, busT);
} }
if (encryptDetails != null) {
diskdef.setLibvirtDiskEncryptDetails(encryptDetails);
}
if ((bytesReadRate != null) && (bytesReadRate > 0)) { if ((bytesReadRate != null) && (bytesReadRate > 0)) {
diskdef.setBytesReadRate(bytesReadRate); diskdef.setBytesReadRate(bytesReadRate);
} }
@@ -1344,19 +1354,27 @@ public class KVMStorageProcessor implements StorageProcessor {
         final PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO)vol.getDataStore();
         final String vmName = cmd.getVmName();
         final String serial = resource.diskUuidToSerial(vol.getUuid());

         try {
             final Connect conn = LibvirtConnection.getConnectionByVmName(vmName);
+            DiskDef.LibvirtDiskEncryptDetails encryptDetails = null;
+            if (vol.requiresEncryption()) {
+                String secretUuid = resource.createLibvirtVolumeSecret(conn, vol.getPath(), vol.getPassphrase());
+                encryptDetails = new DiskDef.LibvirtDiskEncryptDetails(secretUuid, QemuObject.EncryptFormat.enumValue(vol.getEncryptFormat()));
+                vol.clearPassphrase();
+            }
             storagePoolMgr.connectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), vol.getPath(), disk.getDetails());

             final KVMPhysicalDisk phyDisk = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), vol.getPath());
             final String volCacheMode = vol.getCacheMode() == null ? null : vol.getCacheMode().toString();

+            s_logger.debug(String.format("Attaching physical disk %s with format %s", phyDisk.getPath(), phyDisk.getFormat()));
             attachOrDetachDisk(conn, true, vmName, phyDisk, disk.getDiskSeq().intValue(), serial,
                     vol.getBytesReadRate(), vol.getBytesReadRateMax(), vol.getBytesReadRateMaxLength(),
                     vol.getBytesWriteRate(), vol.getBytesWriteRateMax(), vol.getBytesWriteRateMaxLength(),
                     vol.getIopsReadRate(), vol.getIopsReadRateMax(), vol.getIopsReadRateMaxLength(),
-                    vol.getIopsWriteRate(), vol.getIopsWriteRateMax(), vol.getIopsWriteRateMaxLength(), volCacheMode);
+                    vol.getIopsWriteRate(), vol.getIopsWriteRateMax(), vol.getIopsWriteRateMaxLength(), volCacheMode, encryptDetails);

             return new AttachAnswer(disk);
         } catch (final LibvirtException e) {
@@ -1369,6 +1387,8 @@ public class KVMStorageProcessor implements StorageProcessor {
         } catch (final CloudRuntimeException e) {
             s_logger.debug("Failed to attach volume: " + vol.getPath() + ", due to ", e);
             return new AttachAnswer(e.toString());
+        } finally {
+            vol.clearPassphrase();
         }
     }
@ -1389,7 +1409,7 @@ public class KVMStorageProcessor implements StorageProcessor {
vol.getBytesReadRate(), vol.getBytesReadRateMax(), vol.getBytesReadRateMaxLength(), vol.getBytesReadRate(), vol.getBytesReadRateMax(), vol.getBytesReadRateMaxLength(),
vol.getBytesWriteRate(), vol.getBytesWriteRateMax(), vol.getBytesWriteRateMaxLength(), vol.getBytesWriteRate(), vol.getBytesWriteRateMax(), vol.getBytesWriteRateMaxLength(),
vol.getIopsReadRate(), vol.getIopsReadRateMax(), vol.getIopsReadRateMaxLength(), vol.getIopsReadRate(), vol.getIopsReadRateMax(), vol.getIopsReadRateMaxLength(),
vol.getIopsWriteRate(), vol.getIopsWriteRateMax(), vol.getIopsWriteRateMaxLength(), volCacheMode); vol.getIopsWriteRate(), vol.getIopsWriteRateMax(), vol.getIopsWriteRateMaxLength(), volCacheMode, null);
storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), vol.getPath()); storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), vol.getPath());
@@ -1403,6 +1423,8 @@ public class KVMStorageProcessor implements StorageProcessor {
         } catch (final CloudRuntimeException e) {
             s_logger.debug("Failed to detach volume: " + vol.getPath() + ", due to ", e);
             return new DettachAnswer(e.toString());
+        } finally {
+            vol.clearPassphrase();
         }
     }
@ -1421,7 +1443,7 @@ public class KVMStorageProcessor implements StorageProcessor {
destTemplate = primaryPool.getPhysicalDisk(srcBackingFilePath); destTemplate = primaryPool.getPhysicalDisk(srcBackingFilePath);
} }
return storagePoolMgr.createDiskWithTemplateBacking(destTemplate, volume.getUuid(), format, volume.getSize(), return storagePoolMgr.createDiskWithTemplateBacking(destTemplate, volume.getUuid(), format, volume.getSize(),
primaryPool, timeout); primaryPool, timeout, volume.getPassphrase());
} }
/** /**
@ -1429,7 +1451,7 @@ public class KVMStorageProcessor implements StorageProcessor {
*/ */
protected KVMPhysicalDisk createFullCloneVolume(MigrationOptions migrationOptions, VolumeObjectTO volume, KVMStoragePool primaryPool, PhysicalDiskFormat format) { protected KVMPhysicalDisk createFullCloneVolume(MigrationOptions migrationOptions, VolumeObjectTO volume, KVMStoragePool primaryPool, PhysicalDiskFormat format) {
s_logger.debug("For VM migration with full-clone volume: Creating empty stub disk for source disk " + migrationOptions.getSrcVolumeUuid() + " and size: " + toHumanReadableSize(volume.getSize()) + " and format: " + format); s_logger.debug("For VM migration with full-clone volume: Creating empty stub disk for source disk " + migrationOptions.getSrcVolumeUuid() + " and size: " + toHumanReadableSize(volume.getSize()) + " and format: " + format);
return primaryPool.createPhysicalDisk(volume.getUuid(), format, volume.getProvisioningType(), volume.getSize()); return primaryPool.createPhysicalDisk(volume.getUuid(), format, volume.getProvisioningType(), volume.getSize(), volume.getPassphrase());
} }
@Override @Override
@ -1452,24 +1474,25 @@ public class KVMStorageProcessor implements StorageProcessor {
MigrationOptions migrationOptions = volume.getMigrationOptions(); MigrationOptions migrationOptions = volume.getMigrationOptions();
if (migrationOptions != null) { if (migrationOptions != null) {
String srcStoreUuid = migrationOptions.getSrcPoolUuid();
StoragePoolType srcPoolType = migrationOptions.getSrcPoolType();
KVMStoragePool srcPool = storagePoolMgr.getStoragePool(srcPoolType, srcStoreUuid);
int timeout = migrationOptions.getTimeout(); int timeout = migrationOptions.getTimeout();
if (migrationOptions.getType() == MigrationOptions.Type.LinkedClone) { if (migrationOptions.getType() == MigrationOptions.Type.LinkedClone) {
KVMStoragePool srcPool = getTemplateSourcePoolUsingMigrationOptions(primaryPool, migrationOptions);
vol = createLinkedCloneVolume(migrationOptions, srcPool, primaryPool, volume, format, timeout); vol = createLinkedCloneVolume(migrationOptions, srcPool, primaryPool, volume, format, timeout);
} else if (migrationOptions.getType() == MigrationOptions.Type.FullClone) { } else if (migrationOptions.getType() == MigrationOptions.Type.FullClone) {
vol = createFullCloneVolume(migrationOptions, volume, primaryPool, format); vol = createFullCloneVolume(migrationOptions, volume, primaryPool, format);
} }
} else { } else {
vol = primaryPool.createPhysicalDisk(volume.getUuid(), format, vol = primaryPool.createPhysicalDisk(volume.getUuid(), format,
volume.getProvisioningType(), disksize); volume.getProvisioningType(), disksize, volume.getPassphrase());
} }
final VolumeObjectTO newVol = new VolumeObjectTO(); final VolumeObjectTO newVol = new VolumeObjectTO();
if(vol != null) { if(vol != null) {
newVol.setPath(vol.getName()); newVol.setPath(vol.getName());
if (vol.getQemuEncryptFormat() != null) {
newVol.setEncryptFormat(vol.getQemuEncryptFormat().toString());
}
} }
newVol.setSize(volume.getSize()); newVol.setSize(volume.getSize());
newVol.setFormat(ImageFormat.valueOf(format.toString().toUpperCase())); newVol.setFormat(ImageFormat.valueOf(format.toString().toUpperCase()));
@ -1478,6 +1501,8 @@ public class KVMStorageProcessor implements StorageProcessor {
} catch (final Exception e) { } catch (final Exception e) {
s_logger.debug("Failed to create volume: ", e); s_logger.debug("Failed to create volume: ", e);
return new CreateObjectAnswer(e.toString()); return new CreateObjectAnswer(e.toString());
} finally {
volume.clearPassphrase();
} }
} }
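The `clearPassphrase()` calls added in the `finally` blocks throughout this file follow a standard secret-hygiene pattern: zero the key material once the operation completes, whether it succeeded or failed. `clearPassphrase()` itself is a CloudStack method on the volume TO; the following standalone analogue (hypothetical `VolumeSecret` class, `Arrays.fill` for the zeroing) is only a sketch of the same idea:

```java
import java.util.Arrays;

public class PassphraseScrubExample {
    // Hypothetical holder, standing in for the volume TO's passphrase handling.
    static class VolumeSecret {
        private final byte[] passphrase;

        VolumeSecret(byte[] passphrase) {
            this.passphrase = passphrase;
        }

        byte[] getPassphrase() {
            return passphrase;
        }

        // Analogue of clearPassphrase(): overwrite the bytes so the secret
        // does not linger on the heap after the storage operation.
        void clearPassphrase() {
            if (passphrase != null) {
                Arrays.fill(passphrase, (byte) 0);
            }
        }
    }

    static void createVolume(VolumeSecret volume) {
        try {
            // ... use volume.getPassphrase() to create the encrypted disk ...
        } finally {
            volume.clearPassphrase(); // runs on success and on failure alike
        }
    }

    public static void main(String[] args) {
        VolumeSecret v = new VolumeSecret("s3cret".getBytes());
        createVolume(v);
        for (byte b : v.getPassphrase()) {
            if (b != 0) {
                throw new AssertionError("passphrase not zeroed");
            }
        }
        System.out.println("passphrase zeroed");
    }
}
```

Placing the wipe in `finally` rather than at each return site is what keeps the error paths (the `catch` blocks returning failure answers) from leaking the key.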
@@ -1553,6 +1578,10 @@ public class KVMStorageProcessor implements StorageProcessor {
             }
         }

+        if (state == DomainInfo.DomainState.VIR_DOMAIN_RUNNING && volume.requiresEncryption()) {
+            throw new CloudRuntimeException("VM is running, encrypted volume snapshots aren't supported");
+        }
+
         final KVMStoragePool primaryPool = storagePoolMgr.getStoragePool(primaryStore.getPoolType(), primaryStore.getUuid());
         final KVMPhysicalDisk disk = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), volume.getPath());
@@ -1649,6 +1678,8 @@ public class KVMStorageProcessor implements StorageProcessor {
             String errorMsg = String.format("Failed take snapshot for volume [%s], in VM [%s], due to [%s].", volume, vmName, ex.getMessage());
             s_logger.error(errorMsg, ex);
             return new CreateObjectAnswer(errorMsg);
+        } finally {
+            volume.clearPassphrase();
         }
     }
@@ -1662,11 +1693,11 @@ public class KVMStorageProcessor implements StorageProcessor {
     protected void extractDiskFromFullVmSnapshot(KVMPhysicalDisk disk, VolumeObjectTO volume, String snapshotPath, String snapshotName, String vmName, Domain vm)
             throws LibvirtException {
-        QemuImg qemuImg = new QemuImg(_cmdsTimeout);
         QemuImgFile srcFile = new QemuImgFile(disk.getPath(), disk.getFormat());
         QemuImgFile destFile = new QemuImgFile(snapshotPath, disk.getFormat());

         try {
+            QemuImg qemuImg = new QemuImg(_cmdsTimeout);
             s_logger.debug(String.format("Converting full VM snapshot [%s] of VM [%s] to external disk snapshot of the volume [%s].", snapshotName, vmName, volume));
             qemuImg.convert(srcFile, destFile, null, snapshotName, true);
         } catch (QemuImgException qemuException) {
@@ -1906,18 +1937,20 @@ public class KVMStorageProcessor implements StorageProcessor {
         } catch (final CloudRuntimeException e) {
             s_logger.debug("Failed to delete volume: ", e);
             return new Answer(null, false, e.toString());
+        } finally {
+            vol.clearPassphrase();
         }
     }
     @Override
     public Answer createVolumeFromSnapshot(final CopyCommand cmd) {
+        final DataTO srcData = cmd.getSrcTO();
+        final SnapshotObjectTO snapshot = (SnapshotObjectTO)srcData;
+        final VolumeObjectTO volume = snapshot.getVolume();
         try {
-            final DataTO srcData = cmd.getSrcTO();
-            final SnapshotObjectTO snapshot = (SnapshotObjectTO)srcData;
             final DataTO destData = cmd.getDestTO();
             final PrimaryDataStoreTO pool = (PrimaryDataStoreTO)destData.getDataStore();
             final DataStoreTO imageStore = srcData.getDataStore();
-            final VolumeObjectTO volume = snapshot.getVolume();

             if (!(imageStore instanceof NfsTO || imageStore instanceof PrimaryDataStoreTO)) {
                 return new CopyCmdAnswer("unsupported protocol");
@@ -1946,6 +1979,8 @@ public class KVMStorageProcessor implements StorageProcessor {
         } catch (final CloudRuntimeException e) {
             s_logger.debug("Failed to createVolumeFromSnapshot: ", e);
             return new CopyCmdAnswer(e.toString());
+        } finally {
+            volume.clearPassphrase();
         }
     }
@@ -2075,15 +2110,15 @@ public class KVMStorageProcessor implements StorageProcessor {
     @Override
     public Answer deleteSnapshot(final DeleteCommand cmd) {
         String snapshotFullName = "";
+        SnapshotObjectTO snapshotTO = (SnapshotObjectTO) cmd.getData();
+        VolumeObjectTO volume = snapshotTO.getVolume();
         try {
-            SnapshotObjectTO snapshotTO = (SnapshotObjectTO) cmd.getData();
             PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO) snapshotTO.getDataStore();
             KVMStoragePool primaryPool = storagePoolMgr.getStoragePool(primaryStore.getPoolType(), primaryStore.getUuid());
             String snapshotFullPath = snapshotTO.getPath();
             String snapshotName = snapshotFullPath.substring(snapshotFullPath.lastIndexOf("/") + 1);
             snapshotFullName = snapshotName;
             if (primaryPool.getType() == StoragePoolType.RBD) {
-                VolumeObjectTO volume = snapshotTO.getVolume();
                 KVMPhysicalDisk disk = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), volume.getPath());
                 snapshotFullName = disk.getName() + "@" + snapshotName;
                 Rados r = radosConnect(primaryPool);
@@ -2106,6 +2141,7 @@ public class KVMStorageProcessor implements StorageProcessor {
                 rbd.close(image);
                 r.ioCtxDestroy(io);
             }
         } else if (storagePoolTypesToDeleteSnapshotFile.contains(primaryPool.getType())) {
             s_logger.info(String.format("Deleting snapshot (id=%s, name=%s, path=%s, storage type=%s) on primary storage", snapshotTO.getId(), snapshotTO.getName(),
                     snapshotTO.getPath(), primaryPool.getType()));
@@ -2126,6 +2162,8 @@ public class KVMStorageProcessor implements StorageProcessor {
         } catch (Exception e) {
             s_logger.error("Failed to remove snapshot " + snapshotFullName + ", with exception: " + e.toString());
             return new Answer(cmd, false, "Failed to remove snapshot " + snapshotFullName);
+        } finally {
+            volume.clearPassphrase();
         }
     }
@@ -2311,6 +2349,9 @@ public class KVMStorageProcessor implements StorageProcessor {
         } catch (final CloudRuntimeException e) {
             s_logger.debug("Failed to copyVolumeFromPrimaryToPrimary: ", e);
             return new CopyCmdAnswer(e.toString());
+        } finally {
+            srcVol.clearPassphrase();
+            destVol.clearPassphrase();
         }
     }
@@ -2355,4 +2396,23 @@ public class KVMStorageProcessor implements StorageProcessor {
         s_logger.info("SyncVolumePathCommand not currently applicable for KVMStorageProcessor");
         return new Answer(cmd, false, "Not currently applicable for KVMStorageProcessor");
     }

+    /**
+     * Determine if the migration is using a host-local source pool. If so, return this host's storage as the
+     * template source, rather than the remote host's.
+     * @param localPool the host-local storage pool being migrated to
+     * @param migrationOptions the migration options provided with a migrating volume
+     * @return the storage pool to use as the template source
+     */
+    public KVMStoragePool getTemplateSourcePoolUsingMigrationOptions(KVMStoragePool localPool, MigrationOptions migrationOptions) {
+        if (migrationOptions == null) {
+            throw new CloudRuntimeException("Migration options cannot be null when choosing a storage pool for migration");
+        }
+
+        if (migrationOptions.getScopeType().equals(ScopeType.HOST)) {
+            return localPool;
+        }
+
+        return storagePoolMgr.getStoragePool(migrationOptions.getSrcPoolType(), migrationOptions.getSrcPoolUuid());
+    }
 }


@@ -17,6 +17,7 @@
 package com.cloud.hypervisor.kvm.storage;

 import java.io.File;
+import java.io.IOException;
 import java.nio.charset.Charset;
 import java.util.ArrayList;
 import java.util.HashMap;
@@ -24,10 +25,12 @@ import java.util.List;
 import java.util.Map;
 import java.util.UUID;

+import org.apache.cloudstack.utils.cryptsetup.KeyFile;
 import org.apache.cloudstack.utils.qemu.QemuImg;
 import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
 import org.apache.cloudstack.utils.qemu.QemuImgException;
 import org.apache.cloudstack.utils.qemu.QemuImgFile;
+import org.apache.cloudstack.utils.qemu.QemuObject;
 import org.apache.commons.codec.binary.Base64;
 import org.apache.log4j.Logger;
 import org.libvirt.Connect;
@@ -117,9 +120,9 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
     @Override
     public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size,
-                                                         KVMStoragePool destPool, int timeout) {
+                                                         KVMStoragePool destPool, int timeout, byte[] passphrase) {
-        String volumeDesc = String.format("volume [%s], with template backing [%s], in pool [%s] (%s), with size [%s]", name, template.getName(), destPool.getUuid(),
-                destPool.getType(), size);
+        String volumeDesc = String.format("volume [%s], with template backing [%s], in pool [%s] (%s), with size [%s] and encryption is %s", name, template.getName(), destPool.getUuid(),
+                destPool.getType(), size, passphrase != null && passphrase.length > 0);

         if (!poolTypesThatEnableCreateDiskFromTemplateBacking.contains(destPool.getType())) {
             s_logger.info(String.format("Skipping creation of %s due to pool type is none of the following types %s.", volumeDesc, poolTypesThatEnableCreateDiskFromTemplateBacking.stream()
@@ -138,12 +141,22 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
         String destPoolLocalPath = destPool.getLocalPath();
         String destPath = String.format("%s%s%s", destPoolLocalPath, destPoolLocalPath.endsWith("/") ? "" : "/", name);

-        try {
+        Map<String, String> options = new HashMap<>();
+        List<QemuObject> passphraseObjects = new ArrayList<>();
+        try (KeyFile keyFile = new KeyFile(passphrase)) {
             QemuImgFile destFile = new QemuImgFile(destPath, format);
             destFile.setSize(size);
             QemuImgFile backingFile = new QemuImgFile(template.getPath(), template.getFormat());
-            new QemuImg(timeout).create(destFile, backingFile);
-        } catch (QemuImgException e) {
+
+            if (keyFile.isSet()) {
+                passphraseObjects.add(QemuObject.prepareSecretForQemuImg(format, QemuObject.EncryptFormat.LUKS, keyFile.toString(), "sec0", options));
+            }
+
+            s_logger.debug(String.format("Passphrase is staged to keyFile: %s", keyFile.isSet()));
+            QemuImg qemu = new QemuImg(timeout);
+            qemu.create(destFile, backingFile, options, passphraseObjects);
+        } catch (QemuImgException | LibvirtException | IOException e) {
+            // why don't we throw an exception here? I guess we fail to find the volume later and that results in a failure returned?
             s_logger.error(String.format("Failed to create %s in [%s] due to [%s].", volumeDesc, destPath, e.getMessage()), e);
         }
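`KeyFile` (opened in the try-with-resources above) stages the passphrase into a file so qemu-img can read the secret from disk instead of taking it on the command line, where it would be visible in the process list; closing it removes the file. The real implementation lives in `org.apache.cloudstack.utils.cryptsetup`; the following standalone analogue (hypothetical `TempKeyFile` name, 0600 permissions via `PosixFilePermissions`) only sketches the contract implied by the diff, i.e. `isSet()`, `toString()` returning the path, and cleanup on `close()`:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

// Illustrative analogue of CloudStack's KeyFile: hold the secret in a
// 0600 temp file only for the lifetime of a try-with-resources block.
public class TempKeyFile implements AutoCloseable {
    private Path path; // null when no passphrase was supplied

    public TempKeyFile(byte[] passphrase) throws IOException {
        if (passphrase != null && passphrase.length > 0) {
            path = Files.createTempFile("keyfile", null,
                    PosixFilePermissions.asFileAttribute(PosixFilePermissions.fromString("rw-------")));
            Files.write(path, passphrase);
        }
    }

    public boolean isSet() {
        return path != null;
    }

    @Override
    public String toString() {
        return isSet() ? path.toString() : "";
    }

    @Override
    public void close() throws IOException {
        if (path != null) {
            Files.deleteIfExists(path); // the secret file must not outlive the operation
            path = null;
        }
    }
}
```

A null or empty passphrase leaves `isSet()` false, which is why the calling code can wrap every create in the same try-with-resources and only attach a qemu secret object when a key is actually present.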
@@ -756,7 +769,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
     @Override
     public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool,
-                                              PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) {
+                                              PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {

         s_logger.info("Attempting to create volume " + name + " (" + pool.getType().toString() + ") in pool "
                 + pool.getUuid() + " with size " + toHumanReadableSize(size));
@@ -768,11 +781,9 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
             case Filesystem:
                 switch (format) {
                     case QCOW2:
-                        return createPhysicalDiskByQemuImg(name, pool, format, provisioningType, size);
                     case RAW:
-                        return createPhysicalDiskByQemuImg(name, pool, format, provisioningType, size);
+                        return createPhysicalDiskByQemuImg(name, pool, format, provisioningType, size, passphrase);
                     case DIR:
-                        return createPhysicalDiskByLibVirt(name, pool, format, provisioningType, size);
                     case TAR:
                         return createPhysicalDiskByLibVirt(name, pool, format, provisioningType, size);
                     default:
@@ -816,37 +827,50 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
     private KVMPhysicalDisk createPhysicalDiskByQemuImg(String name, KVMStoragePool pool,
-            PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) {
+            PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
         String volPath = pool.getLocalPath() + "/" + name;
         String volName = name;
         long virtualSize = 0;
         long actualSize = 0;
+        QemuObject.EncryptFormat encryptFormat = null;
+        List<QemuObject> passphraseObjects = new ArrayList<>();
         final int timeout = 0;

         QemuImgFile destFile = new QemuImgFile(volPath);
         destFile.setFormat(format);
         destFile.setSize(size);
-        QemuImg qemu = new QemuImg(timeout);
         Map<String, String> options = new HashMap<String, String>();
         if (pool.getType() == StoragePoolType.NetworkFilesystem){
             options.put("preallocation", QemuImg.PreallocationType.getPreallocationType(provisioningType).toString());
         }

-        try{
-            qemu.create(destFile, options);
+        try (KeyFile keyFile = new KeyFile(passphrase)) {
+            QemuImg qemu = new QemuImg(timeout);
+            if (keyFile.isSet()) {
+                passphraseObjects.add(QemuObject.prepareSecretForQemuImg(format, QemuObject.EncryptFormat.LUKS, keyFile.toString(), "sec0", options));
+
+                // make room for encryption header on raw format, use LUKS
+                if (format == PhysicalDiskFormat.RAW) {
+                    destFile.setSize(destFile.getSize() - (16<<20));
+                    destFile.setFormat(PhysicalDiskFormat.LUKS);
+                }
+                encryptFormat = QemuObject.EncryptFormat.LUKS;
+            }
+            qemu.create(destFile, null, options, passphraseObjects);
             Map<String, String> info = qemu.info(destFile);
             virtualSize = Long.parseLong(info.get(QemuImg.VIRTUAL_SIZE));
             actualSize = new File(destFile.getFileName()).length();
-        } catch (QemuImgException | LibvirtException e) {
-            s_logger.error("Failed to create " + volPath +
-                    " due to a failed executing of qemu-img: " + e.getMessage());
+        } catch (QemuImgException | LibvirtException | IOException e) {
+            throw new CloudRuntimeException(String.format("Failed to create %s due to a failed execution of qemu-img", volPath), e);
         }

         KVMPhysicalDisk disk = new KVMPhysicalDisk(volPath, volName, pool);
         disk.setFormat(format);
         disk.setSize(actualSize);
         disk.setVirtualSize(virtualSize);
+        disk.setQemuEncryptFormat(encryptFormat);
         return disk;
     }
@@ -988,7 +1012,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
      */
     @Override
     public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template,
-            String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout) {
+            String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) {

         s_logger.info("Creating volume " + name + " from template " + template.getName() + " in pool " + destPool.getUuid() +
                 " (" + destPool.getType().toString() + ") with size " + toHumanReadableSize(size));
@@ -998,12 +1022,14 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
         if (destPool.getType() == StoragePoolType.RBD) {
             disk = createDiskFromTemplateOnRBD(template, name, format, provisioningType, size, destPool, timeout);
         } else {
-            try {
+            try (KeyFile keyFile = new KeyFile(passphrase)){
                 String newUuid = name;
-                disk = destPool.createPhysicalDisk(newUuid, format, provisioningType, template.getVirtualSize());
+                List<QemuObject> passphraseObjects = new ArrayList<>();
+                disk = destPool.createPhysicalDisk(newUuid, format, provisioningType, template.getVirtualSize(), passphrase);
                 if (disk == null) {
                     throw new CloudRuntimeException("Failed to create disk from template " + template.getName());
                 }

                 if (template.getFormat() == PhysicalDiskFormat.TAR) {
                     Script.runSimpleBashScript("tar -x -f " + template.getPath() + " -C " + disk.getPath(), timeout); // TO BE FIXED to aware provisioningType
                 } else if (template.getFormat() == PhysicalDiskFormat.DIR) {
@@ -1020,32 +1046,45 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
                     }

                     Map<String, String> options = new HashMap<String, String>();
                     options.put("preallocation", QemuImg.PreallocationType.getPreallocationType(provisioningType).toString());
+                    if (keyFile.isSet()) {
+                        passphraseObjects.add(QemuObject.prepareSecretForQemuImg(format, QemuObject.EncryptFormat.LUKS, keyFile.toString(), "sec0", options));
+                        disk.setQemuEncryptFormat(QemuObject.EncryptFormat.LUKS);
+                    }

                     switch(provisioningType){
                     case THIN:
                         QemuImgFile backingFile = new QemuImgFile(template.getPath(), template.getFormat());
-                        qemu.create(destFile, backingFile, options);
+                        qemu.create(destFile, backingFile, options, passphraseObjects);
                         break;
                     case SPARSE:
                     case FAT:
                         QemuImgFile srcFile = new QemuImgFile(template.getPath(), template.getFormat());
-                        qemu.convert(srcFile, destFile, options, null);
+                        qemu.convert(srcFile, destFile, options, passphraseObjects, null, false);
                         break;
                     }
                 } else if (format == PhysicalDiskFormat.RAW) {
+                    PhysicalDiskFormat destFormat = PhysicalDiskFormat.RAW;
+                    Map<String, String> options = new HashMap<String, String>();
+
+                    if (keyFile.isSet()) {
+                        destFormat = PhysicalDiskFormat.LUKS;
+                        disk.setQemuEncryptFormat(QemuObject.EncryptFormat.LUKS);
+                        passphraseObjects.add(QemuObject.prepareSecretForQemuImg(destFormat, QemuObject.EncryptFormat.LUKS, keyFile.toString(), "sec0", options));
+                    }
+
                     QemuImgFile sourceFile = new QemuImgFile(template.getPath(), template.getFormat());
-                    QemuImgFile destFile = new QemuImgFile(disk.getPath(), PhysicalDiskFormat.RAW);
+                    QemuImgFile destFile = new QemuImgFile(disk.getPath(), destFormat);
                     if (size > template.getVirtualSize()) {
                         destFile.setSize(size);
                     } else {
                         destFile.setSize(template.getVirtualSize());
                     }

                     QemuImg qemu = new QemuImg(timeout);
-                    Map<String, String> options = new HashMap<String, String>();
-                    qemu.convert(sourceFile, destFile, options, null);
+                    qemu.convert(sourceFile, destFile, options, passphraseObjects, null, false);
                 }
-            } catch (QemuImgException | LibvirtException e) {
-                s_logger.error("Failed to create " + disk.getPath() +
-                        " due to a failed executing of qemu-img: " + e.getMessage());
+            } catch (QemuImgException | LibvirtException | IOException e) {
+                throw new CloudRuntimeException(String.format("Failed to create %s due to a failed execution of qemu-img", name), e);
             }
         }
@@ -1080,7 +1119,6 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
         }

-        QemuImg qemu = new QemuImg(timeout);
         QemuImgFile srcFile;
         QemuImgFile destFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(destPool.getSourceHost(),
                 destPool.getSourcePort(),
@@ -1089,10 +1127,10 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
                 disk.getPath()));
         destFile.setFormat(format);

         if (srcPool.getType() != StoragePoolType.RBD) {
             srcFile = new QemuImgFile(template.getPath(), template.getFormat());
             try{
+                QemuImg qemu = new QemuImg(timeout);
                 qemu.convert(srcFile, destFile);
             } catch (QemuImgException | LibvirtException e) {
                 s_logger.error("Failed to create " + disk.getPath() +
@@ -1254,6 +1292,11 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
         }
     }

+    @Override
+    public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) {
+        return copyPhysicalDisk(disk, name, destPool, timeout, null, null, null);
+    }
+
     /**
      * This copies a volume from Primary Storage to Secondary Storage
      *
@@ -1261,7 +1304,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
      * in ManagementServerImpl shows that the destPool is always a Secondary Storage Pool
      */
     @Override
-    public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) {
+    public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout, byte[] srcPassphrase, byte[] dstPassphrase, Storage.ProvisioningType provisioningType) {

        /**
            With RBD you can't run qemu-img convert with an existing RBD image as destination
@@ -1282,9 +1325,9 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
         s_logger.debug("copyPhysicalDisk: disk size:" + toHumanReadableSize(disk.getSize()) + ", virtualsize:" + toHumanReadableSize(disk.getVirtualSize())+" format:"+disk.getFormat());

         if (destPool.getType() != StoragePoolType.RBD) {
             if (disk.getFormat() == PhysicalDiskFormat.TAR) {
-                newDisk = destPool.createPhysicalDisk(name, PhysicalDiskFormat.DIR, Storage.ProvisioningType.THIN, disk.getVirtualSize());
+                newDisk = destPool.createPhysicalDisk(name, PhysicalDiskFormat.DIR, Storage.ProvisioningType.THIN, disk.getVirtualSize(), null);
             } else {
-                newDisk = destPool.createPhysicalDisk(name, Storage.ProvisioningType.THIN, disk.getVirtualSize());
+                newDisk = destPool.createPhysicalDisk(name, Storage.ProvisioningType.THIN, disk.getVirtualSize(), null);
             }
         } else {
             newDisk = new KVMPhysicalDisk(destPool.getSourceDir() + "/" + name, name, destPool);
@@ -1296,7 +1339,13 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
         String destPath = newDisk.getPath();
         PhysicalDiskFormat destFormat = newDisk.getFormat();

-        QemuImg qemu = new QemuImg(timeout);
+        QemuImg qemu;
+        try {
+            qemu = new QemuImg(timeout);
+        } catch (QemuImgException | LibvirtException ex) {
+            throw new CloudRuntimeException("Failed to create qemu-img command", ex);
+        }
+
         QemuImgFile srcFile = null;
         QemuImgFile destFile = null;
@@ -1333,7 +1382,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
                     newDisk = null;
                 }
             }
-        } catch (QemuImgException | LibvirtException e) {
+        } catch (QemuImgException e) {
             s_logger.error("Failed to fetch the information of file " + srcFile.getFileName() + " the error was: " + e.getMessage());
             newDisk = null;
         }
@@ -1443,5 +1492,4 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
     private void deleteDirVol(LibvirtStoragePool pool, StorageVol vol) throws LibvirtException {
         Script.runSimpleBashScript("rm -r --interactive=never " + vol.getPath());
     }
 }


@@ -112,15 +112,15 @@ public class LibvirtStoragePool implements KVMStoragePool {
     @Override
     public KVMPhysicalDisk createPhysicalDisk(String name,
-            PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) {
+            PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
         return this._storageAdaptor
-                .createPhysicalDisk(name, this, format, provisioningType, size);
+                .createPhysicalDisk(name, this, format, provisioningType, size, passphrase);
     }

     @Override
-    public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size) {
+    public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
         return this._storageAdaptor.createPhysicalDisk(name, this,
-                this.getDefaultFormat(), provisioningType, size);
+                this.getDefaultFormat(), provisioningType, size, passphrase);
     }

     @Override


@@ -16,6 +16,26 @@
 // under the License.
 package com.cloud.hypervisor.kvm.storage;

+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.StringJoiner;
+
+import javax.annotation.Nonnull;
+
+import org.apache.cloudstack.utils.qemu.QemuImg;
+import org.apache.cloudstack.utils.qemu.QemuImgException;
+import org.apache.cloudstack.utils.qemu.QemuImgFile;
+import org.apache.log4j.Logger;
+import org.libvirt.LibvirtException;
+
+import com.cloud.storage.Storage;
+import com.cloud.utils.exception.CloudRuntimeException;
+
 import com.linbit.linstor.api.ApiClient;
 import com.linbit.linstor.api.ApiException;
 import com.linbit.linstor.api.Configuration;
@@ -33,25 +53,6 @@ import com.linbit.linstor.api.model.ResourceWithVolumes;
 import com.linbit.linstor.api.model.StoragePool;
 import com.linbit.linstor.api.model.VolumeDefinition;

-import javax.annotation.Nonnull;
-import java.io.BufferedReader;
-import java.io.IOException;
-import java.io.InputStreamReader;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Optional;
-import java.util.StringJoiner;
-
-import com.cloud.storage.Storage;
-import com.cloud.utils.exception.CloudRuntimeException;
-import org.apache.cloudstack.utils.qemu.QemuImg;
-import org.apache.cloudstack.utils.qemu.QemuImgException;
-import org.apache.cloudstack.utils.qemu.QemuImgFile;
-import org.apache.log4j.Logger;
-import org.libvirt.LibvirtException;

 @StorageAdaptorInfo(storagePoolType=Storage.StoragePoolType.Linstor)
 public class LinstorStorageAdaptor implements StorageAdaptor {
     private static final Logger s_logger = Logger.getLogger(LinstorStorageAdaptor.class);
@ -197,7 +198,7 @@ public class LinstorStorageAdaptor implements StorageAdaptor {
@Override @Override
public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, QemuImg.PhysicalDiskFormat format, public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, QemuImg.PhysicalDiskFormat format,
Storage.ProvisioningType provisioningType, long size) Storage.ProvisioningType provisioningType, long size, byte[] passphrase)
{ {
final String rscName = getLinstorRscName(name); final String rscName = getLinstorRscName(name);
LinstorStoragePool lpool = (LinstorStoragePool) pool; LinstorStoragePool lpool = (LinstorStoragePool) pool;
@ -377,7 +378,8 @@ public class LinstorStorageAdaptor implements StorageAdaptor {
Storage.ProvisioningType provisioningType, Storage.ProvisioningType provisioningType,
long size, long size,
KVMStoragePool destPool, KVMStoragePool destPool,
int timeout) int timeout,
byte[] passphrase)
{ {
s_logger.info("Linstor: createDiskFromTemplate"); s_logger.info("Linstor: createDiskFromTemplate");
return copyPhysicalDisk(template, name, destPool, timeout); return copyPhysicalDisk(template, name, destPool, timeout);
@ -401,23 +403,28 @@ public class LinstorStorageAdaptor implements StorageAdaptor {
} }
@Override @Override
public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPools, int timeout) public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) {
return copyPhysicalDisk(disk, name, destPool, timeout, null, null, null);
}
@Override
public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPools, int timeout, byte[] srcPassphrase, byte[] destPassphrase, Storage.ProvisioningType provisioningType)
{ {
s_logger.debug("Linstor: copyPhysicalDisk"); s_logger.debug("Linstor: copyPhysicalDisk");
final QemuImg.PhysicalDiskFormat sourceFormat = disk.getFormat(); final QemuImg.PhysicalDiskFormat sourceFormat = disk.getFormat();
final String sourcePath = disk.getPath(); final String sourcePath = disk.getPath();
final QemuImg qemu = new QemuImg(timeout);
final QemuImgFile srcFile = new QemuImgFile(sourcePath, sourceFormat); final QemuImgFile srcFile = new QemuImgFile(sourcePath, sourceFormat);
final KVMPhysicalDisk dstDisk = destPools.createPhysicalDisk( final KVMPhysicalDisk dstDisk = destPools.createPhysicalDisk(
name, QemuImg.PhysicalDiskFormat.RAW, Storage.ProvisioningType.FAT, disk.getVirtualSize()); name, QemuImg.PhysicalDiskFormat.RAW, Storage.ProvisioningType.FAT, disk.getVirtualSize(), null);
final QemuImgFile destFile = new QemuImgFile(dstDisk.getPath()); final QemuImgFile destFile = new QemuImgFile(dstDisk.getPath());
destFile.setFormat(dstDisk.getFormat()); destFile.setFormat(dstDisk.getFormat());
destFile.setSize(disk.getVirtualSize()); destFile.setSize(disk.getVirtualSize());
try { try {
final QemuImg qemu = new QemuImg(timeout);
qemu.convert(srcFile, destFile); qemu.convert(srcFile, destFile);
} catch (QemuImgException | LibvirtException e) { } catch (QemuImgException | LibvirtException e) {
s_logger.error(e); s_logger.error(e);
@ -452,7 +459,7 @@ public class LinstorStorageAdaptor implements StorageAdaptor {
QemuImg.PhysicalDiskFormat format, QemuImg.PhysicalDiskFormat format,
long size, long size,
KVMStoragePool destPool, KVMStoragePool destPool,
int timeout) int timeout, byte[] passphrase)
{ {
s_logger.debug("Linstor: createDiskFromTemplateBacking"); s_logger.debug("Linstor: createDiskFromTemplateBacking");
return null; return null;

##### LinstorStoragePool.java

```diff
@@ -19,9 +19,10 @@ package com.cloud.hypervisor.kvm.storage;
 import java.util.List;
 import java.util.Map;

+import com.cloud.storage.Storage;
 import org.apache.cloudstack.utils.qemu.QemuImg;
-import com.cloud.storage.Storage;

 public class LinstorStoragePool implements KVMStoragePool {
     private final String _uuid;
     private final String _sourceHost;
@@ -42,15 +43,15 @@ public class LinstorStoragePool implements KVMStoragePool {
     @Override
     public KVMPhysicalDisk createPhysicalDisk(String name, QemuImg.PhysicalDiskFormat format,
-                                              Storage.ProvisioningType provisioningType, long size)
+                                              Storage.ProvisioningType provisioningType, long size, byte[] passphrase)
     {
-        return _storageAdaptor.createPhysicalDisk(name, this, format, provisioningType, size);
+        return _storageAdaptor.createPhysicalDisk(name, this, format, provisioningType, size, passphrase);
     }

     @Override
-    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size)
+    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size, byte[] passphrase)
     {
-        return _storageAdaptor.createPhysicalDisk(volumeUuid, this, getDefaultFormat(), provisioningType, size);
+        return _storageAdaptor.createPhysicalDisk(volumeUuid, this, getDefaultFormat(), provisioningType, size, passphrase);
     }

     @Override
```

##### ManagedNfsStorageAdaptor.java

```diff
@@ -291,6 +291,11 @@ public class ManagedNfsStorageAdaptor implements StorageAdaptor {

     @Override
     public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) {
+        return copyPhysicalDisk(disk, name, destPool, timeout, null, null, null);
+    }
+
+    @Override
+    public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout, byte[] srcPassphrase, byte[] destPassphrase, ProvisioningType provisioningType) {
         throw new UnsupportedOperationException("Copying a disk is not supported in this configuration.");
     }
@@ -315,7 +320,7 @@ public class ManagedNfsStorageAdaptor implements StorageAdaptor {
     }

     @Override
-    public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout) {
+    public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) {
         return null;
     }
@@ -325,7 +330,7 @@ public class ManagedNfsStorageAdaptor implements StorageAdaptor {
     }

     @Override
-    public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, PhysicalDiskFormat format, ProvisioningType provisioningType, long size) {
+    public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, PhysicalDiskFormat format, ProvisioningType provisioningType, long size, byte[] passphrase) {
         return null;
     }
@@ -335,7 +340,7 @@ public class ManagedNfsStorageAdaptor implements StorageAdaptor {
     }

     @Override
-    public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout) {
+    public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) {
         return null;
     }
 }
```

##### ScaleIOStorageAdaptor.java

```diff
@@ -19,6 +19,8 @@ package com.cloud.hypervisor.kvm.storage;
 import java.io.File;
 import java.io.FileFilter;
+import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
@@ -26,11 +28,17 @@ import java.util.Map;
 import java.util.UUID;

 import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
+import org.apache.cloudstack.utils.cryptsetup.CryptSetup;
+import org.apache.cloudstack.utils.cryptsetup.CryptSetupException;
+import org.apache.cloudstack.utils.cryptsetup.KeyFile;
+import org.apache.cloudstack.utils.qemu.QemuImageOptions;
 import org.apache.cloudstack.utils.qemu.QemuImg;
 import org.apache.cloudstack.utils.qemu.QemuImgException;
 import org.apache.cloudstack.utils.qemu.QemuImgFile;
+import org.apache.cloudstack.utils.qemu.QemuObject;
 import org.apache.commons.io.filefilter.WildcardFileFilter;
 import org.apache.log4j.Logger;
+import org.libvirt.LibvirtException;

 import com.cloud.storage.Storage;
 import com.cloud.storage.StorageLayer;
@@ -39,7 +47,6 @@ import com.cloud.utils.exception.CloudRuntimeException;
 import com.cloud.utils.script.OutputInterpreter;
 import com.cloud.utils.script.Script;
 import org.apache.commons.lang3.StringUtils;
-import org.libvirt.LibvirtException;

 @StorageAdaptorInfo(storagePoolType= Storage.StoragePoolType.PowerFlex)
 public class ScaleIOStorageAdaptor implements StorageAdaptor {
@@ -103,11 +110,27 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor {
             }

             KVMPhysicalDisk disk = new KVMPhysicalDisk(diskFilePath, volumePath, pool);
-            disk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
+
+            // try to discover format as written to disk, rather than assuming raw.
+            // We support qcow2 for stored primary templates, disks seen as other should be treated as raw.
+            QemuImg qemu = new QemuImg(0);
+            QemuImgFile qemuFile = new QemuImgFile(diskFilePath);
+            Map<String, String> details = qemu.info(qemuFile);
+            String detectedFormat = details.getOrDefault(QemuImg.FILE_FORMAT, "none");
+            if (detectedFormat.equalsIgnoreCase(QemuImg.PhysicalDiskFormat.QCOW2.toString())) {
+                disk.setFormat(QemuImg.PhysicalDiskFormat.QCOW2);
+            } else {
+                disk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
+            }

             long diskSize = getPhysicalDiskSize(diskFilePath);
             disk.setSize(diskSize);
-            disk.setVirtualSize(diskSize);
+
+            if (details.containsKey(QemuImg.VIRTUAL_SIZE)) {
+                disk.setVirtualSize(Long.parseLong(details.get(QemuImg.VIRTUAL_SIZE)));
+            } else {
+                disk.setVirtualSize(diskSize);
+            }

             return disk;
         } catch (Exception e) {
```
```diff
@@ -128,9 +151,59 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor {
         return MapStorageUuidToStoragePool.remove(uuid) != null;
     }

+    /**
+     * ScaleIO doesn't normally need to communicate with the hypervisor to create a volume. This is used only to prepare a ScaleIO data disk for encryption.
+     * Thin encrypted volumes are provisioned in QCOW2 format, which insulates the guest from zeroes/unallocated blocks in the block device that would
+     * otherwise show up as garbage data through the encryption layer. As a bonus, encrypted QCOW2 format handles discard.
+     * @param name disk path
+     * @param pool pool
+     * @param format disk format
+     * @param provisioningType provisioning type
+     * @param size disk size
+     * @param passphrase passphrase
+     * @return the disk object
+     */
     @Override
-    public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) {
-        return null;
+    public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
+        if (passphrase == null || passphrase.length == 0) {
+            return null;
+        }
+
+        if (!connectPhysicalDisk(name, pool, null)) {
+            throw new CloudRuntimeException(String.format("Failed to ensure disk %s was present", name));
+        }
+
+        KVMPhysicalDisk disk = getPhysicalDisk(name, pool);
+        if (provisioningType.equals(Storage.ProvisioningType.THIN)) {
+            disk.setFormat(QemuImg.PhysicalDiskFormat.QCOW2);
+            disk.setQemuEncryptFormat(QemuObject.EncryptFormat.LUKS);
+            try (KeyFile keyFile = new KeyFile(passphrase)) {
+                QemuImg qemuImg = new QemuImg(0, true, false);
+                Map<String, String> options = new HashMap<>();
+                List<QemuObject> qemuObjects = new ArrayList<>();
+                long formattedSize = getUsableBytesFromRawBytes(disk.getSize());
+
+                options.put("preallocation", QemuImg.PreallocationType.Metadata.toString());
+                qemuObjects.add(QemuObject.prepareSecretForQemuImg(disk.getFormat(), disk.getQemuEncryptFormat(), keyFile.toString(), "sec0", options));
+
+                QemuImgFile file = new QemuImgFile(disk.getPath(), formattedSize, disk.getFormat());
+                qemuImg.create(file, null, options, qemuObjects);
+                LOGGER.debug(String.format("Successfully formatted %s as encrypted QCOW2", file.getFileName()));
+            } catch (QemuImgException | LibvirtException | IOException ex) {
+                throw new CloudRuntimeException("Failed to set up encrypted QCOW2 on block device " + disk.getPath(), ex);
+            }
+        } else {
+            try {
+                CryptSetup crypt = new CryptSetup();
+                crypt.luksFormat(passphrase, CryptSetup.LuksType.LUKS, disk.getPath());
+                disk.setQemuEncryptFormat(QemuObject.EncryptFormat.LUKS);
+                disk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
+            } catch (CryptSetupException ex) {
+                throw new CloudRuntimeException("Failed to set up encryption for block device " + disk.getPath(), ex);
+            }
+        }
+
+        return disk;
     }

     @Override
```
```diff
@@ -228,7 +301,7 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor {
     }

     @Override
-    public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout) {
+    public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) {
         return null;
     }
@@ -244,11 +317,20 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor {

     @Override
     public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) {
+        return copyPhysicalDisk(disk, name, destPool, timeout, null, null, Storage.ProvisioningType.THIN);
+    }
+
+    @Override
+    public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout, byte[] srcPassphrase, byte[] dstPassphrase, Storage.ProvisioningType provisioningType) {
         if (StringUtils.isEmpty(name) || disk == null || destPool == null) {
             LOGGER.error("Unable to copy physical disk due to insufficient data");
             throw new CloudRuntimeException("Unable to copy physical disk due to insufficient data");
         }

+        if (provisioningType == null) {
+            provisioningType = Storage.ProvisioningType.THIN;
+        }
+
         LOGGER.debug("Copy physical disk with size: " + disk.getSize() + ", virtualsize: " + disk.getVirtualSize() + ", format: " + disk.getFormat());

         KVMPhysicalDisk destDisk = destPool.getPhysicalDisk(name);
@@ -257,24 +339,65 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor {
             throw new CloudRuntimeException("Failed to find the disk: " + name + " of the storage pool: " + destPool.getUuid());
         }

-        destDisk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
         destDisk.setVirtualSize(disk.getVirtualSize());
         destDisk.setSize(disk.getSize());

-        QemuImg qemu = new QemuImg(timeout);
-        QemuImgFile srcFile = null;
-        QemuImgFile destFile = null;
+        QemuImg qemu = null;
+        QemuImgFile srcQemuFile = null;
+        QemuImgFile destQemuFile = null;
+        String srcKeyName = "sec0";
+        String destKeyName = "sec1";
+        List<QemuObject> qemuObjects = new ArrayList<>();
+        Map<String, String> options = new HashMap<String, String>();
+        CryptSetup cryptSetup = null;

-        try {
-            srcFile = new QemuImgFile(disk.getPath(), disk.getFormat());
-            destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat());
-
-            LOGGER.debug("Starting copy from source disk image " + srcFile.getFileName() + " to PowerFlex volume: " + destDisk.getPath());
-            qemu.convert(srcFile, destFile, true);
-            LOGGER.debug("Successfully converted source disk image " + srcFile.getFileName() + " to PowerFlex volume: " + destDisk.getPath());
-        } catch (QemuImgException | LibvirtException e) {
+        try (KeyFile srcKey = new KeyFile(srcPassphrase); KeyFile dstKey = new KeyFile(dstPassphrase)) {
+            qemu = new QemuImg(timeout, provisioningType.equals(Storage.ProvisioningType.FAT), false);
+            String srcPath = disk.getPath();
+            String destPath = destDisk.getPath();
+
+            QemuImageOptions qemuImageOpts = new QemuImageOptions(srcPath);
+
+            srcQemuFile = new QemuImgFile(srcPath, disk.getFormat());
+            destQemuFile = new QemuImgFile(destPath);
+
+            if (disk.useAsTemplate()) {
+                destQemuFile.setFormat(QemuImg.PhysicalDiskFormat.QCOW2);
+            }
+
+            if (srcKey.isSet()) {
+                qemuObjects.add(QemuObject.prepareSecretForQemuImg(disk.getFormat(), disk.getQemuEncryptFormat(), srcKey.toString(), srcKeyName, options));
+                qemuImageOpts = new QemuImageOptions(disk.getFormat(), srcPath, srcKeyName);
+            }
+
+            if (dstKey.isSet()) {
+                if (!provisioningType.equals(Storage.ProvisioningType.FAT)) {
+                    destDisk.setFormat(QemuImg.PhysicalDiskFormat.QCOW2);
+                    destQemuFile.setFormat(QemuImg.PhysicalDiskFormat.QCOW2);
+                    options.put("preallocation", QemuImg.PreallocationType.Metadata.toString());
+                } else {
+                    qemu.setSkipZero(false);
+                    destDisk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
+                    // qemu-img wants to treat RAW + encrypt formatting as LUKS
+                    destQemuFile.setFormat(QemuImg.PhysicalDiskFormat.LUKS);
+                }
+                qemuObjects.add(QemuObject.prepareSecretForQemuImg(destDisk.getFormat(), QemuObject.EncryptFormat.LUKS, dstKey.toString(), destKeyName, options));
+                destDisk.setQemuEncryptFormat(QemuObject.EncryptFormat.LUKS);
+            }
+
+            boolean forceSourceFormat = srcQemuFile.getFormat() == QemuImg.PhysicalDiskFormat.RAW;
+            LOGGER.debug(String.format("Starting copy from source disk %s(%s) to PowerFlex volume %s(%s), forcing source format is %b", srcQemuFile.getFileName(), srcQemuFile.getFormat(), destQemuFile.getFileName(), destQemuFile.getFormat(), forceSourceFormat));
+            qemu.convert(srcQemuFile, destQemuFile, options, qemuObjects, qemuImageOpts, null, forceSourceFormat);
+            LOGGER.debug("Successfully converted source disk image " + srcQemuFile.getFileName() + " to PowerFlex volume: " + destDisk.getPath());
+
+            if (destQemuFile.getFormat() == QemuImg.PhysicalDiskFormat.QCOW2 && !disk.useAsTemplate()) {
+                QemuImageOptions resizeOptions = new QemuImageOptions(destQemuFile.getFormat(), destPath, destKeyName);
+                resizeQcow2ToVolume(destPath, resizeOptions, qemuObjects, timeout);
+                LOGGER.debug("Resized volume at " + destPath);
+            }
+        } catch (QemuImgException | LibvirtException | IOException e) {
             try {
-                Map<String, String> srcInfo = qemu.info(srcFile);
+                Map<String, String> srcInfo = qemu.info(srcQemuFile);
                 LOGGER.debug("Source disk info: " + Arrays.asList(srcInfo));
             } catch (Exception ignored) {
                 LOGGER.warn("Unable to get info from source disk: " + disk.getName());
@@ -283,11 +406,20 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor {
             String errMsg = String.format("Unable to convert/copy from %s to %s, due to: %s", disk.getName(), name, ((StringUtils.isEmpty(e.getMessage())) ? "an unknown error" : e.getMessage()));
             LOGGER.error(errMsg);
             throw new CloudRuntimeException(errMsg, e);
+        } finally {
+            if (cryptSetup != null) {
+                try {
+                    cryptSetup.close(name);
+                } catch (CryptSetupException ex) {
+                    LOGGER.warn("Failed to clean up LUKS disk after copying disk", ex);
+                }
+            }
         }

         return destDisk;
     }

     @Override
     public boolean refresh(KVMStoragePool pool) {
         return true;
```
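The destination-format choice in `copyPhysicalDisk` reduces to a small decision table: unencrypted destinations stay raw, encrypted thin destinations become QCOW2, and encrypted fat destinations are written as LUKS (qemu-img's name for raw-plus-encryption). The standalone sketch below restates that table with hypothetical enum and method names (ignoring the template-copy special case), purely to make the branching easy to eyeball:

```java
// Illustrative restatement of the copyPhysicalDisk destination-format branching.
// Names here are hypothetical, not taken from the CloudStack source.
class DestFormatDemo {
    enum Format { RAW, QCOW2, LUKS }
    enum Provisioning { THIN, SPARSE, FAT }

    static Format destFormat(boolean encryptedDest, Provisioning type) {
        if (!encryptedDest) {
            return Format.RAW;  // plain PowerFlex volumes stay raw
        }
        // encrypted: thin/sparse get a QCOW2 container; FAT is raw-with-encryption,
        // which qemu-img treats as the LUKS format
        return type == Provisioning.FAT ? Format.LUKS : Format.QCOW2;
    }

    public static void main(String[] args) {
        System.out.println(destFormat(false, Provisioning.THIN)); // RAW
        System.out.println(destFormat(true, Provisioning.THIN));  // QCOW2
        System.out.println(destFormat(true, Provisioning.FAT));   // LUKS
    }
}
```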
```diff
@@ -310,7 +442,7 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor {
     @Override
-    public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout) {
+    public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) {
         return null;
     }
@@ -347,6 +479,7 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor {
         QemuImgFile srcFile = null;
         QemuImgFile destFile = null;
         try {
+            QemuImg qemu = new QemuImg(timeout, true, false);
             destDisk = destPool.getPhysicalDisk(destTemplatePath);
             if (destDisk == null) {
                 LOGGER.error("Failed to find the disk: " + destTemplatePath + " of the storage pool: " + destPool.getUuid());
@@ -369,14 +502,21 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor {
             }

             srcFile = new QemuImgFile(srcTemplateFilePath, srcFileFormat);
-            destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat());
+            qemu.info(srcFile);
+
+            /**
+             * Even though the disk itself is raw, we store templates on ScaleIO in qcow2 format.
+             * This improves performance by reading/writing less data to the volume, saves the unused space for the encryption header, and
+             * nicely encapsulates VM images that might contain LUKS data (as opposed to converting to raw, which would look like a LUKS volume).
+             */
+            destFile = new QemuImgFile(destDisk.getPath(), QemuImg.PhysicalDiskFormat.QCOW2);
+            destFile.setSize(srcFile.getSize());

             LOGGER.debug("Starting copy from source downloaded template " + srcFile.getFileName() + " to PowerFlex template volume: " + destDisk.getPath());
-            QemuImg qemu = new QemuImg(timeout);
+            qemu.create(destFile);
             qemu.convert(srcFile, destFile);
             LOGGER.debug("Successfully converted source downloaded template " + srcFile.getFileName() + " to PowerFlex template volume: " + destDisk.getPath());
         } catch (QemuImgException | LibvirtException e) {
-            LOGGER.error("Failed to convert from " + srcFile.getFileName() + " to " + destFile.getFileName() + " the error was: " + e.getMessage(), e);
+            LOGGER.error("Failed to convert. The error was: " + e.getMessage(), e);
             destDisk = null;
         } finally {
             Script.runSimpleBashScript("rm -f " + srcTemplateFilePath);
@@ -401,4 +541,25 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor {
             throw new CloudRuntimeException("Unable to extract template " + downloadedTemplateFile);
         }
     }
+
+    public void resizeQcow2ToVolume(String volumePath, QemuImageOptions options, List<QemuObject> objects, Integer timeout) throws QemuImgException, LibvirtException {
+        long rawSizeBytes = getPhysicalDiskSize(volumePath);
+        long usableSizeBytes = getUsableBytesFromRawBytes(rawSizeBytes);
+        QemuImg qemu = new QemuImg(timeout);
+        qemu.resize(options, objects, usableSizeBytes);
+    }
+
+    /**
+     * Calculates usable size from raw size, assuming qcow2 requires roughly 192k per 1GB for metadata.
+     * We also remove 32MiB for the encryption header and as a safety factor.
+     * @param raw size in bytes
+     * @return usable size in bytes
+     */
+    public static long getUsableBytesFromRawBytes(Long raw) {
+        long usable = raw - (32 << 20) - ((raw >> 30) * 200704);
+        if (usable < 0) {
+            usable = 0L;
+        }
+        return usable;
+    }
 }
```
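The overhead arithmetic in `getUsableBytesFromRawBytes` is easy to check in isolation: a flat 32 MiB reserve plus 200704 bytes (196 KiB) per whole GiB of raw device. The sketch below re-implements the same formula under an illustrative name and evaluates it for a few sizes; it is a verification aid, not part of the patch:

```java
// Mirrors ScaleIOStorageAdaptor.getUsableBytesFromRawBytes:
// usable = raw - 32MiB - 200704 bytes per whole GiB, clamped at zero.
class UsableBytesDemo {
    static long usableBytes(long raw) {
        long usable = raw - (32L << 20) - ((raw >> 30) * 200704L);
        return Math.max(usable, 0L);
    }

    public static void main(String[] args) {
        long oneGiB = 1L << 30;
        System.out.println(usableBytes(oneGiB));      // 1073741824 - 33554432 - 200704 = 1039986688
        System.out.println(usableBytes(8 * oneGiB));  // 8589934592 - 33554432 - 8*200704 = 8554774528
        System.out.println(usableBytes(1L << 20));    // smaller than the flat reserve -> clamped to 0
    }
}
```

In other words, a 1 GiB PowerFlex device yields about 0.97 GiB of guest-visible space once the QCOW2 metadata and LUKS header allowances are subtracted.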

##### ScaleIOStoragePool.java

```diff
@@ -70,12 +70,12 @@ public class ScaleIOStoragePool implements KVMStoragePool {
     }

     @Override
-    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) {
-        return null;
+    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
+        return this.storageAdaptor.createPhysicalDisk(volumeUuid, this, format, provisioningType, size, passphrase);
     }

     @Override
-    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size) {
+    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
         return null;
     }
```

##### StorageAdaptor.java

```diff
@@ -40,7 +40,7 @@ public interface StorageAdaptor {
     public boolean deleteStoragePool(String uuid);

     public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool,
-            PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size);
+            PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase);

     // given disk path (per database) and pool, prepare disk on host
     public boolean connectPhysicalDisk(String volumePath, KVMStoragePool pool, Map<String, String> details);
@@ -58,13 +58,14 @@ public interface StorageAdaptor {
     public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template,
             String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size,
-            KVMStoragePool destPool, int timeout);
+            KVMStoragePool destPool, int timeout, byte[] passphrase);

     public KVMPhysicalDisk createTemplateFromDisk(KVMPhysicalDisk disk, String name, PhysicalDiskFormat format, long size, KVMStoragePool destPool);

     public List<KVMPhysicalDisk> listPhysicalDisks(String storagePoolUuid, KVMStoragePool pool);

     public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPools, int timeout);
+    public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPools, int timeout, byte[] srcPassphrase, byte[] dstPassphrase, Storage.ProvisioningType provisioningType);

     public boolean refresh(KVMStoragePool pool);
@@ -80,7 +81,7 @@ public interface StorageAdaptor {
      */
     KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template,
             String name, PhysicalDiskFormat format, long size,
-            KVMStoragePool destPool, int timeout);
+            KVMStoragePool destPool, int timeout, byte[] passphrase);

     /**
      * Create physical disk on Primary Storage from direct download template on the host (in temporary location)
```

View File

@ -0,0 +1,124 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.utils.cryptsetup;
import com.cloud.utils.script.Script;
import java.io.IOException;
public class CryptSetup {
protected String commandPath = "cryptsetup";
/**
* LuksType represents the possible types that can be passed to cryptsetup.
* NOTE: Only "luks1" is currently supported with Libvirt, so while
* this utility may be capable of creating various types, care should
* be taken to use types that work for the use case.
*/
public enum LuksType {
LUKS("luks1"), LUKS2("luks2"), PLAIN("plain"), TCRYPT("tcrypt"), BITLK("bitlk");
final String luksTypeValue;
LuksType(String type) { this.luksTypeValue = type; }
@Override
public String toString() {
return luksTypeValue;
}
}
public CryptSetup(final String commandPath) {
this.commandPath = commandPath;
}
public CryptSetup() {}
public void open(byte[] passphrase, String diskPath, String diskName) throws CryptSetupException {
try(KeyFile key = new KeyFile(passphrase)) {
final Script script = new Script(commandPath);
script.add("open");
script.add("--key-file");
script.add(key.toString());
script.add("--allow-discards");
script.add(diskPath);
script.add(diskName);
final String result = script.execute();
if (result != null) {
throw new CryptSetupException(result);
}
} catch (IOException ex) {
throw new CryptSetupException(String.format("Failed to open encrypted device at '%s'", diskPath), ex);
}
}
public void close(String diskName) throws CryptSetupException {
final Script script = new Script(commandPath);
script.add("close");
script.add(diskName);
final String result = script.execute();
if (result != null) {
throw new CryptSetupException(result);
}
}
/**
* Formats a file using cryptsetup
* @param passphrase
* @param luksType
* @param diskPath
* @throws CryptSetupException
*/
public void luksFormat(byte[] passphrase, LuksType luksType, String diskPath) throws CryptSetupException {
try(KeyFile key = new KeyFile(passphrase)) {
final Script script = new Script(commandPath);
script.add("luksFormat");
script.add("-q");
script.add("--force-password");
script.add("--key-file");
script.add(key.toString());
script.add("--type");
script.add(luksType.toString());
script.add(diskPath);
final String result = script.execute();
if (result != null) {
throw new CryptSetupException(result);
}
} catch (IOException ex) {
throw new CryptSetupException(String.format("Failed to format encrypted device at '%s'", diskPath), ex);
}
}
public boolean isSupported() {
final Script script = new Script(commandPath);
script.add("--usage");
final String result = script.execute();
return result == null;
}
public boolean isLuks(String filePath) {
final Script script = new Script(commandPath);
script.add("isLuks");
script.add(filePath);
final String result = script.execute();
return result == null;
}
}
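For reference, the `open()` call above boils down to a single `cryptsetup` invocation. The following standalone sketch (not part of the PR; the class name and sample paths are invented) assembles the same argument list so the resulting command shape is easy to see:

```java
import java.util.ArrayList;
import java.util.List;

public class CryptSetupCommandSketch {
    // Mirrors CryptSetup.open(): cryptsetup open --key-file <tmpfile> --allow-discards <disk> <name>
    static List<String> openArgs(String keyFilePath, String diskPath, String diskName) {
        List<String> args = new ArrayList<>();
        args.add("cryptsetup");
        args.add("open");
        args.add("--key-file");
        args.add(keyFilePath);   // temp file created by KeyFile, never the passphrase itself
        args.add("--allow-discards");
        args.add(diskPath);
        args.add(diskName);
        return args;
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", openArgs("/tmp/keyfile.tmp", "/dev/vdb", "crypt-vdb")));
    }
}
```

Passing the passphrase via `--key-file` keeps it out of `ps` output and shell history, which is why `CryptSetup` routes everything through `KeyFile` instead of putting key material on the command line.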

CryptSetupException.java (new file):

@@ -0,0 +1,27 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.cloudstack.utils.cryptsetup;
public class CryptSetupException extends Exception {
public CryptSetupException(String message) {
super(message);
}
public CryptSetupException(String message, Exception ex) { super(message, ex); }
}

KeyFile.java (new file):

@@ -0,0 +1,76 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.utils.cryptsetup;
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;
public class KeyFile implements Closeable {
private Path filePath = null;
/**
* KeyFile represents a temporary file for storing data
* to pass to commands, as an alternative to putting sensitive
* data on the command line.
* @param key byte array of content for the KeyFile
* @throws IOException if the temporary key file cannot be created or written
*/
public KeyFile(byte[] key) throws IOException {
if (key != null && key.length > 0) {
Set<PosixFilePermission> permissions = PosixFilePermissions.fromString("rw-------");
filePath = Files.createTempFile("keyfile", ".tmp", PosixFilePermissions.asFileAttribute(permissions));
Files.write(filePath, key);
}
}
public Path getPath() {
return filePath;
}
public boolean isSet() {
return filePath != null;
}
/**
* Converts the keyfile to the absolute path String where it is located
* @return absolute path as String
*/
@Override
public String toString() {
if (filePath != null) {
return filePath.toAbsolutePath().toString();
}
return "";
}
/**
* Deletes the underlying key file
* @throws IOException if the underlying key file cannot be deleted
*/
@Override
public void close() throws IOException {
if (isSet()) {
Files.delete(filePath);
filePath = null;
}
}
}
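`KeyFile`'s whole job is captured by a few `java.nio` calls: create an owner-only temp file, write the passphrase bytes, hand out the path, delete on close. A minimal runnable sketch of the same lifecycle (hypothetical class name, not part of the PR):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class KeyFileSketch {
    // Same recipe as KeyFile: temp file readable/writable by owner only
    static Path createKeyFile(byte[] key) throws IOException {
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-------");
        Path p = Files.createTempFile("keyfile", ".tmp", PosixFilePermissions.asFileAttribute(perms));
        Files.write(p, key);
        return p;
    }

    public static void main(String[] args) throws IOException {
        Path key = createKeyFile("s3cret".getBytes());
        // the absolute path is what gets handed to cryptsetup --key-file or qemu --object secret,file=
        System.out.println(Files.getPosixFilePermissions(key));
        Files.delete(key); // equivalent of KeyFile.close()
    }
}
```

Because `KeyFile` implements `Closeable`, callers such as `CryptSetup.open()` can use try-with-resources and the passphrase file is guaranteed to be deleted even on error paths.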

QemuImageOptions.java (new file):

@@ -0,0 +1,78 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.utils.qemu;
import com.google.common.base.Joiner;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;
public class QemuImageOptions {
private Map<String, String> params = new HashMap<>();
private static final String FILENAME_PARAM_KEY = "file.filename";
private static final String LUKS_KEY_SECRET_PARAM_KEY = "key-secret";
private static final String QCOW2_KEY_SECRET_PARAM_KEY = "encrypt.key-secret";
public QemuImageOptions(String filePath) {
params.put(FILENAME_PARAM_KEY, filePath);
}
/**
* Constructor for self-crafting the full map of parameters
* @param params the map of parameters
*/
public QemuImageOptions(Map<String, String> params) {
this.params = params;
}
/**
* Constructor for crafting image options that may contain a secret or format
* @param format optional format, renders as "driver" option
* @param filePath required path of image
* @param secretName optional secret name for image. Secret only applies for QCOW2 or LUKS format
*/
public QemuImageOptions(QemuImg.PhysicalDiskFormat format, String filePath, String secretName) {
params.put(FILENAME_PARAM_KEY, filePath);
if (secretName != null && !secretName.isBlank()) {
if (format.equals(QemuImg.PhysicalDiskFormat.QCOW2)) {
params.put(QCOW2_KEY_SECRET_PARAM_KEY, secretName);
} else if (format.equals(QemuImg.PhysicalDiskFormat.LUKS)) {
params.put(LUKS_KEY_SECRET_PARAM_KEY, secretName);
}
}
if (format != null) {
params.put("driver", format.toString());
}
}
public void setFormat(QemuImg.PhysicalDiskFormat format) {
if (format != null) {
params.put("driver", format.toString());
}
}
/**
* Converts QemuImageOptions into the command strings required by qemu-img flags
* @return array of strings representing command flag and value (--image-opts)
*/
public String[] toCommandFlag() {
Map<String, String> sorted = new TreeMap<>(params);
String paramString = Joiner.on(",").withKeyValueSeparator("=").join(sorted);
return new String[] {"--image-opts", paramString};
}
}
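The `--image-opts` rendering above is just a sorted `key=value` join. A self-contained sketch of `toCommandFlag()` for an encrypted qcow2 source (class name and sample values are invented; the real class uses Guava's `Joiner` rather than streams):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class ImageOptsSketch {
    // Mirrors QemuImageOptions.toCommandFlag(): sort params, join as k=v pairs with commas
    static String[] toCommandFlag(Map<String, String> params) {
        String joined = new TreeMap<>(params).entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(","));
        return new String[] {"--image-opts", joined};
    }

    public static void main(String[] args) {
        Map<String, String> params = new TreeMap<>();
        params.put("file.filename", "/mnt/pool/vol-1234");  // FILENAME_PARAM_KEY
        params.put("driver", "qcow2");                      // format
        params.put("encrypt.key-secret", "sec0");           // QCOW2_KEY_SECRET_PARAM_KEY
        String[] flag = toCommandFlag(params);
        System.out.println(flag[0] + " " + flag[1]);
        // prints: --image-opts driver=qcow2,encrypt.key-secret=sec0,file.filename=/mnt/pool/vol-1234
    }
}
```

Sorting through `TreeMap` makes the rendered option string deterministic, which keeps commands reproducible and easy to assert on in tests.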

QemuImg.java:

@@ -16,33 +16,47 @@
 // under the License.
 package org.apache.cloudstack.utils.qemu;
+import java.nio.file.Files;
+import java.nio.file.Paths;
 import java.util.HashMap;
 import java.util.Iterator;
+import java.util.List;
 import java.util.Map;
+import java.util.regex.Pattern;
+import org.apache.commons.lang.NotImplementedException;
+import org.apache.commons.lang3.StringUtils;
+import org.libvirt.LibvirtException;
 import com.cloud.hypervisor.kvm.resource.LibvirtConnection;
 import com.cloud.storage.Storage;
 import com.cloud.utils.script.OutputInterpreter;
 import com.cloud.utils.script.Script;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.log4j.Logger;
-import org.apache.commons.lang.NotImplementedException;
-import org.libvirt.LibvirtException;
+import static java.util.regex.Pattern.CASE_INSENSITIVE;
 public class QemuImg {
 private Logger logger = Logger.getLogger(this.getClass());
-public final static String BACKING_FILE = "backing_file";
-public final static String BACKING_FILE_FORMAT = "backing_file_format";
-public final static String CLUSTER_SIZE = "cluster_size";
-public final static String FILE_FORMAT = "file_format";
-public final static String IMAGE = "image";
-public final static String VIRTUAL_SIZE = "virtual_size";
+public static final String BACKING_FILE = "backing_file";
+public static final String BACKING_FILE_FORMAT = "backing_file_format";
+public static final String CLUSTER_SIZE = "cluster_size";
+public static final String FILE_FORMAT = "file_format";
+public static final String IMAGE = "image";
+public static final String VIRTUAL_SIZE = "virtual_size";
+public static final String ENCRYPT_FORMAT = "encrypt.format";
+public static final String ENCRYPT_KEY_SECRET = "encrypt.key-secret";
+public static final String TARGET_ZERO_FLAG = "--target-is-zero";
+public static final long QEMU_2_10 = 2010000;
 /* The qemu-img binary. We expect this to be in $PATH */
 public String _qemuImgPath = "qemu-img";
 private String cloudQemuImgPath = "cloud-qemu-img";
 private int timeout;
+private boolean skipZero = false;
+private boolean noCache = false;
+private long version;
 private String getQemuImgPathScript = String.format("which %s >& /dev/null; " +
 "if [ $? -gt 0 ]; then echo \"%s\"; else echo \"%s\"; fi",
@@ -50,7 +64,7 @@ public class QemuImg {
 /* Shouldn't we have KVMPhysicalDisk and LibvirtVMDef read this? */
 public static enum PhysicalDiskFormat {
-RAW("raw"), QCOW2("qcow2"), VMDK("vmdk"), FILE("file"), RBD("rbd"), SHEEPDOG("sheepdog"), HTTP("http"), HTTPS("https"), TAR("tar"), DIR("dir");
+RAW("raw"), QCOW2("qcow2"), VMDK("vmdk"), FILE("file"), RBD("rbd"), SHEEPDOG("sheepdog"), HTTP("http"), HTTPS("https"), TAR("tar"), DIR("dir"), LUKS("luks");
 String format;
 private PhysicalDiskFormat(final String format) {
@@ -93,8 +107,41 @@ public class QemuImg {
 }
 }
-public QemuImg(final int timeout) {
+/**
+ * Create a QemuImg object that supports skipping target zeroes.
+ * We detect this support via qemu-img help, since support can
+ * be backported rather than found in a specific version.
+ *
+ * @param timeout script timeout, default 0
+ * @param skipZeroIfSupported don't write zeroes to the target device during convert, if supported by qemu-img
+ * @param noCache ensure we flush writes to the target disk (useful for block device targets)
+ */
+public QemuImg(final int timeout, final boolean skipZeroIfSupported, final boolean noCache) throws LibvirtException {
+if (skipZeroIfSupported) {
+final Script s = new Script(_qemuImgPath, timeout);
+s.add("--help");
+final OutputInterpreter.AllLinesParser parser = new OutputInterpreter.AllLinesParser();
+final String result = s.execute(parser);
+// Older qemu returns output in result due to --help reporting error status
+if (result != null) {
+if (result.contains(TARGET_ZERO_FLAG)) {
+this.skipZero = true;
+}
+} else {
+if (parser.getLines().contains(TARGET_ZERO_FLAG)) {
+this.skipZero = true;
+}
+}
+}
 this.timeout = timeout;
+this.noCache = noCache;
+this.version = LibvirtConnection.getConnection().getVersion();
+}
+public QemuImg(final int timeout) throws LibvirtException, QemuImgException {
+this(timeout, false, false);
 }
 public void setTimeout(final int timeout) {
@@ -109,7 +156,8 @@ public class QemuImg {
 * An alternative path to the qemu-img binary
 * @return void
 */
-public QemuImg(final String qemuImgPath) {
+public QemuImg(final String qemuImgPath) throws LibvirtException {
+this(0, false, false);
 _qemuImgPath = qemuImgPath;
 }
@@ -135,9 +183,35 @@ public class QemuImg {
 * @return void
 */
 public void create(final QemuImgFile file, final QemuImgFile backingFile, final Map<String, String> options) throws QemuImgException {
+create(file, backingFile, options, null);
+}
+/**
+ * Create a new image
+ *
+ * This method calls 'qemu-img create'
+ *
+ * @param file
+ * The file to create
+ * @param backingFile
+ * A backing file if used (for example with qcow2)
+ * @param options
+ * Options for the create. Takes a Map<String, String> with key value
+ * pairs which are passed on to qemu-img without validation.
+ * @param qemuObjects
+ * List of qemu objects to pass - see objects in the qemu man page
+ * @return void
+ */
+public void create(final QemuImgFile file, final QemuImgFile backingFile, final Map<String, String> options, final List<QemuObject> qemuObjects) throws QemuImgException {
 final Script s = new Script(_qemuImgPath, timeout);
 s.add("create");
+if (this.version >= QEMU_2_10 && qemuObjects != null) {
+for (QemuObject o : qemuObjects) {
+s.add(o.toCommandFlag());
+}
+}
 if (options != null && !options.isEmpty()) {
 s.add("-o");
 final StringBuilder optionsStr = new StringBuilder();
@@ -247,6 +321,63 @@ public class QemuImg {
 */
 public void convert(final QemuImgFile srcFile, final QemuImgFile destFile,
 final Map<String, String> options, final String snapshotName, final boolean forceSourceFormat) throws QemuImgException, LibvirtException {
+convert(srcFile, destFile, options, null, snapshotName, forceSourceFormat);
+}
+/**
+ * Convert an image from source to destination
+ *
+ * This method calls 'qemu-img convert' and takes six arguments.
+ *
+ * @param srcFile
+ * The source file
+ * @param destFile
+ * The destination file
+ * @param options
+ * Options for the convert. Takes a Map<String, String> with key value
+ * pairs which are passed on to qemu-img without validation.
+ * @param qemuObjects
+ * List of qemu objects to pass - see objects in the qemu man page
+ * @param snapshotName
+ * If provided, the conversion uses it as a parameter
+ * @param forceSourceFormat
+ * If true, specifies the source format in the conversion command
+ * @return void
+ */
+public void convert(final QemuImgFile srcFile, final QemuImgFile destFile,
+final Map<String, String> options, final List<QemuObject> qemuObjects, final String snapshotName, final boolean forceSourceFormat) throws QemuImgException {
+QemuImageOptions imageOpts = new QemuImageOptions(srcFile.getFormat(), srcFile.getFileName(), null);
+convert(srcFile, destFile, options, qemuObjects, imageOpts, snapshotName, forceSourceFormat);
+}
+/**
+ * Convert an image from source to destination
+ *
+ * This method calls 'qemu-img convert' and takes seven arguments.
+ *
+ * @param srcFile
+ * The source file
+ * @param destFile
+ * The destination file
+ * @param options
+ * Options for the convert. Takes a Map<String, String> with key value
+ * pairs which are passed on to qemu-img without validation.
+ * @param qemuObjects
+ * List of qemu objects to pass - see objects in the qemu man page
+ * @param srcImageOpts
+ * Qemu --image-opts describing the source image
+ * @param snapshotName
+ * If provided, the conversion uses it as a parameter
+ * @param forceSourceFormat
+ * If true, specifies the source format in the conversion command
+ * @return void
+ */
+public void convert(final QemuImgFile srcFile, final QemuImgFile destFile,
+final Map<String, String> options, final List<QemuObject> qemuObjects, final QemuImageOptions srcImageOpts, final String snapshotName, final boolean forceSourceFormat) throws QemuImgException {
 Script script = new Script(_qemuImgPath, timeout);
 if (StringUtils.isNotBlank(snapshotName)) {
 String qemuPath = Script.runSimpleBashScript(getQemuImgPathScript);
@@ -254,34 +385,48 @@ public class QemuImg {
 }
 script.add("convert");
-Long version = LibvirtConnection.getConnection().getVersion();
-if (version >= 2010000) {
-script.add("-U");
-}
-// autodetect source format unless specified explicitly
-if (forceSourceFormat) {
-script.add("-f");
-script.add(srcFile.getFormat().toString());
+if (skipZero && Files.exists(Paths.get(destFile.getFileName()))) {
+script.add("-n");
+script.add(TARGET_ZERO_FLAG);
+script.add("-W");
+// with target-is-zero we skip zeros in 1M chunks for compatibility
+script.add("-S");
+script.add("1M");
 }
 script.add("-O");
 script.add(destFile.getFormat().toString());
-if (options != null && !options.isEmpty()) {
-script.add("-o");
-final StringBuffer optionsBuffer = new StringBuffer();
-for (final Map.Entry<String, String> option : options.entrySet()) {
-optionsBuffer.append(option.getKey()).append('=').append(option.getValue()).append(',');
-}
-String optionsStr = optionsBuffer.toString();
-optionsStr = optionsStr.replaceAll(",$", "");
-script.add(optionsStr);
-}
+addScriptOptionsFromMap(options, script);
 addSnapshotToConvertCommand(srcFile.getFormat().toString(), snapshotName, forceSourceFormat, script, version);
-script.add(srcFile.getFileName());
+if (noCache) {
+script.add("-t");
+script.add("none");
+}
+if (this.version >= QEMU_2_10) {
+script.add("-U");
+if (forceSourceFormat) {
+srcImageOpts.setFormat(srcFile.getFormat());
+}
+script.add(srcImageOpts.toCommandFlag());
+if (qemuObjects != null) {
+for (QemuObject o : qemuObjects) {
+script.add(o.toCommandFlag());
+}
+}
+} else {
+if (forceSourceFormat) {
+script.add("-f");
+script.add(srcFile.getFormat().toString());
+}
+script.add(srcFile.getFileName());
+}
 script.add(destFile.getFileName());
 final String result = script.execute();
@@ -433,11 +578,10 @@ public class QemuImg {
 * A QemuImgFile object containing the file to get the information from
 * @return A HashMap with String key-value information as returned by 'qemu-img info'
 */
-public Map<String, String> info(final QemuImgFile file) throws QemuImgException, LibvirtException {
+public Map<String, String> info(final QemuImgFile file) throws QemuImgException {
 final Script s = new Script(_qemuImgPath);
 s.add("info");
-Long version = LibvirtConnection.getConnection().getVersion();
-if (version >= 2010000) {
+if (this.version >= QEMU_2_10) {
 s.add("-U");
 }
 s.add(file.getFileName());
@@ -465,12 +609,72 @@ public class QemuImg {
 info.put(key, value);
 }
 }
+// set some missing attributes in passed file, if found
+if (info.containsKey(VIRTUAL_SIZE) && file.getSize() == 0L) {
+file.setSize(Long.parseLong(info.get(VIRTUAL_SIZE)));
+}
+if (info.containsKey(FILE_FORMAT) && file.getFormat() == null) {
+file.setFormat(PhysicalDiskFormat.valueOf(info.get(FILE_FORMAT).toUpperCase()));
+}
 return info;
 }
-/* List, apply, create or delete snapshots in image */
-public void snapshot() throws QemuImgException {
+/* create snapshots in image */
+public void snapshot(final QemuImageOptions srcImageOpts, final String snapshotName, final List<QemuObject> qemuObjects) throws QemuImgException {
+final Script s = new Script(_qemuImgPath, timeout);
+s.add("snapshot");
+s.add("-c");
+s.add(snapshotName);
+for (QemuObject o : qemuObjects) {
+s.add(o.toCommandFlag());
+}
+s.add(srcImageOpts.toCommandFlag());
+final String result = s.execute();
+if (result != null) {
+throw new QemuImgException(result);
+}
+}
+/* delete snapshots in image */
+public void deleteSnapshot(final QemuImageOptions srcImageOpts, final String snapshotName, final List<QemuObject> qemuObjects) throws QemuImgException {
+final Script s = new Script(_qemuImgPath, timeout);
+s.add("snapshot");
+s.add("-d");
+s.add(snapshotName);
+for (QemuObject o : qemuObjects) {
+s.add(o.toCommandFlag());
+}
+s.add(srcImageOpts.toCommandFlag());
+final String result = s.execute();
+if (result != null) {
+// support idempotent delete calls, if no snapshot exists we are good.
+if (result.contains("snapshot not found") || result.contains("Can't find the snapshot")) {
+return;
+}
+throw new QemuImgException(result);
+}
+}
+private void addScriptOptionsFromMap(Map<String, String> options, Script s) {
+if (options != null && !options.isEmpty()) {
+s.add("-o");
+final StringBuffer optionsBuffer = new StringBuffer();
+for (final Map.Entry<String, String> option : options.entrySet()) {
+optionsBuffer.append(option.getKey()).append('=').append(option.getValue()).append(',');
+}
+String optionsStr = optionsBuffer.toString();
+optionsStr = optionsStr.replaceAll(",$", "");
+s.add(optionsStr);
+}
 }
 /* Changes the backing file of an image */
@@ -541,6 +745,33 @@ public class QemuImg {
 s.execute();
 }
+/**
+ * Resize an image, new style flags/options
+ *
+ * @param imageOptions
+ * Qemu style image options for the image to resize
+ * @param qemuObjects
+ * Qemu style options (e.g. for passing secrets)
+ * @param size
+ * The absolute final size of the image
+ */
+public void resize(final QemuImageOptions imageOptions, final List<QemuObject> qemuObjects, final long size) throws QemuImgException {
+final Script s = new Script(_qemuImgPath);
+s.add("resize");
+for (QemuObject o : qemuObjects) {
+s.add(o.toCommandFlag());
+}
+s.add(imageOptions.toCommandFlag());
+s.add(Long.toString(size));
+final String result = s.execute();
+if (result != null) {
+throw new QemuImgException(result);
+}
+}
 /**
 * Resize an image
 *
@@ -557,4 +788,37 @@ public class QemuImg {
 public void resize(final QemuImgFile file, final long size) throws QemuImgException {
 this.resize(file, size, false);
 }
+/**
+ * Does qemu-img support --target-is-zero
+ * @return boolean
+ */
+public boolean supportsSkipZeros() {
+return this.skipZero;
+}
+public void setSkipZero(boolean skipZero) {
+this.skipZero = skipZero;
+}
+public boolean supportsImageFormat(QemuImg.PhysicalDiskFormat format) {
+final Script s = new Script(_qemuImgPath, timeout);
+s.add("--help");
+final OutputInterpreter.AllLinesParser parser = new OutputInterpreter.AllLinesParser();
+String result = s.execute(parser);
+String output = parser.getLines();
+// Older Qemu returns output in result due to --help reporting error status
+if (result != null) {
+output = result;
+}
+return helpSupportsImageFormat(output, format);
+}
+protected static boolean helpSupportsImageFormat(String text, QemuImg.PhysicalDiskFormat format) {
+Pattern pattern = Pattern.compile("Supported\\sformats:[a-zA-Z0-9-_\\s]*?\\b" + format + "\\b", CASE_INSENSITIVE);
+return pattern.matcher(text).find();
+}
 }
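`helpSupportsImageFormat()` above is self-contained enough to exercise directly. This sketch copies the same regex (with a plain `String` standing in for the `PhysicalDiskFormat` enum, and a fabricated `qemu-img --help` excerpt):

```java
import java.util.regex.Pattern;
import static java.util.regex.Pattern.CASE_INSENSITIVE;

public class FormatDetectSketch {
    // Same pattern as QemuImg.helpSupportsImageFormat(): find the format as a whole
    // word inside the "Supported formats:" line of qemu-img --help output
    static boolean helpSupportsImageFormat(String text, String format) {
        Pattern p = Pattern.compile("Supported\\sformats:[a-zA-Z0-9-_\\s]*?\\b" + format + "\\b", CASE_INSENSITIVE);
        return p.matcher(text).find();
    }

    public static void main(String[] args) {
        String help = "Supported formats: blkdebug blkverify file luks qcow2 raw vmdk";
        System.out.println(helpSupportsImageFormat(help, "luks"));  // true
        System.out.println(helpSupportsImageFormat(help, "vhdx"));  // false
    }
}
```

Probing `--help` output rather than comparing version numbers is the same approach used for `--target-is-zero`: it tolerates distro backports, where a feature exists in a qemu build whose version string predates upstream support.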

QemuObject.java (new file):

@@ -0,0 +1,128 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.utils.qemu;
import java.util.EnumMap;
import java.util.Map;
import java.util.TreeMap;
import org.apache.commons.lang3.StringUtils;
import com.google.common.base.Joiner;
public class QemuObject {
private final ObjectType type;
private final Map<ObjectParameter, String> params;
public enum ObjectParameter {
DATA("data"),
FILE("file"),
FORMAT("format"),
ID("id"),
IV("iv"),
KEYID("keyid");
private final String parameter;
ObjectParameter(String param) {
this.parameter = param;
}
@Override
public String toString() {return parameter; }
}
/**
* Supported qemu encryption formats.
* NOTE: Only "luks" is currently supported with Libvirt, so while
* this utility may be capable of creating various formats, care should
* be taken to use types that work for the use case.
*/
public enum EncryptFormat {
LUKS("luks"),
AES("aes");
private final String format;
EncryptFormat(String format) { this.format = format; }
@Override
public String toString() { return format;}
public static EncryptFormat enumValue(String value) {
if (StringUtils.isBlank(value)) {
return LUKS; // default encryption format
}
return EncryptFormat.valueOf(value.toUpperCase());
}
}
public enum ObjectType {
SECRET("secret");
private final String objectTypeValue;
ObjectType(String objectTypeValue) {
this.objectTypeValue = objectTypeValue;
}
@Override
public String toString() {
return objectTypeValue;
}
}
public QemuObject(ObjectType type, Map<ObjectParameter, String> params) {
this.type = type;
this.params = params;
}
/**
* Converts QemuObject into the command strings required by qemu-img flags
* @return array of strings representing command flag and value (--object)
*/
public String[] toCommandFlag() {
Map<ObjectParameter, String> sorted = new TreeMap<>(params);
String paramString = Joiner.on(",").withKeyValueSeparator("=").join(sorted);
return new String[] {"--object", String.format("%s,%s", type, paramString) };
}
/**
* Creates a QemuObject with the correct parameters for passing encryption secret details to qemu-img
* @param format the image format to use
* @param encryptFormat the encryption format to use (luks)
* @param keyFilePath the path to the file containing the encryption key
* @param secretName the name to use for the secret
* @param options the options map for qemu-img (-o flag)
* @return the QemuObject containing encryption parameters
*/
public static QemuObject prepareSecretForQemuImg(QemuImg.PhysicalDiskFormat format, EncryptFormat encryptFormat, String keyFilePath, String secretName, Map<String, String> options) {
EnumMap<ObjectParameter, String> params = new EnumMap<>(ObjectParameter.class);
params.put(ObjectParameter.ID, secretName);
params.put(ObjectParameter.FILE, keyFilePath);
if (options != null) {
if (format == QemuImg.PhysicalDiskFormat.QCOW2) {
options.put("encrypt.key-secret", secretName);
options.put("encrypt.format", encryptFormat.toString());
} else if (format == QemuImg.PhysicalDiskFormat.RAW || format == QemuImg.PhysicalDiskFormat.LUKS) {
options.put("key-secret", secretName);
}
}
return new QemuObject(ObjectType.SECRET, params);
}
}
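Putting `KeyFile`, `QemuObject`, and the options map together, an encrypted qcow2 create ends up as roughly the following command line. This sketch is illustrative only: the secret id, paths, size, and exact flag ordering are assumptions, not copied from the PR.

```java
import java.util.ArrayList;
import java.util.List;

public class EncryptedCreateSketch {
    // Hypothetical end-to-end shape of what QemuImg.create() runs for an encrypted qcow2
    static List<String> createArgs(String secretId, String keyFile, String volPath, long size) {
        List<String> cmd = new ArrayList<>();
        cmd.add("qemu-img");
        cmd.add("create");
        // QemuObject.toCommandFlag(): params sorted by ObjectParameter order (FILE before ID)
        cmd.add("--object");
        cmd.add("secret,file=" + keyFile + ",id=" + secretId);
        // -o options as filled in by prepareSecretForQemuImg() for the QCOW2 case
        cmd.add("-o");
        cmd.add("encrypt.format=luks,encrypt.key-secret=" + secretId);
        cmd.add(volPath);
        cmd.add(Long.toString(size));
        return cmd;
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", createArgs("sec0", "/tmp/keyfile.tmp", "/mnt/pool/vol-1234", 10737418240L)));
    }
}
```

The `--object secret` flag carries the key material by file reference, while the `-o encrypt.*` options tell qemu-img which secret id to use for the image, so the passphrase itself never appears on the command line.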

LibvirtComputingResourceTest.java:

@@ -57,6 +57,7 @@ import javax.xml.xpath.XPathFactory;
 import com.cloud.utils.ssh.SshHelper;
 import org.apache.cloudstack.storage.command.AttachAnswer;
 import org.apache.cloudstack.storage.command.AttachCommand;
+import org.apache.cloudstack.utils.bytescale.ByteScaleUtils;
 import org.apache.cloudstack.utils.linux.CPUStat;
 import org.apache.cloudstack.utils.linux.MemStat;
 import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
@@ -78,6 +79,7 @@ import org.libvirt.MemoryStatistic;
 import org.libvirt.NodeInfo;
 import org.libvirt.SchedUlongParameter;
 import org.libvirt.StorageVol;
+import org.libvirt.VcpuInfo;
 import org.libvirt.jna.virDomainMemoryStats;
 import org.mockito.BDDMockito;
 import org.mockito.Mock;
@@ -209,8 +211,6 @@ import com.cloud.vm.DiskProfile;
 import com.cloud.vm.VirtualMachine;
 import com.cloud.vm.VirtualMachine.PowerState;
 import com.cloud.vm.VirtualMachine.Type;
-import org.apache.cloudstack.utils.bytescale.ByteScaleUtils;
-import org.libvirt.VcpuInfo;
 @RunWith(PowerMockRunner.class)
 @PrepareForTest(value = {MemStat.class, SshHelper.class})
@@ -2149,7 +2149,7 @@ public class LibvirtComputingResourceTest {
 when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(poolManager);
 when(poolManager.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(primary);
-when(primary.createPhysicalDisk(diskCharacteristics.getPath(), diskCharacteristics.getProvisioningType(), diskCharacteristics.getSize())).thenReturn(vol);
+when(primary.createPhysicalDisk(diskCharacteristics.getPath(), diskCharacteristics.getProvisioningType(), diskCharacteristics.getSize(), null)).thenReturn(vol);
 final LibvirtRequestWrapper wrapper = LibvirtRequestWrapper.getInstance();
 assertNotNull(wrapper);
@@ -2208,7 +2208,7 @@ public class LibvirtComputingResourceTest {
 when(poolManager.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(primary);
 when(primary.getPhysicalDisk(command.getTemplateUrl())).thenReturn(baseVol);
-when(poolManager.createDiskFromTemplate(baseVol, diskCharacteristics.getPath(), diskCharacteristics.getProvisioningType(), primary, baseVol.getSize(), 0)).thenReturn(vol);
+when(poolManager.createDiskFromTemplate(baseVol, diskCharacteristics.getPath(), diskCharacteristics.getProvisioningType(), primary, baseVol.getSize(), 0, null)).thenReturn(vol);
 final LibvirtRequestWrapper wrapper = LibvirtRequestWrapper.getInstance();
 assertNotNull(wrapper);
@@ -4847,7 +4847,12 @@ public class LibvirtComputingResourceTest {
 final LibvirtUtilitiesHelper libvirtUtilitiesHelper = Mockito.mock(LibvirtUtilitiesHelper.class);
 final Connect conn = Mockito.mock(Connect.class);
 final StorageVol v = Mockito.mock(StorageVol.class);
+final Domain vm = Mockito.mock(Domain.class);
+final DomainInfo info = Mockito.mock(DomainInfo.class);
+final DomainState state = DomainInfo.DomainState.VIR_DOMAIN_RUNNING;
+info.state = state;
+when(pool.getType()).thenReturn(StoragePoolType.RBD);
 when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr);
 when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool);
 when(storagePool.getPhysicalDisk(path)).thenReturn(vol);
@@ -4860,9 +4865,11 @@ public class LibvirtComputingResourceTest {
 try {
 when(libvirtUtilitiesHelper.getConnection()).thenReturn(conn);
 when(conn.storageVolLookupByPath(path)).thenReturn(v);
+when(libvirtUtilitiesHelper.getConnectionByVmName(vmInstance)).thenReturn(conn);
+when(conn.domainLookupByName(vmInstance)).thenReturn(vm);
+when(vm.getInfo()).thenReturn(info);
 when(conn.getLibVirVersion()).thenReturn(10010l);
 } catch (final LibvirtException e) {
 fail(e.getMessage());
 }
@@ -4875,9 +4882,10 @@ public class LibvirtComputingResourceTest {
 verify(libvirtComputingResource, times(1)).getStoragePoolMgr();
verify(libvirtComputingResource, times(1)).getLibvirtUtilitiesHelper(); verify(libvirtComputingResource, times(2)).getLibvirtUtilitiesHelper();
try { try {
verify(libvirtUtilitiesHelper, times(1)).getConnection(); verify(libvirtUtilitiesHelper, times(1)).getConnection();
verify(libvirtUtilitiesHelper, times(1)).getConnectionByVmName(vmInstance);
} catch (final LibvirtException e) { } catch (final LibvirtException e) {
fail(e.getMessage()); fail(e.getMessage());
} }
@ -4898,7 +4906,13 @@ public class LibvirtComputingResourceTest {
final KVMStoragePool storagePool = Mockito.mock(KVMStoragePool.class); final KVMStoragePool storagePool = Mockito.mock(KVMStoragePool.class);
final KVMPhysicalDisk vol = Mockito.mock(KVMPhysicalDisk.class); final KVMPhysicalDisk vol = Mockito.mock(KVMPhysicalDisk.class);
final LibvirtUtilitiesHelper libvirtUtilitiesHelper = Mockito.mock(LibvirtUtilitiesHelper.class); final LibvirtUtilitiesHelper libvirtUtilitiesHelper = Mockito.mock(LibvirtUtilitiesHelper.class);
final Connect conn = Mockito.mock(Connect.class);
final Domain vm = Mockito.mock(Domain.class);
final DomainInfo info = Mockito.mock(DomainInfo.class);
final DomainState state = DomainInfo.DomainState.VIR_DOMAIN_RUNNING;
info.state = state;
when(pool.getType()).thenReturn(StoragePoolType.Linstor);
when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr); when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr);
when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool); when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool);
when(storagePool.getPhysicalDisk(path)).thenReturn(vol); when(storagePool.getPhysicalDisk(path)).thenReturn(vol);
@ -4906,6 +4920,15 @@ public class LibvirtComputingResourceTest {
when(storagePool.getType()).thenReturn(StoragePoolType.Linstor); when(storagePool.getType()).thenReturn(StoragePoolType.Linstor);
when(vol.getFormat()).thenReturn(PhysicalDiskFormat.RAW); when(vol.getFormat()).thenReturn(PhysicalDiskFormat.RAW);
when(libvirtComputingResource.getLibvirtUtilitiesHelper()).thenReturn(libvirtUtilitiesHelper);
try {
when(libvirtUtilitiesHelper.getConnectionByVmName(vmInstance)).thenReturn(conn);
when(conn.domainLookupByName(vmInstance)).thenReturn(vm);
when(vm.getInfo()).thenReturn(info);
} catch (final LibvirtException e) {
fail(e.getMessage());
}
final LibvirtRequestWrapper wrapper = LibvirtRequestWrapper.getInstance(); final LibvirtRequestWrapper wrapper = LibvirtRequestWrapper.getInstance();
assertNotNull(wrapper); assertNotNull(wrapper);
@ -4915,9 +4938,10 @@ public class LibvirtComputingResourceTest {
verify(libvirtComputingResource, times(1)).getStoragePoolMgr(); verify(libvirtComputingResource, times(1)).getStoragePoolMgr();
verify(libvirtComputingResource, times(0)).getResizeScriptType(storagePool, vol); verify(libvirtComputingResource, times(0)).getResizeScriptType(storagePool, vol);
verify(libvirtComputingResource, times(0)).getLibvirtUtilitiesHelper(); verify(libvirtComputingResource, times(1)).getLibvirtUtilitiesHelper();
try { try {
verify(libvirtUtilitiesHelper, times(0)).getConnection(); verify(libvirtUtilitiesHelper, times(0)).getConnection();
verify(libvirtUtilitiesHelper, times(1)).getConnectionByVmName(vmInstance);
} catch (final LibvirtException e) { } catch (final LibvirtException e) {
fail(e.getMessage()); fail(e.getMessage());
} }
@ -4956,6 +4980,7 @@ public class LibvirtComputingResourceTest {
final KVMStoragePool storagePool = Mockito.mock(KVMStoragePool.class); final KVMStoragePool storagePool = Mockito.mock(KVMStoragePool.class);
final KVMPhysicalDisk vol = Mockito.mock(KVMPhysicalDisk.class); final KVMPhysicalDisk vol = Mockito.mock(KVMPhysicalDisk.class);
when(pool.getType()).thenReturn(StoragePoolType.Filesystem);
when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr); when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr);
when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool); when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool);
when(storagePool.getPhysicalDisk(path)).thenReturn(vol); when(storagePool.getPhysicalDisk(path)).thenReturn(vol);
@ -4986,6 +5011,7 @@ public class LibvirtComputingResourceTest {
final KVMPhysicalDisk vol = Mockito.mock(KVMPhysicalDisk.class); final KVMPhysicalDisk vol = Mockito.mock(KVMPhysicalDisk.class);
final LibvirtUtilitiesHelper libvirtUtilitiesHelper = Mockito.mock(LibvirtUtilitiesHelper.class); final LibvirtUtilitiesHelper libvirtUtilitiesHelper = Mockito.mock(LibvirtUtilitiesHelper.class);
when(pool.getType()).thenReturn(StoragePoolType.RBD);
when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr); when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr);
when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool); when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool);
when(storagePool.getPhysicalDisk(path)).thenReturn(vol); when(storagePool.getPhysicalDisk(path)).thenReturn(vol);
@ -5032,6 +5058,7 @@ public class LibvirtComputingResourceTest {
final KVMStoragePoolManager storagePoolMgr = Mockito.mock(KVMStoragePoolManager.class); final KVMStoragePoolManager storagePoolMgr = Mockito.mock(KVMStoragePoolManager.class);
final KVMStoragePool storagePool = Mockito.mock(KVMStoragePool.class); final KVMStoragePool storagePool = Mockito.mock(KVMStoragePool.class);
when(pool.getType()).thenReturn(StoragePoolType.RBD);
when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr); when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr);
when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool); when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool);
when(storagePool.getPhysicalDisk(path)).thenThrow(CloudRuntimeException.class); when(storagePool.getPhysicalDisk(path)).thenThrow(CloudRuntimeException.class);

View File

@@ -29,6 +29,7 @@ import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.RngDef;
 import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef;
 import junit.framework.TestCase;
+import org.apache.cloudstack.utils.qemu.QemuObject;
 public class LibvirtDomainXMLParserTest extends TestCase {
@@ -51,6 +52,10 @@ public class LibvirtDomainXMLParserTest extends TestCase {
     String diskLabel ="vda";
     String diskPath = "/var/lib/libvirt/images/my-test-image.qcow2";
+    String diskLabel2 ="vdb";
+    String diskPath2 = "/var/lib/libvirt/images/my-test-image2.qcow2";
+    String secretUuid = "5644d664-a238-3a9b-811c-961f609d29f4";
     String xml = "<domain type='kvm' id='10'>" +
         "<name>s-2970-VM</name>" +
         "<uuid>4d2c1526-865d-4fc9-a1ac-dbd1801a22d0</uuid>" +
@@ -87,6 +92,16 @@ public class LibvirtDomainXMLParserTest extends TestCase {
         "<alias name='virtio-disk0'/>" +
         "<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>" +
         "</disk>" +
+        "<disk type='" + diskType.toString() + "' device='" + deviceType.toString() + "'>" +
+        "<driver name='qemu' type='" + diskFormat.toString() + "' cache='" + diskCache.toString() + "'/>" +
+        "<source file='" + diskPath2 + "'/>" +
+        "<target dev='" + diskLabel2 +"' bus='" + diskBus.toString() + "'/>" +
+        "<alias name='virtio-disk1'/>" +
+        "<encryption format='luks'>" +
+        "<secret type='passphrase' uuid='" + secretUuid + "'/>" +
+        "</encryption>" +
+        "<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>" +
+        "</disk>" +
         "<disk type='file' device='cdrom'>" +
         "<driver name='qemu' type='raw' cache='none'/>" +
         "<source file='/usr/share/cloudstack-common/vms/systemvm.iso'/>" +
@@ -200,6 +215,11 @@ public class LibvirtDomainXMLParserTest extends TestCase {
         assertEquals(deviceType, disks.get(diskId).getDeviceType());
         assertEquals(diskFormat, disks.get(diskId).getDiskFormatType());
+        DiskDef.LibvirtDiskEncryptDetails encryptDetails = disks.get(1).getLibvirtDiskEncryptDetails();
+        assertNotNull(encryptDetails);
+        assertEquals(QemuObject.EncryptFormat.LUKS, encryptDetails.getEncryptFormat());
+        assertEquals(secretUuid, encryptDetails.getPassphraseUuid());
         List<ChannelDef> channels = parser.getChannels();
         for (int i = 0; i < channels.size(); i++) {
             assertEquals(channelType, channels.get(i).getChannelType());

View File

@@ -23,6 +23,7 @@ import java.io.File;
 import java.util.Arrays;
 import java.util.List;
 import java.util.Scanner;
+import java.util.UUID;
 import junit.framework.TestCase;
@@ -30,6 +31,7 @@ import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.ChannelDef;
 import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.DiskDef;
 import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.SCSIDef;
 import org.apache.cloudstack.utils.linux.MemStat;
+import org.apache.cloudstack.utils.qemu.QemuObject;
 import org.junit.Before;
 import org.junit.Test;
 import org.junit.runner.RunWith;
@@ -218,6 +220,24 @@ public class LibvirtVMDefTest extends TestCase {
         assertEquals(xmlDef, expectedXml);
     }
+    @Test
+    public void testDiskDefWithEncryption() {
+        String passphraseUuid = UUID.randomUUID().toString();
+        DiskDef disk = new DiskDef();
+        DiskDef.LibvirtDiskEncryptDetails encryptDetails = new DiskDef.LibvirtDiskEncryptDetails(passphraseUuid, QemuObject.EncryptFormat.LUKS);
+        disk.defBlockBasedDisk("disk1", 1, DiskDef.DiskBus.VIRTIO);
+        disk.setLibvirtDiskEncryptDetails(encryptDetails);
+        String expectedXML = "<disk device='disk' type='block'>\n" +
+                "<driver name='qemu' type='raw' cache='none' />\n" +
+                "<source dev='disk1'/>\n" +
+                "<target dev='vdb' bus='virtio'/>\n" +
+                "<encryption format='luks'>\n" +
+                "<secret type='passphrase' uuid='" + passphraseUuid + "' />\n" +
+                "</encryption>\n" +
+                "</disk>\n";
+        assertEquals(disk.toString(), expectedXML);
+    }
     @Test
     public void testDiskDefWithBurst() {
         String filePath = "/var/lib/libvirt/images/disk.qcow2";

View File

@@ -759,6 +759,41 @@ public class LibvirtMigrateCommandWrapperTest {
         assertXpath(doc, "/domain/devices/disk/driver/@type", "raw");
     }
+    @Test
+    public void testReplaceStorageWithSecrets() throws Exception {
+        Map<String, MigrateDiskInfo> mapMigrateStorage = new HashMap<String, MigrateDiskInfo>();
+        final String xmlDesc =
+                "<domain type='kvm' id='3'>" +
+                "  <devices>" +
+                "    <disk type='file' device='disk'>\n" +
+                "      <driver name='qemu' type='qcow2' cache='none'/>\n" +
+                "      <source file='/mnt/07eb495b-5590-3877-9fb7-23c6e9a40d40/bf8621b3-027c-497d-963b-06319650f048'/>\n" +
+                "      <target dev='vdb' bus='virtio'/>\n" +
+                "      <serial>bf8621b3027c497d963b</serial>\n" +
+                "      <alias name='virtio-disk1'/>\n" +
+                "      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>\n" +
+                "      <encryption format='luks'>\n" +
+                "        <secret type='passphrase' uuid='5644d664-a238-3a9b-811c-961f609d29f4'/>\n" +
+                "      </encryption>\n" +
+                "    </disk>\n" +
+                "  </devices>" +
+                "</domain>";
+        final String volumeFile = "3530f749-82fd-458e-9485-a357e6e541db";
+        String newDiskPath = "/mnt/2d0435e1-99e0-4f1d-94c0-bee1f6f8b99e/" + volumeFile;
+        MigrateDiskInfo diskInfo = new MigrateDiskInfo("123456", DiskType.BLOCK, DriverType.RAW, Source.FILE, newDiskPath);
+        mapMigrateStorage.put("/mnt/07eb495b-5590-3877-9fb7-23c6e9a40d40/bf8621b3-027c-497d-963b-06319650f048", diskInfo);
+        final String result = libvirtMigrateCmdWrapper.replaceStorage(xmlDesc, mapMigrateStorage, false);
+        final String expectedSecretUuid = LibvirtComputingResource.generateSecretUUIDFromString(volumeFile);
+        InputStream in = IOUtils.toInputStream(result);
+        DocumentBuilderFactory docFactory = DocumentBuilderFactory.newInstance();
+        DocumentBuilder docBuilder = docFactory.newDocumentBuilder();
+        Document doc = docBuilder.parse(in);
+        assertXpath(doc, "/domain/devices/disk/encryption/secret/@uuid", expectedSecretUuid);
+    }
     public void testReplaceStorageXmlDiskNotManagedStorage() throws ParserConfigurationException, TransformerException, SAXException, IOException {
         final LibvirtMigrateCommandWrapper lw = new LibvirtMigrateCommandWrapper();
         String destDisk1FileName = "XXXXXXXXXXXXXX";

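The migration test above checks that the rewritten domain XML carries a secret UUID derived deterministically from the volume file name. A name-based (version 3) UUID is a natural fit here, since the same volume path then maps to the same libvirt secret UUID on every host without any shared state; this sketch assumes `generateSecretUUIDFromString` behaves like `UUID.nameUUIDFromBytes`, which is an assumption, not a quote of the implementation:

```java
import java.util.UUID;

public class SecretUuidSketch {
    // Hypothetical stand-in for generateSecretUUIDFromString: derive a
    // name-based (version 3, MD5) UUID from the volume file name so source
    // and destination hosts agree on the secret UUID without coordination.
    static UUID secretUuidFor(String volumeFile) {
        return UUID.nameUUIDFromBytes(volumeFile.getBytes());
    }

    public static void main(String[] args) {
        UUID a = secretUuidFor("3530f749-82fd-458e-9485-a357e6e541db");
        UUID b = secretUuidFor("3530f749-82fd-458e-9485-a357e6e541db");
        System.out.println(a.equals(b));   // deterministic: same input, same UUID
        System.out.println(a.version());   // 3 (name-based)
    }
}
```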
View File

@@ -0,0 +1,31 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.hypervisor.kvm.storage;
import org.junit.Assert;
import org.junit.Test;
public class ScaleIOStorageAdaptorTest {
@Test
public void getUsableBytesFromRawBytesTest() {
Assert.assertEquals("Overhead calculated for 8Gi size", 8554774528L, ScaleIOStorageAdaptor.getUsableBytesFromRawBytes(8L << 30));
Assert.assertEquals("Overhead calculated for 4Ti size", 4294130925568L, ScaleIOStorageAdaptor.getUsableBytesFromRawBytes(4000L << 30));
Assert.assertEquals("Overhead calculated for 500Gi size", 536737005568L, ScaleIOStorageAdaptor.getUsableBytesFromRawBytes(500L << 30));
Assert.assertEquals("Unsupported small size", 0, ScaleIOStorageAdaptor.getUsableBytesFromRawBytes(1L));
}
}

View File

@@ -0,0 +1,71 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.cloudstack.utils.cryptsetup;
import org.apache.cloudstack.secret.PassphraseVO;
import org.junit.Assert;
import org.junit.Assume;
import org.junit.Before;
import org.junit.Test;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;
public class CryptSetupTest {
CryptSetup cryptSetup = new CryptSetup();
@Before
public void setup() {
Assume.assumeTrue(cryptSetup.isSupported());
}
@Test
public void cryptSetupTest() throws IOException, CryptSetupException {
Set<PosixFilePermission> permissions = PosixFilePermissions.fromString("rw-------");
Path path = Files.createTempFile("cryptsetup", ".tmp", PosixFilePermissions.asFileAttribute(permissions));
// create a 10MB file to use as a crypt device
RandomAccessFile file = new RandomAccessFile(path.toFile(), "rw");
file.setLength(10<<20);
file.close();
String filePath = path.toAbsolutePath().toString();
PassphraseVO passphrase = new PassphraseVO();
cryptSetup.luksFormat(passphrase.getPassphrase(), CryptSetup.LuksType.LUKS, filePath);
Assert.assertTrue(cryptSetup.isLuks(filePath));
Assert.assertTrue(Files.deleteIfExists(path));
}
@Test
public void cryptSetupNonLuksTest() throws IOException {
Set<PosixFilePermission> permissions = PosixFilePermissions.fromString("rw-------");
Path path = Files.createTempFile("cryptsetup", ".tmp",PosixFilePermissions.asFileAttribute(permissions));
Assert.assertFalse(cryptSetup.isLuks(path.toAbsolutePath().toString()));
Assert.assertTrue(Files.deleteIfExists(path));
}
}

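For context on what `luksFormat` wraps: the CryptSetup class shells out to the `cryptsetup` binary, and a plausible command line for formatting a device with a key file looks like the sketch below. The exact flag set the real class uses is an assumption here; `--batch-mode`, `--type`, and `--key-file` are standard cryptsetup(8) options:

```java
import java.util.ArrayList;
import java.util.List;

public class CryptSetupSketch {
    // Hypothetical command-line assembly for luksFormat. The real wrapper's
    // flag choices may differ; this only illustrates the shape of the call.
    static List<String> luksFormatArgs(String keyFile, String type, String device) {
        List<String> args = new ArrayList<>();
        args.add("cryptsetup");
        args.add("--batch-mode");   // skip the interactive "YES" confirmation
        args.add("luksFormat");
        args.add("--type");
        args.add(type);             // e.g. "luks" (LUKS1) or "luks2"
        args.add("--key-file");
        args.add(keyFile);          // passphrase material in a root-only temp file
        args.add(device);
        return args;
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ",
                luksFormatArgs("/dev/shm/key", "luks", "/dev/vg/vol")));
    }
}
```

Passing the passphrase via a key file rather than a command-line argument keeps it out of the process list and shell history.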
View File

@@ -0,0 +1,49 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.cloudstack.utils.cryptsetup;
import org.junit.Assert;
import org.junit.Test;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
public class KeyFileTest {
@Test
public void keyFileTest() throws IOException {
byte[] contents = "the quick brown fox".getBytes();
KeyFile keyFile = new KeyFile(contents);
System.out.printf("New test KeyFile at %s%n", keyFile);
Path path = keyFile.getPath();
Assert.assertTrue(keyFile.isSet());
// check contents
byte[] fileContents = Files.readAllBytes(path);
Assert.assertArrayEquals(contents, fileContents);
// delete file on close
keyFile.close();
Assert.assertFalse("key file was not cleaned up", Files.exists(path));
Assert.assertFalse("key file is still set", keyFile.isSet());
}
}

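The KeyFile contract exercised by this test — write key material to a private temp file, expose the path to an external tool, and guarantee deletion on close — can be sketched as a small `Closeable`. This is a simplified illustration, not the real class (for one thing, the real KeyFile should also restrict file permissions to the owner):

```java
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of the KeyFile idea, inferred from the test's assertions.
public class KeyFileSketch implements Closeable {
    private Path path;

    public KeyFileSketch(byte[] contents) throws IOException {
        path = Files.createTempFile("keyfile", ".tmp");
        Files.write(path, contents);
    }

    public Path getPath() { return path; }

    public boolean isSet() { return path != null; }

    @Override
    public void close() throws IOException {
        if (path != null) {
            Files.deleteIfExists(path);  // remove key material from disk
            path = null;                 // isSet() becomes false
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] key = "the quick brown fox".getBytes();
        // try-with-resources guarantees cleanup even if the caller throws
        try (KeyFileSketch kf = new KeyFileSketch(key)) {
            System.out.println(new String(Files.readAllBytes(kf.getPath())));
        }
    }
}
```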
View File

@@ -0,0 +1,61 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.cloudstack.utils.qemu;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import java.util.Arrays;
import java.util.Collection;
@RunWith(Parameterized.class)
public class QemuImageOptionsTest {
@Parameterized.Parameters
public static Collection<Object[]> data() {
String imagePath = "/path/to/file";
String secretName = "secretname";
return Arrays.asList(new Object[][] {
{ null, imagePath, null, new String[]{"--image-opts","file.filename=/path/to/file"} },
{ QemuImg.PhysicalDiskFormat.QCOW2, imagePath, null, new String[]{"--image-opts",String.format("driver=qcow2,file.filename=%s", imagePath)} },
{ QemuImg.PhysicalDiskFormat.RAW, imagePath, secretName, new String[]{"--image-opts",String.format("driver=raw,file.filename=%s", imagePath)} },
{ QemuImg.PhysicalDiskFormat.QCOW2, imagePath, secretName, new String[]{"--image-opts", String.format("driver=qcow2,encrypt.key-secret=%s,file.filename=%s", secretName, imagePath)} },
{ QemuImg.PhysicalDiskFormat.LUKS, imagePath, secretName, new String[]{"--image-opts", String.format("driver=luks,file.filename=%s,key-secret=%s", imagePath, secretName)} }
});
}
public QemuImageOptionsTest(QemuImg.PhysicalDiskFormat format, String filePath, String secretName, String[] expected) {
this.format = format;
this.filePath = filePath;
this.secretName = secretName;
this.expected = expected;
}
private final QemuImg.PhysicalDiskFormat format;
private final String filePath;
private final String secretName;
private final String[] expected;
@Test
public void qemuImageOptionsFileNameTest() {
QemuImageOptions options = new QemuImageOptions(format, filePath, secretName);
Assert.assertEquals(expected, options.toCommandFlag());
}
}

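One detail worth noting in the parameter table above: qemu places the key secret differently depending on the driver. For qcow2 the encryption settings are nested under `encrypt.*` (the LUKS header lives inside the qcow2 container), while for the raw `luks` driver `key-secret` is a top-level option. A trivial sketch of the qcow2 case, with a hypothetical helper name (not the real QemuImageOptions API):

```java
public class ImageOptsSketch {
    // Illustrative composition of a qemu "--image-opts" value for an
    // encrypted qcow2 image, matching the fourth row of the test table.
    static String qcow2EncryptedOpts(String secretName, String path) {
        return String.format("driver=qcow2,encrypt.key-secret=%s,file.filename=%s",
                secretName, path);
    }

    public static void main(String[] args) {
        System.out.println(qcow2EncryptedOpts("sec0", "/path/to/file"));
        // driver=qcow2,encrypt.key-secret=sec0,file.filename=/path/to/file
    }
}
```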
View File

@@ -18,21 +18,27 @@ package org.apache.cloudstack.utils.qemu;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 import java.io.File;
 import com.cloud.utils.script.Script;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.List;
 import java.util.Map;
 import java.util.UUID;
+import org.junit.Assert;
 import org.junit.Ignore;
 import org.junit.Test;
 import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
 import org.libvirt.LibvirtException;
 @Ignore
 public class QemuImgTest {
@@ -94,7 +100,34 @@ public class QemuImgTest {
     }
     @Test
-    public void testCreateSparseVolume() throws QemuImgException {
+    public void testCreateWithSecretObject() throws QemuImgException, LibvirtException {
+        Path testFile = Paths.get("/tmp/", UUID.randomUUID().toString()).normalize().toAbsolutePath();
+        long size = 1<<30; // 1 Gi
+        Map<QemuObject.ObjectParameter, String> objectParams = new HashMap<>();
+        objectParams.put(QemuObject.ObjectParameter.ID, "sec0");
+        objectParams.put(QemuObject.ObjectParameter.DATA, UUID.randomUUID().toString());
+        Map<String, String> options = new HashMap<String, String>();
+        options.put(QemuImg.ENCRYPT_FORMAT, "luks");
+        options.put(QemuImg.ENCRYPT_KEY_SECRET, "sec0");
+        List<QemuObject> qObjects = new ArrayList<>();
+        qObjects.add(new QemuObject(QemuObject.ObjectType.SECRET, objectParams));
+        QemuImgFile file = new QemuImgFile(testFile.toString(), size, PhysicalDiskFormat.QCOW2);
+        QemuImg qemu = new QemuImg(0);
+        qemu.create(file, null, options, qObjects);
+        Map<String, String> info = qemu.info(file);
+        assertEquals("yes", info.get("encrypted"));
+        assertTrue(testFile.toFile().delete());
+    }
+    @Test
+    public void testCreateSparseVolume() throws QemuImgException, LibvirtException {
         String filename = "/tmp/" + UUID.randomUUID() + ".qcow2";
         /* 10TB virtual_size */
@@ -204,7 +237,7 @@ public class QemuImgTest {
     }
     @Test(expected = QemuImgException.class)
-    public void testCreateAndResizeFail() throws QemuImgException {
+    public void testCreateAndResizeFail() throws QemuImgException, LibvirtException {
         String filename = "/tmp/" + UUID.randomUUID() + ".qcow2";
         long startSize = 20480;
@@ -224,7 +257,7 @@ public class QemuImgTest {
     }
     @Test(expected = QemuImgException.class)
-    public void testCreateAndResizeZero() throws QemuImgException {
+    public void testCreateAndResizeZero() throws QemuImgException, LibvirtException {
         String filename = "/tmp/" + UUID.randomUUID() + ".qcow2";
         long startSize = 20480;
@@ -317,4 +350,22 @@ public class QemuImgTest {
         df.delete();
     }
+    @Test
+    public void testHelpSupportsImageFormat() throws QemuImgException, LibvirtException {
+        String partialHelp = "Parameters to dd subcommand:\n" +
+                "  'bs=BYTES' read and write up to BYTES bytes at a time (default: 512)\n" +
+                "  'count=N' copy only N input blocks\n" +
+                "  'if=FILE' read from FILE\n" +
+                "  'of=FILE' write to FILE\n" +
+                "  'skip=N' skip N bs-sized blocks at the start of input\n" +
+                "\n" +
+                "Supported formats: cloop copy-on-read file ftp ftps host_cdrom host_device https iser luks nbd nvme parallels qcow qcow2 qed quorum raw rbd ssh throttle vdi vhdx vmdk vpc vvfat\n" +
+                "\n" +
+                "See <https://qemu.org/contribute/report-a-bug> for how to report bugs.\n" +
+                "More information on the QEMU project at <https://qemu.org>.";
+        Assert.assertTrue("should support luks", QemuImg.helpSupportsImageFormat(partialHelp, PhysicalDiskFormat.LUKS));
+        Assert.assertTrue("should support qcow2", QemuImg.helpSupportsImageFormat(partialHelp, PhysicalDiskFormat.QCOW2));
+        Assert.assertFalse("should not support http", QemuImg.helpSupportsImageFormat(partialHelp, PhysicalDiskFormat.SHEEPDOG));
+    }
 }

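The `testHelpSupportsImageFormat` case above suggests format detection works by scanning the "Supported formats:" line of `qemu-img --help` output for the format token. A plausible reading of that logic, with a hypothetical helper name (the real method may parse differently):

```java
import java.util.Arrays;

public class HelpFormatsSketch {
    // Sketch of helpSupportsImageFormat: find the "Supported formats:" line
    // in qemu-img help text and check it for the exact format token.
    static boolean supportsFormat(String helpText, String format) {
        for (String line : helpText.split("\n")) {
            if (line.startsWith("Supported formats:")) {
                String[] formats = line.substring("Supported formats:".length())
                        .trim().split(" ");
                // exact-token match avoids "qcow" matching "qcow2"
                return Arrays.asList(formats).contains(format);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String help = "Supported formats: luks qcow2 raw\n";
        System.out.println(supportsFormat(help, "luks"));   // true
        System.out.println(supportsFormat(help, "http"));   // false
    }
}
```

Probing the help text at runtime lets the agent refuse encryption operations gracefully on hosts whose qemu-img build lacks the `luks` driver.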
View File

@@ -0,0 +1,41 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.utils.qemu;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.junit.MockitoJUnitRunner;
import java.util.HashMap;
import java.util.Map;
@RunWith(MockitoJUnitRunner.class)
public class QemuObjectTest {
@Test
public void ToStringTest() {
Map<QemuObject.ObjectParameter, String> params = new HashMap<>();
params.put(QemuObject.ObjectParameter.ID, "sec0");
params.put(QemuObject.ObjectParameter.FILE, "/dev/shm/file");
QemuObject qObject = new QemuObject(QemuObject.ObjectType.SECRET, params);
String[] flag = qObject.toCommandFlag();
Assert.assertEquals(2, flag.length);
Assert.assertEquals("--object", flag[0]);
Assert.assertEquals("secret,file=/dev/shm/file,id=sec0", flag[1]);
}
}

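As the test above shows, a QemuObject renders as `--object` followed by the object type and comma-joined `key=value` parameters. The shape can be sketched independently of the CloudStack class; this helper name is illustrative, and it sorts the parameters for a stable order, whereas the real class's ordering may differ:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class QemuObjectSketch {
    // Sketch of a qemu "--object" flag: type first, then key=value pairs,
    // e.g. "--object secret,file=/dev/shm/file,id=sec0".
    static String[] objectFlag(String type, Map<String, String> params) {
        String body = type + "," + params.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(","));
        return new String[] {"--object", body};
    }

    public static void main(String[] args) {
        Map<String, String> params = new TreeMap<>();  // sorted key order
        params.put("id", "sec0");
        params.put("file", "/dev/shm/file");
        System.out.println(String.join(" ", objectFlag("secret", params)));
        // --object secret,file=/dev/shm/file,id=sec0
    }
}
```

Feeding the secret to qemu-img as a `secret` object backed by a file keeps passphrase material off the command line proper.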
View File

@@ -96,6 +96,8 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDriver {
    }
    private static final Logger s_logger = Logger.getLogger(CloudStackPrimaryDataStoreDriverImpl.class);
    private static final String NO_REMOTE_ENDPOINT_WITH_ENCRYPTION = "No remote endpoint to send command, unable to find a valid endpoint. Requires encryption support: %s";
    @Inject
    DiskOfferingDao diskOfferingDao;
    @Inject
@@ -141,10 +143,11 @@
        }
        CreateObjectCommand cmd = new CreateObjectCommand(volume.getTO());
        boolean encryptionRequired = anyVolumeRequiresEncryption(volume);
        EndPoint ep = epSelector.select(volume, encryptionRequired);
        Answer answer = null;
        if (ep == null) {
            String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired);
            s_logger.error(errMsg);
            answer = new Answer(cmd, false, errMsg);
        } else {
@@ -203,9 +206,6 @@
            } else {
                result.setAnswer(answer);
            }
        } catch (Exception e) {
            s_logger.debug("failed to create volume", e);
            errMsg = e.toString();
@@ -263,6 +263,8 @@
    @Override
    public void copyAsync(DataObject srcdata, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback) {
        s_logger.debug(String.format("Copying volume %s(%s) to %s(%s)", srcdata.getId(), srcdata.getType(), destData.getId(), destData.getType()));
        boolean encryptionRequired = anyVolumeRequiresEncryption(srcdata, destData);
        DataStore store = destData.getDataStore();
        if (store.getRole() == DataStoreRole.Primary) {
            if ((srcdata.getType() == DataObjectType.TEMPLATE && destData.getType() == DataObjectType.TEMPLATE)) {
@@ -283,13 +285,14 @@
                DataObject srcData = templateDataFactory.getTemplate(srcdata.getId(), imageStore);
                CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), primaryStorageDownloadWait, true);
                EndPoint ep = epSelector.select(srcData, destData, encryptionRequired);
                Answer answer = null;
                if (ep == null) {
                    String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired);
                    s_logger.error(errMsg);
                    answer = new Answer(cmd, false, errMsg);
                } else {
                    s_logger.debug(String.format("Sending copy command to endpoint %s, where encryption support is %s", ep.getHostAddr(), encryptionRequired ? "required" : "not required"));
                    answer = ep.sendMessage(cmd);
                }
                CopyCommandResult result = new CopyCommandResult("", answer);
@@ -297,10 +300,10 @@
            } else if (srcdata.getType() == DataObjectType.SNAPSHOT && destData.getType() == DataObjectType.VOLUME) {
                SnapshotObjectTO srcTO = (SnapshotObjectTO) srcdata.getTO();
                CopyCommand cmd = new CopyCommand(srcTO, destData.getTO(), StorageManager.PRIMARY_STORAGE_DOWNLOAD_WAIT.value(), true);
                EndPoint ep = epSelector.select(srcdata, destData, encryptionRequired);
                CopyCmdAnswer answer = null;
                if (ep == null) {
                    String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired);
                    s_logger.error(errMsg);
                    answer = new CopyCmdAnswer(errMsg);
                } else {
@@ -345,6 +348,7 @@
    @Override
    public void takeSnapshot(SnapshotInfo snapshot, AsyncCompletionCallback<CreateCmdResult> callback) {
        CreateCmdResult result = null;
        s_logger.debug("Taking snapshot of " + snapshot);
        try {
            SnapshotObjectTO snapshotTO = (SnapshotObjectTO) snapshot.getTO();
            Object payload = snapshot.getPayload();
@@ -353,10 +357,13 @@
                snapshotTO.setQuiescevm(snapshotPayload.getQuiescevm());
            }
            boolean encryptionRequired = anyVolumeRequiresEncryption(snapshot);
            CreateObjectCommand cmd = new CreateObjectCommand(snapshotTO);
            EndPoint ep = epSelector.select(snapshot, StorageAction.TAKESNAPSHOT, encryptionRequired);
            Answer answer = null;
            s_logger.debug("Taking snapshot of " + snapshot + " and encryption required is " + encryptionRequired);
            if (ep == null) {
                String errMsg = "No remote endpoint to send createObjectCommand, check if host or ssvm is down?";
                s_logger.error(errMsg);
@@ -419,16 +426,22 @@
        VolumeObject vol = (VolumeObject) data;
        StoragePool pool = (StoragePool) data.getDataStore();
        ResizeVolumePayload resizeParameter = (ResizeVolumePayload) vol.getpayload();
        boolean encryptionRequired = anyVolumeRequiresEncryption(vol);
        long[] endpointsToRunResize = resizeParameter.hosts;
        // if hosts are provided, they are where the VM last ran. We can use that.
        if (endpointsToRunResize == null || endpointsToRunResize.length == 0) {
            EndPoint ep = epSelector.select(data, encryptionRequired);
            endpointsToRunResize = new long[] {ep.getId()};
        }
        ResizeVolumeCommand resizeCmd = new ResizeVolumeCommand(vol.getPath(), new StorageFilerTO(pool), vol.getSize(),
                resizeParameter.newSize, resizeParameter.shrinkOk, resizeParameter.instanceName, vol.getChainInfo(), vol.getPassphrase(), vol.getEncryptFormat());
        if (pool.getParent() != 0) {
            resizeCmd.setContextParam(DiskTO.PROTOCOL_TYPE, Storage.StoragePoolType.DatastoreCluster.toString());
        }
        CreateCmdResult result = new CreateCmdResult(null, null);
        try {
            ResizeVolumeAnswer answer = (ResizeVolumeAnswer) storageMgr.sendToPool(pool, endpointsToRunResize, resizeCmd);
            if (answer != null && answer.getResult()) {
                long finalSize = answer.getNewSize();
                s_logger.debug("Resize: volume started at size: " + toHumanReadableSize(vol.getSize()) + " and ended at size: " + toHumanReadableSize(finalSize));
@@ -447,6 +460,8 @@
        } catch (Exception e) {
            s_logger.debug("sending resize command failed", e);
            result.setResult(e.toString());
        } finally {
            resizeCmd.clearPassphrase();
        }
        callback.complete(result);
@@ -522,4 +537,19 @@
    @Override
    public void provideVmTags(long vmId, long volumeId, String tagValue) {
    }

    /**
     * Does any object require encryption support?
     */
    private boolean anyVolumeRequiresEncryption(DataObject... objects) {
        for (DataObject o : objects) {
            // this fails code smell for returning true twice, but it is more readable than combining all tests into one statement
            if (o instanceof VolumeInfo && ((VolumeInfo) o).getPassphraseId() != null) {
                return true;
            } else if (o instanceof SnapshotInfo && ((SnapshotInfo) o).getBaseVolume().getPassphraseId() != null) {
                return true;
            }
        }
        return false;
    }
}
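The endpoint-selection changes in this file all hinge on `anyVolumeRequiresEncryption`, which treats an object as requiring encryption support when it (or its base volume) carries a passphrase reference. A simplified, self-contained sketch of that predicate (the `DataObject` type here is an illustrative stand-in; the production code checks `VolumeInfo` and `SnapshotInfo` instead):

```java
import java.util.Arrays;

// Simplified model of the driver's anyVolumeRequiresEncryption helper:
// any object carrying a passphrase id forces selection of an
// encryption-capable endpoint. Types here are illustrative stand-ins.
public class EncryptionCheck {
    static class DataObject {
        final Long passphraseId; // null means "no encryption configured"
        DataObject(Long passphraseId) { this.passphraseId = passphraseId; }
    }

    static boolean anyRequiresEncryption(DataObject... objects) {
        return Arrays.stream(objects).anyMatch(o -> o.passphraseId != null);
    }

    public static void main(String[] args) {
        DataObject plain = new DataObject(null);
        DataObject encrypted = new DataObject(42L);
        System.out.println(anyRequiresEncryption(plain));            // false
        System.out.println(anyRequiresEncryption(plain, encrypted)); // true
    }
}
```

The varargs shape lets callers pass a single volume, a source/destination pair, or nothing at all, in which case the answer is false.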

View File

@ -22,6 +22,12 @@ import java.util.Map;
import javax.inject.Inject; import javax.inject.Inject;
import com.cloud.agent.api.storage.ResizeVolumeCommand;
import com.cloud.agent.api.to.StorageFilerTO;
import com.cloud.host.HostVO;
import com.cloud.vm.VMInstanceVO;
import com.cloud.vm.VirtualMachine;
import com.cloud.vm.dao.VMInstanceDao;
import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo; import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult; import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult;
import org.apache.cloudstack.engine.subsystem.api.storage.CreateCmdResult; import org.apache.cloudstack.engine.subsystem.api.storage.CreateCmdResult;
@ -41,6 +47,7 @@ import org.apache.cloudstack.storage.RemoteHostEndPoint;
import org.apache.cloudstack.storage.command.CommandResult; import org.apache.cloudstack.storage.command.CommandResult;
import org.apache.cloudstack.storage.command.CopyCommand; import org.apache.cloudstack.storage.command.CopyCommand;
import org.apache.cloudstack.storage.command.CreateObjectAnswer; import org.apache.cloudstack.storage.command.CreateObjectAnswer;
import org.apache.cloudstack.storage.command.CreateObjectCommand;
import org.apache.cloudstack.storage.datastore.api.StoragePoolStatistics; import org.apache.cloudstack.storage.datastore.api.StoragePoolStatistics;
import org.apache.cloudstack.storage.datastore.api.VolumeStatistics; import org.apache.cloudstack.storage.datastore.api.VolumeStatistics;
import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClient; import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClient;
@ -53,6 +60,8 @@ import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
import org.apache.cloudstack.storage.datastore.db.StoragePoolVO; import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil; import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
import org.apache.cloudstack.storage.to.SnapshotObjectTO; import org.apache.cloudstack.storage.to.SnapshotObjectTO;
import org.apache.cloudstack.storage.to.VolumeObjectTO;
import org.apache.cloudstack.storage.volume.VolumeObject;
import org.apache.commons.collections.CollectionUtils; import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils; import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger; import org.apache.log4j.Logger;
@ -64,6 +73,7 @@ import com.cloud.agent.api.to.DataTO;
import com.cloud.alert.AlertManager; import com.cloud.alert.AlertManager;
import com.cloud.configuration.Config; import com.cloud.configuration.Config;
import com.cloud.host.Host; import com.cloud.host.Host;
import com.cloud.host.dao.HostDao;
import com.cloud.server.ManagementServerImpl; import com.cloud.server.ManagementServerImpl;
import com.cloud.storage.DataStoreRole; import com.cloud.storage.DataStoreRole;
import com.cloud.storage.ResizeVolumePayload; import com.cloud.storage.ResizeVolumePayload;
@ -112,6 +122,10 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
private AlertManager alertMgr; private AlertManager alertMgr;
@Inject @Inject
private ConfigurationDao configDao; private ConfigurationDao configDao;
@Inject
private HostDao hostDao;
@Inject
private VMInstanceDao vmInstanceDao;
public ScaleIOPrimaryDataStoreDriver() { public ScaleIOPrimaryDataStoreDriver() {
@ -187,6 +201,11 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
} }
} }
private boolean grantAccess(DataObject dataObject, EndPoint ep, DataStore dataStore) {
Host host = hostDao.findById(ep.getId());
return grantAccess(dataObject, host, dataStore);
}
@Override @Override
public void revokeAccess(DataObject dataObject, Host host, DataStore dataStore) { public void revokeAccess(DataObject dataObject, Host host, DataStore dataStore) {
try { try {
@ -229,6 +248,11 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
} }
} }
private void revokeAccess(DataObject dataObject, EndPoint ep, DataStore dataStore) {
Host host = hostDao.findById(ep.getId());
revokeAccess(dataObject, host, dataStore);
}
private String getConnectedSdc(long poolId, long hostId) { private String getConnectedSdc(long poolId, long hostId) {
try { try {
StoragePoolHostVO poolHostVO = storagePoolHostDao.findByPoolHost(poolId, hostId); StoragePoolHostVO poolHostVO = storagePoolHostDao.findByPoolHost(poolId, hostId);
@ -414,7 +438,7 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
} }
} }
private String createVolume(VolumeInfo volumeInfo, long storagePoolId) { private CreateObjectAnswer createVolume(VolumeInfo volumeInfo, long storagePoolId) {
LOGGER.debug("Creating PowerFlex volume"); LOGGER.debug("Creating PowerFlex volume");
StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId); StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId);
@ -447,7 +471,8 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
volume.setPoolType(Storage.StoragePoolType.PowerFlex); volume.setPoolType(Storage.StoragePoolType.PowerFlex);
volume.setFormat(Storage.ImageFormat.RAW); volume.setFormat(Storage.ImageFormat.RAW);
volume.setPoolId(storagePoolId); volume.setPoolId(storagePoolId);
volumeDao.update(volume.getId(), volume); VolumeObject createdObject = VolumeObject.getVolumeObject(volumeInfo.getDataStore(), volume);
createdObject.update();
long capacityBytes = storagePool.getCapacityBytes(); long capacityBytes = storagePool.getCapacityBytes();
long usedBytes = storagePool.getUsedBytes(); long usedBytes = storagePool.getUsedBytes();
@ -455,7 +480,35 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
storagePool.setUsedBytes(usedBytes > capacityBytes ? capacityBytes : usedBytes); storagePool.setUsedBytes(usedBytes > capacityBytes ? capacityBytes : usedBytes);
storagePoolDao.update(storagePoolId, storagePool); storagePoolDao.update(storagePoolId, storagePool);
return volumePath; CreateObjectAnswer answer = new CreateObjectAnswer(createdObject.getTO());
// if volume needs to be set up with encryption, do it now if it's not a root disk (which gets done during template copy)
if (anyVolumeRequiresEncryption(volumeInfo) && !volumeInfo.getVolumeType().equals(Volume.Type.ROOT)) {
LOGGER.debug(String.format("Setting up encryption for volume %s", volumeInfo.getId()));
VolumeObjectTO prepVolume = (VolumeObjectTO) createdObject.getTO();
prepVolume.setPath(volumePath);
prepVolume.setUuid(volumePath);
CreateObjectCommand cmd = new CreateObjectCommand(prepVolume);
EndPoint ep = selector.select(volumeInfo, true);
if (ep == null) {
throw new CloudRuntimeException("No remote endpoint to send PowerFlex volume encryption preparation");
} else {
try {
grantAccess(createdObject, ep, volumeInfo.getDataStore());
answer = (CreateObjectAnswer) ep.sendMessage(cmd);
if (!answer.getResult()) {
throw new CloudRuntimeException("Failed to set up encryption on PowerFlex volume: " + answer.getDetails());
}
} finally {
revokeAccess(createdObject, ep, volumeInfo.getDataStore());
prepVolume.clearPassphrase();
}
}
} else {
LOGGER.debug(String.format("No encryption configured for data volume %s", volumeInfo));
}
return answer;
} catch (Exception e) { } catch (Exception e) {
String errMsg = "Unable to create PowerFlex Volume due to " + e.getMessage(); String errMsg = "Unable to create PowerFlex Volume due to " + e.getMessage();
LOGGER.warn(errMsg); LOGGER.warn(errMsg);
@ -511,16 +564,21 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
public void createAsync(DataStore dataStore, DataObject dataObject, AsyncCompletionCallback<CreateCmdResult> callback) { public void createAsync(DataStore dataStore, DataObject dataObject, AsyncCompletionCallback<CreateCmdResult> callback) {
String scaleIOVolumePath = null; String scaleIOVolumePath = null;
String errMsg = null; String errMsg = null;
Answer answer = new Answer(null, false, "not started");
try { try {
if (dataObject.getType() == DataObjectType.VOLUME) { if (dataObject.getType() == DataObjectType.VOLUME) {
LOGGER.debug("createAsync - creating volume"); LOGGER.debug("createAsync - creating volume");
scaleIOVolumePath = createVolume((VolumeInfo) dataObject, dataStore.getId()); CreateObjectAnswer createAnswer = createVolume((VolumeInfo) dataObject, dataStore.getId());
scaleIOVolumePath = createAnswer.getData().getPath();
answer = createAnswer;
} else if (dataObject.getType() == DataObjectType.TEMPLATE) { } else if (dataObject.getType() == DataObjectType.TEMPLATE) {
LOGGER.debug("createAsync - creating template"); LOGGER.debug("createAsync - creating template");
scaleIOVolumePath = createTemplateVolume((TemplateInfo)dataObject, dataStore.getId()); scaleIOVolumePath = createTemplateVolume((TemplateInfo)dataObject, dataStore.getId());
answer = new Answer(null, true, "created template");
} else { } else {
errMsg = "Invalid DataObjectType (" + dataObject.getType() + ") passed to createAsync"; errMsg = "Invalid DataObjectType (" + dataObject.getType() + ") passed to createAsync";
LOGGER.error(errMsg); LOGGER.error(errMsg);
answer = new Answer(null, false, errMsg);
} }
} catch (Exception ex) { } catch (Exception ex) {
errMsg = ex.getMessage(); errMsg = ex.getMessage();
@ -528,10 +586,11 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
if (callback == null) { if (callback == null) {
throw ex; throw ex;
} }
answer = new Answer(null, false, errMsg);
} }
if (callback != null) { if (callback != null) {
CreateCmdResult result = new CreateCmdResult(scaleIOVolumePath, new Answer(null, errMsg == null, errMsg)); CreateCmdResult result = new CreateCmdResult(scaleIOVolumePath, answer);
result.setResult(errMsg); result.setResult(errMsg);
callback.complete(result); callback.complete(result);
} }
@ -606,6 +665,7 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) { public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
Answer answer = null; Answer answer = null;
String errMsg = null; String errMsg = null;
CopyCommandResult result;
try { try {
DataStore srcStore = srcData.getDataStore(); DataStore srcStore = srcData.getDataStore();
@ -613,51 +673,72 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
if (srcStore.getRole() == DataStoreRole.Primary && (destStore.getRole() == DataStoreRole.Primary && destData.getType() == DataObjectType.VOLUME)) { if (srcStore.getRole() == DataStoreRole.Primary && (destStore.getRole() == DataStoreRole.Primary && destData.getType() == DataObjectType.VOLUME)) {
if (srcData.getType() == DataObjectType.TEMPLATE) { if (srcData.getType() == DataObjectType.TEMPLATE) {
answer = copyTemplateToVolume(srcData, destData, destHost); answer = copyTemplateToVolume(srcData, destData, destHost);
if (answer == null) {
errMsg = "No answer for copying template to PowerFlex volume";
} else if (!answer.getResult()) {
errMsg = answer.getDetails();
}
} else if (srcData.getType() == DataObjectType.VOLUME) { } else if (srcData.getType() == DataObjectType.VOLUME) {
if (isSameScaleIOStorageInstance(srcStore, destStore)) { if (isSameScaleIOStorageInstance(srcStore, destStore)) {
answer = migrateVolume(srcData, destData); answer = migrateVolume(srcData, destData);
} else { } else {
answer = copyVolume(srcData, destData, destHost); answer = copyVolume(srcData, destData, destHost);
} }
if (answer == null) {
errMsg = "No answer for migrate PowerFlex volume";
} else if (!answer.getResult()) {
errMsg = answer.getDetails();
}
} else { } else {
errMsg = "Unsupported copy operation from src object: (" + srcData.getType() + ", " + srcData.getDataStore() + "), dest object: (" errMsg = "Unsupported copy operation from src object: (" + srcData.getType() + ", " + srcData.getDataStore() + "), dest object: ("
+ destData.getType() + ", " + destData.getDataStore() + ")"; + destData.getType() + ", " + destData.getDataStore() + ")";
LOGGER.warn(errMsg); LOGGER.warn(errMsg);
answer = new Answer(null, false, errMsg);
} }
} else { } else {
errMsg = "Unsupported copy operation"; errMsg = "Unsupported copy operation";
LOGGER.warn(errMsg);
answer = new Answer(null, false, errMsg);
} }
} catch (Exception e) { } catch (Exception e) {
LOGGER.debug("Failed to copy due to " + e.getMessage(), e); LOGGER.debug("Failed to copy due to " + e.getMessage(), e);
errMsg = e.toString(); errMsg = e.toString();
answer = new Answer(null, false, errMsg);
} }
CopyCommandResult result = new CopyCommandResult(null, answer); result = new CopyCommandResult(null, answer);
result.setResult(errMsg);
callback.complete(result); callback.complete(result);
} }
/**
* Responsible for copying template on ScaleIO primary to root disk
* @param srcData dataobject representing the template
* @param destData dataobject representing the target root disk
* @param destHost host to use for copy
* @return answer
*/
private Answer copyTemplateToVolume(DataObject srcData, DataObject destData, Host destHost) { private Answer copyTemplateToVolume(DataObject srcData, DataObject destData, Host destHost) {
/* If encryption is requested, since the template object is not encrypted we need to grow the destination disk to accommodate the new headers.
* Data stores of file type happen automatically, but block device types have to handle it. Unfortunately for ScaleIO this means we add a whole 8GB to
* the original size, but only if we are close to an 8GB boundary.
*/
LOGGER.debug(String.format("Copying template %s to volume %s", srcData.getId(), destData.getId()));
VolumeInfo destInfo = (VolumeInfo) destData;
boolean encryptionRequired = anyVolumeRequiresEncryption(destData);
if (encryptionRequired) {
if (needsExpansionForEncryptionHeader(srcData.getSize(), destData.getSize())) {
long newSize = destData.getSize() + (1<<30);
LOGGER.debug(String.format("Destination volume %s(%s) is configured for encryption. Resizing to fit headers, new size %s will be rounded up to nearest 8Gi", destInfo.getId(), destData.getSize(), newSize));
ResizeVolumePayload p = new ResizeVolumePayload(newSize, destInfo.getMinIops(), destInfo.getMaxIops(),
destInfo.getHypervisorSnapshotReserve(), false, destInfo.getAttachedVmName(), null, true);
destInfo.addPayload(p);
resizeVolume(destInfo);
} else {
LOGGER.debug(String.format("Template %s has size %s, ok for volume %s with size %s", srcData.getId(), srcData.getSize(), destData.getId(), destData.getSize()));
}
} else {
LOGGER.debug(String.format("Destination volume is not configured for encryption, skipping encryption prep. Volume: %s", destData.getId()));
}
// Copy PowerFlex/ScaleIO template to volume // Copy PowerFlex/ScaleIO template to volume
LOGGER.debug(String.format("Initiating copy from PowerFlex template volume on host %s", destHost != null ? destHost.getId() : "<not specified>")); LOGGER.debug(String.format("Initiating copy from PowerFlex template volume on host %s", destHost != null ? destHost.getId() : "<not specified>"));
int primaryStorageDownloadWait = StorageManager.PRIMARY_STORAGE_DOWNLOAD_WAIT.value(); int primaryStorageDownloadWait = StorageManager.PRIMARY_STORAGE_DOWNLOAD_WAIT.value();
CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), primaryStorageDownloadWait, VirtualMachineManager.ExecuteInSequence.value()); CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), primaryStorageDownloadWait, VirtualMachineManager.ExecuteInSequence.value());
Answer answer = null; Answer answer = null;
EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcData.getDataStore()); EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcData, encryptionRequired);
if (ep == null) { if (ep == null) {
String errorMsg = "No remote endpoint to send command, check if host or ssvm is down?"; String errorMsg = String.format("No remote endpoint to send command, unable to find a valid endpoint. Requires encryption support: %s", encryptionRequired);
LOGGER.error(errorMsg); LOGGER.error(errorMsg);
answer = new Answer(cmd, false, errorMsg); answer = new Answer(cmd, false, errorMsg);
} else { } else {
@ -676,9 +757,10 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), copyVolumeWait, VirtualMachineManager.ExecuteInSequence.value()); CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), copyVolumeWait, VirtualMachineManager.ExecuteInSequence.value());
Answer answer = null; Answer answer = null;
EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcData.getDataStore()); boolean encryptionRequired = anyVolumeRequiresEncryption(srcData, destData);
EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcData, encryptionRequired);
if (ep == null) { if (ep == null) {
String errorMsg = "No remote endpoint to send command, check if host or ssvm is down?"; String errorMsg = String.format("No remote endpoint to send command, unable to find a valid endpoint. Requires encryption support: %s", encryptionRequired);
LOGGER.error(errorMsg); LOGGER.error(errorMsg);
answer = new Answer(cmd, false, errorMsg); answer = new Answer(cmd, false, errorMsg);
} else { } else {
@ -824,6 +906,7 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
try { try {
String scaleIOVolumeId = ScaleIOUtil.getVolumePath(volumeInfo.getPath()); String scaleIOVolumeId = ScaleIOUtil.getVolumePath(volumeInfo.getPath());
Long storagePoolId = volumeInfo.getPoolId(); Long storagePoolId = volumeInfo.getPoolId();
final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
ResizeVolumePayload payload = (ResizeVolumePayload)volumeInfo.getpayload(); ResizeVolumePayload payload = (ResizeVolumePayload)volumeInfo.getpayload();
long newSizeInBytes = payload.newSize != null ? payload.newSize : volumeInfo.getSize(); long newSizeInBytes = payload.newSize != null ? payload.newSize : volumeInfo.getSize();
@ -832,13 +915,69 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
throw new CloudRuntimeException("Only increase size is allowed for volume: " + volumeInfo.getName()); throw new CloudRuntimeException("Only increase size is allowed for volume: " + volumeInfo.getName());
} }
org.apache.cloudstack.storage.datastore.api.Volume scaleIOVolume = null; org.apache.cloudstack.storage.datastore.api.Volume scaleIOVolume = client.getVolume(scaleIOVolumeId);
long newSizeInGB = newSizeInBytes / (1024 * 1024 * 1024); long newSizeInGB = newSizeInBytes / (1024 * 1024 * 1024);
long newSizeIn8gbBoundary = (long) (Math.ceil(newSizeInGB / 8.0) * 8.0); long newSizeIn8gbBoundary = (long) (Math.ceil(newSizeInGB / 8.0) * 8.0);
final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
scaleIOVolume = client.resizeVolume(scaleIOVolumeId, (int) newSizeIn8gbBoundary); if (scaleIOVolume.getSizeInKb() == newSizeIn8gbBoundary << 20) {
if (scaleIOVolume == null) { LOGGER.debug("No resize necessary at API");
throw new CloudRuntimeException("Failed to resize volume: " + volumeInfo.getName()); } else {
scaleIOVolume = client.resizeVolume(scaleIOVolumeId, (int) newSizeIn8gbBoundary);
if (scaleIOVolume == null) {
throw new CloudRuntimeException("Failed to resize volume: " + volumeInfo.getName());
}
}
StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId);
boolean attachedRunning = false;
long hostId = 0;
if (payload.instanceName != null) {
VMInstanceVO instance = vmInstanceDao.findVMByInstanceName(payload.instanceName);
if (instance.getState().equals(VirtualMachine.State.Running)) {
hostId = instance.getHostId();
attachedRunning = true;
}
}
if (volumeInfo.getFormat().equals(Storage.ImageFormat.QCOW2) || attachedRunning) {
LOGGER.debug("Volume needs to be resized at the hypervisor host");
if (hostId == 0) {
hostId = selector.select(volumeInfo, true).getId();
}
HostVO host = hostDao.findById(hostId);
if (host == null) {
throw new CloudRuntimeException("Found no hosts to run resize command on");
}
EndPoint ep = RemoteHostEndPoint.getHypervisorHostEndPoint(host);
ResizeVolumeCommand resizeVolumeCommand = new ResizeVolumeCommand(
volumeInfo.getPath(), new StorageFilerTO(storagePool), volumeInfo.getSize(), newSizeInBytes,
payload.shrinkOk, payload.instanceName, volumeInfo.getChainInfo(),
volumeInfo.getPassphrase(), volumeInfo.getEncryptFormat());
try {
if (!attachedRunning) {
grantAccess(volumeInfo, ep, volumeInfo.getDataStore());
}
Answer answer = ep.sendMessage(resizeVolumeCommand);
if (!answer.getResult() && volumeInfo.getFormat().equals(Storage.ImageFormat.QCOW2)) {
throw new CloudRuntimeException("Failed to resize at host: " + answer.getDetails());
} else if (!answer.getResult()) {
// for non-qcow2, notifying the running VM is going to be best-effort since we can't roll back
// or avoid VM seeing a successful change at the PowerFlex volume after e.g. reboot
LOGGER.warn("Resized raw volume, but failed to notify. VM will see change on reboot. Error:" + answer.getDetails());
} else {
LOGGER.debug("Resized volume at host: " + answer.getDetails());
}
} finally {
if (!attachedRunning) {
revokeAccess(volumeInfo, ep, volumeInfo.getDataStore());
}
}
} }
VolumeVO volume = volumeDao.findById(volumeInfo.getId()); VolumeVO volume = volumeDao.findById(volumeInfo.getId());
@ -846,7 +985,6 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
volume.setSize(scaleIOVolume.getSizeInKb() * 1024); volume.setSize(scaleIOVolume.getSizeInKb() * 1024);
volumeDao.update(volume.getId(), volume); volumeDao.update(volume.getId(), volume);
StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId);
long capacityBytes = storagePool.getCapacityBytes(); long capacityBytes = storagePool.getCapacityBytes();
long usedBytes = storagePool.getUsedBytes(); long usedBytes = storagePool.getUsedBytes();
@ -990,4 +1128,27 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
@Override @Override
public void provideVmTags(long vmId, long volumeId, String tagValue) { public void provideVmTags(long vmId, long volumeId, String tagValue) {
} }
/**
* Does the destination size fit the source size plus an encryption header?
* @param srcSize size of source
* @param dstSize size of destination
* @return true if resize is required
*/
private boolean needsExpansionForEncryptionHeader(long srcSize, long dstSize) {
int headerSize = 32<<20; // ensure we have 32MiB for encryption header
return srcSize + headerSize > dstSize;
}
/**
* Does any object require encryption support?
*/
private boolean anyVolumeRequiresEncryption(DataObject ... objects) {
for (DataObject o : objects) {
if (o instanceof VolumeInfo && ((VolumeInfo) o).getPassphraseId() != null) {
return true;
}
}
return false;
}
}

View File

@@ -85,7 +85,7 @@ public final class StorPoolCopyVolumeToSecondaryCommandWrapper extends CommandWr
}
SP_LOG("StorpoolCopyVolumeToSecondaryCommandWrapper.execute: dstName=%s, dstProvisioningType=%s, srcSize=%s, dstUUID=%s, srcUUID=%s", dst.getName(), dst.getProvisioningType(), src.getSize(), dst.getUuid(), src.getUuid());
KVMPhysicalDisk newDisk = destPool.createPhysicalDisk(dst.getUuid(), dst.getProvisioningType(), src.getSize(), null);
SP_LOG("NewDisk path=%s, uuid=%s", newDisk.getPath(), dst.getUuid());
String destPath = newDisk.getPath();
newDisk.setPath(dst.getUuid());

View File

@@ -276,7 +276,7 @@ public class StorPoolStorageAdaptor implements StorageAdaptor {
// The following do not apply for StorpoolStorageAdaptor?
@Override
public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, KVMStoragePool pool, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
SP_LOG("StorpooolStorageAdaptor.createPhysicalDisk: uuid=%s, pool=%s, format=%s, size=%d", volumeUuid, pool, format, size);
throw new UnsupportedOperationException("Creating a physical disk is not supported.");
}
@@ -317,7 +317,7 @@ public class StorPoolStorageAdaptor implements StorageAdaptor {
@Override
public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, PhysicalDiskFormat format,
        ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) {
SP_LOG("StorpooolStorageAdaptor.createDiskFromTemplate: template=%s, name=%s, fmt=%s, ptype=%s, size=%d, dst_pool=%s, to=%d",
        template, name, format, provisioningType, size, destPool.getUuid(), timeout);
throw new UnsupportedOperationException("Creating a disk from a template is not yet supported for this configuration.");
@@ -329,6 +329,11 @@ public class StorPoolStorageAdaptor implements StorageAdaptor {
throw new UnsupportedOperationException("Creating a template from a disk is not yet supported for this configuration.");
}
@Override
public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout, byte[] sourcePassphrase, byte[] destPassphrase, ProvisioningType provisioningType) {
return copyPhysicalDisk(disk, name, destPool, timeout);
}
@Override
public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) {
SP_LOG("StorpooolStorageAdaptor.copyPhysicalDisk: disk=%s, name=%s, dst_pool=%s, to=%d", disk, name, destPool.getUuid(), timeout);
@@ -361,7 +366,7 @@ public class StorPoolStorageAdaptor implements StorageAdaptor {
}
public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name,
        PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) {
SP_LOG("StorpooolStorageAdaptor.createDiskFromTemplateBacking: template=%s, name=%s, dst_pool=%s", template,
        name, destPool.getUuid());
throw new UnsupportedOperationException(

View File

@@ -104,13 +104,13 @@ public class StorPoolStoragePool implements KVMStoragePool {
}
@Override
public KVMPhysicalDisk createPhysicalDisk(String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
return _storageAdaptor.createPhysicalDisk(name, this, format, provisioningType, size, passphrase);
}
@Override
public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) {
return _storageAdaptor.createPhysicalDisk(name, this, null, provisioningType, size, passphrase);
}
@Override

View File

@@ -2948,6 +2948,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
Long zoneId = cmd.getZoneId();
Long volumeId = cmd.getVolumeId();
Long storagePoolId = cmd.getStoragePoolId();
Boolean encrypt = cmd.getEncrypt();
// Keeping this logic consistent with domain specific zones
// if a domainId is provided, we just return the disk offering
// associated with this domain
@@ -2995,6 +2996,10 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
sc.addAnd("name", SearchCriteria.Op.EQ, name);
}
if (encrypt != null) {
sc.addAnd("encrypt", SearchCriteria.Op.EQ, encrypt);
}
if (zoneId != null) {
SearchBuilder<DiskOfferingJoinVO> sb = _diskOfferingJoinDao.createSearchBuilder();
sb.and("zoneId", sb.entity().getZoneId(), Op.FIND_IN_SET);
@@ -3118,6 +3123,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
Integer cpuNumber = cmd.getCpuNumber();
Integer memory = cmd.getMemory();
Integer cpuSpeed = cmd.getCpuSpeed();
Boolean encryptRoot = cmd.getEncryptRoot();
SearchCriteria<ServiceOfferingJoinVO> sc = _srvOfferingJoinDao.createSearchCriteria();
if (!_accountMgr.isRootAdmin(caller.getId()) && isSystem) {
@@ -3229,6 +3235,10 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q
sc.addAnd("systemUse", SearchCriteria.Op.EQ, isSystem);
}
if (encryptRoot != null) {
sc.addAnd("encryptRoot", SearchCriteria.Op.EQ, encryptRoot);
}
if (name != null) {
sc.addAnd("name", SearchCriteria.Op.EQ, name);
}
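The `encrypt` and `encryptRoot` filters above are tri-state: a null Boolean leaves the listing unfiltered, while true/false match only offerings with that flag. A minimal standalone sketch of that semantics (the class and record names are illustrative, not CloudStack types):

```java
import java.util.List;
import java.util.stream.Collectors;

public class EncryptFilterSketch {
    // Hypothetical stand-in for DiskOfferingJoinVO's encrypt flag
    record Offering(String name, boolean encrypt) {}

    // null => no filtering, mirroring the SearchCriteria logic above
    static List<Offering> filter(List<Offering> all, Boolean encrypt) {
        if (encrypt == null) {
            return all;
        }
        return all.stream().filter(o -> o.encrypt() == encrypt).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Offering> all = List.of(new Offering("plain", false), new Offering("secure", true));
        System.out.println(filter(all, null).size()); // 2: no filter applied
        System.out.println(filter(all, true).size()); // 1: only encrypted offerings
    }
}
```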

View File

@@ -107,6 +107,7 @@ public class DiskOfferingJoinDaoImpl extends GenericDaoBase<DiskOfferingJoinVO,
diskOfferingResponse.setDomain(offering.getDomainPath());
diskOfferingResponse.setZoneId(offering.getZoneUuid());
diskOfferingResponse.setZone(offering.getZoneName());
diskOfferingResponse.setEncrypt(offering.getEncrypt());
diskOfferingResponse.setHasAnnotation(annotationDao.hasAnnotations(offering.getUuid(), AnnotationService.EntityType.DISK_OFFERING.name(),
        accountManager.isRootAdmin(CallContext.current().getCallingAccount().getId())));

View File

@@ -130,6 +130,7 @@ public class ServiceOfferingJoinDaoImpl extends GenericDaoBase<ServiceOfferingJo
offeringResponse.setIscutomized(offering.isDynamic());
offeringResponse.setCacheMode(offering.getCacheMode());
offeringResponse.setDynamicScalingEnabled(offering.isDynamicScalingEnabled());
offeringResponse.setEncryptRoot(offering.getEncryptRoot());
if (offeringDetails != null && !offeringDetails.isEmpty()) {
String vsphereStoragePolicyId = offeringDetails.get(ApiConstants.STORAGE_POLICY);

View File

@@ -161,6 +161,9 @@ public class DiskOfferingJoinVO extends BaseViewVO implements InternalIdentity,
@Column(name = "encrypt")
private boolean encrypt;
public DiskOfferingJoinVO() {
}
@@ -346,7 +349,10 @@ public class DiskOfferingJoinVO extends BaseViewVO implements InternalIdentity,
return vsphereStoragePolicy;
}
public boolean getDiskSizeStrictness() {
return diskSizeStrictness;
}
public boolean getEncrypt() {
return encrypt;
}
}

View File

@@ -211,6 +211,9 @@ public class ServiceOfferingJoinVO extends BaseViewVO implements InternalIdentit
@Column(name = "disk_offering_display_text")
private String diskOfferingDisplayText;
@Column(name = "encrypt_root")
private boolean encryptRoot;
public ServiceOfferingJoinVO() {
}
@@ -443,4 +446,6 @@ public class ServiceOfferingJoinVO extends BaseViewVO implements InternalIdentit
public String getDiskOfferingDisplayText() {
return diskOfferingDisplayText;
}
public boolean getEncryptRoot() {
return encryptRoot;
}
}

View File

@@ -3005,7 +3005,8 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
cmd.getBytesWriteRate(), cmd.getBytesWriteRateMax(), cmd.getBytesWriteRateMaxLength(),
cmd.getIopsReadRate(), cmd.getIopsReadRateMax(), cmd.getIopsReadRateMaxLength(),
cmd.getIopsWriteRate(), cmd.getIopsWriteRateMax(), cmd.getIopsWriteRateMaxLength(),
cmd.getHypervisorSnapshotReserve(), cmd.getCacheMode(), storagePolicyId, cmd.getDynamicScalingEnabled(), diskOfferingId,
cmd.getDiskOfferingStrictness(), cmd.isCustomized(), cmd.getEncryptRoot());
}
protected ServiceOfferingVO createServiceOffering(final long userId, final boolean isSystem, final VirtualMachine.Type vmType,
@@ -3016,7 +3017,9 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
Long bytesWriteRate, Long bytesWriteRateMax, Long bytesWriteRateMaxLength,
Long iopsReadRate, Long iopsReadRateMax, Long iopsReadRateMaxLength,
Long iopsWriteRate, Long iopsWriteRateMax, Long iopsWriteRateMaxLength,
final Integer hypervisorSnapshotReserve, String cacheMode, final Long storagePolicyID, final boolean dynamicScalingEnabled, final Long diskOfferingId,
final boolean diskOfferingStrictness, final boolean isCustomized, final boolean encryptRoot) {
// Filter child domains when both parent and child domains are present
List<Long> filteredDomainIds = filterChildSubDomains(domainIds);
@@ -3103,7 +3106,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
bytesWriteRate, bytesWriteRateMax, bytesWriteRateMaxLength,
iopsReadRate, iopsReadRateMax, iopsReadRateMaxLength,
iopsWriteRate, iopsWriteRateMax, iopsWriteRateMaxLength,
hypervisorSnapshotReserve, cacheMode, storagePolicyID, encryptRoot);
} else {
diskOffering = _diskOfferingDao.findById(diskOfferingId);
}
@@ -3145,7 +3148,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
Long bytesWriteRate, Long bytesWriteRateMax, Long bytesWriteRateMaxLength,
Long iopsReadRate, Long iopsReadRateMax, Long iopsReadRateMaxLength,
Long iopsWriteRate, Long iopsWriteRateMax, Long iopsWriteRateMaxLength,
final Integer hypervisorSnapshotReserve, String cacheMode, final Long storagePolicyID, boolean encrypt) {
DiskOfferingVO diskOffering = new DiskOfferingVO(name, displayText, typedProvisioningType, false, tags, false, localStorageRequired, false);
@@ -3185,6 +3188,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
diskOffering.setCustomizedIops(isCustomizedIops);
diskOffering.setMinIops(minIops);
diskOffering.setMaxIops(maxIops);
diskOffering.setEncrypt(encrypt);
setBytesRate(diskOffering, bytesReadRate, bytesReadRateMax, bytesReadRateMaxLength, bytesWriteRate, bytesWriteRateMax, bytesWriteRateMaxLength);
setIopsRate(diskOffering, iopsReadRate, iopsReadRateMax, iopsReadRateMaxLength, iopsWriteRate, iopsWriteRateMax, iopsWriteRateMaxLength);
@@ -3441,7 +3445,8 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
Long bytesWriteRate, Long bytesWriteRateMax, Long bytesWriteRateMaxLength,
Long iopsReadRate, Long iopsReadRateMax, Long iopsReadRateMaxLength,
Long iopsWriteRate, Long iopsWriteRateMax, Long iopsWriteRateMaxLength,
final Integer hypervisorSnapshotReserve, String cacheMode, final Map<String, String> details, final Long storagePolicyID,
final boolean diskSizeStrictness, final boolean encrypt) {
long diskSize = 0; // special case for custom disk offerings
long maxVolumeSizeInGb = VolumeOrchestrationService.MaxVolumeSize.value();
if (numGibibytes != null && numGibibytes <= 0) {
@@ -3523,6 +3528,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
throw new InvalidParameterValueException("If provided, Hypervisor Snapshot Reserve must be greater than or equal to 0.");
}
newDiskOffering.setEncrypt(encrypt);
newDiskOffering.setHypervisorSnapshotReserve(hypervisorSnapshotReserve);
newDiskOffering.setDiskSizeStrictness(diskSizeStrictness);
@@ -3538,6 +3544,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
detailsVO.add(new DiskOfferingDetailVO(offering.getId(), ApiConstants.ZONE_ID, String.valueOf(zoneId), false));
}
}
if (MapUtils.isNotEmpty(details)) {
details.forEach((key, value) -> {
boolean displayDetail = !StringUtils.equalsAny(key, Volume.BANDWIDTH_LIMIT_IN_MBPS, Volume.IOPS_LIMIT);
@@ -3634,6 +3641,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
final Long iopsWriteRateMaxLength = cmd.getIopsWriteRateMaxLength();
final Integer hypervisorSnapshotReserve = cmd.getHypervisorSnapshotReserve();
final String cacheMode = cmd.getCacheMode();
final boolean encrypt = cmd.getEncrypt();
validateMaxRateEqualsOrGreater(iopsReadRate, iopsReadRateMax, IOPS_READ_RATE);
validateMaxRateEqualsOrGreater(iopsWriteRate, iopsWriteRateMax, IOPS_WRITE_RATE);
@@ -3647,7 +3655,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati
localStorageRequired, isDisplayOfferingEnabled, isCustomizedIops, minIops,
maxIops, bytesReadRate, bytesReadRateMax, bytesReadRateMaxLength, bytesWriteRate, bytesWriteRateMax, bytesWriteRateMaxLength,
iopsReadRate, iopsReadRateMax, iopsReadRateMaxLength, iopsWriteRate, iopsWriteRateMax, iopsWriteRateMaxLength,
hypervisorSnapshotReserve, cacheMode, details, storagePolicyId, diskSizeStrictness, encrypt);
}
/**

View File

@@ -274,7 +274,7 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
long ram_requested = offering.getRamSize() * 1024L * 1024L;
VirtualMachine vm = vmProfile.getVirtualMachine();
DataCenter dc = _dcDao.findById(vm.getDataCenterId());
boolean volumesRequireEncryption = anyVolumeRequiresEncryption(_volsDao.findByInstance(vm.getId()));
if (vm.getType() == VirtualMachine.Type.User || vm.getType() == VirtualMachine.Type.DomainRouter) {
checkForNonDedicatedResources(vmProfile, dc, avoids);
@@ -296,7 +296,7 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
if (plan.getHostId() != null && haVmTag == null) {
Long hostIdSpecified = plan.getHostId();
if (s_logger.isDebugEnabled()) {
s_logger.debug("DeploymentPlan has host_id specified, choosing this host: " + hostIdSpecified);
}
HostVO host = _hostDao.findById(hostIdSpecified);
if (host != null && StringUtils.isNotBlank(uefiFlag) && "yes".equalsIgnoreCase(uefiFlag)) {
@@ -337,6 +337,14 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
Map<Volume, List<StoragePool>> suitableVolumeStoragePools = result.first();
List<Volume> readyAndReusedVolumes = result.second();
_hostDao.loadDetails(host);
if (volumesRequireEncryption && !Boolean.parseBoolean(host.getDetail(Host.HOST_VOLUME_ENCRYPTION))) {
s_logger.warn(String.format("VM's volumes require encryption support, and provided host %s can't handle it", host));
return null;
} else {
s_logger.debug(String.format("Volume encryption requirements are met by provided host %s", host));
}
// choose the potential pool for this VM for this host
if (!suitableVolumeStoragePools.isEmpty()) {
List<Host> suitableHosts = new ArrayList<Host>();
@@ -402,6 +410,8 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
s_logger.debug("This VM has last host_id specified, trying to choose the same host: " + vm.getLastHostId());
HostVO host = _hostDao.findById(vm.getLastHostId());
_hostDao.loadHostTags(host);
_hostDao.loadDetails(host);
ServiceOfferingDetailsVO offeringDetails = null;
if (host == null) {
s_logger.debug("The last host of this VM cannot be found");
@@ -419,6 +429,8 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
if (!_resourceMgr.isGPUDeviceAvailable(host.getId(), groupName.getValue(), offeringDetails.getValue())) {
s_logger.debug("The last host of this VM does not have required GPU devices available");
}
} else if (volumesRequireEncryption && !Boolean.parseBoolean(host.getDetail(Host.HOST_VOLUME_ENCRYPTION))) {
s_logger.warn(String.format("The last host of this VM %s does not support volume encryption, which is required by this VM.", host));
} else {
if (host.getStatus() == Status.Up) {
if (checkVmProfileAndHost(vmProfile, host)) {
@@ -523,14 +535,12 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
resetAvoidSet(plannerAvoidOutput, plannerAvoidInput);
dest = checkClustersforDestination(clusterList, vmProfile, plan, avoids, dc, getPlannerUsage(planner, vmProfile, plan, avoids), plannerAvoidOutput);
if (dest != null) {
return dest;
}
// reset the avoid input to the planners
resetAvoidSet(avoids, plannerAvoidOutput);
} else {
return null;
}
@@ -540,6 +550,13 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
long hostId = dest.getHost().getId();
avoids.addHost(dest.getHost().getId());
if (volumesRequireEncryption && !Boolean.parseBoolean(_hostDetailsDao.findDetail(hostId, Host.HOST_VOLUME_ENCRYPTION).getValue())) {
s_logger.warn(String.format("VM's volumes require encryption support, and the planner-provided host %s can't handle it", dest.getHost()));
continue;
} else {
s_logger.debug(String.format("VM's volume encryption requirements are met by host %s", dest.getHost()));
}
if (checkIfHostFitsPlannerUsage(hostId, DeploymentPlanner.PlannerResourceUsage.Shared)) {
// found destination
return dest;
@@ -554,10 +571,18 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
}
}
}
return dest;
}
protected boolean anyVolumeRequiresEncryption(List<? extends Volume> volumes) {
for (Volume volume : volumes) {
if (volume.getPassphraseId() != null) {
return true;
}
}
return false;
}
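The planner helper above, combined with the `Host.HOST_VOLUME_ENCRYPTION` host-detail checks earlier in this file, gates host selection. One subtlety: `Boolean.parseBoolean(null)` returns false, so a host that never reported the detail is treated as unable to run encrypted volumes. A hedged standalone sketch of just that gate (class and method names are illustrative, not the planner's API):

```java
public class EncryptionGateSketch {
    // A host qualifies unless encryption is required and the host
    // did not report support via its volume-encryption detail.
    static boolean hostQualifies(boolean volumesRequireEncryption, String hostDetailValue) {
        // Boolean.parseBoolean(null) is false, so a missing detail means "unsupported"
        boolean hostSupportsEncryption = Boolean.parseBoolean(hostDetailValue);
        return !volumesRequireEncryption || hostSupportsEncryption;
    }

    public static void main(String[] args) {
        System.out.println(hostQualifies(true, "true"));  // true: capable host
        System.out.println(hostQualifies(true, null));    // false: detail never reported
        System.out.println(hostQualifies(false, null));   // true: no encrypted volumes to place
    }
}
```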
private boolean isDeployAsIs(VirtualMachine vm) {
long templateId = vm.getTemplateId();
VMTemplateVO template = templateDao.findById(templateId);
@@ -664,7 +689,7 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
return null;
}
protected boolean checkVmProfileAndHost(final VirtualMachineProfile vmProfile, final HostVO host) {
ServiceOffering offering = vmProfile.getServiceOffering();
if (offering.getHostTag() != null) {
_hostDao.loadHostTags(host);
@@ -877,14 +902,13 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
}
@DB
protected boolean checkIfHostFitsPlannerUsage(final long hostId, final PlannerResourceUsage resourceUsageRequired) {
// TODO Auto-generated method stub
// check if this host has been picked up by some other planner
// exclusively
// if planner can work with shared host, check if this host has
// been marked as 'shared'
// else if planner needs dedicated host,
PlannerHostReservationVO reservationEntry = _plannerHostReserveDao.findByHostId(hostId);
if (reservationEntry != null) {
final long id = reservationEntry.getId();
@@ -1222,7 +1246,6 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
if (!suitableVolumeStoragePools.isEmpty()) {
Pair<Host, Map<Volume, StoragePool>> potentialResources = findPotentialDeploymentResources(suitableHosts, suitableVolumeStoragePools, avoid,
        resourceUsageRequired, readyAndReusedVolumes, plan.getPreferredHosts(), vmProfile.getVirtualMachine());
if (potentialResources != null) {
Host host = _hostDao.findById(potentialResources.first().getId());
Map<Volume, StoragePool> storageVolMap = potentialResources.second();
@@ -1412,6 +1435,7 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
List<Volume> allVolumes = new ArrayList<>();
allVolumes.addAll(volumesOrderBySizeDesc);
List<Pair<Volume, DiskProfile>> volumeDiskProfilePair = getVolumeDiskProfilePairs(allVolumes);
for (StoragePool storagePool : suitablePools) {
haveEnoughSpace = false;
hostCanAccessPool = false;
@@ -1493,12 +1517,22 @@ StateListener<State, VirtualMachine.Event, VirtualMachine>, Configurable {
}
}
HostVO potentialHostVO = _hostDao.findById(potentialHost.getId());
_hostDao.loadDetails(potentialHostVO);
boolean hostHasEncryption = Boolean.parseBoolean(potentialHostVO.getDetail(Host.HOST_VOLUME_ENCRYPTION));
boolean hostMeetsEncryptionRequirements = !anyVolumeRequiresEncryption(new ArrayList<>(volumesOrderBySizeDesc)) || hostHasEncryption;
boolean plannerUsageFits = checkIfHostFitsPlannerUsage(potentialHost.getId(), resourceUsageRequired);
if (hostCanAccessPool && haveEnoughSpace && hostAffinityCheck && hostMeetsEncryptionRequirements && plannerUsageFits) {
s_logger.debug("Found a potential host " + "id: " + potentialHost.getId() + " name: " + potentialHost.getName() + s_logger.debug("Found a potential host " + "id: " + potentialHost.getId() + " name: " + potentialHost.getName() +
" and associated storage pools for this VM"); " and associated storage pools for this VM");
volumeAllocationMap.clear(); volumeAllocationMap.clear();
return new Pair<Host, Map<Volume, StoragePool>>(potentialHost, storage); return new Pair<Host, Map<Volume, StoragePool>>(potentialHost, storage);
} else { } else {
if (!hostMeetsEncryptionRequirements) {
s_logger.debug("Potential host " + potentialHost + " did not meet encryption requirements of all volumes");
}
avoid.addHost(potentialHost.getId()); avoid.addHost(potentialHost.getId());
} }
} }
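The scheduling change above gates host selection on volume encryption support: a candidate host is only accepted if no volume in the deployment requires encryption, or the host reports the volume-encryption capability in its details. As a rough standalone sketch of just that predicate (the class and record names here are stand-ins for illustration, not CloudStack types):

```java
import java.util.List;

// Minimal sketch of the encryption-aware host filter added in the planner.
public class EncryptionHostFilter {
    // Stand-in for a volume whose disk offering may require encryption.
    record Volume(boolean requiresEncryption) {}

    static boolean anyVolumeRequiresEncryption(List<Volume> volumes) {
        return volumes.stream().anyMatch(Volume::requiresEncryption);
    }

    // A host qualifies unless some volume requires encryption and the host
    // does not advertise support for it.
    static boolean hostMeetsEncryptionRequirements(List<Volume> volumes, boolean hostHasEncryption) {
        return !anyVolumeRequiresEncryption(volumes) || hostHasEncryption;
    }

    public static void main(String[] args) {
        List<Volume> encrypted = List.of(new Volume(true), new Volume(false));
        List<Volume> plain = List.of(new Volume(false));

        // Host without encryption support: only plain volumes fit.
        System.out.println(hostMeetsEncryptionRequirements(plain, false));     // true
        System.out.println(hostMeetsEncryptionRequirements(encrypted, false)); // false
        // Host with encryption support fits either.
        System.out.println(hostMeetsEncryptionRequirements(encrypted, true));  // true
    }
}
```

Hosts that fail only this check are added to the avoid set with a debug log, so the planner moves on to the next candidate rather than failing the deployment outright.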


```diff
@@ -45,11 +45,6 @@ import java.util.stream.Collectors;
 import javax.inject.Inject;
-import com.cloud.agent.api.GetStoragePoolCapabilitiesAnswer;
-import com.cloud.agent.api.GetStoragePoolCapabilitiesCommand;
-import com.cloud.network.router.VirtualNetworkApplianceManager;
-import com.cloud.server.StatsCollector;
-import com.cloud.upgrade.SystemVmTemplateRegistration;
 import com.google.common.collect.Sets;
 import org.apache.cloudstack.annotation.AnnotationService;
 import org.apache.cloudstack.annotation.dao.AnnotationDao;
@@ -127,6 +122,8 @@ import com.cloud.agent.AgentManager;
 import com.cloud.agent.api.Answer;
 import com.cloud.agent.api.Command;
 import com.cloud.agent.api.DeleteStoragePoolCommand;
+import com.cloud.agent.api.GetStoragePoolCapabilitiesAnswer;
+import com.cloud.agent.api.GetStoragePoolCapabilitiesCommand;
 import com.cloud.agent.api.GetStorageStatsAnswer;
 import com.cloud.agent.api.GetStorageStatsCommand;
 import com.cloud.agent.api.GetVolumeStatsAnswer;
@@ -178,6 +175,7 @@ import com.cloud.host.dao.HostDao;
 import com.cloud.hypervisor.Hypervisor;
 import com.cloud.hypervisor.Hypervisor.HypervisorType;
 import com.cloud.hypervisor.HypervisorGuruManager;
+import com.cloud.network.router.VirtualNetworkApplianceManager;
 import com.cloud.offering.DiskOffering;
 import com.cloud.offering.ServiceOffering;
 import com.cloud.org.Grouping;
@@ -185,6 +183,7 @@ import com.cloud.org.Grouping.AllocationState;
 import com.cloud.resource.ResourceState;
 import com.cloud.server.ConfigurationServer;
 import com.cloud.server.ManagementServer;
+import com.cloud.server.StatsCollector;
 import com.cloud.service.dao.ServiceOfferingDetailsDao;
 import com.cloud.storage.Storage.ImageFormat;
 import com.cloud.storage.Storage.StoragePoolType;
@@ -202,6 +201,7 @@ import com.cloud.storage.listener.StoragePoolMonitor;
 import com.cloud.storage.listener.VolumeStateListener;
 import com.cloud.template.TemplateManager;
 import com.cloud.template.VirtualMachineTemplate;
+import com.cloud.upgrade.SystemVmTemplateRegistration;
 import com.cloud.user.Account;
 import com.cloud.user.AccountManager;
 import com.cloud.user.ResourceLimitService;
```


```diff
@@ -131,6 +131,7 @@ import com.cloud.exception.PermissionDeniedException;
 import com.cloud.exception.ResourceAllocationException;
 import com.cloud.exception.StorageUnavailableException;
 import com.cloud.gpu.GPU;
+import com.cloud.host.Host;
 import com.cloud.host.HostVO;
 import com.cloud.host.Status;
 import com.cloud.host.dao.HostDao;
@@ -311,7 +312,6 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
     VirtualMachineManager virtualMachineManager;
     @Inject
     private ManagementService managementService;
     @Inject
     protected SnapshotHelper snapshotHelper;
@@ -800,6 +800,11 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
             parentVolume = _volsDao.findByIdIncludingRemoved(snapshotCheck.getVolumeId());
+            // Don't support creating templates from encrypted volumes (yet)
+            if (parentVolume.getPassphraseId() != null) {
+                throw new UnsupportedOperationException("Cannot create new volumes from encrypted volume snapshots");
+            }
             if (zoneId == null) {
                 // if zoneId is not provided, we default to create volume in the same zone as the snapshot zone.
                 zoneId = snapshotCheck.getDataCenterId();
@@ -899,6 +904,7 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
         }
         volume = _volsDao.persist(volume);
         if (cmd.getSnapshotId() == null && displayVolume) {
             // for volume created from snapshot, create usage event after volume creation
             UsageEventUtils.publishUsageEvent(EventTypes.EVENT_VOLUME_CREATE, volume.getAccountId(), volume.getDataCenterId(), volume.getId(), volume.getName(), diskOfferingId, null, size,
@@ -1113,10 +1119,15 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
         Long instanceId = volume.getInstanceId();
         VMInstanceVO vmInstanceVO = _vmInstanceDao.findById(instanceId);
         if (volume.getVolumeType().equals(Volume.Type.ROOT)) {
             ServiceOfferingVO serviceOffering = _serviceOfferingDao.findById(vmInstanceVO.getServiceOfferingId());
             if (serviceOffering != null && serviceOffering.getDiskOfferingStrictness()) {
                 throw new InvalidParameterValueException(String.format("Cannot resize ROOT volume [%s] with new disk offering since existing disk offering is strictly assigned to the ROOT volume.", volume.getName()));
             }
+            if (newDiskOffering.getEncrypt() != diskOffering.getEncrypt()) {
+                throw new InvalidParameterValueException(
+                    String.format("Current disk offering's encryption(%s) does not match target disk offering's encryption(%s)", diskOffering.getEncrypt(), newDiskOffering.getEncrypt())
+                );
+            }
         }
         if (diskOffering.getTags() != null) {
@@ -1183,7 +1194,7 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
         if (storagePoolId != null) {
             StoragePoolVO storagePoolVO = _storagePoolDao.findById(storagePoolId);
-            if (storagePoolVO.isManaged()) {
+            if (storagePoolVO.isManaged() && !storagePoolVO.getPoolType().equals(Storage.StoragePoolType.PowerFlex)) {
                 Long instanceId = volume.getInstanceId();
                 if (instanceId != null) {
@@ -1272,15 +1283,15 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
         if (jobResult != null) {
             if (jobResult instanceof ConcurrentOperationException) {
-                throw (ConcurrentOperationException)jobResult;
+                throw (ConcurrentOperationException) jobResult;
             } else if (jobResult instanceof ResourceAllocationException) {
-                throw (ResourceAllocationException)jobResult;
+                throw (ResourceAllocationException) jobResult;
             } else if (jobResult instanceof RuntimeException) {
-                throw (RuntimeException)jobResult;
+                throw (RuntimeException) jobResult;
             } else if (jobResult instanceof Throwable) {
-                throw new RuntimeException("Unexpected exception", (Throwable)jobResult);
+                throw new RuntimeException("Unexpected exception", (Throwable) jobResult);
             } else if (jobResult instanceof Long) {
-                return _volsDao.findById((Long)jobResult);
+                return _volsDao.findById((Long) jobResult);
             }
         }
@@ -2214,6 +2225,11 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
                     job.getId()));
         }
+        DiskOfferingVO diskOffering = _diskOfferingDao.findById(volumeToAttach.getDiskOfferingId());
+        if (diskOffering.getEncrypt() && rootDiskHyperType != HypervisorType.KVM) {
+            throw new InvalidParameterValueException("Volume's disk offering has encryption enabled, but volume encryption is not supported for hypervisor type " + rootDiskHyperType);
+        }
         _jobMgr.updateAsyncJobAttachment(job.getId(), "Volume", volumeId);
         if (asyncExecutionContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
@@ -2872,6 +2888,10 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
             vm = _vmInstanceDao.findById(instanceId);
         }
+        if (vol.getPassphraseId() != null) {
+            throw new InvalidParameterValueException("Migration of encrypted volumes is unsupported");
+        }
         // Check that Vm to which this volume is attached does not have VM Snapshots
         // OfflineVmwareMigration: consider if this is needed and desirable
         if (vm != null && _vmSnapshotDao.findByVm(vm.getId()).size() > 0) {
@@ -3353,6 +3373,11 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
             throw new InvalidParameterValueException("VolumeId: " + volumeId + " is not in " + Volume.State.Ready + " state but " + volume.getState() + ". Cannot take snapshot.");
         }
+        if (volume.getEncryptFormat() != null && volume.getAttachedVM() != null && volume.getAttachedVM().getState() != State.Stopped) {
+            s_logger.debug(String.format("Refusing to take snapshot of encrypted volume (%s) on running VM (%s)", volume, volume.getAttachedVM()));
+            throw new UnsupportedOperationException("Volume snapshots for encrypted volumes are not supported if VM is running");
+        }
         CreateSnapshotPayload payload = new CreateSnapshotPayload();
         payload.setSnapshotId(snapshotId);
@@ -3529,6 +3554,10 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
             throw ex;
         }
+        if (volume.getPassphraseId() != null) {
+            throw new InvalidParameterValueException("Extraction of encrypted volumes is unsupported");
+        }
         if (volume.getVolumeType() != Volume.Type.DATADISK) {
             // Datadisk don't have any template dependence.
@@ -3862,6 +3891,14 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
             sendCommand = true;
         }
+        if (host != null) {
+            _hostDao.loadDetails(host);
+            boolean hostSupportsEncryption = Boolean.parseBoolean(host.getDetail(Host.HOST_VOLUME_ENCRYPTION));
+            if (volumeToAttach.getPassphraseId() != null && !hostSupportsEncryption) {
+                throw new CloudRuntimeException(errorMsg + " because target host " + host + " doesn't support volume encryption");
+            }
+        }
         if (volumeToAttachStoragePool != null) {
             verifyManagedStorage(volumeToAttachStoragePool.getId(), hostId);
         }
```
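Both the attach-time check and the planner check read the host capability with `Boolean.parseBoolean(host.getDetail(Host.HOST_VOLUME_ENCRYPTION))`. That JDK method is null-safe, so a host that has never reported the detail parses to `false` and is simply treated as not supporting encryption, with no NPE and no extra null check needed. A quick demonstration of that standard-library behavior:

```java
// Boolean.parseBoolean returns true only for the (case-insensitive) string
// "true"; null or any other value yields false, which safely maps a missing
// host detail to "encryption not supported".
public class ParseBooleanNullSafety {
    public static void main(String[] args) {
        System.out.println(Boolean.parseBoolean(null));    // false
        System.out.println(Boolean.parseBoolean("true"));  // true
        System.out.println(Boolean.parseBoolean("TRUE"));  // true (case-insensitive)
        System.out.println(Boolean.parseBoolean("1"));     // false (only "true" parses)
    }
}
```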


```diff
@@ -97,6 +97,7 @@ import com.cloud.server.ResourceTag.ResourceObjectType;
 import com.cloud.server.TaggedResourceService;
 import com.cloud.storage.CreateSnapshotPayload;
 import com.cloud.storage.DataStoreRole;
+import com.cloud.storage.DiskOfferingVO;
 import com.cloud.storage.ScopeType;
 import com.cloud.storage.Snapshot;
 import com.cloud.storage.Snapshot.Type;
@@ -110,6 +111,7 @@ import com.cloud.storage.StoragePool;
 import com.cloud.storage.VMTemplateVO;
 import com.cloud.storage.Volume;
 import com.cloud.storage.VolumeVO;
+import com.cloud.storage.dao.DiskOfferingDao;
 import com.cloud.storage.dao.SnapshotDao;
 import com.cloud.storage.dao.SnapshotPolicyDao;
 import com.cloud.storage.dao.SnapshotScheduleDao;
@@ -172,6 +174,8 @@ public class SnapshotManagerImpl extends MutualExclusiveIdsManagerBase implement
     @Inject
     DomainDao _domainDao;
     @Inject
+    DiskOfferingDao diskOfferingDao;
+    @Inject
     StorageManager _storageMgr;
     @Inject
     SnapshotScheduler _snapSchedMgr;
@@ -846,6 +850,14 @@ public class SnapshotManagerImpl extends MutualExclusiveIdsManagerBase implement
             throw new InvalidParameterValueException("Failed to create snapshot policy, unable to find a volume with id " + volumeId);
         }
+        // For now, volumes with encryption don't support snapshot schedules, because they will fail when VM is running
+        DiskOfferingVO diskOffering = diskOfferingDao.findByIdIncludingRemoved(volume.getDiskOfferingId());
+        if (diskOffering == null) {
+            throw new InvalidParameterValueException(String.format("Failed to find disk offering for the volume [%s]", volume.getUuid()));
+        } else if (diskOffering.getEncrypt()) {
+            throw new UnsupportedOperationException(String.format("Encrypted volumes don't support snapshot schedules, cannot create snapshot policy for the volume [%s]", volume.getUuid()));
+        }
         String volumeDescription = volume.getVolumeDescription();
         _accountMgr.checkAccess(CallContext.current().getCallingAccount(), null, true, volume);
```


```diff
@@ -1802,6 +1802,11 @@ public class TemplateManagerImpl extends ManagerBase implements TemplateManager,
         // check permissions
         _accountMgr.checkAccess(caller, null, true, volume);
+        // Don't support creating templates from encrypted volumes (yet)
+        if (volume.getPassphraseId() != null) {
+            throw new UnsupportedOperationException("Cannot create templates from encrypted volumes");
+        }
         // If private template is created from Volume, check that the volume
         // will not be active when the private template is
         // created
@@ -1825,6 +1830,11 @@ public class TemplateManagerImpl extends ManagerBase implements TemplateManager,
             // Volume could be removed so find including removed to record source template id.
             volume = _volumeDao.findByIdIncludingRemoved(snapshot.getVolumeId());
+            // Don't support creating templates from encrypted volumes (yet)
+            if (volume != null && volume.getPassphraseId() != null) {
+                throw new UnsupportedOperationException("Cannot create templates from snapshots of encrypted volumes");
+            }
             // check permissions
             _accountMgr.checkAccess(caller, null, true, snapshot);
```
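The template-creation blocks above use the same guard pattern as the migration and extraction checks: an encrypted volume is recognized by its non-null passphrase id and the unsupported operation is rejected up front, before any storage work starts. A minimal sketch of that pattern (the `Volume` record here is a hypothetical stand-in, not the CloudStack entity):

```java
// Sketch of the fail-fast guard for operations that can't yet handle
// encrypted volumes: non-null passphraseId means the volume is encrypted.
public class EncryptedVolumeGuard {
    record Volume(Long passphraseId) {
        boolean isEncrypted() { return passphraseId != null; }
    }

    static void checkTemplateCreationAllowed(Volume v) {
        if (v.isEncrypted()) {
            throw new UnsupportedOperationException("Cannot create templates from encrypted volumes");
        }
    }

    public static void main(String[] args) {
        checkTemplateCreationAllowed(new Volume(null)); // unencrypted: passes
        try {
            checkTemplateCreationAllowed(new Volume(42L)); // encrypted: rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Centering every restriction on the passphrase id keeps the rule uniform across the API surface until snapshot, template, migration, and extraction support for encrypted volumes is implemented.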
