From 697e12f8f782a4f91007fc1be1ae7f0d84c8af1d Mon Sep 17 00:00:00 2001
From: Marcus Sorensen
Date: Mon, 26 Sep 2022 22:50:59 -0600
Subject: [PATCH] kvm: volume encryption feature (#6522)

This PR introduces a feature that allows CloudStack to manage a generic volume encryption setting. The encryption is transparent to the guest OS and is intended to protect VM guest data at rest, and possibly over the wire, though the actual encryption implementation is left to the primary storage driver.

In some cases cloud customers may still prefer to maintain their own guest-level volume encryption if they don't trust the cloud provider. For private clouds, however, this greatly simplifies running volume encryption for guests: users don't have to manage keys, run key servers, or make guest boot dependent on network connectivity to them (e.g. Tang), which is especially valuable when data disks are occasionally attached, detached, and moved between VMs.

The feature can be thought of as having two parts: the API/control plane (which includes scheduling aspects), and the storage driver implementation. This initial PR adds the encryption setting to disk offerings and service offerings (for root volumes), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.

NOTE: While not required, operations can be sped up significantly by ensuring that hosts have the `rng-tools` package and service installed and running on the management server and hypervisors. For EL hosts the service is `rngd`; for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, but otherwise only affects users who actually use the encryption feature. If you find tests or volume creation blocking on encryption, check this first.
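As a rough illustration of why entropy matters here, volume passphrase generation boils down to drawing random bytes from `SecureRandom`. This is a minimal sketch, not the actual CloudStack code; the class and method names are made up:

```java
import java.security.SecureRandom;
import java.util.Base64;

public class PassphraseSketch {
    // Illustrative only: generate a random volume passphrase.
    // SecureRandom draws from the OS entropy pool; on hosts without a
    // healthy entropy source (no rngd/rng-tools), this can be slow.
    public static byte[] generatePassphrase() {
        byte[] passphrase = new byte[32];
        new SecureRandom().nextBytes(passphrase);
        return passphrase;
    }

    public static void main(String[] args) {
        // Print a base64 rendering just to show the shape of the result.
        System.out.println(Base64.getEncoder().encodeToString(generatePassphrase()));
    }
}
```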
### Management Server

##### API

* createDiskOffering now has an 'encrypt' Boolean.
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is added in case there is ever any other need to encrypt something related to the guest configuration, such as the RAM of a VM. This has been refactored to deal with the new internal separation of service offerings from disk offerings.
* listDiskOfferings shows encryption support on each offering, and has an encrypt boolean to list only offerings that do or do not support encryption.
* listServiceOfferings shows encryption support on each offering, and has an encrypt boolean to list only offerings that do or do not support encryption.
* listHosts now shows the encryption support of each hypervisor host via `encryptionsupported`.
* Volumes themselves don't show encryption on/off; the offering should be referenced instead. This follows the same pattern as other disk-offering-based settings, such as the IOPS of the volume.

##### Volume functions

A decent effort has been made to ensure that the most common volume functions are either cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*: the code base is complex, and there are certainly edge cases to be found. Many of the blocked functions could eventually be supported, such as creating templates from encrypted volumes, but the effort and size of the change is already substantial.

Supported functions:

* Data volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume

Blocked functions:

* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when the VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating a template from an encrypted volume
* Creating a volume from an encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former)

##### Primary Storage Support

For storage developers, adding encryption support involves:

1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during storage allocation to match volumes that request encryption to storage that supports it.
2. Implementing the encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that request encryption. You are free to do what your storage supports: this could be as simple as calling a storage API with the right flag when creating a volume, or (as is the case with the KVM storage types) as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases if encryption is requested.

##### Scheduling

For the KVM implementations above, we depend on the KVM hosts having support for volume encryption tools. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is determined by a probe during agent startup that looks for a functioning `cryptsetup` and encryption support in `qemu-img`, and it is visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.

The `EndPointSelector` interface and `DefaultEndPointSelector` have had new methods added that allow the caller to ask for endpoints that support encryption.
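The agent startup probe described above can be sketched roughly as follows. This is a simplified stand-in (the real logic lives in the KVM agent's resource code, and additionally checks that `qemu-img` supports the LUKS format); the class and method names here are illustrative:

```java
import java.io.IOException;

public class EncryptionProbeSketch {
    // Illustrative probe: treat a host as "supporting encryption" if both
    // cryptsetup and qemu-img are present and runnable. Any failure to
    // launch the binary (not installed, not executable) counts as "no".
    static boolean commandRuns(String... cmd) {
        try {
            Process p = new ProcessBuilder(cmd).start();
            return p.waitFor() == 0;
        } catch (IOException | InterruptedException e) {
            return false;
        }
    }

    public static boolean hostSupportsEncryption() {
        return commandRuns("cryptsetup", "--version")
            && commandRuns("qemu-img", "--version");
    }

    public static void main(String[] args) {
        System.out.println("encryption supported: " + hostSupportsEncryption());
    }
}
```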
This can be used by storage drivers to find the proper hosts to send storage commands that involve encryption. Not all volume activities require a host that supports encryption (for example, a snapshot backup is a simple file copy), which is why the interface was modified to let the storage driver decide, rather than just passing the data objects to the EndPointSelector and letting the implementation decide.

VM scheduling has also been modified. When a VM start is requested, if any attached volume requires encryption, hosts that don't support encryption are filtered out.

##### DB Changes

A volume whose disk offering enables encryption gets a passphrase generated for it before its first use. This is stored in the new 'passphrase' table and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table referencing this passphrase, with a foreign key to ensure that referenced passphrases can't be removed from the database. The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.

#### KVM Agent

For the supported KVM storage pool types, encryption has been implemented at Qemu itself, using the built-in LUKS storage support. This means the storage remains encrypted all the way to the VM process, and is decrypted only where the block device becomes visible to the guest. This may not be necessary in order to implement encryption for *your* storage pool type; maybe you have a kernel driver that decrypts before the block device appears on the system. However, it seemed like the simplest common place to terminate the encryption, and it provides the smallest surface area for decrypted guest data.

For qcow2-based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption.
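For a sense of what the resulting `qemu-img` invocation looks like for a LUKS-encrypted qcow2 volume, here is a sketch of the command shape. The paths, secret id, and helper class are made up for illustration; in CloudStack the actual command is assembled by `QemuImg` with `QemuObject`/`QemuImageOptions`:

```java
import java.util.List;

public class QemuImgCommandSketch {
    // Illustrative: the shape of a qemu-img command that creates a
    // LUKS-encrypted qcow2 volume. The passphrase is referenced through a
    // secret object backed by a key file, never placed on the command line.
    public static List<String> createEncryptedQcow2(String keyFile, String volPath, String size) {
        return List.of(
            "qemu-img", "create",
            "--object", "secret,id=sec0,file=" + keyFile,
            "-f", "qcow2",
            "-o", "encrypt.format=luks,encrypt.key-secret=sec0",
            volPath, size);
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ",
            createEncryptedQcow2("/run/cloudstack/keyfile", "/var/lib/libvirt/images/vol.qcow2", "10G")));
    }
}
```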
For block-based storage (currently just ScaleIO), the `cryptsetup` utility is used to format the block device as LUKS for data disks, while `qemu-img` and its LUKS support are used for template copy.

Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:

1. In cases where the volume experiences some libvirt interaction, it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.
2. In cases where `qemu-img` needs this passphrase for volume operations, it is written to a `KeyFile` on the CloudStack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable`, and when it is closed it is deleted. This allows us to wrap volume operations in try-with-resources and have the `KeyFile` removed regardless of outcome.

In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`. These `qemu-img` flags are designed to supersede some of the existing, older flags in use today (such as those for choosing file formats and paths), and an effort could be made to switch over to them wholesale. For now, however, we have opted to keep the existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can use either style.

It should be noted that there are also a few different Enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused.
For example, the supported encryption format strings for the `cryptsetup` utility include `LuksType.LUKS`, while `QemuImg` has `QemuImg.PhysicalDiskFormat.LUKS`.

Some additional effort could be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details. These may require changes all the way up through the control plane. In practice, however, libvirt and Qemu currently support only LUKS1. Additionally, the cipher details aren't required in order to use an encrypted volume: since they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the encryption format once, upon volume creation; it is persisted in the volumes table and available later as needed. In the future, when LUKS2 is standard and fully supported, we could move to it as the default, and old volumes will still reference LUKS1 and have the headers on disk to ensure they remain usable. We could also support an automatic upgrade of the headers down the road, or a volume migration mechanism.

Every version of cryptsetup and qemu-img tested on variants of EL7 and Ubuntu that support encryption uses the XTS-AES 256 cipher, which is the leading industry standard and a widely used cipher today (e.g. in BitLocker and FileVault).
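The `KeyFile` lifecycle described in the agent section above can be sketched as follows. This is a simplified stand-in, not the actual CloudStack class; in the real agent the file lands on the configured tmpfs rather than an arbitrary temp directory:

```java
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class KeyFileSketch implements Closeable {
    // Simplified stand-in for the agent's KeyFile: the passphrase is
    // written to a file and deleted on close, so try-with-resources
    // guarantees cleanup even if the qemu-img operation fails.
    private final Path path;

    public KeyFileSketch(Path dir, byte[] passphrase) throws IOException {
        this.path = Files.createTempFile(dir, "keyfile", null);
        Files.write(path, passphrase);
    }

    public Path getPath() { return path; }

    @Override
    public void close() throws IOException {
        Files.deleteIfExists(path);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("keyfile-demo");
        Path p;
        try (KeyFileSketch kf = new KeyFileSketch(dir, "passphrase".getBytes())) {
            p = kf.getPath();
            System.out.println("key file exists: " + Files.exists(p));
        }
        System.out.println("after close, exists: " + Files.exists(p));
    }
}
```

Used as `try (KeyFileSketch kf = new KeyFileSketch(tmpDir, passphrase)) { ... }`, the key material is removed whether or not the enclosed operation succeeds.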
Signed-off-by: Marcus Sorensen Co-authored-by: Marcus Sorensen --- .../java/com/cloud/agent/api/to/DiskTO.java | 1 + .../cloud/agent/api/to/StorageFilerTO.java | 2 + api/src/main/java/com/cloud/host/Host.java | 1 + .../java/com/cloud/offering/DiskOffering.java | 4 + .../com/cloud/storage/MigrationOptions.java | 9 +- .../main/java/com/cloud/storage/Storage.java | 46 +- .../main/java/com/cloud/storage/Volume.java | 8 + .../main/java/com/cloud/vm/DiskProfile.java | 12 +- .../apache/cloudstack/api/ApiConstants.java | 3 + .../admin/offering/CreateDiskOfferingCmd.java | 12 + .../offering/CreateServiceOfferingCmd.java | 11 + .../user/offering/ListDiskOfferingsCmd.java | 9 +- .../offering/ListServiceOfferingsCmd.java | 8 + .../user/snapshot/CreateSnapshotCmd.java | 4 + .../api/response/DiskOfferingResponse.java | 7 + .../cloudstack/api/response/HostResponse.java | 15 + .../api/response/ServiceOfferingResponse.java | 7 + .../api/storage/ResizeVolumeCommand.java | 24 + .../StorageSubsystemCommandHandlerBase.java | 4 +- .../cloudstack/storage/to/VolumeObjectTO.java | 27 + debian/control | 4 +- .../api/storage/EndPointSelector.java | 8 + .../subsystem/api/storage/VolumeInfo.java | 2 + .../orchestration/VolumeOrchestrator.java | 57 +- .../com/cloud/storage/DiskOfferingVO.java | 9 + .../main/java/com/cloud/storage/VolumeVO.java | 18 +- .../java/com/cloud/storage/dao/VolumeDao.java | 7 + .../com/cloud/storage/dao/VolumeDaoImpl.java | 10 + .../cloudstack/secret/PassphraseVO.java | 73 +++ .../cloudstack/secret/dao/PassphraseDao.java | 25 + .../secret/dao/PassphraseDaoImpl.java | 25 + ...spring-engine-schema-core-daos-context.xml | 1 + .../META-INF/db/schema-41710to41800.sql | 196 ++++++ .../motion/AncientDataMotionStrategy.java | 73 ++- .../storage/motion/DataMotionServiceImpl.java | 10 + .../StorageSystemDataMotionStrategy.java | 28 +- .../AbstractStoragePoolAllocator.java | 8 +- .../endpoint/DefaultEndPointSelector.java | 56 +- .../storage/volume/VolumeObject.java | 70 ++- 
.../storage/volume/VolumeServiceImpl.java | 8 + packaging/centos7/cloud.spec | 11 +- packaging/centos8/cloud.spec | 11 +- packaging/suse15/cloud.spec | 11 +- plugins/hypervisors/kvm/pom.xml | 45 +- .../resource/LibvirtComputingResource.java | 116 +++- .../kvm/resource/LibvirtDomainXMLParser.java | 10 + .../kvm/resource/LibvirtSecretDef.java | 4 + .../hypervisor/kvm/resource/LibvirtVMDef.java | 29 +- .../wrapper/LibvirtCreateCommandWrapper.java | 4 +- ...ivateTemplateFromVolumeCommandWrapper.java | 2 +- .../wrapper/LibvirtMigrateCommandWrapper.java | 13 + ...virtPrepareForMigrationCommandWrapper.java | 20 +- .../LibvirtResizeVolumeCommandWrapper.java | 135 ++++- .../wrapper/LibvirtStopCommandWrapper.java | 4 + .../kvm/storage/IscsiAdmStorageAdaptor.java | 20 +- .../kvm/storage/IscsiAdmStoragePool.java | 4 +- .../kvm/storage/KVMPhysicalDisk.java | 14 + .../kvm/storage/KVMStoragePool.java | 4 +- .../kvm/storage/KVMStoragePoolManager.java | 27 +- .../kvm/storage/KVMStorageProcessor.java | 124 +++- .../kvm/storage/LibvirtStorageAdaptor.java | 120 ++-- .../kvm/storage/LibvirtStoragePool.java | 8 +- .../kvm/storage/LinstorStorageAdaptor.java | 57 +- .../kvm/storage/LinstorStoragePool.java | 11 +- .../kvm/storage/ManagedNfsStorageAdaptor.java | 11 +- .../kvm/storage/ScaleIOStorageAdaptor.java | 205 ++++++- .../kvm/storage/ScaleIOStoragePool.java | 6 +- .../kvm/storage/StorageAdaptor.java | 7 +- .../utils/cryptsetup/CryptSetup.java | 124 ++++ .../utils/cryptsetup/CryptSetupException.java | 27 + .../cloudstack/utils/cryptsetup/KeyFile.java | 76 +++ .../utils/qemu/QemuImageOptions.java | 78 +++ .../apache/cloudstack/utils/qemu/QemuImg.java | 338 +++++++++-- .../cloudstack/utils/qemu/QemuObject.java | 128 ++++ .../LibvirtComputingResourceTest.java | 41 +- .../resource/LibvirtDomainXMLParserTest.java | 20 + .../kvm/resource/LibvirtVMDefTest.java | 20 + .../LibvirtMigrateCommandWrapperTest.java | 35 ++ .../storage/ScaleIOStorageAdaptorTest.java | 31 + 
.../utils/cryptsetup/CryptSetupTest.java | 71 +++ .../utils/cryptsetup/KeyFileTest.java | 49 ++ .../utils/qemu/QemuImageOptionsTest.java | 61 ++ .../cloudstack/utils/qemu/QemuImgTest.java | 59 +- .../cloudstack/utils/qemu/QemuObjectTest.java | 41 ++ .../CloudStackPrimaryDataStoreDriverImpl.java | 58 +- .../driver/ScaleIOPrimaryDataStoreDriver.java | 217 ++++++- ...olCopyVolumeToSecondaryCommandWrapper.java | 2 +- .../kvm/storage/StorPoolStorageAdaptor.java | 11 +- .../kvm/storage/StorPoolStoragePool.java | 8 +- .../com/cloud/api/query/QueryManagerImpl.java | 10 + .../query/dao/DiskOfferingJoinDaoImpl.java | 1 + .../query/dao/ServiceOfferingJoinDaoImpl.java | 1 + .../api/query/vo/DiskOfferingJoinVO.java | 6 + .../api/query/vo/ServiceOfferingJoinVO.java | 5 + .../ConfigurationManagerImpl.java | 20 +- .../deploy/DeploymentPlanningManagerImpl.java | 56 +- .../com/cloud/storage/StorageManagerImpl.java | 10 +- .../cloud/storage/VolumeApiServiceImpl.java | 53 +- .../storage/snapshot/SnapshotManagerImpl.java | 12 + .../cloud/template/TemplateManagerImpl.java | 10 + .../java/com/cloud/vm/UserVmManagerImpl.java | 5 + .../vm/snapshot/VMSnapshotManagerImpl.java | 6 + .../DeploymentPlanningManagerImplTest.java | 358 ++++++++++- .../storage/VolumeApiServiceImplTest.java | 56 +- .../test/resources/createNetworkOffering.xml | 1 + test/integration/smoke/test_disk_offerings.py | 50 +- .../smoke/test_service_offerings.py | 54 +- test/integration/smoke/test_volumes.py | 557 +++++++++++++++++- ui/public/locales/en.json | 3 + ui/src/config/section/offering.js | 4 +- ui/src/views/infra/HostInfo.vue | 8 + ui/src/views/offering/AddComputeOffering.vue | 13 +- ui/src/views/offering/AddDiskOffering.vue | 21 +- .../main/java/com/cloud/utils/UuidUtils.java | 11 +- 114 files changed, 4328 insertions(+), 433 deletions(-) create mode 100644 engine/schema/src/main/java/org/apache/cloudstack/secret/PassphraseVO.java create mode 100644 
engine/schema/src/main/java/org/apache/cloudstack/secret/dao/PassphraseDao.java create mode 100644 engine/schema/src/main/java/org/apache/cloudstack/secret/dao/PassphraseDaoImpl.java create mode 100644 plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/CryptSetup.java create mode 100644 plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/CryptSetupException.java create mode 100644 plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/KeyFile.java create mode 100644 plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuImageOptions.java create mode 100644 plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuObject.java create mode 100644 plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptorTest.java create mode 100644 plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/cryptsetup/CryptSetupTest.java create mode 100644 plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/cryptsetup/KeyFileTest.java create mode 100644 plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuImageOptionsTest.java create mode 100644 plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuObjectTest.java diff --git a/api/src/main/java/com/cloud/agent/api/to/DiskTO.java b/api/src/main/java/com/cloud/agent/api/to/DiskTO.java index 7b3d10bc4db..d22df2df172 100644 --- a/api/src/main/java/com/cloud/agent/api/to/DiskTO.java +++ b/api/src/main/java/com/cloud/agent/api/to/DiskTO.java @@ -40,6 +40,7 @@ public class DiskTO { public static final String VMDK = "vmdk"; public static final String EXPAND_DATASTORE = "expandDatastore"; public static final String TEMPLATE_RESIGN = "templateResign"; + public static final String SECRET_CONSUMER_DETAIL = "storageMigrateSecretConsumer"; private DataTO data; private Long diskSeq; diff --git 
a/api/src/main/java/com/cloud/agent/api/to/StorageFilerTO.java b/api/src/main/java/com/cloud/agent/api/to/StorageFilerTO.java index 8f58c9e1c91..e361e7a141f 100644 --- a/api/src/main/java/com/cloud/agent/api/to/StorageFilerTO.java +++ b/api/src/main/java/com/cloud/agent/api/to/StorageFilerTO.java @@ -16,6 +16,7 @@ // under the License. package com.cloud.agent.api.to; +import com.cloud.agent.api.LogLevel; import com.cloud.storage.Storage.StoragePoolType; import com.cloud.storage.StoragePool; @@ -24,6 +25,7 @@ public class StorageFilerTO { String uuid; String host; String path; + @LogLevel(LogLevel.Log4jLevel.Off) String userInfo; int port; StoragePoolType type; diff --git a/api/src/main/java/com/cloud/host/Host.java b/api/src/main/java/com/cloud/host/Host.java index e5a3889ff18..7563bc3b742 100644 --- a/api/src/main/java/com/cloud/host/Host.java +++ b/api/src/main/java/com/cloud/host/Host.java @@ -53,6 +53,7 @@ public interface Host extends StateObject, Identity, Partition, HAResour } } public static final String HOST_UEFI_ENABLE = "host.uefi.enable"; + public static final String HOST_VOLUME_ENCRYPTION = "host.volume.encryption"; /** * @return name of the machine. 
diff --git a/api/src/main/java/com/cloud/offering/DiskOffering.java b/api/src/main/java/com/cloud/offering/DiskOffering.java index 8f2a0c9f761..e1c41f77cbf 100644 --- a/api/src/main/java/com/cloud/offering/DiskOffering.java +++ b/api/src/main/java/com/cloud/offering/DiskOffering.java @@ -149,4 +149,8 @@ public interface DiskOffering extends InfrastructureEntity, Identity, InternalId boolean isComputeOnly(); boolean getDiskSizeStrictness(); + + boolean getEncrypt(); + + void setEncrypt(boolean encrypt); } diff --git a/api/src/main/java/com/cloud/storage/MigrationOptions.java b/api/src/main/java/com/cloud/storage/MigrationOptions.java index 38c1ee87bbe..a39a2a7c827 100644 --- a/api/src/main/java/com/cloud/storage/MigrationOptions.java +++ b/api/src/main/java/com/cloud/storage/MigrationOptions.java @@ -25,6 +25,7 @@ public class MigrationOptions implements Serializable { private String srcPoolUuid; private Storage.StoragePoolType srcPoolType; private Type type; + private ScopeType scopeType; private String srcBackingFilePath; private boolean copySrcTemplate; private String srcVolumeUuid; @@ -37,18 +38,20 @@ public class MigrationOptions implements Serializable { public MigrationOptions() { } - public MigrationOptions(String srcPoolUuid, Storage.StoragePoolType srcPoolType, String srcBackingFilePath, boolean copySrcTemplate) { + public MigrationOptions(String srcPoolUuid, Storage.StoragePoolType srcPoolType, String srcBackingFilePath, boolean copySrcTemplate, ScopeType scopeType) { this.srcPoolUuid = srcPoolUuid; this.srcPoolType = srcPoolType; this.type = Type.LinkedClone; + this.scopeType = scopeType; this.srcBackingFilePath = srcBackingFilePath; this.copySrcTemplate = copySrcTemplate; } - public MigrationOptions(String srcPoolUuid, Storage.StoragePoolType srcPoolType, String srcVolumeUuid) { + public MigrationOptions(String srcPoolUuid, Storage.StoragePoolType srcPoolType, String srcVolumeUuid, ScopeType scopeType) { this.srcPoolUuid = srcPoolUuid; this.srcPoolType 
= srcPoolType; this.type = Type.FullClone; + this.scopeType = scopeType; this.srcVolumeUuid = srcVolumeUuid; } @@ -60,6 +63,8 @@ public class MigrationOptions implements Serializable { return srcPoolType; } + public ScopeType getScopeType() { return scopeType; } + public String getSrcBackingFilePath() { return srcBackingFilePath; } diff --git a/api/src/main/java/com/cloud/storage/Storage.java b/api/src/main/java/com/cloud/storage/Storage.java index 300944559d6..7e63462b9da 100644 --- a/api/src/main/java/com/cloud/storage/Storage.java +++ b/api/src/main/java/com/cloud/storage/Storage.java @@ -130,33 +130,35 @@ public class Storage { } public static enum StoragePoolType { - Filesystem(false, true), // local directory - NetworkFilesystem(true, true), // NFS - IscsiLUN(true, false), // shared LUN, with a clusterfs overlay - Iscsi(true, false), // for e.g., ZFS Comstar - ISO(false, false), // for iso image - LVM(false, false), // XenServer local LVM SR - CLVM(true, false), - RBD(true, true), // http://libvirt.org/storage.html#StorageBackendRBD - SharedMountPoint(true, false), - VMFS(true, true), // VMware VMFS storage - PreSetup(true, true), // for XenServer, Storage Pool is set up by customers. 
- EXT(false, true), // XenServer local EXT SR - OCFS2(true, false), - SMB(true, false), - Gluster(true, false), - PowerFlex(true, true), // Dell EMC PowerFlex/ScaleIO (formerly VxFlexOS) - ManagedNFS(true, false), - Linstor(true, true), - DatastoreCluster(true, true), // for VMware, to abstract pool of clusters - StorPool(true, true); + Filesystem(false, true, true), // local directory + NetworkFilesystem(true, true, true), // NFS + IscsiLUN(true, false, false), // shared LUN, with a clusterfs overlay + Iscsi(true, false, false), // for e.g., ZFS Comstar + ISO(false, false, false), // for iso image + LVM(false, false, false), // XenServer local LVM SR + CLVM(true, false, false), + RBD(true, true, false), // http://libvirt.org/storage.html#StorageBackendRBD + SharedMountPoint(true, false, true), + VMFS(true, true, false), // VMware VMFS storage + PreSetup(true, true, false), // for XenServer, Storage Pool is set up by customers. + EXT(false, true, false), // XenServer local EXT SR + OCFS2(true, false, false), + SMB(true, false, false), + Gluster(true, false, false), + PowerFlex(true, true, true), // Dell EMC PowerFlex/ScaleIO (formerly VxFlexOS) + ManagedNFS(true, false, false), + Linstor(true, true, false), + DatastoreCluster(true, true, false), // for VMware, to abstract pool of clusters + StorPool(true, true, false); private final boolean shared; private final boolean overprovisioning; + private final boolean encryption; - StoragePoolType(boolean shared, boolean overprovisioning) { + StoragePoolType(boolean shared, boolean overprovisioning, boolean encryption) { this.shared = shared; this.overprovisioning = overprovisioning; + this.encryption = encryption; } public boolean isShared() { @@ -166,6 +168,8 @@ public class Storage { public boolean supportsOverProvisioning() { return overprovisioning; } + + public boolean supportsEncryption() { return encryption; } } public static List getNonSharedStoragePoolTypes() { diff --git 
a/api/src/main/java/com/cloud/storage/Volume.java b/api/src/main/java/com/cloud/storage/Volume.java index 5f58d52e85d..57db35f0c11 100644 --- a/api/src/main/java/com/cloud/storage/Volume.java +++ b/api/src/main/java/com/cloud/storage/Volume.java @@ -247,4 +247,12 @@ public interface Volume extends ControlledEntity, Identity, InternalIdentity, Ba String getExternalUuid(); void setExternalUuid(String externalUuid); + + public Long getPassphraseId(); + + public void setPassphraseId(Long id); + + public String getEncryptFormat(); + + public void setEncryptFormat(String encryptFormat); } diff --git a/api/src/main/java/com/cloud/vm/DiskProfile.java b/api/src/main/java/com/cloud/vm/DiskProfile.java index 9de5ce6fefd..971ebde496e 100644 --- a/api/src/main/java/com/cloud/vm/DiskProfile.java +++ b/api/src/main/java/com/cloud/vm/DiskProfile.java @@ -44,6 +44,7 @@ public class DiskProfile { private String cacheMode; private Long minIops; private Long maxIops; + private boolean requiresEncryption; private HypervisorType hyperType; @@ -63,6 +64,12 @@ public class DiskProfile { this.volumeId = volumeId; } + public DiskProfile(long volumeId, Volume.Type type, String name, long diskOfferingId, long size, String[] tags, boolean useLocalStorage, boolean recreatable, + Long templateId, boolean requiresEncryption) { + this(volumeId, type, name, diskOfferingId, size, tags, useLocalStorage, recreatable, templateId); + this.requiresEncryption = requiresEncryption; + } + public DiskProfile(Volume vol, DiskOffering offering, HypervisorType hyperType) { this(vol.getId(), vol.getVolumeType(), @@ -75,6 +82,7 @@ public class DiskProfile { null); this.hyperType = hyperType; this.provisioningType = offering.getProvisioningType(); + this.requiresEncryption = offering.getEncrypt() || vol.getPassphraseId() != null; } public DiskProfile(DiskProfile dp) { @@ -230,7 +238,6 @@ public class DiskProfile { return cacheMode; } - public Long getMinIops() { return minIops; } @@ -247,4 +254,7 @@ public class 
DiskProfile { this.maxIops = maxIops; } + public boolean requiresEncryption() { return requiresEncryption; } + + public void setEncryption(boolean encrypt) { this.requiresEncryption = encrypt; } } diff --git a/api/src/main/java/org/apache/cloudstack/api/ApiConstants.java b/api/src/main/java/org/apache/cloudstack/api/ApiConstants.java index 55002f70b1b..2abdb328702 100644 --- a/api/src/main/java/org/apache/cloudstack/api/ApiConstants.java +++ b/api/src/main/java/org/apache/cloudstack/api/ApiConstants.java @@ -109,6 +109,9 @@ public class ApiConstants { public static final String CUSTOM_JOB_ID = "customjobid"; public static final String CURRENT_START_IP = "currentstartip"; public static final String CURRENT_END_IP = "currentendip"; + public static final String ENCRYPT = "encrypt"; + public static final String ENCRYPT_ROOT = "encryptroot"; + public static final String ENCRYPTION_SUPPORTED = "encryptionsupported"; public static final String MIN_IOPS = "miniops"; public static final String MAX_IOPS = "maxiops"; public static final String HYPERVISOR_SNAPSHOT_RESERVE = "hypervisorsnapshotreserve"; diff --git a/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java b/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java index b628ce44f1a..46a8936e498 100644 --- a/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java +++ b/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java @@ -163,9 +163,14 @@ public class CreateDiskOfferingCmd extends BaseCmd { @Parameter(name = ApiConstants.DISK_SIZE_STRICTNESS, type = CommandType.BOOLEAN, description = "To allow or disallow the resize operation on the disks created from this disk offering, if the flag is true then resize is not allowed", since = "4.17") private Boolean diskSizeStrictness; + @Parameter(name = ApiConstants.ENCRYPT, type = CommandType.BOOLEAN, required=false, 
description = "Volumes using this offering should be encrypted", since = "4.18") + private Boolean encrypt; + @Parameter(name = ApiConstants.DETAILS, type = CommandType.MAP, description = "details to specify disk offering parameters", since = "4.16") private Map details; + + ///////////////////////////////////////////////////// /////////////////// Accessors /////////////////////// ///////////////////////////////////////////////////// @@ -202,6 +207,13 @@ public class CreateDiskOfferingCmd extends BaseCmd { return maxIops; } + public boolean getEncrypt() { + if (encrypt == null) { + return false; + } + return encrypt; + } + public List getDomainIds() { if (CollectionUtils.isNotEmpty(domainIds)) { Set set = new LinkedHashSet<>(domainIds); diff --git a/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java b/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java index 4eadfcdff25..fa890f310dc 100644 --- a/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java +++ b/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java @@ -242,6 +242,10 @@ public class CreateServiceOfferingCmd extends BaseCmd { since = "4.17") private Boolean diskOfferingStrictness; + @Parameter(name = ApiConstants.ENCRYPT_ROOT, type = CommandType.BOOLEAN, description = "VMs using this offering require root volume encryption", since="4.18") + private Boolean encryptRoot; + + ///////////////////////////////////////////////////// /////////////////// Accessors /////////////////////// ///////////////////////////////////////////////////// @@ -472,6 +476,13 @@ public class CreateServiceOfferingCmd extends BaseCmd { return diskOfferingStrictness == null ? 
false : diskOfferingStrictness; } + public boolean getEncryptRoot() { + if (encryptRoot != null) { + return encryptRoot; + } + return false; + } + ///////////////////////////////////////////////////// /////////////// API Implementation/////////////////// ///////////////////////////////////////////////////// diff --git a/api/src/main/java/org/apache/cloudstack/api/command/user/offering/ListDiskOfferingsCmd.java b/api/src/main/java/org/apache/cloudstack/api/command/user/offering/ListDiskOfferingsCmd.java index 91fa1f864dc..ed295f22e17 100644 --- a/api/src/main/java/org/apache/cloudstack/api/command/user/offering/ListDiskOfferingsCmd.java +++ b/api/src/main/java/org/apache/cloudstack/api/command/user/offering/ListDiskOfferingsCmd.java @@ -58,6 +58,9 @@ public class ListDiskOfferingsCmd extends BaseListDomainResourcesCmd { @Parameter(name = ApiConstants.STORAGE_ID, type = CommandType.UUID, entityType = StoragePoolResponse.class, description = "The ID of the storage pool, tags of the storage pool are used to filter the offerings", since = "4.17") private Long storagePoolId; + @Parameter(name = ApiConstants.ENCRYPT, type = CommandType.BOOLEAN, description = "listed offerings support disk encryption", since = "4.18") + private Boolean encrypt; + ///////////////////////////////////////////////////// /////////////////// Accessors /////////////////////// ///////////////////////////////////////////////////// @@ -78,9 +81,9 @@ public class ListDiskOfferingsCmd extends BaseListDomainResourcesCmd { return volumeId; } - public Long getStoragePoolId() { - return storagePoolId; - } + public Long getStoragePoolId() { return storagePoolId; } + + public Boolean getEncrypt() { return encrypt; } ///////////////////////////////////////////////////// /////////////// API Implementation/////////////////// diff --git a/api/src/main/java/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java 
b/api/src/main/java/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java index 91cac0937d4..9774c88d681 100644 --- a/api/src/main/java/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java +++ b/api/src/main/java/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java @@ -83,6 +83,12 @@ public class ListServiceOfferingsCmd extends BaseListDomainResourcesCmd { since = "4.15") private Integer cpuSpeed; + @Parameter(name = ApiConstants.ENCRYPT_ROOT, + type = CommandType.BOOLEAN, + description = "listed offerings support root disk encryption", + since = "4.18") + private Boolean encryptRoot; + ///////////////////////////////////////////////////// /////////////////// Accessors /////////////////////// ///////////////////////////////////////////////////// @@ -123,6 +129,8 @@ public class ListServiceOfferingsCmd extends BaseListDomainResourcesCmd { return cpuSpeed; } + public Boolean getEncryptRoot() { return encryptRoot; } + ///////////////////////////////////////////////////// /////////////// API Implementation/////////////////// ///////////////////////////////////////////////////// diff --git a/api/src/main/java/org/apache/cloudstack/api/command/user/snapshot/CreateSnapshotCmd.java b/api/src/main/java/org/apache/cloudstack/api/command/user/snapshot/CreateSnapshotCmd.java index 9b616ea28fe..787065f9a6d 100644 --- a/api/src/main/java/org/apache/cloudstack/api/command/user/snapshot/CreateSnapshotCmd.java +++ b/api/src/main/java/org/apache/cloudstack/api/command/user/snapshot/CreateSnapshotCmd.java @@ -226,6 +226,10 @@ public class CreateSnapshotCmd extends BaseAsyncCreateCmd { throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, String.format("Snapshot from volume [%s] was not found in database.", getVolumeUuid())); } } catch (Exception e) { + if (e.getCause() instanceof UnsupportedOperationException) { + throw new ServerApiException(ApiErrorCode.UNSUPPORTED_ACTION_ERROR, String.format("Failed to 
create snapshot due to unsupported operation: %s", e.getCause().getMessage())); + } + String errorMessage = "Failed to create snapshot due to an internal error creating snapshot for volume " + getVolumeUuid(); s_logger.error(errorMessage, e); throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, errorMessage); diff --git a/api/src/main/java/org/apache/cloudstack/api/response/DiskOfferingResponse.java b/api/src/main/java/org/apache/cloudstack/api/response/DiskOfferingResponse.java index 1bea164d359..b8244aebc60 100644 --- a/api/src/main/java/org/apache/cloudstack/api/response/DiskOfferingResponse.java +++ b/api/src/main/java/org/apache/cloudstack/api/response/DiskOfferingResponse.java @@ -156,10 +156,15 @@ public class DiskOfferingResponse extends BaseResponseWithAnnotations { @Param(description = "the vsphere storage policy tagged to the disk offering in case of VMware", since = "4.15") private String vsphereStoragePolicy; + @SerializedName(ApiConstants.DISK_SIZE_STRICTNESS) @Param(description = "To allow or disallow the resize operation on the disks created from this disk offering, if the flag is true then resize is not allowed", since = "4.17") private Boolean diskSizeStrictness; + @SerializedName(ApiConstants.ENCRYPT) + @Param(description = "Whether disks using this offering will be encrypted on primary storage", since = "4.18") + private Boolean encrypt; + @SerializedName(ApiConstants.DETAILS) @Param(description = "additional key/value details tied with this disk offering", since = "4.17") private Map details; @@ -381,6 +386,8 @@ public class DiskOfferingResponse extends BaseResponseWithAnnotations { this.diskSizeStrictness = diskSizeStrictness; } + public void setEncrypt(Boolean encrypt) { this.encrypt = encrypt; } + public void setDetails(Map details) { this.details = details; } diff --git a/api/src/main/java/org/apache/cloudstack/api/response/HostResponse.java b/api/src/main/java/org/apache/cloudstack/api/response/HostResponse.java index 
1290af7f506..5d809cf1553 100644 --- a/api/src/main/java/org/apache/cloudstack/api/response/HostResponse.java +++ b/api/src/main/java/org/apache/cloudstack/api/response/HostResponse.java @@ -270,6 +270,10 @@ public class HostResponse extends BaseResponseWithAnnotations { @Param(description = "true if the host has capability to support UEFI boot") private Boolean uefiCapabilty; + @SerializedName(ApiConstants.ENCRYPTION_SUPPORTED) + @Param(description = "true if the host supports encryption", since = "4.18") + private Boolean encryptionSupported; + @Override public String getObjectId() { return this.getId(); @@ -533,6 +537,13 @@ public class HostResponse extends BaseResponseWithAnnotations { detailsCopy.remove("username"); detailsCopy.remove("password"); + if (detailsCopy.containsKey(Host.HOST_VOLUME_ENCRYPTION)) { + this.setEncryptionSupported(Boolean.parseBoolean((String) detailsCopy.get(Host.HOST_VOLUME_ENCRYPTION))); + detailsCopy.remove(Host.HOST_VOLUME_ENCRYPTION); + } else { + this.setEncryptionSupported(Boolean.FALSE); // default + } + this.details = detailsCopy; } @@ -718,4 +729,8 @@ public class HostResponse extends BaseResponseWithAnnotations { public void setUefiCapabilty(Boolean hostCapability) { this.uefiCapabilty = hostCapability; } + + public void setEncryptionSupported(Boolean encryptionSupported) { + this.encryptionSupported = encryptionSupported; + } } diff --git a/api/src/main/java/org/apache/cloudstack/api/response/ServiceOfferingResponse.java b/api/src/main/java/org/apache/cloudstack/api/response/ServiceOfferingResponse.java index b65911e572c..53767adf17d 100644 --- a/api/src/main/java/org/apache/cloudstack/api/response/ServiceOfferingResponse.java +++ b/api/src/main/java/org/apache/cloudstack/api/response/ServiceOfferingResponse.java @@ -226,6 +226,10 @@ public class ServiceOfferingResponse extends BaseResponseWithAnnotations { @Param(description = "the display text of the disk offering", since = "4.17") private String
diskOfferingDisplayText; + @SerializedName(ApiConstants.ENCRYPT_ROOT) + @Param(description = "true if virtual machine root disk will be encrypted on storage", since = "4.18") + private Boolean encryptRoot; + public ServiceOfferingResponse() { } @@ -505,6 +509,7 @@ public class ServiceOfferingResponse extends BaseResponseWithAnnotations { this.dynamicScalingEnabled = dynamicScalingEnabled; } + public Boolean getDiskOfferingStrictness() { return diskOfferingStrictness; } @@ -536,4 +541,6 @@ public class ServiceOfferingResponse extends BaseResponseWithAnnotations { public String getDiskOfferingDisplayText() { return diskOfferingDisplayText; } + + public void setEncryptRoot(Boolean encrypt) { this.encryptRoot = encrypt; } } diff --git a/core/src/main/java/com/cloud/agent/api/storage/ResizeVolumeCommand.java b/core/src/main/java/com/cloud/agent/api/storage/ResizeVolumeCommand.java index 70d4d3ebab4..db867698e91 100644 --- a/core/src/main/java/com/cloud/agent/api/storage/ResizeVolumeCommand.java +++ b/core/src/main/java/com/cloud/agent/api/storage/ResizeVolumeCommand.java @@ -20,8 +20,11 @@ package com.cloud.agent.api.storage; import com.cloud.agent.api.Command; +import com.cloud.agent.api.LogLevel; import com.cloud.agent.api.to.StorageFilerTO; +import java.util.Arrays; + public class ResizeVolumeCommand extends Command { private String path; private StorageFilerTO pool; @@ -35,6 +38,10 @@ public class ResizeVolumeCommand extends Command { private boolean managed; private String iScsiName; + @LogLevel(LogLevel.Log4jLevel.Off) + private byte[] passphrase; + private String encryptFormat; + protected ResizeVolumeCommand() { } @@ -48,6 +55,13 @@ public class ResizeVolumeCommand extends Command { this.managed = false; } + public ResizeVolumeCommand(String path, StorageFilerTO pool, Long currentSize, Long newSize, boolean shrinkOk, String vmInstance, + String chainInfo, byte[] passphrase, String encryptFormat) { + this(path, pool, currentSize, newSize, shrinkOk, vmInstance, 
chainInfo); + this.passphrase = passphrase; + this.encryptFormat = encryptFormat; + } + public ResizeVolumeCommand(String path, StorageFilerTO pool, Long currentSize, Long newSize, boolean shrinkOk, String vmInstance, String chainInfo) { this(path, pool, currentSize, newSize, shrinkOk, vmInstance); this.chainInfo = chainInfo; @@ -89,6 +103,16 @@ public class ResizeVolumeCommand extends Command { public String getChainInfo() {return chainInfo; } + public String getEncryptFormat() { return encryptFormat; } + + public byte[] getPassphrase() { return passphrase; } + + public void clearPassphrase() { + if (this.passphrase != null) { + Arrays.fill(this.passphrase, (byte) 0); + } + } + /** * {@inheritDoc} */ diff --git a/core/src/main/java/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java b/core/src/main/java/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java index 7044490c720..4a9a24a9f53 100644 --- a/core/src/main/java/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java +++ b/core/src/main/java/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java @@ -19,6 +19,7 @@ package com.cloud.storage.resource; +import com.cloud.serializer.GsonHelper; import org.apache.cloudstack.agent.directdownload.DirectDownloadCommand; import org.apache.cloudstack.storage.to.VolumeObjectTO; import org.apache.cloudstack.storage.command.CheckDataStoreStoragePolicyComplainceCommand; @@ -48,6 +49,7 @@ import com.google.gson.Gson; public class StorageSubsystemCommandHandlerBase implements StorageSubsystemCommandHandler { private static final Logger s_logger = Logger.getLogger(StorageSubsystemCommandHandlerBase.class); + protected static final Gson s_gsonLogger = GsonHelper.getGsonLogger(); protected StorageProcessor processor; public StorageSubsystemCommandHandlerBase(StorageProcessor processor) { @@ -175,7 +177,7 @@ public class StorageSubsystemCommandHandlerBase implements StorageSubsystemComma private void logCommand(Command cmd) { try { - s_logger.debug(String.format("Executing command %s: [%s].", cmd.getClass().getSimpleName(), new Gson().toJson(cmd))); + s_logger.debug(String.format("Executing command %s: [%s].", cmd.getClass().getSimpleName(), s_gsonLogger.toJson(cmd))); } catch (Exception e) { s_logger.debug(String.format("Executing command %s.", cmd.getClass().getSimpleName())); } diff --git a/core/src/main/java/org/apache/cloudstack/storage/to/VolumeObjectTO.java b/core/src/main/java/org/apache/cloudstack/storage/to/VolumeObjectTO.java index 36c35e57273..8473ea7a49e 100644 --- a/core/src/main/java/org/apache/cloudstack/storage/to/VolumeObjectTO.java +++ b/core/src/main/java/org/apache/cloudstack/storage/to/VolumeObjectTO.java @@ -19,6 +19,7 @@ package org.apache.cloudstack.storage.to; +import com.cloud.agent.api.LogLevel; import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo; import com.cloud.agent.api.to.DataObjectType; @@ -30,6 +31,8 @@ import com.cloud.storage.MigrationOptions; import com.cloud.storage.Storage; import com.cloud.storage.Volume; +import java.util.Arrays; + public class VolumeObjectTO implements DataTO { private String uuid; private Volume.Type volumeType; @@ -68,6 +71,10 @@ public class VolumeObjectTO implements DataTO { private String updatedDataStoreUUID; private String vSphereStoragePolicyId; + @LogLevel(LogLevel.Log4jLevel.Off) + private byte[] passphrase; + private String encryptFormat; + public VolumeObjectTO() { } @@ -110,6 +117,8 @@ public class VolumeObjectTO implements DataTO { this.directDownload = volume.isDirectDownload(); this.deployAsIs = volume.isDeployAsIs(); this.vSphereStoragePolicyId = volume.getvSphereStoragePolicyId(); + this.passphrase = volume.getPassphrase(); + this.encryptFormat = volume.getEncryptFormat(); } public String getUuid() { @@ -357,4 +366,22 @@ public class VolumeObjectTO implements DataTO { public void setvSphereStoragePolicyId(String vSphereStoragePolicyId) { this.vSphereStoragePolicyId = vSphereStoragePolicyId; } + + public
String getEncryptFormat() { return encryptFormat; } + + public void setEncryptFormat(String encryptFormat) { this.encryptFormat = encryptFormat; } + + public byte[] getPassphrase() { return passphrase; } + + public void setPassphrase(byte[] passphrase) { this.passphrase = passphrase; } + + public void clearPassphrase() { + if (this.passphrase != null) { + Arrays.fill(this.passphrase, (byte) 0); + } + } + + public boolean requiresEncryption() { + return passphrase != null && passphrase.length > 0; + } } diff --git a/debian/control b/debian/control index 066994785b3..d5273d77e92 100644 --- a/debian/control +++ b/debian/control @@ -15,14 +15,14 @@ Description: A common package which contains files which are shared by several C Package: cloudstack-management Architecture: all -Depends: ${python3:Depends}, openjdk-11-jre-headless | java11-runtime-headless | java11-runtime | openjdk-11-jre-headless | zulu-11, cloudstack-common (= ${source:Version}), net-tools, sudo, python3-mysql.connector, augeas-tools, mysql-client | mariadb-client, adduser, bzip2, ipmitool, file, gawk, iproute2, qemu-utils, python3-dnspython, lsb-release, init-system-helpers (>= 1.14~), python3-setuptools +Depends: ${python3:Depends}, openjdk-11-jre-headless | java11-runtime-headless | java11-runtime | openjdk-11-jre-headless | zulu-11, cloudstack-common (= ${source:Version}), net-tools, sudo, python3-mysql.connector, augeas-tools, mysql-client | mariadb-client, adduser, bzip2, ipmitool, file, gawk, iproute2, qemu-utils, haveged, python3-dnspython, lsb-release, init-system-helpers (>= 1.14~), python3-setuptools Conflicts: cloud-server, cloud-client, cloud-client-ui Description: CloudStack server library The CloudStack management server Package: cloudstack-agent Architecture: all -Depends: ${python:Depends}, ${python3:Depends}, openjdk-11-jre-headless | java11-runtime-headless | java11-runtime | openjdk-11-jre-headless | zulu-11, cloudstack-common (= ${source:Version}), lsb-base (>= 9), openssh-client, 
qemu-kvm (>= 2.5) | qemu-system-x86 (>= 5.2), libvirt-bin (>= 1.3) | libvirt-daemon-system (>= 3.0), iproute2, ebtables, vlan, ipset, python3-libvirt, ethtool, iptables, lsb-release, aria2, ufw, apparmor +Depends: ${python:Depends}, ${python3:Depends}, openjdk-11-jre-headless | java11-runtime-headless | java11-runtime | openjdk-11-jre-headless | zulu-11, cloudstack-common (= ${source:Version}), lsb-base (>= 9), openssh-client, qemu-kvm (>= 2.5) | qemu-system-x86 (>= 5.2), libvirt-bin (>= 1.3) | libvirt-daemon-system (>= 3.0), iproute2, ebtables, vlan, ipset, python3-libvirt, ethtool, iptables, cryptsetup, rng-tools, lsb-release, aria2, ufw, apparmor Recommends: init-system-helpers Conflicts: cloud-agent, cloud-agent-libs, cloud-agent-deps, cloud-agent-scripts Description: CloudStack agent diff --git a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/EndPointSelector.java b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/EndPointSelector.java index ec272501998..6f6e79d067e 100644 --- a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/EndPointSelector.java +++ b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/EndPointSelector.java @@ -23,14 +23,22 @@ import java.util.List; public interface EndPointSelector { EndPoint select(DataObject srcData, DataObject destData); + EndPoint select(DataObject srcData, DataObject destData, boolean encryptionSupportRequired); + EndPoint select(DataObject srcData, DataObject destData, StorageAction action); + EndPoint select(DataObject srcData, DataObject destData, StorageAction action, boolean encryptionSupportRequired); + EndPoint select(DataObject object); EndPoint select(DataStore store); + EndPoint select(DataObject object, boolean encryptionSupportRequired); + EndPoint select(DataObject object, StorageAction action); + EndPoint select(DataObject object, StorageAction action, boolean encryptionSupportRequired); + List<EndPoint> selectAll(DataStore store); List<EndPoint> findAllEndpointsForScope(DataStore store); diff --git a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeInfo.java b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeInfo.java index 33386f172d3..be16c20d173 100644 --- a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeInfo.java +++ b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeInfo.java @@ -93,5 +93,7 @@ public interface VolumeInfo extends DataObject, Volume { public String getvSphereStoragePolicyId(); + public byte[] getPassphrase(); + Volume getVolume(); } diff --git a/engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java b/engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java index 74f5eca7731..18cd38180bc 100644 --- a/engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java +++ b/engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java @@ -38,6 +38,8 @@ import javax.inject.Inject; import javax.naming.ConfigurationException; import com.cloud.storage.StorageUtil; +import org.apache.cloudstack.secret.dao.PassphraseDao; +import org.apache.cloudstack.secret.PassphraseVO; import org.apache.cloudstack.api.command.admin.vm.MigrateVMCmd; import org.apache.cloudstack.api.command.admin.volume.MigrateVolumeCmdByAdmin; import org.apache.cloudstack.api.command.user.volume.MigrateVolumeCmd; @@ -232,6 +234,8 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati private SecondaryStorageVmDao secondaryStorageVmDao; @Inject VolumeApiService _volumeApiService; + @Inject + PassphraseDao passphraseDao; @Inject protected SnapshotHelper snapshotHelper; @@ -271,7 +275,8 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati //
Find a destination storage pool with the specified criteria DiskOffering diskOffering = _entityMgr.findById(DiskOffering.class, volumeInfo.getDiskOfferingId()); DiskProfile dskCh = new DiskProfile(volumeInfo.getId(), volumeInfo.getVolumeType(), volumeInfo.getName(), diskOffering.getId(), diskOffering.getDiskSize(), diskOffering.getTagsArray(), - diskOffering.isUseLocalStorage(), diskOffering.isRecreatable(), null); + diskOffering.isUseLocalStorage(), diskOffering.isRecreatable(), null, (diskOffering.getEncrypt() || volumeInfo.getPassphraseId() != null)); + dskCh.setHyperType(dataDiskHyperType); storageMgr.setDiskProfileThrottling(dskCh, null, diskOffering); @@ -305,6 +310,13 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati newVol.setInstanceId(oldVol.getInstanceId()); newVol.setRecreatable(oldVol.isRecreatable()); newVol.setFormat(oldVol.getFormat()); + + if (oldVol.getPassphraseId() != null) { + PassphraseVO passphrase = passphraseDao.persist(new PassphraseVO()); + passphrase.clearPassphrase(); + newVol.setPassphraseId(passphrase.getId()); + } + return _volsDao.persist(newVol); } @@ -457,6 +469,10 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati Pair pod = null; DiskOffering diskOffering = _entityMgr.findById(DiskOffering.class, volume.getDiskOfferingId()); + if (diskOffering.getEncrypt()) { + VolumeVO vol = (VolumeVO) volume; + volume = setPassphraseForVolumeEncryption(vol); + } DataCenter dc = _entityMgr.findById(DataCenter.class, volume.getDataCenterId()); DiskProfile dskCh = new DiskProfile(volume, diskOffering, snapshot.getHypervisorType()); @@ -581,21 +597,21 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati } protected DiskProfile createDiskCharacteristics(VolumeInfo volumeInfo, VirtualMachineTemplate template, DataCenter dc, DiskOffering diskOffering) { + boolean requiresEncryption = diskOffering.getEncrypt() || volumeInfo.getPassphraseId() != null; if 
(volumeInfo.getVolumeType() == Type.ROOT && Storage.ImageFormat.ISO != template.getFormat()) { String templateToString = getReflectOnlySelectedFields(template); String zoneToString = getReflectOnlySelectedFields(dc); - TemplateDataStoreVO ss = _vmTemplateStoreDao.findByTemplateZoneDownloadStatus(template.getId(), dc.getId(), VMTemplateStorageResourceAssoc.Status.DOWNLOADED); if (ss == null) { throw new CloudRuntimeException(String.format("Template [%s] has not been completely downloaded to the zone [%s].", templateToString, zoneToString)); } - return new DiskProfile(volumeInfo.getId(), volumeInfo.getVolumeType(), volumeInfo.getName(), diskOffering.getId(), ss.getSize(), diskOffering.getTagsArray(), diskOffering.isUseLocalStorage(), - diskOffering.isRecreatable(), Storage.ImageFormat.ISO != template.getFormat() ? template.getId() : null); + diskOffering.isRecreatable(), Storage.ImageFormat.ISO != template.getFormat() ? template.getId() : null, requiresEncryption); } else { return new DiskProfile(volumeInfo.getId(), volumeInfo.getVolumeType(), volumeInfo.getName(), diskOffering.getId(), diskOffering.getDiskSize(), diskOffering.getTagsArray(), - diskOffering.isUseLocalStorage(), diskOffering.isRecreatable(), null); + diskOffering.isUseLocalStorage(), diskOffering.isRecreatable(), null, requiresEncryption); + } } @@ -653,8 +669,16 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati storageMgr.setDiskProfileThrottling(dskCh, null, diskOffering); } - if (diskOffering != null && diskOffering.isCustomized()) { - dskCh.setSize(size); + if (diskOffering != null) { + if (diskOffering.isCustomized()) { + dskCh.setSize(size); + } + + if (diskOffering.getEncrypt()) { + VolumeVO vol = _volsDao.findById(volumeInfo.getId()); + setPassphraseForVolumeEncryption(vol); + volumeInfo = volFactory.getVolume(volumeInfo.getId()); + } } dskCh.setHyperType(hyperType); @@ -697,7 +721,6 @@ public class VolumeOrchestrator extends ManagerBase implements 
VolumeOrchestrati throw new CloudRuntimeException(msg); } } - return result.getVolume(); } catch (InterruptedException | ExecutionException e) { String msg = String.format("Failed to create volume [%s] due to [%s].", volumeToString, e.getMessage()); @@ -1598,6 +1621,10 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati destPool = dataStoreMgr.getDataStore(pool.getId(), DataStoreRole.Primary); } if (vol.getState() == Volume.State.Allocated || vol.getState() == Volume.State.Creating) { + DiskOffering diskOffering = _entityMgr.findById(DiskOffering.class, vol.getDiskOfferingId()); + if (diskOffering.getEncrypt()) { + vol = setPassphraseForVolumeEncryption(vol); + } newVol = vol; } else { newVol = switchVolume(vol, vm); @@ -1715,6 +1742,20 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati return new Pair(newVol, destPool); } + private VolumeVO setPassphraseForVolumeEncryption(VolumeVO volume) { + if (volume.getPassphraseId() != null) { + return volume; + } + s_logger.debug("Creating passphrase for the volume: " + volume.getName()); + long startTime = System.currentTimeMillis(); + PassphraseVO passphrase = passphraseDao.persist(new PassphraseVO()); + passphrase.clearPassphrase(); + volume.setPassphraseId(passphrase.getId()); + long finishTime = System.currentTimeMillis(); + s_logger.debug("Creating and persisting passphrase took: " + (finishTime - startTime) + " ms for the volume: " + volume.toString()); + return _volsDao.persist(volume); + } + @Override public void prepare(VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, InsufficientStorageCapacityException, ConcurrentOperationException, StorageAccessException { if (dest == null) { diff --git a/engine/schema/src/main/java/com/cloud/storage/DiskOfferingVO.java b/engine/schema/src/main/java/com/cloud/storage/DiskOfferingVO.java index bfdbba2a6d3..b4f112f98e8 100644 --- 
a/engine/schema/src/main/java/com/cloud/storage/DiskOfferingVO.java +++ b/engine/schema/src/main/java/com/cloud/storage/DiskOfferingVO.java @@ -129,6 +129,8 @@ public class DiskOfferingVO implements DiskOffering { @Column(name = "iops_write_rate_max_length") private Long iopsWriteRateMaxLength; + @Column(name = "encrypt") + private boolean encrypt; @Column(name = "cache_mode", updatable = true, nullable = false) @Enumerated(value = EnumType.STRING) @@ -568,10 +570,17 @@ public class DiskOfferingVO implements DiskOffering { return hypervisorSnapshotReserve; } + @Override + public boolean getEncrypt() { return encrypt; } + + @Override + public void setEncrypt(boolean encrypt) { this.encrypt = encrypt; } + public boolean isShared() { return !useLocalStorage; } + public boolean getDiskSizeStrictness() { return diskSizeStrictness; } diff --git a/engine/schema/src/main/java/com/cloud/storage/VolumeVO.java b/engine/schema/src/main/java/com/cloud/storage/VolumeVO.java index 2e81b4e0028..0bd71ea6d86 100644 --- a/engine/schema/src/main/java/com/cloud/storage/VolumeVO.java +++ b/engine/schema/src/main/java/com/cloud/storage/VolumeVO.java @@ -32,11 +32,12 @@ import javax.persistence.Temporal; import javax.persistence.TemporalType; import javax.persistence.Transient; +import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils; + import com.cloud.storage.Storage.ProvisioningType; import com.cloud.storage.Storage.StoragePoolType; import com.cloud.utils.NumbersUtil; import com.cloud.utils.db.GenericDao; -import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils; @Entity @Table(name = "volumes") @@ -173,6 +174,12 @@ public class VolumeVO implements Volume { @Transient private boolean deployAsIs; + @Column(name = "passphrase_id") + private Long passphraseId; + + @Column(name = "encrypt_format") + private String encryptFormat; + // Real Constructor public VolumeVO(Type type, String name, long dcId, long 
domainId, long accountId, long diskOfferingId, Storage.ProvisioningType provisioningType, long size, @@ -500,7 +507,7 @@ public class VolumeVO implements Volume { @Override public String toString() { - return new StringBuilder("Vol[").append(id).append("|vm=").append(instanceId).append("|").append(volumeType).append("]").toString(); + return new StringBuilder("Vol[").append(id).append("|name=").append(name).append("|vm=").append(instanceId).append("|").append(volumeType).append("]").toString(); } @Override @@ -663,4 +670,11 @@ public class VolumeVO implements Volume { this.externalUuid = externalUuid; } + public Long getPassphraseId() { return passphraseId; } + + public void setPassphraseId(Long id) { this.passphraseId = id; } + + public String getEncryptFormat() { return encryptFormat; } + + public void setEncryptFormat(String encryptFormat) { this.encryptFormat = encryptFormat; } } diff --git a/engine/schema/src/main/java/com/cloud/storage/dao/VolumeDao.java b/engine/schema/src/main/java/com/cloud/storage/dao/VolumeDao.java index 71c291e900b..64151d60687 100644 --- a/engine/schema/src/main/java/com/cloud/storage/dao/VolumeDao.java +++ b/engine/schema/src/main/java/com/cloud/storage/dao/VolumeDao.java @@ -102,6 +102,13 @@ public interface VolumeDao extends GenericDao<VolumeVO, Long>, StateDao<Volume.Event, Volume.State, Volume> { List<VolumeVO> findIncludingRemovedByZone(long zoneId); + /** + * Lists all volumes using a given passphrase ID + * @param passphraseId + * @return list of volumes + */ + List<VolumeVO> listVolumesByPassphraseId(long passphraseId); + /** * Gets the Total Primary Storage space allocated for an account * diff --git a/engine/schema/src/main/java/com/cloud/storage/dao/VolumeDaoImpl.java b/engine/schema/src/main/java/com/cloud/storage/dao/VolumeDaoImpl.java index 9a46e923f88..d27faa3a78a 100644 --- a/engine/schema/src/main/java/com/cloud/storage/dao/VolumeDaoImpl.java +++ b/engine/schema/src/main/java/com/cloud/storage/dao/VolumeDaoImpl.java @@ -382,6 +382,7 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements
Vol AllFieldsSearch.and("updateTime", AllFieldsSearch.entity().getUpdated(), SearchCriteria.Op.LT); AllFieldsSearch.and("updatedCount", AllFieldsSearch.entity().getUpdatedCount(), Op.EQ); AllFieldsSearch.and("name", AllFieldsSearch.entity().getName(), Op.EQ); + AllFieldsSearch.and("passphraseId", AllFieldsSearch.entity().getPassphraseId(), Op.EQ); AllFieldsSearch.done(); RootDiskStateSearch = createSearchBuilder(); @@ -669,16 +670,25 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol } } + @Override + public List<VolumeVO> listVolumesByPassphraseId(long passphraseId) { + SearchCriteria<VolumeVO> sc = AllFieldsSearch.create(); + sc.setParameters("passphraseId", passphraseId); + return listBy(sc); + } + @Override @DB public boolean remove(Long id) { TransactionLegacy txn = TransactionLegacy.currentTxn(); txn.start(); + s_logger.debug(String.format("Removing volume %s from DB", id)); VolumeVO entry = findById(id); if (entry != null) { _tagsDao.removeByIdAndType(id, ResourceObjectType.Volume); } boolean result = super.remove(id); + txn.commit(); return result; } diff --git a/engine/schema/src/main/java/org/apache/cloudstack/secret/PassphraseVO.java b/engine/schema/src/main/java/org/apache/cloudstack/secret/PassphraseVO.java new file mode 100644 index 00000000000..1c0e5e47ec9 --- /dev/null +++ b/engine/schema/src/main/java/org/apache/cloudstack/secret/PassphraseVO.java @@ -0,0 +1,73 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.cloudstack.secret; + +import com.cloud.utils.db.Encrypt; +import com.cloud.utils.exception.CloudRuntimeException; + +import javax.persistence.Column; +import javax.persistence.Entity; +import javax.persistence.GeneratedValue; +import javax.persistence.GenerationType; +import javax.persistence.Id; +import javax.persistence.Table; +import java.security.NoSuchAlgorithmException; +import java.security.SecureRandom; +import java.util.Arrays; +import java.util.Base64; + +@Entity +@Table(name = "passphrase") +public class PassphraseVO { + @Id + @GeneratedValue(strategy = GenerationType.IDENTITY) + @Column(name = "id") + private Long id; + + @Column(name = "passphrase") + @Encrypt + private byte[] passphrase; + + public PassphraseVO() { + try { + SecureRandom random = SecureRandom.getInstanceStrong(); + byte[] temporary = new byte[48]; // 48 byte random passphrase buffer + this.passphrase = new byte[64]; // 48 byte random passphrase as base64 for usability + random.nextBytes(temporary); + Base64.getEncoder().encode(temporary, this.passphrase); + Arrays.fill(temporary, (byte) 0); // clear passphrase from buffer + } catch (NoSuchAlgorithmException ex) { + throw new CloudRuntimeException("Volume encryption requested but system is missing specified algorithm to generate passphrase", ex); + } + } + + public PassphraseVO(PassphraseVO existing) { + this.passphrase = Arrays.copyOf(existing.getPassphrase(), existing.getPassphrase().length); // defensive copy so clearing one instance does not clear the other + } + + public void clearPassphrase() { + if (this.passphrase != null) { + Arrays.fill(this.passphrase, (byte) 0); + } + } + + public byte[]
getPassphrase() { return this.passphrase; } + + public Long getId() { return this.id; } +} diff --git a/engine/schema/src/main/java/org/apache/cloudstack/secret/dao/PassphraseDao.java b/engine/schema/src/main/java/org/apache/cloudstack/secret/dao/PassphraseDao.java new file mode 100644 index 00000000000..c03eb2a9820 --- /dev/null +++ b/engine/schema/src/main/java/org/apache/cloudstack/secret/dao/PassphraseDao.java @@ -0,0 +1,25 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.cloudstack.secret.dao; + +import org.apache.cloudstack.secret.PassphraseVO; +import com.cloud.utils.db.GenericDao; + +public interface PassphraseDao extends GenericDao<PassphraseVO, Long> { +} \ No newline at end of file diff --git a/engine/schema/src/main/java/org/apache/cloudstack/secret/dao/PassphraseDaoImpl.java b/engine/schema/src/main/java/org/apache/cloudstack/secret/dao/PassphraseDaoImpl.java new file mode 100644 index 00000000000..9b4e36feee6 --- /dev/null +++ b/engine/schema/src/main/java/org/apache/cloudstack/secret/dao/PassphraseDaoImpl.java @@ -0,0 +1,25 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements.
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.cloudstack.secret.dao; + +import org.apache.cloudstack.secret.PassphraseVO; +import com.cloud.utils.db.GenericDaoBase; + +public class PassphraseDaoImpl extends GenericDaoBase<PassphraseVO, Long> implements PassphraseDao { +} diff --git a/engine/schema/src/main/resources/META-INF/cloudstack/core/spring-engine-schema-core-daos-context.xml b/engine/schema/src/main/resources/META-INF/cloudstack/core/spring-engine-schema-core-daos-context.xml index fcd3be6c92e..d4676f3d58e 100644 --- a/engine/schema/src/main/resources/META-INF/cloudstack/core/spring-engine-schema-core-daos-context.xml +++ b/engine/schema/src/main/resources/META-INF/cloudstack/core/spring-engine-schema-core-daos-context.xml @@ -302,4 +302,5 @@ + <bean id="passphraseDaoImpl" class="org.apache.cloudstack.secret.dao.PassphraseDaoImpl" /> diff --git a/engine/schema/src/main/resources/META-INF/db/schema-41710to41800.sql b/engine/schema/src/main/resources/META-INF/db/schema-41710to41800.sql index f5d06a38117..859dc6b5e3d 100644 --- a/engine/schema/src/main/resources/META-INF/db/schema-41710to41800.sql +++ b/engine/schema/src/main/resources/META-INF/db/schema-41710to41800.sql @@ -23,6 +23,202 @@ UPDATE `cloud`.`service_offering` so SET so.limit_cpu_use = 1 WHERE so.default_use = 1 AND so.vm_type IN ('domainrouter', 'secondarystoragevm', 'consoleproxy', 'internalloadbalancervm', 'elasticloadbalancervm'); 
+-- Idempotent ADD COLUMN +DROP PROCEDURE IF EXISTS `cloud`.`IDEMPOTENT_ADD_COLUMN`; +CREATE PROCEDURE `cloud`.`IDEMPOTENT_ADD_COLUMN` ( + IN in_table_name VARCHAR(200) +, IN in_column_name VARCHAR(200) +, IN in_column_definition VARCHAR(1000) +) +BEGIN + DECLARE CONTINUE HANDLER FOR 1060 BEGIN END; SET @ddl = CONCAT('ALTER TABLE ', in_table_name); SET @ddl = CONCAT(@ddl, ' ', 'ADD COLUMN') ; SET @ddl = CONCAT(@ddl, ' ', in_column_name); SET @ddl = CONCAT(@ddl, ' ', in_column_definition); PREPARE stmt FROM @ddl; EXECUTE stmt; DEALLOCATE PREPARE stmt; END; + + +-- Add foreign key procedure to link volumes to passphrase table +DROP PROCEDURE IF EXISTS `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY`; +CREATE PROCEDURE `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY` ( + IN in_table_name VARCHAR(200), + IN in_foreign_table_name VARCHAR(200), + IN in_foreign_column_name VARCHAR(200) +) +BEGIN + DECLARE CONTINUE HANDLER FOR 1005 BEGIN END; SET @ddl = CONCAT('ALTER TABLE ', in_table_name); SET @ddl = CONCAT(@ddl, ' ', ' ADD CONSTRAINT '); SET @ddl = CONCAT(@ddl, 'fk_', in_foreign_table_name, '_', in_foreign_column_name); SET @ddl = CONCAT(@ddl, ' FOREIGN KEY (', in_foreign_table_name, '_', in_foreign_column_name, ')'); SET @ddl = CONCAT(@ddl, ' REFERENCES ', in_foreign_table_name, '(', in_foreign_column_name, ')'); PREPARE stmt FROM @ddl; EXECUTE stmt; DEALLOCATE PREPARE stmt; END; + +-- Add passphrase table +CREATE TABLE IF NOT EXISTS `cloud`.`passphrase` ( + `id` bigint unsigned NOT NULL auto_increment, + `passphrase` varchar(64) DEFAULT NULL, + PRIMARY KEY (`id`) +) ENGINE=InnoDB DEFAULT CHARSET=utf8; + +-- Add passphrase column to volumes table +CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.volumes', 'passphrase_id', 'bigint unsigned DEFAULT NULL COMMENT ''encryption passphrase id'' '); +CALL `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY`('cloud.volumes', 'passphrase', 'id'); +CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.volumes', 'encrypt_format', 'varchar(64) DEFAULT NULL COMMENT ''encryption format'' 
'); + +-- Add encrypt column to disk_offering +CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.disk_offering', 'encrypt', 'tinyint(1) DEFAULT 0 COMMENT ''volume encrypt requested'' '); + +-- add encryption support to disk offering view +DROP VIEW IF EXISTS `cloud`.`disk_offering_view`; +CREATE VIEW `cloud`.`disk_offering_view` AS +SELECT + `disk_offering`.`id` AS `id`, + `disk_offering`.`uuid` AS `uuid`, + `disk_offering`.`name` AS `name`, + `disk_offering`.`display_text` AS `display_text`, + `disk_offering`.`provisioning_type` AS `provisioning_type`, + `disk_offering`.`disk_size` AS `disk_size`, + `disk_offering`.`min_iops` AS `min_iops`, + `disk_offering`.`max_iops` AS `max_iops`, + `disk_offering`.`created` AS `created`, + `disk_offering`.`tags` AS `tags`, + `disk_offering`.`customized` AS `customized`, + `disk_offering`.`customized_iops` AS `customized_iops`, + `disk_offering`.`removed` AS `removed`, + `disk_offering`.`use_local_storage` AS `use_local_storage`, + `disk_offering`.`hv_ss_reserve` AS `hv_ss_reserve`, + `disk_offering`.`bytes_read_rate` AS `bytes_read_rate`, + `disk_offering`.`bytes_read_rate_max` AS `bytes_read_rate_max`, + `disk_offering`.`bytes_read_rate_max_length` AS `bytes_read_rate_max_length`, + `disk_offering`.`bytes_write_rate` AS `bytes_write_rate`, + `disk_offering`.`bytes_write_rate_max` AS `bytes_write_rate_max`, + `disk_offering`.`bytes_write_rate_max_length` AS `bytes_write_rate_max_length`, + `disk_offering`.`iops_read_rate` AS `iops_read_rate`, + `disk_offering`.`iops_read_rate_max` AS `iops_read_rate_max`, + `disk_offering`.`iops_read_rate_max_length` AS `iops_read_rate_max_length`, + `disk_offering`.`iops_write_rate` AS `iops_write_rate`, + `disk_offering`.`iops_write_rate_max` AS `iops_write_rate_max`, + `disk_offering`.`iops_write_rate_max_length` AS `iops_write_rate_max_length`, + `disk_offering`.`cache_mode` AS `cache_mode`, + `disk_offering`.`sort_key` AS `sort_key`, + `disk_offering`.`compute_only` AS `compute_only`, + 
`disk_offering`.`display_offering` AS `display_offering`, + `disk_offering`.`state` AS `state`, + `disk_offering`.`disk_size_strictness` AS `disk_size_strictness`, + `vsphere_storage_policy`.`value` AS `vsphere_storage_policy`, + `disk_offering`.`encrypt` AS `encrypt`, + GROUP_CONCAT(DISTINCT(domain.id)) AS domain_id, + GROUP_CONCAT(DISTINCT(domain.uuid)) AS domain_uuid, + GROUP_CONCAT(DISTINCT(domain.name)) AS domain_name, + GROUP_CONCAT(DISTINCT(domain.path)) AS domain_path, + GROUP_CONCAT(DISTINCT(zone.id)) AS zone_id, + GROUP_CONCAT(DISTINCT(zone.uuid)) AS zone_uuid, + GROUP_CONCAT(DISTINCT(zone.name)) AS zone_name +FROM + `cloud`.`disk_offering` + LEFT JOIN + `cloud`.`disk_offering_details` AS `domain_details` ON `domain_details`.`offering_id` = `disk_offering`.`id` AND `domain_details`.`name`='domainid' + LEFT JOIN + `cloud`.`domain` AS `domain` ON FIND_IN_SET(`domain`.`id`, `domain_details`.`value`) + LEFT JOIN + `cloud`.`disk_offering_details` AS `zone_details` ON `zone_details`.`offering_id` = `disk_offering`.`id` AND `zone_details`.`name`='zoneid' + LEFT JOIN + `cloud`.`data_center` AS `zone` ON FIND_IN_SET(`zone`.`id`, `zone_details`.`value`) + LEFT JOIN + `cloud`.`disk_offering_details` AS `vsphere_storage_policy` ON `vsphere_storage_policy`.`offering_id` = `disk_offering`.`id` + AND `vsphere_storage_policy`.`name` = 'storagepolicy' +WHERE + `disk_offering`.`state`='Active' +GROUP BY + `disk_offering`.`id`; + +-- add encryption support to service offering view +DROP VIEW IF EXISTS `cloud`.`service_offering_view`; +CREATE VIEW `cloud`.`service_offering_view` AS +SELECT + `service_offering`.`id` AS `id`, + `service_offering`.`uuid` AS `uuid`, + `service_offering`.`name` AS `name`, + `service_offering`.`display_text` AS `display_text`, + `disk_offering`.`provisioning_type` AS `provisioning_type`, + `service_offering`.`created` AS `created`, + `disk_offering`.`tags` AS `tags`, + `service_offering`.`removed` AS `removed`, + 
`disk_offering`.`use_local_storage` AS `use_local_storage`, + `service_offering`.`system_use` AS `system_use`, + `disk_offering`.`id` AS `disk_offering_id`, + `disk_offering`.`name` AS `disk_offering_name`, + `disk_offering`.`uuid` AS `disk_offering_uuid`, + `disk_offering`.`display_text` AS `disk_offering_display_text`, + `disk_offering`.`customized_iops` AS `customized_iops`, + `disk_offering`.`min_iops` AS `min_iops`, + `disk_offering`.`max_iops` AS `max_iops`, + `disk_offering`.`hv_ss_reserve` AS `hv_ss_reserve`, + `disk_offering`.`bytes_read_rate` AS `bytes_read_rate`, + `disk_offering`.`bytes_read_rate_max` AS `bytes_read_rate_max`, + `disk_offering`.`bytes_read_rate_max_length` AS `bytes_read_rate_max_length`, + `disk_offering`.`bytes_write_rate` AS `bytes_write_rate`, + `disk_offering`.`bytes_write_rate_max` AS `bytes_write_rate_max`, + `disk_offering`.`bytes_write_rate_max_length` AS `bytes_write_rate_max_length`, + `disk_offering`.`iops_read_rate` AS `iops_read_rate`, + `disk_offering`.`iops_read_rate_max` AS `iops_read_rate_max`, + `disk_offering`.`iops_read_rate_max_length` AS `iops_read_rate_max_length`, + `disk_offering`.`iops_write_rate` AS `iops_write_rate`, + `disk_offering`.`iops_write_rate_max` AS `iops_write_rate_max`, + `disk_offering`.`iops_write_rate_max_length` AS `iops_write_rate_max_length`, + `disk_offering`.`cache_mode` AS `cache_mode`, + `disk_offering`.`disk_size` AS `root_disk_size`, + `disk_offering`.`encrypt` AS `encrypt_root`, + `service_offering`.`cpu` AS `cpu`, + `service_offering`.`speed` AS `speed`, + `service_offering`.`ram_size` AS `ram_size`, + `service_offering`.`nw_rate` AS `nw_rate`, + `service_offering`.`mc_rate` AS `mc_rate`, + `service_offering`.`ha_enabled` AS `ha_enabled`, + `service_offering`.`limit_cpu_use` AS `limit_cpu_use`, + `service_offering`.`host_tag` AS `host_tag`, + `service_offering`.`default_use` AS `default_use`, + `service_offering`.`vm_type` AS `vm_type`, + `service_offering`.`sort_key` AS `sort_key`, 
+ `service_offering`.`is_volatile` AS `is_volatile`, + `service_offering`.`deployment_planner` AS `deployment_planner`, + `service_offering`.`dynamic_scaling_enabled` AS `dynamic_scaling_enabled`, + `service_offering`.`disk_offering_strictness` AS `disk_offering_strictness`, + `vsphere_storage_policy`.`value` AS `vsphere_storage_policy`, + GROUP_CONCAT(DISTINCT(domain.id)) AS domain_id, + GROUP_CONCAT(DISTINCT(domain.uuid)) AS domain_uuid, + GROUP_CONCAT(DISTINCT(domain.name)) AS domain_name, + GROUP_CONCAT(DISTINCT(domain.path)) AS domain_path, + GROUP_CONCAT(DISTINCT(zone.id)) AS zone_id, + GROUP_CONCAT(DISTINCT(zone.uuid)) AS zone_uuid, + GROUP_CONCAT(DISTINCT(zone.name)) AS zone_name, + IFNULL(`min_compute_details`.`value`, `cpu`) AS min_cpu, + IFNULL(`max_compute_details`.`value`, `cpu`) AS max_cpu, + IFNULL(`min_memory_details`.`value`, `ram_size`) AS min_memory, + IFNULL(`max_memory_details`.`value`, `ram_size`) AS max_memory +FROM + `cloud`.`service_offering` + INNER JOIN + `cloud`.`disk_offering_view` AS `disk_offering` ON service_offering.disk_offering_id = disk_offering.id + LEFT JOIN + `cloud`.`service_offering_details` AS `domain_details` ON `domain_details`.`service_offering_id` = `service_offering`.`id` AND `domain_details`.`name`='domainid' + LEFT JOIN + `cloud`.`domain` AS `domain` ON FIND_IN_SET(`domain`.`id`, `domain_details`.`value`) + LEFT JOIN + `cloud`.`service_offering_details` AS `zone_details` ON `zone_details`.`service_offering_id` = `service_offering`.`id` AND `zone_details`.`name`='zoneid' + LEFT JOIN + `cloud`.`data_center` AS `zone` ON FIND_IN_SET(`zone`.`id`, `zone_details`.`value`) + LEFT JOIN + `cloud`.`service_offering_details` AS `min_compute_details` ON `min_compute_details`.`service_offering_id` = `service_offering`.`id` + AND `min_compute_details`.`name` = 'mincpunumber' + LEFT JOIN + `cloud`.`service_offering_details` AS `max_compute_details` ON `max_compute_details`.`service_offering_id` = `service_offering`.`id` + AND 
`max_compute_details`.`name` = 'maxcpunumber' + LEFT JOIN + `cloud`.`service_offering_details` AS `min_memory_details` ON `min_memory_details`.`service_offering_id` = `service_offering`.`id` + AND `min_memory_details`.`name` = 'minmemory' + LEFT JOIN + `cloud`.`service_offering_details` AS `max_memory_details` ON `max_memory_details`.`service_offering_id` = `service_offering`.`id` + AND `max_memory_details`.`name` = 'maxmemory' + LEFT JOIN + `cloud`.`service_offering_details` AS `vsphere_storage_policy` ON `vsphere_storage_policy`.`service_offering_id` = `service_offering`.`id` + AND `vsphere_storage_policy`.`name` = 'storagepolicy' +WHERE + `service_offering`.`state`='Active' +GROUP BY + `service_offering`.`id`; + -- Add cidr_list column to load_balancing_rules ALTER TABLE `cloud`.`load_balancing_rules` ADD cidr_list VARCHAR(4096); diff --git a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java index 2639968f261..6056defcc9b 100644 --- a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java +++ b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java @@ -80,6 +80,9 @@ import com.cloud.vm.VirtualMachineManager; @Component public class AncientDataMotionStrategy implements DataMotionStrategy { private static final Logger s_logger = Logger.getLogger(AncientDataMotionStrategy.class); + private static final String NO_REMOTE_ENDPOINT_SSVM = "No remote endpoint to send command, check if host or ssvm is down?"; + private static final String NO_REMOTE_ENDPOINT_WITH_ENCRYPTION = "No remote endpoint to send command, unable to find a valid endpoint. 
Requires encryption support: %s"; + @Inject EndPointSelector selector; @Inject @@ -170,9 +173,8 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { VirtualMachineManager.ExecuteInSequence.value()); EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcForCopy, destData); if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; - s_logger.error(errMsg); - answer = new Answer(cmd, false, errMsg); + s_logger.error(NO_REMOTE_ENDPOINT_SSVM); + answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM); } else { answer = ep.sendMessage(cmd); } @@ -294,9 +296,8 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { Answer answer = null; if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; - s_logger.error(errMsg); - answer = new Answer(cmd, false, errMsg); + s_logger.error(NO_REMOTE_ENDPOINT_SSVM); + answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM); } else { answer = ep.sendMessage(cmd); } @@ -316,12 +317,11 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { protected Answer cloneVolume(DataObject template, DataObject volume) { CopyCommand cmd = new CopyCommand(template.getTO(), addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest(volume.getTO()), 0, VirtualMachineManager.ExecuteInSequence.value()); try { - EndPoint ep = selector.select(volume.getDataStore()); + EndPoint ep = selector.select(volume, anyVolumeRequiresEncryption(volume)); Answer answer = null; if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; - s_logger.error(errMsg); - answer = new Answer(cmd, false, errMsg); + s_logger.error(NO_REMOTE_ENDPOINT_SSVM); + answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM); } else { answer = ep.sendMessage(cmd); } @@ -351,14 +351,15 @@ public class AncientDataMotionStrategy implements 
DataMotionStrategy { if (srcData instanceof VolumeInfo && ((VolumeInfo)srcData).isDirectDownload()) { bypassSecondaryStorage = true; } + boolean encryptionRequired = anyVolumeRequiresEncryption(srcData, destData); if (cacheStore == null) { if (bypassSecondaryStorage) { CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), _copyvolumewait, VirtualMachineManager.ExecuteInSequence.value()); - EndPoint ep = selector.select(srcData, destData); + EndPoint ep = selector.select(srcData, destData, encryptionRequired); Answer answer = null; if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; + String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired); s_logger.error(errMsg); answer = new Answer(cmd, false, errMsg); } else { @@ -395,9 +396,9 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { objOnImageStore.processEvent(Event.CopyingRequested); CopyCommand cmd = new CopyCommand(objOnImageStore.getTO(), addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest(destData.getTO()), _copyvolumewait, VirtualMachineManager.ExecuteInSequence.value()); - EndPoint ep = selector.select(objOnImageStore, destData); + EndPoint ep = selector.select(objOnImageStore, destData, encryptionRequired); if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; + String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired); s_logger.error(errMsg); answer = new Answer(cmd, false, errMsg); } else { @@ -427,10 +428,10 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { } else { DataObject cacheData = cacheMgr.createCacheObject(srcData, destScope); CopyCommand cmd = new CopyCommand(cacheData.getTO(), destData.getTO(), _copyvolumewait, VirtualMachineManager.ExecuteInSequence.value()); - EndPoint ep = selector.select(cacheData, destData); + EndPoint ep = selector.select(cacheData, destData, 
encryptionRequired); Answer answer = null; if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; + String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired); s_logger.error(errMsg); answer = new Answer(cmd, false, errMsg); } else { @@ -457,10 +458,12 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { command.setContextParam(DiskTO.PROTOCOL_TYPE, Storage.StoragePoolType.DatastoreCluster.toString()); } + boolean encryptionRequired = anyVolumeRequiresEncryption(srcData, destData); + EndPoint ep = selector.select(srcData, StorageAction.MIGRATEVOLUME); Answer answer = null; if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; + String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired); s_logger.error(errMsg); answer = new Answer(command, false, errMsg); } else { @@ -556,9 +559,8 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { CopyCommand cmd = new CopyCommand(srcData.getTO(), addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest(destData.getTO()), _createprivatetemplatefromsnapshotwait, VirtualMachineManager.ExecuteInSequence.value()); Answer answer = null; if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; - s_logger.error(errMsg); - answer = new Answer(cmd, false, errMsg); + s_logger.error(NO_REMOTE_ENDPOINT_SSVM); + answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM); } else { answer = ep.sendMessage(cmd); } @@ -584,6 +586,8 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { Map options = new HashMap(); options.put("fullSnapshot", fullSnapshot.toString()); options.put(BackupSnapshotAfterTakingSnapshot.key(), String.valueOf(BackupSnapshotAfterTakingSnapshot.value())); + boolean encryptionRequired = anyVolumeRequiresEncryption(srcData, destData); + Answer answer = null; try { if 
(needCacheStorage(srcData, destData)) { @@ -593,11 +597,10 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { CopyCommand cmd = new CopyCommand(srcData.getTO(), addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest(destData.getTO()), _backupsnapshotwait, VirtualMachineManager.ExecuteInSequence.value()); cmd.setCacheTO(cacheData.getTO()); cmd.setOptions(options); - EndPoint ep = selector.select(srcData, destData); + EndPoint ep = selector.select(srcData, destData, encryptionRequired); if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; - s_logger.error(errMsg); - answer = new Answer(cmd, false, errMsg); + s_logger.error(NO_REMOTE_ENDPOINT_SSVM); + answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM); } else { answer = ep.sendMessage(cmd); } @@ -605,11 +608,10 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { addFullCloneAndDiskprovisiongStrictnessFlagOnVMwareDest(destData.getTO()); CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), _backupsnapshotwait, VirtualMachineManager.ExecuteInSequence.value()); cmd.setOptions(options); - EndPoint ep = selector.select(srcData, destData, StorageAction.BACKUPSNAPSHOT); + EndPoint ep = selector.select(srcData, destData, StorageAction.BACKUPSNAPSHOT, encryptionRequired); if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; - s_logger.error(errMsg); - answer = new Answer(cmd, false, errMsg); + s_logger.error(NO_REMOTE_ENDPOINT_SSVM); + answer = new Answer(cmd, false, NO_REMOTE_ENDPOINT_SSVM); } else { answer = ep.sendMessage(cmd); } @@ -636,4 +638,19 @@ public class AncientDataMotionStrategy implements DataMotionStrategy { result.setResult("Unsupported operation requested for copying data."); callback.complete(result); } + + /** + * Does any object require encryption support? + */ + private boolean anyVolumeRequiresEncryption(DataObject ... 
objects) { + for (DataObject o : objects) { + // trips a code-smell check for returning true from two branches, but this is more readable than combining all tests into one statement + if (o instanceof VolumeInfo && ((VolumeInfo) o).getPassphraseId() != null) { + return true; + } else if (o instanceof SnapshotInfo && ((SnapshotInfo) o).getBaseVolume().getPassphraseId() != null) { + return true; + } + } + return false; + } } diff --git a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/DataMotionServiceImpl.java b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/DataMotionServiceImpl.java index 6a352a300c3..c8edb7b8abc 100644 --- a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/DataMotionServiceImpl.java +++ b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/DataMotionServiceImpl.java @@ -33,6 +33,7 @@ import org.apache.cloudstack.engine.subsystem.api.storage.DataStore; import org.apache.cloudstack.engine.subsystem.api.storage.StorageStrategyFactory; import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo; import org.apache.cloudstack.framework.async.AsyncCompletionCallback; +import org.apache.cloudstack.secret.dao.PassphraseDao; import org.apache.cloudstack.storage.command.CopyCmdAnswer; import org.apache.commons.lang3.StringUtils; import org.apache.log4j.Logger; @@ -53,6 +54,8 @@ public class DataMotionServiceImpl implements DataMotionService { StorageStrategyFactory storageStrategyFactory; @Inject VolumeDao volDao; + @Inject + PassphraseDao passphraseDao; @Override public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) { @@ -98,7 +101,14 @@ public class DataMotionServiceImpl implements DataMotionService { volDao.update(sourceVO.getId(), sourceVO); destinationVO.setState(Volume.State.Expunged); destinationVO.setRemoved(new Date()); + Long passphraseId = destinationVO.getPassphraseId(); + 
destinationVO.setPassphraseId(null); volDao.update(destinationVO.getId(), destinationVO); + + if (passphraseId != null) { + passphraseDao.remove(passphraseId); + } + } @Override diff --git a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java index b48ae6d22dc..64792a61018 100644 --- a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java +++ b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java @@ -1736,15 +1736,15 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy { protected MigrationOptions createLinkedCloneMigrationOptions(VolumeInfo srcVolumeInfo, VolumeInfo destVolumeInfo, String srcVolumeBackingFile, String srcPoolUuid, Storage.StoragePoolType srcPoolType) { VMTemplateStoragePoolVO ref = templatePoolDao.findByPoolTemplate(destVolumeInfo.getPoolId(), srcVolumeInfo.getTemplateId(), null); boolean updateBackingFileReference = ref == null; - String backingFile = ref != null ? ref.getInstallPath() : srcVolumeBackingFile; - return new MigrationOptions(srcPoolUuid, srcPoolType, backingFile, updateBackingFileReference); + String backingFile = !updateBackingFileReference ? 
ref.getInstallPath() : srcVolumeBackingFile; + return new MigrationOptions(srcPoolUuid, srcPoolType, backingFile, updateBackingFileReference, srcVolumeInfo.getDataStore().getScope().getScopeType()); } /** * Return expected MigrationOptions for a full clone volume live storage migration */ protected MigrationOptions createFullCloneMigrationOptions(VolumeInfo srcVolumeInfo, VirtualMachineTO vmTO, Host srcHost, String srcPoolUuid, Storage.StoragePoolType srcPoolType) { - return new MigrationOptions(srcPoolUuid, srcPoolType, srcVolumeInfo.getPath()); + return new MigrationOptions(srcPoolUuid, srcPoolType, srcVolumeInfo.getPath(), srcVolumeInfo.getDataStore().getScope().getScopeType()); } /** @@ -1874,6 +1874,7 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy { migrateDiskInfo = configureMigrateDiskInfo(srcVolumeInfo, destPath); migrateDiskInfo.setSourceDiskOnStorageFileSystem(isStoragePoolTypeOfFile(sourceStoragePool)); migrateDiskInfoList.add(migrateDiskInfo); + prepareDiskWithSecretConsumerDetail(vmTO, srcVolumeInfo, destVolumeInfo.getPath()); } migrateStorage.put(srcVolumeInfo.getPath(), migrateDiskInfo); @@ -2123,6 +2124,11 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy { newVol.setPoolId(storagePoolVO.getId()); newVol.setLastPoolId(lastPoolId); + if (volume.getPassphraseId() != null) { + newVol.setPassphraseId(volume.getPassphraseId()); + newVol.setEncryptFormat(volume.getEncryptFormat()); + } + return _volumeDao.persist(newVol); } @@ -2206,6 +2212,22 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy { } } + /** + * Include some destination volume info in vmTO, required for some PrepareForMigrationCommand processing + * + */ + protected void prepareDiskWithSecretConsumerDetail(VirtualMachineTO vmTO, VolumeInfo srcVolume, String destPath) { + if (vmTO.getDisks() != null) { + LOGGER.debug(String.format("Preparing VM TO '%s' disks with migration data", vmTO)); + 
Arrays.stream(vmTO.getDisks()).filter(diskTO -> diskTO.getData().getId() == srcVolume.getId()).forEach( diskTO -> { + if (diskTO.getDetails() == null) { + diskTO.setDetails(new HashMap<>()); + } + diskTO.getDetails().put(DiskTO.SECRET_CONSUMER_DETAIL, destPath); + }); + } + } + /** * At a high level: The source storage cannot be managed and * the destination storages can be all managed or all not managed, not mixed. diff --git a/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java b/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java index 5a669514d6a..919571999e5 100644 --- a/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java +++ b/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java @@ -262,7 +262,6 @@ public abstract class AbstractStoragePoolAllocator extends AdapterBase implement } protected boolean filter(ExcludeList avoid, StoragePool pool, DiskProfile dskCh, DeploymentPlan plan) { - if (s_logger.isDebugEnabled()) { s_logger.debug("Checking if storage pool is suitable, name: " + pool.getName() + " ,poolId: " + pool.getId()); } @@ -273,6 +272,13 @@ public abstract class AbstractStoragePoolAllocator extends AdapterBase implement return false; } + if (dskCh.requiresEncryption() && !pool.getPoolType().supportsEncryption()) { + if (s_logger.isDebugEnabled()) { + s_logger.debug(String.format("Storage pool type '%s' doesn't support encryption required for volume, skipping this pool", pool.getPoolType())); + } + return false; + } + Long clusterId = pool.getClusterId(); if (clusterId != null) { ClusterVO cluster = clusterDao.findById(clusterId); diff --git a/engine/storage/src/main/java/org/apache/cloudstack/storage/endpoint/DefaultEndPointSelector.java b/engine/storage/src/main/java/org/apache/cloudstack/storage/endpoint/DefaultEndPointSelector.java index 
30d24f9acc1..1b8fb4cc3d7 100644 --- a/engine/storage/src/main/java/org/apache/cloudstack/storage/endpoint/DefaultEndPointSelector.java +++ b/engine/storage/src/main/java/org/apache/cloudstack/storage/endpoint/DefaultEndPointSelector.java @@ -65,6 +65,8 @@ import com.cloud.utils.db.TransactionLegacy; import com.cloud.utils.exception.CloudRuntimeException; import com.cloud.vm.VirtualMachine; +import static com.cloud.host.Host.HOST_VOLUME_ENCRYPTION; + @Component public class DefaultEndPointSelector implements EndPointSelector { private static final Logger s_logger = Logger.getLogger(DefaultEndPointSelector.class); @@ -72,11 +74,14 @@ public class DefaultEndPointSelector implements EndPointSelector { private HostDao hostDao; @Inject private DedicatedResourceDao dedicatedResourceDao; + + private static final String VOL_ENCRYPT_COLUMN_NAME = "volume_encryption_support"; private final String findOneHostOnPrimaryStorage = "select t.id from " - + "(select h.id, cd.value " + + "(select h.id, cd.value, hd.value as " + VOL_ENCRYPT_COLUMN_NAME + " " + "from host h join storage_pool_host_ref s on h.id = s.host_id " + "join cluster c on c.id=h.cluster_id " + "left join cluster_details cd on c.id=cd.cluster_id and cd.name='" + CapacityManager.StorageOperationsExcludeCluster.key() + "' " + + "left join host_details hd on h.id=hd.host_id and hd.name='" + HOST_VOLUME_ENCRYPTION + "' " + "where h.status = 'Up' and h.type = 'Routing' and h.resource_state = 'Enabled' and s.pool_id = ? "; private String findOneHypervisorHostInScopeByType = "select h.id from host h where h.status = 'Up' and h.hypervisor_type = ? 
"; @@ -118,8 +123,12 @@ public class DefaultEndPointSelector implements EndPointSelector { } } - @DB protected EndPoint findEndPointInScope(Scope scope, String sqlBase, Long poolId) { + return findEndPointInScope(scope, sqlBase, poolId, false); + } + + @DB + protected EndPoint findEndPointInScope(Scope scope, String sqlBase, Long poolId, boolean volumeEncryptionSupportRequired) { StringBuilder sbuilder = new StringBuilder(); sbuilder.append(sqlBase); @@ -142,8 +151,13 @@ public class DefaultEndPointSelector implements EndPointSelector { dedicatedHosts = dedicatedResourceDao.listAllHosts(); } - // TODO: order by rand() is slow if there are lot of hosts sbuilder.append(") t where t.value<>'true' or t.value is null"); //Added for exclude cluster's subquery + + if (volumeEncryptionSupportRequired) { + sbuilder.append(String.format(" and t.%s='true'", VOL_ENCRYPT_COLUMN_NAME)); + } + + // TODO: order by rand() is slow if there are lot of hosts sbuilder.append(" ORDER by "); if (dedicatedHosts.size() > 0) { moveDedicatedHostsToLowerPriority(sbuilder, dedicatedHosts); @@ -208,7 +222,7 @@ public class DefaultEndPointSelector implements EndPointSelector { } } - protected EndPoint findEndPointForImageMove(DataStore srcStore, DataStore destStore) { + protected EndPoint findEndPointForImageMove(DataStore srcStore, DataStore destStore, boolean volumeEncryptionSupportRequired) { // find any xenserver/kvm host in the scope Scope srcScope = srcStore.getScope(); Scope destScope = destStore.getScope(); @@ -233,17 +247,22 @@ public class DefaultEndPointSelector implements EndPointSelector { poolId = destStore.getId(); } } - return findEndPointInScope(selectedScope, findOneHostOnPrimaryStorage, poolId); + return findEndPointInScope(selectedScope, findOneHostOnPrimaryStorage, poolId, volumeEncryptionSupportRequired); } @Override public EndPoint select(DataObject srcData, DataObject destData) { + return select( srcData, destData, false); + } + + @Override + public EndPoint 
select(DataObject srcData, DataObject destData, boolean volumeEncryptionSupportRequired) { DataStore srcStore = srcData.getDataStore(); DataStore destStore = destData.getDataStore(); if (moveBetweenPrimaryImage(srcStore, destStore)) { - return findEndPointForImageMove(srcStore, destStore); + return findEndPointForImageMove(srcStore, destStore, volumeEncryptionSupportRequired); } else if (moveBetweenPrimaryDirectDownload(srcStore, destStore)) { - return findEndPointForImageMove(srcStore, destStore); + return findEndPointForImageMove(srcStore, destStore, volumeEncryptionSupportRequired); } else if (moveBetweenCacheAndImage(srcStore, destStore)) { // pick ssvm based on image cache dc DataStore selectedStore = null; @@ -274,6 +293,11 @@ public class DefaultEndPointSelector implements EndPointSelector { @Override public EndPoint select(DataObject srcData, DataObject destData, StorageAction action) { + return select(srcData, destData, action, false); + } + + @Override + public EndPoint select(DataObject srcData, DataObject destData, StorageAction action, boolean encryptionRequired) { s_logger.error("IR24 select BACKUPSNAPSHOT from primary to secondary " + srcData.getId() + " dest=" + destData.getId()); if (action == StorageAction.BACKUPSNAPSHOT && srcData.getDataStore().getRole() == DataStoreRole.Primary) { SnapshotInfo srcSnapshot = (SnapshotInfo)srcData; @@ -293,7 +317,7 @@ public class DefaultEndPointSelector implements EndPointSelector { } } } - return select(srcData, destData); + return select(srcData, destData, encryptionRequired); } protected EndPoint findEndpointForPrimaryStorage(DataStore store) { @@ -350,6 +374,15 @@ public class DefaultEndPointSelector implements EndPointSelector { return sc.list(); } + @Override + public EndPoint select(DataObject object, boolean encryptionSupportRequired) { + DataStore store = object.getDataStore(); + if (store.getRole() == DataStoreRole.Primary) { + return findEndPointInScope(store.getScope(), findOneHostOnPrimaryStorage, 
store.getId(), encryptionSupportRequired); + } + throw new CloudRuntimeException(String.format("Storage role %s doesn't support encryption", store.getRole())); + } + @Override public EndPoint select(DataObject object) { DataStore store = object.getDataStore(); @@ -415,6 +448,11 @@ public class DefaultEndPointSelector implements EndPointSelector { @Override public EndPoint select(DataObject object, StorageAction action) { + return select(object, action, false); + } + + @Override + public EndPoint select(DataObject object, StorageAction action, boolean encryptionRequired) { if (action == StorageAction.TAKESNAPSHOT) { SnapshotInfo snapshotInfo = (SnapshotInfo)object; if (snapshotInfo.getHypervisorType() == Hypervisor.HypervisorType.KVM) { @@ -446,7 +484,7 @@ public class DefaultEndPointSelector implements EndPointSelector { } } } - return select(object); + return select(object, encryptionRequired); } @Override diff --git a/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeObject.java b/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeObject.java index 8c23d61f697..5ebee87acd4 100644 --- a/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeObject.java +++ b/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeObject.java @@ -23,6 +23,11 @@ import javax.inject.Inject; import com.cloud.configuration.Resource.ResourceType; import com.cloud.dc.VsphereStoragePolicyVO; import com.cloud.dc.dao.VsphereStoragePolicyDao; +import com.cloud.utils.db.Transaction; +import com.cloud.utils.db.TransactionCallbackNoReturn; +import com.cloud.utils.db.TransactionStatus; +import org.apache.cloudstack.secret.dao.PassphraseDao; +import org.apache.cloudstack.secret.PassphraseVO; import com.cloud.service.dao.ServiceOfferingDetailsDao; import com.cloud.storage.MigrationOptions; import com.cloud.storage.VMTemplateVO; @@ -105,6 +110,8 @@ public class VolumeObject implements VolumeInfo 
{ DiskOfferingDetailsDao diskOfferingDetailsDao; @Inject VsphereStoragePolicyDao vsphereStoragePolicyDao; + @Inject + PassphraseDao passphraseDao; private Object payload; private MigrationOptions migrationOptions; @@ -664,11 +671,13 @@ public class VolumeObject implements VolumeInfo { } protected void updateVolumeInfo(VolumeObjectTO newVolume, VolumeVO volumeVo, boolean setVolumeSize, boolean setFormat) { - String previousValues = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(volumeVo, "path", "size", "format", "poolId"); + String previousValues = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(volumeVo, "path", "size", "format", "encryptFormat", "poolId"); volumeVo.setPath(newVolume.getPath()); Long newVolumeSize = newVolume.getSize(); + volumeVo.setEncryptFormat(newVolume.getEncryptFormat()); + if (newVolumeSize != null && setVolumeSize) { volumeVo.setSize(newVolumeSize); } @@ -678,7 +687,7 @@ public class VolumeObject implements VolumeInfo { volumeVo.setPoolId(getDataStore().getId()); volumeDao.update(volumeVo.getId(), volumeVo); - String newValues = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(volumeVo, "path", "size", "format", "poolId"); + String newValues = ReflectionToStringBuilderUtils.reflectOnlySelectedFields(volumeVo, "path", "size", "format", "encryptFormat", "poolId"); s_logger.debug(String.format("Updated %s from %s to %s ", volumeVo.getVolumeDescription(), previousValues, newValues)); } @@ -864,4 +873,61 @@ public class VolumeObject implements VolumeInfo { public void setExternalUuid(String externalUuid) { volumeVO.setExternalUuid(externalUuid); } + + @Override + public Long getPassphraseId() { + return volumeVO.getPassphraseId(); + } + + @Override + public void setPassphraseId(Long id) { + volumeVO.setPassphraseId(id); + } + + /** + * Removes passphrase reference from underlying volume. Also removes the associated passphrase entry if it is the last user. 
+ */ + public void deletePassphrase() { + Transaction.execute(new TransactionCallbackNoReturn() { + @Override + public void doInTransactionWithoutResult(TransactionStatus status) { + Long passphraseId = volumeVO.getPassphraseId(); + if (passphraseId != null) { + volumeVO.setPassphraseId(null); + volumeDao.persist(volumeVO); + + s_logger.debug(String.format("Checking to see if we can delete passphrase id %s", passphraseId)); + List volumes = volumeDao.listVolumesByPassphraseId(passphraseId); + + if (volumes != null && !volumes.isEmpty()) { + s_logger.debug("Other volumes use this passphrase, skipping deletion"); + return; + } + + s_logger.debug(String.format("Deleting passphrase %s", passphraseId)); + passphraseDao.remove(passphraseId); + } + } + }); + } + + /** + * Looks up passphrase from underlying volume. + * @return passphrase as bytes + */ + public byte[] getPassphrase() { + PassphraseVO passphrase = passphraseDao.findById(volumeVO.getPassphraseId()); + if (passphrase != null) { + return passphrase.getPassphrase(); + } + return new byte[0]; + } + + @Override + public String getEncryptFormat() { return volumeVO.getEncryptFormat(); } + + @Override + public void setEncryptFormat(String encryptFormat) { + volumeVO.setEncryptFormat(encryptFormat); + } } diff --git a/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java b/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java index f74ef7a3877..cd7a840c86f 100644 --- a/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java +++ b/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java @@ -28,6 +28,7 @@ import java.util.Random; import javax.inject.Inject; +import org.apache.cloudstack.secret.dao.PassphraseDao; import com.cloud.storage.VMTemplateVO; import com.cloud.storage.dao.VMTemplateDao; import org.apache.cloudstack.annotation.AnnotationService; @@ 
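The `deletePassphrase()` transaction above is effectively reference counting: clear this volume's pointer first, then delete the passphrase row only if no other volume still references it. A small sketch of that ordering with in-memory maps standing in for the DAOs (names hypothetical):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class PassphraseCleanup {
    // Stand-ins for the volumes table (volumeId -> passphraseId) and the
    // passphrase table; these are illustrative, not CloudStack DAOs.
    final Map<Long, Long> volumePassphrase = new HashMap<>();
    final Set<Long> passphrases = new HashSet<>();

    public void attach(long volumeId, long passphraseId) {
        passphrases.add(passphraseId);
        volumePassphrase.put(volumeId, passphraseId);
    }

    // Mirrors deletePassphrase(): detach this volume's reference first, then
    // remove the passphrase only when it has no remaining users.
    public void deletePassphrase(long volumeId) {
        Long passphraseId = volumePassphrase.put(volumeId, null);
        if (passphraseId == null) {
            return;
        }
        if (volumePassphrase.containsValue(passphraseId)) {
            return; // other volumes still use this passphrase, skip deletion
        }
        passphrases.remove(passphraseId);
    }
}
```

Detaching before checking for other users is what makes the "last user deletes" rule safe inside a single transaction.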
-197,6 +198,8 @@ public class VolumeServiceImpl implements VolumeService { private AnnotationDao annotationDao; @Inject private SnapshotApiService snapshotApiService; + @Inject + private PassphraseDao passphraseDao; private final static String SNAPSHOT_ID = "SNAPSHOT_ID"; @@ -446,6 +449,11 @@ public class VolumeServiceImpl implements VolumeService { try { if (result.isSuccess()) { vo.processEvent(Event.OperationSuccessed); + + if (vo.getPassphraseId() != null) { + vo.deletePassphrase(); + } + if (canVolumeBeRemoved(vo.getId())) { s_logger.info("Volume " + vo.getId() + " is not referred anywhere, remove it from volumes table"); volDao.remove(vo.getId()); diff --git a/packaging/centos7/cloud.spec b/packaging/centos7/cloud.spec index 431dbee9302..a5271ff13bd 100644 --- a/packaging/centos7/cloud.spec +++ b/packaging/centos7/cloud.spec @@ -83,6 +83,7 @@ Requires: ipmitool Requires: %{name}-common = %{_ver} Requires: iptables-services Requires: qemu-img +Requires: haveged Requires: python3-pip Requires: python3-setuptools Group: System Environment/Libraries @@ -117,6 +118,8 @@ Requires: perl Requires: python36-libvirt Requires: qemu-img Requires: qemu-kvm +Requires: cryptsetup +Requires: rng-tools Provides: cloud-agent Group: System Environment/Libraries %description agent @@ -438,6 +441,7 @@ pip3 install %{_datadir}/%{name}-management/setup/wheel/six-1.15.0-py2.py3-none- pip3 install urllib3 /usr/bin/systemctl enable cloudstack-management > /dev/null 2>&1 || true +/usr/bin/systemctl enable --now haveged > /dev/null 2>&1 || true grep -s -q "db.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" grep -s -q "db.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" @@ -495,9 +499,10 @@ if [ ! 
-d %{_sysconfdir}/libvirt/hooks ] ; then fi cp -a ${RPM_BUILD_ROOT}%{_datadir}/%{name}-agent/lib/libvirtqemuhook %{_sysconfdir}/libvirt/hooks/qemu mkdir -m 0755 -p /usr/share/cloudstack-agent/tmp -/sbin/service libvirtd restart -/sbin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true -/sbin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true +/usr/bin/systemctl restart libvirtd +/usr/bin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true +/usr/bin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true +/usr/bin/systemctl enable --now rngd > /dev/null 2>&1 || true # if saved configs from upgrade exist, copy them over if [ -f "%{_sysconfdir}/cloud.rpmsave/agent/agent.properties" ]; then diff --git a/packaging/centos8/cloud.spec b/packaging/centos8/cloud.spec index 893b7b56cd8..3eb9db43cd5 100644 --- a/packaging/centos8/cloud.spec +++ b/packaging/centos8/cloud.spec @@ -78,6 +78,7 @@ Requires: ipmitool Requires: %{name}-common = %{_ver} Requires: iptables-services Requires: qemu-img +Requires: haveged Requires: python3-pip Requires: python3-setuptools Requires: libgcrypt > 1.8.3 @@ -110,6 +111,8 @@ Requires: perl Requires: python3-libvirt Requires: qemu-img Requires: qemu-kvm +Requires: cryptsetup +Requires: rng-tools Requires: libgcrypt > 1.8.3 Provides: cloud-agent Group: System Environment/Libraries @@ -429,6 +432,7 @@ fi pip3 install %{_datadir}/%{name}-management/setup/wheel/six-1.15.0-py2.py3-none-any.whl %{_datadir}/%{name}-management/setup/wheel/setuptools-47.3.1-py3-none-any.whl %{_datadir}/%{name}-management/setup/wheel/protobuf-3.12.2-cp36-cp36m-manylinux1_x86_64.whl %{_datadir}/%{name}-management/setup/wheel/mysql_connector_python-8.0.20-cp36-cp36m-manylinux1_x86_64.whl /usr/bin/systemctl enable cloudstack-management > /dev/null 2>&1 || true +/usr/bin/systemctl enable --now haveged > /dev/null 2>&1 || true grep -s -q "db.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || 
sed -i -e "\$adb.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" grep -s -q "db.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" @@ -486,9 +490,10 @@ if [ ! -d %{_sysconfdir}/libvirt/hooks ] ; then fi cp -a ${RPM_BUILD_ROOT}%{_datadir}/%{name}-agent/lib/libvirtqemuhook %{_sysconfdir}/libvirt/hooks/qemu mkdir -m 0755 -p /usr/share/cloudstack-agent/tmp -/sbin/service libvirtd restart -/sbin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true -/sbin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true +/usr/bin/systemctl restart libvirtd +/usr/bin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true +/usr/bin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true +/usr/bin/systemctl enable --now rngd > /dev/null 2>&1 || true # if saved configs from upgrade exist, copy them over if [ -f "%{_sysconfdir}/cloud.rpmsave/agent/agent.properties" ]; then diff --git a/packaging/suse15/cloud.spec b/packaging/suse15/cloud.spec index 9f2dc378219..c0164e474a4 100644 --- a/packaging/suse15/cloud.spec +++ b/packaging/suse15/cloud.spec @@ -78,6 +78,7 @@ Requires: mkisofs Requires: ipmitool Requires: %{name}-common = %{_ver} Requires: qemu-tools +Requires: haveged Requires: python3-pip Requires: python3-setuptools Requires: libgcrypt20 @@ -111,6 +112,8 @@ Requires: ipset Requires: perl Requires: python3-libvirt-python Requires: qemu-kvm +Requires: cryptsetup +Requires: rng-tools Requires: libgcrypt20 Requires: qemu-tools Provides: cloud-agent @@ -431,6 +434,7 @@ fi pip3 install %{_datadir}/%{name}-management/setup/wheel/six-1.15.0-py2.py3-none-any.whl %{_datadir}/%{name}-management/setup/wheel/setuptools-47.3.1-py3-none-any.whl %{_datadir}/%{name}-management/setup/wheel/protobuf-3.12.2-cp36-cp36m-manylinux1_x86_64.whl 
%{_datadir}/%{name}-management/setup/wheel/mysql_connector_python-8.0.20-cp36-cp36m-manylinux1_x86_64.whl /usr/bin/systemctl enable cloudstack-management > /dev/null 2>&1 || true +/usr/bin/systemctl enable --now haveged > /dev/null 2>&1 || true grep -s -q "db.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.cloud.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" grep -s -q "db.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" || sed -i -e "\$adb.usage.driver=jdbc:mysql" "%{_sysconfdir}/%{name}/management/db.properties" @@ -480,9 +484,10 @@ if [ ! -d %{_sysconfdir}/libvirt/hooks ] ; then fi cp -a ${RPM_BUILD_ROOT}%{_datadir}/%{name}-agent/lib/libvirtqemuhook %{_sysconfdir}/libvirt/hooks/qemu mkdir -m 0755 -p /usr/share/cloudstack-agent/tmp -/sbin/service libvirtd restart -/sbin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true -/sbin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true +/usr/bin/systemctl restart libvirtd +/usr/bin/systemctl enable cloudstack-agent > /dev/null 2>&1 || true +/usr/bin/systemctl enable cloudstack-rolling-maintenance@p > /dev/null 2>&1 || true +/usr/bin/systemctl enable --now rngd > /dev/null 2>&1 || true # if saved configs from upgrade exist, copy them over if [ -f "%{_sysconfdir}/cloud.rpmsave/agent/agent.properties" ]; then diff --git a/plugins/hypervisors/kvm/pom.xml b/plugins/hypervisors/kvm/pom.xml index a80ff11b228..2eb96ae15af 100644 --- a/plugins/hypervisors/kvm/pom.xml +++ b/plugins/hypervisors/kvm/pom.xml @@ -108,10 +108,53 @@ maven-surefire-plugin - **/Qemu*.java + **/QemuImg*.java + + + + skip.libvirt.tests + + + skip.libvirt.tests + true + + + + + + org.apache.maven.plugins + maven-dependency-plugin + + + copy-dependencies + package + + copy-dependencies + + + ${project.build.directory}/dependencies + runtime + + + + + + org.apache.maven.plugins + maven-surefire-plugin + + + **/QemuImg*.java + 
**/LibvirtComputingResourceTest.java + + + + + + + diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java index d75d03d85a4..59eaa6b64d8 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java @@ -50,6 +50,8 @@ import org.apache.cloudstack.storage.to.PrimaryDataStoreTO; import org.apache.cloudstack.storage.to.TemplateObjectTO; import org.apache.cloudstack.storage.to.VolumeObjectTO; import org.apache.cloudstack.utils.bytescale.ByteScaleUtils; +import org.apache.cloudstack.utils.cryptsetup.CryptSetup; + import org.apache.cloudstack.utils.hypervisor.HypervisorUtils; import org.apache.cloudstack.utils.linux.CPUStat; import org.apache.cloudstack.utils.linux.KVMHostInfo; @@ -58,6 +60,7 @@ import org.apache.cloudstack.utils.qemu.QemuImg; import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat; import org.apache.cloudstack.utils.qemu.QemuImgException; import org.apache.cloudstack.utils.qemu.QemuImgFile; +import org.apache.cloudstack.utils.qemu.QemuObject; import org.apache.cloudstack.utils.security.KeyStoreUtils; import org.apache.cloudstack.utils.security.ParserUtils; import org.apache.commons.collections.MapUtils; @@ -67,6 +70,7 @@ import org.apache.commons.lang.BooleanUtils; import org.apache.commons.lang.math.NumberUtils; import org.apache.commons.lang3.StringUtils; import org.apache.log4j.Logger; +import org.apache.xerces.impl.xpath.regex.Match; import org.joda.time.Duration; import org.libvirt.Connect; import org.libvirt.Domain; @@ -81,6 +85,7 @@ import org.libvirt.Network; import org.libvirt.SchedParameter; import org.libvirt.SchedUlongParameter; import org.libvirt.VcpuInfo; +import org.libvirt.Secret; import org.w3c.dom.Document; 
import org.w3c.dom.Element; import org.w3c.dom.Node; @@ -186,10 +191,13 @@ import com.cloud.utils.script.OutputInterpreter; import com.cloud.utils.script.OutputInterpreter.AllLinesParser; import com.cloud.utils.script.Script; import com.cloud.utils.ssh.SshHelper; +import com.cloud.utils.UuidUtils; import com.cloud.vm.VirtualMachine; import com.cloud.vm.VirtualMachine.PowerState; import com.cloud.vm.VmDetailConstants; +import static com.cloud.host.Host.HOST_VOLUME_ENCRYPTION; + /** * LibvirtComputingResource execute requests on the computing/routing host using * the libvirt API @@ -693,6 +701,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv protected String dpdkOvsPath; protected String directDownloadTemporaryDownloadPath; protected String cachePath; + protected String javaTempDir = System.getProperty("java.io.tmpdir"); private String getEndIpFromStartIp(final String startIp, final int numIps) { final String[] tokens = startIp.split("[.]"); @@ -2924,6 +2933,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv pool.getUuid(), devId, diskBusType, DiskProtocol.RBD, DiskDef.DiskFmtType.RAW); } else if (pool.getType() == StoragePoolType.PowerFlex) { disk.defBlockBasedDisk(physicalDisk.getPath(), devId, diskBusTypeData); + if (physicalDisk.getFormat().equals(PhysicalDiskFormat.QCOW2)) { + disk.setDiskFormatType(DiskDef.DiskFmtType.QCOW2); + } } else if (pool.getType() == StoragePoolType.Gluster) { final String mountpoint = pool.getLocalPath(); final String path = physicalDisk.getPath(); @@ -2960,6 +2972,12 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv if (volumeObjectTO.getCacheMode() != null) { disk.setCacheMode(DiskDef.DiskCacheMode.valueOf(volumeObjectTO.getCacheMode().toString().toUpperCase())); } + + if (volumeObjectTO.requiresEncryption()) { + String secretUuid = createLibvirtVolumeSecret(conn, volumeObjectTO.getPath(), volumeObjectTO.getPassphrase()); + 
DiskDef.LibvirtDiskEncryptDetails encryptDetails = new DiskDef.LibvirtDiskEncryptDetails(secretUuid, QemuObject.EncryptFormat.enumValue(volumeObjectTO.getEncryptFormat())); + disk.setLibvirtDiskEncryptDetails(encryptDetails); + } } if (vm.getDevices() == null) { s_logger.error("There is no devices for" + vm); @@ -3137,7 +3155,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv public boolean cleanupDisk(final DiskDef disk) { final String path = disk.getDiskPath(); - if (org.apache.commons.lang.StringUtils.isBlank(path)) { + if (StringUtils.isBlank(path)) { s_logger.debug("Unable to clean up disk with null path (perhaps empty cdrom drive):" + disk); return false; } @@ -3392,6 +3410,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv cmd.setCluster(_clusterId); cmd.setGatewayIpAddress(_localGateway); cmd.setIqn(getIqn()); + cmd.getHostDetails().put(HOST_VOLUME_ENCRYPTION, String.valueOf(hostSupportsVolumeEncryption())); if (cmd.getHostDetails().containsKey("Host.OS")) { _hostDistro = cmd.getHostDetails().get("Host.OS"); @@ -4705,6 +4724,32 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv return true; } + /** + * Test host for volume encryption support + * @return boolean + */ + public boolean hostSupportsVolumeEncryption() { + // test qemu-img + try { + QemuImg qemu = new QemuImg(0); + if (!qemu.supportsImageFormat(PhysicalDiskFormat.LUKS)) { + return false; + } + } catch (QemuImgException | LibvirtException ex) { + s_logger.info("Host's qemu install doesn't support encryption", ex); + return false; + } + + // test cryptsetup + CryptSetup crypt = new CryptSetup(); + if (!crypt.isSupported()) { + s_logger.info("Host can't run cryptsetup"); + return false; + } + + return true; + } + public boolean isSecureMode(String bootMode) { if (StringUtils.isNotBlank(bootMode) && "secure".equalsIgnoreCase(bootMode)) { return true; @@ -4743,8 +4788,9 @@ public class 
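`hostSupportsVolumeEncryption()` gates on two probes: qemu-img advertising the `luks` image format, and `cryptsetup` being runnable. A plain-string version of the first check, which scans the "Supported formats:" line of `qemu-img --help` output for a format token (the parsing approach is an assumption for illustration; the patch delegates this to `QemuImg.supportsImageFormat`):

```java
public class QemuFormatProbe {
    // Returns true if the given `qemu-img --help` output lists the format
    // token on its "Supported formats:" line.
    public static boolean supportsFormat(String helpOutput, String format) {
        for (String line : helpOutput.split("\n")) {
            if (line.startsWith("Supported formats:")) {
                String[] tokens = line.substring("Supported formats:".length()).trim().split("\\s+");
                for (String token : tokens) {
                    if (token.equals(format)) {
                        return true;
                    }
                }
            }
        }
        return false;
    }
}
```

If either probe fails, the host reports `volume_encryption_support=false` in its details and the endpoint selector will skip it for encrypted operations.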
LibvirtComputingResource extends ServerResourceBase implements Serv public void setBackingFileFormat(String volPath) { final int timeout = 0; QemuImgFile file = new QemuImgFile(volPath); - QemuImg qemu = new QemuImg(timeout); + try{ + QemuImg qemu = new QemuImg(timeout); Map info = qemu.info(file); String backingFilePath = info.get(QemuImg.BACKING_FILE); String backingFileFormat = info.get(QemuImg.BACKING_FILE_FORMAT); @@ -4815,4 +4861,70 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv dm.setSchedulerParameters(params); } + + /** + * Set up a libvirt secret for a volume. If Libvirt says that a secret already exists for this volume path, we use its uuid. + * The UUID of the secret needs to be prescriptive such that we can register the same UUID on target host during live migration + * + * @param conn libvirt connection + * @param consumer identifier for volume in secret + * @param data secret contents + * @return uuid of matching secret for volume + * @throws LibvirtException + */ + public String createLibvirtVolumeSecret(Connect conn, String consumer, byte[] data) throws LibvirtException { + String secretUuid = null; + LibvirtSecretDef secretDef = new LibvirtSecretDef(LibvirtSecretDef.Usage.VOLUME, generateSecretUUIDFromString(consumer)); + secretDef.setVolumeVolume(consumer); + secretDef.setPrivate(true); + secretDef.setEphemeral(true); + + try { + Secret secret = conn.secretDefineXML(secretDef.toString()); + secret.setValue(data); + secretUuid = secret.getUUIDString(); + secret.free(); + } catch (LibvirtException ex) { + if (ex.getMessage().contains("already defined for use")) { + Match match = new Match(); + if (UuidUtils.getUuidRegex().matches(ex.getMessage(), match)) { + secretUuid = match.getCapturedText(0); + s_logger.info(String.format("Reusing previously defined secret '%s' for volume '%s'", secretUuid, consumer)); + } else { + throw ex; + } + } else { + throw ex; + } + } + + return secretUuid; + } + + public void 
removeLibvirtVolumeSecret(Connect conn, String secretUuid) throws LibvirtException { + try { + Secret secret = conn.secretLookupByUUIDString(secretUuid); + secret.undefine(); + } catch (LibvirtException ex) { + if (ex.getMessage().contains("Secret not found")) { + s_logger.debug(String.format("Secret uuid %s doesn't exist", secretUuid)); + return; + } + throw ex; + } + s_logger.debug(String.format("Undefined secret %s", secretUuid)); + } + + public void cleanOldSecretsByDiskDef(Connect conn, List disks) throws LibvirtException { + for (DiskDef disk : disks) { + DiskDef.LibvirtDiskEncryptDetails encryptDetails = disk.getLibvirtDiskEncryptDetails(); + if (encryptDetails != null) { + removeLibvirtVolumeSecret(conn, encryptDetails.getPassphraseUuid()); + } + } + } + + public static String generateSecretUUIDFromString(String seed) { + return UUID.nameUUIDFromBytes(seed.getBytes()).toString(); + } } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParser.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParser.java index f3a177a9e0c..606115e6de0 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParser.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParser.java @@ -28,6 +28,7 @@ import javax.xml.parsers.ParserConfigurationException; import org.apache.cloudstack.utils.security.ParserUtils; import org.apache.commons.lang3.StringUtils; +import org.apache.cloudstack.utils.qemu.QemuObject; import org.apache.log4j.Logger; import org.w3c.dom.Document; import org.w3c.dom.Element; @@ -193,6 +194,15 @@ public class LibvirtDomainXMLParser { } } + NodeList encryption = disk.getElementsByTagName("encryption"); + if (encryption.getLength() != 0) { + Element encryptionElement = (Element) encryption.item(0); + String passphraseUuid = getAttrValue("secret", "uuid", encryptionElement); + 
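`generateSecretUUIDFromString` makes the secret UUID "prescriptive" by deriving a name-based (version 3) UUID from the volume identifier, so the live-migration target host computes the same UUID the source registered. The property can be demonstrated directly:

```java
import java.util.UUID;

public class VolumeSecretUuid {
    // Name-based UUIDs hash their input (MD5 for version 3), so the same
    // volume path always yields the same secret UUID on every host.
    public static String forVolumePath(String path) {
        return UUID.nameUUIDFromBytes(path.getBytes()).toString();
    }
}
```

A random UUID here would break migration, since the target could not look up or re-register the secret the domain XML references.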
QemuObject.EncryptFormat encryptFormat = QemuObject.EncryptFormat.enumValue(encryptionElement.getAttribute("format")); + DiskDef.LibvirtDiskEncryptDetails encryptDetails = new DiskDef.LibvirtDiskEncryptDetails(passphraseUuid, encryptFormat); + def.setLibvirtDiskEncryptDetails(encryptDetails); + } + diskDefs.add(def); } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtSecretDef.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtSecretDef.java index 80c08e9d86d..9596b40dec6 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtSecretDef.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtSecretDef.java @@ -55,10 +55,14 @@ public class LibvirtSecretDef { return _ephemeral; } + public void setEphemeral(boolean ephemeral) { _ephemeral = ephemeral; } + public boolean getPrivate() { return _private; } + public void setPrivate(boolean isPrivate) { _private = isPrivate; } + public String getUuid() { return _uuid; } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java index b516ecc2c29..a6fec3b60c4 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java @@ -22,6 +22,7 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import org.apache.cloudstack.utils.qemu.QemuObject; import org.apache.commons.lang.StringEscapeUtils; import org.apache.commons.lang3.StringUtils; import org.apache.log4j.Logger; @@ -559,6 +560,19 @@ public class LibvirtVMDef { } public static class DiskDef { + public static class LibvirtDiskEncryptDetails { + String passphraseUuid; + QemuObject.EncryptFormat encryptFormat; + + public 
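The `LibvirtDomainXMLParser` change above reads the `encryption` child of a disk element and extracts the secret uuid plus the format attribute. A self-contained sketch of the same extraction using stock DOM parsing (returning `null` instead of a `DiskDef` when no encryption element is present):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class DiskEncryptionXmlProbe {
    // Returns {format, secretUuid} from a disk definition's <encryption>
    // element, or null when the disk carries none (or the XML is invalid).
    public static String[] parseEncryption(String diskXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(diskXml.getBytes()));
            NodeList enc = doc.getElementsByTagName("encryption");
            if (enc.getLength() == 0) {
                return null;
            }
            Element encryption = (Element) enc.item(0);
            Element secret = (Element) encryption.getElementsByTagName("secret").item(0);
            return new String[] { encryption.getAttribute("format"), secret.getAttribute("uuid") };
        } catch (Exception e) {
            return null;
        }
    }
}
```

Parsing this back out of the running domain's XML is what lets the stop path find and undefine the per-disk secrets.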
LibvirtDiskEncryptDetails(String passphraseUuid, QemuObject.EncryptFormat encryptFormat) { + this.passphraseUuid = passphraseUuid; + this.encryptFormat = encryptFormat; + } + + public String getPassphraseUuid() { return this.passphraseUuid; } + public QemuObject.EncryptFormat getEncryptFormat() { return this.encryptFormat; } + } + public enum DeviceType { FLOPPY("floppy"), DISK("disk"), CDROM("cdrom"), LUN("lun"); String _type; @@ -714,6 +728,7 @@ public class LibvirtVMDef { private boolean qemuDriver = true; private DiscardType _discard = DiscardType.IGNORE; private IoDriver ioDriver; + private LibvirtDiskEncryptDetails encryptDetails; public DiscardType getDiscard() { return _discard; @@ -962,6 +977,8 @@ public class LibvirtVMDef { return _diskFmtType; } + public void setDiskFormatType(DiskFmtType type) { _diskFmtType = type; } + public void setBytesReadRate(Long bytesReadRate) { _bytesReadRate = bytesReadRate; } @@ -1026,6 +1043,10 @@ public class LibvirtVMDef { this._serial = serial; } + public void setLibvirtDiskEncryptDetails(LibvirtDiskEncryptDetails details) { this.encryptDetails = details; } + + public LibvirtDiskEncryptDetails getLibvirtDiskEncryptDetails() { return this.encryptDetails; } + @Override public String toString() { StringBuilder diskBuilder = new StringBuilder(); @@ -1093,7 +1114,13 @@ public class LibvirtVMDef { diskBuilder.append("/>\n"); if (_serial != null && !_serial.isEmpty() && _deviceType != DeviceType.LUN) { - diskBuilder.append("<serial>" + _serial + "</serial>"); + diskBuilder.append("<serial>" + _serial + "</serial>\n"); + } + + if (encryptDetails != null) { + diskBuilder.append("<encryption format='" + encryptDetails.getEncryptFormat() + "'>\n"); + diskBuilder.append("<secret type='passphrase' uuid='" + encryptDetails.getPassphraseUuid() + "'/>\n"); + diskBuilder.append("</encryption>\n"); } if ((_deviceType != DeviceType.CDROM) && diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtCreateCommandWrapper.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtCreateCommandWrapper.java index bfa557308e7..bac5551129a 100644 --- 
a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtCreateCommandWrapper.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtCreateCommandWrapper.java @@ -59,13 +59,13 @@ public final class LibvirtCreateCommandWrapper extends CommandWrapper 0 ) { + s_logger.debug("Invoking qemu-img to resize an offline, encrypted volume"); + QemuObject.EncryptFormat encryptFormat = QemuObject.EncryptFormat.enumValue(command.getEncryptFormat()); + resizeEncryptedQcowFile(vol, encryptFormat,newSize, command.getPassphrase(), libvirtComputingResource); + } else { + s_logger.debug("Invoking resize script to handle type " + type); + final Script resizecmd = new Script(libvirtComputingResource.getResizeVolumePath(), libvirtComputingResource.getCmdsTimeout(), s_logger); + resizecmd.add("-s", String.valueOf(newSize)); + resizecmd.add("-c", String.valueOf(currentSize)); + resizecmd.add("-p", path); + resizecmd.add("-t", type); + resizecmd.add("-r", String.valueOf(shrinkOk)); + resizecmd.add("-v", vmInstanceName); + final String result = resizecmd.execute(); + + if (result != null) { + if(type.equals(notifyOnlyType)) { + return new ResizeVolumeAnswer(command, true, "Resize succeeded, but need reboot to notify guest"); + } else { + return new ResizeVolumeAnswer(command, false, result); + } } } /* fetch new size as seen from libvirt, don't want to assume anything */ pool = storagePoolMgr.getStoragePool(spool.getType(), spool.getUuid()); pool.refresh(); - final long finalSize = pool.getPhysicalDisk(volid).getVirtualSize(); + final long finalSize = pool.getPhysicalDisk(volumeId).getVirtualSize(); s_logger.debug("after resize, size reports as: " + toHumanReadableSize(finalSize) + ", requested: " + toHumanReadableSize(newSize)); return new ResizeVolumeAnswer(command, true, "success", finalSize); } catch (final CloudRuntimeException e) { final String error = "Failed to resize volume: " + e.getMessage(); 
s_logger.debug(error); return new ResizeVolumeAnswer(command, false, error); + } finally { + command.clearPassphrase(); + } + } + + private boolean isVmRunning(final String vmName, final LibvirtComputingResource libvirtComputingResource) { + try { + final LibvirtUtilitiesHelper libvirtUtilitiesHelper = libvirtComputingResource.getLibvirtUtilitiesHelper(); + Connect conn = libvirtUtilitiesHelper.getConnectionByVmName(vmName); + Domain dom = conn.domainLookupByName(vmName); + return (dom != null && dom.getInfo().state == DomainInfo.DomainState.VIR_DOMAIN_RUNNING); + } catch (LibvirtException ex) { + s_logger.info(String.format("Did not find a running VM '%s'", vmName)); + } + return false; + } + + private void resizeEncryptedQcowFile(final KVMPhysicalDisk vol, final QemuObject.EncryptFormat encryptFormat, long newSize, + byte[] passphrase, final LibvirtComputingResource libvirtComputingResource) throws CloudRuntimeException { + List passphraseObjects = new ArrayList<>(); + try (KeyFile keyFile = new KeyFile(passphrase)) { + passphraseObjects.add( + QemuObject.prepareSecretForQemuImg(vol.getFormat(), encryptFormat, keyFile.toString(), "sec0", null) + ); + QemuImg q = new QemuImg(libvirtComputingResource.getCmdsTimeout()); + QemuImageOptions imgOptions = new QemuImageOptions(vol.getFormat(), vol.getPath(),"sec0"); + q.resize(imgOptions, passphraseObjects, newSize); + } catch (QemuImgException | LibvirtException ex) { + throw new CloudRuntimeException("Failed to run qemu-img for resize", ex); + } catch (IOException ex) { + throw new CloudRuntimeException("Failed to create keyfile for encrypted resize", ex); + } finally { + Arrays.fill(passphrase, (byte) 0); + } + } + + private long getVirtualSizeFromFile(String path) { + try { + QemuImg qemu = new QemuImg(0); + QemuImgFile qemuFile = new QemuImgFile(path); + Map info = qemu.info(qemuFile); + if (info.containsKey(QemuImg.VIRTUAL_SIZE)) { + return Long.parseLong(info.get(QemuImg.VIRTUAL_SIZE)); + } else { + throw new 
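`resizeEncryptedQcowFile()` wraps the passphrase in a `KeyFile` and hands qemu-img a secret object plus `--image-opts`, so the key reaches qemu-img via a file rather than the command line or the image path. The equivalent invocation, sketched as an argument list (the keyfile path and the `sec0` secret id are placeholders; the patch builds this via `QemuObject` and `QemuImageOptions`):

```java
import java.util.Arrays;
import java.util.List;

public class EncryptedResizeArgs {
    // Assembles a qemu-img resize invocation for a LUKS-encrypted qcow2:
    // the secret object reads the key from a file, and --image-opts binds
    // that secret to the image's encrypt.key-secret property.
    public static List<String> build(String imagePath, String keyFilePath, long newSizeBytes) {
        return Arrays.asList(
                "qemu-img", "resize",
                "--object", "secret,id=sec0,file=" + keyFilePath,
                "--image-opts", "driver=qcow2,file.filename=" + imagePath + ",encrypt.key-secret=sec0",
                String.valueOf(newSizeBytes));
    }
}
```

Zeroing the passphrase array in the `finally` block (as the wrapper does) keeps the key material from lingering in heap after the resize.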
CloudRuntimeException("Unable to determine virtual size of volume at path " + path); + } + } catch (QemuImgException | LibvirtException ex) { + throw new CloudRuntimeException("Error when inspecting volume at path " + path, ex); } } } \ No newline at end of file diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtStopCommandWrapper.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtStopCommandWrapper.java index ec243475a20..7ee6ccddf66 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtStopCommandWrapper.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtStopCommandWrapper.java @@ -99,6 +99,10 @@ public final class LibvirtStopCommandWrapper extends CommandWrapper 0) { for (final DiskDef disk : disks) { libvirtComputingResource.cleanupDisk(disk); + DiskDef.LibvirtDiskEncryptDetails diskEncryptDetails = disk.getLibvirtDiskEncryptDetails(); + if (diskEncryptDetails != null) { + libvirtComputingResource.removeLibvirtVolumeSecret(conn, diskEncryptDetails.getPassphraseUuid()); + } } } else { diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStorageAdaptor.java index daab2a4ce1d..f980cd295bd 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStorageAdaptor.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStorageAdaptor.java @@ -21,12 +21,12 @@ import java.util.List; import java.util.Map; import org.apache.cloudstack.utils.qemu.QemuImg; +import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat; import org.apache.cloudstack.utils.qemu.QemuImgException; import org.apache.cloudstack.utils.qemu.QemuImgFile; import org.apache.commons.lang3.StringUtils; import 
org.apache.log4j.Logger; - -import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat; +import org.libvirt.LibvirtException; import com.cloud.agent.api.to.DiskTO; import com.cloud.storage.Storage; @@ -35,7 +35,6 @@ import com.cloud.storage.Storage.StoragePoolType; import com.cloud.utils.exception.CloudRuntimeException; import com.cloud.utils.script.OutputInterpreter; import com.cloud.utils.script.Script; -import org.libvirt.LibvirtException; @StorageAdaptorInfo(storagePoolType=StoragePoolType.Iscsi) public class IscsiAdmStorageAdaptor implements StorageAdaptor { @@ -75,7 +74,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor { // called from LibvirtComputingResource.execute(CreateCommand) // does not apply for iScsiAdmStorageAdaptor @Override - public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, KVMStoragePool pool, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) { + public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, KVMStoragePool pool, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { throw new UnsupportedOperationException("Creating a physical disk is not supported."); } @@ -384,7 +383,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor { @Override public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, ProvisioningType provisioningType, long size, - KVMStoragePool destPool, int timeout) { + KVMStoragePool destPool, int timeout, byte[] passphrase) { throw new UnsupportedOperationException("Creating a disk from a template is not yet supported for this configuration."); } @@ -394,8 +393,12 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor { } @Override - public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk srcDisk, String destVolumeUuid, KVMStoragePool destPool, int timeout) { - QemuImg q = new QemuImg(timeout); + public KVMPhysicalDisk 
copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) { + return copyPhysicalDisk(disk, name, destPool, timeout, null, null, null); + } + + @Override + public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk srcDisk, String destVolumeUuid, KVMStoragePool destPool, int timeout, byte[] srcPassphrase, byte[] destPassphrase, ProvisioningType provisioningType) { QemuImgFile srcFile; @@ -414,6 +417,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor { QemuImgFile destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat()); try { + QemuImg q = new QemuImg(timeout); q.convert(srcFile, destFile); } catch (QemuImgException | LibvirtException ex) { String msg = "Failed to copy data from " + srcDisk.getPath() + " to " + @@ -443,7 +447,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor { } @Override - public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout) { + public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) { return null; } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStoragePool.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStoragePool.java index bd2b603fa25..09034c65325 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStoragePool.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStoragePool.java @@ -89,7 +89,7 @@ public class IscsiAdmStoragePool implements KVMStoragePool { // from LibvirtComputingResource.createDiskFromTemplate(KVMPhysicalDisk, String, PhysicalDiskFormat, long, KVMStoragePool) // does not apply for iScsiAdmStoragePool @Override - public KVMPhysicalDisk 
createPhysicalDisk(String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) { + public KVMPhysicalDisk createPhysicalDisk(String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { throw new UnsupportedOperationException("Creating a physical disk is not supported."); } @@ -97,7 +97,7 @@ public class IscsiAdmStoragePool implements KVMStoragePool { // from KVMStorageProcessor.createVolume(CreateObjectCommand) // does not apply for iScsiAdmStoragePool @Override - public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size) { + public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { throw new UnsupportedOperationException("Creating a physical disk is not supported."); } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMPhysicalDisk.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMPhysicalDisk.java index 5b4a61058d5..7de6230f334 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMPhysicalDisk.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMPhysicalDisk.java @@ -17,11 +17,13 @@ package com.cloud.hypervisor.kvm.storage; import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat; +import org.apache.cloudstack.utils.qemu.QemuObject; public class KVMPhysicalDisk { private String path; private String name; private KVMStoragePool pool; + private boolean useAsTemplate; public static String RBDStringBuilder(String monHost, int monPort, String authUserName, String authSecret, String image) { String rbdOpts; @@ -49,6 +51,7 @@ public class KVMPhysicalDisk { private PhysicalDiskFormat format; private long size; private long virtualSize; + private QemuObject.EncryptFormat qemuEncryptFormat; public KVMPhysicalDisk(String path, 
String name, KVMStoragePool pool) { this.path = path; @@ -101,4 +104,15 @@ public class KVMPhysicalDisk { this.path = path; } + public QemuObject.EncryptFormat getQemuEncryptFormat() { + return this.qemuEncryptFormat; + } + + public void setQemuEncryptFormat(QemuObject.EncryptFormat format) { + this.qemuEncryptFormat = format; + } + + public void setUseAsTemplate() { this.useAsTemplate = true; } + + public boolean useAsTemplate() { return this.useAsTemplate; } } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePool.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePool.java index feefb50b83d..3bff9c9852e 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePool.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePool.java @@ -25,9 +25,9 @@ import com.cloud.storage.Storage; import com.cloud.storage.Storage.StoragePoolType; public interface KVMStoragePool { - public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size); + public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase); - public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size); + public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size, byte[] passphrase); public boolean connectPhysicalDisk(String volumeUuid, Map details); diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java index 860390835df..4c8445a4855 100644 --- 
a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java @@ -386,35 +386,35 @@ public class KVMStoragePoolManager { } public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, Storage.ProvisioningType provisioningType, - KVMStoragePool destPool, int timeout) { - return createDiskFromTemplate(template, name, provisioningType, destPool, template.getSize(), timeout); + KVMStoragePool destPool, int timeout, byte[] passphrase) { + return createDiskFromTemplate(template, name, provisioningType, destPool, template.getSize(), timeout, passphrase); } public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, Storage.ProvisioningType provisioningType, - KVMStoragePool destPool, long size, int timeout) { + KVMStoragePool destPool, long size, int timeout, byte[] passphrase) { StorageAdaptor adaptor = getStorageAdaptor(destPool.getType()); // LibvirtStorageAdaptor-specific statement if (destPool.getType() == StoragePoolType.RBD) { return adaptor.createDiskFromTemplate(template, name, PhysicalDiskFormat.RAW, provisioningType, - size, destPool, timeout); + size, destPool, timeout, passphrase); } else if (destPool.getType() == StoragePoolType.CLVM) { return adaptor.createDiskFromTemplate(template, name, PhysicalDiskFormat.RAW, provisioningType, - size, destPool, timeout); + size, destPool, timeout, passphrase); } else if (template.getFormat() == PhysicalDiskFormat.DIR) { return adaptor.createDiskFromTemplate(template, name, PhysicalDiskFormat.DIR, provisioningType, - size, destPool, timeout); + size, destPool, timeout, passphrase); } else if (destPool.getType() == StoragePoolType.PowerFlex || destPool.getType() == StoragePoolType.Linstor) { return adaptor.createDiskFromTemplate(template, name, PhysicalDiskFormat.RAW, provisioningType, - size, destPool, timeout); + size, destPool, 
timeout, passphrase); } else { return adaptor.createDiskFromTemplate(template, name, PhysicalDiskFormat.QCOW2, provisioningType, - size, destPool, timeout); + size, destPool, timeout, passphrase); } } @@ -425,13 +425,18 @@ public class KVMStoragePoolManager { public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) { StorageAdaptor adaptor = getStorageAdaptor(destPool.getType()); - return adaptor.copyPhysicalDisk(disk, name, destPool, timeout); + return adaptor.copyPhysicalDisk(disk, name, destPool, timeout, null, null, null); + } + + public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout, byte[] srcPassphrase, byte[] dstPassphrase, Storage.ProvisioningType provisioningType) { + StorageAdaptor adaptor = getStorageAdaptor(destPool.getType()); + return adaptor.copyPhysicalDisk(disk, name, destPool, timeout, srcPassphrase, dstPassphrase, provisioningType); } public KVMPhysicalDisk createDiskWithTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, - KVMStoragePool destPool, int timeout) { + KVMStoragePool destPool, int timeout, byte[] passphrase) { StorageAdaptor adaptor = getStorageAdaptor(destPool.getType()); - return adaptor.createDiskFromTemplateBacking(template, name, format, size, destPool, timeout); + return adaptor.createDiskFromTemplateBacking(template, name, format, size, destPool, timeout, passphrase); } public KVMPhysicalDisk createPhysicalDiskFromDirectDownloadTemplate(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) { diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java index e9f7a64a905..2a4e204ffd8 100644 --- 
a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java @@ -37,6 +37,7 @@ import java.util.UUID; import javax.naming.ConfigurationException; +import com.cloud.storage.ScopeType; import org.apache.cloudstack.agent.directdownload.DirectDownloadAnswer; import org.apache.cloudstack.agent.directdownload.DirectDownloadCommand; import org.apache.cloudstack.agent.directdownload.HttpDirectDownloadCommand; @@ -68,6 +69,7 @@ import org.apache.cloudstack.utils.qemu.QemuImg; import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat; import org.apache.cloudstack.utils.qemu.QemuImgException; import org.apache.cloudstack.utils.qemu.QemuImgFile; +import org.apache.cloudstack.utils.qemu.QemuObject; import org.apache.commons.collections.MapUtils; import org.apache.commons.io.FileUtils; @@ -148,7 +150,6 @@ public class KVMStorageProcessor implements StorageProcessor { private int _cmdsTimeout; private static final String MANAGE_SNAPSTHOT_CREATE_OPTION = "-c"; - private static final String MANAGE_SNAPSTHOT_DESTROY_OPTION = "-d"; private static final String NAME_OPTION = "-n"; private static final String CEPH_MON_HOST = "mon_host"; private static final String CEPH_AUTH_KEY = "key"; @@ -250,6 +251,7 @@ public class KVMStorageProcessor implements StorageProcessor { } /* Copy volume to primary storage */ + tmplVol.setUseAsTemplate(); s_logger.debug("Copying template to primary storage, template format is " + tmplVol.getFormat() ); final KVMStoragePool primaryPool = storagePoolMgr.getStoragePool(primaryStore.getPoolType(), primaryStore.getUuid()); @@ -422,7 +424,7 @@ public class KVMStorageProcessor implements StorageProcessor { s_logger.warn("Failed to connect new volume at path: " + path + ", in storage pool id: " + primaryStore.getUuid()); } - vol = storagePoolMgr.copyPhysicalDisk(BaseVol, path != null ? 
path : volume.getUuid(), primaryPool, cmd.getWaitInMillSeconds()); + vol = storagePoolMgr.copyPhysicalDisk(BaseVol, path != null ? path : volume.getUuid(), primaryPool, cmd.getWaitInMillSeconds(), null, volume.getPassphrase(), volume.getProvisioningType()); storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path); } else { @@ -432,7 +434,7 @@ public class KVMStorageProcessor implements StorageProcessor { } BaseVol = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), templatePath); vol = storagePoolMgr.createDiskFromTemplate(BaseVol, volume.getUuid(), volume.getProvisioningType(), - BaseVol.getPool(), volume.getSize(), cmd.getWaitInMillSeconds()); + BaseVol.getPool(), volume.getSize(), cmd.getWaitInMillSeconds(), volume.getPassphrase()); } if (vol == null) { return new CopyCmdAnswer(" Can't create storage volume on storage pool"); @@ -441,6 +443,9 @@ public class KVMStorageProcessor implements StorageProcessor { final VolumeObjectTO newVol = new VolumeObjectTO(); newVol.setPath(vol.getName()); newVol.setSize(volume.getSize()); + if (vol.getQemuEncryptFormat() != null) { + newVol.setEncryptFormat(vol.getQemuEncryptFormat().toString()); + } if (vol.getFormat() == PhysicalDiskFormat.RAW) { newVol.setFormat(ImageFormat.RAW); @@ -454,6 +459,8 @@ public class KVMStorageProcessor implements StorageProcessor { } catch (final CloudRuntimeException e) { s_logger.debug("Failed to create volume: ", e); return new CopyCmdAnswer(e.toString()); + } finally { + volume.clearPassphrase(); } } @@ -524,6 +531,7 @@ public class KVMStorageProcessor implements StorageProcessor { return new CopyCmdAnswer(e.toString()); } finally { + srcVol.clearPassphrase(); if (secondaryStoragePool != null) { storagePoolMgr.deleteStoragePool(secondaryStoragePool.getType(), secondaryStoragePool.getUuid()); } @@ -570,6 +578,8 @@ public class KVMStorageProcessor implements StorageProcessor { s_logger.debug("Failed to 
copyVolumeFromPrimaryToSecondary: ", e); return new CopyCmdAnswer(e.toString()); } finally { + srcVol.clearPassphrase(); + destVol.clearPassphrase(); if (secondaryStoragePool != null) { storagePoolMgr.deleteStoragePool(secondaryStoragePool.getType(), secondaryStoragePool.getUuid()); } @@ -697,6 +707,7 @@ public class KVMStorageProcessor implements StorageProcessor { s_logger.debug("Failed to createTemplateFromVolume: ", e); return new CopyCmdAnswer(e.toString()); } finally { + volume.clearPassphrase(); if (secondaryStorage != null) { secondaryStorage.delete(); } @@ -942,6 +953,8 @@ public class KVMStorageProcessor implements StorageProcessor { Connect conn = null; KVMPhysicalDisk snapshotDisk = null; KVMStoragePool primaryPool = null; + + final VolumeObjectTO srcVolume = snapshot.getVolume(); try { conn = LibvirtConnection.getConnectionByVmName(vmName); @@ -1024,13 +1037,11 @@ public class KVMStorageProcessor implements StorageProcessor { newSnapshot.setPath(snapshotRelPath + File.separator + descName); newSnapshot.setPhysicalSize(size); return new CopyCmdAnswer(newSnapshot); - } catch (final LibvirtException e) { - s_logger.debug("Failed to backup snapshot: ", e); - return new CopyCmdAnswer(e.toString()); - } catch (final CloudRuntimeException e) { + } catch (final LibvirtException | CloudRuntimeException e) { s_logger.debug("Failed to backup snapshot: ", e); return new CopyCmdAnswer(e.toString()); } finally { + srcVolume.clearPassphrase(); if (isCreatedFromVmSnapshot) { s_logger.debug("Ignoring removal of vm snapshot on primary as this snapshot is created from vm snapshot"); } else if (primaryPool.getType() != StoragePoolType.RBD) { @@ -1058,16 +1069,6 @@ public class KVMStorageProcessor implements StorageProcessor { } } - private void deleteSnapshotViaManageSnapshotScript(final String snapshotName, KVMPhysicalDisk snapshotDisk) { - final Script command = new Script(_manageSnapshotPath, _cmdsTimeout, s_logger); - command.add(MANAGE_SNAPSTHOT_DESTROY_OPTION, 
snapshotDisk.getPath()); - command.add(NAME_OPTION, snapshotName); - final String result = command.execute(); - if (result != null) { - s_logger.debug("Failed to delete snapshot on primary: " + result); - } - } - protected synchronized String attachOrDetachISO(final Connect conn, final String vmName, String isoPath, final boolean isAttach, Map params) throws LibvirtException, URISyntaxException, InternalErrorException { String isoXml = null; @@ -1213,7 +1214,7 @@ public class KVMStorageProcessor implements StorageProcessor { final Long bytesReadRate, final Long bytesReadRateMax, final Long bytesReadRateMaxLength, final Long bytesWriteRate, final Long bytesWriteRateMax, final Long bytesWriteRateMaxLength, final Long iopsReadRate, final Long iopsReadRateMax, final Long iopsReadRateMaxLength, - final Long iopsWriteRate, final Long iopsWriteRateMax, final Long iopsWriteRateMaxLength, final String cacheMode) throws LibvirtException, InternalErrorException { + final Long iopsWriteRate, final Long iopsWriteRateMax, final Long iopsWriteRateMaxLength, final String cacheMode, final DiskDef.LibvirtDiskEncryptDetails encryptDetails) throws LibvirtException, InternalErrorException { List disks = null; Domain dm = null; DiskDef diskdef = null; @@ -1281,12 +1282,21 @@ public class KVMStorageProcessor implements StorageProcessor { final String glusterVolume = attachingPool.getSourceDir().replace("/", ""); diskdef.defNetworkBasedDisk(glusterVolume + path.replace(mountpoint, ""), attachingPool.getSourceHost(), attachingPool.getSourcePort(), null, null, devId, busT, DiskProtocol.GLUSTER, DiskDef.DiskFmtType.QCOW2); + } else if (attachingPool.getType() == StoragePoolType.PowerFlex) { + diskdef.defBlockBasedDisk(attachingDisk.getPath(), devId, busT); + if (attachingDisk.getFormat() == PhysicalDiskFormat.QCOW2) { + diskdef.setDiskFormatType(DiskDef.DiskFmtType.QCOW2); + } } else if (attachingDisk.getFormat() == PhysicalDiskFormat.QCOW2) { 
diskdef.defFileBasedDisk(attachingDisk.getPath(), devId, busT, DiskDef.DiskFmtType.QCOW2); } else if (attachingDisk.getFormat() == PhysicalDiskFormat.RAW) { diskdef.defBlockBasedDisk(attachingDisk.getPath(), devId, busT); } + if (encryptDetails != null) { + diskdef.setLibvirtDiskEncryptDetails(encryptDetails); + } + if ((bytesReadRate != null) && (bytesReadRate > 0)) { diskdef.setBytesReadRate(bytesReadRate); } @@ -1344,19 +1354,27 @@ public class KVMStorageProcessor implements StorageProcessor { final PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO)vol.getDataStore(); final String vmName = cmd.getVmName(); final String serial = resource.diskUuidToSerial(vol.getUuid()); + try { final Connect conn = LibvirtConnection.getConnectionByVmName(vmName); + DiskDef.LibvirtDiskEncryptDetails encryptDetails = null; + if (vol.requiresEncryption()) { + String secretUuid = resource.createLibvirtVolumeSecret(conn, vol.getPath(), vol.getPassphrase()); + encryptDetails = new DiskDef.LibvirtDiskEncryptDetails(secretUuid, QemuObject.EncryptFormat.enumValue(vol.getEncryptFormat())); + vol.clearPassphrase(); + } storagePoolMgr.connectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), vol.getPath(), disk.getDetails()); final KVMPhysicalDisk phyDisk = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), vol.getPath()); final String volCacheMode = vol.getCacheMode() == null ? 
null : vol.getCacheMode().toString(); + s_logger.debug(String.format("Attaching physical disk %s with format %s", phyDisk.getPath(), phyDisk.getFormat())); attachOrDetachDisk(conn, true, vmName, phyDisk, disk.getDiskSeq().intValue(), serial, vol.getBytesReadRate(), vol.getBytesReadRateMax(), vol.getBytesReadRateMaxLength(), vol.getBytesWriteRate(), vol.getBytesWriteRateMax(), vol.getBytesWriteRateMaxLength(), vol.getIopsReadRate(), vol.getIopsReadRateMax(), vol.getIopsReadRateMaxLength(), - vol.getIopsWriteRate(), vol.getIopsWriteRateMax(), vol.getIopsWriteRateMaxLength(), volCacheMode); + vol.getIopsWriteRate(), vol.getIopsWriteRateMax(), vol.getIopsWriteRateMaxLength(), volCacheMode, encryptDetails); return new AttachAnswer(disk); } catch (final LibvirtException e) { @@ -1369,6 +1387,8 @@ public class KVMStorageProcessor implements StorageProcessor { } catch (final CloudRuntimeException e) { s_logger.debug("Failed to attach volume: " + vol.getPath() + ", due to ", e); return new AttachAnswer(e.toString()); + } finally { + vol.clearPassphrase(); } } @@ -1389,7 +1409,7 @@ public class KVMStorageProcessor implements StorageProcessor { vol.getBytesReadRate(), vol.getBytesReadRateMax(), vol.getBytesReadRateMaxLength(), vol.getBytesWriteRate(), vol.getBytesWriteRateMax(), vol.getBytesWriteRateMaxLength(), vol.getIopsReadRate(), vol.getIopsReadRateMax(), vol.getIopsReadRateMaxLength(), - vol.getIopsWriteRate(), vol.getIopsWriteRateMax(), vol.getIopsWriteRateMaxLength(), volCacheMode); + vol.getIopsWriteRate(), vol.getIopsWriteRateMax(), vol.getIopsWriteRateMaxLength(), volCacheMode, null); storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), vol.getPath()); @@ -1403,6 +1423,8 @@ public class KVMStorageProcessor implements StorageProcessor { } catch (final CloudRuntimeException e) { s_logger.debug("Failed to detach volume: " + vol.getPath() + ", due to ", e); return new DettachAnswer(e.toString()); + } finally { + 
vol.clearPassphrase(); } } @@ -1421,7 +1443,7 @@ public class KVMStorageProcessor implements StorageProcessor { destTemplate = primaryPool.getPhysicalDisk(srcBackingFilePath); } return storagePoolMgr.createDiskWithTemplateBacking(destTemplate, volume.getUuid(), format, volume.getSize(), - primaryPool, timeout); + primaryPool, timeout, volume.getPassphrase()); } /** @@ -1429,7 +1451,7 @@ public class KVMStorageProcessor implements StorageProcessor { */ protected KVMPhysicalDisk createFullCloneVolume(MigrationOptions migrationOptions, VolumeObjectTO volume, KVMStoragePool primaryPool, PhysicalDiskFormat format) { s_logger.debug("For VM migration with full-clone volume: Creating empty stub disk for source disk " + migrationOptions.getSrcVolumeUuid() + " and size: " + toHumanReadableSize(volume.getSize()) + " and format: " + format); - return primaryPool.createPhysicalDisk(volume.getUuid(), format, volume.getProvisioningType(), volume.getSize()); + return primaryPool.createPhysicalDisk(volume.getUuid(), format, volume.getProvisioningType(), volume.getSize(), volume.getPassphrase()); } @Override @@ -1452,24 +1474,25 @@ public class KVMStorageProcessor implements StorageProcessor { MigrationOptions migrationOptions = volume.getMigrationOptions(); if (migrationOptions != null) { - String srcStoreUuid = migrationOptions.getSrcPoolUuid(); - StoragePoolType srcPoolType = migrationOptions.getSrcPoolType(); - KVMStoragePool srcPool = storagePoolMgr.getStoragePool(srcPoolType, srcStoreUuid); int timeout = migrationOptions.getTimeout(); if (migrationOptions.getType() == MigrationOptions.Type.LinkedClone) { + KVMStoragePool srcPool = getTemplateSourcePoolUsingMigrationOptions(primaryPool, migrationOptions); vol = createLinkedCloneVolume(migrationOptions, srcPool, primaryPool, volume, format, timeout); } else if (migrationOptions.getType() == MigrationOptions.Type.FullClone) { vol = createFullCloneVolume(migrationOptions, volume, primaryPool, format); } } else { vol = 
primaryPool.createPhysicalDisk(volume.getUuid(), format, - volume.getProvisioningType(), disksize); + volume.getProvisioningType(), disksize, volume.getPassphrase()); } final VolumeObjectTO newVol = new VolumeObjectTO(); if(vol != null) { newVol.setPath(vol.getName()); + if (vol.getQemuEncryptFormat() != null) { + newVol.setEncryptFormat(vol.getQemuEncryptFormat().toString()); + } } newVol.setSize(volume.getSize()); newVol.setFormat(ImageFormat.valueOf(format.toString().toUpperCase())); @@ -1478,6 +1501,8 @@ public class KVMStorageProcessor implements StorageProcessor { } catch (final Exception e) { s_logger.debug("Failed to create volume: ", e); return new CreateObjectAnswer(e.toString()); + } finally { + volume.clearPassphrase(); } } @@ -1553,6 +1578,10 @@ public class KVMStorageProcessor implements StorageProcessor { } } + if (state == DomainInfo.DomainState.VIR_DOMAIN_RUNNING && volume.requiresEncryption()) { + throw new CloudRuntimeException("VM is running, encrypted volume snapshots aren't supported"); + } + final KVMStoragePool primaryPool = storagePoolMgr.getStoragePool(primaryStore.getPoolType(), primaryStore.getUuid()); final KVMPhysicalDisk disk = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), volume.getPath()); @@ -1649,6 +1678,8 @@ public class KVMStorageProcessor implements StorageProcessor { String errorMsg = String.format("Failed take snapshot for volume [%s], in VM [%s], due to [%s].", volume, vmName, ex.getMessage()); s_logger.error(errorMsg, ex); return new CreateObjectAnswer(errorMsg); + } finally { + volume.clearPassphrase(); } } @@ -1662,11 +1693,11 @@ public class KVMStorageProcessor implements StorageProcessor { protected void extractDiskFromFullVmSnapshot(KVMPhysicalDisk disk, VolumeObjectTO volume, String snapshotPath, String snapshotName, String vmName, Domain vm) throws LibvirtException { - QemuImg qemuImg = new QemuImg(_cmdsTimeout); QemuImgFile srcFile = new QemuImgFile(disk.getPath(), 
disk.getFormat()); QemuImgFile destFile = new QemuImgFile(snapshotPath, disk.getFormat()); try { + QemuImg qemuImg = new QemuImg(_cmdsTimeout); s_logger.debug(String.format("Converting full VM snapshot [%s] of VM [%s] to external disk snapshot of the volume [%s].", snapshotName, vmName, volume)); qemuImg.convert(srcFile, destFile, null, snapshotName, true); } catch (QemuImgException qemuException) { @@ -1906,18 +1937,20 @@ public class KVMStorageProcessor implements StorageProcessor { } catch (final CloudRuntimeException e) { s_logger.debug("Failed to delete volume: ", e); return new Answer(null, false, e.toString()); + } finally { + vol.clearPassphrase(); } } @Override public Answer createVolumeFromSnapshot(final CopyCommand cmd) { + final DataTO srcData = cmd.getSrcTO(); + final SnapshotObjectTO snapshot = (SnapshotObjectTO)srcData; + final VolumeObjectTO volume = snapshot.getVolume(); try { - final DataTO srcData = cmd.getSrcTO(); - final SnapshotObjectTO snapshot = (SnapshotObjectTO)srcData; final DataTO destData = cmd.getDestTO(); final PrimaryDataStoreTO pool = (PrimaryDataStoreTO)destData.getDataStore(); final DataStoreTO imageStore = srcData.getDataStore(); - final VolumeObjectTO volume = snapshot.getVolume(); if (!(imageStore instanceof NfsTO || imageStore instanceof PrimaryDataStoreTO)) { return new CopyCmdAnswer("unsupported protocol"); @@ -1946,6 +1979,8 @@ public class KVMStorageProcessor implements StorageProcessor { } catch (final CloudRuntimeException e) { s_logger.debug("Failed to createVolumeFromSnapshot: ", e); return new CopyCmdAnswer(e.toString()); + } finally { + volume.clearPassphrase(); } } @@ -2075,15 +2110,15 @@ public class KVMStorageProcessor implements StorageProcessor { @Override public Answer deleteSnapshot(final DeleteCommand cmd) { String snapshotFullName = ""; + SnapshotObjectTO snapshotTO = (SnapshotObjectTO) cmd.getData(); + VolumeObjectTO volume = snapshotTO.getVolume(); try { - SnapshotObjectTO snapshotTO = (SnapshotObjectTO) 
cmd.getData(); PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO) snapshotTO.getDataStore(); KVMStoragePool primaryPool = storagePoolMgr.getStoragePool(primaryStore.getPoolType(), primaryStore.getUuid()); String snapshotFullPath = snapshotTO.getPath(); String snapshotName = snapshotFullPath.substring(snapshotFullPath.lastIndexOf("/") + 1); snapshotFullName = snapshotName; if (primaryPool.getType() == StoragePoolType.RBD) { - VolumeObjectTO volume = snapshotTO.getVolume(); KVMPhysicalDisk disk = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), volume.getPath()); snapshotFullName = disk.getName() + "@" + snapshotName; Rados r = radosConnect(primaryPool); @@ -2106,6 +2141,7 @@ public class KVMStorageProcessor implements StorageProcessor { rbd.close(image); r.ioCtxDestroy(io); } + } else if (storagePoolTypesToDeleteSnapshotFile.contains(primaryPool.getType())) { s_logger.info(String.format("Deleting snapshot (id=%s, name=%s, path=%s, storage type=%s) on primary storage", snapshotTO.getId(), snapshotTO.getName(), snapshotTO.getPath(), primaryPool.getType())); @@ -2126,6 +2162,8 @@ public class KVMStorageProcessor implements StorageProcessor { } catch (Exception e) { s_logger.error("Failed to remove snapshot " + snapshotFullName + ", with exception: " + e.toString()); return new Answer(cmd, false, "Failed to remove snapshot " + snapshotFullName); + } finally { + volume.clearPassphrase(); } } @@ -2311,6 +2349,9 @@ public class KVMStorageProcessor implements StorageProcessor { } catch (final CloudRuntimeException e) { s_logger.debug("Failed to copyVolumeFromPrimaryToPrimary: ", e); return new CopyCmdAnswer(e.toString()); + } finally { + srcVol.clearPassphrase(); + destVol.clearPassphrase(); } } @@ -2355,4 +2396,23 @@ public class KVMStorageProcessor implements StorageProcessor { s_logger.info("SyncVolumePathCommand not currently applicable for KVMStorageProcessor"); return new Answer(cmd, false, "Not currently applicable for 
KVMStorageProcessor"); } + + /** + * Determine whether the migration is using a host-local source pool. If so, return this host's storage as the template source, + * rather than the remote host's. + * @param localPool The host-local storage pool being migrated to + * @param migrationOptions The migration options provided with a migrating volume + * @return the storage pool to use as the template source + */ + public KVMStoragePool getTemplateSourcePoolUsingMigrationOptions(KVMStoragePool localPool, MigrationOptions migrationOptions) { + if (migrationOptions == null) { + throw new CloudRuntimeException("Migration options cannot be null when choosing a storage pool for migration"); + } + + if (migrationOptions.getScopeType().equals(ScopeType.HOST)) { + return localPool; + } + + return storagePoolMgr.getStoragePool(migrationOptions.getSrcPoolType(), migrationOptions.getSrcPoolUuid()); + } } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java index 317bb8eb267..4f228ac9e2d 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java @@ -17,6 +17,7 @@ package com.cloud.hypervisor.kvm.storage; import java.io.File; +import java.io.IOException; import java.nio.charset.Charset; import java.util.ArrayList; import java.util.HashMap; @@ -24,10 +25,12 @@ import java.util.List; import java.util.Map; import java.util.UUID; +import org.apache.cloudstack.utils.cryptsetup.KeyFile; import org.apache.cloudstack.utils.qemu.QemuImg; import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat; import org.apache.cloudstack.utils.qemu.QemuImgException; import org.apache.cloudstack.utils.qemu.QemuImgFile; +import org.apache.cloudstack.utils.qemu.QemuObject; import org.apache.commons.codec.binary.Base64; import org.apache.log4j.Logger;
import org.libvirt.Connect; @@ -117,9 +120,9 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { @Override public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, - KVMStoragePool destPool, int timeout) { - String volumeDesc = String.format("volume [%s], with template backing [%s], in pool [%s] (%s), with size [%s]", name, template.getName(), destPool.getUuid(), - destPool.getType(), size); + KVMStoragePool destPool, int timeout, byte[] passphrase) { + String volumeDesc = String.format("volume [%s], with template backing [%s], in pool [%s] (%s), with size [%s] and encryption is %s", name, template.getName(), destPool.getUuid(), + destPool.getType(), size, passphrase != null && passphrase.length > 0); if (!poolTypesThatEnableCreateDiskFromTemplateBacking.contains(destPool.getType())) { s_logger.info(String.format("Skipping creation of %s due to pool type is none of the following types %s.", volumeDesc, poolTypesThatEnableCreateDiskFromTemplateBacking.stream() @@ -138,12 +141,22 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { String destPoolLocalPath = destPool.getLocalPath(); String destPath = String.format("%s%s%s", destPoolLocalPath, destPoolLocalPath.endsWith("/") ? 
"" : "/", name); - try { + Map options = new HashMap<>(); + List passphraseObjects = new ArrayList<>(); + try (KeyFile keyFile = new KeyFile(passphrase)) { QemuImgFile destFile = new QemuImgFile(destPath, format); destFile.setSize(size); QemuImgFile backingFile = new QemuImgFile(template.getPath(), template.getFormat()); - new QemuImg(timeout).create(destFile, backingFile); - } catch (QemuImgException e) { + + if (keyFile.isSet()) { + passphraseObjects.add(QemuObject.prepareSecretForQemuImg(format, QemuObject.EncryptFormat.LUKS, keyFile.toString(), "sec0", options)); + } + s_logger.debug(String.format("Passphrase is staged to keyFile: %s", keyFile.isSet())); + + QemuImg qemu = new QemuImg(timeout); + qemu.create(destFile, backingFile, options, passphraseObjects); + } catch (QemuImgException | LibvirtException | IOException e) { + // why don't we throw an exception here? I guess we fail to find the volume later and that results in a failure returned? s_logger.error(String.format("Failed to create %s in [%s] due to [%s].", volumeDesc, destPath, e.getMessage()), e); } @@ -756,7 +769,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { @Override public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, - PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) { + PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { s_logger.info("Attempting to create volume " + name + " (" + pool.getType().toString() + ") in pool " + pool.getUuid() + " with size " + toHumanReadableSize(size)); @@ -768,11 +781,9 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { case Filesystem: switch (format) { case QCOW2: - return createPhysicalDiskByQemuImg(name, pool, format, provisioningType, size); case RAW: - return createPhysicalDiskByQemuImg(name, pool, format, provisioningType, size); + return createPhysicalDiskByQemuImg(name, pool, format, provisioningType, size, 
passphrase); case DIR: - return createPhysicalDiskByLibVirt(name, pool, format, provisioningType, size); case TAR: return createPhysicalDiskByLibVirt(name, pool, format, provisioningType, size); default: @@ -816,37 +827,50 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { private KVMPhysicalDisk createPhysicalDiskByQemuImg(String name, KVMStoragePool pool, - PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) { + PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { String volPath = pool.getLocalPath() + "/" + name; String volName = name; long virtualSize = 0; long actualSize = 0; + QemuObject.EncryptFormat encryptFormat = null; + List passphraseObjects = new ArrayList<>(); final int timeout = 0; QemuImgFile destFile = new QemuImgFile(volPath); destFile.setFormat(format); destFile.setSize(size); - QemuImg qemu = new QemuImg(timeout); Map options = new HashMap(); if (pool.getType() == StoragePoolType.NetworkFilesystem){ options.put("preallocation", QemuImg.PreallocationType.getPreallocationType(provisioningType).toString()); } - try{ - qemu.create(destFile, options); + try (KeyFile keyFile = new KeyFile(passphrase)) { + QemuImg qemu = new QemuImg(timeout); + if (keyFile.isSet()) { + passphraseObjects.add(QemuObject.prepareSecretForQemuImg(format, QemuObject.EncryptFormat.LUKS, keyFile.toString(), "sec0", options)); + + // make room for encryption header on raw format, use LUKS + if (format == PhysicalDiskFormat.RAW) { + destFile.setSize(destFile.getSize() - (16<<20)); + destFile.setFormat(PhysicalDiskFormat.LUKS); + } + + encryptFormat = QemuObject.EncryptFormat.LUKS; + } + qemu.create(destFile, null, options, passphraseObjects); Map info = qemu.info(destFile); virtualSize = Long.parseLong(info.get(QemuImg.VIRTUAL_SIZE)); actualSize = new File(destFile.getFileName()).length(); - } catch (QemuImgException | LibvirtException e) { - s_logger.error("Failed to create " + volPath 
+ - " due to a failed executing of qemu-img: " + e.getMessage()); + } catch (QemuImgException | LibvirtException | IOException e) { + throw new CloudRuntimeException(String.format("Failed to create %s due to a failed execution of qemu-img", volPath), e); } KVMPhysicalDisk disk = new KVMPhysicalDisk(volPath, volName, pool); disk.setFormat(format); disk.setSize(actualSize); disk.setVirtualSize(virtualSize); + disk.setQemuEncryptFormat(encryptFormat); return disk; } @@ -988,7 +1012,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { */ @Override public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, - String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout) { + String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) { s_logger.info("Creating volume " + name + " from template " + template.getName() + " in pool " + destPool.getUuid() + " (" + destPool.getType().toString() + ") with size " + toHumanReadableSize(size)); @@ -998,12 +1022,14 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { if (destPool.getType() == StoragePoolType.RBD) { disk = createDiskFromTemplateOnRBD(template, name, format, provisioningType, size, destPool, timeout); } else { - try { + try (KeyFile keyFile = new KeyFile(passphrase)){ String newUuid = name; - disk = destPool.createPhysicalDisk(newUuid, format, provisioningType, template.getVirtualSize()); + List passphraseObjects = new ArrayList<>(); + disk = destPool.createPhysicalDisk(newUuid, format, provisioningType, template.getVirtualSize(), passphrase); if (disk == null) { throw new CloudRuntimeException("Failed to create disk from template " + template.getName()); } + if (template.getFormat() == PhysicalDiskFormat.TAR) { Script.runSimpleBashScript("tar -x -f " + template.getPath() + " -C " + disk.getPath(), timeout); // TO BE 
FIXED to aware provisioningType } else if (template.getFormat() == PhysicalDiskFormat.DIR) { @@ -1020,32 +1046,45 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { } Map options = new HashMap(); options.put("preallocation", QemuImg.PreallocationType.getPreallocationType(provisioningType).toString()); + + + if (keyFile.isSet()) { + passphraseObjects.add(QemuObject.prepareSecretForQemuImg(format, QemuObject.EncryptFormat.LUKS, keyFile.toString(), "sec0", options)); + disk.setQemuEncryptFormat(QemuObject.EncryptFormat.LUKS); + } switch(provisioningType){ case THIN: QemuImgFile backingFile = new QemuImgFile(template.getPath(), template.getFormat()); - qemu.create(destFile, backingFile, options); + qemu.create(destFile, backingFile, options, passphraseObjects); break; case SPARSE: case FAT: QemuImgFile srcFile = new QemuImgFile(template.getPath(), template.getFormat()); - qemu.convert(srcFile, destFile, options, null); + qemu.convert(srcFile, destFile, options, passphraseObjects, null, false); break; } } else if (format == PhysicalDiskFormat.RAW) { + PhysicalDiskFormat destFormat = PhysicalDiskFormat.RAW; + Map options = new HashMap(); + + if (keyFile.isSet()) { + destFormat = PhysicalDiskFormat.LUKS; + disk.setQemuEncryptFormat(QemuObject.EncryptFormat.LUKS); + passphraseObjects.add(QemuObject.prepareSecretForQemuImg(destFormat, QemuObject.EncryptFormat.LUKS, keyFile.toString(), "sec0", options)); + } + QemuImgFile sourceFile = new QemuImgFile(template.getPath(), template.getFormat()); - QemuImgFile destFile = new QemuImgFile(disk.getPath(), PhysicalDiskFormat.RAW); + QemuImgFile destFile = new QemuImgFile(disk.getPath(), destFormat); if (size > template.getVirtualSize()) { destFile.setSize(size); } else { destFile.setSize(template.getVirtualSize()); } QemuImg qemu = new QemuImg(timeout); - Map options = new HashMap(); - qemu.convert(sourceFile, destFile, options, null); + qemu.convert(sourceFile, destFile, options, passphraseObjects, null, false); } - 
} catch (QemuImgException | LibvirtException e) { - s_logger.error("Failed to create " + disk.getPath() + - " due to a failed executing of qemu-img: " + e.getMessage()); + } catch (QemuImgException | LibvirtException | IOException e) { + throw new CloudRuntimeException(String.format("Failed to create %s due to a failed execution of qemu-img", name), e); } } @@ -1080,7 +1119,6 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { } - QemuImg qemu = new QemuImg(timeout); QemuImgFile srcFile; QemuImgFile destFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(destPool.getSourceHost(), destPool.getSourcePort(), @@ -1089,10 +1127,10 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { disk.getPath())); destFile.setFormat(format); - if (srcPool.getType() != StoragePoolType.RBD) { srcFile = new QemuImgFile(template.getPath(), template.getFormat()); try{ + QemuImg qemu = new QemuImg(timeout); qemu.convert(srcFile, destFile); } catch (QemuImgException | LibvirtException e) { s_logger.error("Failed to create " + disk.getPath() + @@ -1254,6 +1292,11 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { } } + @Override + public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) { + return copyPhysicalDisk(disk, name, destPool, timeout, null, null, null); + } + /** * This copies a volume from Primary Storage to Secondary Storage * @@ -1261,7 +1304,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { * in ManagementServerImpl shows that the destPool is always a Secondary Storage Pool */ @Override - public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) { + public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout, byte[] srcPassphrase, byte[] dstPassphrase, Storage.ProvisioningType provisioningType) { /** With RBD you can't run qemu-img convert with an 
existing RBD image as destination @@ -1282,9 +1325,9 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { s_logger.debug("copyPhysicalDisk: disk size:" + toHumanReadableSize(disk.getSize()) + ", virtualsize:" + toHumanReadableSize(disk.getVirtualSize())+" format:"+disk.getFormat()); if (destPool.getType() != StoragePoolType.RBD) { if (disk.getFormat() == PhysicalDiskFormat.TAR) { - newDisk = destPool.createPhysicalDisk(name, PhysicalDiskFormat.DIR, Storage.ProvisioningType.THIN, disk.getVirtualSize()); + newDisk = destPool.createPhysicalDisk(name, PhysicalDiskFormat.DIR, Storage.ProvisioningType.THIN, disk.getVirtualSize(), null); } else { - newDisk = destPool.createPhysicalDisk(name, Storage.ProvisioningType.THIN, disk.getVirtualSize()); + newDisk = destPool.createPhysicalDisk(name, Storage.ProvisioningType.THIN, disk.getVirtualSize(), null); } } else { newDisk = new KVMPhysicalDisk(destPool.getSourceDir() + "/" + name, name, destPool); @@ -1296,7 +1339,13 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { String destPath = newDisk.getPath(); PhysicalDiskFormat destFormat = newDisk.getFormat(); - QemuImg qemu = new QemuImg(timeout); + QemuImg qemu; + + try { + qemu = new QemuImg(timeout); + } catch (QemuImgException | LibvirtException ex ) { + throw new CloudRuntimeException("Failed to create qemu-img command", ex); + } QemuImgFile srcFile = null; QemuImgFile destFile = null; @@ -1333,7 +1382,7 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { newDisk = null; } } - } catch (QemuImgException | LibvirtException e) { + } catch (QemuImgException e) { s_logger.error("Failed to fetch the information of file " + srcFile.getFileName() + " the error was: " + e.getMessage()); newDisk = null; } @@ -1443,5 +1492,4 @@ public class LibvirtStorageAdaptor implements StorageAdaptor { private void deleteDirVol(LibvirtStoragePool pool, StorageVol vol) throws LibvirtException { Script.runSimpleBashScript("rm -r --interactive=never " + 
vol.getPath()); } - } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStoragePool.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStoragePool.java index 9d5bbed292d..4a449ab57fe 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStoragePool.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStoragePool.java @@ -112,15 +112,15 @@ public class LibvirtStoragePool implements KVMStoragePool { @Override public KVMPhysicalDisk createPhysicalDisk(String name, - PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) { + PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { return this._storageAdaptor - .createPhysicalDisk(name, this, format, provisioningType, size); + .createPhysicalDisk(name, this, format, provisioningType, size, passphrase); } @Override - public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size) { + public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { return this._storageAdaptor.createPhysicalDisk(name, this, - this.getDefaultFormat(), provisioningType, size); + this.getDefaultFormat(), provisioningType, size, passphrase); } @Override diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStorageAdaptor.java index ea2b185e61c..08deba57034 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStorageAdaptor.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStorageAdaptor.java @@ -16,6 +16,26 @@ // under the License. 
package com.cloud.hypervisor.kvm.storage; +import java.io.BufferedReader; +import java.io.IOException; +import java.io.InputStreamReader; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.StringJoiner; + +import javax.annotation.Nonnull; + +import org.apache.cloudstack.utils.qemu.QemuImg; +import org.apache.cloudstack.utils.qemu.QemuImgException; +import org.apache.cloudstack.utils.qemu.QemuImgFile; +import org.apache.log4j.Logger; +import org.libvirt.LibvirtException; + +import com.cloud.storage.Storage; +import com.cloud.utils.exception.CloudRuntimeException; import com.linbit.linstor.api.ApiClient; import com.linbit.linstor.api.ApiException; import com.linbit.linstor.api.Configuration; @@ -33,25 +53,6 @@ import com.linbit.linstor.api.model.ResourceWithVolumes; import com.linbit.linstor.api.model.StoragePool; import com.linbit.linstor.api.model.VolumeDefinition; -import javax.annotation.Nonnull; -import java.io.BufferedReader; -import java.io.IOException; -import java.io.InputStreamReader; -import java.util.Collections; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.Optional; -import java.util.StringJoiner; - -import com.cloud.storage.Storage; -import com.cloud.utils.exception.CloudRuntimeException; -import org.apache.cloudstack.utils.qemu.QemuImg; -import org.apache.cloudstack.utils.qemu.QemuImgException; -import org.apache.cloudstack.utils.qemu.QemuImgFile; -import org.apache.log4j.Logger; -import org.libvirt.LibvirtException; - @StorageAdaptorInfo(storagePoolType=Storage.StoragePoolType.Linstor) public class LinstorStorageAdaptor implements StorageAdaptor { private static final Logger s_logger = Logger.getLogger(LinstorStorageAdaptor.class); @@ -197,7 +198,7 @@ public class LinstorStorageAdaptor implements StorageAdaptor { @Override public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, 
QemuImg.PhysicalDiskFormat format, - Storage.ProvisioningType provisioningType, long size) + Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { final String rscName = getLinstorRscName(name); LinstorStoragePool lpool = (LinstorStoragePool) pool; @@ -377,7 +378,8 @@ public class LinstorStorageAdaptor implements StorageAdaptor { Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, - int timeout) + int timeout, + byte[] passphrase) { s_logger.info("Linstor: createDiskFromTemplate"); return copyPhysicalDisk(template, name, destPool, timeout); @@ -401,23 +403,28 @@ public class LinstorStorageAdaptor implements StorageAdaptor { } @Override - public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPools, int timeout) + public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) { + return copyPhysicalDisk(disk, name, destPool, timeout, null, null, null); + } + + @Override + public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPools, int timeout, byte[] srcPassphrase, byte[] destPassphrase, Storage.ProvisioningType provisioningType) { s_logger.debug("Linstor: copyPhysicalDisk"); final QemuImg.PhysicalDiskFormat sourceFormat = disk.getFormat(); final String sourcePath = disk.getPath(); - final QemuImg qemu = new QemuImg(timeout); final QemuImgFile srcFile = new QemuImgFile(sourcePath, sourceFormat); final KVMPhysicalDisk dstDisk = destPools.createPhysicalDisk( - name, QemuImg.PhysicalDiskFormat.RAW, Storage.ProvisioningType.FAT, disk.getVirtualSize()); + name, QemuImg.PhysicalDiskFormat.RAW, Storage.ProvisioningType.FAT, disk.getVirtualSize(), null); final QemuImgFile destFile = new QemuImgFile(dstDisk.getPath()); destFile.setFormat(dstDisk.getFormat()); destFile.setSize(disk.getVirtualSize()); try { + final QemuImg qemu = new QemuImg(timeout); qemu.convert(srcFile, destFile); } catch 
(QemuImgException | LibvirtException e) { s_logger.error(e); @@ -452,7 +459,7 @@ public class LinstorStorageAdaptor implements StorageAdaptor { QemuImg.PhysicalDiskFormat format, long size, KVMStoragePool destPool, - int timeout) + int timeout, byte[] passphrase) { s_logger.debug("Linstor: createDiskFromTemplateBacking"); return null; diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStoragePool.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStoragePool.java index e8aea2ac1fb..5bc60fd2399 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStoragePool.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LinstorStoragePool.java @@ -19,9 +19,10 @@ package com.cloud.hypervisor.kvm.storage; import java.util.List; import java.util.Map; -import com.cloud.storage.Storage; import org.apache.cloudstack.utils.qemu.QemuImg; +import com.cloud.storage.Storage; + public class LinstorStoragePool implements KVMStoragePool { private final String _uuid; private final String _sourceHost; @@ -42,15 +43,15 @@ public class LinstorStoragePool implements KVMStoragePool { @Override public KVMPhysicalDisk createPhysicalDisk(String name, QemuImg.PhysicalDiskFormat format, - Storage.ProvisioningType provisioningType, long size) + Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { - return _storageAdaptor.createPhysicalDisk(name, this, format, provisioningType, size); + return _storageAdaptor.createPhysicalDisk(name, this, format, provisioningType, size, passphrase); } @Override - public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size) + public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { - return _storageAdaptor.createPhysicalDisk(volumeUuid,this, getDefaultFormat(), provisioningType, 
size); + return _storageAdaptor.createPhysicalDisk(volumeUuid,this, getDefaultFormat(), provisioningType, size, passphrase); } @Override diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ManagedNfsStorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ManagedNfsStorageAdaptor.java index e9cb042dea5..b23dd9a1790 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ManagedNfsStorageAdaptor.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ManagedNfsStorageAdaptor.java @@ -291,6 +291,11 @@ public class ManagedNfsStorageAdaptor implements StorageAdaptor { @Override public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) { + return copyPhysicalDisk(disk, name, destPool, timeout, null, null, null); + } + + @Override + public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout, byte[] srcPassphrase, byte[] destPassphrase, ProvisioningType provisioningType) { throw new UnsupportedOperationException("Copying a disk is not supported in this configuration."); } @@ -315,7 +320,7 @@ public class ManagedNfsStorageAdaptor implements StorageAdaptor { } @Override - public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout) { + public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) { return null; } @@ -325,7 +330,7 @@ public class ManagedNfsStorageAdaptor implements StorageAdaptor { } @Override - public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, PhysicalDiskFormat format, ProvisioningType provisioningType, long size) { + public KVMPhysicalDisk createPhysicalDisk(String name, 
KVMStoragePool pool, PhysicalDiskFormat format, ProvisioningType provisioningType, long size, byte[] passphrase) { return null; } @@ -335,7 +340,7 @@ public class ManagedNfsStorageAdaptor implements StorageAdaptor { } @Override - public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout) { + public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) { return null; } } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptor.java index 4a55288be8f..09c7e146e49 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptor.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptor.java @@ -19,6 +19,8 @@ package com.cloud.hypervisor.kvm.storage; import java.io.File; import java.io.FileFilter; +import java.io.IOException; +import java.util.ArrayList; import java.util.Arrays; import java.util.HashMap; import java.util.List; @@ -26,11 +28,17 @@ import java.util.Map; import java.util.UUID; import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil; +import org.apache.cloudstack.utils.cryptsetup.CryptSetup; +import org.apache.cloudstack.utils.cryptsetup.CryptSetupException; +import org.apache.cloudstack.utils.cryptsetup.KeyFile; +import org.apache.cloudstack.utils.qemu.QemuImageOptions; import org.apache.cloudstack.utils.qemu.QemuImg; import org.apache.cloudstack.utils.qemu.QemuImgException; import org.apache.cloudstack.utils.qemu.QemuImgFile; +import org.apache.cloudstack.utils.qemu.QemuObject; import 
org.apache.commons.io.filefilter.WildcardFileFilter; import org.apache.log4j.Logger; +import org.libvirt.LibvirtException; import com.cloud.storage.Storage; import com.cloud.storage.StorageLayer; @@ -39,7 +47,6 @@ import com.cloud.utils.exception.CloudRuntimeException; import com.cloud.utils.script.OutputInterpreter; import com.cloud.utils.script.Script; import org.apache.commons.lang3.StringUtils; -import org.libvirt.LibvirtException; @StorageAdaptorInfo(storagePoolType= Storage.StoragePoolType.PowerFlex) public class ScaleIOStorageAdaptor implements StorageAdaptor { @@ -103,11 +110,27 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor { } KVMPhysicalDisk disk = new KVMPhysicalDisk(diskFilePath, volumePath, pool); - disk.setFormat(QemuImg.PhysicalDiskFormat.RAW); + + // Try to discover the format as written to disk, rather than assuming raw. + // We support qcow2 for stored primary templates; disks detected as anything else are treated as raw. + QemuImg qemu = new QemuImg(0); + QemuImgFile qemuFile = new QemuImgFile(diskFilePath); + Map details = qemu.info(qemuFile); + String detectedFormat = details.getOrDefault(QemuImg.FILE_FORMAT, "none"); + if (detectedFormat.equalsIgnoreCase(QemuImg.PhysicalDiskFormat.QCOW2.toString())) { + disk.setFormat(QemuImg.PhysicalDiskFormat.QCOW2); + } else { + disk.setFormat(QemuImg.PhysicalDiskFormat.RAW); + } long diskSize = getPhysicalDiskSize(diskFilePath); disk.setSize(diskSize); - disk.setVirtualSize(diskSize); + + if (details.containsKey(QemuImg.VIRTUAL_SIZE)) { + disk.setVirtualSize(Long.parseLong(details.get(QemuImg.VIRTUAL_SIZE))); + } else { + disk.setVirtualSize(diskSize); + } return disk; } catch (Exception e) { @@ -128,9 +151,59 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor { return MapStorageUuidToStoragePool.remove(uuid) != null; } + /** + * ScaleIO normally does not need to communicate with the hypervisor to create a volume; this method is used only to prepare a ScaleIO data disk for encryption.
+ * Thin encrypted volumes are provisioned in QCOW2 format, which insulates the guest from zeroes/unallocated blocks in the block device that would + * otherwise show up as garbage data through the encryption layer. As a bonus, encrypted QCOW2 format handles discard. + * @param name disk path + * @param pool pool + * @param format disk format + * @param provisioningType provisioning type + * @param size disk size + * @param passphrase passphrase + * @return the disk object + */ @Override - public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) { - return null; + public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { + if (passphrase == null || passphrase.length == 0) { + return null; + } + + if(!connectPhysicalDisk(name, pool, null)) { + throw new CloudRuntimeException(String.format("Failed to ensure disk %s was present", name)); + } + + KVMPhysicalDisk disk = getPhysicalDisk(name, pool); + + if (provisioningType.equals(Storage.ProvisioningType.THIN)) { + disk.setFormat(QemuImg.PhysicalDiskFormat.QCOW2); + disk.setQemuEncryptFormat(QemuObject.EncryptFormat.LUKS); + try (KeyFile keyFile = new KeyFile(passphrase)){ + QemuImg qemuImg = new QemuImg(0, true, false); + Map options = new HashMap<>(); + List qemuObjects = new ArrayList<>(); + long formattedSize = getUsableBytesFromRawBytes(disk.getSize()); + + options.put("preallocation", QemuImg.PreallocationType.Metadata.toString()); + qemuObjects.add(QemuObject.prepareSecretForQemuImg(disk.getFormat(), disk.getQemuEncryptFormat(), keyFile.toString(), "sec0", options)); + QemuImgFile file = new QemuImgFile(disk.getPath(), formattedSize, disk.getFormat()); + qemuImg.create(file, null, options, qemuObjects); + LOGGER.debug(String.format("Successfully formatted %s as encrypted QCOW2", 
file.getFileName())); + } catch (QemuImgException | LibvirtException | IOException ex) { + throw new CloudRuntimeException("Failed to set up encrypted QCOW on block device " + disk.getPath(), ex); + } + } else { + try { + CryptSetup crypt = new CryptSetup(); + crypt.luksFormat(passphrase, CryptSetup.LuksType.LUKS, disk.getPath()); + disk.setQemuEncryptFormat(QemuObject.EncryptFormat.LUKS); + disk.setFormat(QemuImg.PhysicalDiskFormat.RAW); + } catch (CryptSetupException ex) { + throw new CloudRuntimeException("Failed to set up encryption for block device " + disk.getPath(), ex); + } + } + + return disk; } @Override @@ -228,7 +301,7 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor { } @Override - public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout) { + public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) { return null; } @@ -244,11 +317,20 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor { @Override public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) { + return copyPhysicalDisk(disk, name, destPool, timeout, null, null, Storage.ProvisioningType.THIN); + } + + @Override + public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout, byte[] srcPassphrase, byte[]dstPassphrase, Storage.ProvisioningType provisioningType) { if (StringUtils.isEmpty(name) || disk == null || destPool == null) { LOGGER.error("Unable to copy physical disk due to insufficient data"); throw new CloudRuntimeException("Unable to copy physical disk due to insufficient data"); } + if (provisioningType == null) { + provisioningType 
= Storage.ProvisioningType.THIN; + } + LOGGER.debug("Copy physical disk with size: " + disk.getSize() + ", virtualsize: " + disk.getVirtualSize()+ ", format: " + disk.getFormat()); KVMPhysicalDisk destDisk = destPool.getPhysicalDisk(name); @@ -257,24 +339,65 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor { throw new CloudRuntimeException("Failed to find the disk: " + name + " of the storage pool: " + destPool.getUuid()); } - destDisk.setFormat(QemuImg.PhysicalDiskFormat.RAW); destDisk.setVirtualSize(disk.getVirtualSize()); destDisk.setSize(disk.getSize()); - QemuImg qemu = new QemuImg(timeout); - QemuImgFile srcFile = null; - QemuImgFile destFile = null; + QemuImg qemu = null; + QemuImgFile srcQemuFile = null; + QemuImgFile destQemuFile = null; + String srcKeyName = "sec0"; + String destKeyName = "sec1"; + List qemuObjects = new ArrayList<>(); + Map options = new HashMap(); + CryptSetup cryptSetup = null; - try { - srcFile = new QemuImgFile(disk.getPath(), disk.getFormat()); - destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat()); + try (KeyFile srcKey = new KeyFile(srcPassphrase); KeyFile dstKey = new KeyFile(dstPassphrase)){ + qemu = new QemuImg(timeout, provisioningType.equals(Storage.ProvisioningType.FAT), false); + String srcPath = disk.getPath(); + String destPath = destDisk.getPath(); - LOGGER.debug("Starting copy from source disk image " + srcFile.getFileName() + " to PowerFlex volume: " + destDisk.getPath()); - qemu.convert(srcFile, destFile, true); - LOGGER.debug("Successfully converted source disk image " + srcFile.getFileName() + " to PowerFlex volume: " + destDisk.getPath()); - } catch (QemuImgException | LibvirtException e) { + QemuImageOptions qemuImageOpts = new QemuImageOptions(srcPath); + + srcQemuFile = new QemuImgFile(srcPath, disk.getFormat()); + destQemuFile = new QemuImgFile(destPath); + + if (disk.useAsTemplate()) { + destQemuFile.setFormat(QemuImg.PhysicalDiskFormat.QCOW2); + } + + if (srcKey.isSet()) { + 
qemuObjects.add(QemuObject.prepareSecretForQemuImg(disk.getFormat(), disk.getQemuEncryptFormat(), srcKey.toString(), srcKeyName, options)); + qemuImageOpts = new QemuImageOptions(disk.getFormat(), srcPath, srcKeyName); + } + + if (dstKey.isSet()) { + if (!provisioningType.equals(Storage.ProvisioningType.FAT)) { + destDisk.setFormat(QemuImg.PhysicalDiskFormat.QCOW2); + destQemuFile.setFormat(QemuImg.PhysicalDiskFormat.QCOW2); + options.put("preallocation", QemuImg.PreallocationType.Metadata.toString()); + } else { + qemu.setSkipZero(false); + destDisk.setFormat(QemuImg.PhysicalDiskFormat.RAW); + // qemu-img wants to treat RAW + encrypt formatting as LUKS + destQemuFile.setFormat(QemuImg.PhysicalDiskFormat.LUKS); + } + qemuObjects.add(QemuObject.prepareSecretForQemuImg(destDisk.getFormat(), QemuObject.EncryptFormat.LUKS, dstKey.toString(), destKeyName, options)); + destDisk.setQemuEncryptFormat(QemuObject.EncryptFormat.LUKS); + } + + boolean forceSourceFormat = srcQemuFile.getFormat() == QemuImg.PhysicalDiskFormat.RAW; + LOGGER.debug(String.format("Starting copy from source disk %s(%s) to PowerFlex volume %s(%s), forcing source format is %b", srcQemuFile.getFileName(), srcQemuFile.getFormat(), destQemuFile.getFileName(), destQemuFile.getFormat(), forceSourceFormat)); + qemu.convert(srcQemuFile, destQemuFile, options, qemuObjects, qemuImageOpts,null, forceSourceFormat); + LOGGER.debug("Successfully converted source disk image " + srcQemuFile.getFileName() + " to PowerFlex volume: " + destDisk.getPath()); + + if (destQemuFile.getFormat() == QemuImg.PhysicalDiskFormat.QCOW2 && !disk.useAsTemplate()) { + QemuImageOptions resizeOptions = new QemuImageOptions(destQemuFile.getFormat(), destPath, destKeyName); + resizeQcow2ToVolume(destPath, resizeOptions, qemuObjects, timeout); + LOGGER.debug("Resized volume at " + destPath); + } + } catch (QemuImgException | LibvirtException | IOException e) { try { - Map srcInfo = qemu.info(srcFile); + Map srcInfo = 
qemu.info(srcQemuFile); LOGGER.debug("Source disk info: " + Arrays.asList(srcInfo)); } catch (Exception ignored) { LOGGER.warn("Unable to get info from source disk: " + disk.getName()); @@ -283,11 +406,20 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor { String errMsg = String.format("Unable to convert/copy from %s to %s, due to: %s", disk.getName(), name, ((StringUtils.isEmpty(e.getMessage())) ? "an unknown error" : e.getMessage())); LOGGER.error(errMsg); throw new CloudRuntimeException(errMsg, e); + } finally { + if (cryptSetup != null) { + try { + cryptSetup.close(name); + } catch (CryptSetupException ex) { + LOGGER.warn("Failed to clean up LUKS disk after copying disk", ex); + } + } } return destDisk; } + @Override public boolean refresh(KVMStoragePool pool) { return true; @@ -310,7 +442,7 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor { @Override - public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout) { + public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) { return null; } @@ -347,6 +479,7 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor { QemuImgFile srcFile = null; QemuImgFile destFile = null; try { + QemuImg qemu = new QemuImg(timeout, true, false); destDisk = destPool.getPhysicalDisk(destTemplatePath); if (destDisk == null) { LOGGER.error("Failed to find the disk: " + destTemplatePath + " of the storage pool: " + destPool.getUuid()); @@ -369,14 +502,21 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor { } srcFile = new QemuImgFile(srcTemplateFilePath, srcFileFormat); - destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat()); + qemu.info(srcFile); + /** + * Even though the disk itself is raw, we store templates on ScaleIO 
in qcow2 format. + * This improves performance by reading/writing less data to the volume, saves unused space for the encryption header, and + * nicely encapsulates VM images that might contain LUKS data (as opposed to converting to raw, which would look like a LUKS volume). + */ + destFile = new QemuImgFile(destDisk.getPath(), QemuImg.PhysicalDiskFormat.QCOW2); + destFile.setSize(srcFile.getSize()); LOGGER.debug("Starting copy from source downloaded template " + srcFile.getFileName() + " to PowerFlex template volume: " + destDisk.getPath()); - QemuImg qemu = new QemuImg(timeout); + qemu.create(destFile); qemu.convert(srcFile, destFile); LOGGER.debug("Successfully converted source downloaded template " + srcFile.getFileName() + " to PowerFlex template volume: " + destDisk.getPath()); } catch (QemuImgException | LibvirtException e) { - LOGGER.error("Failed to convert from " + srcFile.getFileName() + " to " + destFile.getFileName() + " the error was: " + e.getMessage(), e); + LOGGER.error("Failed to convert. The error was: " + e.getMessage(), e); destDisk = null; } finally { Script.runSimpleBashScript("rm -f " + srcTemplateFilePath); @@ -401,4 +541,25 @@ public class ScaleIOStorageAdaptor implements StorageAdaptor { throw new CloudRuntimeException("Unable to extract template " + downloadedTemplateFile); } } + + public void resizeQcow2ToVolume(String volumePath, QemuImageOptions options, List<QemuObject> objects, Integer timeout) throws QemuImgException, LibvirtException { + long rawSizeBytes = getPhysicalDiskSize(volumePath); + long usableSizeBytes = getUsableBytesFromRawBytes(rawSizeBytes); + QemuImg qemu = new QemuImg(timeout); + qemu.resize(options, objects, usableSizeBytes); + } + + /** + * Calculates usable size from raw size, assuming qcow2 requires roughly 196KiB (200704 bytes) per 1GiB for metadata. + * We also remove 32MiB as a safety margin for the encryption header. + * @param raw size in bytes + * @return usable size in bytes + */ + public static long getUsableBytesFromRawBytes(Long raw) { + long usable = raw - (32 << 20) - ((raw >> 30) * 200704); + if (usable < 0) { + usable = 0L; + } + return usable; + } } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStoragePool.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStoragePool.java index 9ddcd6537d8..cf977f5467b 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStoragePool.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStoragePool.java @@ -70,12 +70,12 @@ public class ScaleIOStoragePool implements KVMStoragePool { } @Override - public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) { - return null; + public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { + return this.storageAdaptor.createPhysicalDisk(volumeUuid, this, format, provisioningType, size, passphrase); } @Override - public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size) { + public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { return null; } diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/StorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/StorageAdaptor.java index 19687d7721a..ecf8691c6ed 100644 --- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/StorageAdaptor.java +++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/StorageAdaptor.java @@ -40,7 +40,7 @@ public interface
StorageAdaptor { public boolean deleteStoragePool(String uuid); public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, - PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size); + PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase); // given disk path (per database) and pool, prepare disk on host public boolean connectPhysicalDisk(String volumePath, KVMStoragePool pool, Map details); @@ -58,13 +58,14 @@ public interface StorageAdaptor { public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, - KVMStoragePool destPool, int timeout); + KVMStoragePool destPool, int timeout, byte[] passphrase); public KVMPhysicalDisk createTemplateFromDisk(KVMPhysicalDisk disk, String name, PhysicalDiskFormat format, long size, KVMStoragePool destPool); public List listPhysicalDisks(String storagePoolUuid, KVMStoragePool pool); public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPools, int timeout); + public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPools, int timeout, byte[] srcPassphrase, byte[] dstPassphrase, Storage.ProvisioningType provisioningType); public boolean refresh(KVMStoragePool pool); @@ -80,7 +81,7 @@ public interface StorageAdaptor { */ KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, long size, - KVMStoragePool destPool, int timeout); + KVMStoragePool destPool, int timeout, byte[] passphrase); /** * Create physical disk on Primary Storage from direct download template on the host (in temporary location) diff --git a/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/CryptSetup.java b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/CryptSetup.java new file mode 100644 
index 00000000000..82c4ebe6d8f --- /dev/null +++ b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/CryptSetup.java @@ -0,0 +1,124 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +package org.apache.cloudstack.utils.cryptsetup; + +import com.cloud.utils.script.Script; + +import java.io.IOException; + +public class CryptSetup { + protected String commandPath = "cryptsetup"; + + /** + * LuksType represents the possible types that can be passed to cryptsetup. + * NOTE: Only "luks1" is currently supported with Libvirt, so while + * this utility may be capable of creating various types, care should + * be taken to use types that work for the use case.
+ */ + public enum LuksType { + LUKS("luks1"), LUKS2("luks2"), PLAIN("plain"), TCRYPT("tcrypt"), BITLK("bitlk"); + + final String luksTypeValue; + + LuksType(String type) { this.luksTypeValue = type; } + + @Override + public String toString() { + return luksTypeValue; + } + } + + public CryptSetup(final String commandPath) { + this.commandPath = commandPath; + } + + public CryptSetup() {} + + public void open(byte[] passphrase, String diskPath, String diskName) throws CryptSetupException { + try(KeyFile key = new KeyFile(passphrase)) { + final Script script = new Script(commandPath); + script.add("open"); + script.add("--key-file"); + script.add(key.toString()); + script.add("--allow-discards"); + script.add(diskPath); + script.add(diskName); + + final String result = script.execute(); + if (result != null) { + throw new CryptSetupException(result); + } + } catch (IOException ex) { + throw new CryptSetupException(String.format("Failed to open encrypted device at '%s'", diskPath), ex); + } + } + + public void close(String diskName) throws CryptSetupException { + final Script script = new Script(commandPath); + script.add("close"); + script.add(diskName); + + final String result = script.execute(); + if (result != null) { + throw new CryptSetupException(result); + } + } + + /** + * Formats a file using cryptsetup + * @param passphrase + * @param luksType + * @param diskPath + * @throws CryptSetupException + */ + public void luksFormat(byte[] passphrase, LuksType luksType, String diskPath) throws CryptSetupException { + try(KeyFile key = new KeyFile(passphrase)) { + final Script script = new Script(commandPath); + script.add("luksFormat"); + script.add("-q"); + script.add("--force-password"); + script.add("--key-file"); + script.add(key.toString()); + script.add("--type"); + script.add(luksType.toString()); + script.add(diskPath); + + final String result = script.execute(); + if (result != null) { + throw new CryptSetupException(result); + } + } catch (IOException ex) 
{ + throw new CryptSetupException(String.format("Failed to format encrypted device at '%s'", diskPath), ex); + } + } + + public boolean isSupported() { + final Script script = new Script(commandPath); + script.add("--usage"); + final String result = script.execute(); + return result == null; + } + + public boolean isLuks(String filePath) { + final Script script = new Script(commandPath); + script.add("isLuks"); + script.add(filePath); + + final String result = script.execute(); + return result == null; + } +} diff --git a/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/CryptSetupException.java b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/CryptSetupException.java new file mode 100644 index 00000000000..82c8030ff05 --- /dev/null +++ b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/CryptSetupException.java @@ -0,0 +1,27 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.apache.cloudstack.utils.cryptsetup; + +public class CryptSetupException extends Exception { + public CryptSetupException(String message) { + super(message); + } + + public CryptSetupException(String message, Exception ex) { super(message, ex); } +} diff --git a/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/KeyFile.java b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/KeyFile.java new file mode 100644 index 00000000000..b680bfcc62d --- /dev/null +++ b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/cryptsetup/KeyFile.java @@ -0,0 +1,76 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +package org.apache.cloudstack.utils.cryptsetup; + +import java.io.Closeable; +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.nio.file.attribute.PosixFilePermission; +import java.nio.file.attribute.PosixFilePermissions; +import java.util.Set; + +public class KeyFile implements Closeable { + private Path filePath = null; + + /** + * KeyFile represents a temporary file for storing data + * to pass to commands, as an alternative to putting sensitive + * data on the command line.
+ * @param key byte array of content for the KeyFile + * @throws IOException if the temporary key file cannot be created + */ + public KeyFile(byte[] key) throws IOException { + if (key != null && key.length > 0) { + Set<PosixFilePermission> permissions = PosixFilePermissions.fromString("rw-------"); + filePath = Files.createTempFile("keyfile", ".tmp", PosixFilePermissions.asFileAttribute(permissions)); + Files.write(filePath, key); + } + } + + public Path getPath() { + return filePath; + } + + public boolean isSet() { + return filePath != null; + } + + /** + * Converts the keyfile to the absolute path String where it is located + * @return absolute path as String + */ + @Override + public String toString() { + if (filePath != null) { + return filePath.toAbsolutePath().toString(); + } + return ""; + } + + /** + * Deletes the underlying key file + * @throws IOException if the underlying key file cannot be deleted + */ + @Override + public void close() throws IOException { + if (isSet()) { + Files.delete(filePath); + filePath = null; + } + } +} diff --git a/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuImageOptions.java b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuImageOptions.java new file mode 100644 index 00000000000..4e2c1c4bc69 --- /dev/null +++ b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuImageOptions.java @@ -0,0 +1,78 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License.
You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +package org.apache.cloudstack.utils.qemu; + +import com.google.common.base.Joiner; + +import java.util.HashMap; +import java.util.Map; +import java.util.TreeMap; + +public class QemuImageOptions { + private Map params = new HashMap<>(); + private static final String FILENAME_PARAM_KEY = "file.filename"; + private static final String LUKS_KEY_SECRET_PARAM_KEY = "key-secret"; + private static final String QCOW2_KEY_SECRET_PARAM_KEY = "encrypt.key-secret"; + + public QemuImageOptions(String filePath) { + params.put(FILENAME_PARAM_KEY, filePath); + } + + /** + * Constructor for self-crafting the full map of parameters + * @param params the map of parameters + */ + public QemuImageOptions(Map params) { + this.params = params; + } + + /** + * Constructor for crafting image options that may contain a secret or format + * @param format optional format, renders as "driver" option + * @param filePath required path of image + * @param secretName optional secret name for image. 
Secret only applies for QCOW2 or LUKS format + */ + public QemuImageOptions(QemuImg.PhysicalDiskFormat format, String filePath, String secretName) { + params.put(FILENAME_PARAM_KEY, filePath); + if (secretName != null && !secretName.isBlank()) { + if (format.equals(QemuImg.PhysicalDiskFormat.QCOW2)) { + params.put(QCOW2_KEY_SECRET_PARAM_KEY, secretName); + } else if (format.equals(QemuImg.PhysicalDiskFormat.LUKS)) { + params.put(LUKS_KEY_SECRET_PARAM_KEY, secretName); + } + } + if (format != null) { + params.put("driver", format.toString()); + } + } + + public void setFormat(QemuImg.PhysicalDiskFormat format) { + if (format != null) { + params.put("driver", format.toString()); + } + } + + /** + * Converts QemuImageOptions into the command strings required by qemu-img flags + * @return array of strings representing command flag and value (--image-opts) + */ + public String[] toCommandFlag() { + Map sorted = new TreeMap<>(params); + String paramString = Joiner.on(",").withKeyValueSeparator("=").join(sorted); + return new String[] {"--image-opts", paramString}; + } +} diff --git a/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuImg.java b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuImg.java index 351ec1031e3..43dd0c80292 100644 --- a/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuImg.java +++ b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuImg.java @@ -16,33 +16,47 @@ // under the License. 
package org.apache.cloudstack.utils.qemu; +import java.nio.file.Files; +import java.nio.file.Paths; import java.util.HashMap; import java.util.Iterator; +import java.util.List; import java.util.Map; +import java.util.regex.Pattern; + +import org.apache.commons.lang.NotImplementedException; +import org.apache.commons.lang3.StringUtils; +import org.libvirt.LibvirtException; import com.cloud.hypervisor.kvm.resource.LibvirtConnection; import com.cloud.storage.Storage; import com.cloud.utils.script.OutputInterpreter; import com.cloud.utils.script.Script; -import org.apache.commons.lang3.StringUtils; import org.apache.log4j.Logger; -import org.apache.commons.lang.NotImplementedException; -import org.libvirt.LibvirtException; + +import static java.util.regex.Pattern.CASE_INSENSITIVE; public class QemuImg { private Logger logger = Logger.getLogger(this.getClass()); - public final static String BACKING_FILE = "backing_file"; - public final static String BACKING_FILE_FORMAT = "backing_file_format"; - public final static String CLUSTER_SIZE = "cluster_size"; - public final static String FILE_FORMAT = "file_format"; - public final static String IMAGE = "image"; - public final static String VIRTUAL_SIZE = "virtual_size"; + public static final String BACKING_FILE = "backing_file"; + public static final String BACKING_FILE_FORMAT = "backing_file_format"; + public static final String CLUSTER_SIZE = "cluster_size"; + public static final String FILE_FORMAT = "file_format"; + public static final String IMAGE = "image"; + public static final String VIRTUAL_SIZE = "virtual_size"; + public static final String ENCRYPT_FORMAT = "encrypt.format"; + public static final String ENCRYPT_KEY_SECRET = "encrypt.key-secret"; + public static final String TARGET_ZERO_FLAG = "--target-is-zero"; + public static final long QEMU_2_10 = 2010000; /* The qemu-img binary. 
We expect this to be in $PATH */ public String _qemuImgPath = "qemu-img"; private String cloudQemuImgPath = "cloud-qemu-img"; private int timeout; + private boolean skipZero = false; + private boolean noCache = false; + private long version; private String getQemuImgPathScript = String.format("which %s >& /dev/null; " + "if [ $? -gt 0 ]; then echo \"%s\"; else echo \"%s\"; fi", @@ -50,7 +64,7 @@ public class QemuImg { /* Shouldn't we have KVMPhysicalDisk and LibvirtVMDef read this? */ public static enum PhysicalDiskFormat { - RAW("raw"), QCOW2("qcow2"), VMDK("vmdk"), FILE("file"), RBD("rbd"), SHEEPDOG("sheepdog"), HTTP("http"), HTTPS("https"), TAR("tar"), DIR("dir"); + RAW("raw"), QCOW2("qcow2"), VMDK("vmdk"), FILE("file"), RBD("rbd"), SHEEPDOG("sheepdog"), HTTP("http"), HTTPS("https"), TAR("tar"), DIR("dir"), LUKS("luks"); String format; private PhysicalDiskFormat(final String format) { @@ -93,8 +107,41 @@ public class QemuImg { } } - public QemuImg(final int timeout) { + /** + * Create a QemuImg object that supports skipping target zeroes + * We detect this support via qemu-img help since support can + * be backported rather than found in a specific version. 
+ * + * @param timeout script timeout, default 0 + * @param skipZeroIfSupported Don't write zeroes to target device during convert, if supported by qemu-img + * @param noCache Ensure we flush writes to target disk (useful for block device targets) + */ + public QemuImg(final int timeout, final boolean skipZeroIfSupported, final boolean noCache) throws LibvirtException { + if (skipZeroIfSupported) { + final Script s = new Script(_qemuImgPath, timeout); + s.add("--help"); + + final OutputInterpreter.AllLinesParser parser = new OutputInterpreter.AllLinesParser(); + final String result = s.execute(parser); + + // Older Qemu returns output in result due to --help reporting error status + if (result != null) { + if (result.contains(TARGET_ZERO_FLAG)) { + this.skipZero = true; + } + } else { + if (parser.getLines().contains(TARGET_ZERO_FLAG)) { + this.skipZero = true; + } + } + } this.timeout = timeout; + this.noCache = noCache; + this.version = LibvirtConnection.getConnection().getVersion(); + } + + public QemuImg(final int timeout) throws LibvirtException, QemuImgException { + this(timeout, false, false); } public void setTimeout(final int timeout) { @@ -109,7 +156,8 @@ public class QemuImg { * A alternative path to the qemu-img binary * @return void */ - public QemuImg(final String qemuImgPath) { + public QemuImg(final String qemuImgPath) throws LibvirtException { + this(0, false, false); _qemuImgPath = qemuImgPath; } @@ -135,9 +183,35 @@ public class QemuImg { * @return void */ public void create(final QemuImgFile file, final QemuImgFile backingFile, final Map options) throws QemuImgException { + create(file, backingFile, options, null); + } + + /** + * Create a new image + * + * This method calls 'qemu-img create' + * + * @param file + * The file to create + * @param backingFile + * A backing file if used (for example with qcow2) + * @param options + * Options for the create. Takes a Map with key value + * pairs which are passed on to qemu-img without validation. 
+ * @param qemuObjects + * Pass list of qemu Object to create - see objects in qemu man page + * @return void + */ + public void create(final QemuImgFile file, final QemuImgFile backingFile, final Map options, final List qemuObjects) throws QemuImgException { final Script s = new Script(_qemuImgPath, timeout); s.add("create"); + if (this.version >= QEMU_2_10 && qemuObjects != null) { + for (QemuObject o : qemuObjects) { + s.add(o.toCommandFlag()); + } + } + if (options != null && !options.isEmpty()) { s.add("-o"); final StringBuilder optionsStr = new StringBuilder(); @@ -247,6 +321,63 @@ public class QemuImg { */ public void convert(final QemuImgFile srcFile, final QemuImgFile destFile, final Map options, final String snapshotName, final boolean forceSourceFormat) throws QemuImgException, LibvirtException { + convert(srcFile, destFile, options, null, snapshotName, forceSourceFormat); + } + + /** + * Convert a image from source to destination + * + * This method calls 'qemu-img convert' and takes five objects + * as an argument. + * + * + * @param srcFile + * The source file + * @param destFile + * The destination file + * @param options + * Options for the convert. Takes a Map with key value + * pairs which are passed on to qemu-img without validation. 
+ * @param qemuObjects + * Pass qemu objects to the convert - see objects in the qemu man page + * @param snapshotName + * If provided, the conversion uses it as a parameter + * @param forceSourceFormat + * If true, specifies the source format in the conversion cmd + * @return void + */ + public void convert(final QemuImgFile srcFile, final QemuImgFile destFile, + final Map<String, String> options, final List<QemuObject> qemuObjects, final String snapshotName, final boolean forceSourceFormat) throws QemuImgException { + QemuImageOptions imageOpts = new QemuImageOptions(srcFile.getFormat(), srcFile.getFileName(), null); + convert(srcFile, destFile, options, qemuObjects, imageOpts, snapshotName, forceSourceFormat); + } + + /** + * Convert an image from source to destination + * + * This method calls 'qemu-img convert' and takes seven + * arguments. + * + * + * @param srcFile + * The source file + * @param destFile + * The destination file + * @param options + * Options for the convert. Takes a Map with key value + * pairs which are passed on to qemu-img without validation.
+ * @param qemuObjects + * Pass qemu objects to the convert - see objects in the qemu man page + * @param srcImageOpts + * Qemu --image-opts to pass to the convert + * @param snapshotName + * If provided, the conversion uses it as a parameter + * @param forceSourceFormat + * If true, specifies the source format in the conversion cmd + * @return void + */ + public void convert(final QemuImgFile srcFile, final QemuImgFile destFile, + final Map<String, String> options, final List<QemuObject> qemuObjects, final QemuImageOptions srcImageOpts, final String snapshotName, final boolean forceSourceFormat) throws QemuImgException { Script script = new Script(_qemuImgPath, timeout); if (StringUtils.isNotBlank(snapshotName)) { String qemuPath = Script.runSimpleBashScript(getQemuImgPathScript); @@ -254,34 +385,48 @@ } script.add("convert"); - Long version = LibvirtConnection.getConnection().getVersion(); - if (version >= 2010000) { - script.add("-U"); - } - // autodetect source format unless specified explicitly - if (forceSourceFormat) { - script.add("-f"); - script.add(srcFile.getFormat().toString()); + if (skipZero && Files.exists(Paths.get(destFile.getFileName()))) { + script.add("-n"); + script.add(TARGET_ZERO_FLAG); + script.add("-W"); + // with target-is-zero we skip zeros in 1M chunks for compatibility + script.add("-S"); + script.add("1M"); } script.add("-O"); script.add(destFile.getFormat().toString()); - if (options != null && !options.isEmpty()) { - script.add("-o"); - final StringBuffer optionsBuffer = new StringBuffer(); - for (final Map.Entry<String, String> option : options.entrySet()) { - optionsBuffer.append(option.getKey()).append('=').append(option.getValue()).append(','); - } - String optionsStr = optionsBuffer.toString(); - optionsStr = optionsStr.replaceAll(",$", ""); - script.add(optionsStr); - } - + addScriptOptionsFromMap(options, script); addSnapshotToConvertCommand(srcFile.getFormat().toString(), snapshotName, forceSourceFormat, script, version); - script.add(srcFile.getFileName()); +
if (noCache) { + script.add("-t"); + script.add("none"); + } + + if (this.version >= QEMU_2_10) { + script.add("-U"); + + if (forceSourceFormat) { + srcImageOpts.setFormat(srcFile.getFormat()); + } + script.add(srcImageOpts.toCommandFlag()); + + if (qemuObjects != null) { + for (QemuObject o : qemuObjects) { + script.add(o.toCommandFlag()); + } + } + } else { + if (forceSourceFormat) { + script.add("-f"); + script.add(srcFile.getFormat().toString()); + } + script.add(srcFile.getFileName()); + } + script.add(destFile.getFileName()); final String result = script.execute(); @@ -433,11 +578,10 @@ public class QemuImg { * A QemuImgFile object containing the file to get the information from * @return A HashMap with String key-value information as returned by 'qemu-img info' */ - public Map info(final QemuImgFile file) throws QemuImgException, LibvirtException { + public Map info(final QemuImgFile file) throws QemuImgException { final Script s = new Script(_qemuImgPath); s.add("info"); - Long version = LibvirtConnection.getConnection().getVersion(); - if (version >= 2010000) { + if (this.version >= QEMU_2_10) { s.add("-U"); } s.add(file.getFileName()); @@ -465,12 +609,72 @@ public class QemuImg { info.put(key, value); } } + + // set some missing attributes in passed file, if found + if (info.containsKey(VIRTUAL_SIZE) && file.getSize() == 0L) { + file.setSize(Long.parseLong(info.get(VIRTUAL_SIZE))); + } + + if (info.containsKey(FILE_FORMAT) && file.getFormat() == null) { + file.setFormat(PhysicalDiskFormat.valueOf(info.get(FILE_FORMAT).toUpperCase())); + } + return info; } - /* List, apply, create or delete snapshots in image */ - public void snapshot() throws QemuImgException { + /* create snapshots in image */ + public void snapshot(final QemuImageOptions srcImageOpts, final String snapshotName, final List qemuObjects) throws QemuImgException { + final Script s = new Script(_qemuImgPath, timeout); + s.add("snapshot"); + s.add("-c"); + s.add(snapshotName); + for 
(QemuObject o : qemuObjects) { + s.add(o.toCommandFlag()); + } + + s.add(srcImageOpts.toCommandFlag()); + + final String result = s.execute(); + if (result != null) { + throw new QemuImgException(result); + } + } + + /* delete snapshots in image */ + public void deleteSnapshot(final QemuImageOptions srcImageOpts, final String snapshotName, final List qemuObjects) throws QemuImgException { + final Script s = new Script(_qemuImgPath, timeout); + s.add("snapshot"); + s.add("-d"); + s.add(snapshotName); + + for (QemuObject o : qemuObjects) { + s.add(o.toCommandFlag()); + } + + s.add(srcImageOpts.toCommandFlag()); + + final String result = s.execute(); + if (result != null) { + // support idempotent delete calls, if no snapshot exists we are good. + if (result.contains("snapshot not found") || result.contains("Can't find the snapshot")) { + return; + } + throw new QemuImgException(result); + } + } + + private void addScriptOptionsFromMap(Map options, Script s) { + if (options != null && !options.isEmpty()) { + s.add("-o"); + final StringBuffer optionsBuffer = new StringBuffer(); + for (final Map.Entry option : options.entrySet()) { + optionsBuffer.append(option.getKey()).append('=').append(option.getValue()).append(','); + } + String optionsStr = optionsBuffer.toString(); + optionsStr = optionsStr.replaceAll(",$", ""); + s.add(optionsStr); + } } /* Changes the backing file of an image */ @@ -541,6 +745,33 @@ public class QemuImg { s.execute(); } + /** + * Resize an image, new style flags/options + * + * @param imageOptions + * Qemu style image options for the image to resize + * @param qemuObjects + * Qemu style options (e.g. 
for passing secrets) + * @param size + * The absolute final size of the image + */ + public void resize(final QemuImageOptions imageOptions, final List qemuObjects, final long size) throws QemuImgException { + final Script s = new Script(_qemuImgPath); + s.add("resize"); + + for (QemuObject o : qemuObjects) { + s.add(o.toCommandFlag()); + } + + s.add(imageOptions.toCommandFlag()); + s.add(Long.toString(size)); + + final String result = s.execute(); + if (result != null) { + throw new QemuImgException(result); + } + } + /** * Resize an image * @@ -557,4 +788,37 @@ public class QemuImg { public void resize(final QemuImgFile file, final long size) throws QemuImgException { this.resize(file, size, false); } + + /** + * Does qemu-img support --target-is-zero + * @return boolean + */ + public boolean supportsSkipZeros() { + return this.skipZero; + } + + public void setSkipZero(boolean skipZero) { + this.skipZero = skipZero; + } + + public boolean supportsImageFormat(QemuImg.PhysicalDiskFormat format) { + final Script s = new Script(_qemuImgPath, timeout); + s.add("--help"); + + final OutputInterpreter.AllLinesParser parser = new OutputInterpreter.AllLinesParser(); + String result = s.execute(parser); + String output = parser.getLines(); + + // Older Qemu returns output in result due to --help reporting error status + if (result != null) { + output = result; + } + + return helpSupportsImageFormat(output, format); + } + + protected static boolean helpSupportsImageFormat(String text, QemuImg.PhysicalDiskFormat format) { + Pattern pattern = Pattern.compile("Supported\\sformats:[a-zA-Z0-9-_\\s]*?\\b" + format + "\\b", CASE_INSENSITIVE); + return pattern.matcher(text).find(); + } } diff --git a/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuObject.java b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuObject.java new file mode 100644 index 00000000000..efeee04cb90 --- /dev/null +++ 
b/plugins/hypervisors/kvm/src/main/java/org/apache/cloudstack/utils/qemu/QemuObject.java @@ -0,0 +1,128 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +package org.apache.cloudstack.utils.qemu; + +import java.util.EnumMap; +import java.util.Map; +import java.util.TreeMap; + +import org.apache.commons.lang3.StringUtils; + +import com.google.common.base.Joiner; + +public class QemuObject { + private final ObjectType type; + private final Map params; + + public enum ObjectParameter { + DATA("data"), + FILE("file"), + FORMAT("format"), + ID("id"), + IV("iv"), + KEYID("keyid"); + + private final String parameter; + + ObjectParameter(String param) { + this.parameter = param; + } + + @Override + public String toString() {return parameter; } + } + + /** + * Supported qemu encryption formats. + * NOTE: Only "luks" is currently supported with Libvirt, so while + * this utility may be capable of creating various formats, care should + * be taken to use types that work for the use case. 
+ */ + public enum EncryptFormat { + LUKS("luks"), + AES("aes"); + + private final String format; + + EncryptFormat(String format) { this.format = format; } + + @Override + public String toString() { return format;} + + public static EncryptFormat enumValue(String value) { + if (StringUtils.isBlank(value)) { + return LUKS; // default encryption format + } + return EncryptFormat.valueOf(value.toUpperCase()); + } + } + + public enum ObjectType { + SECRET("secret"); + + private final String objectTypeValue; + + ObjectType(String objectTypeValue) { + this.objectTypeValue = objectTypeValue; + } + + @Override + public String toString() { + return objectTypeValue; + } + } + + public QemuObject(ObjectType type, Map params) { + this.type = type; + this.params = params; + } + + /** + * Converts QemuObject into the command strings required by qemu-img flags + * @return array of strings representing command flag and value (--object) + */ + public String[] toCommandFlag() { + Map sorted = new TreeMap<>(params); + String paramString = Joiner.on(",").withKeyValueSeparator("=").join(sorted); + return new String[] {"--object", String.format("%s,%s", type, paramString) }; + } + + /** + * Creates a QemuObject with the correct parameters for passing encryption secret details to qemu-img + * @param format the image format to use + * @param encryptFormat the encryption format to use (luks) + * @param keyFilePath the path to the file containing encryption key + * @param secretName the name to use for the secret + * @param options the options map for qemu-img (-o flag) + * @return the QemuObject containing encryption parameters + */ + public static QemuObject prepareSecretForQemuImg(QemuImg.PhysicalDiskFormat format, EncryptFormat encryptFormat, String keyFilePath, String secretName, Map options) { + EnumMap params = new EnumMap<>(ObjectParameter.class); + params.put(ObjectParameter.ID, secretName); + params.put(ObjectParameter.FILE, keyFilePath); + + if (options != null) { + if (format 
== QemuImg.PhysicalDiskFormat.QCOW2) { + options.put("encrypt.key-secret", secretName); + options.put("encrypt.format", encryptFormat.toString()); + } else if (format == QemuImg.PhysicalDiskFormat.RAW || format == QemuImg.PhysicalDiskFormat.LUKS) { + options.put("key-secret", secretName); + } + } + return new QemuObject(ObjectType.SECRET, params); + } +} diff --git a/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResourceTest.java b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResourceTest.java index 6741c61eb81..f3fb1df74a0 100644 --- a/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResourceTest.java +++ b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResourceTest.java @@ -57,6 +57,7 @@ import javax.xml.xpath.XPathFactory; import com.cloud.utils.ssh.SshHelper; import org.apache.cloudstack.storage.command.AttachAnswer; import org.apache.cloudstack.storage.command.AttachCommand; +import org.apache.cloudstack.utils.bytescale.ByteScaleUtils; import org.apache.cloudstack.utils.linux.CPUStat; import org.apache.cloudstack.utils.linux.MemStat; import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat; @@ -78,6 +79,7 @@ import org.libvirt.MemoryStatistic; import org.libvirt.NodeInfo; import org.libvirt.SchedUlongParameter; import org.libvirt.StorageVol; +import org.libvirt.VcpuInfo; import org.libvirt.jna.virDomainMemoryStats; import org.mockito.BDDMockito; import org.mockito.Mock; @@ -209,8 +211,6 @@ import com.cloud.vm.DiskProfile; import com.cloud.vm.VirtualMachine; import com.cloud.vm.VirtualMachine.PowerState; import com.cloud.vm.VirtualMachine.Type; -import org.apache.cloudstack.utils.bytescale.ByteScaleUtils; -import org.libvirt.VcpuInfo; @RunWith(PowerMockRunner.class) @PrepareForTest(value = {MemStat.class, SshHelper.class}) @@ -2149,7 +2149,7 @@ public class LibvirtComputingResourceTest { 
when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(poolManager); when(poolManager.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(primary); - when(primary.createPhysicalDisk(diskCharacteristics.getPath(), diskCharacteristics.getProvisioningType(), diskCharacteristics.getSize())).thenReturn(vol); + when(primary.createPhysicalDisk(diskCharacteristics.getPath(), diskCharacteristics.getProvisioningType(), diskCharacteristics.getSize(), null)).thenReturn(vol); final LibvirtRequestWrapper wrapper = LibvirtRequestWrapper.getInstance(); assertNotNull(wrapper); @@ -2208,7 +2208,7 @@ public class LibvirtComputingResourceTest { when(poolManager.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(primary); when(primary.getPhysicalDisk(command.getTemplateUrl())).thenReturn(baseVol); - when(poolManager.createDiskFromTemplate(baseVol, diskCharacteristics.getPath(), diskCharacteristics.getProvisioningType(), primary, baseVol.getSize(), 0)).thenReturn(vol); + when(poolManager.createDiskFromTemplate(baseVol, diskCharacteristics.getPath(), diskCharacteristics.getProvisioningType(), primary, baseVol.getSize(), 0,null)).thenReturn(vol); final LibvirtRequestWrapper wrapper = LibvirtRequestWrapper.getInstance(); assertNotNull(wrapper); @@ -4847,7 +4847,12 @@ public class LibvirtComputingResourceTest { final LibvirtUtilitiesHelper libvirtUtilitiesHelper = Mockito.mock(LibvirtUtilitiesHelper.class); final Connect conn = Mockito.mock(Connect.class); final StorageVol v = Mockito.mock(StorageVol.class); + final Domain vm = Mockito.mock(Domain.class); + final DomainInfo info = Mockito.mock(DomainInfo.class); + final DomainState state = DomainInfo.DomainState.VIR_DOMAIN_RUNNING; + info.state = state; + when(pool.getType()).thenReturn(StoragePoolType.RBD); when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr); when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool); 
when(storagePool.getPhysicalDisk(path)).thenReturn(vol); @@ -4860,9 +4865,11 @@ public class LibvirtComputingResourceTest { try { when(libvirtUtilitiesHelper.getConnection()).thenReturn(conn); when(conn.storageVolLookupByPath(path)).thenReturn(v); + when(libvirtUtilitiesHelper.getConnectionByVmName(vmInstance)).thenReturn(conn); + when(conn.domainLookupByName(vmInstance)).thenReturn(vm); + when(vm.getInfo()).thenReturn(info); when(conn.getLibVirVersion()).thenReturn(10010l); - } catch (final LibvirtException e) { fail(e.getMessage()); } @@ -4875,9 +4882,10 @@ public class LibvirtComputingResourceTest { verify(libvirtComputingResource, times(1)).getStoragePoolMgr(); - verify(libvirtComputingResource, times(1)).getLibvirtUtilitiesHelper(); + verify(libvirtComputingResource, times(2)).getLibvirtUtilitiesHelper(); try { verify(libvirtUtilitiesHelper, times(1)).getConnection(); + verify(libvirtUtilitiesHelper, times(1)).getConnectionByVmName(vmInstance); } catch (final LibvirtException e) { fail(e.getMessage()); } @@ -4898,7 +4906,13 @@ public class LibvirtComputingResourceTest { final KVMStoragePool storagePool = Mockito.mock(KVMStoragePool.class); final KVMPhysicalDisk vol = Mockito.mock(KVMPhysicalDisk.class); final LibvirtUtilitiesHelper libvirtUtilitiesHelper = Mockito.mock(LibvirtUtilitiesHelper.class); + final Connect conn = Mockito.mock(Connect.class); + final Domain vm = Mockito.mock(Domain.class); + final DomainInfo info = Mockito.mock(DomainInfo.class); + final DomainState state = DomainInfo.DomainState.VIR_DOMAIN_RUNNING; + info.state = state; + when(pool.getType()).thenReturn(StoragePoolType.Linstor); when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr); when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool); when(storagePool.getPhysicalDisk(path)).thenReturn(vol); @@ -4906,6 +4920,15 @@ public class LibvirtComputingResourceTest { when(storagePool.getType()).thenReturn(StoragePoolType.Linstor); 
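The resize-command mocks above suggest the new agent flow: the wrapper now fetches a libvirt connection by VM name and inspects the domain's state (VIR_DOMAIN_RUNNING in these tests) before resizing. A minimal sketch of that decision under that assumption — the names `ResizeMode` and `chooseResizeMode` are illustrative, not CloudStack's own:

```java
// Hypothetical sketch: a running guest is resized online through libvirt,
// anything else falls back to offline handling (e.g. qemu-img resize).
public class ResizeDecision {
    public enum DomainState { RUNNING, SHUTOFF, PAUSED }
    public enum ResizeMode { LIBVIRT_ONLINE, OFFLINE }

    // Mirrors the pattern the tests exercise: look up the domain by VM name,
    // read its state, and only take the libvirt path for a running guest.
    public static ResizeMode chooseResizeMode(DomainState state) {
        if (state == DomainState.RUNNING) {
            return ResizeMode.LIBVIRT_ONLINE;
        }
        return ResizeMode.OFFLINE;
    }

    public static void main(String[] args) {
        System.out.println(chooseResizeMode(DomainState.RUNNING));
    }
}
```

This is why the tests now verify an extra `getLibvirtUtilitiesHelper()` call and a `getConnectionByVmName(vmInstance)` lookup where none happened before.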
when(vol.getFormat()).thenReturn(PhysicalDiskFormat.RAW); + when(libvirtComputingResource.getLibvirtUtilitiesHelper()).thenReturn(libvirtUtilitiesHelper); + try { + when(libvirtUtilitiesHelper.getConnectionByVmName(vmInstance)).thenReturn(conn); + when(conn.domainLookupByName(vmInstance)).thenReturn(vm); + when(vm.getInfo()).thenReturn(info); + } catch (final LibvirtException e) { + fail(e.getMessage()); + } + final LibvirtRequestWrapper wrapper = LibvirtRequestWrapper.getInstance(); assertNotNull(wrapper); @@ -4915,9 +4938,10 @@ public class LibvirtComputingResourceTest { verify(libvirtComputingResource, times(1)).getStoragePoolMgr(); verify(libvirtComputingResource, times(0)).getResizeScriptType(storagePool, vol); - verify(libvirtComputingResource, times(0)).getLibvirtUtilitiesHelper(); + verify(libvirtComputingResource, times(1)).getLibvirtUtilitiesHelper(); try { verify(libvirtUtilitiesHelper, times(0)).getConnection(); + verify(libvirtUtilitiesHelper, times(1)).getConnectionByVmName(vmInstance); } catch (final LibvirtException e) { fail(e.getMessage()); } @@ -4956,6 +4980,7 @@ public class LibvirtComputingResourceTest { final KVMStoragePool storagePool = Mockito.mock(KVMStoragePool.class); final KVMPhysicalDisk vol = Mockito.mock(KVMPhysicalDisk.class); + when(pool.getType()).thenReturn(StoragePoolType.Filesystem); when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr); when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool); when(storagePool.getPhysicalDisk(path)).thenReturn(vol); @@ -4986,6 +5011,7 @@ public class LibvirtComputingResourceTest { final KVMPhysicalDisk vol = Mockito.mock(KVMPhysicalDisk.class); final LibvirtUtilitiesHelper libvirtUtilitiesHelper = Mockito.mock(LibvirtUtilitiesHelper.class); + when(pool.getType()).thenReturn(StoragePoolType.RBD); when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr); when(storagePoolMgr.getStoragePool(pool.getType(), 
pool.getUuid())).thenReturn(storagePool); when(storagePool.getPhysicalDisk(path)).thenReturn(vol); @@ -5032,6 +5058,7 @@ public class LibvirtComputingResourceTest { final KVMStoragePoolManager storagePoolMgr = Mockito.mock(KVMStoragePoolManager.class); final KVMStoragePool storagePool = Mockito.mock(KVMStoragePool.class); + when(pool.getType()).thenReturn(StoragePoolType.RBD); when(libvirtComputingResource.getStoragePoolMgr()).thenReturn(storagePoolMgr); when(storagePoolMgr.getStoragePool(pool.getType(), pool.getUuid())).thenReturn(storagePool); when(storagePool.getPhysicalDisk(path)).thenThrow(CloudRuntimeException.class); diff --git a/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParserTest.java b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParserTest.java index f2ba293436e..ccab4b01c33 100644 --- a/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParserTest.java +++ b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParserTest.java @@ -29,6 +29,7 @@ import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.RngDef; import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef; import junit.framework.TestCase; +import org.apache.cloudstack.utils.qemu.QemuObject; public class LibvirtDomainXMLParserTest extends TestCase { @@ -51,6 +52,10 @@ public class LibvirtDomainXMLParserTest extends TestCase { String diskLabel ="vda"; String diskPath = "/var/lib/libvirt/images/my-test-image.qcow2"; + String diskLabel2 ="vdb"; + String diskPath2 = "/var/lib/libvirt/images/my-test-image2.qcow2"; + String secretUuid = "5644d664-a238-3a9b-811c-961f609d29f4"; + String xml = "" + "s-2970-VM" + "4d2c1526-865d-4fc9-a1ac-dbd1801a22d0" + @@ -87,6 +92,16 @@ public class LibvirtDomainXMLParserTest extends TestCase { "" + "
" + "" + + "" + + "" + + "" + + "" + + "" + + "" + + "" + + "" + + "
" + + "" + "" + "" + "" + @@ -200,6 +215,11 @@ public class LibvirtDomainXMLParserTest extends TestCase { assertEquals(deviceType, disks.get(diskId).getDeviceType()); assertEquals(diskFormat, disks.get(diskId).getDiskFormatType()); + DiskDef.LibvirtDiskEncryptDetails encryptDetails = disks.get(1).getLibvirtDiskEncryptDetails(); + assertNotNull(encryptDetails); + assertEquals(QemuObject.EncryptFormat.LUKS, encryptDetails.getEncryptFormat()); + assertEquals(secretUuid, encryptDetails.getPassphraseUuid()); + List channels = parser.getChannels(); for (int i = 0; i < channels.size(); i++) { assertEquals(channelType, channels.get(i).getChannelType()); diff --git a/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDefTest.java b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDefTest.java index dba26286d62..4eb464e4a68 100644 --- a/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDefTest.java +++ b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/LibvirtVMDefTest.java @@ -23,6 +23,7 @@ import java.io.File; import java.util.Arrays; import java.util.List; import java.util.Scanner; +import java.util.UUID; import junit.framework.TestCase; @@ -30,6 +31,7 @@ import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.ChannelDef; import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.DiskDef; import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.SCSIDef; import org.apache.cloudstack.utils.linux.MemStat; +import org.apache.cloudstack.utils.qemu.QemuObject; import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; @@ -218,6 +220,24 @@ public class LibvirtVMDefTest extends TestCase { assertEquals(xmlDef, expectedXml); } + @Test + public void testDiskDefWithEncryption() { + String passphraseUuid = UUID.randomUUID().toString(); + DiskDef disk = new DiskDef(); + DiskDef.LibvirtDiskEncryptDetails encryptDetails = new 
DiskDef.LibvirtDiskEncryptDetails(passphraseUuid, QemuObject.EncryptFormat.LUKS); + disk.defBlockBasedDisk("disk1", 1, DiskDef.DiskBus.VIRTIO); + disk.setLibvirtDiskEncryptDetails(encryptDetails); + String expectedXML = "\n" + + "\n" + + "\n" + + "\n" + + "\n" + + "\n" + + "\n" + + "\n"; + assertEquals(disk.toString(), expectedXML); + } + @Test public void testDiskDefWithBurst() { String filePath = "/var/lib/libvirt/images/disk.qcow2"; diff --git a/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtMigrateCommandWrapperTest.java b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtMigrateCommandWrapperTest.java index ae2e4cc41c2..15dfc2e732a 100644 --- a/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtMigrateCommandWrapperTest.java +++ b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtMigrateCommandWrapperTest.java @@ -759,6 +759,41 @@ public class LibvirtMigrateCommandWrapperTest { assertXpath(doc, "/domain/devices/disk/driver/@type", "raw"); } + @Test + public void testReplaceStorageWithSecrets() throws Exception { + Map mapMigrateStorage = new HashMap(); + + final String xmlDesc = + "" + + " " + + " \n" + + " \n" + + " \n" + + " \n" + + " bf8621b3027c497d963b\n" + + " \n" + + "
\n" + + " \n" + + " \n" + + " \n" + + " \n" + + " " + + ""; + + final String volumeFile = "3530f749-82fd-458e-9485-a357e6e541db"; + String newDiskPath = "/mnt/2d0435e1-99e0-4f1d-94c0-bee1f6f8b99e/" + volumeFile; + MigrateDiskInfo diskInfo = new MigrateDiskInfo("123456", DiskType.BLOCK, DriverType.RAW, Source.FILE, newDiskPath); + mapMigrateStorage.put("/mnt/07eb495b-5590-3877-9fb7-23c6e9a40d40/bf8621b3-027c-497d-963b-06319650f048", diskInfo); + final String result = libvirtMigrateCmdWrapper.replaceStorage(xmlDesc, mapMigrateStorage, false); + final String expectedSecretUuid = LibvirtComputingResource.generateSecretUUIDFromString(volumeFile); + + InputStream in = IOUtils.toInputStream(result); + DocumentBuilderFactory docFactory = DocumentBuilderFactory.newInstance(); + DocumentBuilder docBuilder = docFactory.newDocumentBuilder(); + Document doc = docBuilder.parse(in); + assertXpath(doc, "/domain/devices/disk/encryption/secret/@uuid", expectedSecretUuid); + } + public void testReplaceStorageXmlDiskNotManagedStorage() throws ParserConfigurationException, TransformerException, SAXException, IOException { final LibvirtMigrateCommandWrapper lw = new LibvirtMigrateCommandWrapper(); String destDisk1FileName = "XXXXXXXXXXXXXX"; diff --git a/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptorTest.java b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptorTest.java new file mode 100644 index 00000000000..c06442c6ae3 --- /dev/null +++ b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptorTest.java @@ -0,0 +1,31 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. 
The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. + +package com.cloud.hypervisor.kvm.storage; + +import org.junit.Assert; +import org.junit.Test; + +public class ScaleIOStorageAdaptorTest { + @Test + public void getUsableBytesFromRawBytesTest() { + Assert.assertEquals("Overhead calculated for 8Gi size", 8554774528L, ScaleIOStorageAdaptor.getUsableBytesFromRawBytes(8L << 30)); + Assert.assertEquals("Overhead calculated for 4Ti size", 4294130925568L, ScaleIOStorageAdaptor.getUsableBytesFromRawBytes(4000L << 30)); + Assert.assertEquals("Overhead calculated for 500Gi size", 536737005568L, ScaleIOStorageAdaptor.getUsableBytesFromRawBytes(500L << 30)); + Assert.assertEquals("Unsupported small size", 0, ScaleIOStorageAdaptor.getUsableBytesFromRawBytes(1L)); + } +} diff --git a/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/cryptsetup/CryptSetupTest.java b/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/cryptsetup/CryptSetupTest.java new file mode 100644 index 00000000000..007d2c6dc64 --- /dev/null +++ b/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/cryptsetup/CryptSetupTest.java @@ -0,0 +1,71 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.cloudstack.utils.cryptsetup; + +import org.apache.cloudstack.secret.PassphraseVO; +import org.junit.Assert; +import org.junit.Assume; +import org.junit.Before; +import org.junit.Test; + +import java.io.IOException; +import java.io.RandomAccessFile; +import java.nio.file.Files; +import java.nio.file.Path; +import java.nio.file.attribute.PosixFilePermission; +import java.nio.file.attribute.PosixFilePermissions; +import java.util.Set; + +public class CryptSetupTest { + CryptSetup cryptSetup = new CryptSetup(); + + @Before + public void setup() { + Assume.assumeTrue(cryptSetup.isSupported()); + } + + @Test + public void cryptSetupTest() throws IOException, CryptSetupException { + Set permissions = PosixFilePermissions.fromString("rw-------"); + Path path = Files.createTempFile("cryptsetup", ".tmp",PosixFilePermissions.asFileAttribute(permissions)); + + // create a 10 MiB file to use as a crypt device + RandomAccessFile file = new RandomAccessFile(path.toFile(),"rw"); + file.setLength(10<<20); + file.close(); + + String filePath = path.toAbsolutePath().toString(); + PassphraseVO passphrase = new PassphraseVO(); + + cryptSetup.luksFormat(passphrase.getPassphrase(), CryptSetup.LuksType.LUKS, filePath); + + Assert.assertTrue(cryptSetup.isLuks(filePath)); + + Assert.assertTrue(Files.deleteIfExists(path)); + } + + @Test + public void cryptSetupNonLuksTest() throws 
IOException { + Set permissions = PosixFilePermissions.fromString("rw-------"); + Path path = Files.createTempFile("cryptsetup", ".tmp",PosixFilePermissions.asFileAttribute(permissions)); + + Assert.assertFalse(cryptSetup.isLuks(path.toAbsolutePath().toString())); + Assert.assertTrue(Files.deleteIfExists(path)); + } +} diff --git a/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/cryptsetup/KeyFileTest.java b/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/cryptsetup/KeyFileTest.java new file mode 100644 index 00000000000..2cb95123a8c --- /dev/null +++ b/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/cryptsetup/KeyFileTest.java @@ -0,0 +1,49 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.apache.cloudstack.utils.cryptsetup; + +import org.junit.Assert; +import org.junit.Test; + +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.Path; + +public class KeyFileTest { + + @Test + public void keyFileTest() throws IOException { + byte[] contents = "the quick brown fox".getBytes(); + KeyFile keyFile = new KeyFile(contents); + System.out.printf("New test KeyFile at %s%n", keyFile); + Path path = keyFile.getPath(); + + Assert.assertTrue(keyFile.isSet()); + + // check contents + byte[] fileContents = Files.readAllBytes(path); + Assert.assertArrayEquals(contents, fileContents); + + // delete file on close + keyFile.close(); + + Assert.assertFalse("key file was not cleaned up", Files.exists(path)); + Assert.assertFalse("key file is still set", keyFile.isSet()); + } +} diff --git a/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuImageOptionsTest.java b/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuImageOptionsTest.java new file mode 100644 index 00000000000..2b56b69d1c5 --- /dev/null +++ b/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuImageOptionsTest.java @@ -0,0 +1,61 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.cloudstack.utils.qemu; + +import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import java.util.Arrays; +import java.util.Collection; + +@RunWith(Parameterized.class) +public class QemuImageOptionsTest { + @Parameterized.Parameters + public static Collection data() { + String imagePath = "/path/to/file"; + String secretName = "secretname"; + return Arrays.asList(new Object[][] { + { null, imagePath, null, new String[]{"--image-opts","file.filename=/path/to/file"} }, + { QemuImg.PhysicalDiskFormat.QCOW2, imagePath, null, new String[]{"--image-opts",String.format("driver=qcow2,file.filename=%s", imagePath)} }, + { QemuImg.PhysicalDiskFormat.RAW, imagePath, secretName, new String[]{"--image-opts",String.format("driver=raw,file.filename=%s", imagePath)} }, + { QemuImg.PhysicalDiskFormat.QCOW2, imagePath, secretName, new String[]{"--image-opts", String.format("driver=qcow2,encrypt.key-secret=%s,file.filename=%s", secretName, imagePath)} }, + { QemuImg.PhysicalDiskFormat.LUKS, imagePath, secretName, new String[]{"--image-opts", String.format("driver=luks,file.filename=%s,key-secret=%s", imagePath, secretName)} } + }); + } + + public QemuImageOptionsTest(QemuImg.PhysicalDiskFormat format, String filePath, String secretName, String[] expected) { + this.format = format; + this.filePath = filePath; + this.secretName = secretName; + this.expected = expected; + } + + private final QemuImg.PhysicalDiskFormat format; + private final String filePath; + private final String secretName; + private final String[] expected; + + @Test + public void qemuImageOptionsFileNameTest() { + QemuImageOptions options = new QemuImageOptions(format, filePath, secretName); + Assert.assertEquals(expected, options.toCommandFlag()); + } +} diff --git 
a/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuImgTest.java b/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuImgTest.java index 335a5dd9c4a..8bb762cca85 100644 --- a/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuImgTest.java +++ b/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuImgTest.java @@ -18,21 +18,27 @@ package org.apache.cloudstack.utils.qemu; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; import java.io.File; import com.cloud.utils.script.Script; + +import java.nio.file.Path; +import java.nio.file.Paths; +import java.util.ArrayList; import java.util.HashMap; +import java.util.List; import java.util.Map; import java.util.UUID; +import org.junit.Assert; import org.junit.Ignore; import org.junit.Test; import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat; import org.libvirt.LibvirtException; - @Ignore public class QemuImgTest { @@ -94,7 +100,34 @@ public class QemuImgTest { } @Test - public void testCreateSparseVolume() throws QemuImgException { + public void testCreateWithSecretObject() throws QemuImgException, LibvirtException { + Path testFile = Paths.get("/tmp/", UUID.randomUUID().toString()).normalize().toAbsolutePath(); + long size = 1<<30; // 1 GiB + + Map<QemuObject.ObjectParameter, String> objectParams = new HashMap<>(); + objectParams.put(QemuObject.ObjectParameter.ID, "sec0"); + objectParams.put(QemuObject.ObjectParameter.DATA, UUID.randomUUID().toString()); + + Map<String, String> options = new HashMap<>(); + + options.put(QemuImg.ENCRYPT_FORMAT, "luks"); + options.put(QemuImg.ENCRYPT_KEY_SECRET, "sec0"); + + List<QemuObject> qObjects = new ArrayList<>(); + qObjects.add(new QemuObject(QemuObject.ObjectType.SECRET, objectParams)); + + QemuImgFile file = new QemuImgFile(testFile.toString(), size, PhysicalDiskFormat.QCOW2); + QemuImg qemu = new QemuImg(0); +
qemu.create(file, null, options, qObjects); + + Map info = qemu.info(file); + assertEquals("yes", info.get("encrypted")); + + assertTrue(testFile.toFile().delete()); + } + + @Test + public void testCreateSparseVolume() throws QemuImgException, LibvirtException { String filename = "/tmp/" + UUID.randomUUID() + ".qcow2"; /* 10TB virtual_size */ @@ -204,7 +237,7 @@ public class QemuImgTest { } @Test(expected = QemuImgException.class) - public void testCreateAndResizeFail() throws QemuImgException { + public void testCreateAndResizeFail() throws QemuImgException, LibvirtException { String filename = "/tmp/" + UUID.randomUUID() + ".qcow2"; long startSize = 20480; @@ -224,7 +257,7 @@ public class QemuImgTest { } @Test(expected = QemuImgException.class) - public void testCreateAndResizeZero() throws QemuImgException { + public void testCreateAndResizeZero() throws QemuImgException, LibvirtException { String filename = "/tmp/" + UUID.randomUUID() + ".qcow2"; long startSize = 20480; @@ -317,4 +350,22 @@ public class QemuImgTest { df.delete(); } + + @Test + public void testHelpSupportsImageFormat() throws QemuImgException, LibvirtException { + String partialHelp = "Parameters to dd subcommand:\n" + + " 'bs=BYTES' read and write up to BYTES bytes at a time (default: 512)\n" + + " 'count=N' copy only N input blocks\n" + + " 'if=FILE' read from FILE\n" + + " 'of=FILE' write to FILE\n" + + " 'skip=N' skip N bs-sized blocks at the start of input\n" + + "\n" + + "Supported formats: cloop copy-on-read file ftp ftps host_cdrom host_device https iser luks nbd nvme parallels qcow qcow2 qed quorum raw rbd ssh throttle vdi vhdx vmdk vpc vvfat\n" + + "\n" + + "See for how to report bugs.\n" + + "More information on the QEMU project at ."; + Assert.assertTrue("should support luks", QemuImg.helpSupportsImageFormat(partialHelp, PhysicalDiskFormat.LUKS)); + Assert.assertTrue("should support qcow2", QemuImg.helpSupportsImageFormat(partialHelp, PhysicalDiskFormat.QCOW2)); + 
Assert.assertFalse("should not support sheepdog", QemuImg.helpSupportsImageFormat(partialHelp, PhysicalDiskFormat.SHEEPDOG)); + } } diff --git a/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuObjectTest.java b/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuObjectTest.java new file mode 100644 index 00000000000..316da622b84 --- /dev/null +++ b/plugins/hypervisors/kvm/src/test/java/org/apache/cloudstack/utils/qemu/QemuObjectTest.java @@ -0,0 +1,41 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License.
+package org.apache.cloudstack.utils.qemu; + +import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.mockito.junit.MockitoJUnitRunner; + +import java.util.HashMap; +import java.util.Map; + +@RunWith(MockitoJUnitRunner.class) +public class QemuObjectTest { + @Test + public void toStringTest() { + Map<QemuObject.ObjectParameter, String> params = new HashMap<>(); + params.put(QemuObject.ObjectParameter.ID, "sec0"); + params.put(QemuObject.ObjectParameter.FILE, "/dev/shm/file"); + QemuObject qObject = new QemuObject(QemuObject.ObjectType.SECRET, params); + + String[] flag = qObject.toCommandFlag(); + Assert.assertEquals(2, flag.length); + Assert.assertEquals("--object", flag[0]); + Assert.assertEquals("secret,file=/dev/shm/file,id=sec0", flag[1]); + } +} diff --git a/plugins/storage/volume/default/src/main/java/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java b/plugins/storage/volume/default/src/main/java/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java index 0f7b7b4fc38..4453906d2aa 100644 --- a/plugins/storage/volume/default/src/main/java/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java +++ b/plugins/storage/volume/default/src/main/java/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java @@ -96,6 +96,8 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri } private static final Logger s_logger = Logger.getLogger(CloudStackPrimaryDataStoreDriverImpl.class); + private static final String NO_REMOTE_ENDPOINT_WITH_ENCRYPTION = "No remote endpoint to send command, unable to find a valid endpoint.
Requires encryption support: %s"; + @Inject DiskOfferingDao diskOfferingDao; @Inject @@ -141,10 +143,11 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri } CreateObjectCommand cmd = new CreateObjectCommand(volume.getTO()); - EndPoint ep = epSelector.select(volume); + boolean encryptionRequired = anyVolumeRequiresEncryption(volume); + EndPoint ep = epSelector.select(volume, encryptionRequired); Answer answer = null; if (ep == null) { - String errMsg = "No remote endpoint to send CreateObjectCommand, check if host or ssvm is down?"; + String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired); s_logger.error(errMsg); answer = new Answer(cmd, false, errMsg); } else { @@ -203,9 +206,6 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri } else { result.setAnswer(answer); } - } catch (StorageUnavailableException e) { - s_logger.debug("failed to create volume", e); - errMsg = e.toString(); } catch (Exception e) { s_logger.debug("failed to create volume", e); errMsg = e.toString(); @@ -263,6 +263,8 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri @Override public void copyAsync(DataObject srcdata, DataObject destData, AsyncCompletionCallback callback) { + s_logger.debug(String.format("Copying volume %s(%s) to %s(%s)", srcdata.getId(), srcdata.getType(), destData.getId(), destData.getType())); + boolean encryptionRequired = anyVolumeRequiresEncryption(srcdata, destData); DataStore store = destData.getDataStore(); if (store.getRole() == DataStoreRole.Primary) { if ((srcdata.getType() == DataObjectType.TEMPLATE && destData.getType() == DataObjectType.TEMPLATE)) { @@ -283,13 +285,14 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri DataObject srcData = templateDataFactory.getTemplate(srcdata.getId(), imageStore); CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), primaryStorageDownloadWait, true); - 
EndPoint ep = epSelector.select(srcData, destData); + EndPoint ep = epSelector.select(srcData, destData, encryptionRequired); Answer answer = null; if (ep == null) { - String errMsg = "No remote endpoint to send CopyCommand, check if host or ssvm is down?"; + String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired); s_logger.error(errMsg); answer = new Answer(cmd, false, errMsg); } else { + s_logger.debug(String.format("Sending copy command to endpoint %s, where encryption support is %s", ep.getHostAddr(), encryptionRequired ? "required" : "not required")); answer = ep.sendMessage(cmd); } CopyCommandResult result = new CopyCommandResult("", answer); @@ -297,10 +300,10 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri } else if (srcdata.getType() == DataObjectType.SNAPSHOT && destData.getType() == DataObjectType.VOLUME) { SnapshotObjectTO srcTO = (SnapshotObjectTO) srcdata.getTO(); CopyCommand cmd = new CopyCommand(srcTO, destData.getTO(), StorageManager.PRIMARY_STORAGE_DOWNLOAD_WAIT.value(), true); - EndPoint ep = epSelector.select(srcdata, destData); + EndPoint ep = epSelector.select(srcdata, destData, encryptionRequired); CopyCmdAnswer answer = null; if (ep == null) { - String errMsg = "No remote endpoint to send command, check if host or ssvm is down?"; + String errMsg = String.format(NO_REMOTE_ENDPOINT_WITH_ENCRYPTION, encryptionRequired); s_logger.error(errMsg); answer = new CopyCmdAnswer(errMsg); } else { @@ -345,6 +348,7 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri @Override public void takeSnapshot(SnapshotInfo snapshot, AsyncCompletionCallback callback) { CreateCmdResult result = null; + s_logger.debug("Taking snapshot of "+ snapshot); try { SnapshotObjectTO snapshotTO = (SnapshotObjectTO) snapshot.getTO(); Object payload = snapshot.getPayload(); @@ -353,10 +357,13 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri 
snapshotTO.setQuiescevm(snapshotPayload.getQuiescevm()); } + boolean encryptionRequired = anyVolumeRequiresEncryption(snapshot); CreateObjectCommand cmd = new CreateObjectCommand(snapshotTO); - EndPoint ep = epSelector.select(snapshot, StorageAction.TAKESNAPSHOT); + EndPoint ep = epSelector.select(snapshot, StorageAction.TAKESNAPSHOT, encryptionRequired); Answer answer = null; + s_logger.debug("Taking snapshot of "+ snapshot + " and encryption required is " + encryptionRequired); + if (ep == null) { String errMsg = "No remote endpoint to send createObjectCommand, check if host or ssvm is down?"; s_logger.error(errMsg); @@ -419,16 +426,22 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri VolumeObject vol = (VolumeObject) data; StoragePool pool = (StoragePool) data.getDataStore(); ResizeVolumePayload resizeParameter = (ResizeVolumePayload) vol.getpayload(); + boolean encryptionRequired = anyVolumeRequiresEncryption(vol); + long [] endpointsToRunResize = resizeParameter.hosts; - ResizeVolumeCommand resizeCmd = - new ResizeVolumeCommand(vol.getPath(), new StorageFilerTO(pool), vol.getSize(), resizeParameter.newSize, resizeParameter.shrinkOk, - resizeParameter.instanceName, vol.getChainInfo()); + // if hosts are provided, they are where the VM last ran. We can use that. 
+ if (endpointsToRunResize == null || endpointsToRunResize.length == 0) { + EndPoint ep = epSelector.select(data, encryptionRequired); + endpointsToRunResize = new long[] {ep.getId()}; + } + ResizeVolumeCommand resizeCmd = new ResizeVolumeCommand(vol.getPath(), new StorageFilerTO(pool), vol.getSize(), + resizeParameter.newSize, resizeParameter.shrinkOk, resizeParameter.instanceName, vol.getChainInfo(), vol.getPassphrase(), vol.getEncryptFormat()); if (pool.getParent() != 0) { resizeCmd.setContextParam(DiskTO.PROTOCOL_TYPE, Storage.StoragePoolType.DatastoreCluster.toString()); } CreateCmdResult result = new CreateCmdResult(null, null); try { - ResizeVolumeAnswer answer = (ResizeVolumeAnswer) storageMgr.sendToPool(pool, resizeParameter.hosts, resizeCmd); + ResizeVolumeAnswer answer = (ResizeVolumeAnswer) storageMgr.sendToPool(pool, endpointsToRunResize, resizeCmd); if (answer != null && answer.getResult()) { long finalSize = answer.getNewSize(); s_logger.debug("Resize: volume started at size: " + toHumanReadableSize(vol.getSize()) + " and ended at size: " + toHumanReadableSize(finalSize)); @@ -447,6 +460,8 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri } catch (Exception e) { s_logger.debug("sending resize command failed", e); result.setResult(e.toString()); + } finally { + resizeCmd.clearPassphrase(); } callback.complete(result); @@ -522,4 +537,19 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri @Override public void provideVmTags(long vmId, long volumeId, String tagValue) { } + + /** + * Does any object require encryption support? + */ + private boolean anyVolumeRequiresEncryption(DataObject ... 
objects) { + for (DataObject o : objects) { + // this trips the code-smell check for multiple return statements, but it is more readable than combining all tests into one statement + if (o instanceof VolumeInfo && ((VolumeInfo) o).getPassphraseId() != null) { + return true; + } else if (o instanceof SnapshotInfo && ((SnapshotInfo) o).getBaseVolume().getPassphraseId() != null) { + return true; + } + } + return false; + } } diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/driver/ScaleIOPrimaryDataStoreDriver.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/driver/ScaleIOPrimaryDataStoreDriver.java index 5323352c4aa..e647485d9bc 100644 --- a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/driver/ScaleIOPrimaryDataStoreDriver.java +++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/driver/ScaleIOPrimaryDataStoreDriver.java @@ -22,6 +22,12 @@ import java.util.Map; import javax.inject.Inject; +import com.cloud.agent.api.storage.ResizeVolumeCommand; +import com.cloud.agent.api.to.StorageFilerTO; +import com.cloud.host.HostVO; +import com.cloud.vm.VMInstanceVO; +import com.cloud.vm.VirtualMachine; +import com.cloud.vm.dao.VMInstanceDao; import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo; import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult; import org.apache.cloudstack.engine.subsystem.api.storage.CreateCmdResult; @@ -41,6 +47,7 @@ import org.apache.cloudstack.storage.RemoteHostEndPoint; import org.apache.cloudstack.storage.command.CommandResult; import org.apache.cloudstack.storage.command.CopyCommand; import org.apache.cloudstack.storage.command.CreateObjectAnswer; +import org.apache.cloudstack.storage.command.CreateObjectCommand; import org.apache.cloudstack.storage.datastore.api.StoragePoolStatistics; import org.apache.cloudstack.storage.datastore.api.VolumeStatistics; import
org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClient; @@ -53,6 +60,8 @@ import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao; import org.apache.cloudstack.storage.datastore.db.StoragePoolVO; import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil; import org.apache.cloudstack.storage.to.SnapshotObjectTO; +import org.apache.cloudstack.storage.to.VolumeObjectTO; +import org.apache.cloudstack.storage.volume.VolumeObject; import org.apache.commons.collections.CollectionUtils; import org.apache.commons.lang3.StringUtils; import org.apache.log4j.Logger; @@ -64,6 +73,7 @@ import com.cloud.agent.api.to.DataTO; import com.cloud.alert.AlertManager; import com.cloud.configuration.Config; import com.cloud.host.Host; +import com.cloud.host.dao.HostDao; import com.cloud.server.ManagementServerImpl; import com.cloud.storage.DataStoreRole; import com.cloud.storage.ResizeVolumePayload; @@ -112,6 +122,10 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { private AlertManager alertMgr; @Inject private ConfigurationDao configDao; + @Inject + private HostDao hostDao; + @Inject + private VMInstanceDao vmInstanceDao; public ScaleIOPrimaryDataStoreDriver() { @@ -187,6 +201,11 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { } } + private boolean grantAccess(DataObject dataObject, EndPoint ep, DataStore dataStore) { + Host host = hostDao.findById(ep.getId()); + return grantAccess(dataObject, host, dataStore); + } + @Override public void revokeAccess(DataObject dataObject, Host host, DataStore dataStore) { try { @@ -229,6 +248,11 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { } } + private void revokeAccess(DataObject dataObject, EndPoint ep, DataStore dataStore) { + Host host = hostDao.findById(ep.getId()); + revokeAccess(dataObject, host, dataStore); + } + private String getConnectedSdc(long poolId, long hostId) { try { StoragePoolHostVO poolHostVO = 
storagePoolHostDao.findByPoolHost(poolId, hostId); @@ -414,7 +438,7 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { } } - private String createVolume(VolumeInfo volumeInfo, long storagePoolId) { + private CreateObjectAnswer createVolume(VolumeInfo volumeInfo, long storagePoolId) { LOGGER.debug("Creating PowerFlex volume"); StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId); @@ -447,7 +471,8 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { volume.setPoolType(Storage.StoragePoolType.PowerFlex); volume.setFormat(Storage.ImageFormat.RAW); volume.setPoolId(storagePoolId); - volumeDao.update(volume.getId(), volume); + VolumeObject createdObject = VolumeObject.getVolumeObject(volumeInfo.getDataStore(), volume); + createdObject.update(); long capacityBytes = storagePool.getCapacityBytes(); long usedBytes = storagePool.getUsedBytes(); @@ -455,7 +480,35 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { storagePool.setUsedBytes(usedBytes > capacityBytes ? 
capacityBytes : usedBytes); storagePoolDao.update(storagePoolId, storagePool); - return volumePath; + CreateObjectAnswer answer = new CreateObjectAnswer(createdObject.getTO()); + + // if volume needs to be set up with encryption, do it now if it's not a root disk (which gets done during template copy) + if (anyVolumeRequiresEncryption(volumeInfo) && !volumeInfo.getVolumeType().equals(Volume.Type.ROOT)) { + LOGGER.debug(String.format("Setting up encryption for volume %s", volumeInfo.getId())); + VolumeObjectTO prepVolume = (VolumeObjectTO) createdObject.getTO(); + prepVolume.setPath(volumePath); + prepVolume.setUuid(volumePath); + CreateObjectCommand cmd = new CreateObjectCommand(prepVolume); + EndPoint ep = selector.select(volumeInfo, true); + if (ep == null) { + throw new CloudRuntimeException("No remote endpoint to send PowerFlex volume encryption preparation"); + } else { + try { + grantAccess(createdObject, ep, volumeInfo.getDataStore()); + answer = (CreateObjectAnswer) ep.sendMessage(cmd); + if (!answer.getResult()) { + throw new CloudRuntimeException("Failed to set up encryption on PowerFlex volume: " + answer.getDetails()); + } + } finally { + revokeAccess(createdObject, ep, volumeInfo.getDataStore()); + prepVolume.clearPassphrase(); + } + } + } else { + LOGGER.debug(String.format("No encryption configured for data volume %s", volumeInfo)); + } + + return answer; } catch (Exception e) { String errMsg = "Unable to create PowerFlex Volume due to " + e.getMessage(); LOGGER.warn(errMsg); @@ -511,16 +564,21 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { public void createAsync(DataStore dataStore, DataObject dataObject, AsyncCompletionCallback callback) { String scaleIOVolumePath = null; String errMsg = null; + Answer answer = new Answer(null, false, "not started"); try { if (dataObject.getType() == DataObjectType.VOLUME) { LOGGER.debug("createAsync - creating volume"); - scaleIOVolumePath = createVolume((VolumeInfo) dataObject, 
dataStore.getId()); + CreateObjectAnswer createAnswer = createVolume((VolumeInfo) dataObject, dataStore.getId()); + scaleIOVolumePath = createAnswer.getData().getPath(); + answer = createAnswer; } else if (dataObject.getType() == DataObjectType.TEMPLATE) { LOGGER.debug("createAsync - creating template"); scaleIOVolumePath = createTemplateVolume((TemplateInfo)dataObject, dataStore.getId()); + answer = new Answer(null, true, "created template"); } else { errMsg = "Invalid DataObjectType (" + dataObject.getType() + ") passed to createAsync"; LOGGER.error(errMsg); + answer = new Answer(null, false, errMsg); } } catch (Exception ex) { errMsg = ex.getMessage(); @@ -528,10 +586,11 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { if (callback == null) { throw ex; } + answer = new Answer(null, false, errMsg); } if (callback != null) { - CreateCmdResult result = new CreateCmdResult(scaleIOVolumePath, new Answer(null, errMsg == null, errMsg)); + CreateCmdResult result = new CreateCmdResult(scaleIOVolumePath, answer); result.setResult(errMsg); callback.complete(result); } @@ -606,6 +665,7 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback callback) { Answer answer = null; String errMsg = null; + CopyCommandResult result; try { DataStore srcStore = srcData.getDataStore(); @@ -613,51 +673,72 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { if (srcStore.getRole() == DataStoreRole.Primary && (destStore.getRole() == DataStoreRole.Primary && destData.getType() == DataObjectType.VOLUME)) { if (srcData.getType() == DataObjectType.TEMPLATE) { answer = copyTemplateToVolume(srcData, destData, destHost); - if (answer == null) { - errMsg = "No answer for copying template to PowerFlex volume"; - } else if (!answer.getResult()) { - errMsg = answer.getDetails(); - } } else if (srcData.getType() == 
DataObjectType.VOLUME) { if (isSameScaleIOStorageInstance(srcStore, destStore)) { answer = migrateVolume(srcData, destData); } else { answer = copyVolume(srcData, destData, destHost); } - - if (answer == null) { errMsg = "No answer for migrate PowerFlex volume"; } else if (!answer.getResult()) { errMsg = answer.getDetails(); } } else { errMsg = "Unsupported copy operation from src object: (" + srcData.getType() + ", " + srcData.getDataStore() + "), dest object: (" + destData.getType() + ", " + destData.getDataStore() + ")"; LOGGER.warn(errMsg); + answer = new Answer(null, false, errMsg); } } else { errMsg = "Unsupported copy operation"; + LOGGER.warn(errMsg); + answer = new Answer(null, false, errMsg); } } catch (Exception e) { LOGGER.debug("Failed to copy due to " + e.getMessage(), e); errMsg = e.toString(); + answer = new Answer(null, false, errMsg); } - CopyCommandResult result = new CopyCommandResult(null, answer); - result.setResult(errMsg); + result = new CopyCommandResult(null, answer); callback.complete(result); } + /** + * Responsible for copying a template on ScaleIO primary storage to a root disk + * @param srcData DataObject representing the template + * @param destData DataObject representing the target root disk + * @param destHost host to use for copy + * @return answer + */ private Answer copyTemplateToVolume(DataObject srcData, DataObject destData, Host destHost) { + /* If encryption is requested, the destination disk must be grown to accommodate the new headers, since the template object itself is not encrypted. + * File-based data stores handle this automatically, but block device types have to handle it explicitly. Unfortunately for ScaleIO this means we add a whole 8GB to + * the original size, but only if we are close to an 8GB boundary.
+ */ + LOGGER.debug(String.format("Copying template %s to volume %s", srcData.getId(), destData.getId())); + VolumeInfo destInfo = (VolumeInfo) destData; + boolean encryptionRequired = anyVolumeRequiresEncryption(destData); + if (encryptionRequired) { + if (needsExpansionForEncryptionHeader(srcData.getSize(), destData.getSize())) { + long newSize = destData.getSize() + (1<<30); + LOGGER.debug(String.format("Destination volume %s(%s) is configured for encryption. Resizing to fit headers, new size %s will be rounded up to nearest 8Gi", destInfo.getId(), destData.getSize(), newSize)); + ResizeVolumePayload p = new ResizeVolumePayload(newSize, destInfo.getMinIops(), destInfo.getMaxIops(), + destInfo.getHypervisorSnapshotReserve(), false, destInfo.getAttachedVmName(), null, true); + destInfo.addPayload(p); + resizeVolume(destInfo); + } else { + LOGGER.debug(String.format("Template %s has size %s, ok for volume %s with size %s", srcData.getId(), srcData.getSize(), destData.getId(), destData.getSize())); + } + } else { + LOGGER.debug(String.format("Destination volume is not configured for encryption, skipping encryption prep. Volume: %s", destData.getId())); + } + // Copy PowerFlex/ScaleIO template to volume LOGGER.debug(String.format("Initiating copy from PowerFlex template volume on host %s", destHost != null ? destHost.getId() : "")); int primaryStorageDownloadWait = StorageManager.PRIMARY_STORAGE_DOWNLOAD_WAIT.value(); CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), primaryStorageDownloadWait, VirtualMachineManager.ExecuteInSequence.value()); Answer answer = null; - EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcData.getDataStore()); + EndPoint ep = destHost != null ? 
RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcData, encryptionRequired); if (ep == null) { - String errorMsg = "No remote endpoint to send command, check if host or ssvm is down?"; + String errorMsg = String.format("No remote endpoint to send command, unable to find a valid endpoint. Requires encryption support: %s", encryptionRequired); LOGGER.error(errorMsg); answer = new Answer(cmd, false, errorMsg); } else { @@ -676,9 +757,10 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), copyVolumeWait, VirtualMachineManager.ExecuteInSequence.value()); Answer answer = null; - EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcData.getDataStore()); + boolean encryptionRequired = anyVolumeRequiresEncryption(srcData, destData); + EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcData, encryptionRequired); if (ep == null) { - String errorMsg = "No remote endpoint to send command, check if host or ssvm is down?"; + String errorMsg = String.format("No remote endpoint to send command, unable to find a valid endpoint. Requires encryption support: %s", encryptionRequired); LOGGER.error(errorMsg); answer = new Answer(cmd, false, errorMsg); } else { @@ -824,6 +906,7 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { try { String scaleIOVolumeId = ScaleIOUtil.getVolumePath(volumeInfo.getPath()); Long storagePoolId = volumeInfo.getPoolId(); + final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId); ResizeVolumePayload payload = (ResizeVolumePayload)volumeInfo.getpayload(); long newSizeInBytes = payload.newSize != null ? 
payload.newSize : volumeInfo.getSize(); @@ -832,13 +915,69 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { throw new CloudRuntimeException("Only increase size is allowed for volume: " + volumeInfo.getName()); } - org.apache.cloudstack.storage.datastore.api.Volume scaleIOVolume = null; + org.apache.cloudstack.storage.datastore.api.Volume scaleIOVolume = client.getVolume(scaleIOVolumeId); long newSizeInGB = newSizeInBytes / (1024 * 1024 * 1024); long newSizeIn8gbBoundary = (long) (Math.ceil(newSizeInGB / 8.0) * 8.0); - final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId); - scaleIOVolume = client.resizeVolume(scaleIOVolumeId, (int) newSizeIn8gbBoundary); - if (scaleIOVolume == null) { - throw new CloudRuntimeException("Failed to resize volume: " + volumeInfo.getName()); + + if (scaleIOVolume.getSizeInKb() == newSizeIn8gbBoundary << 20) { + LOGGER.debug("No resize necessary at API"); + } else { + scaleIOVolume = client.resizeVolume(scaleIOVolumeId, (int) newSizeIn8gbBoundary); + if (scaleIOVolume == null) { + throw new CloudRuntimeException("Failed to resize volume: " + volumeInfo.getName()); + } + } + + StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId); + boolean attachedRunning = false; + long hostId = 0; + + if (payload.instanceName != null) { + VMInstanceVO instance = vmInstanceDao.findVMByInstanceName(payload.instanceName); + if (instance.getState().equals(VirtualMachine.State.Running)) { + hostId = instance.getHostId(); + attachedRunning = true; + } + } + + if (volumeInfo.getFormat().equals(Storage.ImageFormat.QCOW2) || attachedRunning) { + LOGGER.debug("Volume needs to be resized at the hypervisor host"); + + if (hostId == 0) { + hostId = selector.select(volumeInfo, true).getId(); + } + + HostVO host = hostDao.findById(hostId); + if (host == null) { + throw new CloudRuntimeException("Found no hosts to run resize command on"); + } + + EndPoint ep = 
RemoteHostEndPoint.getHypervisorHostEndPoint(host); + ResizeVolumeCommand resizeVolumeCommand = new ResizeVolumeCommand( + volumeInfo.getPath(), new StorageFilerTO(storagePool), volumeInfo.getSize(), newSizeInBytes, + payload.shrinkOk, payload.instanceName, volumeInfo.getChainInfo(), + volumeInfo.getPassphrase(), volumeInfo.getEncryptFormat()); + + try { + if (!attachedRunning) { + grantAccess(volumeInfo, ep, volumeInfo.getDataStore()); + } + Answer answer = ep.sendMessage(resizeVolumeCommand); + + if (!answer.getResult() && volumeInfo.getFormat().equals(Storage.ImageFormat.QCOW2)) { + throw new CloudRuntimeException("Failed to resize at host: " + answer.getDetails()); + } else if (!answer.getResult()) { + // for non-qcow2, notifying the running VM is going to be best-effort since we can't roll back + // or avoid VM seeing a successful change at the PowerFlex volume after e.g. reboot + LOGGER.warn("Resized raw volume, but failed to notify. VM will see change on reboot. Error:" + answer.getDetails()); + } else { + LOGGER.debug("Resized volume at host: " + answer.getDetails()); + } + } finally { + if (!attachedRunning) { + revokeAccess(volumeInfo, ep, volumeInfo.getDataStore()); + } + } } VolumeVO volume = volumeDao.findById(volumeInfo.getId()); @@ -846,7 +985,6 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { volume.setSize(scaleIOVolume.getSizeInKb() * 1024); volumeDao.update(volume.getId(), volume); - StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId); long capacityBytes = storagePool.getCapacityBytes(); long usedBytes = storagePool.getUsedBytes(); @@ -990,4 +1128,27 @@ public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver { @Override public void provideVmTags(long vmId, long volumeId, String tagValue) { } + + /** + * Does the destination size fit the source size plus an encryption header? 
+ * @param srcSize size of source + * @param dstSize size of destination + * @return true if resize is required + */ + private boolean needsExpansionForEncryptionHeader(long srcSize, long dstSize) { + int headerSize = 32<<20; // ensure we have 32MiB for encryption header + return srcSize + headerSize > dstSize; + } + + /** + * Does any object require encryption support? + */ + private boolean anyVolumeRequiresEncryption(DataObject ... objects) { + for (DataObject o : objects) { + if (o instanceof VolumeInfo && ((VolumeInfo) o).getPassphraseId() != null) { + return true; + } + } + return false; + } } diff --git a/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/StorPoolCopyVolumeToSecondaryCommandWrapper.java b/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/StorPoolCopyVolumeToSecondaryCommandWrapper.java index 29e8979bd88..bd50f43025f 100644 --- a/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/StorPoolCopyVolumeToSecondaryCommandWrapper.java +++ b/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/StorPoolCopyVolumeToSecondaryCommandWrapper.java @@ -85,7 +85,7 @@ public final class StorPoolCopyVolumeToSecondaryCommandWrapper extends CommandWr } SP_LOG("StorpoolCopyVolumeToSecondaryCommandWrapper.execute: dstName=%s, dstProvisioningType=%s, srcSize=%s, dstUUID=%s, srcUUID=%s " ,dst.getName(), dst.getProvisioningType(), src.getSize(),dst.getUuid(), src.getUuid()); - KVMPhysicalDisk newDisk = destPool.createPhysicalDisk(dst.getUuid(), dst.getProvisioningType(), src.getSize()); + KVMPhysicalDisk newDisk = destPool.createPhysicalDisk(dst.getUuid(), dst.getProvisioningType(), src.getSize(), null); SP_LOG("NewDisk path=%s, uuid=%s ", newDisk.getPath(), dst.getUuid()); String destPath = newDisk.getPath(); newDisk.setPath(dst.getUuid()); diff --git 
a/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/storage/StorPoolStorageAdaptor.java b/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/storage/StorPoolStorageAdaptor.java index d0fe5adaeee..915ad55934e 100644 --- a/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/storage/StorPoolStorageAdaptor.java +++ b/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/storage/StorPoolStorageAdaptor.java @@ -276,7 +276,7 @@ public class StorPoolStorageAdaptor implements StorageAdaptor { // The following do not apply for StorpoolStorageAdaptor? @Override - public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, KVMStoragePool pool, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) { + public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, KVMStoragePool pool, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { SP_LOG("StorpooolStorageAdaptor.createPhysicalDisk: uuid=%s, pool=%s, format=%s, size=%d", volumeUuid, pool, format, size); throw new UnsupportedOperationException("Creating a physical disk is not supported."); } @@ -317,7 +317,7 @@ public class StorPoolStorageAdaptor implements StorageAdaptor { @Override public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, PhysicalDiskFormat format, - ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout) { + ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) { SP_LOG("StorpooolStorageAdaptor.createDiskFromTemplate: template=%s, name=%s, fmt=%s, ptype=%s, size=%d, dst_pool=%s, to=%d", template, name, format, provisioningType, size, destPool.getUuid(), timeout); throw new UnsupportedOperationException("Creating a disk from a template is not yet supported for this configuration."); @@ -329,6 +329,11 @@ public class StorPoolStorageAdaptor implements 
StorageAdaptor { throw new UnsupportedOperationException("Creating a template from a disk is not yet supported for this configuration."); } + @Override + public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout, byte[] sourcePassphrase, byte[] destPassphrase, ProvisioningType provisioningType) { + return copyPhysicalDisk(disk, name, destPool, timeout); + } + @Override public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) { SP_LOG("StorpooolStorageAdaptor.copyPhysicalDisk: disk=%s, name=%s, dst_pool=%s, to=%d", disk, name, destPool.getUuid(), timeout); @@ -361,7 +366,7 @@ public class StorPoolStorageAdaptor implements StorageAdaptor { } public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, - PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout) { + PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout, byte[] passphrase) { SP_LOG("StorpooolStorageAdaptor.createDiskFromTemplateBacking: template=%s, name=%s, dst_pool=%s", template, name, destPool.getUuid()); throw new UnsupportedOperationException( diff --git a/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/storage/StorPoolStoragePool.java b/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/storage/StorPoolStoragePool.java index e44270ac8a1..47937212f21 100644 --- a/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/storage/StorPoolStoragePool.java +++ b/plugins/storage/volume/storpool/src/main/java/com/cloud/hypervisor/kvm/storage/StorPoolStoragePool.java @@ -104,13 +104,13 @@ public class StorPoolStoragePool implements KVMStoragePool { } @Override - public KVMPhysicalDisk createPhysicalDisk(String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) { - return _storageAdaptor.createPhysicalDisk(name, this, format, 
provisioningType, size); + public KVMPhysicalDisk createPhysicalDisk(String name, PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { + return _storageAdaptor.createPhysicalDisk(name, this, format, provisioningType, size, passphrase); } @Override - public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size) { - return _storageAdaptor.createPhysicalDisk(name, this, null, provisioningType, size); + public KVMPhysicalDisk createPhysicalDisk(String name, Storage.ProvisioningType provisioningType, long size, byte[] passphrase) { + return _storageAdaptor.createPhysicalDisk(name, this, null, provisioningType, size, passphrase); } @Override diff --git a/server/src/main/java/com/cloud/api/query/QueryManagerImpl.java b/server/src/main/java/com/cloud/api/query/QueryManagerImpl.java index d6d685e8cf5..d2eeb01a776 100644 --- a/server/src/main/java/com/cloud/api/query/QueryManagerImpl.java +++ b/server/src/main/java/com/cloud/api/query/QueryManagerImpl.java @@ -2948,6 +2948,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q Long zoneId = cmd.getZoneId(); Long volumeId = cmd.getVolumeId(); Long storagePoolId = cmd.getStoragePoolId(); + Boolean encrypt = cmd.getEncrypt(); // Keeping this logic consistent with domain specific zones // if a domainId is provided, we just return the disk offering // associated with this domain @@ -2995,6 +2996,10 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q sc.addAnd("name", SearchCriteria.Op.EQ, name); } + if (encrypt != null) { + sc.addAnd("encrypt", SearchCriteria.Op.EQ, encrypt); + } + if (zoneId != null) { SearchBuilder sb = _diskOfferingJoinDao.createSearchBuilder(); sb.and("zoneId", sb.entity().getZoneId(), Op.FIND_IN_SET); @@ -3118,6 +3123,7 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q Integer cpuNumber = cmd.getCpuNumber(); Integer 
memory = cmd.getMemory(); Integer cpuSpeed = cmd.getCpuSpeed(); + Boolean encryptRoot = cmd.getEncryptRoot(); SearchCriteria sc = _srvOfferingJoinDao.createSearchCriteria(); if (!_accountMgr.isRootAdmin(caller.getId()) && isSystem) { @@ -3229,6 +3235,10 @@ public class QueryManagerImpl extends MutualExclusiveIdsManagerBase implements Q sc.addAnd("systemUse", SearchCriteria.Op.EQ, isSystem); } + if (encryptRoot != null) { + sc.addAnd("encryptRoot", SearchCriteria.Op.EQ, encryptRoot); + } + if (name != null) { sc.addAnd("name", SearchCriteria.Op.EQ, name); } diff --git a/server/src/main/java/com/cloud/api/query/dao/DiskOfferingJoinDaoImpl.java b/server/src/main/java/com/cloud/api/query/dao/DiskOfferingJoinDaoImpl.java index 6b10afbdb1f..9592986151f 100644 --- a/server/src/main/java/com/cloud/api/query/dao/DiskOfferingJoinDaoImpl.java +++ b/server/src/main/java/com/cloud/api/query/dao/DiskOfferingJoinDaoImpl.java @@ -107,6 +107,7 @@ public class DiskOfferingJoinDaoImpl extends GenericDaoBase filteredDomainIds = filterChildSubDomains(domainIds); @@ -3103,7 +3106,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati bytesWriteRate, bytesWriteRateMax, bytesWriteRateMaxLength, iopsReadRate, iopsReadRateMax, iopsReadRateMaxLength, iopsWriteRate, iopsWriteRateMax, iopsWriteRateMaxLength, - hypervisorSnapshotReserve, cacheMode, storagePolicyID); + hypervisorSnapshotReserve, cacheMode, storagePolicyID, encryptRoot); } else { diskOffering = _diskOfferingDao.findById(diskOfferingId); } @@ -3145,7 +3148,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati Long bytesWriteRate, Long bytesWriteRateMax, Long bytesWriteRateMaxLength, Long iopsReadRate, Long iopsReadRateMax, Long iopsReadRateMaxLength, Long iopsWriteRate, Long iopsWriteRateMax, Long iopsWriteRateMaxLength, - final Integer hypervisorSnapshotReserve, String cacheMode, final Long storagePolicyID) { + final Integer hypervisorSnapshotReserve, String 
cacheMode, final Long storagePolicyID, boolean encrypt) { DiskOfferingVO diskOffering = new DiskOfferingVO(name, displayText, typedProvisioningType, false, tags, false, localStorageRequired, false); @@ -3185,6 +3188,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati diskOffering.setCustomizedIops(isCustomizedIops); diskOffering.setMinIops(minIops); diskOffering.setMaxIops(maxIops); + diskOffering.setEncrypt(encrypt); setBytesRate(diskOffering, bytesReadRate, bytesReadRateMax, bytesReadRateMaxLength, bytesWriteRate, bytesWriteRateMax, bytesWriteRateMaxLength); setIopsRate(diskOffering, iopsReadRate, iopsReadRateMax, iopsReadRateMaxLength, iopsWriteRate, iopsWriteRateMax, iopsWriteRateMaxLength); @@ -3441,7 +3445,8 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati Long bytesWriteRate, Long bytesWriteRateMax, Long bytesWriteRateMaxLength, Long iopsReadRate, Long iopsReadRateMax, Long iopsReadRateMaxLength, Long iopsWriteRate, Long iopsWriteRateMax, Long iopsWriteRateMaxLength, - final Integer hypervisorSnapshotReserve, String cacheMode, final Map details, final Long storagePolicyID, final boolean diskSizeStrictness) { + final Integer hypervisorSnapshotReserve, String cacheMode, final Map details, final Long storagePolicyID, + final boolean diskSizeStrictness, final boolean encrypt) { long diskSize = 0;// special case for custom disk offerings long maxVolumeSizeInGb = VolumeOrchestrationService.MaxVolumeSize.value(); if (numGibibytes != null && numGibibytes <= 0) { @@ -3523,6 +3528,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati throw new InvalidParameterValueException("If provided, Hypervisor Snapshot Reserve must be greater than or equal to 0."); } + newDiskOffering.setEncrypt(encrypt); newDiskOffering.setHypervisorSnapshotReserve(hypervisorSnapshotReserve); newDiskOffering.setDiskSizeStrictness(diskSizeStrictness); @@ -3538,6 +3544,7 @@ public class 
ConfigurationManagerImpl extends ManagerBase implements Configurati detailsVO.add(new DiskOfferingDetailVO(offering.getId(), ApiConstants.ZONE_ID, String.valueOf(zoneId), false)); } } + if (MapUtils.isNotEmpty(details)) { details.forEach((key, value) -> { boolean displayDetail = !StringUtils.equalsAny(key, Volume.BANDWIDTH_LIMIT_IN_MBPS, Volume.IOPS_LIMIT); @@ -3634,6 +3641,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati final Long iopsWriteRateMaxLength = cmd.getIopsWriteRateMaxLength(); final Integer hypervisorSnapshotReserve = cmd.getHypervisorSnapshotReserve(); final String cacheMode = cmd.getCacheMode(); + final boolean encrypt = cmd.getEncrypt(); validateMaxRateEqualsOrGreater(iopsReadRate, iopsReadRateMax, IOPS_READ_RATE); validateMaxRateEqualsOrGreater(iopsWriteRate, iopsWriteRateMax, IOPS_WRITE_RATE); @@ -3647,7 +3655,7 @@ public class ConfigurationManagerImpl extends ManagerBase implements Configurati localStorageRequired, isDisplayOfferingEnabled, isCustomizedIops, minIops, maxIops, bytesReadRate, bytesReadRateMax, bytesReadRateMaxLength, bytesWriteRate, bytesWriteRateMax, bytesWriteRateMaxLength, iopsReadRate, iopsReadRateMax, iopsReadRateMaxLength, iopsWriteRate, iopsWriteRateMax, iopsWriteRateMaxLength, - hypervisorSnapshotReserve, cacheMode, details, storagePolicyId, diskSizeStrictness); + hypervisorSnapshotReserve, cacheMode, details, storagePolicyId, diskSizeStrictness, encrypt); } /** diff --git a/server/src/main/java/com/cloud/deploy/DeploymentPlanningManagerImpl.java b/server/src/main/java/com/cloud/deploy/DeploymentPlanningManagerImpl.java index dda96592b22..40f6667faec 100644 --- a/server/src/main/java/com/cloud/deploy/DeploymentPlanningManagerImpl.java +++ b/server/src/main/java/com/cloud/deploy/DeploymentPlanningManagerImpl.java @@ -274,7 +274,7 @@ StateListener, Configurable { long ram_requested = offering.getRamSize() * 1024L * 1024L; VirtualMachine vm = vmProfile.getVirtualMachine(); DataCenter dc = 
_dcDao.findById(vm.getDataCenterId()); - + boolean volumesRequireEncryption = anyVolumeRequiresEncryption(_volsDao.findByInstance(vm.getId())); if (vm.getType() == VirtualMachine.Type.User || vm.getType() == VirtualMachine.Type.DomainRouter) { checkForNonDedicatedResources(vmProfile, dc, avoids); @@ -296,7 +296,7 @@ StateListener, Configurable { if (plan.getHostId() != null && haVmTag == null) { Long hostIdSpecified = plan.getHostId(); if (s_logger.isDebugEnabled()) { - s_logger.debug("DeploymentPlan has host_id specified, choosing this host and making no checks on this host: " + hostIdSpecified); + s_logger.debug("DeploymentPlan has host_id specified, choosing this host: " + hostIdSpecified); } HostVO host = _hostDao.findById(hostIdSpecified); if (host != null && StringUtils.isNotBlank(uefiFlag) && "yes".equalsIgnoreCase(uefiFlag)) { @@ -337,6 +337,14 @@ StateListener, Configurable { Map> suitableVolumeStoragePools = result.first(); List readyAndReusedVolumes = result.second(); + _hostDao.loadDetails(host); + if (volumesRequireEncryption && !Boolean.parseBoolean(host.getDetail(Host.HOST_VOLUME_ENCRYPTION))) { + s_logger.warn(String.format("VM's volumes require encryption support, and provided host %s can't handle it", host)); + return null; + } else { + s_logger.debug(String.format("Volume encryption requirements are met by provided host %s", host)); + } + // choose the potential pool for this VM for this host if (!suitableVolumeStoragePools.isEmpty()) { List suitableHosts = new ArrayList(); @@ -402,6 +410,8 @@ StateListener, Configurable { s_logger.debug("This VM has last host_id specified, trying to choose the same host: " + vm.getLastHostId()); HostVO host = _hostDao.findById(vm.getLastHostId()); + _hostDao.loadHostTags(host); + _hostDao.loadDetails(host); ServiceOfferingDetailsVO offeringDetails = null; if (host == null) { s_logger.debug("The last host of this VM cannot be found"); @@ -419,6 +429,8 @@ StateListener, Configurable { 
if(!_resourceMgr.isGPUDeviceAvailable(host.getId(), groupName.getValue(), offeringDetails.getValue())){ s_logger.debug("The last host of this VM does not have required GPU devices available"); } + } else if (volumesRequireEncryption && !Boolean.parseBoolean(host.getDetail(Host.HOST_VOLUME_ENCRYPTION))) { + s_logger.warn(String.format("The last host of this VM %s does not support volume encryption, which is required by this VM.", host)); } else { if (host.getStatus() == Status.Up) { if (checkVmProfileAndHost(vmProfile, host)) { @@ -523,14 +535,12 @@ StateListener, Configurable { resetAvoidSet(plannerAvoidOutput, plannerAvoidInput); - dest = - checkClustersforDestination(clusterList, vmProfile, plan, avoids, dc, getPlannerUsage(planner, vmProfile, plan, avoids), plannerAvoidOutput); + dest = checkClustersforDestination(clusterList, vmProfile, plan, avoids, dc, getPlannerUsage(planner, vmProfile, plan, avoids), plannerAvoidOutput); if (dest != null) { return dest; } // reset the avoid input to the planners resetAvoidSet(avoids, plannerAvoidOutput); - } else { return null; } @@ -540,6 +550,13 @@ StateListener, Configurable { long hostId = dest.getHost().getId(); avoids.addHost(dest.getHost().getId()); + if (volumesRequireEncryption && !Boolean.parseBoolean(_hostDetailsDao.findDetail(hostId, Host.HOST_VOLUME_ENCRYPTION).getValue())) { + s_logger.warn(String.format("VM's volumes require encryption support, and the planner-provided host %s can't handle it", dest.getHost())); + continue; + } else { + s_logger.debug(String.format("VM's volume encryption requirements are met by host %s", dest.getHost())); + } + if (checkIfHostFitsPlannerUsage(hostId, DeploymentPlanner.PlannerResourceUsage.Shared)) { // found destination return dest; @@ -554,10 +571,18 @@ StateListener, Configurable { } } } - return dest; } + protected boolean anyVolumeRequiresEncryption(List volumes) { + for (Volume volume : volumes) { + if (volume.getPassphraseId() != null) { + return true; + } + } + return 
false; + } + private boolean isDeployAsIs(VirtualMachine vm) { long templateId = vm.getTemplateId(); VMTemplateVO template = templateDao.findById(templateId); @@ -664,7 +689,7 @@ StateListener, Configurable { return null; } - private boolean checkVmProfileAndHost(final VirtualMachineProfile vmProfile, final HostVO host) { + protected boolean checkVmProfileAndHost(final VirtualMachineProfile vmProfile, final HostVO host) { ServiceOffering offering = vmProfile.getServiceOffering(); if (offering.getHostTag() != null) { _hostDao.loadHostTags(host); @@ -877,14 +902,13 @@ StateListener, Configurable { } @DB - private boolean checkIfHostFitsPlannerUsage(final long hostId, final PlannerResourceUsage resourceUsageRequired) { + protected boolean checkIfHostFitsPlannerUsage(final long hostId, final PlannerResourceUsage resourceUsageRequired) { // TODO Auto-generated method stub // check if this host has been picked up by some other planner // exclusively // if planner can work with shared host, check if this host has // been marked as 'shared' // else if planner needs dedicated host, - PlannerHostReservationVO reservationEntry = _plannerHostReserveDao.findByHostId(hostId); if (reservationEntry != null) { final long id = reservationEntry.getId(); @@ -1222,7 +1246,6 @@ StateListener, Configurable { if (!suitableVolumeStoragePools.isEmpty()) { Pair> potentialResources = findPotentialDeploymentResources(suitableHosts, suitableVolumeStoragePools, avoid, resourceUsageRequired, readyAndReusedVolumes, plan.getPreferredHosts(), vmProfile.getVirtualMachine()); - if (potentialResources != null) { Host host = _hostDao.findById(potentialResources.first().getId()); Map storageVolMap = potentialResources.second(); @@ -1412,6 +1435,7 @@ StateListener, Configurable { List allVolumes = new ArrayList<>(); allVolumes.addAll(volumesOrderBySizeDesc); List> volumeDiskProfilePair = getVolumeDiskProfilePairs(allVolumes); + for (StoragePool storagePool : suitablePools) { haveEnoughSpace = false; 
hostCanAccessPool = false; @@ -1493,12 +1517,22 @@ StateListener, Configurable { } } - if (hostCanAccessPool && haveEnoughSpace && hostAffinityCheck && checkIfHostFitsPlannerUsage(potentialHost.getId(), resourceUsageRequired)) { + HostVO potentialHostVO = _hostDao.findById(potentialHost.getId()); + _hostDao.loadDetails(potentialHostVO); + + boolean hostHasEncryption = Boolean.parseBoolean(potentialHostVO.getDetail(Host.HOST_VOLUME_ENCRYPTION)); + boolean hostMeetsEncryptionRequirements = !anyVolumeRequiresEncryption(new ArrayList<>(volumesOrderBySizeDesc)) || hostHasEncryption; + boolean plannerUsageFits = checkIfHostFitsPlannerUsage(potentialHost.getId(), resourceUsageRequired); + + if (hostCanAccessPool && haveEnoughSpace && hostAffinityCheck && hostMeetsEncryptionRequirements && plannerUsageFits) { s_logger.debug("Found a potential host " + "id: " + potentialHost.getId() + " name: " + potentialHost.getName() + " and associated storage pools for this VM"); volumeAllocationMap.clear(); return new Pair>(potentialHost, storage); } else { + if (!hostMeetsEncryptionRequirements) { + s_logger.debug("Potential host " + potentialHost + " did not meet encryption requirements of all volumes"); + } avoid.addHost(potentialHost.getId()); } } diff --git a/server/src/main/java/com/cloud/storage/StorageManagerImpl.java b/server/src/main/java/com/cloud/storage/StorageManagerImpl.java index 140f62d14b8..00a15e6e32c 100644 --- a/server/src/main/java/com/cloud/storage/StorageManagerImpl.java +++ b/server/src/main/java/com/cloud/storage/StorageManagerImpl.java @@ -45,11 +45,6 @@ import java.util.stream.Collectors; import javax.inject.Inject; -import com.cloud.agent.api.GetStoragePoolCapabilitiesAnswer; -import com.cloud.agent.api.GetStoragePoolCapabilitiesCommand; -import com.cloud.network.router.VirtualNetworkApplianceManager; -import com.cloud.server.StatsCollector; -import com.cloud.upgrade.SystemVmTemplateRegistration; import com.google.common.collect.Sets; import 
org.apache.cloudstack.annotation.AnnotationService; import org.apache.cloudstack.annotation.dao.AnnotationDao; @@ -127,6 +122,8 @@ import com.cloud.agent.AgentManager; import com.cloud.agent.api.Answer; import com.cloud.agent.api.Command; import com.cloud.agent.api.DeleteStoragePoolCommand; +import com.cloud.agent.api.GetStoragePoolCapabilitiesAnswer; +import com.cloud.agent.api.GetStoragePoolCapabilitiesCommand; import com.cloud.agent.api.GetStorageStatsAnswer; import com.cloud.agent.api.GetStorageStatsCommand; import com.cloud.agent.api.GetVolumeStatsAnswer; @@ -178,6 +175,7 @@ import com.cloud.host.dao.HostDao; import com.cloud.hypervisor.Hypervisor; import com.cloud.hypervisor.Hypervisor.HypervisorType; import com.cloud.hypervisor.HypervisorGuruManager; +import com.cloud.network.router.VirtualNetworkApplianceManager; import com.cloud.offering.DiskOffering; import com.cloud.offering.ServiceOffering; import com.cloud.org.Grouping; @@ -185,6 +183,7 @@ import com.cloud.org.Grouping.AllocationState; import com.cloud.resource.ResourceState; import com.cloud.server.ConfigurationServer; import com.cloud.server.ManagementServer; +import com.cloud.server.StatsCollector; import com.cloud.service.dao.ServiceOfferingDetailsDao; import com.cloud.storage.Storage.ImageFormat; import com.cloud.storage.Storage.StoragePoolType; @@ -202,6 +201,7 @@ import com.cloud.storage.listener.StoragePoolMonitor; import com.cloud.storage.listener.VolumeStateListener; import com.cloud.template.TemplateManager; import com.cloud.template.VirtualMachineTemplate; +import com.cloud.upgrade.SystemVmTemplateRegistration; import com.cloud.user.Account; import com.cloud.user.AccountManager; import com.cloud.user.ResourceLimitService; diff --git a/server/src/main/java/com/cloud/storage/VolumeApiServiceImpl.java b/server/src/main/java/com/cloud/storage/VolumeApiServiceImpl.java index 9ce294d2332..7b43d5c5c09 100644 --- a/server/src/main/java/com/cloud/storage/VolumeApiServiceImpl.java +++ 
b/server/src/main/java/com/cloud/storage/VolumeApiServiceImpl.java @@ -131,6 +131,7 @@ import com.cloud.exception.PermissionDeniedException; import com.cloud.exception.ResourceAllocationException; import com.cloud.exception.StorageUnavailableException; import com.cloud.gpu.GPU; +import com.cloud.host.Host; import com.cloud.host.HostVO; import com.cloud.host.Status; import com.cloud.host.dao.HostDao; @@ -311,7 +312,6 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic VirtualMachineManager virtualMachineManager; @Inject private ManagementService managementService; - @Inject protected SnapshotHelper snapshotHelper; @@ -800,6 +800,11 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic parentVolume = _volsDao.findByIdIncludingRemoved(snapshotCheck.getVolumeId()); + // Don't support creating templates from encrypted volumes (yet) + if (parentVolume.getPassphraseId() != null) { + throw new UnsupportedOperationException("Cannot create new volumes from encrypted volume snapshots"); + } + if (zoneId == null) { // if zoneId is not provided, we default to create volume in the same zone as the snapshot zone. 
zoneId = snapshotCheck.getDataCenterId(); @@ -899,6 +904,7 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic } volume = _volsDao.persist(volume); + if (cmd.getSnapshotId() == null && displayVolume) { // for volume created from snapshot, create usage event after volume creation UsageEventUtils.publishUsageEvent(EventTypes.EVENT_VOLUME_CREATE, volume.getAccountId(), volume.getDataCenterId(), volume.getId(), volume.getName(), diskOfferingId, null, size, @@ -1113,10 +1119,15 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic Long instanceId = volume.getInstanceId(); VMInstanceVO vmInstanceVO = _vmInstanceDao.findById(instanceId); if (volume.getVolumeType().equals(Volume.Type.ROOT)) { - ServiceOfferingVO serviceOffering = _serviceOfferingDao.findById(vmInstanceVO.getServiceOfferingId()); + ServiceOfferingVO serviceOffering = _serviceOfferingDao.findById(vmInstanceVO.getServiceOfferingId()); if (serviceOffering != null && serviceOffering.getDiskOfferingStrictness()) { throw new InvalidParameterValueException(String.format("Cannot resize ROOT volume [%s] with new disk offering since existing disk offering is strictly assigned to the ROOT volume.", volume.getName())); } + if (newDiskOffering.getEncrypt() != diskOffering.getEncrypt()) { + throw new InvalidParameterValueException( + String.format("Current disk offering's encryption(%s) does not match target disk offering's encryption(%s)", diskOffering.getEncrypt(), newDiskOffering.getEncrypt()) + ); + } } if (diskOffering.getTags() != null) { @@ -1183,7 +1194,7 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic if (storagePoolId != null) { StoragePoolVO storagePoolVO = _storagePoolDao.findById(storagePoolId); - if (storagePoolVO.isManaged()) { + if (storagePoolVO.isManaged() && !storagePoolVO.getPoolType().equals(Storage.StoragePoolType.PowerFlex)) { Long instanceId = volume.getInstanceId(); if (instanceId != null) { @@ 
-1272,15 +1283,15 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic if (jobResult != null) { if (jobResult instanceof ConcurrentOperationException) { - throw (ConcurrentOperationException)jobResult; + throw (ConcurrentOperationException) jobResult; } else if (jobResult instanceof ResourceAllocationException) { - throw (ResourceAllocationException)jobResult; + throw (ResourceAllocationException) jobResult; } else if (jobResult instanceof RuntimeException) { - throw (RuntimeException)jobResult; + throw (RuntimeException) jobResult; } else if (jobResult instanceof Throwable) { - throw new RuntimeException("Unexpected exception", (Throwable)jobResult); + throw new RuntimeException("Unexpected exception", (Throwable) jobResult); } else if (jobResult instanceof Long) { - return _volsDao.findById((Long)jobResult); + return _volsDao.findById((Long) jobResult); } } @@ -2214,6 +2225,11 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic job.getId())); } + DiskOfferingVO diskOffering = _diskOfferingDao.findById(volumeToAttach.getDiskOfferingId()); + if (diskOffering.getEncrypt() && rootDiskHyperType != HypervisorType.KVM) { + throw new InvalidParameterValueException("Volume's disk offering has encryption enabled, but volume encryption is not supported for hypervisor type " + rootDiskHyperType); + } + _jobMgr.updateAsyncJobAttachment(job.getId(), "Volume", volumeId); if (asyncExecutionContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) { @@ -2872,6 +2888,10 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic vm = _vmInstanceDao.findById(instanceId); } + if (vol.getPassphraseId() != null) { + throw new InvalidParameterValueException("Migration of encrypted volumes is unsupported"); + } + // Check that Vm to which this volume is attached does not have VM Snapshots // OfflineVmwareMigration: consider if this is needed and desirable if (vm != null && 
 _vmSnapshotDao.findByVm(vm.getId()).size() > 0) {
@@ -3353,6 +3373,11 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
             throw new InvalidParameterValueException("VolumeId: " + volumeId + " is not in " + Volume.State.Ready + " state but " + volume.getState() + ". Cannot take snapshot.");
         }
 
+        if (volume.getEncryptFormat() != null && volume.getAttachedVM() != null && volume.getAttachedVM().getState() != State.Stopped) {
+            s_logger.debug(String.format("Refusing to take snapshot of encrypted volume (%s) on running VM (%s)", volume, volume.getAttachedVM()));
+            throw new UnsupportedOperationException("Volume snapshots for encrypted volumes are not supported if VM is running");
+        }
+
         CreateSnapshotPayload payload = new CreateSnapshotPayload();
         payload.setSnapshotId(snapshotId);
@@ -3529,6 +3554,10 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
             throw ex;
         }
 
+        if (volume.getPassphraseId() != null) {
+            throw new InvalidParameterValueException("Extraction of encrypted volumes is unsupported");
+        }
+
         if (volume.getVolumeType() != Volume.Type.DATADISK) {
             // Datadisk don't have any template dependence.
@@ -3862,6 +3891,14 @@ public class VolumeApiServiceImpl extends ManagerBase implements VolumeApiServic
             sendCommand = true;
         }
 
+        if (host != null) {
+            _hostDao.loadDetails(host);
+            boolean hostSupportsEncryption = Boolean.parseBoolean(host.getDetail(Host.HOST_VOLUME_ENCRYPTION));
+            if (volumeToAttach.getPassphraseId() != null && !hostSupportsEncryption) {
+                throw new CloudRuntimeException(errorMsg + " because target host " + host + " doesn't support volume encryption");
+            }
+        }
+
         if (volumeToAttachStoragePool != null) {
             verifyManagedStorage(volumeToAttachStoragePool.getId(), hostId);
         }
diff --git a/server/src/main/java/com/cloud/storage/snapshot/SnapshotManagerImpl.java b/server/src/main/java/com/cloud/storage/snapshot/SnapshotManagerImpl.java
index 961c0422048..4dbfc6f97df 100755
--- a/server/src/main/java/com/cloud/storage/snapshot/SnapshotManagerImpl.java
+++ b/server/src/main/java/com/cloud/storage/snapshot/SnapshotManagerImpl.java
@@ -97,6 +97,7 @@ import com.cloud.server.ResourceTag.ResourceObjectType;
 import com.cloud.server.TaggedResourceService;
 import com.cloud.storage.CreateSnapshotPayload;
 import com.cloud.storage.DataStoreRole;
+import com.cloud.storage.DiskOfferingVO;
 import com.cloud.storage.ScopeType;
 import com.cloud.storage.Snapshot;
 import com.cloud.storage.Snapshot.Type;
@@ -110,6 +111,7 @@ import com.cloud.storage.StoragePool;
 import com.cloud.storage.VMTemplateVO;
 import com.cloud.storage.Volume;
 import com.cloud.storage.VolumeVO;
+import com.cloud.storage.dao.DiskOfferingDao;
 import com.cloud.storage.dao.SnapshotDao;
 import com.cloud.storage.dao.SnapshotPolicyDao;
 import com.cloud.storage.dao.SnapshotScheduleDao;
@@ -172,6 +174,8 @@ public class SnapshotManagerImpl extends MutualExclusiveIdsManagerBase implement
     @Inject
     DomainDao _domainDao;
     @Inject
+    DiskOfferingDao diskOfferingDao;
+    @Inject
     StorageManager _storageMgr;
     @Inject
     SnapshotScheduler _snapSchedMgr;
@@ -846,6 +850,14 @@ public class SnapshotManagerImpl extends MutualExclusiveIdsManagerBase implement
             throw new InvalidParameterValueException("Failed to create snapshot policy, unable to find a volume with id " + volumeId);
         }
 
+        // For now, volumes with encryption don't support snapshot schedules, because they will fail when VM is running
+        DiskOfferingVO diskOffering = diskOfferingDao.findByIdIncludingRemoved(volume.getDiskOfferingId());
+        if (diskOffering == null) {
+            throw new InvalidParameterValueException(String.format("Failed to find disk offering for the volume [%s]", volume.getUuid()));
+        } else if (diskOffering.getEncrypt()) {
+            throw new UnsupportedOperationException(String.format("Encrypted volumes don't support snapshot schedules, cannot create snapshot policy for the volume [%s]", volume.getUuid()));
+        }
+
         String volumeDescription = volume.getVolumeDescription();
 
         _accountMgr.checkAccess(CallContext.current().getCallingAccount(), null, true, volume);
diff --git a/server/src/main/java/com/cloud/template/TemplateManagerImpl.java b/server/src/main/java/com/cloud/template/TemplateManagerImpl.java
index 2f1e1a552d4..2218acf238b 100755
--- a/server/src/main/java/com/cloud/template/TemplateManagerImpl.java
+++ b/server/src/main/java/com/cloud/template/TemplateManagerImpl.java
@@ -1802,6 +1802,11 @@ public class TemplateManagerImpl extends ManagerBase implements TemplateManager,
         // check permissions
         _accountMgr.checkAccess(caller, null, true, volume);
 
+        // Don't support creating templates from encrypted volumes (yet)
+        if (volume.getPassphraseId() != null) {
+            throw new UnsupportedOperationException("Cannot create templates from encrypted volumes");
+        }
+
         // If private template is created from Volume, check that the volume
         // will not be active when the private template is
         // created
@@ -1825,6 +1830,11 @@ public class TemplateManagerImpl extends ManagerBase implements TemplateManager,
         // Volume could be removed so find including removed to record source template id.
         volume = _volumeDao.findByIdIncludingRemoved(snapshot.getVolumeId());
 
+        // Don't support creating templates from encrypted volumes (yet)
+        if (volume != null && volume.getPassphraseId() != null) {
+            throw new UnsupportedOperationException("Cannot create templates from snapshots of encrypted volumes");
+        }
+
         // check permissions
         _accountMgr.checkAccess(caller, null, true, snapshot);
diff --git a/server/src/main/java/com/cloud/vm/UserVmManagerImpl.java b/server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
index 3f0238aa8ff..e1397c4fbce 100644
--- a/server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
+++ b/server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
@@ -3848,6 +3848,7 @@ public class UserVmManagerImpl extends ManagerBase implements UserVmManager, Vir
         }
 
         ServiceOfferingVO offering = _serviceOfferingDao.findById(serviceOffering.getId());
+
         if (offering.isDynamic()) {
             offering.setDynamicFlag(true);
             validateCustomParameters(offering, customParameters);
@@ -3880,6 +3881,10 @@ public class UserVmManagerImpl extends ManagerBase implements UserVmManager, Vir
         DiskOfferingVO rootdiskOffering = _diskOfferingDao.findById(rootDiskOfferingId);
         long volumesSize = configureCustomRootDiskSize(customParameters, template, hypervisorType, rootdiskOffering);
 
+        if (rootdiskOffering.getEncrypt() && hypervisorType != HypervisorType.KVM) {
+            throw new InvalidParameterValueException("Root volume encryption is not supported for hypervisor type " + hypervisorType);
+        }
+
         if (!isIso && diskOfferingId != null) {
             DiskOfferingVO diskOffering = _diskOfferingDao.findById(diskOfferingId);
             volumesSize += verifyAndGetDiskSize(diskOffering, diskSize);
diff --git a/server/src/main/java/com/cloud/vm/snapshot/VMSnapshotManagerImpl.java b/server/src/main/java/com/cloud/vm/snapshot/VMSnapshotManagerImpl.java
index 5ebb1f27d00..a1afabcaaa2 100644
--- a/server/src/main/java/com/cloud/vm/snapshot/VMSnapshotManagerImpl.java
+++ b/server/src/main/java/com/cloud/vm/snapshot/VMSnapshotManagerImpl.java
@@ -388,6 +388,12 @@ public class VMSnapshotManagerImpl extends MutualExclusiveIdsManagerBase impleme
                 s_logger.debug(message);
                 throw new CloudRuntimeException(message);
             }
+
+            // disallow KVM snapshots for VMs if root volume is encrypted (Qemu crash)
+            if (rootVolume.getPassphraseId() != null && userVmVo.getState() == VirtualMachine.State.Running && Boolean.TRUE.equals(snapshotMemory)) {
+                throw new UnsupportedOperationException("Cannot create VM memory snapshots on KVM from encrypted root volumes");
+            }
+
         }
 
         // check access
diff --git a/server/src/test/java/com/cloud/deploy/DeploymentPlanningManagerImplTest.java b/server/src/test/java/com/cloud/deploy/DeploymentPlanningManagerImplTest.java
index 87266883d90..41d5eaa2236 100644
--- a/server/src/test/java/com/cloud/deploy/DeploymentPlanningManagerImplTest.java
+++ b/server/src/test/java/com/cloud/deploy/DeploymentPlanningManagerImplTest.java
@@ -23,29 +23,43 @@ import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
 
 import javax.inject.Inject;
 import javax.naming.ConfigurationException;
 
+import com.cloud.dc.ClusterDetailsVO;
 import com.cloud.dc.DataCenter;
+import com.cloud.gpu.GPU;
 import com.cloud.host.Host;
+import com.cloud.host.HostVO;
+import com.cloud.host.Status;
+import com.cloud.storage.DiskOfferingVO;
+import com.cloud.storage.Storage;
+import com.cloud.storage.StoragePool;
 import com.cloud.storage.VMTemplateVO;
+import com.cloud.storage.Volume;
+import com.cloud.storage.VolumeVO;
 import com.cloud.storage.dao.VMTemplateDao;
 import com.cloud.user.AccountVO;
 import com.cloud.user.dao.AccountDao;
+import com.cloud.utils.Pair;
 import com.cloud.vm.VMInstanceVO;
 import com.cloud.vm.VirtualMachine;
 import com.cloud.vm.VirtualMachine.Type;
 import com.cloud.vm.VirtualMachineProfile;
 import com.cloud.vm.VirtualMachineProfileImpl;
 
 import org.apache.cloudstack.affinity.dao.AffinityGroupDomainMapDao;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
 import org.apache.commons.collections.CollectionUtils;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.runner.RunWith;
+import org.mockito.ArgumentMatchers;
 import org.mockito.InjectMocks;
 import org.mockito.Matchers;
 import org.mockito.Mock;
@@ -161,12 +175,30 @@ public class DeploymentPlanningManagerImplTest {
     @Inject
     HostPodDao hostPodDao;
 
+    @Inject
+    VolumeDao volDao;
+
+    @Inject
+    HostDao hostDao;
+
+    @Inject
+    CapacityManager capacityMgr;
+
+    @Inject
+    ServiceOfferingDetailsDao serviceOfferingDetailsDao;
+
+    @Inject
+    ClusterDetailsDao clusterDetailsDao;
+
     @Mock
     Host host;
 
-    private static long dataCenterId = 1L;
-    private static long hostId = 1l;
-    private static final long ADMIN_ACCOUNT_ROLE_ID = 1l;
+    private static final long dataCenterId = 1L;
+    private static final long instanceId = 123L;
+    private static final long hostId = 0L;
+    private static final long podId = 2L;
+    private static final long clusterId = 3L;
+    private static final long ADMIN_ACCOUNT_ROLE_ID = 1L;
 
     @BeforeClass
     public static void setUp() throws ConfigurationException {
@@ -178,7 +210,7 @@ public class DeploymentPlanningManagerImplTest {
 
         ComponentContext.initComponentsLifeCycle();
 
-        PlannerHostReservationVO reservationVO = new PlannerHostReservationVO(200L, 1L, 2L, 3L, PlannerResourceUsage.Shared);
+        PlannerHostReservationVO reservationVO = new PlannerHostReservationVO(hostId, dataCenterId, podId, clusterId, PlannerResourceUsage.Shared);
         Mockito.when(_plannerHostReserveDao.persist(Matchers.any(PlannerHostReservationVO.class))).thenReturn(reservationVO);
         Mockito.when(_plannerHostReserveDao.findById(Matchers.anyLong())).thenReturn(reservationVO);
         Mockito.when(_affinityGroupVMMapDao.countAffinityGroupsForVm(Matchers.anyLong())).thenReturn(0L);
@@ -189,9 +221,12 @@ public class DeploymentPlanningManagerImplTest {
         VMInstanceVO vm = new VMInstanceVO();
         Mockito.when(vmProfile.getVirtualMachine()).thenReturn(vm);
+        Mockito.when(vmProfile.getId()).thenReturn(instanceId);
         Mockito.when(vmDetailsDao.listDetailsKeyPairs(Matchers.anyLong())).thenReturn(null);
 
+        Mockito.when(volDao.findByInstance(Matchers.anyLong())).thenReturn(new ArrayList<>());
+
         Mockito.when(_dcDao.findById(Matchers.anyLong())).thenReturn(dc);
         Mockito.when(dc.getId()).thenReturn(dataCenterId);
@@ -435,6 +470,321 @@ public class DeploymentPlanningManagerImplTest {
         Assert.assertTrue(avoids.getClustersToAvoid().contains(expectedClusterId));
     }
 
+    @Test
+    public void volumesRequireEncryptionTest() {
+        VolumeVO vol1 = new VolumeVO("vol1", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.ROOT);
+        VolumeVO vol2 = new VolumeVO("vol2", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.DATADISK);
+        VolumeVO vol3 = new VolumeVO("vol3", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.DATADISK);
+        vol2.setPassphraseId(1L);
+
+        List<Volume> volumes = List.of(vol1, vol2, vol3);
+        Assert.assertTrue("Volumes require encryption, but not reporting", _dpm.anyVolumeRequiresEncryption(volumes));
+    }
+
+    @Test
+    public void volumesDoNotRequireEncryptionTest() {
+        VolumeVO vol1 = new VolumeVO("vol1", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.ROOT);
+        VolumeVO vol2 = new VolumeVO("vol2", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.DATADISK);
+        VolumeVO vol3 = new VolumeVO("vol3", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.DATADISK);
+
+        List<Volume> volumes = List.of(vol1, vol2, vol3);
+        Assert.assertFalse("Volumes do not require encryption, but reporting they do", _dpm.anyVolumeRequiresEncryption(volumes));
+    }
+
+    /**
+     * Root requires encryption, chosen host supports it
+     */
+    @Test
+    public void passEncRootProvidedHostSupportingEncryptionTest() {
+        HostVO host = new HostVO("host");
+        Map<String, String> hostDetails = new HashMap<>() {{
+            put(Host.HOST_VOLUME_ENCRYPTION, "true");
+        }};
+        host.setDetails(hostDetails);
+
+        VolumeVO vol1 = new VolumeVO("vol1", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.ROOT);
+        vol1.setPassphraseId(1L);
+
+        setupMocksForPlanDeploymentHostTests(host, vol1);
+
+        DataCenterDeployment plan = new DataCenterDeployment(dataCenterId, podId, clusterId, hostId, null, null);
+        try {
+            DeployDestination dest = _dpm.planDeployment(vmProfile, plan, avoids, null);
+            Assert.assertEquals(dest.getHost(), host);
+        } catch (Exception ex) {
+            ex.printStackTrace();
+        }
+    }
+
+    /**
+     * Root requires encryption, chosen host does not support it
+     */
+    @Test
+    public void failEncRootProvidedHostNotSupportingEncryptionTest() {
+        HostVO host = new HostVO("host");
+        Map<String, String> hostDetails = new HashMap<>() {{
+            put(Host.HOST_VOLUME_ENCRYPTION, "false");
+        }};
+        host.setDetails(hostDetails);
+
+        VolumeVO vol1 = new VolumeVO("vol1", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.ROOT);
+        vol1.setPassphraseId(1L);
+
+        setupMocksForPlanDeploymentHostTests(host, vol1);
+
+        DataCenterDeployment plan = new DataCenterDeployment(dataCenterId, podId, clusterId, hostId, null, null);
+        try {
+            DeployDestination dest = _dpm.planDeployment(vmProfile, plan, avoids, null);
+            Assert.assertNull("Destination should be null since host doesn't support encryption and root requires it", dest);
+        } catch (Exception ex) {
+            ex.printStackTrace();
+        }
+    }
+
+    /**
+     * Root does not require encryption, chosen host does not support it
+     */
+    @Test
+    public void passNoEncRootProvidedHostNotSupportingEncryptionTest() {
+        HostVO host = new HostVO("host");
+        Map<String, String> hostDetails = new HashMap<>() {{
+            put(Host.HOST_VOLUME_ENCRYPTION, "false");
+        }};
+        host.setDetails(hostDetails);
+
+        VolumeVO vol1 = new VolumeVO("vol1", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.ROOT);
+
+        setupMocksForPlanDeploymentHostTests(host, vol1);
+
+        DataCenterDeployment plan = new DataCenterDeployment(dataCenterId, podId, clusterId, hostId, null, null);
+        try {
+            DeployDestination dest = _dpm.planDeployment(vmProfile, plan, avoids, null);
+            Assert.assertEquals(dest.getHost(), host);
+        } catch (Exception ex) {
+            ex.printStackTrace();
+        }
+    }
+
+    /**
+     * Root does not require encryption, chosen host does support it
+     */
+    @Test
+    public void passNoEncRootProvidedHostSupportingEncryptionTest() {
+        HostVO host = new HostVO("host");
+        Map<String, String> hostDetails = new HashMap<>() {{
+            put(Host.HOST_VOLUME_ENCRYPTION, "true");
+        }};
+        host.setDetails(hostDetails);
+
+        VolumeVO vol1 = new VolumeVO("vol1", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.ROOT);
+
+        setupMocksForPlanDeploymentHostTests(host, vol1);
+
+        DataCenterDeployment plan = new DataCenterDeployment(dataCenterId, podId, clusterId, hostId, null, null);
+        try {
+            DeployDestination dest = _dpm.planDeployment(vmProfile, plan, avoids, null);
+            Assert.assertEquals(dest.getHost(), host);
+        } catch (Exception ex) {
+            ex.printStackTrace();
+        }
+    }
+
+    /**
+     * Root requires encryption, last host supports it
+     */
+    @Test
+    public void passEncRootLastHostSupportingEncryptionTest() {
+        HostVO host = Mockito.spy(new HostVO("host"));
+        Map<String, String> hostDetails = new HashMap<>() {{
+            put(Host.HOST_VOLUME_ENCRYPTION, "true");
+        }};
+        host.setDetails(hostDetails);
+        Mockito.when(host.getStatus()).thenReturn(Status.Up);
+
+        VolumeVO vol1 = new VolumeVO("vol1", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.ROOT);
+        vol1.setPassphraseId(1L);
+
+        setupMocksForPlanDeploymentHostTests(host, vol1);
+
+        VMInstanceVO vm = (VMInstanceVO) vmProfile.getVirtualMachine();
+        vm.setLastHostId(hostId);
+
+        // host id is null here so we pick up last host id
+        DataCenterDeployment plan = new DataCenterDeployment(dataCenterId, podId, clusterId, null, null, null);
+        try {
+            DeployDestination dest = _dpm.planDeployment(vmProfile, plan, avoids, null);
+            Assert.assertEquals(dest.getHost(), host);
+        } catch (Exception ex) {
+            ex.printStackTrace();
+        }
+    }
+
+    /**
+     * Root requires encryption, last host does not support it
+     */
+    @Test
+    public void failEncRootLastHostNotSupportingEncryptionTest() {
+        HostVO host = Mockito.spy(new HostVO("host"));
+        Map<String, String> hostDetails = new HashMap<>() {{
+            put(Host.HOST_VOLUME_ENCRYPTION, "false");
+        }};
+        host.setDetails(hostDetails);
+        Mockito.when(host.getStatus()).thenReturn(Status.Up);
+
+        VolumeVO vol1 = new VolumeVO("vol1", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.ROOT);
+        vol1.setPassphraseId(1L);
+
+        setupMocksForPlanDeploymentHostTests(host, vol1);
+
+        VMInstanceVO vm = (VMInstanceVO) vmProfile.getVirtualMachine();
+        vm.setLastHostId(hostId);
+        // host id is null here so we pick up last host id
+        DataCenterDeployment plan = new DataCenterDeployment(dataCenterId, podId, clusterId, null, null, null);
+        try {
+            DeployDestination dest = _dpm.planDeployment(vmProfile, plan, avoids, null);
+            Assert.assertNull("Destination should be null since last host doesn't support encryption and root requires it", dest);
+        } catch (Exception ex) {
+            ex.printStackTrace();
+        }
+    }
+
+    @Test
+    public void passEncRootPlannerHostSupportingEncryptionTest() {
+        HostVO host = Mockito.spy(new HostVO("host"));
+        Map<String, String> hostDetails = new HashMap<>() {{
+            put(Host.HOST_VOLUME_ENCRYPTION, "true");
+        }};
+        host.setDetails(hostDetails);
+        Mockito.when(host.getStatus()).thenReturn(Status.Up);
+
+        VolumeVO vol1 = new VolumeVO("vol1", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.ROOT);
+        vol1.setPassphraseId(1L);
+
+        DeploymentClusterPlanner planner = setupMocksForPlanDeploymentHostTests(host, vol1);
+
+        // host id is null here so we pick up last host id
+        DataCenterDeployment plan = new DataCenterDeployment(dataCenterId, podId, clusterId, null, null, null);
+
+        try {
+            DeployDestination dest = _dpm.planDeployment(vmProfile, plan, avoids, planner);
+            Assert.assertEquals(host, dest.getHost());
+        } catch (Exception ex) {
+            ex.printStackTrace();
+        }
+    }
+
+    @Test
+    public void failEncRootPlannerHostSupportingEncryptionTest() {
+        HostVO host = Mockito.spy(new HostVO("host"));
+        Map<String, String> hostDetails = new HashMap<>() {{
+            put(Host.HOST_VOLUME_ENCRYPTION, "false");
+        }};
+        host.setDetails(hostDetails);
+        Mockito.when(host.getStatus()).thenReturn(Status.Up);
+
+        VolumeVO vol1 = new VolumeVO("vol1", dataCenterId, podId, 1L, 1L, instanceId, "folder", "path", Storage.ProvisioningType.THIN, (long) 10 << 30, Volume.Type.ROOT);
+        vol1.setPassphraseId(1L);
+
+        DeploymentClusterPlanner planner = setupMocksForPlanDeploymentHostTests(host, vol1);
+
+        // host id is null here so we pick up last host id
+        DataCenterDeployment plan = new DataCenterDeployment(dataCenterId, podId, clusterId, null, null, null);
+
+        try {
+            DeployDestination dest = _dpm.planDeployment(vmProfile, plan, avoids, planner);
+            Assert.assertNull("Destination should be null since last host doesn't support encryption and root requires it", dest);
+        } catch (Exception ex) {
+            ex.printStackTrace();
+        }
+    }
+
+    // This is so ugly but everything is so intertwined...
+    private DeploymentClusterPlanner setupMocksForPlanDeploymentHostTests(HostVO host, VolumeVO vol1) {
+        long diskOfferingId = 345L;
+        List<VolumeVO> volumeVOs = new ArrayList<>();
+        List<Volume> volumes = new ArrayList<>();
+        vol1.setDiskOfferingId(diskOfferingId);
+        volumes.add(vol1);
+        volumeVOs.add(vol1);
+
+        DiskOfferingVO diskOffering = new DiskOfferingVO();
+        diskOffering.setEncrypt(true);
+
+        VMTemplateVO template = new VMTemplateVO();
+        template.setFormat(Storage.ImageFormat.QCOW2);
+
+        host.setClusterId(clusterId);
+
+        StoragePool pool = new StoragePoolVO();
+
+        Map<Volume, List<StoragePool>> suitableVolumeStoragePools = new HashMap<>() {{
+            put(vol1, List.of(pool));
+        }};
+
+        Pair<Map<Volume, List<StoragePool>>, List<Volume>> suitable = new Pair<>(suitableVolumeStoragePools, volumes);
+
+        ServiceOfferingVO svcOffering = new ServiceOfferingVO("test", 1, 256, 1, 1, 1, false, "vm", false, Type.User, false);
+        Mockito.when(vmProfile.getServiceOffering()).thenReturn(svcOffering);
+        Mockito.when(vmProfile.getHypervisorType()).thenReturn(HypervisorType.KVM);
+        Mockito.when(hostDao.findById(hostId)).thenReturn(host);
+        Mockito.doNothing().when(hostDao).loadDetails(host);
+        Mockito.doReturn(volumeVOs).when(volDao).findByInstance(ArgumentMatchers.anyLong());
+        Mockito.doReturn(suitable).when(_dpm).findSuitablePoolsForVolumes(
+            ArgumentMatchers.any(VirtualMachineProfile.class),
+            ArgumentMatchers.any(DataCenterDeployment.class),
+            ArgumentMatchers.any(ExcludeList.class),
+            ArgumentMatchers.anyInt()
+        );
+
+        ClusterVO clusterVO = new ClusterVO();
+        clusterVO.setHypervisorType(HypervisorType.KVM.toString());
+        Mockito.when(_clusterDao.findById(ArgumentMatchers.anyLong())).thenReturn(clusterVO);
+
+        Mockito.doReturn(List.of(host)).when(_dpm).findSuitableHosts(
+            ArgumentMatchers.any(VirtualMachineProfile.class),
+            ArgumentMatchers.any(DeploymentPlan.class),
+            ArgumentMatchers.any(ExcludeList.class),
+            ArgumentMatchers.anyInt()
+        );
+
+        Map<Volume, StoragePool> suitableVolumeStoragePoolMap = new HashMap<>() {{
+            put(vol1, pool);
+        }};
+
+        Mockito.doReturn(true).when(_dpm).hostCanAccessSPool(ArgumentMatchers.any(Host.class), ArgumentMatchers.any(StoragePool.class));
+
+        Pair<Host, Map<Volume, StoragePool>> potentialResources = new Pair<>(host, suitableVolumeStoragePoolMap);
+
+        Mockito.when(capacityMgr.checkIfHostReachMaxGuestLimit(host)).thenReturn(false);
+        Mockito.when(capacityMgr.checkIfHostHasCpuCapability(ArgumentMatchers.anyLong(), ArgumentMatchers.anyInt(), ArgumentMatchers.anyInt())).thenReturn(true);
+        Mockito.when(capacityMgr.checkIfHostHasCapacity(
+            ArgumentMatchers.anyLong(),
+            ArgumentMatchers.anyInt(),
+            ArgumentMatchers.anyLong(),
+            ArgumentMatchers.anyBoolean(),
+            ArgumentMatchers.anyFloat(),
+            ArgumentMatchers.anyFloat(),
+            ArgumentMatchers.anyBoolean()
+        )).thenReturn(true);
+        Mockito.when(serviceOfferingDetailsDao.findDetail(vmProfile.getServiceOfferingId(), GPU.Keys.vgpuType.toString())).thenReturn(null);
+
+        Mockito.doReturn(true).when(_dpm).checkVmProfileAndHost(vmProfile, host);
+        Mockito.doReturn(true).when(_dpm).checkIfHostFitsPlannerUsage(ArgumentMatchers.anyLong(), ArgumentMatchers.nullable(PlannerResourceUsage.class));
+        Mockito.when(clusterDetailsDao.findDetail(ArgumentMatchers.anyLong(), ArgumentMatchers.anyString())).thenReturn(new ClusterDetailsVO(clusterId, "mock", "1"));
+
+        DeploymentClusterPlanner planner = Mockito.spy(new FirstFitPlanner());
+        try {
+            Mockito.doReturn(List.of(clusterId), List.of()).when(planner).orderClusters(
+                ArgumentMatchers.any(VirtualMachineProfile.class),
+                ArgumentMatchers.any(DeploymentPlan.class),
+                ArgumentMatchers.any(ExcludeList.class)
+            );
+        } catch (Exception ex) {
+            ex.printStackTrace();
+        }
+
+        return planner;
+    }
+
     private DataCenter prepareAvoidDisabledTests() {
         DataCenter dc = Mockito.mock(DataCenter.class);
         Mockito.when(dc.getId()).thenReturn(123l);
diff --git a/server/src/test/java/com/cloud/storage/VolumeApiServiceImplTest.java b/server/src/test/java/com/cloud/storage/VolumeApiServiceImplTest.java
index 5b9875bc61e..29374a3fe3b 100644
--- a/server/src/test/java/com/cloud/storage/VolumeApiServiceImplTest.java
+++ b/server/src/test/java/com/cloud/storage/VolumeApiServiceImplTest.java
@@ -35,11 +35,8 @@ import java.util.List;
 import java.util.UUID;
 import java.util.concurrent.ExecutionException;
 
-import com.cloud.api.query.dao.ServiceOfferingJoinDao;
-import com.cloud.api.query.vo.ServiceOfferingJoinVO;
 import com.cloud.service.ServiceOfferingVO;
 import com.cloud.service.dao.ServiceOfferingDao;
-import com.cloud.storage.dao.VMTemplateDao;
 import org.apache.cloudstack.acl.ControlledEntity;
 import org.apache.cloudstack.acl.SecurityChecker.AccessType;
 import org.apache.cloudstack.api.command.user.volume.CreateVolumeCmd;
@@ -75,6 +72,8 @@ import org.mockito.Spy;
 import org.mockito.runners.MockitoJUnitRunner;
 import org.springframework.test.util.ReflectionTestUtils;
 
+import com.cloud.api.query.dao.ServiceOfferingJoinDao;
+import com.cloud.api.query.vo.ServiceOfferingJoinVO;
 import com.cloud.configuration.Resource;
 import com.cloud.configuration.Resource.ResourceType;
 import com.cloud.dc.DataCenterVO;
@@ -87,7 +86,9 @@ import com.cloud.org.Grouping;
 import com.cloud.serializer.GsonHelper;
 import com.cloud.server.TaggedResourceService;
 import com.cloud.storage.Volume.Type;
+import com.cloud.storage.dao.DiskOfferingDao;
 import com.cloud.storage.dao.StoragePoolTagsDao;
+import com.cloud.storage.dao.VMTemplateDao;
 import com.cloud.storage.dao.VolumeDao;
 import com.cloud.storage.snapshot.SnapshotManager;
 import com.cloud.user.Account;
@@ -162,6 +163,8 @@ public class VolumeApiServiceImplTest {
     private ServiceOfferingJoinDao serviceOfferingJoinDao;
     @Mock
     private ServiceOfferingDao serviceOfferingDao;
+    @Mock
+    private DiskOfferingDao _diskOfferingDao;
 
     private DetachVolumeCmd detachCmd = new DetachVolumeCmd();
     private Class _detachCmdClass = detachCmd.getClass();
@@ -273,6 +276,7 @@ public class VolumeApiServiceImplTest {
         VolumeVO correctRootVolumeVO = new VolumeVO("root", 1L, 1L, 1L, 1L, 2L, "root", "root", Storage.ProvisioningType.THIN, 1, null, null, "root", Volume.Type.ROOT);
         when(volumeDaoMock.findById(6L)).thenReturn(correctRootVolumeVO);
+        when(volumeDaoMock.getHypervisorType(6L)).thenReturn(HypervisorType.XenServer);
 
         // managed root volume
         VolumeInfo managedVolume = Mockito.mock(VolumeInfo.class);
@@ -292,7 +296,7 @@ public class VolumeApiServiceImplTest {
         when(userVmDaoMock.findById(4L)).thenReturn(vmHavingRootVolume);
         List<VolumeVO> vols = new ArrayList<VolumeVO>();
         vols.add(new VolumeVO());
-        when(volumeDaoMock.findByInstanceAndDeviceId(4L, 0L)).thenReturn(vols);
+        lenient().when(volumeDaoMock.findByInstanceAndDeviceId(4L, 0L)).thenReturn(vols);
 
         // volume in uploaded state
         VolumeInfo uploadedVolume = Mockito.mock(VolumeInfo.class);
@@ -310,6 +314,27 @@ public class VolumeApiServiceImplTest {
         upVolume.setState(Volume.State.Uploaded);
         when(volumeDaoMock.findById(8L)).thenReturn(upVolume);
 
+        UserVmVO kvmVm = new UserVmVO(4L, "vm", "vm", 1, HypervisorType.KVM, 1L, false, false, 1L, 1L, 1, 1L, null, "vm");
+        kvmVm.setState(State.Running);
+        kvmVm.setDataCenterId(1L);
+        when(userVmDaoMock.findById(4L)).thenReturn(kvmVm);
+
+        VolumeVO volumeOfKvmVm = new VolumeVO("root", 1L, 1L, 1L, 1L, 4L, "root", "root", Storage.ProvisioningType.THIN, 1, null, null, "root", Volume.Type.ROOT);
+        volumeOfKvmVm.setPoolId(1L);
+        lenient().when(volumeDaoMock.findById(9L)).thenReturn(volumeOfKvmVm);
+        lenient().when(volumeDaoMock.getHypervisorType(9L)).thenReturn(HypervisorType.KVM);
+
+        VolumeVO dataVolumeVO = new VolumeVO("data", 1L, 1L, 1L, 1L, 2L, "data", "data", Storage.ProvisioningType.THIN, 1, null, null, "data", Type.DATADISK);
+        lenient().when(volumeDaoMock.findById(10L)).thenReturn(dataVolumeVO);
+
+        VolumeInfo dataVolume = Mockito.mock(VolumeInfo.class);
+        when(dataVolume.getId()).thenReturn(10L);
+        when(dataVolume.getDataCenterId()).thenReturn(1L);
+        when(dataVolume.getVolumeType()).thenReturn(Volume.Type.DATADISK);
+        when(dataVolume.getInstanceId()).thenReturn(null);
+        when(dataVolume.getState()).thenReturn(Volume.State.Allocated);
+        when(volumeDataFactoryMock.getVolume(10L)).thenReturn(dataVolume);
+
         // helper dao methods mock
         when(_vmSnapshotDao.findByVm(any(Long.class))).thenReturn(new ArrayList());
         when(_vmInstanceDao.findById(any(Long.class))).thenReturn(stoppedVm);
@@ -323,6 +348,10 @@ public class VolumeApiServiceImplTest {
             txn.close("runVolumeDaoImplTest");
         }
 
+        DiskOfferingVO diskOffering = Mockito.mock(DiskOfferingVO.class);
+        when(diskOffering.getEncrypt()).thenReturn(false);
+        when(_diskOfferingDao.findById(anyLong())).thenReturn(diskOffering);
+
         // helper methods mock
         lenient().doNothing().when(accountManagerMock).checkAccess(any(Account.class), any(AccessType.class), any(Boolean.class), any(ControlledEntity.class));
        doNothing().when(_jobMgr).updateAsyncJobAttachment(any(Long.class), any(String.class), any(Long.class));
@@ -416,6 +445,25 @@ public class VolumeApiServiceImplTest {
         volumeApiServiceImpl.attachVolumeToVM(2L, 6L, 0L);
     }
 
+    // Negative test - attach a data volume to a VM on a non-KVM hypervisor
+    @Test(expected = InvalidParameterValueException.class)
+    public void attachDiskWithEncryptEnabledOfferingonNonKVM() throws NoSuchFieldException, IllegalAccessException {
+        DiskOfferingVO diskOffering = Mockito.mock(DiskOfferingVO.class);
+        when(diskOffering.getEncrypt()).thenReturn(true);
+        when(_diskOfferingDao.findById(anyLong())).thenReturn(diskOffering);
+        volumeApiServiceImpl.attachVolumeToVM(2L, 10L, 1L);
+    }
+
+    // Positive test - attach a data volume to a VM on the KVM hypervisor
+    @Test
+    public void attachDiskWithEncryptEnabledOfferingOnKVM() throws NoSuchFieldException, IllegalAccessException {
+        thrown.expect(NullPointerException.class);
+        DiskOfferingVO diskOffering = Mockito.mock(DiskOfferingVO.class);
+        when(diskOffering.getEncrypt()).thenReturn(true);
+        when(_diskOfferingDao.findById(anyLong())).thenReturn(diskOffering);
+        volumeApiServiceImpl.attachVolumeToVM(4L, 10L, 1L);
+    }
+
     // volume not Ready
     @Test(expected = InvalidParameterValueException.class)
     public void testTakeSnapshotF1() throws ResourceAllocationException {
diff --git a/server/src/test/resources/createNetworkOffering.xml b/server/src/test/resources/createNetworkOffering.xml
index 623cfaca66b..214fa29cd75 100644
--- a/server/src/test/resources/createNetworkOffering.xml
+++ b/server/src/test/resources/createNetworkOffering.xml
@@ -69,4 +69,5 @@
+
diff --git a/test/integration/smoke/test_disk_offerings.py b/test/integration/smoke/test_disk_offerings.py
index 660dd30024d..dc23a52a026 100644
--- a/test/integration/smoke/test_disk_offerings.py
+++ b/test/integration/smoke/test_disk_offerings.py
@@ -45,7 +45,7 @@ class TestCreateDiskOffering(cloudstackTestCase):
                 raise Exception("Warning: Exception during cleanup : %s" % e)
         return
 
-    @attr(tags=["advanced", "basic", "eip", "sg", "advancedns", "smoke"], required_hardware="false")
+    @attr(tags=["advanced", "basic", "eip", "sg", "advancedns", "smoke", "diskencrypt"], required_hardware="false")
     def test_01_create_disk_offering(self):
         """Test to create disk offering
 
@@ -87,6 +87,11 @@ class TestCreateDiskOffering(cloudstackTestCase):
             self.services["disk_offering"]["name"],
             "Check name in createServiceOffering"
         )
+        self.assertEqual(
+            disk_response.encrypt,
+            False,
+            "Ensure disk encryption is false by default"
+        )
         return
 
     @attr(hypervisor="kvm")
@@ -294,6 +299,49 @@ class TestCreateDiskOffering(cloudstackTestCase):
 
         return
 
+    @attr(tags = ["advanced", "basic", "eip", "sg", "advancedns", "simulator", "smoke", "diskencrypt"])
+    def test_08_create_encrypted_disk_offering(self):
+        """Test to create an encrypted type disk offering"""
+
+        # Validate the following:
+        # 1. createDiskOfferings should return valid info for new offering
+        # 2. The Cloud Database contains the valid information
+
+        disk_offering = DiskOffering.create(
+            self.apiclient,
+            self.services["disk_offering"],
+            name="disk-encrypted",
+            encrypt="true"
+        )
+        self.cleanup.append(disk_offering)
+
+        self.debug("Created Disk offering with ID: %s" % disk_offering.id)
+
+        list_disk_response = list_disk_offering(
+            self.apiclient,
+            id=disk_offering.id
+        )
+
+        self.assertEqual(
+            isinstance(list_disk_response, list),
+            True,
+            "Check list response returns a valid list"
+        )
+
+        self.assertNotEqual(
+            len(list_disk_response),
+            0,
+            "Check Disk offering is created"
+        )
+        disk_response = list_disk_response[0]
+
+        self.assertEqual(
+            disk_response.encrypt,
+            True,
+            "Check if encrypt is set after createServiceOffering"
+        )
+        return
+
 
 class TestDiskOfferings(cloudstackTestCase):
 
     def setUp(self):
diff --git a/test/integration/smoke/test_service_offerings.py b/test/integration/smoke/test_service_offerings.py
index 039b032b3a6..62e39e195c2 100644
--- a/test/integration/smoke/test_service_offerings.py
+++ b/test/integration/smoke/test_service_offerings.py
@@ -70,7 +70,7 @@ class TestCreateServiceOffering(cloudstackTestCase):
             "smoke",
             "basic",
             "eip",
-            "sg"],
+            "sg", "diskencrypt"],
         required_hardware="false")
     def test_01_create_service_offering(self):
         """Test to create service offering"""
@@ -131,6 +131,11 @@ class TestCreateServiceOffering(cloudstackTestCase):
             self.services["service_offerings"]["tiny"]["name"],
             "Check name in createServiceOffering"
         )
+        self.assertEqual(
+            list_service_response[0].encryptroot,
+            False,
+            "Ensure encrypt is false by default"
+        )
         return
 
     @attr(
@@ -304,6 +309,53 @@ class TestCreateServiceOffering(cloudstackTestCase):
         )
         return
 
+    @attr(
+        tags=[
+            "advanced",
+            "advancedns",
+            "smoke",
+            "basic",
+            "eip",
+            "sg",
+            "diskencrypt"],
+        required_hardware="false")
+    def test_05_create_service_offering_with_root_encryption_type(self):
+        """Test to create service offering with root encryption"""
+
+        # Validate the following:
+        # 1. createServiceOfferings should return valid information
+        #    for newly created offering
+
+        service_offering = ServiceOffering.create(
+            self.apiclient,
+            self.services["service_offerings"]["tiny"],
+            name="tiny-encrypted-root",
+            encryptRoot=True
+        )
+        self.cleanup.append(service_offering)
+
+        self.debug(
+            "Created service offering with ID: %s" %
+            service_offering.id)
+
+        list_service_response = list_service_offering(
+            self.apiclient,
+            id=service_offering.id
+        )
+
+        self.assertNotEqual(
+            len(list_service_response),
+            0,
+            "Check Service offering is created"
+        )
+
+        self.assertEqual(
+            list_service_response[0].encryptroot,
+            True,
+            "Check encrypt root is true"
+        )
+        return
+
 
 class TestServiceOfferings(cloudstackTestCase):
diff --git a/test/integration/smoke/test_volumes.py b/test/integration/smoke/test_volumes.py
index 02125ad9540..7d64a27eaf2 100644
--- a/test/integration/smoke/test_volumes.py
+++ b/test/integration/smoke/test_volumes.py
@@ -43,9 +43,12 @@ from marvin.lib.common import (get_domain,
                                find_storage_pool_type,
                                get_pod,
                                list_disk_offering)
-from marvin.lib.utils import checkVolumeSize
+from marvin.lib.utils import (cleanup_resources, checkVolumeSize)
 from marvin.lib.utils import (format_volume_to_ext3,
                               wait_until)
+from marvin.sshClient import SshClient
+import xml.etree.ElementTree as ET
+from lxml import etree
 from nose.plugins.attrib import attr
@@ -1034,3 +1037,555 @@ class TestVolumes(cloudstackTestCase):
             "Offering name did not match with the new one "
         )
         return
+
+
+class TestVolumeEncryption(cloudstackTestCase):
+
+    @classmethod
+    def setUpClass(cls):
+        cls.testClient = super(TestVolumeEncryption, cls).getClsTestClient()
+        cls.apiclient = cls.testClient.getApiClient()
+        cls.services = cls.testClient.getParsedTestDataConfig()
+        cls._cleanup = []
+
+        cls.unsupportedHypervisor = False
+        cls.hypervisor = cls.testClient.getHypervisorInfo()
+        if cls.hypervisor.lower() not in ['kvm']:
+            # Volume encryption is currently supported for the KVM hypervisor
+            cls.unsupportedHypervisor = True
+            return
+
+        # Get Zone and Domain
+        cls.domain = get_domain(cls.apiclient)
+        cls.zone = get_zone(cls.apiclient, cls.testClient.getZoneForTests())
+
+        cls.services['mode'] = cls.zone.networktype
+        cls.services["virtual_machine"]["zoneid"] = cls.zone.id
+        cls.services["domainid"] = cls.domain.id
+        cls.services["zoneid"] = cls.zone.id
+
+        # Get template
+        template = get_suitable_test_template(
+            cls.apiclient,
+            cls.zone.id,
+            cls.services["ostype"],
+            cls.hypervisor
+        )
+        if template == FAILED:
+            assert False, "get_suitable_test_template() failed to return template with description %s" % cls.services["ostype"]
+
+        cls.services["template"] = template.id
+        cls.services["diskname"] = cls.services["volume"]["diskname"]
+
+        cls.hostConfig = cls.config.__dict__["zones"][0].__dict__["pods"][0].__dict__["clusters"][0].__dict__["hosts"][0].__dict__
+
+        # Create Account
+        cls.account = Account.create(
+            cls.apiclient,
+            cls.services["account"],
+            domainid=cls.domain.id
+        )
+        cls._cleanup.append(cls.account)
+
+        # Create Service Offering
+        cls.service_offering = ServiceOffering.create(
+            cls.apiclient,
+            cls.services["service_offerings"]["small"]
+        )
+        cls._cleanup.append(cls.service_offering)
+
+        # Create Service Offering with encryptroot true
+        cls.service_offering_encrypt = ServiceOffering.create(
+            cls.apiclient,
+            cls.services["service_offerings"]["small"],
+            name="Small Encrypted Instance",
+            encryptroot=True
+        )
+        cls._cleanup.append(cls.service_offering_encrypt)
+
+        # Create Disk Offering
+        cls.disk_offering = DiskOffering.create(
+            cls.apiclient,
+            cls.services["disk_offering"]
+        )
+        cls._cleanup.append(cls.disk_offering)
+
+        # Create Disk Offering with encrypt true
+        cls.disk_offering_encrypt = DiskOffering.create(
+            cls.apiclient,
+            cls.services["disk_offering"],
+            name="Encrypted",
+            encrypt=True
+        )
+        cls._cleanup.append(cls.disk_offering_encrypt)
+
+    @classmethod
+    def tearDownClass(cls):
+        try:
+            cleanup_resources(cls.apiclient, cls._cleanup)
+        except Exception as e:
+            raise Exception("Warning: Exception during cleanup : %s" % e)
+        return
+
+    def setUp(self):
+        self.apiclient = self.testClient.getApiClient()
+        self.dbclient = self.testClient.getDbConnection()
+        self.cleanup = []
+
+        if self.unsupportedHypervisor:
+            self.skipTest("Skipping test as volume encryption is not supported for hypervisor %s" % self.hypervisor)
+
+        if not self.does_host_with_encryption_support_exists():
+            self.skipTest("Skipping test as no host exists with volume encryption support")
+
+    def tearDown(self):
+        try:
+            cleanup_resources(self.apiclient, self.cleanup)
+        except Exception as e:
+            raise Exception("Warning: Exception during cleanup : %s" % e)
+        return
+
+    @attr(tags=["advanced", "smoke", "diskencrypt"], required_hardware="true")
+    def test_01_root_volume_encryption(self):
+        """Test Root Volume Encryption
+
+        # Validate the following
+        # 1. Create VM using the service offering with encryptroot true
+        # 2. Verify VM created and Root Volume
+        # 3. Create Data Volume using the disk offering with encrypt false
+        # 4.
Verify Data Volume + """ + + virtual_machine = VirtualMachine.create( + self.apiclient, + self.services, + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering_encrypt.id, + mode=self.services["mode"] + ) + self.cleanup.append(virtual_machine) + self.debug("Created VM with ID: %s" % virtual_machine.id) + + list_vm_response = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id + ) + self.assertEqual( + isinstance(list_vm_response, list), + True, + "Check list response returns a valid list" + ) + self.assertNotEqual( + len(list_vm_response), + 0, + "Check VM available in List Virtual Machines" + ) + + vm_response = list_vm_response[0] + self.assertEqual( + vm_response.id, + virtual_machine.id, + "Check virtual machine id in listVirtualMachines" + ) + self.assertEqual( + vm_response.state, + 'Running', + msg="VM is not in Running state" + ) + + self.check_volume_encryption(virtual_machine, 1) + + volume = Volume.create( + self.apiclient, + self.services, + zoneid=self.zone.id, + account=self.account.name, + domainid=self.account.domainid, + diskofferingid=self.disk_offering.id + ) + self.debug("Created a volume with ID: %s" % volume.id) + + list_volume_response = Volume.list( + self.apiclient, + id=volume.id) + self.assertEqual( + isinstance(list_volume_response, list), + True, + "Check list response returns a valid list" + ) + self.assertNotEqual( + list_volume_response, + None, + "Check if volume exists in ListVolumes" + ) + + self.debug("Attaching volume (ID: %s) to VM (ID: %s)" % (volume.id, virtual_machine.id)) + + virtual_machine.attach_volume( + self.apiclient, + volume + ) + + try: + ssh = virtual_machine.get_ssh_client() + self.debug("Rebooting VM %s" % virtual_machine.id) + ssh.execute("reboot") + except Exception as e: + self.fail("SSH access failed for VM %s - %s" % (virtual_machine.ipaddress, e)) + + # Poll listVM to ensure VM is started properly + timeout = self.services["timeout"] + while 
True: + time.sleep(self.services["sleep"]) + + # Ensure that VM is in running state + list_vm_response = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id + ) + + if isinstance(list_vm_response, list): + vm = list_vm_response[0] + if vm.state == 'Running': + self.debug("VM state: %s" % vm.state) + break + + if timeout == 0: + raise Exception( + "Failed to start VM (ID: %s) " % vm.id) + timeout = timeout - 1 + + vol_sz = str(list_volume_response[0].size) + ssh = virtual_machine.get_ssh_client( + reconnect=True + ) + + # Get the updated volume information + list_volume_response = Volume.list( + self.apiclient, + id=volume.id) + + volume_name = "/dev/vd" + chr(ord('a') + int(list_volume_response[0].deviceid)) + self.debug(" Using KVM volume_name: %s" % (volume_name)) + ret = checkVolumeSize(ssh_handle=ssh, volume_name=volume_name, size_to_verify=vol_sz) + self.debug(" Volume Size Expected %s Actual :%s" % (vol_sz, ret[1])) + virtual_machine.detach_volume(self.apiclient, volume) + self.assertEqual(ret[0], SUCCESS, "Check if promised disk size actually available") + time.sleep(self.services["sleep"]) + + @attr(tags=["advanced", "smoke", "diskencrypt"], required_hardware="true") + def test_02_data_volume_encryption(self): + """Test Data Volume Encryption + + # Validate the following + # 1. Create VM using the service offering with encryptroot false + # 2. Verify VM created and Root Volume + # 3. Create Data Volume using the disk offering with encrypt true + # 4. 
Verify Data Volume + """ + + virtual_machine = VirtualMachine.create( + self.apiclient, + self.services, + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering.id, + mode=self.services["mode"] + ) + self.cleanup.append(virtual_machine) + self.debug("Created VM with ID: %s" % virtual_machine.id) + + list_vm_response = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id + ) + self.assertEqual( + isinstance(list_vm_response, list), + True, + "Check list response returns a valid list" + ) + self.assertNotEqual( + len(list_vm_response), + 0, + "Check VM available in List Virtual Machines" + ) + + vm_response = list_vm_response[0] + self.assertEqual( + vm_response.id, + virtual_machine.id, + "Check virtual machine id in listVirtualMachines" + ) + self.assertEqual( + vm_response.state, + 'Running', + msg="VM is not in Running state" + ) + + volume = Volume.create( + self.apiclient, + self.services, + zoneid=self.zone.id, + account=self.account.name, + domainid=self.account.domainid, + diskofferingid=self.disk_offering_encrypt.id + ) + self.debug("Created a volume with ID: %s" % volume.id) + + list_volume_response = Volume.list( + self.apiclient, + id=volume.id) + self.assertEqual( + isinstance(list_volume_response, list), + True, + "Check list response returns a valid list" + ) + self.assertNotEqual( + list_volume_response, + None, + "Check if volume exists in ListVolumes" + ) + + self.debug("Attaching volume (ID: %s) to VM (ID: %s)" % (volume.id, virtual_machine.id)) + + virtual_machine.attach_volume( + self.apiclient, + volume + ) + + try: + ssh = virtual_machine.get_ssh_client() + self.debug("Rebooting VM %s" % virtual_machine.id) + ssh.execute("reboot") + except Exception as e: + self.fail("SSH access failed for VM %s - %s" % (virtual_machine.ipaddress, e)) + + # Poll listVM to ensure VM is started properly + timeout = self.services["timeout"] + while True: + time.sleep(self.services["sleep"]) + + # Ensure 
that VM is in running state + list_vm_response = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id + ) + + if isinstance(list_vm_response, list): + vm = list_vm_response[0] + if vm.state == 'Running': + self.debug("VM state: %s" % vm.state) + break + + if timeout == 0: + raise Exception( + "Failed to start VM (ID: %s) " % vm.id) + timeout = timeout - 1 + + vol_sz = str(list_volume_response[0].size) + ssh = virtual_machine.get_ssh_client( + reconnect=True + ) + + # Get the updated volume information + list_volume_response = Volume.list( + self.apiclient, + id=volume.id) + + volume_name = "/dev/vd" + chr(ord('a') + int(list_volume_response[0].deviceid)) + self.debug(" Using KVM volume_name: %s" % (volume_name)) + ret = checkVolumeSize(ssh_handle=ssh, volume_name=volume_name, size_to_verify=vol_sz) + self.debug(" Volume Size Expected %s Actual :%s" % (vol_sz, ret[1])) + + self.check_volume_encryption(virtual_machine, 1) + + virtual_machine.detach_volume(self.apiclient, volume) + self.assertEqual(ret[0], SUCCESS, "Check if promised disk size actually available") + time.sleep(self.services["sleep"]) + + @attr(tags=["advanced", "smoke", "diskencrypt"], required_hardware="true") + def test_03_root_and_data_volume_encryption(self): + """Test Root and Data Volumes Encryption + + # Validate the following + # 1. Create VM using the service offering with encryptroot true + # 2. Verify VM created and Root Volume + # 3. Create Data Volume using the disk offering with encrypt true + # 4. 
Verify Data Volume + """ + + virtual_machine = VirtualMachine.create( + self.apiclient, + self.services, + accountid=self.account.name, + domainid=self.account.domainid, + serviceofferingid=self.service_offering_encrypt.id, + diskofferingid=self.disk_offering_encrypt.id, + mode=self.services["mode"] + ) + self.cleanup.append(virtual_machine) + self.debug("Created VM with ID: %s" % virtual_machine.id) + + list_vm_response = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id + ) + self.assertEqual( + isinstance(list_vm_response, list), + True, + "Check list response returns a valid list" + ) + self.assertNotEqual( + len(list_vm_response), + 0, + "Check VM available in List Virtual Machines" + ) + + vm_response = list_vm_response[0] + self.assertEqual( + vm_response.id, + virtual_machine.id, + "Check virtual machine id in listVirtualMachines" + ) + self.assertEqual( + vm_response.state, + 'Running', + msg="VM is not in Running state" + ) + + self.check_volume_encryption(virtual_machine, 2) + + volume = Volume.create( + self.apiclient, + self.services, + zoneid=self.zone.id, + account=self.account.name, + domainid=self.account.domainid, + diskofferingid=self.disk_offering_encrypt.id + ) + self.debug("Created a volume with ID: %s" % volume.id) + + list_volume_response = Volume.list( + self.apiclient, + id=volume.id) + self.assertEqual( + isinstance(list_volume_response, list), + True, + "Check list response returns a valid list" + ) + self.assertNotEqual( + list_volume_response, + None, + "Check if volume exists in ListVolumes" + ) + + self.debug("Attaching volume (ID: %s) to VM (ID: %s)" % (volume.id, virtual_machine.id)) + + virtual_machine.attach_volume( + self.apiclient, + volume + ) + + try: + ssh = virtual_machine.get_ssh_client() + self.debug("Rebooting VM %s" % virtual_machine.id) + ssh.execute("reboot") + except Exception as e: + self.fail("SSH access failed for VM %s - %s" % (virtual_machine.ipaddress, e)) + + # Poll listVM to ensure VM is started 
properly + timeout = self.services["timeout"] + while True: + time.sleep(self.services["sleep"]) + + # Ensure that VM is in running state + list_vm_response = VirtualMachine.list( + self.apiclient, + id=virtual_machine.id + ) + + if isinstance(list_vm_response, list): + vm = list_vm_response[0] + if vm.state == 'Running': + self.debug("VM state: %s" % vm.state) + break + + if timeout == 0: + raise Exception( + "Failed to start VM (ID: %s) " % vm.id) + timeout = timeout - 1 + + vol_sz = str(list_volume_response[0].size) + ssh = virtual_machine.get_ssh_client( + reconnect=True + ) + + # Get the updated volume information + list_volume_response = Volume.list( + self.apiclient, + id=volume.id) + + volume_name = "/dev/vd" + chr(ord('a') + int(list_volume_response[0].deviceid)) + self.debug(" Using KVM volume_name: %s" % (volume_name)) + ret = checkVolumeSize(ssh_handle=ssh, volume_name=volume_name, size_to_verify=vol_sz) + self.debug(" Volume Size Expected %s Actual :%s" % (vol_sz, ret[1])) + + self.check_volume_encryption(virtual_machine, 3) + + virtual_machine.detach_volume(self.apiclient, volume) + self.assertEqual(ret[0], SUCCESS, "Check if promised disk size actually available") + time.sleep(self.services["sleep"]) + + def does_host_with_encryption_support_exists(self): + hosts = Host.list( + self.apiclient, + zoneid=self.zone.id, + type='Routing', + hypervisor='KVM', + state='Up') + + for host in hosts: + if host.encryptionsupported: + return True + + return False + + def check_volume_encryption(self, virtual_machine, volumes_count): + hosts = Host.list(self.apiclient, id=virtual_machine.hostid) + if len(hosts) != 1: + assert False, "Could not find host with id " + virtual_machine.hostid + + host = hosts[0] + instance_name = virtual_machine.instancename + + self.assertIsNotNone(host, "Host should not be None") + self.assertIsNotNone(instance_name, "Instance name should not be None") + + ssh_client = SshClient( + host=host.ipaddress, + port=22, + 
user=self.hostConfig['username'], + passwd=self.hostConfig['password']) + + virsh_cmd = 'virsh dumpxml %s' % instance_name + xml_res = ssh_client.execute(virsh_cmd) + xml_as_str = ''.join(xml_res) + parser = etree.XMLParser(remove_blank_text=True) + virshxml_root = ET.fromstring(xml_as_str, parser=parser) + + encryption_format = virshxml_root.findall(".devices/disk/encryption[@format='luks']") + self.assertNotEqual(len(encryption_format), 0, "No volume with luks encryption format found") + self.assertEqual( + len(encryption_format), + volumes_count, + "Check the number of volumes encrypted with luks format" + ) + + secret_type = virshxml_root.findall(".devices/disk/encryption/secret[@type='passphrase']") + self.assertNotEqual(len(secret_type), 0, "No volume encryption secret with passphrase type found") + self.assertEqual( + len(secret_type), + volumes_count, + "Check the number of encrypted volumes with passphrase secret type" + ) diff --git a/ui/public/locales/en.json b/ui/public/locales/en.json index 14b65805929..e0e56986178 100644 --- a/ui/public/locales/en.json +++ b/ui/public/locales/en.json @@ -664,6 +664,8 @@ "label.enable.storage": "Enable storage pool", "label.enable.vpc.offering": "Enable VPC offering", "label.enable.vpn": "Enable remote access VPN", +"label.encrypt": "Encrypt", +"label.encryptroot": "Encrypt Root Disk", "label.end": "End", "label.end.date.and.time": "End date and time", "label.end.ip": "End IP", @@ -1846,6 +1848,7 @@ "label.volume": "Volume", "label.volume.empty": "No data volumes attached to this VM", "label.volume.volumefileupload.description": "Click or drag file to this area to upload.", +"label.volume.encryption.support": "Volume Encryption Supported", "label.volumechecksum": "MD5 checksum", "label.volumechecksum.description": "Use the hash that you created at the start of the volume upload procedure.", "label.volumefileupload": "Local file", diff --git a/ui/src/config/section/offering.js b/ui/src/config/section/offering.js index 
918548ddae8..573cd6e8bf5 100644 --- a/ui/src/config/section/offering.js +++ b/ui/src/config/section/offering.js @@ -32,7 +32,7 @@ export default { params: { isrecursive: 'true' }, columns: ['name', 'displaytext', 'cpunumber', 'cpuspeed', 'memory', 'domain', 'zone', 'order'], details: () => { - var fields = ['name', 'id', 'displaytext', 'offerha', 'provisioningtype', 'storagetype', 'iscustomized', 'iscustomizediops', 'limitcpuuse', 'cpunumber', 'cpuspeed', 'memory', 'hosttags', 'tags', 'storagetags', 'domain', 'zone', 'created', 'dynamicscalingenabled', 'diskofferingstrictness'] + var fields = ['name', 'id', 'displaytext', 'offerha', 'provisioningtype', 'storagetype', 'iscustomized', 'iscustomizediops', 'limitcpuuse', 'cpunumber', 'cpuspeed', 'memory', 'hosttags', 'tags', 'storagetags', 'domain', 'zone', 'created', 'dynamicscalingenabled', 'diskofferingstrictness', 'encryptroot'] if (store.getters.apis.createServiceOffering && store.getters.apis.createServiceOffering.params.filter(x => x.name === 'storagepolicy').length > 0) { fields.splice(6, 0, 'vspherestoragepolicy') @@ -142,7 +142,7 @@ export default { params: { isrecursive: 'true' }, columns: ['name', 'displaytext', 'disksize', 'domain', 'zone', 'order'], details: () => { - var fields = ['name', 'id', 'displaytext', 'disksize', 'provisioningtype', 'storagetype', 'iscustomized', 'disksizestrictness', 'iscustomizediops', 'tags', 'domain', 'zone', 'created'] + var fields = ['name', 'id', 'displaytext', 'disksize', 'provisioningtype', 'storagetype', 'iscustomized', 'disksizestrictness', 'iscustomizediops', 'tags', 'domain', 'zone', 'created', 'encrypt'] if (store.getters.apis.createDiskOffering && store.getters.apis.createDiskOffering.params.filter(x => x.name === 'storagepolicy').length > 0) { fields.splice(6, 0, 'vspherestoragepolicy') diff --git a/ui/src/views/infra/HostInfo.vue b/ui/src/views/infra/HostInfo.vue index 5893284c928..a74407be7df 100644 --- a/ui/src/views/infra/HostInfo.vue +++ 
b/ui/src/views/infra/HostInfo.vue @@ -40,6 +40,14 @@ + +
+ {{ $t('label.volume.encryption.support') }} +
+ {{ host.encryptionsupported }} +
+
+
{{ $t('label.hosttags') }} diff --git a/ui/src/views/offering/AddComputeOffering.vue b/ui/src/views/offering/AddComputeOffering.vue index 6600035db41..b9461bad15a 100644 --- a/ui/src/views/offering/AddComputeOffering.vue +++ b/ui/src/views/offering/AddComputeOffering.vue @@ -530,6 +530,12 @@ + + + + @@ -651,6 +657,7 @@ export default { loading: false, dynamicscalingenabled: true, diskofferingstrictness: false, + encryptdisk: false, computeonly: true, diskOfferingLoading: false, diskOfferings: [], @@ -692,7 +699,8 @@ export default { qostype: this.qosType, iscustomizeddiskiops: this.isCustomizedDiskIops, diskofferingid: this.selectedDiskOfferingId, - diskofferingstrictness: this.diskofferingstrictness + diskofferingstrictness: this.diskofferingstrictness, + encryptdisk: this.encryptdisk }) this.rules = reactive({ name: [{ required: true, message: this.$t('message.error.required.input') }], @@ -908,7 +916,8 @@ export default { offerha: values.offerha === true, limitcpuuse: values.limitcpuuse === true, dynamicscalingenabled: values.dynamicscalingenabled, - diskofferingstrictness: values.diskofferingstrictness + diskofferingstrictness: values.diskofferingstrictness, + encryptroot: values.encryptdisk } if (values.diskofferingid) { params.diskofferingid = values.diskofferingid diff --git a/ui/src/views/offering/AddDiskOffering.vue b/ui/src/views/offering/AddDiskOffering.vue index 690ee065535..afa881c3c0d 100644 --- a/ui/src/views/offering/AddDiskOffering.vue +++ b/ui/src/views/offering/AddDiskOffering.vue @@ -75,12 +75,15 @@ + + + + @@ -313,12 +316,14 @@ export default { storagePolicies: null, storageTagLoading: false, isPublic: true, + isEncrypted: false, domains: [], domainLoading: false, zones: [], zoneLoading: false, loading: false, - disksizestrictness: false + disksizestrictness: false, + encryptdisk: false } }, beforeCreate () { @@ -345,7 +350,8 @@ export default { writecachetype: 'none', qostype: '', ispublic: this.isPublic, - disksizestrictness: 
this.disksizestrictness + disksizestrictness: this.disksizestrictness, + encryptdisk: this.encryptdisk }) this.rules = reactive({ name: [{ required: true, message: this.$t('message.error.required.input') }], @@ -458,7 +464,8 @@ export default { cacheMode: values.writecachetype, provisioningType: values.provisioningtype, customized: values.customdisksize, - disksizestrictness: values.disksizestrictness + disksizestrictness: values.disksizestrictness, + encrypt: values.encryptdisk } if (values.customdisksize !== true) { params.disksize = values.disksize diff --git a/utils/src/main/java/com/cloud/utils/UuidUtils.java b/utils/src/main/java/com/cloud/utils/UuidUtils.java index e733eff6da3..fc9bffe5834 100644 --- a/utils/src/main/java/com/cloud/utils/UuidUtils.java +++ b/utils/src/main/java/com/cloud/utils/UuidUtils.java @@ -24,13 +24,14 @@ import org.apache.xerces.impl.xpath.regex.RegularExpression; public class UuidUtils { - public final static String first(String uuid) { + private static final RegularExpression uuidRegex = new RegularExpression("[0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}"); + + public static String first(String uuid) { return uuid.substring(0, uuid.indexOf('-')); } public static boolean validateUUID(String uuid) { - RegularExpression regex = new RegularExpression("[0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}"); - return regex.matches(uuid); + return uuidRegex.matches(uuid); } /** @@ -53,4 +54,8 @@ public class UuidUtils { } return uuid; } + + public static RegularExpression getUuidRegex() { + return uuidRegex; + } } \ No newline at end of file
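As a side note on the test strategy: `check_volume_encryption` in the patch verifies encryption by parsing `virsh dumpxml` output and counting `<encryption format='luks'>` elements and their `<secret type='passphrase'>` children. The XPath counting can be sketched standalone roughly as below; the domain XML fixture, helper name, and UUID are invented for illustration, and stdlib `xml.etree` is used in place of the test's mixed ElementTree/lxml parsing.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal fixture mimicking `virsh dumpxml <instance>` output
# for a VM with one LUKS-encrypted disk and one unencrypted disk.
SAMPLE_DOMAIN_XML = """
<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <encryption format='luks'>
        <secret type='passphrase' uuid='11111111-2222-3333-4444-555555555555'/>
      </encryption>
    </disk>
    <disk type='file' device='disk'/>
  </devices>
</domain>
"""

def count_luks_volumes(domain_xml):
    """Return (luks-encrypted disk count, passphrase secret count)."""
    root = ET.fromstring(domain_xml)
    # Same element paths the integration test queries against the live domain
    encrypted = root.findall("./devices/disk/encryption[@format='luks']")
    secrets = root.findall("./devices/disk/encryption/secret[@type='passphrase']")
    return len(encrypted), len(secrets)

print(count_luks_volumes(SAMPLE_DOMAIN_XML))  # (1, 1)
```

In the integration tests the expected count is 1, 2, or 3 depending on whether the root volume, the data volume, or both are created from encrypted offerings.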