Marcus Sorensen 697e12f8f7
kvm: volume encryption feature (#6522)
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is handled transparently to the guest OS, and is intended to handle VM guest data encryption at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.

In some cases cloud customers may still prefer to maintain their own guest-level volume encryption, if they don't trust the cloud provider. However, for private cloud cases this greatly simplifies the guest OS experience of running volume encryption: users don't have to manage keys, deal with key servers, or make guest boot dependent on network connectivity to them (e.g. Tang), which is especially helpful when users occasionally attach/detach data disks and move them between VMs.

The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.

This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.

NOTE: While not required, operations can be significantly sped up by installing and running the `rng-tools` package and service on the management server and hypervisor hosts. For EL hosts the service is `rngd` and for Debian it is `rng-tools`. In particular, generating volume passphrases with SecureRandom can be slow if there isn't a good source of entropy. This could affect testing and build environments, and otherwise only affects users who actually use the encryption feature. If you find tests or volume creates blocking on encryption, check this first.
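
For example, on EL and Debian-based hosts respectively (commands are illustrative):

```
# EL (RHEL/CentOS and derivatives)
yum install -y rng-tools
systemctl enable --now rngd

# Debian/Ubuntu
apt-get install -y rng-tools
systemctl enable --now rng-tools
```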

### Management Server

##### API

* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is used here in case there is ever any other need to encrypt something related to the guest configuration, such as the RAM of a VM. This has been refactored to account for the new internal separation of service offerings from disk offerings.
* listDiskOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off, rather the offering should be referenced. This follows the same pattern as other disk offering based settings such as the IOPS of the volume.
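
As an illustration, the new parameters can be exercised via CloudMonkey (`cmk`); the offering names and values below are placeholders, not part of this change:

```
# disk offering whose volumes are encrypted
cmk create diskoffering name=encrypted-data displaytext="Encrypted 20GB data disk" disksize=20 encrypt=true

# service offering whose root volume is encrypted
cmk create serviceoffering name=encrypted-small displaytext="2 vCPU, 2GB, encrypted root" cpunumber=2 cpuspeed=1000 memory=2048 encryptroot=true

# filter offerings and hosts by encryption support
cmk list diskofferings encrypt=true
cmk list hosts type=Routing | grep -i encryptionsupported
```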

##### Volume functions

A decent effort has been made to ensure that the most common volume functions have either been cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.

Many of these features could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change is already overwhelming.

Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume

Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).

##### Primary Storage Support

For storage developers, adding encryption support involves:

1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during storage allocation to match volumes that request encryption to storage that supports it.

2. Implementing the encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that request encryption. You are free to do whatever your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume, or (as is the case with the KVM storage types) as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases if encryption is requested.

##### Scheduling

For the KVM implementations specified above, we depend on the KVM hosts having support for volume encryption tools. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is done via a probe during agent startup that looks for a functioning `cryptsetup` and for encryption support in `qemu-img`. The result is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.

The `EndPointSelector` interface and `DefaultEndpointSelector` have new methods that allow the caller to ask for endpoints that support encryption. Storage drivers can use these to find the proper hosts for storage commands that involve encryption. Not all volume activities require a host that supports encryption (for example, a snapshot backup is a simple file copy), which is why the interface lets the storage driver decide rather than simply passing the data objects to the `EndPointSelector` and letting the implementation decide.

VM scheduling has also been modified: when a VM start is requested and any attached volume requires encryption, hosts that don't support encryption are filtered out.

##### DB Changes

A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table, and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table, referencing this passphrase, and a foreign key added to ensure passphrases that are referenced can't be removed from the database.  The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.

#### KVM Agent

For the KVM storage pool types supported, the encryption has been implemented at Qemu itself, using the built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process, and is decrypted before the block device is visible to the guest. This may not be necessary in order to implement encryption for *your* storage pool type - maybe you have a kernel driver that decrypts before the block device appears on the system, or something like that. However, it seemed like the simplest common place to terminate the encryption, and it provides the lowest surface area for decrypted guest data.

For qcow2 based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block based (currently just ScaleIO storage), the `cryptsetup` utility is used to format the block device as LUKS for data disks, but `qemu-img` and its LUKS support is used for template copy.
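
For reference, this corresponds roughly to the following tool invocations (the paths, size, and key-file location are illustrative; the agent drives these operations through its `QemuImg` and `cryptsetup` wrappers rather than a shell):

```
# qcow2: create a LUKS-encrypted image, keyed by a secret object read from a key file
qemu-img create -f qcow2 \
  --object secret,id=sec0,file=/dev/shm/vol.key \
  -o encrypt.format=luks,encrypt.key-secret=sec0 \
  /mnt/pool/vol-1234.qcow2 20G

# block device (e.g. a ScaleIO data disk): format the raw device as LUKS
cryptsetup luksFormat --type luks1 --key-file /dev/shm/vol.key /dev/sdX
```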

Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways (a sketch follows the list):

1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.

2. In cases where `qemu-img` needs to use this passphrase for volume operations, it is written to a `KeyFile` on the cloudstack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable` and when it is closed, it is deleted. This allows us to try-with-resources any volume operations and get the KeyFile removed regardless.
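
A rough shell equivalent of case 1 (the secret UUID, key-file path, and XML are placeholders; the agent does this through the libvirt API rather than `virsh`):

```
# define an ephemeral, private libvirt secret and load the passphrase into it
cat > /tmp/vol-secret.xml <<'EOF'
<secret ephemeral='yes' private='yes'>
  <description>CloudStack volume passphrase</description>
</secret>
EOF
virsh secret-define /tmp/vol-secret.xml          # prints the generated secret UUID
virsh secret-set-value <secret-uuid> "$(base64 -w0 /dev/shm/vol.key)"

# the volume's disk XML then references the secret by UUID:
#   <encryption format='luks'>
#     <secret type='passphrase' uuid='<secret-uuid>'/>
#   </encryption>
```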

In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`.  These `qemu-img` flags have been designed to supersede some of the existing, older flags being used today (such as choosing file formats and paths), and an effort could be made to switch over to these wholesale. However, for now we have instead opted to keep existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose to use either way.
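
As a concrete example of the new flags, copying a plain template into a pre-created encrypted volume might look like this (file names and the key-file path are illustrative):

```
qemu-img convert -n --target-image-opts \
  --object secret,id=sec0,file=/dev/shm/vol.key \
  /mnt/secondary/template.qcow2 \
  driver=qcow2,file.filename=/mnt/pool/vol-1234.qcow2,encrypt.key-secret=sec0
```

Here `-n` skips creating the target, since the encrypted qcow2 must already exist (for example, created as in the earlier `qemu-img create` example).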

It should be noted that there are also a few different Enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, the `cryptsetup` utility's supported encryption formats include `LuksType.LUKS`, while `QemuImg` has `QemuImg.PhysicalDiskFormat.LUKS`.

Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details. These may require changes all the way up through the control plane. However, in practice Libvirt and Qemu currently support only LUKS1. Additionally, the cipher details aren't required in order to use an encrypted volume; since they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the one encryption format upon volume creation, which is persisted in the volumes table and available later as needed. In the future, when LUKS2 is standard and fully supported, we could move to it as the default; old volumes will still reference LUKS1 and have the headers on-disk to ensure they remain usable. We could also possibly support an automatic upgrade of the headers down the road, or a volume migration mechanism.

Every version of cryptsetup and qemu-img tested on encryption-capable variants of EL7 and Ubuntu uses the XTS-AES 256 cipher, which is the leading industry standard and a widely used cipher today (e.g. BitLocker and FileVault).

Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
2022-09-27 10:20:59 +05:30

StorPool CloudStack Integration

CloudStack Overview

Primary and Secondary storage

Primary storage is associated with a cluster or zone, and it stores the virtual disks for all the VMs running on hosts in that cluster/zone.

Secondary storage stores the following:

  • Templates — OS images that can be used to boot VMs and can include additional configuration information, such as installed applications
  • ISO images — disc images containing data or bootable media for operating systems
  • Disk volume snapshots — saved copies of VM data which can be used for data recovery or to create new templates

ROOT and DATA volumes

ROOT volumes correspond to the boot disk of a VM. They are created automatically by CloudStack during VM creation. ROOT volumes are created based on a system disk offering, corresponding to the service offering the user VM is based on. The ROOT volume's disk offering may be changed, but only to another system-created disk offering.

DATA volumes correspond to additional disks. These can be created by users and then attached/detached to VMs. DATA volumes are created based on a user-defined disk offering.

Plugin Organization

The StorPool plugin consists of two parts:

KVM hypervisor plugin patch

Source directory: ./apache-cloudstack-4.17-src/plugins/hypervisors/kvm

StorPool primary storage plugin

Source directory: ./apache-cloudstack-4.17.0-src/plugins/storage/volume

There is one plugin for both the CloudStack management and agents, in the hope that having all the source in one place will ease development and maintenance. The plugin itself though is separated into two mainly independent parts:

  • ./src/com/... directory tree: agent related classes and commands sent from management to agent
  • ./src/org/... directory tree: management related classes

The plugin is intended to be self-contained and non-intrusive, thus ideally deploying it would consist of only dropping the jar file into the appropriate places. This is the reason why all StorPool related communication (e.g. data copying, volume resize) is done with StorPool specific commands even when there is a CloudStack command that does pretty much the same.

Note that for the present the StorPool plugin may only be used for a single primary storage cluster; support for multiple clusters is planned.

Build, Install, Setup

Build

Go to the source directory and run:

mvn -Pdeveloper -DskipTests install

The resulting jar file is located in the target/ subdirectory.

Note: checkstyle errors: before compilation a code style check is performed; if it fails, compilation is aborted. In short: no trailing whitespace, indent using 4 spaces (not tabs), and comment out or remove unused imports.

Note: Both the KVM plugin and the StorPool plugin proper need to be built.

Install

StorPool primary storage plugin

For each CloudStack management host:

scp ./target/cloud-plugin-storage-volume-storpool-{version}.jar {MGMT_HOST}:/usr/share/cloudstack-management/lib/

For each CloudStack agent host:

scp ./target/cloud-plugin-storage-volume-storpool-{version}.jar {AGENT_HOST}:/usr/share/cloudstack-agent/plugins/

Note: CloudStack management/agent services must be restarted after adding the plugin to the respective directories.
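
For example, assuming the standard service names:

systemctl restart cloudstack-management    # on each management host
systemctl restart cloudstack-agent         # on each agent host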

Note: Agents should have access to the StorPool management API, since attach and detach operations happen on the agent. This is a CloudStack design issue; not much can be done about it.

Setup

Setting up StorPool

Perform the StorPool installation following the StorPool Installation Guide.

Create a template to be used by CloudStack. placeHead, placeAll, placeTail and replication must be set. There is no need to set a default volume size, because it is determined by the CloudStack disk and service offerings.

Setting up a StorPool PRIMARY storage pool in CloudStack

From the WEB UI, go to Infrastructure -> Primary Storage -> Add Primary Storage

  • Scope: select Zone-Wide
  • Hypervisor: select KVM
  • Zone: pick the appropriate zone
  • Name: user-specified name
  • Protocol: select SharedMountPoint
  • Path: enter /dev/storpool (a required argument, though not actually needed in practice)
  • Provider: select StorPool
  • Managed: leave unchecked (currently ignored)
  • Capacity Bytes: used for accounting purposes only; may be more or less than the actual StorPool template capacity
  • Capacity IOPS: currently not used (may later be used for max IOPS limitations on volumes from this pool)
  • URL: enter SP_API_HTTP=address:port;SP_AUTH_TOKEN=token;SP_TEMPLATE=template_name, where SP_API_HTTP is the address of the StorPool API, SP_AUTH_TOKEN is StorPool's authentication token, and SP_TEMPLATE is the name of the StorPool template. At present one template can be used for at most one storage pool.
  • Storage Tags: if left blank, the StorPool storage plugin will use the pool name to create a corresponding storage tag. This storage tag may be used later when defining service or disk offerings.
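
The same pool can also be added through the API; a sketch with CloudMonkey (the zone ID, pool name, and StorPool values are placeholders):

cmk create storagepool scope=zone zoneid=<zone-id> hypervisor=KVM provider=StorPool \
    name=storpool-primary tags=storpool-primary \
    url="SP_API_HTTP=<address>:<port>;SP_AUTH_TOKEN=<token>;SP_TEMPLATE=<template_name>"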

Plugin Functionality

| Plugin Action | CloudStack Action | management/agent | impl. details |
|---|---|---|---|
| Create ROOT volume from ISO | create VM from ISO | management | createVolumeAsync |
| Create ROOT volume from Template | create VM from Template | management + agent | copyAsync (T => T, T => V) |
| Create DATA volume | create Volume | management | createVolumeAsync |
| Attach ROOT/DATA volume | start VM (+attach/detach Volume) | agent | connectPhysicalDisk |
| Detach ROOT/DATA volume | stop VM | agent | disconnectPhysicalDiskByPath |
|  | Migrate VM | agent | attach + detach |
| Delete ROOT volume | destroy VM (expunge) | management | deleteAsync |
| Delete DATA volume | delete Volume (detached) | management | deleteAsync |
| Create ROOT/DATA volume snapshot | snapshot volume | management + agent | takeSnapshot + copyAsync (S => S) |
| Create volume from snapshot | create volume from snapshot | management + agent(?) | copyAsync (S => V) |
| Create TEMPLATE from ROOT volume | create template from volume | management + agent | copyAsync (V => T) |
| Create TEMPLATE from snapshot | create template from snapshot | SECONDARY STORAGE |  |
| Download volume | download volume | management + agent | copyAsync (V => V) |
| Revert ROOT/DATA volume to snapshot | revert to snapshot | management | revertSnapshot |
| (Live) resize ROOT/DATA volume | resize volume | management + agent | resize + StorpoolResizeCmd |
| Delete SNAPSHOT (ROOT/DATA) | delete snapshot | management | StorpoolSnapshotStrategy |
| Delete TEMPLATE | delete template | agent | deletePhysicalDisk |
| Migrate VM/volume | migrate VM/volume to another storage | management / management + agent | copyAsync (V => V) |
| VM snapshot | group snapshot of VM's disks | management | StorpoolVMSnapshotStrategy takeVMSnapshot |
| Revert VM snapshot | revert group snapshot of VM's disks | management | StorpoolVMSnapshotStrategy revertVMSnapshot |
| Delete VM snapshot | delete group snapshot of VM's disks | management | StorpoolVMSnapshotStrategy deleteVMSnapshot |
| VM vc_policy tag | vc_policy tag for all disks attached to VM | management | StorPoolCreateTagsCmd |
| Delete VM vc_policy tag | remove vc_policy tag for all disks attached to VM | management | StorPoolDeleteTagsCmd |

NOTE: When using multicluster, set the value of StorPool's SP_CLUSTER_ID in the "sp.cluster.id" setting for each CloudStack cluster.

NOTE: Secondary storage can be bypassed by setting the configuration option "sp.bypass.secondary.storage" to true. In this case snapshots will not be downloaded to secondary storage.
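
For example, these settings can be changed through the API (the cluster ID and values are placeholders):

cmk update configuration name=sp.bypass.secondary.storage value=true
cmk update configuration name=sp.cluster.id clusterid=<cluster-id> value=<SP_CLUSTER_ID>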

Creating template from snapshot

If bypass option is enabled

The snapshot exists only on PRIMARY (StorPool) storage. A template will be created on SECONDARY storage from this snapshot.

If bypass option is disabled

TODO: Maybe we should not use the CloudStack functionality here, and should instead use the same approach as when the bypass option is enabled.

This is independent of StorPool as snapshots exist on secondary.

Creating ROOT volume from templates

When creating the first volume based on a given template, if a snapshot of the template does not exist on StorPool, it will first be downloaded (cached) to PRIMARY storage. This is mapped to a StorPool snapshot, so creating successive volumes from the same template does not incur additional copying of data to PRIMARY storage.

This cached snapshot is garbage collected when the original template is deleted from CloudStack. This cleanup is done by a background task in CloudStack.

Creating a ROOT volume from an ISO image

We just need to create the volume. The ISO installation is handled by CloudStack.

Creating a DATA volume

DATA volumes are created by CloudStack the first time they are attached to a VM.

Creating volume from snapshot

We use the fact that the snapshot already exists on PRIMARY, so no data is copied. We will copy snapshots from SECONDARY to StorPool PRIMARY, when there is no corresponding StorPool snapshot.

Resizing volumes

We need to send a resize command to the agent where the VM to which the volume is attached is running, so that the resize is visible to the VM.

Creating snapshots

The snapshot is first created on the PRIMARY storage (i.e. StorPool), then backed-up on SECONDARY storage (tested with NFS secondary) if bypass option is not enabled. The original StorPool snapshot is kept, so that creating volumes from the snapshot does not need to copy the data again to PRIMARY. When the snapshot is deleted from CloudStack so is the corresponding StorPool snapshot.

Currently snapshots are taken in RAW format.

Reverting volume to snapshot

It's handled by StorPool.

Migrating volumes to other Storage pools

Tested with storage pools on NFS only.

Virtual Machine Snapshot/Group Snapshot

StorPool supports consistent snapshots of volumes attached to a virtual machine.

BW/IOPS limitations

Max IOPS are kept in StorPool's volumes with the help of custom service offerings, by adding IOPS limits to the corresponding system disk offering.

CloudStack has no way to specify a max BW limit. It is an open question whether specifying max IOPS only is sufficient, or whether max BW limits are also needed.