When I add a secondary IP to a NIC on a shared network in an advanced zone with security groups, the network rules for the new IP are not applied on KVM hypervisors.
This is because "--action -A" is no longer recognized by security_group.py after commit ac73e7e671ba107830f96b9fb534eb716956e405; changing it to "--action=-A" fixes it.
Fixes issue #3590 by using the last element of the array split from the snapshot "path" string to retrieve the snapshot id. Additionally, it uses volumePath as the volume id, which should always be the correct value. The error raised in issue #3590 was caused by the wrong use of the "path" variable, which in some cases contained a different set of substrings.
The proposed change has been tested and evaluated. The values used for opening the RBD connection and executing the rollback were stable in the tests. I ran the rollback on multiple snapshots and could start the VM with content matching the reverted ROOT snapshot.
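A minimal sketch of the id extraction described above (names are hypothetical, not the actual plugin code): the snapshot id is taken from the last path segment, which stays correct even when the path contains a varying number of segments.

```java
// Hypothetical sketch of the fix described above; not the actual CloudStack code.
public class SnapshotIdFromPath {
    // The snapshot id is the last element of the split "path" string, which is
    // stable regardless of how many segments the path contains.
    static String snapshotId(String snapshotPath) {
        String[] parts = snapshotPath.split("/");
        return parts[parts.length - 1];
    }

    public static void main(String[] args) {
        // Example paths with different segment counts both resolve correctly.
        System.out.println(snapshotId("primary/volume-uuid/snap-uuid")); // snap-uuid
        System.out.println(snapshotId("snap-uuid"));                     // snap-uuid
    }
}
```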
KVM is supported on arm64 Linux (https://www.linux-kvm.org/page/Processor_support#ARM:).
For a small (IoT) platform such as the new Raspberry Pi 4, which uses an armv8 processor
(Cortex-A72), it is possible to run a Linux host with `/dev/kvm`
acceleration. This adds support for IoT IaaS in CloudStack.
This PR is from a fun weekend project where:
- I set up a Raspberry Pi 4 (4GB RAM model) with 4 CPU cores @ 1.5GHz and a 128GB Samsung EVO Plus SD card
- Installed the Ubuntu 19.10 raspi3 base image: http://cdimage.ubuntu.com/releases/19.10/release/ubuntu-19.10-preinstalled-server-arm64+raspi3.img.xz
- Built a custom Linux 5.3 kernel with KVM enabled, deb here: http://dl.rohityadav.cloud/cloudstack-rpi/kernel-19.10/, and installed the linux-image and linux-modules packages
- Then installed/set up CloudStack on it (fixed some issues around jna by manually installing a newer libjna-java to /usr/share/cloudstack-agent/lib)
- Since the host processor is not x86_64, I had to build a new arm64 (aarch64) systemvmtemplate: http://dl.rohityadav.cloud/cloudstack-rpi/systemvmtemplate/
I could finally get a 4.13 CloudStack + advanced zone/networking to run on it
and deployed a KVM-based Ubuntu 19.10 environment with NFS storage.
Deployed a test VM with an isolated network; the VR works as expected. The console
proxy works as well; this was tested against arm64 OpenStack Debian 9/10
templates.
I raised the issue of enabling KVM in the upstream Ubuntu arm64 build: https://bugs.launchpad.net/ubuntu/+source/linux-raspi2/+bug/1783961
The Ubuntu kernel team has responded, and future arm64 releases may have
KVM enabled by default.
Limitation: my aarch64 environment did not support IDE, therefore the
default bus type for volumes is SCSI. With VIRTIO it fails
intermittently.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* kvm: Use 'ip' instead of 'brctl'
The command 'brctl' is deprecated and should no longer be used.
iproute2 supports all the features we need, and therefore we should use
it instead of the old commands; for example, `brctl addbr br0` becomes `ip link add br0 type bridge`.
Feature-wise this does not change anything. It just makes the code more
robust for the future.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* kvm/modifyvlan: Use 'ip' instead of 'brctl'
brctl is deprecated; by using iproute2 we are future-proof
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Refactor: Cleanup duplicate code
Make use of Java 8 default methods in interfaces
to remove code duplication between XxxCmd and XxxCmdAsAdmin.
Refactor checkFormat by pre-calculating the supported
extensions, and make use of this in ImageStoreUtil.
This makes it easier to add new file and compression formats.
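A hedged illustration of the refactor pattern (the interface and class names below are hypothetical, not the actual CloudStack commands): logic that was duplicated across the user and admin command classes moves into a default method on a shared interface.

```java
// Hypothetical illustration of the pattern; names are not the real classes.
interface ListItemsCmdInterface {
    Long getProjectId();

    // Shared logic that previously lived in both XxxCmd and XxxCmdAsAdmin
    // now has a single default implementation on the interface.
    default boolean isProjectScoped() {
        return getProjectId() != null;
    }
}

class ListItemsCmd implements ListItemsCmdInterface {
    public Long getProjectId() { return null; }
}

class ListItemsCmdAsAdmin implements ListItemsCmdInterface {
    public Long getProjectId() { return 42L; }
}
```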
Problem: In VMware, appliances with options that must be answered before deployment are configurable through the vSphere vCenter user interface, but not through the CloudStack user interface.
Root cause: CloudStack does not handle vApp configuration options during deployments when the appliance contains configurable options. These configurations are mandatory for VM deployment from the appliance on VMware vSphere vCenter. As shown in the image below, VMware detects that there are mandatory configurations (in red) that the administrator must set before deploying the VM from the appliance:
Solution:
On template registration, after the template is downloaded to secondary storage, the OVF file is examined and OVF properties are extracted when available.
The extracted OVF properties are stored in the new table 'template_ovf_properties'.
A new optional section is added to the VM deployment wizard in the UI:
If the selected template does not contain OVF properties, then the optional section is not displayed on the wizard.
If the selected template contains OVF properties, then the optional new section is displayed. Each OVF property is displayed and the user must complete every property before proceeding to the next section.
If any configuration property is empty, a dialog is displayed indicating that there are empty properties which must be set before proceeding.
The specific OVF properties set on deployment are stored in the 'user_vm_details' table with the prefix 'ovfproperties-'.
The VM is configured with a vApp configuration section containing the values that the user provided in the wizard.
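A minimal sketch of the extraction step, assuming standard JAXP XML parsing (illustrative only, not the actual CloudStack parser): each ovf:Property element in the descriptor yields a key/type/userConfigurable tuple of the kind that could be persisted to template_ovf_properties.

```java
// Illustrative only: list the ovf:Property elements of an OVF descriptor.
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class OvfPropertyLister {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder().parse(args[0]); // path to the .ovf file
        NodeList props = doc.getElementsByTagNameNS("*", "Property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            System.out.printf("key=%s type=%s userConfigurable=%s%n",
                    p.getAttribute("ovf:key"),
                    p.getAttribute("ovf:type"),
                    p.getAttribute("ovf:userConfigurable"));
        }
    }
}
```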
Fix a regression bug that affects KVM local storage migration. Some of the intended execution flows for KVM local storage migration had been altered to allow only managed storage to execute. This fix allows both managed and non-managed storage to execute them.
Fixes #3521
This reverts commit 7a27e35a612f13a0ce43459b22e01d9b69627220.
We're near 4.13 RC1 and have low confidence that the changes from #3152
would not cause other regressions, so we are reverting this. The author may send a
PR again towards 4.14.
The regressions found are all related to template and ISO registration and
upload.
Retrieval of an image store using ImageStoreProviderManager has been refactored by introducing three methods:
DataStore getRandomImageStore(List<DataStore> imageStores);
Gets an image store for reading purposes. The threshold capacity check is not applied here.
DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);
Gets an image store for writing purposes. The threshold capacity check is applied and the store with the most free space is returned. If no store has used capacity below the threshold, NULL is returned.
List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
Gets a list of image stores for writing purposes which fulfill the threshold capacity check.
Correspondingly DataStoreManager methods have been refactored to return similar values for a given zone.
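A hedged sketch of the selection rule behind getImageStoreWithFreeCapacity (the ImageStore record below is a hypothetical stand-in for the real DataStore interface): among stores under the capacity threshold, the one with the most free space wins; otherwise NULL is returned.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-in for the real DataStore; fields are illustrative.
record ImageStore(String name, long freeBytes, boolean belowThreshold) {}

class ImageStorePicker {
    // Mirrors the described behavior: filter by the capacity threshold,
    // then pick the store with the most free space, or null if every
    // store is over the threshold.
    static ImageStore pickForWrite(List<ImageStore> stores) {
        return stores.stream()
                .filter(ImageStore::belowThreshold)
                .max(Comparator.comparingLong(ImageStore::freeBytes))
                .orElse(null);
    }
}
```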
Fixes #3287 - NULL is returned when secondary storage is needed for writing but there is no store with free space.
Fixes #3041 - Rather than returning a random secondary storage for writing, the storage with the most free space is returned.
Fixes #3478 - For migration on VMware, all writable secondary storage is mounted during preparation.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Features:
- Zone-wide and cluster-wide primary storage support
- Automatic VM template caching on Datera; subsequent VMs can be created instantaneously by fast-cloning the root volume
- Rapid storage-native snapshots
- Multiple managed primary storages can be created on a single Datera cluster for better management of:
  - Total provisioned capacity
  - Default storage QoS values
  - Replica size (1 to 5)
  - IP pool assignment for iSCSI targets
  - Volume placement (hybrid, single_flash, all_flash)
- Volume snapshot to VM template
- Volume to VM template
- Volume size increase using service policy
- Volume QoS change using service policy
- KVM support enabled
- New Datera app_instance name format that includes the ACS volume name
- VM live migration
There are certain scenarios where the 169.254.0.0/16 subnet is used for purposes other
than CloudStack on a hypervisor.
One such scenario is a BGP+EVPN+VXLAN setup using BGP Unnumbered, where the
169.254.0.1 address is used by Frr/Zebra BGP routing to send traffic to the
neighboring router.
The following setting can be changed in agent.properties (default value shown):
control.cidr=169.254.0.0/16
Make sure the global setting 'control.cidr' matches the value defined in agent.properties!
In the future the mgmt server can send this parameter to a KVM Agent on startup, but at the moment
this framework is not in place and thus these values can't be sent to the Agent in a proper manner.
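A minimal sketch of how the agent could read the setting with java.util.Properties, falling back to the default; the file path is the usual agent.properties location and the class name is illustrative, not the agent's actual code.

```java
import java.io.FileInputStream;
import java.util.Properties;

// Hedged sketch: read control.cidr from agent.properties, falling back
// to the 169.254.0.0/16 default when the key is absent.
public class ControlCidrConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("/etc/cloudstack/agent/agent.properties")) {
            props.load(in);
        }
        System.out.println(props.getProperty("control.cidr", "169.254.0.0/16"));
    }
}
```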
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Currently, when refreshing disk usage stats, all KVM agents are asked to collect stats for all volumes. In setups with multiple KVM hosts where managed storage is used, not all volumes are attached to all KVM hosts, and this results in a large number of warnings in the KVM agent logs. This change introduces a filter step when managed storage is used, so that the management server only requests stats from each KVM agent for the volumes that are connected to that host.
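An illustrative sketch of the filter (the VolumeRef record is a hypothetical stand-in for CloudStack's volume objects): when managed storage is in use, only volumes actually connected to the target host are queried.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-in for a volume as seen by the management server.
record VolumeRef(String uuid, boolean managed, boolean connectedToHost) {}

class VolumeStatsFilter {
    // Sketch of the described filter: unmanaged volumes are queried as
    // before; managed volumes are queried only if connected to the host.
    static List<VolumeRef> volumesToQuery(List<VolumeRef> all) {
        return all.stream()
                .filter(v -> !v.managed() || v.connectedToHost())
                .collect(Collectors.toList());
    }
}
```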
Add CephSnapshotStrategy to handle RBD snapshot revert (rollback). In order to support RBD revert (rbd_rollback), this PR adds a CephSnapshotStrategy class that handles Ceph/RBD snapshot actions.
During volume stats calculation, if a volume had more than one disk in
its chain-info, the disks were not summed into the physical and virtual sizes
in the loop; instead each previous entry was overwritten by the last disk.
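A sketch of the corrected accumulation (Disk is a hypothetical stand-in for a parsed chain-info entry): sizes are summed over every disk in the chain instead of being overwritten by the last entry.

```java
import java.util.List;

// Hypothetical stand-in for a parsed chain-info entry.
record Disk(long physicalSize, long virtualSize) {}

class VolumeStats {
    static long[] sumChain(List<Disk> chain) {
        long physical = 0, virtual = 0;
        for (Disk disk : chain) {
            physical += disk.physicalSize(); // was: physical = disk.physicalSize()
            virtual += disk.virtualSize();
        }
        return new long[] { physical, virtual };
    }
}
```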
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Add revoke certificates API
* Add background task to sync certificates
* Fix marvin test and revoke certificate
* Fix certificates sent to the hypervisor that were missing headers
* Fix background task for uploading certificates to hosts
This change addresses #3089. There was an issue where disks were added with bus type IDE when creating Windows VMs from ISOs. It is not possible to select a bus type when creating a VM from an ISO; the bus type is inferred from the platform emulator string provided to the KVM agent. Currently, when creating a VM with managed storage (e.g. SolidFire) and an OS type string of Windows*, all disks are added as IDE. QEMU does not support multiple IDE controllers, and this configuration results in VMs that cannot be started. The issue does not occur with NFS as the storage provider, because logic in the KVM agent makes all data (non-root) volumes use a virtio controller for file-based disks. Similar logic was added for raw physical disks so that managed storage behaves the same as NFS. In addition, specific version checks were removed from the code that guesses the disk controller from the platform emulator string, since most modern operating systems support virtio.
Fixes #3089
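A hedged sketch of the resulting behavior (the method and class names are hypothetical, not the actual KVM agent code): data volumes get virtio regardless of storage type, and only root disks of Windows guests fall back to IDE.

```java
// Illustrative only; not the actual KVM agent logic.
class DiskBusChooser {
    static String busFor(boolean isRootDisk, String platformEmulator) {
        if (!isRootDisk) {
            return "virtio"; // data volumes always use virtio, as on NFS
        }
        boolean windows = platformEmulator != null
                && platformEmulator.startsWith("Windows");
        return windows ? "ide" : "virtio"; // no per-version checks anymore
    }
}
```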
This PR partially fixes the logic around port forwarding rules in the Juniper SRX plugin. The code in the plugin is based on JunOS 10, which is very old. The changes here should not break compatibility, but should enable the plugin to be used on newer devices. Note that an additional change to a script file is required to be able to add port forwarding rules, but as this PR was targeted for 4.11.3, I thought it best not to include that change as it might break compatibility for anyone still using JunOS 10.
I've made the logic for adding/removing static NAT and port forwarding rules better and more consistent - these were multi-step processes which did not check each individual step. This aids in manually fixing rules in case of further problems.
I've also improved the logging for communication with the SRX by stripping out the Apache header before sending requests, and by indicating the name of the template file in use.
To be able to add port forwarding rules, the <dst-port> tags in dest-nat-rule-add.xml must be changed to <low>.
Fixes: #3379
Problem: The VM metrics view has aggregated volume bytes read/write and IOPS metrics, but not on a per-volume basis.
Root Cause: The volume stats sub-system is not used to export the metrics, and support is not available for VMware.
Solution: Use the volume stats sub-system and DB table to export the metrics via the listVolumes and listVolumeMetrics APIs, implement support for VMware, and fix an issue with network and disk metrics in the VM metrics view.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: Users don't know what keys/values to enter for template and VM details.
Root Cause: The feature does not exist that can list possible details and options.
Solution: Based on the possible VM and template details handled by the
codebase, those details were refactored and a list API is introduced
that returns these details to users along with their possible values. When
users now add details, they are presented with a list of detail keys
and their possible options, if any.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes a potential NPE when a mapped account is not found and
a user is moved to the mapped account. A more informative
exception is now thrown instead of the NPE.
Fixes #2853
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: The listVolumeMetrics API response does not honor the volume detail visibility restrictions set for normal users and returns sensitive information which should only be visible to the root admin.
Root Cause: The listVolumeMetrics API response extends the ListVolumesByAdmin API internally and this results in a full display view response that is only meant for the root admin.
Solution: This has been fixed by rectifying the API response to not show 'physical size', 'storage type', and 'storage pool' information. The UI has also been fixed to hide these columns for normal users.
- Fixes test paths, moving from the old layout to the standard Maven layout in src/test/java/
- Removed the duplicate SnapshotManagerImpl at the old path `server/src/com...`
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548
Live storage migration on KVM under these conditions:
Between source and destination hosts within the same cluster
From NFS primary storage to NFS cluster-wide primary storage
With source and destination NFS storage mounted on the hosts
In order to enable this functionality, the database must be updated to enable the live storage capability for KVM if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.
Additional notes:
To use this feature, set storage_motion_supported=1 in the hypervisor_capability table for KVM. This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference: https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature may be available with qemu in paid versions of RHEV but not in CentOS 7; however, it works with CentOS 6.
Fix for CentOS 7:
Create a repo file in /etc/yum.repos.d/:
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
Reboot host
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: Users can register ISOs from URL but cannot upload local ISOs.
Root cause: CloudStack provides browser-based upload support for volumes and templates, but ISOs are not supported.
Solution:
The existing browser-based upload from local functionality for templates and volumes (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=39620237) is extended to support uploading local ISOs.
Extend the UI: A new button is created under the ISOs view: 'Upload from Local'. A new dialog form is displayed in which the user must select the ISO to upload from their local file system.
Extend the API: A new 'GetUploadParamsForIso' API command is created to handle the ISO upload.
To make sure that a qcow2 image won't be corrupted by the snapshot deletion procedure, which is performed after copying the snapshot to a secondary store, I'd propose to put the VM into a suspended state.
Additional reference: https://bugzilla.redhat.com/show_bug.cgi?id=920020#c5
Fixes #3193
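A hedged sketch of the proposed sequence using the libvirt Java bindings (the snapshot-deletion step itself is elided; this is not necessarily how the final patch is implemented): suspend the domain, perform the deletion, then resume.

```java
import org.libvirt.Connect;
import org.libvirt.Domain;

// Sketch only: suspend the VM around the snapshot deletion so the qcow2
// image cannot be corrupted by concurrent guest writes.
public class SuspendAroundSnapshotDelete {
    public static void main(String[] args) throws Exception {
        Connect conn = new Connect("qemu:///system");
        Domain dom = conn.domainLookupByName(args[0]); // VM name
        dom.suspend();
        try {
            // ... delete the snapshot from the qcow2 image here ...
        } finally {
            dom.resume();
        }
    }
}
```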
* Improvements on upload direct download certificates
* Move upload direct download certificate logic to KVM plugin
* Extend unit test certificate expiration days
* Add marvin tests and command to revoke certificates
* Review comments
* Do not include revoke certificates API
Since the CloudStack virtual router was redesigned in version 4.6, it has been observed that the DHCP leases file is not persistent across network operations. This causes conflicts with guest VMs' static IPs, causing these static IPs not to be renewed by the DHCP server (dnsmasq) running on isolated and VPC networks' virtual routers. On stopping or destroying a VM, its DHCP/DNS records are not removed from the virtual router, causing ghost effects.
Fixes #3272, Fixes #3354
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
We do NOT always reserve VMware CPU/RAM resources - only when the "vmware.reserve.cpu" or "vmware.reserve.mem" setting is set to TRUE - and when we do, it is irrespective of whether overprovisioning is active. Verified for both system VMs and user VMs.