341 Commits

Author SHA1 Message Date
Rohit Yadav
d4635e3442 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-04-01 14:35:01 +05:30
Rakesh
76ba5c62d9
server: Fix displaying public IP address of shared networks (#4675)
Public IP addresses dedicated to one domain should not be accessible
from other domains. Also, the root admin should be able to list all
public IP addresses in the system.

Currently, the following issues exist:

1. A public IP address assigned to one domain can be accessed by
sibling domains.

2. If use.system.public.ip is false, child domains should not
see the public IPs of the ROOT domain.

Before fix
```
(test1) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
  "count": 59,
  "publicipaddress": [
```

After fix

```
(test) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
  "count": 10,
```
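For context, a hedged cloudmonkey sketch of how a public IP range gets dedicated to a single domain in the first place (the dedicatePublicIpRange API; the UUIDs are placeholders):

```
(admin) mgt01 > dedicate publiciprange id=<vlan-range-uuid> domainid=<domain-uuid>
```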
2021-04-01 12:39:01 +05:30
Rohit Yadav
fa067e02a7 Updating pom.xml version numbers for release 4.14.2.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-03-02 12:32:27 +05:30
sureshanaparti
eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added a new parameter "host.cache.location" (default: /var/cache/cloud) in the KVM agent.properties for specifying the host cache path; config drives are created under the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required for any config drive operation (migrate, delete)

- Detect the virtual size from the template URL while registering direct download qcow2 templates (KVM hypervisor); see the header-parsing sketch after this list

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when the VM is synced to the Stopped state on PowerReportMissing (after the grace period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist on the datastore and cannot be deleted from it

- Check whether the router filesystem is writable before performing health checks
	=> Introduced a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are on different partitions, so both locations are tested.
	=> Added a new script, "filesystem_writable_check.py", at /opt/cloud/bin/ to check whether the filesystem is writable

- Fixed an NPE where the template is null for DATA disks. Copy the template to the target storage only for the ROOT disk (with template id); skip DATA disk(s)
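As referenced above, a minimal sketch of how a qcow2 template's virtual size could be detected from its URL without downloading the whole file. This is not the actual CloudStack implementation; it only relies on the published qcow2 header layout (64-bit big-endian virtual size at byte offset 24), and the template URL is a placeholder:

```
import struct
import urllib.request

def qcow2_virtual_size(url: str) -> int:
    """Fetch the first 32 bytes of a qcow2 file and return its virtual size in bytes."""
    req = urllib.request.Request(url, headers={"Range": "bytes=0-31"})
    with urllib.request.urlopen(req) as resp:
        header = resp.read(32)
    if header[:4] != b"QFI\xfb":
        raise ValueError("not a qcow2 image")
    # Virtual size is a big-endian uint64 at offset 24 of the qcow2 header.
    return struct.unpack(">Q", header[24:32])[0]

# Hypothetical template URL, for illustration only.
print(qcow2_virtual_size("http://example.com/templates/centos7.qcow2"))
```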

* Addressed some issues for a few operations on the PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM snapshot naming so that ScaleIO volume names remain unique when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
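A rough sketch of the renew-on-expiry behaviour described above, assuming a hypothetical login call (illustrative Python, not the plugin's actual Java implementation):

```
import time


class ScaleIOGatewayClient:
    """Caches one session token per gateway and renews it when it goes stale.

    The 8-hour lifetime and 10-minute idle timeout follow the PowerFlex REST
    API documentation referenced above; _login() is a placeholder, not a real
    API call.
    """

    TOKEN_LIFETIME = 8 * 3600   # seconds
    IDLE_TIMEOUT = 10 * 60      # seconds

    def __init__(self, gateway_url, username, password):
        self.gateway_url = gateway_url
        self.username = username
        self.password = password
        self._token = None
        self._issued_at = 0.0
        self._last_used = 0.0

    def _login(self):
        # Placeholder for a POST to the gateway's login endpoint.
        self._token = "session-token"
        self._issued_at = self._last_used = time.time()

    def token(self):
        now = time.time()
        stale = (self._token is None
                 or now - self._issued_at > self.TOKEN_LIFETIME
                 or now - self._last_used > self.IDLE_TIMEOUT)
        if stale:
            self._login()
        self._last_used = now
        return self._token
```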

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on the router before updating the health check config data; an illustrative sketch of the filesystem check follows this list
	=> Validate the basic tests (connectivity and file system check) on the router
	=> Clean up the health check results when the router is destroyed
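As referenced above, an illustrative sketch of the kind of writability probe filesystem_writable_check.py performs, checking both locations mentioned in the description (this is not the shipped script):

```
import os
import sys
import uuid

# The health checks keep config under /var/cache/cloud and results under /root,
# which sit on different partitions, so both are probed.
PATHS = ["/var/cache/cloud", "/root"]

def is_writable(path: str) -> bool:
    probe = os.path.join(path, ".rw_probe_" + uuid.uuid4().hex)
    try:
        with open(probe, "w") as f:
            f.write("ok")
        os.remove(probe)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    failed = [p for p in PATHS if not is_writable(p)]
    if failed:
        print("read-only filesystem detected: " + ", ".join(failed))
        sys.exit(1)
    print("filesystem is writable")
```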

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI changes to support the storage plugin for the PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for the PowerFlex provider
- Allow VM snapshot for a stopped VM on the KVM hypervisor and PowerFlex/ScaleIO storage pool

Also includes minor improvements in the PowerFlex/ScaleIO storage plugin code.

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances (a hedged cloudmonkey sketch follows this list).

- The findStoragePoolsForMigration API returns PowerFlex pool(s) of a different instance as suitable pool(s) for volume(s) on a PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to a different PowerFlex instance.
- Volume(s) of a running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported.
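As referenced above, a hedged cloudmonkey sketch of the migration flow, assuming the standard findStoragePoolsForMigration and migrateVolume APIs and cloudmonkey's usual verb splitting; the UUIDs are placeholders:

```
(admin) mgt01 > find storagepoolsformigration id=<volume-uuid>
(admin) mgt01 > migrate volume volumeid=<volume-uuid> storageid=<target-powerflex-pool-uuid>
```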

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added a new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
Pearl Dsilva
b6fe9f99eb
Network Offering: Allow enabling network and vpc offering during creation (#4564)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-02-13 10:19:06 +00:00
Rohit Yadav
66f0beda5f Updating pom.xml version numbers for release 4.14.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-08 16:24:09 +05:30
Rohit Yadav
ba127dab3e Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-05 16:02:26 +05:30
Rohit Yadav
6bde1384ff Merge remote-tracking branch 'origin/4.14' into 4.15
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-05 16:01:01 +05:30
Wei Zhou
78f73c1bc6
server: Fix update capacity for hosts taking a long time if there are many service offerings (#4623)
Steps to reproduce the issue:

(1) Create 10000 service offerings (via the DB changes below or cloudmonkey).

```
DROP PROCEDURE IF EXISTS cloud.insert_service_offering;

DELIMITER $$
CREATE PROCEDURE cloud.insert_service_offering()
BEGIN
  DECLARE count INT DEFAULT 10000;
  SET @offeringid = (select max(id)+1 from disk_offering);

  WHILE count > 0 DO
    INSERT INTO disk_offering (id,name,uuid,display_text,disk_size,type,created) values (@offeringid,'test-offering-wei',uuid(), 'test-offering-wei',0,'Service',now());
    INSERT INTO service_offering (id,cpu,speed,ram_size) values (@offeringid, 1, 500,256);
    SET @offeringid = @offeringid + 1;
    SET count = count - 1;
  END WHILE;
END $$
DELIMITER ;

CALL cloud.insert_service_offering();

mysql> CALL cloud.insert_service_offering();
Query OK, 0 rows affected (2 min 30.85 sec)
```

(2) Check the total time of the periodic capacity check in CloudStack.

Without this patch, it took about 2.5 seconds (2 hosts):
```
2021-01-15 16:10:12,793 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Running Capacity Checker ...
2021-01-15 16:10:15,287 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Done running Capacity Checker ...
```

With this patch, it took about 1.3 seconds (2 hosts):
```
2021-01-15 16:12:43,604 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Running Capacity Checker ...
2021-01-15 16:12:44,927 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Done running Capacity Checker ...
```

If there are 100 hosts, the total time will be reduced from 100+ seconds to around 10 seconds.
2021-02-04 14:43:57 +05:30
Rohit Yadav
b482da8c91 Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-01-11 13:58:30 +05:30
Daan Hoogland
280c13a4bb Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-05 15:51:02 +00:00
Daan Hoogland
81e9e6809b Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:34:46 +00:00
Daan Hoogland
e26202f23e Updating pom.xml version numbers for release 4.16.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:32:10 +00:00
Daan Hoogland
01b3e361c7 Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2020-12-23 16:32:25 +00:00
Pearl Dsilva
e4a504b084
Make global setting non-dynamic (#4505)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-12-01 14:00:35 +05:30
Olivier Lemasle
5f8289ffe9
Re-enable IP address usage hiding (#4327) 2020-11-07 10:42:44 +01:00
nvazquez
d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Harikrishna Patnala
b4a23ea5f6 Allocation logic to skip datastore cluster and consider only storagepools inside the datastore cluster 2020-10-19 14:57:15 +05:30
Harikrishna Patnala
fb0a96e7fb Check if the datastore is compliant with the storage policy provided in the disk offering.
Added the corresponding manager objects from the PBM SDK to do the job.
Made DAO layer changes to read the storage policy in the disk offering
2020-10-19 14:57:15 +05:30
Pearl Dsilva
b464fe41c6
server: Secondary Storage Usage Improvements (#4053)
This feature enables the following:
- Balanced migration of data objects from a source image store to destination image store(s)
- Complete migration of data
- Setting an image store to read-only (see the hedged sketch below)
- Viewing download progress of templates across all data stores

Related Primate PR: apache/cloudstack-primate#326
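As noted above, a hedged cloudmonkey sketch of marking a store read-only, assuming the feature exposes a readonly flag on the updateImageStore API; the UUID is a placeholder:

```
(admin) mgt01 > update imagestore id=<image-store-uuid> readonly=true
```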
2020-09-17 10:12:10 +05:30
Spaceman1984
d57aa83517
server: Added nfs minor version support (#4180)
This PR adds minor version support when mounting NFS on the SSVM, as requested in #2861.

The global setting "secstorage.nfs.version" has been changed to use the String data type, which allows any minor version to be specified.
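For example, a minor version can now be set directly via the global setting (a hedged sketch; 4.1 is just an illustrative value):

```
(admin) mgt01 > update configuration name=secstorage.nfs.version value=4.1
```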
2020-08-19 14:53:38 +05:30
Rohit Yadav
2c82aac5aa Merge remote-tracking branch 'origin/4.14' 2020-07-07 12:53:05 +05:30
Rohit Yadav
b141b8e256 Merge remote-tracking branch 'origin/4.13' into 4.14 2020-07-07 12:51:46 +05:30
Wei Zhou
4da374b6b4
server: Dedicated hosts should be 'Not Suitable' while finding hosts for VM migration (#4001)
While migrating a VM, in the popup, hosts dedicated to other accounts/domains are also shown as 'Suitable' for migration, which is obviously wrong.

The same issue happens with the findHostsForMigration API.
2020-07-04 11:01:41 +05:30
andrijapanicsb
5f926c3353 Updating pom.xml version numbers for release 4.15.0.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 10:18:39 +01:00
andrijapanicsb
05e9b11694 Updating pom.xml version numbers for release 4.14.1.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 09:59:32 +01:00
andrijapanicsb
6f96b3b2b3 Updating pom.xml version numbers for release 4.14.0.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-11 15:03:14 +01:00
andrijapanicsb
398e685e01 Updating pom.xml version numbers for release 4.13.2.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-04-29 12:29:12 +01:00
andrijapanicsb
b2ffa3efa5 Updating pom.xml version numbers for release 4.13.1.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-04-23 19:17:09 +01:00
Arthur Halet
3575f5ed52
vrouter in redundant mode acquire guest ips from first ip of th… (#3587) 2020-03-14 09:22:48 +01:00
Nicolas Vazquez
efe00aa7e0
[KVM] Rolling maintenance (#3610) 2020-03-12 16:59:46 +01:00
Abhishek Kumar
8cc70c7d87
CloudStack Kubernetes Service (#3680) 2020-03-06 08:51:23 +01:00
Rohit Yadav
d0e3c577c0 Merge remote-tracking branch 'origin/4.13' 2020-03-05 12:37:51 +05:30
Rohit Yadav
b4fdf22397
kvm: fix/optimize propagating configs (#3911)
Make some changes based on @nvazquez's comments in PR #3491
Fix a bug in #3491
2020-03-05 12:20:51 +05:30
Wei Zhou
458d3b5b47
Multiple networks support for vms in advanced zone with securit… (#3639) 2020-02-19 14:02:12 +00:00
Daan Hoogland
b01e011def Merge release branch 4.13 to master
* 4.13:
  KVM: Propagating changes on host parameters to the agents (#3491)
2020-02-19 14:15:52 +01:00
Wei Zhou
ac7bcde45b
KVM: Propagating changes on host parameters to the agents (#3491) 2020-02-19 13:13:37 +00:00
dahn
4780a27255
Add missing HA config keys (#3776) (#3814)
* Add missing HA config keys (#3776)

* merge conflict bugs fixed

Co-authored-by: mdominka <50666672+mdominka@users.noreply.github.com>
2020-01-15 12:24:05 +01:00
mdominka
54cc73af08 Add missing HA config keys (#3776) 2020-01-14 09:35:34 +01:00
Rakesh
482e7ebf9a New feature: Acquire specific public IP for network (#3775)
Currently in CloudStack, when we click on "Acquire New IP", it
randomly acquires an IP from the pool. With this enhancement, it is
possible to select the IP from a drop-down list of the IPs in that network;
a hedged API sketch follows below. The same applies to a VPC as well.
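A hedged cloudmonkey sketch of acquiring a specific address, assuming this change surfaces an ipaddress parameter on the associateIpAddress API; the UUID and address are placeholders:

```
(admin) mgt01 > associate ipaddress networkid=<network-uuid> ipaddress=192.0.2.10
```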
2019-12-24 10:08:53 +01:00
Anurag Awasthi
4b43c2684f Better tracking of host maintenance and handling of migration jobs (#3425)
* Service layer changes for the new way of tracking maintenance progress

* Fixes after offline code review

* Fix marvin tests

* Change state name and add documentation

* Fix test

* Fix and add more unit tests for different cases

* Fix and enhance Marvin Tests

* Fixes for corner cases

* More fixes and logging

* UI fixes

* Some minor changes and reducing VMs on host for more contained tests

* Fixed ssh client auth problem causing test failure

* Code review changes + fixes + some more logging

* Fix flaky tests by adding delays between host states

* Added fetching only enabled hosts for tests

* Make port blocking KVM specific and refactor to handle failure

* Make failing migrations due to tagged host instead of port blocking

* Added additional check for migrating VMs

* Refactor to use single place for methods checking maintenance states
2019-12-19 16:36:20 +01:00
Sven Vogel
cf6e616d5b
Revert "Add missing HA config keys (#3737)" (#3774)
This reverts commit 16527f1eb070a796732e99cf8fe466b61cf972c3.
2019-12-18 14:54:27 +01:00
mdominka
16527f1eb0 Add missing HA config keys (#3737)
* Add missing HA config keys
* Change time value to seconds
* Change Integer to Long
* Using ConfigKey defaultValue
* Do some code refactoring
* Simplify code
2019-12-17 15:24:53 +01:00
Paul Angus
50fc045f36 Updating pom.xml version numbers for release 4.14.0.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-07 09:57:46 +01:00
Paul Angus
61b8b77913 Updating pom.xml version numbers for release 4.13.1.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-01 13:36:50 +01:00
Paul Angus
8e08b47cc9 Updating pom.xml version numbers for release 4.13.0.0
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-08-20 15:35:49 +01:00
Nicolas Vazquez
bfc08715cc Display VM snapshot tags on usage records (#3560)
* Refactor usage helper tables to include VM snapshot id

* Fix resource type and resource id while listing usage records

* Add defensive checks
2019-08-20 14:20:23 +01:00
Abhishek Kumar
cf347c89ea Merge branch 'master' into storage-offering-domains-zones 2019-06-18 12:52:34 +05:30
Nicolas Vazquez
0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:

Between source and destination hosts within the same cluster
From NFS primary storage to NFS cluster-wide primary storage
Source NFS and destination NFS storage mounted on hosts
In order to enable this functionality, the database should be updated to enable the live storage migration capability for KVM, provided the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

To use this feature, set storage_motion_supported=1 in the hypervisor_capabilities table for KVM. This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference: https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not CentOS 7; however, this works with CentOS 6.
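A hedged SQL sketch of that database change (the table is hypervisor_capabilities in the cloud database; narrow the WHERE clause to your hypervisor version if needed):

```
UPDATE cloud.hypervisor_capabilities
   SET storage_motion_supported = 1
 WHERE hypervisor_type = 'KVM';
```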
Fix for CentOS 7:

Create a repo file under /etc/yum.repos.d/:

```
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
```

Install the qemu-kvm-ev packages:

```
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
```

Reboot the host.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Abhishek Kumar
dc589a442d server: create network offering for specified domain(s) and zone(s)
Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
2019-05-24 10:11:00 +05:30