52 Commits

Author SHA1 Message Date
Rohit Yadav
cb167072a1 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-05-07 16:37:42 +05:30
Harikrishna
32e3bbdcc5
VMware Datastore Cluster primary storage pool synchronisation (#4871)
Support for a datastore cluster as primary storage already exists, but changes made to the datastore cluster in vCenter, such as the addition or removal of a datastore, are not synchronised with CloudStack directly. Until now this required removing the primary storage from CloudStack and adding it again.

This change synchronises the datastore cluster without the need to remove and re-add it.
1. A new API, syncStoragePool, is introduced, which takes the datastore cluster storage pool UUID as a parameter. It checks for any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, the management server creates a new child storage pool in the database under the datastore cluster. If the new child datastore was already added as an individual storage pool, the existing storage pool entry is converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed in vCenter, the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with vCenter's behaviour when adding and removing child datastores.
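A minimal sketch of invoking the new API from CloudMonkey, assuming the datastore cluster pool UUID is passed via an id parameter (the prompt and UUID are illustrative):
```
(admin) mgt01 > sync storagepool id=<datastore-cluster-pool-uuid>
```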
2021-05-07 16:30:54 +05:30
Gabriel Beims Bräscher
de557663ec
Migrate/Stop VMs with local storage when preparing host for maintenance (#4212) 2021-04-19 09:41:42 +02:00
Rohit Yadav
22f6c19248 Merge remote-tracking branch 'origin/4.15' 2021-04-09 13:21:07 +05:30
Rohit Yadav
ca8920dd36 Merge remote-tracking branch 'origin/4.14' into 4.15 2021-04-09 13:17:39 +05:30
Abhishek Kumar
cd60b8d97d
host-allocator: check capacity for suitable hosts (#4884)
Fixes #4517

Adds capacity checks for RandomAllocator (host allocator)

Factors out the host CPU capability and capacity checks (with respect to the service offering) into CapacityManager.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-04-09 12:35:58 +05:30
Rohit Yadav
d4635e3442 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-04-01 14:35:01 +05:30
Rakesh
76ba5c62d9
server: Fix displaying public IP address of shared networks (#4675)
Public IP addresses dedicated to one domain should not be accessible
to other domains. Also, the root admin should be able to list all
public IP addresses in the system.

Currently, the following issues exist:

1. A public IP address assigned to one domain can be accessed by
other sibling domains.

2. If use.system.public.ip is false, child domains should not
see the public IPs of the ROOT domain.

Before fix
```
(test1) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
  "count": 59,
  "publicipaddress": [
```

After fix

```
(test) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
  "count": 10,
```
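For reference, a sketch of checking the related global setting from CloudMonkey (the prompt is illustrative):
```
(admin) mgt01 > list configurations name=use.system.public.ip filter=name,value
```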
2021-04-01 12:39:01 +05:30
sureshanaparti
eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path; config drives are created in the "/config" directory under the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when the VM is synced to the Stopped state on PowerReportMissing (after the grace period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist and cannot be deleted from the datastore

- Check whether the router filesystem is writable before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so both locations are tested
	=> Added new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable

- Fixed an NPE where the template is null for DATA disks: copy the template to the target storage only for the ROOT disk (with template id) and skip DATA disk(s)

* Addressed some issues with a few operations on the PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

Also includes minor improvements in the PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter “supportsStorageSnapshot” (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
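A minimal sketch of adding a PowerFlex primary storage from CloudMonkey, assuming the pool URL follows the gateway/username/password/storage-pool form implied by the UI inputs described above (all values and the exact URL syntax are illustrative, not confirmed here):
```
(admin) mgt01 > create storagepool zoneid=<zone-uuid> scope=zone hypervisor=KVM provider=PowerFlex name=powerflex-pool1 tags=powerflex url=powerflex://<api-user>:<api-password>@<gateway-host>/<storage-pool-name>
```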
2021-02-24 14:58:33 +05:30
Pearl Dsilva
b6fe9f99eb
Network Offering: Allow enabling network and vpc offering during creation (#4564)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-02-13 10:19:06 +00:00
Rohit Yadav
6bde1384ff Merge remote-tracking branch 'origin/4.14' into 4.15
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-05 16:01:01 +05:30
Wei Zhou
78f73c1bc6
server: Fix update capacity for hosts taking a long time if there are many service offerings (#4623)
Steps to reproduce the issue:

(1) Create 10,000 service offerings (via the DB changes below or CloudMonkey).

```
DROP PROCEDURE IF EXISTS cloud.insert_service_offering;

DELIMITER $$
CREATE PROCEDURE cloud.insert_service_offering()
BEGIN
  DECLARE count INT DEFAULT 10000;
  SET @offeringid = (select max(id)+1 from disk_offering);

  WHILE count > 0 DO
    INSERT INTO disk_offering (id,name,uuid,display_text,disk_size,type,created) values (@offeringid,'test-offering-wei',uuid(), 'test-offering-wei',0,'Service',now());
    INSERT INTO service_offering (id,cpu,speed,ram_size) values (@offeringid, 1, 500,256);
    SET @offeringid = @offeringid + 1;
    SET count = count - 1;
  END WHILE;
END $$
DELIMITER ;

CALL cloud.insert_service_offering();

mysql> CALL cloud.insert_service_offering();
Query OK, 0 rows affected (2 min 30.85 sec)
```

(2) Check the total time of the periodic capacity check in CloudStack.

Without this patch, it took 2.5 seconds (2 hosts):
```
2021-01-15 16:10:12,793 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Running Capacity Checker ...
2021-01-15 16:10:15,287 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Done running Capacity Checker ...
```

With this patch, it took 1.3 seconds (2 hosts):
```
2021-01-15 16:12:43,604 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Running Capacity Checker ...
2021-01-15 16:12:44,927 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Done running Capacity Checker ...
```

If there are 100 hosts, the total time will be reduced from 100+ seconds to around 10 seconds.
2021-02-04 14:43:57 +05:30
Pearl Dsilva
e4a504b084
Make global setting non-dynamic (#4505)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-12-01 14:00:35 +05:30
Olivier Lemasle
5f8289ffe9
Re-enable IP address usage hiding (#4327) 2020-11-07 10:42:44 +01:00
nvazquez
d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Harikrishna Patnala
b4a23ea5f6 Allocation logic to skip the datastore cluster and consider only storage pools inside the datastore cluster 2020-10-19 14:57:15 +05:30
Harikrishna Patnala
fb0a96e7fb Check if the datastore is compliant with the storage policy provided in the disk offering.
Added the corresponding manager objects from the PBM SDK to do the job.
Made DAO-layer changes to read the storage policy in the disk offering
2020-10-19 14:57:15 +05:30
Pearl Dsilva
b464fe41c6
server: Secondary Storage Usage Improvements (#4053)
This feature enables the following:
- Balanced migration of data objects from the source image store to destination image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
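A sketch of marking an image store read-only from CloudMonkey before migrating data off it (the id is a placeholder and the readonly parameter name is assumed from the feature description):
```
(admin) mgt01 > update imagestore id=<image-store-uuid> readonly=true
```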
2020-09-17 10:12:10 +05:30
Spaceman1984
d57aa83517
server: Added nfs minor version support (#4180)
This PR adds minor version support when mounting NFS on the SSVM, as requested in #2861.

The global setting "secstorage.nfs.version" has been changed to use the String data type, which allows any minor version to be specified.
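For example, an operator could now set a minor version directly (a sketch; the value shown is illustrative):
```
(admin) mgt01 > update configuration name=secstorage.nfs.version value=4.1
```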
2020-08-19 14:53:38 +05:30
Rohit Yadav
b141b8e256 Merge remote-tracking branch 'origin/4.13' into 4.14 2020-07-07 12:51:46 +05:30
Wei Zhou
4da374b6b4
server: Dedicated hosts should be 'Not Suitable' while finding hosts for VM migration (#4001)
While migrating a VM, in the popup, hosts dedicated to other accounts/domains are also shown as 'Suitable' for migration, which is obviously wrong.

The same issue happens with the findHostsForMigration API
2020-07-04 11:01:41 +05:30
Arthur Halet
3575f5ed52
vrouter in redundant mode acquire guest ips from first ip of th… (#3587) 2020-03-14 09:22:48 +01:00
Nicolas Vazquez
efe00aa7e0
[KVM] Rolling maintenance (#3610) 2020-03-12 16:59:46 +01:00
Abhishek Kumar
8cc70c7d87
CloudStack Kubernetes Service (#3680) 2020-03-06 08:51:23 +01:00
Rohit Yadav
d0e3c577c0 Merge remote-tracking branch 'origin/4.13' 2020-03-05 12:37:51 +05:30
Rohit Yadav
b4fdf22397
kvm: fix/optimize propagating configs (#3911)
Make some changes based on @nvazquez 's comments in PR #3491
Fix a bug in #3491
2020-03-05 12:20:51 +05:30
Wei Zhou
458d3b5b47
Multiple networks support for vms in advanced zone with securit… (#3639) 2020-02-19 14:02:12 +00:00
Daan Hoogland
b01e011def Merge release branch 4.13 to master
* 4.13:
  KVM: Propagating changes on host parameters to the agents (#3491)
2020-02-19 14:15:52 +01:00
Wei Zhou
ac7bcde45b
KVM: Propagating changes on host parameters to the agents (#3491) 2020-02-19 13:13:37 +00:00
dahn
4780a27255
Add missing HA config keys (#3776) (#3814)
* Add missing HA config keys (#3776)

* merge conflict-bugs fixed

Co-authored-by: mdominka <50666672+mdominka@users.noreply.github.com>
2020-01-15 12:24:05 +01:00
mdominka
54cc73af08 Add missing HA config keys (#3776) 2020-01-14 09:35:34 +01:00
Rakesh
482e7ebf9a New feature: Acquire specific public IP for network (#3775)
Currently in CloudStack, when we click on "Acquire New IP", an IP is
acquired randomly from the pool. With this enhancement, it is
possible to select the IP from the drop-down IP list of that network.
The same applies to a VPC as well.
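A sketch of acquiring a specific address from CloudMonkey, assuming the chosen address is passed to associateIpAddress via an ipaddress parameter (parameter name and values are illustrative):
```
(admin) mgt01 > associate ipaddress networkid=<network-uuid> ipaddress=203.0.113.25
```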
2019-12-24 10:08:53 +01:00
Anurag Awasthi
4b43c2684f Better tracking of host maintenance and handling of migration jobs (#3425)
* Service layer changes for the new way of tracking maintenance progress

* Fixes after offline code review

* Fix marvin tests

* Change state name and add documentation

* Fix test

* Fix and add more unit tests for different cases

* Fix and enhance Marvin Tests

* Fixes for corner cases

* More fixes and logging

* UI fixes

* Some minor changes and reducing VMs on host for more contained tests

* Fixed ssh client auth problem causing test failure

* Code review changes + fixes + some more logging

* Fix flaky tests by adding delays between host states

* Added fetching only enabled hosts for tests

* Make port blocking KVM specific and refactor to handle failure

* Make failing migrations due to tagged host instead of port blocking

* Added additional check for migrating VMs

* Refactor to use single place for methods checking maintenance states
2019-12-19 16:36:20 +01:00
Sven Vogel
cf6e616d5b
Revert "Add missing HA config keys (#3737)" (#3774)
This reverts commit 16527f1eb070a796732e99cf8fe466b61cf972c3.
2019-12-18 14:54:27 +01:00
mdominka
16527f1eb0 Add missing HA config keys (#3737)
* Add missing HA config keys
* Change time value to seconds
* Change Integer to Long
* Using ConfigKey defaultValue
* Do some code refactoring
* Simplify code
2019-12-17 15:24:53 +01:00
Nicolas Vazquez
bfc08715cc Display VM snapshot tags on usage records (#3560)
* Refactor usage helper tables to include VM snapshot id

* Fix resource type and resource id while listing usage records

* Add defensive checks
2019-08-20 14:20:23 +01:00
Abhishek Kumar
cf347c89ea Merge branch 'master' into storage-offering-domains-zones 2019-06-18 12:52:34 +05:30
Nicolas Vazquez
0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:

- From source and destination hosts within the same cluster
- From NFS primary storage to NFS cluster-wide primary storage
- Source NFS and destination NFS storage mounted on the hosts

In order to enable this functionality, the database should be updated to enable the live storage migration capability for KVM, if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

To use this feature, set storage_motion_supported=1 in the hypervisor_capability table for KVM. This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available in paid versions of RHEV but not CentOS 7; however, it works with CentOS 6.
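A sketch of that DB update (the table name is assumed to be cloud.hypervisor_capabilities; adjust the WHERE clause to the KVM capability row in use):
```
UPDATE cloud.hypervisor_capabilities
SET storage_motion_supported = 1
WHERE hypervisor_type = 'KVM';
```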
Fix for CentOS 7:

Create repo file on /etc/yum.repos.d/:
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
Reboot host

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Abhishek Kumar
dc589a442d server: create network offering for specified domain(s) and zone(s)
Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
2019-05-24 10:11:00 +05:30
Abhishek Kumar
c85b3e597a server: ability to create disk offerings for domain(s) and zone(s)
Allows creating storage offerings associated with particular domain(s) and zone(s). In the create disk/storage offering UI form, a multi-select control has been added to select the desired zone(s), and the domain select element has been made multi-select.
The createDiskOffering API has been modified to allow passing a list of domain and zone IDs with the keys domainids and zoneids respectively. These lists are stored in the DB in the cloud.disk_offering_details table under the 'domainids' and 'zoneids' keys, as a comma-separated string of IDs. The responses of the create, update and list disk offering APIs will return domainids, domainnames, zoneids and zonenames in the details object of the offering.
The listDiskOfferings API has been modified to allow passing zoneid to return only offerings associated with the zone.
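A sketch of the calls using the parameter keys described above (CloudMonkey; IDs are placeholders):
```
(admin) mgt01 > create diskoffering name=fast-100g displaytext="Fast 100 GB" disksize=100 domainids=<domain-uuid-1>,<domain-uuid-2> zoneids=<zone-uuid>
(admin) mgt01 > list diskofferings zoneid=<zone-uuid> filter=name,disksize
```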

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2019-05-24 09:56:23 +05:30
Daan Hoogland
29918e25e3 Merge release branch 4.11 to 4.12
* 4.11:
  KVM: Fix agents dont reconnect post maintenance (#3239)
2019-05-23 14:29:41 +02:00
Gabriel Beims Bräscher
7c5eca9481
Copy template to target KVM host if needed when migrating local <> local storage (#3154)
* Migrate template to target host if needed.

Fix KVM VM local storage live migration by migrating its template to the
target host if needed.

* Address reviewer comments and add a method that updates the DB template reference

* Remove deprecated Config.PrimaryStorageDownloadWait

* Code formatting of @Inject to follow checkstyle
2019-02-05 00:18:29 -02:00
dahn
b363fd49f7 VMware offline migration (#2848)
* - Offline VM and Volume migration on VMware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations

* Fix indentation of marvin test file and reformat against PEP8

* * Fix few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.

* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one

* Fix unhandled NPE during VM migration

* Revert back to distinct event descriptions for VM to host or storage pool migration

* Reformat test_primary_storage file against PEP-8 and Remove unused imports

* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
2019-01-25 10:05:13 -02:00
Rohit Yadav
92cc4514ea Merge remote-tracking branch 'origin/4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-12-28 15:20:23 +05:30
Rohit Yadav
c49807f8f4 Merge remote-tracking branch 'origin/4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-09-12 14:17:29 +05:30
Gabriel Beims Bräscher
fbf488497f Support IPv6 address in addIpToNic (#2773)
The admin will need to manually add the address to the instance, but
Security Grouping should allow it.
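A sketch of the call from CloudMonkey, assuming the IPv6 address is passed via the existing ipaddress parameter (the address and IDs are illustrative):
```
(admin) mgt01 > add iptonic nicid=<nic-uuid> ipaddress=2001:db8::10
```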
2018-09-11 12:03:19 -03:00
Mike Tutkowski
d12c106a47
Restrict the number of managed clustered file systems per compute cluster (#2500)
* Restrict the number of managed clustered file systems per compute cluster
2018-09-11 08:23:19 -06:00
Rohit Yadav
9146d7b7a0 Merge branch '4.11' 2018-06-06 12:41:18 +05:30
Rafael Weingärtner
d6cbd774b7
[CLOUDSTACK-10323] Allow changing disk offering during volume migration (#2486)
* [CLOUDSTACK-10323] Allow changing disk offering during volume migration

This is a continuation of work developed on PR #2425 (CLOUDSTACK-10240), which provided root admins an override mechanism to move volumes between storage systems types (local/shared) even when the disk offering would not allow such operation. To complete the work, we will now provide a way for administrators to enter a new disk offering that can reflect the new placement of the volume. We will add an extra parameter to allow the root admin inform a new disk offering for the volume. Therefore, when the volume is being migrated, it will be possible to replace the disk offering to reflect the new placement of the volume.

The API method will have the following parameters:

* storageid (required)
* volumeid (required)
* livemigrate (optional)
* newdiskofferingid (optional) – this is the new parameter

The expected behavior is the following:

* If “newdiskofferingid” is not provided the current behavior is maintained. Override mechanism will also keep working as we have seen so far.
* If the “newdiskofferingid” is provided by the admin, we will execute the following checks
** new disk offering mode (local/shared) must match the target storage mode. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** we will check if the new disk offering tags match the target storage tags. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** check if the target storage has the capacity for the new volume. If it does not have enough space, then an exception is thrown and the operator will receive a message indicating the problem.
** check if the size of the volume is the same as the size of the new disk offering. If it is not the same, we will ALLOW the change of the service offering, and a warning message will be logged.

We execute the change of the Disk offering as soon as the migration of the volume finishes. Therefore, if an error happens during the migration and the volume remains in the original storage system, the disk offering will keep reflecting this situation.
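A sketch of the resulting migrateVolume call with the new parameter (CloudMonkey; IDs are placeholders):
```
(admin) mgt01 > migrate volume volumeid=<volume-uuid> storageid=<target-pool-uuid> livemigrate=false newdiskofferingid=<new-disk-offering-uuid>
```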

* Code formatting

* Adding a test to cover migration with new disk offering (#4)

* Adding a test to cover migration with new disk offering

* Update test_volumes.py

* Update test_volumes.py

* fix test_11_migrate_volume_and_change_offering

* Fix typo in Java doc
2018-04-26 20:05:55 -03:00
Rafael Weingärtner
972b8b71d7
CLOUDSTACK-8855 Improve Error Message for Host Alert State and reconnect host API. (#2387)
* CLOUDSTACK-8855 Improve Error Message for Host Alert State

* [CLOUDSTACK-9846] create column to save the content of alert messages

Remove declaration of throws CloudRuntimeException
I also removed some unused variables and comments left behind

This closes #837

* Isolate a problematic test "smoke/test_certauthority_root"
2018-03-14 15:27:43 -03:00