When a domain is deleted, all the settings configured under
the domain scope still exist in the domain_details table.
All the entries for the domain should be deleted as well.
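A quick way to spot such leftovers (a minimal sketch, assuming the standard `cloud.domain` and `cloud.domain_details` schema) is to list detail rows whose parent domain is already marked removed:
```
-- Detail rows that survived the removal of their parent domain
SELECT dd.id, dd.domain_id, dd.name, dd.value
FROM cloud.domain_details dd
JOIN cloud.domain d ON d.id = dd.domain_id
WHERE d.removed IS NOT NULL;
```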
- Fixes inter-cluster migration of VMs
- Allows migration of a stopped VM with disks attached to different, suitable pools
- Improves inter-cluster detached volume migration
- Allows inter-cluster migration (between clusters of the same Pod) for system VMs and VRs on VMware
- Allows storage migration for stopped system VMs and VRs on VMware within the same Pod if the StoragePool has cluster scope (see the sketch below)
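For the last point, a rough illustration (assuming the standard `cloud.storage_pool` schema; the pod id is a placeholder) of listing the cluster-scoped pools in a pod that would be considered as migration targets:
```
-- Cluster-scoped, non-removed pools in a given pod
SELECT id, name, scope, cluster_id, pod_id
FROM cloud.storage_pool
WHERE scope = 'CLUSTER'
  AND pod_id = 1        -- placeholder pod id
  AND removed IS NULL;
```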
Linked Primate PR: https://github.com/apache/cloudstack-primate/pull/789 [Changes merged in this PR after new UI merge]
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/170
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Steps to reproduce the issue
(1) Create two VMs, wei-001 and wei-002, and start them.
(2) Check /etc/cloudstack/dhcpentry.json and /etc/dhcphosts.txt in the VR.
They have entries for both wei-001 and wei-002.
(3) Stop wei-002 and restart the VR (or restart the network with cleanup).
Check /etc/cloudstack/dhcpentry.json and /etc/dhcphosts.txt in the VR.
They have entries for wei-001 only (as wei-002 is stopped).
(4) Expunge wei-002. When it is done,
check /etc/cloudstack/dhcpentry.json and /etc/dhcphosts.txt in the VR.
They no longer have entries for wei-001.
The VR health check then fails at dhcp_check.py and dns_check.py.
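As a cross-check during the reproduction, the entries the VR is expected to keep can be listed from the database (a sketch assuming the standard `cloud.nics` / `cloud.vm_instance` schema; the network id is a placeholder):
```
-- Running, non-expunged VMs on the network whose entries should remain
-- in dhcpentry.json and dhcphosts.txt after wei-002 is expunged
SELECT vm.instance_name, vm.state, n.ip4_address, n.mac_address
FROM cloud.nics n
JOIN cloud.vm_instance vm ON vm.id = n.instance_id
WHERE n.network_id = 204   -- placeholder network id
  AND vm.state = 'Running'
  AND vm.removed IS NULL
  AND n.removed IS NULL;
```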
macchinina-vmware.ova has changed at http://dl.openvm.eu/cloudstack/macchinina/x86_64/
The sha1, sha256 and md5 checksums have been updated for the template file in test_template.py.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Steps to reproduce the issue:
(1) Create 10000 service offerings (via the DB changes below, or via cloudmonkey); a sanity-check query follows the script.
```
DROP PROCEDURE IF EXISTS cloud.insert_service_offering;
DELIMITER $$
CREATE PROCEDURE cloud.insert_service_offering()
BEGIN
DECLARE count INT DEFAULT 10000;
SET @offeringid = (select max(id)+1 from disk_offering);
WHILE count > 0 DO
INSERT INTO disk_offering (id,name,uuid,display_text,disk_size,type,created) values (@offeringid,'test-offering-wei',uuid(), 'test-offering-wei',0,'Service',now());
INSERT INTO service_offering (id,cpu,speed,ram_size) values (@offeringid, 1, 500,256);
SET @offeringid = @offeringid + 1;
SET count = count - 1;
END WHILE;
END $$
DELIMITER ;
mysql> CALL cloud.insert_service_offering();
Query OK, 0 rows affected (2 min 30.85 sec)
```
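A quick sanity check (a sketch assuming the pre-4.16 schema, where `service_offering.id` references `disk_offering.id`) to confirm the bulk insert landed:
```
-- Count the test offerings created by the procedure above
SELECT COUNT(*)
FROM cloud.service_offering so
JOIN cloud.disk_offering do ON do.id = so.id
WHERE do.name = 'test-offering-wei';
```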
(2) Check the total time of the periodical capacity check in CloudStack.
Without this patch, it spends about 2.5 seconds (2 hosts):
```
2021-01-15 16:10:12,793 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Running Capacity Checker ...
2021-01-15 16:10:15,287 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Done running Capacity Checker ...
```
With this patch, it spends about 1.3 seconds (2 hosts):
```
2021-01-15 16:12:43,604 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Running Capacity Checker ...
2021-01-15 16:12:44,927 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Done running Capacity Checker ...
```
If there are 100 hosts, the total time will be reduced from 100+ seconds to around 10 seconds.
* 4.15:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
* 4.14:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
We can use cloudmonkey to scale a VM with a dynamic offering to the same offering but with a different cpunumber or memory.
Enable it in the UI to improve the user experience.
This PR fixes an issue when moving a VM from one account to another.
Steps to reproduce the issue
(1) Create a VM with multiple shared networks (in an advanced zone, or an advanced zone with security groups).
(2) Create another account (in the same domain, which can also access the shared networks).
(3) Move the VM to the new account, with a list of networkids.
Expected result: the VM has nics on the networks in the same order as specified in the API request, and the nics have the same IPs as before.
Actual result: the network order is not the same as specified, and the IPs are changed.
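One way to verify the fix (a sketch assuming the standard `cloud.nics` / `cloud.vm_instance` schema; the VM name is a placeholder) is to compare the nic order and IPs before and after the move:
```
-- Nic order (device_id) and IPs of the moved VM; these should match the
-- networkids order in the API request and keep the previous IPs
SELECT n.device_id, n.network_id, n.ip4_address
FROM cloud.nics n
JOIN cloud.vm_instance vm ON vm.id = n.instance_id
WHERE vm.name = 'my-vm'     -- placeholder VM name
  AND n.removed IS NULL
ORDER BY n.device_id;
```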
The bus type for `data disk` volumes is hardcoded to `virtio` or `scsi` when using virtio-scsi (or based on the template type). Therefore, there is no way to specify the bus type for data disk volumes (as we have for root disks).
This PR intends to replicate the `rootDiskController` behavior to `dataDiskController`, allowing the definition of the controller.
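For reference, such controller settings are typically stored as per-VM details; a minimal look-up (assuming they live in `cloud.user_vm_details`, and assuming the `dataDiskController` key name introduced here):
```
-- Per-VM disk controller settings (key names assumed)
SELECT vm_id, name, value
FROM cloud.user_vm_details
WHERE name IN ('rootDiskController', 'dataDiskController');
```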
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
This changes the deb and rpm packaging to build the UI using npm and bundle
it in the `cloudstack-management` package and a new `cloudstack-ui`
package. The `cloudstack-ui` package will install the UI under
`/usr/share/cloudstack-ui/`. For both packages the config.json will not
be overridden on upgrade; it is hosted at /etc/cloudstack/management
for the cloudstack-management package, and at /etc/cloudstack/ui for the
cloudstack-ui package. The cloudstack-ui package is for advanced users
who only want the UI and want to set up a reverse proxy (separate hosting of the UI).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* server: fix cannot create vm if another vm with same name has been added and removed on the network
Steps to reproduce the issue
(1) Create vm-1 on network-1.
(2) Add vm-1 to network-2.
(3) Remove vm-1 from network-2.
(4) Create another VM with the same name vm-1 on network-2.
Expected result: the operation succeeds.
Actual result: the operation fails.
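For context when debugging, the state after step (3) can be inspected with a query like this (a sketch assuming the standard `cloud.nics` / `cloud.vm_instance` schema); the nic removed in step (3) should still be present as a row with `removed` set:
```
-- All nics (including removed ones) of VMs named vm-1
SELECT vm.name, vm.state, n.network_id, n.removed
FROM cloud.nics n
JOIN cloud.vm_instance vm ON vm.id = n.instance_id
WHERE vm.name = 'vm-1'
ORDER BY n.network_id;
```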
* #4600: add back a removed line