This PR restores the previous level of leniency when listing templates on stores that have been marked as deleted/removed by updating the DB.
While CloudStack doesn't allow deleting stores that still have resources on them, users may mimic the deletion of a store by merely updating the DB. In that case, listing templates fails with an NPE (as seen in #4606).
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
CNI plugin release naming has changed, see https://github.com/containernetworking/plugins/releases
Releases are named for the host OS from v0.8.0 onwards.
This change adds a check for a 404 response code and retries the download with the new naming, as sketched below.
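A minimal sketch of the retry logic in Python (illustrative only; function name and download path are assumptions, asset names follow the upstream release pages, where v0.8.0 added the OS component to the file name):
```python
import urllib.error
import urllib.request

BASE = "https://github.com/containernetworking/plugins/releases/download"

def fetch_cni_plugins(version: str, arch: str = "amd64") -> bytes:
    # Try the old asset name first; on a 404, fall back to the OS-named
    # asset used from v0.8.0 onwards (cni-plugins-linux-<arch>-v<ver>.tgz).
    candidates = [
        f"{BASE}/v{version}/cni-plugins-{arch}-v{version}.tgz",
        f"{BASE}/v{version}/cni-plugins-linux-{arch}-v{version}.tgz",
    ]
    for url in candidates:
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if e.code != 404:  # only a 404 triggers the retry
                raise
    raise FileNotFoundError("no matching CNI plugins release asset")
```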
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Prevent KVM from performing volume migrations of running instances
KVM cannot modify an instance's definition while it is in the running state; therefore, it is not possible to change a volume's backend location easily.
There is a problem in the `migrateVolume` API: this API command ignores that limitation and causes an inconsistency in the database. ACS processes the migrate command, copies the volume to the destination storage, modifies the database, and finishes the process with success. However, the running backend is still using the old volume file.
This PR prevents KVM from performing volume migrations while KVM instances are in the running state and informs the user of an alternative API command that enables such an operation on running instances, as sketched below.
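A sketch of the added guard, shown in Python for brevity (the actual change is in `VolumeApiServiceImpl.java`; the exact message and the alternative API name are assumptions, presumably `migrateVirtualMachineWithVolume`):
```python
def validate_volume_migration(hypervisor: str, vm_state: str) -> None:
    # Hedged sketch: reject live volume migration on KVM and point the user
    # at an API that can migrate the VM together with its volumes.
    if hypervisor == "KVM" and vm_state == "Running":
        raise ValueError(
            "KVM does not support migrating volumes of running instances; "
            "use the migrateVirtualMachineWithVolume API instead.")
```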
* Update VolumeApiServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
Co-authored-by: Rohit Yadav <rohit@apache.org>
In previous CloudStack versions, qcow2 images do not have a backing file format recorded in their metadata.
However, newer qemu versions require it, for example qemu 4.2 on Ubuntu 20.04.
Steps to reproduce the issue:
(1) Install CloudStack 4.14 or an earlier version, on Ubuntu 19.04 or 18.04/16.04 LTS.
(2) Create VMs.
(3) Upgrade to 4.15 and upgrade the OS to Ubuntu 20.04, or install a new server with Ubuntu 20.04.
(4) Migrate a VM from the old Ubuntu version to Ubuntu 20.04; it fails with the exception below:
```
2021-02-04 13:43:07,397 DEBUG [resource.wrapper.LibvirtMigrateCommandWrapper] (agentRequest-Handler-1:null) (logid:93da9385) ExecutionException : org.libvirt.LibvirtException: Requested operation is not valid: format of backing image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/66990fcc-fd98-4932-9649-989bf6583d59' of image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/a3dd1f0f-2557-4e07-951c-e4eb7b3f38b2' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)
```
(5) Stop the VM and start it on the Ubuntu 20.04 server; it fails with the exception below:
```
2021-02-04 13:46:29,766 WARN [resource.wrapper.LibvirtStartCommandWrapper] (agentRequest-Handler-5:null) (logid:b54745a7) LibvirtException
org.libvirt.LibvirtException: Requested operation is not valid: format of backing image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/66990fcc-fd98-4932-9649-989bf6583d59' of image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/a3dd1f0f-2557-4e07-951c-e4eb7b3f38b2' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)
```
To make testing easier, steps 1 and 2 can be replaced by
```
qemu-img create -f qcow2 -b <backing file> <qcow2 image>
```
so that the qcow2 image does not have a backing file format recorded.
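As a hedged illustration of one way to repair an affected image (an assumption, not necessarily this PR's exact fix): `qemu-img rebase -u` can stamp the missing backing format into the metadata without copying any data. A Python sketch:
```python
import subprocess

def set_backing_format(image: str, backing_file: str) -> None:
    # Record the backing *format* in an existing qcow2 header so newer qemu
    # (e.g. 4.2 on Ubuntu 20.04) accepts the backing chain.
    # "-u" performs an unsafe (metadata-only) rebase: no image data is touched.
    subprocess.run(
        ["qemu-img", "rebase", "-u", "-f", "qcow2",
         "-F", "qcow2", "-b", backing_file, image],
        check=True)
```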
* Show network name in exception message
* Update server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
When a domain is deleted, all the settings configured under
the domain scope still exist in the domain_details table.
All the entries for the domain should be deleted as well.
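The equivalent cleanup, sketched as direct SQL from Python (illustrative only; the actual fix belongs in the domain deletion path, not in ad hoc DB edits):
```python
import pymysql  # assumption: shown purely as a direct-SQL illustration

def purge_domain_details(conn, domain_id: int) -> None:
    # When a domain is removed, its scoped settings in domain_details
    # should be removed with it.
    with conn.cursor() as cur:
        cur.execute("DELETE FROM domain_details WHERE domain_id = %s",
                    (domain_id,))
    conn.commit()
```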
Steps to reproduce the issue:
(1) Create two VMs, wei-001 and wei-002, and start them.
(2) Check /etc/cloudstack/dhcpentry.json and /etc/dhcphosts.txt in the VR:
they have entries for both wei-001 and wei-002.
(3) Stop wei-002 and restart the VR (or restart the network with cleanup).
Check /etc/cloudstack/dhcpentry.json and /etc/dhcphosts.txt in the VR:
they have entries for wei-001 only (as wei-002 is stopped).
(4) Expunge wei-002. When it is done,
check /etc/cloudstack/dhcpentry.json and /etc/dhcphosts.txt in the VR:
they no longer have entries for wei-001.
VR health checks then fail at dhcp_check.py and dns_check.py (see the sketch below).
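A sketch of the expected VR-side behaviour (the file layout and key name below are assumptions): expunging one VM should drop only that VM's record from dhcpentry.json, leaving every other VM's entry intact.
```python
import json

def remove_dhcp_entry(path: str, vm_name: str) -> None:
    # Remove only the expunged VM's record; keep all other entries.
    with open(path) as f:
        entries = json.load(f)
    entries = [e for e in entries if e.get("host_name") != vm_name]
    with open(path, "w") as f:
        json.dump(entries, f, indent=2)
```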
macchinina-vmware.ova has changed at http://dl.openvm.eu/cloudstack/macchinina/x86_64/
The sha1, sha256 and md5 checksums have been updated for the template file in test_template.py.
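For reference, a small sketch of how the new digests can be recomputed locally before updating test_template.py:
```python
import hashlib

def checksums(path: str) -> dict:
    # Compute the three digests recorded for the template in one pass.
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}
```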
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Steps to reproduce the issue:
(1) Create 10000 service offerings (by the DB changes below, or cloudmonkey).
```
DROP PROCEDURE IF EXISTS cloud.insert_service_offering;
DELIMITER $$
CREATE PROCEDURE cloud.insert_service_offering()
BEGIN
DECLARE count INT DEFAULT 10000;
SET @offeringid = (select max(id)+1 from disk_offering);
WHILE count > 0 DO
INSERT INTO disk_offering (id,name,uuid,display_text,disk_size,type,created) values (@offeringid,'test-offering-wei',uuid(), 'test-offering-wei',0,'Service',now());
INSERT INTO service_offering (id,cpu,speed,ram_size) values (@offeringid, 1, 500,256);
SET @offeringid = @offeringid + 1;
SET count = count - 1;
END WHILE;
END $$
DELIMITER ;
CALL cloud.insert_service_offering();
mysql> CALL cloud.insert_service_offering();
Query OK, 0 rows affected (2 min 30.85 sec)
```
(2) Check the total time of the periodic capacity check in CloudStack.
Without this patch, it takes 2.5 seconds (2 hosts):
```
2021-01-15 16:10:12,793 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Running Capacity Checker ...
2021-01-15 16:10:15,287 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Done running Capacity Checker ...
```
With this patch, it takes 1.3 seconds (2 hosts):
```
2021-01-15 16:12:43,604 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Running Capacity Checker ...
2021-01-15 16:12:44,927 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Done running Capacity Checker ...
```
If there are 100 hosts, the total time will be reduced from 100+ seconds to around 10 seconds.
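The commit message does not spell out the mechanism; one common pattern behind this kind of speedup is to stop issuing one offering lookup per VM. A hedged, self-contained illustration (hypothetical data structures, not CloudStack code):
```python
# Index all service offerings once per capacity-check run instead of
# querying the DB for each VM on each host.
offerings = {i: {"cpu": 1, "speed": 500, "ram": 256} for i in range(10000)}
vms = [{"offering_id": i % 10000} for i in range(5000)]

def capacity_totals(vms, offerings):
    cpu = ram = 0
    for vm in vms:
        o = offerings[vm["offering_id"]]  # dict lookup, no DB round trip
        cpu += o["cpu"] * o["speed"]
        ram += o["ram"]
    return cpu, ram

print(capacity_totals(vms, offerings))
```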
* 4.14:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
We can use cloudmonkey to scale a VM with a dynamic offering to the same offering but with a different cpunumber or memory.
This PR enables the same in the UI to improve the user experience.
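A hedged sketch of the underlying API call (all ids are placeholders; it is shown against the optional, unauthenticated integration port to avoid request signing, which must be enabled on the management server):
```python
import urllib.parse
import urllib.request

# Scale a VM to the *same* dynamic offering with a different
# cpunumber/memory via scaleVirtualMachine.
params = {
    "command": "scaleVirtualMachine",
    "id": "<vm-uuid>",
    "serviceofferingid": "<current-offering-uuid>",
    "details[0].cpuNumber": "4",
    "details[0].memory": "2048",
    "response": "json",
}
url = "http://<mgmt-server>:8096/client/api?" + urllib.parse.urlencode(params)
print(urllib.request.urlopen(url).read().decode())
```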
This PR fixes an issue when moving a VM from one account to another.
Steps to reproduce the issue:
(1) Create a VM with multiple shared networks (in an advanced zone, or an advanced zone with security groups).
(2) Create another account (in the same domain, which can also access the shared networks).
(3) Move the VM to the new account, with a list of network IDs.
Expected result: the VM has NICs on the networks in the same order as specified in the API request, and the NICs keep the same IPs as before.
Actual result: the network order is not the same as specified, and the IPs are changed.
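A sketch of the expected ordering behaviour (illustrative names and data):
```python
def order_nics(nics: list, networkids: list) -> list:
    # NICs on the moved VM should follow the order of the networkids list
    # from the API request, each keeping its previous IP.
    by_network = {n["networkid"]: n for n in nics}
    return [by_network[nid] for nid in networkids]

nics = [{"networkid": "net-2", "ip": "10.1.2.5"},
        {"networkid": "net-1", "ip": "10.1.1.5"}]
print(order_nics(nics, ["net-1", "net-2"]))  # net-1 first, IPs unchanged
```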
This changes deb and rpm packaging to build the UI using npm and bundle
it in the `cloudstack-management` package and a new `cloudstack-ui`
package. The `cloudstack-ui` package installs the UI under
`/usr/share/cloudstack-ui/`. For both packages, config.json is not
overridden on upgrade; it is hosted at /etc/cloudstack/management
for the cloudstack-management package and at /etc/cloudstack/ui for the
cloudstack-ui package. The cloudstack-ui package is for advanced users
who want only the UI and want to set up a reverse proxy (separate hosting of the UI).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* server: fix failure to create a VM when another VM with the same name was added to and then removed from the network
Steps to reproduce the issue:
(1) Create vm-1 on network-1.
(2) Add vm-1 to network-2.
(3) Remove vm-1 from network-2.
(4) Create another VM with the same name, vm-1, on network-2.
Expected result: the operation succeeds.
Actual result: the operation fails.
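A sketch of the corrected check (illustrative; the real change presumably filters at the DB layer): a name only conflicts if an entry on the network is still active, i.e. not marked removed.
```python
def name_in_use(entries: list, name: str, network_id: str) -> bool:
    # Only non-removed entries on the network count toward a name conflict.
    return any(e["name"] == name
               and e["network_id"] == network_id
               and e.get("removed") is None
               for e in entries)

entries = [{"name": "vm-1", "network_id": "network-2", "removed": "2021-02-01"}]
print(name_in_use(entries, "vm-1", "network-2"))  # False: entry was removed
```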
* #4600: add back a removed line
This merges the apache/cloudstack-primate UI under the ui directory as discussed and voted under:
https://markmail.org/message/bgnn4xkjnlzseeuv
ASF infra requested to archive the apache/cloudstack-primate repo here:
https://issues.apache.org/jira/browse/INFRA-21321
The purpose of this PR is to do a merge commit of the import as well as perform basic Travis and packaging checks. I'll merge once these basic checks (Travis and packaging) pass.