When we add a new guest OS, sometimes the corresponding records in guest_os_hypervisor are missed.
However, the guest disk model (virtio/ide) is determined by the record in that table.
This causes some new guest OSes (e.g. Debian 8/9) to use an e1000 NIC instead of virtio, and an IDE disk instead of a virtio disk.
To fix the issue permanently, pass the guest OS name from guest_os if no record for KVM is found in guest_os_hypervisor.
Related commit: 7ac9f00eeeb4cd37ec39efeba066e799b581b1a0
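A minimal sketch of the fallback, with illustrative (not exact) CloudStack class and DAO names:
```
// Illustrative sketch only; DAO/class names and signatures are assumptions.
GuestOSHypervisorVO mapping = guestOsHypervisorDao.findByOsIdAndHypervisor(
        guestOs.getId(), HypervisorType.KVM.toString(), hypervisorVersion);
String guestOsName;
if (mapping != null) {
    guestOsName = mapping.getGuestOsName();   // mapped name from guest_os_hypervisor
} else {
    guestOsName = guestOs.getDisplayName();   // fallback: display name from guest_os
}
vmTO.setOs(guestOsName);                      // the KVM side can then pick virtio for OSes it knows
```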
When the state of the site-to-site VPN changes, the check
is done on all the virtual routers, including the internal
load balancing VM. It is not needed to check the state of
the internal load balancing VM.
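A hedged sketch of the change (enum and method names are assumptions): skip routers with the internal LB role before running the check.
```
// Illustrative sketch; the actual method and role names may differ.
for (DomainRouterVO router : routers) {
    if (router.getRole() == VirtualRouter.Role.INTERNAL_LB_VM) {
        continue;   // internal LB VMs never terminate site-to-site VPN, so skip them
    }
    updateSite2SiteVpnConnectionState(router);
}
```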
This implements the systemvm list API response creator to find and use
the host record for an SSVM/CPVM to get the agent status and other
details like the last disconnected date and agent version.
Fixes #3875
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This makes the listSystemVms API return the host status (agent state),
version and last pinged information. This makes it possible for UIs
to call a single API to get this information.
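Roughly, the response creator looks up the host entry that the SSVM/CPVM agent registers and copies the agent details onto the system VM response; a simplified sketch with assumed field and method names:
```
// Simplified sketch; real response/DAO names may differ.
HostVO agentHost = hostDao.findByName(systemVm.getInstanceName());   // the ssvm/cpvm agent registers as a host
if (agentHost != null) {
    response.setAgentState(agentHost.getStatus().toString());        // e.g. Up / Disconnected
    response.setAgentVersion(agentHost.getVersion());
    response.setLastPinged(agentHost.getLastPinged());
    response.setDisconnectedOn(agentHost.getDisconnectedOn());
}
```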
After a local template is uploaded via browser, the generated usage event with type = "TEMPLATE.CREATE" is persisted with the data store ID instead of the zone ID in the zone_id column. The fix refactors the upload monitor logic, which currently sets the data store ID in the zone ID column of the created "TEMPLATE.CREATE" usage event once the upload completes. The refactored code queries the DB for the data store and sets its associated zone ID in the usage event instead.
The fix produces the same behaviour as when registering a template from a URL.
The fix also applies to uploading a VOLUME locally via browser.
Fixes#3783
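The gist of the refactor, as a hedged sketch (manager calls and the usage-event overload are assumptions): resolve the image store that received the upload and publish the event with that store's zone ID.
```
// Illustrative sketch; exact calls are assumptions, not the final implementation.
DataStore imageStore = dataStoreManager.getDataStore(storeId, DataStoreRole.Image);
long zoneId = imageStore.getScope().getScopeId();   // the zone the image store belongs to
// publish TEMPLATE.CREATE with the store's zone id instead of the store id
UsageEventUtils.publishUsageEvent(EventTypes.EVENT_TEMPLATE_CREATE,
        template.getAccountId(), zoneId, template.getId(), template.getName(),
        null, null, template.getSize());
```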
As reported in the issue, creating a volume from a pure snapshot fails with an NPE. This is due to the order of calls: access to the disk offering is checked before the disk offering value itself is checked for null. This PR fixes that.
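In other words, the disk offering has to be resolved and null-checked before its access is verified; a simplified sketch of the corrected ordering (method names are assumptions):
```
// Simplified sketch; not the exact CloudStack signatures.
Long offeringId = diskOfferingId != null ? diskOfferingId : snapshot.getDiskOfferingId();
DiskOfferingVO diskOffering = diskOfferingDao.findById(offeringId);
if (diskOffering == null) {
    throw new InvalidParameterValueException("Unable to find a disk offering for the volume");
}
// only after the null check do we verify that the caller may use this offering
accountManager.checkAccess(caller, diskOffering, zone);
```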
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
During VM migration on KVM, the libvirt qemu hook script changes the bridge names to the bridges of the guest networks. This works for user VMs. However, a virtual router also has NICs on the control and public networks. If control/public use different physical networks than the guest network, the virtual router cannot be migrated.
Fixes: #2783
Fixes#3191
When a template is registered, the code stores the md5sum of the downloaded file in the vm_template table. However, this downloaded file could be deleted after the template installation if it is not an actual (.qcow2, .ova, etc.) file. When the user copies a template using the copyTemplate API, the actual template file is copied across the image stores. Matching the checksum of the copied template file against the stored value from the vm_template table will therefore result in a mismatch.
The change sets an empty checksum value for the copied template when passing it to the download service, which allows the wrong-checksum check to be skipped for the copy during install.
However, this results in a change of the checksum value for the concerned template entry in the vm_template table after the template is installed.
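A rough sketch of the approach (object and method names here are illustrative, not the exact call site): blank the checksum before handing the copy to the download/install path, then let the install refresh the stored value.
```
// Rough sketch; names are illustrative, not the exact call site.
// Blank the checksum so the install step does not compare against the stale md5 in vm_template.
copiedTemplate.setChecksum("");
downloadService.downloadTemplateToStorage(copiedTemplate, destImageStore, callback);
// after install completes, vm_template is updated with the checksum of the installed file
```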
Co-authored-by: dahn <daan.hoogland@gmail.com>
The List Management Servers API returns a list of all the management servers but fails when trying to list by id or name. This change ensures that it fetches the details according to the parameters passed.
Fixes: #3833
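A minimal sketch of the fix (DAO and search names are assumptions): apply the optional id/name parameters to the search criteria instead of always listing everything.
```
// Minimal sketch; actual DAO, SearchBuilder setup and column names may differ.
SearchCriteria<ManagementServerHostVO> sc = searchBuilder.create();
if (id != null) {
    sc.setParameters("id", id);       // assumes an "id" condition was added to the SearchBuilder
}
if (name != null) {
    sc.setParameters("name", name);   // same for "name"
}
return managementServerHostDao.search(sc, searchFilter);
```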
The metrics API is missing a few properties that are present in the corresponding resource.
Fixes#3831
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit@apache.org>
Since 4.11.3, haproxy is always restarted when a LB rule is added/deleted.
When haproxy is started, the processes are
```
root@r-854-VM:~# ps aux |grep haproxy
root 22272 0.0 0.2 4036 668 ? Ss 07:52 0:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy 22274 0.0 2.3 38444 5856 ? S 07:52 0:00 /usr/sbin/haproxy-master
haproxy 22275 0.0 0.3 38444 880 ? Ss 07:52 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
```
When haproxy is reloaded, the processes are
```
root@r-854-VM:~# ps aux |grep haproxy
root 22272 0.0 0.2 4168 632 ? Ss 07:52 0:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy 22283 0.0 2.3 38444 5884 ? S 07:53 0:00 /usr/sbin/haproxy-master
haproxy 22286 0.0 0.3 38444 880 ? Ss 07:53 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 22275
```
We need to change the pid file from /var/run/haproxy.pid to /run/haproxy.pid, so that haproxy is reloaded instead of restarted.
Steps to reproduce the issue
(1) create a custom service offering
(2) create a vm with the offering
(3) update the vm with displayvm=false; an error is returned:
(local) > update virtualmachine id=f33fd06a-7643-40d1-833f-272845d9ba09 displayvm=false
Error 530: {"updatevirtualmachineresponse":{"uuidList":[],"errorcode":530,"cserrorcode":9999}}
When starting a VM, or migrating a VM away from a host in maintenance, CloudStack checks the capacity of all hosts and chooses one. If there are hundreds of hosts on the platform, this can take several seconds. By the time CloudStack has chosen a host and starts/migrates the VM onto it, the resource consumption of that host may already have changed. This normally happens when we start/migrate multiple VMs.
It would be better to double-check the host capacity when starting a VM on a host.
This PR includes the fix for the cpu core capacity when starting/migrating a VM.
When we calculate the resource consumption of a host, we need to take VMs in the following states into account: Running, Starting, Stopping, Migrating (to the host), as well as VMs that are migrating away from the host. This is because, when a VM is stopped, its resources on the host are released only once it has actually stopped; and when a VM is migrated, the resources on the destination host are increased before the migration starts, while the resources on the source host are decreased only after the migration succeeds.
In CloudStack there is a task named CapacityChecker which runs every 5 minutes (capacity.check.period = 300000 ms by default). It recalculates the capacity of all hosts. However, it only takes VMs in the Running and Starting states into consideration. We have faced some issues during host maintenance because of this.
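As a hedged illustration of the intended accounting (DAO method names are assumptions), the recalculation should also count VMs that are stopping or migrating:
```
// Illustrative sketch; the real capacity checker uses different helpers.
List<VMInstanceVO> vms = vmInstanceDao.listByHostAndStates(hostId,
        State.Running, State.Starting, State.Stopping, State.Migrating);
// VMs migrating *away* still hold their resources on this host until the migration succeeds
vms.addAll(vmInstanceDao.listByLastHostAndState(hostId, State.Migrating));
long usedCpuMhz = 0, usedRamBytes = 0;
for (VMInstanceVO vm : vms) {
    ServiceOfferingVO offering = serviceOfferingDao.findById(vm.getServiceOfferingId());
    usedCpuMhz += (long) offering.getCpu() * offering.getSpeed();
    usedRamBytes += offering.getRamSize() * 1024L * 1024L;
}
```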
Steps to reproduce the issue
(1) migrate N vms from host A to host B; the used cpu/ram capacity of host B increases before the migration starts.
(2) the capacity check recalculates the capacity of the hosts; the used capacity of host B is reset to its original value (not including the VMs in the Migrating state).
(3) migrate some more vms from other hosts to host B; the migrations are allowed by CloudStack (because the used capacity is incorrect). If the actual used memory exceeds the physical memory on the host, there might be some critical issues (for example, libvirt dies).
When we create a VM in a network with redundant VRs, the lease file in the VM (for example /var/lib/dhcp/dhclient.eth0.leases) shows that the dhcp-server-identifier is the guest IP (not the VIP/gateway) of the master VR. That is the IP address the VM fetches its password and metadata from.
If we stop the master VR (the backup then becomes master) or restart the network with cleanup (the VRs are re-created), the guest IP of the master VR changes, so VMs are no longer able to get metadata/ssh keys using the IPs in the DHCP lease file.
Setting up the metadata/password/DHCP server on the gateway instead of the guest IP in redundant VRs fixes these issues.
Fixes#3409
Steps to reproduce the issue
(1) create an account (test)
(2) create a vm with the account (test)
(3) login with admin, and upgrade the vm to another offering
(4) the resource count (cpu, memory) of the admin account increases, not that of the account (test).
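The gist of the fix, sketched with illustrative names: apply the resource-count delta to the VM owner's account rather than to the caller.
```
// Illustrative sketch; service names and overloads are assumptions.
Account owner = accountDao.findById(vm.getAccountId());   // the "test" account, not the admin caller
resourceLimitMgr.decrementResourceCount(owner.getId(), ResourceType.cpu, (long) currentCpu);
resourceLimitMgr.incrementResourceCount(owner.getId(), ResourceType.cpu, (long) newCpu);
resourceLimitMgr.decrementResourceCount(owner.getId(), ResourceType.memory, (long) currentRamMb);
resourceLimitMgr.incrementResourceCount(owner.getId(), ResourceType.memory, (long) newRamMb);
```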
* Increase lease time to infinite
Lease time is set to effectively infinite (36000+ days) since we fully control the VM lifecycle via CloudStack.
An infinite lease time helps avoid some edge cases which could cause a DHCPNAK to be sent to VMs, since
(RHEL) systems lose their routes when they receive a DHCPNAK.
When a VM is expunged, its active lease and DHCP/DNS config are properly removed from the related files in the VR.
* desc fix
Logrotate should only touch security_group.log and resizevolume.log
as the agent.log is already rotated by log4j inside the Agent.
Having two systems trying to rotate agent.log leads to all kinds of
issues like having binary (compressed) data in the middle of a plain-text
log file.
In addition, we do not have to rotate the logs every day, only when they
grow larger than 10M. On fairly idle hypervisors this should not cause
those logs to rotate every day.
Signed-off-by: Wido den Hollander <wido@widodh.nl>