As previously described in PR #3929:
If a VM has an attached ISO, migration fails with the error message "org.libvirt.LibvirtException: Cannot access storage file /mnt/b33e5a1d-e4ea-3465-b6ac-c98dc8ff8af0/207-2-cc5fd717-2d57-3bb3-bcf6-2c930268db6c.iso"
In 4.13, listing SSH key pairs with a keyword ignores the search by name even when a name is specified.
Fixes an issue in #3098
For example,
(local) > list sshkeypairs name=wei keyword=wei filter=name
{
"count": 3,
"sshkeypair": [
{
"name": "wei3"
},
{
"name": "wei2"
},
{
"name": "wei"
}
]
}
With this patch, it gives the correct result:
(local) > list sshkeypairs name=wei keyword=wei filter=name
{
"count": 1,
"sshkeypair": [
{
"name": "wei"
}
]
}
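A minimal sketch of the fixed matching behaviour, using hypothetical in-memory types (the real fix applies both conditions to the DB search criteria): the name filter and the keyword filter are ANDed instead of the keyword replacing the name.

import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Hypothetical sketch: apply both the exact-name filter and the keyword filter.
public class SshKeyPairSearchSketch {
    static class SshKeyPair {
        final String name;
        SshKeyPair(String name) { this.name = name; }
    }

    // A key pair matches only if it equals the name (when given) AND contains the keyword (when given).
    static List<SshKeyPair> list(List<SshKeyPair> all, String name, String keyword) {
        List<SshKeyPair> result = new ArrayList<>();
        for (SshKeyPair pair : all) {
            if (name != null && !pair.name.equals(name)) {
                continue;
            }
            if (keyword != null
                    && !pair.name.toLowerCase(Locale.ROOT).contains(keyword.toLowerCase(Locale.ROOT))) {
                continue;
            }
            result.add(pair);
        }
        return result;
    }

    public static void main(String[] args) {
        List<SshKeyPair> all = List.of(new SshKeyPair("wei"), new SshKeyPair("wei2"), new SshKeyPair("wei3"));
        System.out.println(list(all, "wei", "wei").size()); // prints 1, matching the fixed output above
    }
}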
When both routers of a VPC are in MASTER state,
multiple alerts are sent, scaling with the number of tiers in the VPC.
If the VPC has 3 tiers, 6 alerts are sent. This becomes a problem
if the VPC has more than 10 networks in it.
Instead of checking the router status for all the tiers in the VPC,
check the status of the router for just one tier in the VPC so that
multiple duplicate alerts are avoided.
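A rough sketch of the idea with hypothetical types; it checks the redundant state against a single tier of the VPC, since both routers are shared by every tier.

import java.util.List;

// Hypothetical sketch: query the redundant-router state for only the first tier of the
// VPC instead of once per tier, so a dual-MASTER condition raises a single alert.
public class VpcRouterStateCheckSketch {
    enum RouterState { MASTER, BACKUP, UNKNOWN }

    interface Network { long getId(); }
    interface RouterStateService {
        RouterState getRouterStateOnNetwork(long routerId, long networkId);
    }

    static boolean bothRoutersMaster(RouterStateService svc, long router1, long router2, List<Network> vpcTiers) {
        if (vpcTiers.isEmpty()) {
            return false;
        }
        // One tier is representative of the whole VPC: the routers serve all tiers.
        long networkId = vpcTiers.get(0).getId();
        return svc.getRouterStateOnNetwork(router1, networkId) == RouterState.MASTER
                && svc.getRouterStateOnNetwork(router2, networkId) == RouterState.MASTER;
    }
}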
Currently, CloudStack sends the VM password only to the first
router in the network, even if it is the backup, and returns the result.
In some cases the first router will be the backup and the second will be the master.
Since the password server does not run on the backup, when the user resets the password
it is sent to the first router, which may be the backup.
In that case the new password is never stored by the password server and the user cannot log in with the new password.
This change ensures that the password is sent to both routers instead
of only the first, so that the new password is stored on the master router.
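A minimal sketch of the intended behaviour, with hypothetical types: the password is pushed to every router of the network rather than only the first one, so the master (whichever of the two it is) stores it.

import java.util.List;

// Hypothetical sketch: push the reset password to every router of the network.
public class SavePasswordSketch {
    interface Router { String getName(); }
    interface PasswordService {
        boolean savePasswordToRouter(Router router, String vmName, String password);
    }

    static boolean savePassword(PasswordService svc, List<Router> routers, String vmName, String password) {
        boolean savedSomewhere = false;
        for (Router router : routers) {
            // Do not stop after the first router; it may be the BACKUP,
            // whose password server is not running.
            savedSomewhere |= svc.savePasswordToRouter(router, vmName, password);
        }
        return savedSomewhere;
    }
}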
When a new guest OS is added, the corresponding records in guest_os_hypervisor are sometimes missed.
However, the guest disk model (virtio/ide) is determined by the records in that table.
This causes some new guest OS types (e.g. Debian 8/9) to use an e1000 NIC instead of virtio, and an IDE disk instead of a virtio disk.
To fix the issue permanently, pass the guest OS name from guest_os if no KVM record is found in guest_os_hypervisor.
Related commit: 7ac9f00eeeb4cd37ec39efeba066e799b581b1a0
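A hedged sketch of the fallback, with hypothetical DAO names: when no guest_os_hypervisor record exists for KVM, the display name from guest_os is passed instead, so known OS names such as Debian 8/9 can still be matched to virtio models.

// Hypothetical sketch: fall back to the guest_os display name when no KVM mapping exists.
public class KvmGuestOsMappingSketch {
    interface GuestOsHypervisorDao {
        // Returns the hypervisor-specific OS name, or null when no record exists.
        String findGuestOsNameForKvm(long guestOsId);
    }
    interface GuestOsDao {
        String getDisplayName(long guestOsId);
    }

    static String resolvePlatformEmulator(GuestOsHypervisorDao mappingDao, GuestOsDao guestOsDao, long guestOsId) {
        String mapped = mappingDao.findGuestOsNameForKvm(guestOsId);
        if (mapped != null) {
            return mapped;
        }
        // Fallback: pass the guest_os display name so the agent can still pick virtio models.
        return guestOsDao.getDisplayName(guestOsId);
    }
}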
When the state of the site-to-site VPN changes, the check
is done on all the virtual routers, including the internal
load balancing VM. The check is not needed for the
internal load balancing VM.
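A small sketch with hypothetical types: internal load balancer VMs are filtered out before the site-to-site VPN state check runs.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: skip internal load balancer VMs in the VPN state check.
public class S2sVpnCheckSketch {
    enum Role { VIRTUAL_ROUTER, INTERNAL_LB_VM }

    interface RouterVm { Role getRole(); }

    static List<RouterVm> routersToCheck(List<RouterVm> allRouters) {
        List<RouterVm> result = new ArrayList<>();
        for (RouterVm vm : allRouters) {
            if (vm.getRole() == Role.INTERNAL_LB_VM) {
                continue; // internal LB VMs never terminate site-to-site VPN connections
            }
            result.add(vm);
        }
        return result;
    }
}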
This implements the systemvm list API response creator to find and use
the host record for an ssvm/cpvm to get the agent status and other
details such as the last disconnected date and agent version.
Fixes 3875
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This makes the listSystemVms API return the host status (agent state),
version and last-pinged information, making it possible for UIs
to call a single API to get this information.
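A hedged sketch of the response-creator idea, with hypothetical names: the host record backing the system VM is looked up and its agent state, version and last-ping time are copied into the API response.

// Hypothetical sketch: enrich the system VM response with the backing host's agent details.
public class SystemVmResponseSketch {
    static class HostRecord {
        String agentState;
        String agentVersion;
        java.util.Date lastPinged;
    }
    interface HostDao {
        // Returns the host row whose name matches the system VM instance, or null.
        HostRecord findByName(String systemVmName);
    }
    static class SystemVmResponse {
        String agentState;
        String agentVersion;
        java.util.Date lastPinged;
    }

    static void fillAgentDetails(HostDao hostDao, String systemVmName, SystemVmResponse response) {
        HostRecord host = hostDao.findByName(systemVmName);
        if (host == null) {
            return; // system VM not registered as an agent yet
        }
        response.agentState = host.agentState;
        response.agentVersion = host.agentVersion;
        response.lastPinged = host.lastPinged;
    }
}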
After a local template is uploaded via the browser, the generated usage event of type "TEMPLATE.CREATE" is persisted with the data store ID instead of the zone ID in the zone_id column. The upload monitor logic, which set the data store ID in the zone ID column of the created "TEMPLATE.CREATE" usage event after the upload completed, is refactored to query the DB for the data store and set its associated zone ID in the usage event.
The fix produces the same behaviour as when registering a template from a URL.
The fix also applies to volumes uploaded locally via the browser.
Fixes #3783
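A minimal sketch of the refactor, with hypothetical types: the zone is resolved from the image store and that zone ID, not the data store ID, is written into the usage event.

// Hypothetical sketch: resolve the zone from the image store before publishing the usage event.
public class TemplateUploadUsageSketch {
    interface ImageStoreDao {
        // Returns the zone id the image store belongs to, or null if unknown.
        Long findZoneId(long dataStoreId);
    }
    interface UsageEventService {
        void publishTemplateCreateEvent(long templateId, long zoneId);
    }

    static void publishUploadEvent(ImageStoreDao storeDao, UsageEventService usage,
                                   long templateId, long dataStoreId) {
        Long zoneId = storeDao.findZoneId(dataStoreId);
        if (zoneId != null) {
            // Same behaviour as a URL-registered template: zone_id holds the zone, not the store.
            usage.publishTemplateCreateEvent(templateId, zoneId);
        }
    }
}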
As reported in the issue, creating volumes from a pure snapshot fails with an NPE. This is caused by the order of calls: disk offering access is checked before the disk offering value is checked for null. This PR fixes that.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #3191
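A small sketch of the reordered validation, with hypothetical names: the disk offering is checked for null before the access check, so a volume created purely from a snapshot does not trigger the NPE.

// Hypothetical sketch: null-check the disk offering before checking access to it.
public class CreateVolumeFromSnapshotSketch {
    interface DiskOffering { }
    interface AccessChecker {
        void checkAccess(long callerId, DiskOffering offering); // throws on denial
    }

    static void validateDiskOffering(AccessChecker checker, long callerId, DiskOffering offering) {
        if (offering == null) {
            // Nothing to validate: the size and other attributes come from the snapshot.
            return;
        }
        checker.checkAccess(callerId, offering);
    }
}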
When a template is registered, the code stores the md5sum of the downloaded file in the vm_template table. However, this downloaded file may be deleted after template installation if it is not an actual template file (.qcow2, .ova, etc.). When the user copies a template using the copyTemplate API, the actual template file is copied across the image stores, so matching the checksum of the copied template file against the stored value in the vm_template table results in a mismatch.
The change sets an empty checksum value for the copied template when passing it to the download service, which skips the (wrong) checksum check for the copy during installation.
As a result, the checksum value of the concerned template entry in the vm_template table changes after the template is installed.
Co-authored-by: dahn <daan.hoogland@gmail.com>
The list management servers API returns a list of all the management servers but fails when listing by id or name. This change ensures that it fetches the details according to the parameters passed.
Fixes: #3833
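A minimal sketch of the intended filtering, with hypothetical types (the real code filters in the DB query): the optional id and name parameters are applied instead of always returning the full list.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: honour the id/name parameters of the list management servers API.
public class ListManagementServersSketch {
    static class ManagementServer {
        long id;
        String name;
        ManagementServer(long id, String name) { this.id = id; this.name = name; }
    }

    static List<ManagementServer> list(List<ManagementServer> all, Long id, String name) {
        List<ManagementServer> result = new ArrayList<>();
        for (ManagementServer ms : all) {
            if (id != null && ms.id != id) {
                continue;
            }
            if (name != null && !name.equals(ms.name)) {
                continue;
            }
            result.add(ms);
        }
        return result;
    }
}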
The metrics API is missing a few properties that are present in the corresponding resource.
Fixes #3831
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Rohit Yadav <rohit@apache.org>
Steps to reproduce the issue
(1) create a custom service offering
(2) create a VM with the offering
(3) update the VM with displayvm=false; an error is returned
(local) > update virtualmachine id=f33fd06a-7643-40d1-833f-272845d9ba09 displayvm=false
Error 530: {"updatevirtualmachineresponse":{"uuidList":[],"errorcode":530,"cserrorcode":9999}}
When starting a VM or migrating a VM (away from a host in maintenance), CloudStack checks the capacity of all hosts and chooses one. If there are hundreds of hosts on the platform, this takes several seconds. By the time CloudStack has chosen a host and starts/migrates the VM onto it, the resource consumption of that host may already have changed. This typically happens when starting/migrating multiple VMs.
It would be better to double-check the host capacity when starting the VM on the chosen host.
This PR also includes a fix for CPU-core capacity when starting/migrating a VM.
When we calculate the resource consumption of a host, VMs in the following states need to be taken into account: Running, Starting, Stopping, Migrating (to the host), as well as VMs migrating away from the host. When a VM is stopped, its resources on the host are released only once it is fully stopped. When a VM is migrated, the resources on the destination host are increased before the migration starts, and the resources on the source host are decreased only after the migration succeeds.
In CloudStack there is a task named CapacityChecked which runs every 5 minutes (capacity.check.period = 300000 ms by default). It recalculates the capacity of all hosts, but it only takes VMs in the Running and Starting states into consideration. We have faced issues during host maintenance because of this.
Steps to reproduce the issue
(1) migrate N VMs from host A to host B; the CPU/RAM usage on host B is increased before the migration starts.
(2) the capacity check recalculates the capacity of the hosts; the used capacity of host B is reset to its original value (not including the VMs in Migrating state).
(3) migrate more VMs from other hosts to host B; the migrations are allowed by CloudStack because the used capacity is incorrect. If the actual used memory exceeds the physical memory on the host, critical issues can occur (for example, libvirt dies).
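A hedged sketch of the capacity calculation described above, with hypothetical types: VMs in Running, Starting, Stopping and Migrating states, plus VMs migrating away from the host, are all counted as consuming capacity.

import java.util.EnumSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: count every state that still holds (or is about to hold) capacity.
public class HostCapacitySketch {
    enum State { Running, Starting, Stopping, Migrating, Stopped }

    static class Vm {
        State state;
        boolean migratingAwayFromHost; // true when this host is the migration source
        long cpuMhz;
        long ramBytes;
    }

    private static final Set<State> COUNTED = EnumSet.of(
            State.Running, State.Starting, State.Stopping, State.Migrating);

    static long[] usedCapacity(List<Vm> vmsOnHost) {
        long usedCpu = 0;
        long usedRam = 0;
        for (Vm vm : vmsOnHost) {
            // Stopping VMs hold resources until fully stopped; Migrating VMs hold resources
            // on the destination before the migration starts and on the source until it succeeds.
            if (COUNTED.contains(vm.state) || vm.migratingAwayFromHost) {
                usedCpu += vm.cpuMhz;
                usedRam += vm.ramBytes;
            }
        }
        return new long[] { usedCpu, usedRam };
    }
}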
Steps to reproduce the issue
(1) create an account (test)
(2) create a VM with the account (test)
(3) log in as admin and upgrade the VM to another offering
(4) the resource count (CPU, memory) of the admin account increases instead of that of the owner account (test)
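A small sketch of the fix's intent, with hypothetical types: the CPU/RAM resource count of the VM owner's account is adjusted, not that of the caller (e.g. the admin) performing the upgrade.

// Hypothetical sketch: charge the scaled resources against the VM owner's account.
public class ScaleVmResourceCountSketch {
    interface UserVm { long getAccountId(); }
    interface ResourceCountService {
        void incrementCpuCount(long accountId, long deltaCpu);
        void incrementMemoryCount(long accountId, long deltaRamMb);
    }

    static void updateCounts(ResourceCountService counts, UserVm vm, long deltaCpu, long deltaRamMb) {
        long ownerAccountId = vm.getAccountId(); // the owner, not the caller performing the upgrade
        counts.incrementCpuCount(ownerAccountId, deltaCpu);
        counts.incrementMemoryCount(ownerAccountId, deltaRamMb);
    }
}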
After commit fbf488497fb863c13fc0908281e3f4f86906df43, the admin needs to specify an IPv4 or IPv6 address when adding an IP to a NIC, which breaks backward compatibility. If no IP is specified, an IPv4 address should be returned.
* server: Do NOT clean up DHCP and DNS entries when a VM is stopped
According to a comment in PR #3608, DHCP and DNS entries should be cleaned up only when a VM is expunged.
Revert part of commit 8fb388e9312b917a8f36c7d7e3f45985a95ce773.
* server: clean up DNS/DHCP entries in removeNic instead of finalizeExpunge
This changes the behaviour so that DHCP and DNS rules for a VM's NICs
are not cleaned up in the VR when the VM is stopped, but when the VM is expunged,
because stopped VMs in CloudStack still retain their IPs and records.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
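A minimal sketch of the resulting behaviour, with hypothetical names: the stop path leaves DHCP/DNS entries alone, and they are removed only when the NIC is removed during expunge.

// Hypothetical sketch: DHCP/DNS cleanup happens on NIC removal, not on VM stop.
public class DhcpDnsCleanupSketch {
    interface Vr {
        void removeDhcpAndDnsEntry(String vmName, String ip);
    }

    // Called from the stop path: intentionally a no-op for DHCP/DNS.
    static void onVmStop(Vr router, String vmName, String ip) {
        // Stopped VMs keep their IP reservations and records; nothing to clean up here.
    }

    // Called from removeNic during expunge: the IP is being released for good.
    static void onNicRemove(Vr router, String vmName, String ip) {
        router.removeDhcpAndDnsEntry(vmName, ip);
    }
}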
UserIpv6AddressVO is not used; it is probably legacy code/table.
Therefore, remove the verification that counts the IPs from
UserIpv6AddressVO in order to check whether the network can be used for
deploying new VMs [1].
[1] com.cloud.network.NetworkModelImpl.canUseForDeploy(Network).
Fixes NPE when trying to find suitable storage pools for a volume
when the volume is not attached to a VM.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When a network IP range is removed, the "vlan" stays mapped in pod_vlan_map; therefore, the method that lists the VLANs by pod ID can return null VLANs.
This PR adds proper verifications to avoid a null pointer exception when deploying VRs on a pod with removed VLANs. The exception was thrown in getPlaceholderNicForRouter.
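A small sketch of the added verification, with hypothetical types: null entries returned by the pod VLAN lookup (left behind by removed IP ranges) are skipped before being dereferenced.

import java.util.List;

// Hypothetical sketch: tolerate null VLAN entries from stale pod_vlan_map rows.
public class PodVlanNullCheckSketch {
    static class Vlan { String vlanTag; }

    static String firstUsableVlanTag(List<Vlan> vlansForPod) {
        if (vlansForPod == null) {
            return null;
        }
        for (Vlan vlan : vlansForPod) {
            if (vlan == null || vlan.vlanTag == null) {
                continue; // stale pod_vlan_map row pointing at a removed IP range
            }
            return vlan.vlanTag;
        }
        return null;
    }
}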
Problem: In VMware, appliances with options that must be answered before deployment are configurable through the vSphere vCenter user interface, but this is not possible from the CloudStack user interface.
Root cause: CloudStack does not handle vApp configuration options during deployments if the appliance contains configurable options. These configurations are mandatory for VM deployment from the appliance on VMware vSphere vCenter, which detects that there are mandatory configurations the administrator must set before deploying the VM from the appliance.
Solution:
On template registration, after the template is downloaded to secondary storage, the OVF file is examined and OVF properties are extracted from it when available.
OVF properties extracted from templates after they are downloaded to secondary storage are stored in the new table 'template_ovf_properties'.
A new optional section is added to the VM deployment wizard in the UI:
If the selected template does not contain OVF properties, the optional section is not displayed in the wizard.
If the selected template contains OVF properties, the new optional section is displayed. Each OVF property is shown and the user must complete every property before proceeding to the next section.
If any configuration property is left empty, a dialog is displayed indicating that there are empty properties which must be set before proceeding.
The specific OVF properties set on deployment are stored in the 'user_vm_details' table with the prefix 'ovfproperties-'.
The VM is configured with a vApp configuration section containing the values that the user provided in the wizard.
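A hedged sketch (not the actual CloudStack parser) of how OVF properties can be read from a descriptor so they can be stored, e.g. in a table like 'template_ovf_properties', and later presented in the deployment wizard; it assumes the standard OVF 1.x envelope namespace.

import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical sketch: collect ovf:key/ovf:value pairs from the descriptor's Property elements.
public class OvfPropertySketch {
    private static final String OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1";

    static Map<String, String> readOvfProperties(File ovfDescriptor) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // OVF uses the ovf: namespace for keys/values
        Document doc = factory.newDocumentBuilder().parse(ovfDescriptor);

        Map<String, String> properties = new LinkedHashMap<>();
        // OVF properties live in ProductSection/Property with ovf:key and ovf:value attributes.
        NodeList nodes = doc.getElementsByTagNameNS("*", "Property");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element property = (Element) nodes.item(i);
            String key = property.getAttributeNS(OVF_NS, "key");
            String value = property.getAttributeNS(OVF_NS, "value");
            if (!key.isEmpty()) {
                properties.put(key, value); // value may be empty; the wizard asks the user to fill it
            }
        }
        return properties;
    }
}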