Adds missing fields to the following API responses:
- osdisplayname in listVirtualMachines
- vpcofferingname in listVpcs
- vpcname in listPublicIpAddresses
- vpcname in listPrivateGateways
- vpcname in listVpnGateways
- templatename, podname in listRouters
- templatename, podname in listSystemVms
Fixes: #4161
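For illustration, the new field shows up in the listVirtualMachines response like this (hypothetical VM name and OS type, output trimmed to two fields):
(local) > list virtualmachines filter=name,osdisplayname
{
  "count": 1,
  "virtualmachine": [
    {
      "name": "vm-01",
      "osdisplayname": "Ubuntu 20.04 LTS (64-bit)"
    }
  ]
}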
Adds the listall parameter to listLdapConfigurations.
If set to true and no domainid is specified, all LDAP configurations are listed irrespective of the linked domain.
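For example (illustrative cloudmonkey call), a root admin can now list every configuration regardless of the domain it is linked to:
(local) > list ldapconfigurations listall=true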
Fixes the wrong count in the listAffinityGroups API.
The API was returning the count of AffinityGroupJoinVO records instead of the number of affinity groups.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes an issue where an instance fails to deploy due to a null pointer when using an L2 Guest Network with DefaultL2NetworkOfferingConfigDrive on Xenserver. It also fixes migrating an instance to another host.
This has been tested by:
- Creating an L2 Guest network, using DefaultL2NetworkOfferingConfigDrive as the network offering.
- Deploying an instance using the L2 Guest network created.
- Migrating the instance away from the host and back
When a VPC is restarted, the networks in the VPC are not restarted. This PR adds logic to restart, as part of the VPC restart, those networks in the VPC that need a restart (sketched below).
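A minimal sketch of the added restart logic, using hypothetical Network/NetworkManager types rather than the actual CloudStack classes:

// Restart every tier network that reports it needs a restart when its
// parent VPC is restarted. All names here are illustrative.
interface Network {
    boolean isRestartRequired();
}

interface NetworkManager {
    java.util.List<Network> listNetworksInVpc(long vpcId);
    void restartNetwork(Network network, boolean cleanup);
}

class VpcRestarter {
    private final NetworkManager networkManager;

    VpcRestarter(NetworkManager networkManager) {
        this.networkManager = networkManager;
    }

    void restartVpc(long vpcId, boolean cleanup) {
        for (Network tier : networkManager.listNetworksInVpc(vpcId)) {
            if (tier.isRestartRequired()) {   // only restart tiers that need it
                networkManager.restartNetwork(tier, cleanup);
            }
        }
    }
}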
Fixes #3816
Root cause:
The dynamic scaling job is handled in the vmwork job queue, which serializes multiple jobs, but the database update and the generation of usage events happen outside the job queue.
Solution:
Moved all of these updates into the job queue (sketched below).
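A rough sketch of the idea, using a plain single-threaded executor in place of the real per-VM vmwork queue; class and method names are hypothetical:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ScaleVmJobQueue {
    // a single-threaded executor serializes the scale jobs, similar in spirit
    // to the per-VM vmwork job queue
    private final ExecutorService vmWorkQueue = Executors.newSingleThreadExecutor();

    void submitScaleJob(long vmId, long newOfferingId) {
        vmWorkQueue.submit(() -> {
            reconfigureVm(vmId, newOfferingId);             // hypervisor-side scaling
            updateServiceOfferingInDb(vmId, newOfferingId); // DB update now inside the job
            publishUsageEvent(vmId, newOfferingId);         // usage event now inside the job
        });
    }

    private void reconfigureVm(long vmId, long offeringId) { /* ... */ }

    private void updateServiceOfferingInDb(long vmId, long offeringId) { /* ... */ }

    private void publishUsageEvent(long vmId, long offeringId) { /* ... */ }
}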
First, I tested all of the following scenarios to check that nothing is broken:
- Scaling a running VM with a normal compute offering
- Scaling a stopped VM with a normal compute offering
- Scaling a running VM with a custom compute offering
- Scaling a stopped VM with a custom compute offering
- Scaling a stopped/running VM between a custom compute offering and a normal compute offering, and combinations of these; checked whether the custom parameters were populated or deleted accordingly, based on the offering to which the VM is scaled
Since this is a corner scenario, I could not test the exact point where two usage events are recorded at the same time for two different API calls on the same VM.
Fixes #4090
When trying to migrate a VM across two clusters, if a snapshot has been deleted and garbage collection has run to update the removed field, it is not possible to migrate the instance due to a null pointer exception.
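A defensive guard along the lines described above, with hypothetical DAO and record names (not the actual CloudStack types): skip snapshots whose store reference has been garbage-collected instead of dereferencing a null record.

import java.util.ArrayList;
import java.util.List;

class SnapshotRefCollector {
    record SnapshotStoreRef(long snapshotId, String installPath) {}

    interface SnapshotStoreDao {
        SnapshotStoreRef findBySnapshot(long snapshotId); // may return null after GC removed the row
    }

    static List<SnapshotStoreRef> collectRefs(SnapshotStoreDao dao, List<Long> snapshotIds) {
        List<SnapshotStoreRef> refs = new ArrayList<>();
        for (long id : snapshotIds) {
            SnapshotStoreRef ref = dao.findBySnapshot(id);
            if (ref == null) {
                continue; // snapshot was deleted and garbage-collected; skip instead of NPE
            }
            refs.add(ref);
        }
        return refs;
    }
}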
When a volume is migrated between data stores, CloudStack keeps the original UUID and changes the path of the volume.
When the volume is not found at the given path, the agent throws a CloudRuntimeException, but it is not caught in LibvirtGetVolumeStatsCommandWrapper.java.
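A sketch of the defensive handling, with hypothetical helper types (the real wrapper works on libvirt domains and the agent answer classes): catch the runtime exception per volume so a single missing path does not fail the whole stats command.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

class VolumeStatsCollector {
    record VolumeStats(long virtualSize, long physicalSize) {}

    interface StatsSource {
        VolumeStats statsForPath(String volumePath); // may throw a RuntimeException if the path is gone
    }

    static Map<String, VolumeStats> collect(StatsSource source, List<String> volumePaths) {
        Map<String, VolumeStats> result = new HashMap<>();
        for (String path : volumePaths) {
            try {
                result.put(path, source.statsForPath(path));
            } catch (RuntimeException e) {
                // volume not found at this path (for example after migration); skip it
            }
        }
        return result;
    }
}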
When expunging a running VM, the VM is stopped with forcestop=false, which does not make sense. We should honor the global setting vm.destroy.forcestop, or always set forcestop=true.
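A minimal sketch of honoring the setting, with hypothetical lookup and service interfaces:

class ExpungeHelper {
    interface ConfigLookup {
        boolean getBoolean(String key, boolean defaultValue);
    }

    interface VmService {
        void stopVm(long vmId, boolean forced);
    }

    static void stopBeforeExpunge(ConfigLookup config, VmService vmService, long vmId) {
        // honor the global setting instead of hard-coding forced = false
        boolean forced = config.getBoolean("vm.destroy.forcestop", false);
        vmService.stopVm(vmId, forced);
    }
}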
When scripts/vm/hypervisor/kvm/kvmvmactivity.sh is called with an incorrect file name, an error is printed which is then interpreted as output from the script.
When an incorrect file name is passed the script prints out:
stat: cannot stat ‘b51d7336-d964-44ee-be60-bf62783dabc’: No such file or directory
=====> DEAD <======
The checkingHB() method in KVMHAVMActivityChecker.java expects only
=====> DEAD <======
but receives the unexpected error message as well, and therefore treats the VM as alive.
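One possible defensive fix on the parsing side (a sketch only, not necessarily the change this PR makes): look for the DEAD marker on a line of its own instead of requiring the whole output to equal it, so stray error lines cannot flip the result.

class HbResultParser {
    private static final String DEAD_MARKER = "=====> DEAD <======";

    // Returns true when the DEAD marker appears on its own line, even if the
    // script also printed unrelated error lines (such as the stat failure above).
    static boolean isDead(String scriptOutput) {
        if (scriptOutput == null) {
            return false;
        }
        for (String line : scriptOutput.split("\\R")) {
            if (DEAD_MARKER.equals(line.trim())) {
                return true;
            }
        }
        return false;
    }
}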
* Update AncientDataMotionStrategy.java
Fix: when secondary storage usage is > 90%, volume migration across primary storage causes the migration to fail and the volume to be lost.
* Update AncientDataMotionStrategy.java
A volume is migrated across primary storage. If no secondary storage is available (or its used capacity is > 90%), the migration is canceled.
Before this modification, if secondary storage could not be found, copyVolumeBetweenPools returned null.
copyAsync considers answer == null to be a sign of successful task execution, so it deletes the volume on the old primary storage. This is the root cause of the data loss, because the volume was never actually migrated (see the sketch after this entry).
* code in comment removed
Co-authored-by: div8cn <35140268+div8cn@users.noreply.github.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
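A sketch of the corrected result handling, with hypothetical Answer/VolumeService types standing in for the real data-motion classes:

class CopyVolumeResultHandler {
    record Answer(boolean result, String details) {}

    interface VolumeService {
        void deleteVolume(long volumeId);
    }

    static void handleCopyResult(Answer answer, VolumeService volumeService, long sourceVolumeId) {
        if (answer == null || !answer.result()) {
            // the copy never happened (for example, no secondary storage with enough
            // free capacity), so the source volume must NOT be deleted
            String reason = (answer == null) ? "no answer from the copy command" : answer.details();
            throw new IllegalStateException("Volume migration failed: " + reason);
        }
        volumeService.deleteVolume(sourceVolumeId); // safe: the copy succeeded
    }
}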
* Fixes snapshot deletion
* Remove legacy '@Component', it is not necessary in this bean/class.
* Fix log message missing %d and remove snapshot on DB
* Remove "dummy" boolean return statement
* Manage snapshot deletion for KVM + NFS (primary storage)
* checkstyle trailing spaces
* rename options strings to *_OPTION
* Fix typo on deleteSnapshotOnSecondaryStorage and enhance log message
* Move the snapshotDao.remove(snapshotId); (#4006)
* Fix deletesnapshot workflow to handle both snapshots created in primary storage and snapshots backed up to secondary storage (sketched after this entry)
* Fix extra space
* refactor out separate handling methods for secondary and primary (reducing returns)
* return false on unexpected error or log when expected
* != instead of ==
* secondary instead of backup storage
* init to null
* Handle snapshot deletion on primary storage. When primary store ref not found for snapshot do not fail the operation.
* Fix debug levels on log messages
Co-authored-by: GabrielBrascher <gabriel@apache.org>
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
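A rough sketch of the split handling, with hypothetical store interfaces (the actual change touches the snapshot strategies and DAOs): delete the snapshot wherever it actually exists, and treat a missing primary store reference as non-fatal.

class SnapshotDeleter {
    interface SnapshotStore {
        boolean hasSnapshot(long snapshotId);
        boolean delete(long snapshotId);
    }

    private final SnapshotStore primaryStorage;
    private final SnapshotStore secondaryStorage;

    SnapshotDeleter(SnapshotStore primaryStorage, SnapshotStore secondaryStorage) {
        this.primaryStorage = primaryStorage;
        this.secondaryStorage = secondaryStorage;
    }

    boolean deleteSnapshot(long snapshotId) {
        boolean deleted = false;
        if (secondaryStorage.hasSnapshot(snapshotId)) {
            deleted = secondaryStorage.delete(snapshotId); // snapshot backed up to secondary storage
        }
        if (primaryStorage.hasSnapshot(snapshotId)) {
            // snapshot kept on primary storage (for example KVM + NFS); if there is no
            // primary store reference, this branch simply does nothing instead of failing
            deleted = primaryStorage.delete(snapshotId) || deleted;
        }
        return deleted;
    }
}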
Though VMware does not support security groups, in a basic zone with VMware and no isolation, VMs should still be able to deploy.
Root cause:
In the case of VMware in a basic zone, the control NIC is set to 0.0.0.0, assuming the control network will be shared with the guest network.
But to have access to a VMware instance, a management/private IP needs to be assigned to it.
Solution:
Assign a private IP even in the case of a basic zone with VMware.
The listZonesMetrics API does not return the same keys as listZones because the default response view is restricted. This fixes that by ensuring that the full response view is used for the root admin.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
As previously described by PR #3929:
If a VM has an attached ISO, the migration fails with the error message "org.libvirt.LibvirtException: Cannot access storage file /mnt/b33e5a1d-e4ea-3465-b6ac-c98dc8ff8af0/207-2-cc5fd717-2d57-3bb3-bcf6-2c930268db6c.iso"
In 4.13, list sshkeypairs with keyword ignores the search by name, even if name is specified.
Fixes an issue in #3098
For example:
(local) > list sshkeypairs name=wei keyword=wei filter=name
{
  "count": 3,
  "sshkeypair": [
    {
      "name": "wei3"
    },
    {
      "name": "wei2"
    },
    {
      "name": "wei"
    }
  ]
}
With this patch, it gives the correct result:
(local) > list sshkeypairs name=wei keyword=wei filter=name
{
  "count": 1,
  "sshkeypair": [
    {
      "name": "wei"
    }
  ]
}
The previous setup, which used a large number of hours, would not work due to internal dnsmasq issues: the lease was set correctly, but dnsmasq was setting the DHCP renew time (and rebind time) to less than 2 years from the date the lease was issued.
Using "infinite" as the value (instead of a number) works as expected; currently the renew date is set to the year 2088.
Co-authored-by: dahn <daan.hoogland@gmail.com>
When both routers of a VPC are in the MASTER state, multiple alerts are sent, one for each tier in the VPC.
If the VPC has 3 tiers, then 6 alerts are sent, which is not good if the VPC has more than 10 networks in it.
Instead of checking the router status for all the tiers in the VPC, just check the status of the router for one tier in the VPC, so that duplicate alerts are avoided (see the sketch below).
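A sketch of the deduplicated check, with hypothetical router and alert interfaces: evaluate the redundant state once per VPC and raise a single alert instead of one per tier.

import java.util.List;

class VpcRouterAlertChecker {
    interface Router {
        String getRedundantState(); // e.g. "MASTER" or "BACKUP"
    }

    interface AlertService {
        void sendDuplicateMasterAlert(long vpcId);
    }

    static void check(long vpcId, List<Router> routers, AlertService alerts) {
        long masters = routers.stream()
                .filter(r -> "MASTER".equals(r.getRedundantState()))
                .count();
        if (masters > 1) {
            alerts.sendDuplicateMasterAlert(vpcId); // one alert per VPC, not one per tier
        }
    }
}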
Currently, CloudStack sends the VM password only to the first router in the network, even if it is the BACKUP router, and returns the result.
In some cases the first router will be the BACKUP and the second will be the MASTER.
Since the password server is not running on the BACKUP router, when the user resets the password it is sent to the first router, which can be the BACKUP.
In that case the new password is not stored in the password server and the user cannot log in with the new password.
This change ensures that we send the password to both routers instead of only the first, so that the new password is stored on the MASTER router (see the sketch below).
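A minimal sketch of the change, with hypothetical names: push the password to every router in the network and report success if at least one of them stored it, instead of stopping after the first router (which may be the BACKUP).

import java.util.List;

class PasswordPusher {
    interface Router {}

    interface PasswordService {
        boolean savePassword(Router router, long vmId, String password);
    }

    static boolean saveToAllRouters(PasswordService service, List<Router> routers,
                                    long vmId, String password) {
        boolean savedSomewhere = false;
        for (Router router : routers) {
            savedSomewhere |= service.savePassword(router, vmId, password);
        }
        return savedSomewhere;
    }
}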