Fixes a case where an account/domain could end up with a negative storage resource count when there is a difference between the requested and deployed volume size on certain storages. On PowerFlex, volume sizes are rounded up to a multiple of 8 GB: a user deploying a 12 GB volume gets a 16 GB volume. Previously, CloudStack did not update the resource count after deployment to reflect the actual size.
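A minimal sketch of the size math involved, with hypothetical names (CloudStack's actual code differs):

```java
/** Illustrative sketch only; names are hypothetical, not CloudStack's API. */
public final class PowerFlexSizing {
    static final long GRANULARITY_GB = 8; // PowerFlex allocates in 8 GB granules

    /** Round the requested size up to the next 8 GB multiple, e.g. 12 -> 16. */
    static long deployedSizeGb(long requestedGb) {
        return ((requestedGb + GRANULARITY_GB - 1) / GRANULARITY_GB) * GRANULARITY_GB;
    }
}
```

The resource count then has to be incremented by the deployed size, not the requested size; otherwise deleting the volume later decrements 16 GB against a 12 GB credit and the count can go negative.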
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes an intermittent issue where the SDC ID (local_path) is deleted and not repopulated when the host connects back again.
The fix is to remove the code that deletes records from the storage_pool_host_ref table. The entry is already updated when the SDC ID changes during an agent restart, which is required in order to get the new connections. I have verified the host-delete scenario to check the behavior of the storage_pool_host_ref entries; they are deleted as expected.
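A rough sketch of the intended reconnect flow; the DAO and entity names are assumptions for illustration, not verified CloudStack code:

```java
// Sketch only; dao/entity names are assumptions based on the description.
StoragePoolHostVO ref = storagePoolHostDao.findByPoolHost(pool.getId(), host.getId());
if (ref == null) {
    // First connection: record the mapping with the current SDC ID.
    storagePoolHostDao.persist(new StoragePoolHostVO(pool.getId(), host.getId(), sdcId));
} else if (!sdcId.equals(ref.getLocalPath())) {
    // Agent restarted with a new SDC ID: update the row in place instead of
    // deleting it, so local_path never ends up unpopulated.
    ref.setLocalPath(sdcId);
    storagePoolHostDao.update(ref.getId(), ref);
}
```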
This PR makes the resize volume operation throw an exception with the correct error code and log the error message when a resource allocation failure is encountered.
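The shape of the change, as a hedged sketch (the surrounding resize handler and field names are assumptions):

```java
// Sketch; surrounding handler details are assumptions, not the actual diff.
try {
    _resourceLimitMgr.checkResourceLimit(owner, ResourceType.primary_storage,
            newSize - currentSize);
} catch (ResourceAllocationException e) {
    s_logger.error("Resource allocation failure while resizing volume: " + e.getMessage());
    // Surface the failure with the allocation-specific API error code
    // instead of a generic internal error.
    throw new ServerApiException(ApiErrorCode.RESOURCE_ALLOCATION_ERROR, e.getMessage());
}
```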
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
L2 networks created using offerings that allow a custom VLAN can remain in the Setup state. This change allows importing VMs with such networks when the network's VLAN matches that of the unmanaged instance.
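A hedged sketch of the relaxed import check; identifier and enum names are illustrative assumptions based on CloudStack conventions:

```java
// Sketch; names are assumptions, not the actual diff.
String networkVlan = BroadcastDomainType.getValue(network.getBroadcastUri()); // e.g. "100"
String instanceVlan = String.valueOf(unmanagedNic.getVlan());
boolean importable = network.getState() == Network.State.Implemented
        // Previously rejected: a Setup-state L2 network is now accepted
        // as long as its VLAN matches the unmanaged instance's NIC.
        || (network.getState() == Network.State.Setup && networkVlan.equals(instanceVlan));
```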
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Supported Virtual machine operations:
- live migration of VM to another host
- virtual machine snapshots (group snapshot without memory)
- revert VM snapshot
- delete VM snapshot
Supported Volume operations:
- attach/detach volume
- live migrate volume between two StorPool primary storages
- volume snapshot
- delete snapshot
- revert snapshot
This PR adds a check on the host's status; without it, expunging a VM fails when the host's agent is not in the Up or Connecting state (see the sketch after the reproduction steps below).
Steps to reproduce:
- Enable vm.configdrive.force.host.cache.use in Global Configuration.
- Create an L2 network with config drive
- Deploy a VM with the L2 network created in the previous step
- Stop the VM and destroy it (do not expunge it)
- Stop the cloudstack-agent on the VM's host
- Expunge the VM
Fixes: #7428
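A minimal sketch of the added guard, assuming hypothetical surrounding code in the config-drive cleanup path:

```java
// Sketch; the helper below is hypothetical.
HostVO host = hostDao.findById(hostId);
if (host == null
        || (host.getStatus() != Status.Up && host.getStatus() != Status.Connecting)) {
    // The chosen host's agent cannot service the cleanup command; pick
    // another usable host instead of failing the whole expunge.
    host = findAnotherSuitableHost(zoneId); // hypothetical helper
}
```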
* Live storage migration of a volume in ScaleIO within the same ScaleIO storage cluster
* Added migrate command
* Recent changes of migration across clusters
* Fixed uuid
* recent changes
* Pivot changes
* working blockcopy api in libvirt
* Checking block copy status
* Formatting code
* Fixed failures
* code refactoring and some changes
* Removed unused methods
* removed unused imports
* Unit tests to check if volume belongs to same or different storage scaleio cluster
* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver
* Fixed offline volume migration case and allowed encrypted volume migration
* Added more integration tests
* Support for migration of encrypted volumes across different scaleio clusters
* Fix UI notifications for migrate volume
* Data volume offline migration: save encryption details to destination volume entry
* Offline storage migration for scaleio encrypted volumes
* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API
* Removed unused unittests
* Removed duplicate keys in migrate volume vue file
* Fix Unit tests
* Add volume secrets if they do not exist during volume migrations; secrets are cleared on package upgrades.
* Fix secret UUID for encrypted volume migration
* Added a null check for secret before removing
* Added more unit tests
* Fixed passphrase check
* Add image options to the encrypted volume conversion
* Refactor test and change IP range
* countdown from 254 to allow multiple pseudo public nets counting up from 0
* cleanup
* location of asserts improved
---------
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: Daan Hoogland <daan@onecht.net>
* secondary-storage: delete backed-up snapshot dir on delete
Fixes #7516
When a backed-up snapshot is deleted and the snapshot file is present in a directory of the same name (probably only for VMware), the whole directory can be deleted.
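Roughly, the deletion becomes the following; this is a sketch assuming the layout described above, with Apache Commons IO handling the recursive delete:

```java
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.FilenameUtils;

/** Sketch; the path layout is an assumption based on the description above. */
final class SnapshotCleanup {
    static void deleteBackedUpSnapshot(String snapshotPath) throws IOException {
        File snapshotFile = new File(snapshotPath);
        File parentDir = snapshotFile.getParentFile();
        if (parentDir != null
                && parentDir.getName().equals(FilenameUtils.getBaseName(snapshotFile.getName()))) {
            // The snapshot sits in a same-name directory (VMware layout):
            // remove the whole directory, not just the file.
            FileUtils.deleteDirectory(parentDir);
        } else {
            snapshotFile.delete();
        }
    }
}
```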
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* resolve comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
JaCoCo, used for code coverage calculation in the project, does not support classes instrumented by PowerMockito.
This PR attempts to reduce the usage of PowerMockito.
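The typical rewrite replaces PowerMockito static mocking with plain Mockito's MockedStatic (available since Mockito 3.4), which JaCoCo can instrument. For example, for a test stubbing CloudStack's Script.runSimpleBashScript:

```java
import org.mockito.MockedStatic;
import org.mockito.Mockito;

// Before: @RunWith(PowerMockRunner.class) + @PrepareForTest(Script.class)
//         + PowerMockito.mockStatic(Script.class)
// After, with plain Mockito:
try (MockedStatic<Script> mocked = Mockito.mockStatic(Script.class)) {
    mocked.when(() -> Script.runSimpleBashScript(Mockito.anyString()))
          .thenReturn("ok");
    // ... exercise the code under test; the stub is undone when the block
    // closes, and JaCoCo can instrument the class normally.
}
```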
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
In troubleshooting ops issues we see logs like:
Maximum domain resource limits of Type 'user_vm' for Domain Id = 763 is exceeded: Domain Resource Limit = (1 bytes) 1, Current Domain Resource Amount = (0 bytes) 0, Requested Resource Amount = (1 bytes) 1.
However, one value used in the limit check (currentResourceReservation) is not logged, which leads to confusion. Above, the current amount is 0 and the requested amount is 1 against a limit of 1, yet the request was rejected; without logging all the values used in the calculation, we cannot see why it failed.
Additionally, with that value logged it would be clearer that a second bug is occurring. When we query for domain-level resource reservations in getDomainReservation, the SearchBuilder actually used is listAccountAndTypeSearch, not listDomainAndTypeSearch. As a result, getDomainReservation returns any outstanding reservation for any account, since domain ID is not a valid filter for the account search.
This PR:
Adds reservation information to the resource limit check logs in checkDomainResourceLimit() and checkAccountResourceLimit()
Fixes getDomainReservation() to use listDomainAndTypeSearch instead of listAccountAndTypeSearch
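The fix, roughly (parameter names follow the description above; treat this as a sketch, not the actual diff):

```java
// Before: listAccountAndTypeSearch was used here, so "domainId" was not a
// valid filter and reservations from any account leaked into the total.
SearchCriteria<SumCount> sc = listDomainAndTypeSearch.create();
sc.setParameters("domainId", domainId);
sc.setParameters("type", type);
// The summed reservations are now scoped to the given domain only.
```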
Co-authored-by: Oscar Sandoval <osandovalocana@apple.com>
Fixes #6697
Allows the server to generate started and completed events of the VM.CREATE event type when a VM is deployed with startvm=false.
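Conceptually, the deploy-with-startvm=false path now emits both event records instead of none. A sketch only; ActionEventUtils has several overloads and the exact signatures here are assumptions:

```java
// Sketch; signatures are assumptions, not verified against CloudStack.
Long startEventId = ActionEventUtils.onStartedActionEvent(userId, accountId,
        EventTypes.EVENT_VM_CREATE, "starting Vm deployment", true, 0);
// ... allocate the VM without starting it (startvm=false) ...
ActionEventUtils.onCompletedActionEvent(userId, accountId, EventVO.LEVEL_INFO,
        EventTypes.EVENT_VM_CREATE, "successfully deployed Vm", startEventId);
```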
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes #6702
The conntrackd service is started by the script /opt/cloud/bin/cs/CsRedundant.py; therefore it is added to disabled_svcs so that it is not started automatically when the VR starts.