1815 Commits

Author SHA1 Message Date
Daan Hoogland
5774b965f3 Merge pull request #1209 from ustcweizhou/free-deviceid
CLOUDSTACK-9134: set device_id as the first device_id not in use instead of nic count
When we restart VPC tiers, the old NICs are removed and new NICs are created. However, the device_id was set to the NIC count, which may already be in use. This commit uses the first device_id not in use as the device_id of the new NIC.

This issue also happens when we add multiple networks to a VM and then remove them.
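
A minimal sketch of the allocation change, using a hypothetical helper (the real fix lives in the NIC device_id assignment logic):

import java.util.Set;
import java.util.TreeSet;

public class DeviceIdAllocator {
    // Return the lowest device_id not in use, instead of the NIC count.
    // With NICs {0, 2} this yields 1; the old "count" approach would return 2,
    // which collides with the existing NIC at device_id 2.
    static int firstFreeDeviceId(Set<Integer> usedDeviceIds) {
        int candidate = 0;
        while (usedDeviceIds.contains(candidate)) {
            candidate++;
        }
        return candidate;
    }

    public static void main(String[] args) {
        Set<Integer> used = new TreeSet<>();
        used.add(0);
        used.add(2);
        System.out.println(firstFreeDeviceId(used)); // prints 1
    }
}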

* pr/1209:
  CLOUDSTACK-9134: set device_id as the first device_id not in use instead of nic count

Signed-off-by: Daan Hoogland <daan@onecht.net>
2015-12-13 18:43:30 +01:00
Remi Bergsma
a7b098ff16 Implement 4.6.1 -> 4.6.2 upgrade path 2015-12-13 00:06:02 +01:00
Remi Bergsma
5147dec4ff Updating pom.xml version numbers for release 4.6.2-SNAPSHOT
Signed-off-by: Remi Bergsma <github@remi.nl>
2015-12-12 21:49:37 +01:00
Wei Zhou
acfc19dc82 CLOUDSTACK-9134: set device_id as the first device_id not in use instead of nic count
When we restart VPC tiers, the old NICs are removed and new NICs are created. However, the device_id was set to the NIC count, which may already be in use. This commit uses the first device_id not in use as the device_id of the new NIC.

This issue also happens when we add multiple networks to a VM and then remove them.
2015-12-10 14:02:02 +01:00
Wei Zhou
52412286c6 CLOUDSTACK-8845: set isRevertable of snapshot to false if the volume is removed 2015-12-04 08:21:11 +01:00
Remi Bergsma
9a21873c4a Merge pull request #1134 from pdube/CLOUDSTACK-6276
CLOUDSTACK-6276 Fixing affinity groups for projects
With some contributions from @resmo and @ustcweizhou.
This closes https://github.com/apache/cloudstack/pull/508

To test manually (need at least 2 hosts):
Create a project
Create an affinity group in that project
Deploy a vm with that affinity group
Deploy a second vm with that affinity group
They should be on different hosts (see the sketch after this list)
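
A minimal sketch of the anti-affinity rule these steps exercise, with hypothetical types (not the actual AffinityGroupProcessor code):

import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class AntiAffinitySketch {
    // Hosts already running VMs of the same anti-affinity group are skipped.
    static Long pickHost(List<Long> candidateHosts, Set<Long> hostsUsedByGroup) {
        for (Long host : candidateHosts) {
            if (!hostsUsedByGroup.contains(host)) {
                return host;
            }
        }
        return null; // no suitable host: the deployment fails, as the tests expect
    }

    public static void main(String[] args) {
        List<Long> hosts = Arrays.asList(1L, 2L);
        Set<Long> usedByGroup = new HashSet<>(Collections.singletonList(1L));
        System.out.println(pickHost(hosts, usedByGroup)); // 2: the second VM avoids host 1
    }
}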

Ran old and new tests for affinity groups on the simulator

Test create affinity group as admin in project ... === TestName: test_01_admin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as domain admin for projects ... === TestName: test_02_doadmin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as user for projects ... === TestName: test_03_user_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) for projects ... === TestName: test_4_user_create_aff_grp_existing_name_for_project | Status : SUCCESS ===
ok
#Delete Affinity Group by id. ... === TestName: test_01_delete_aff_grp_by_id | Status : SUCCESS ===
ok
#Delete Affinity Group by id should fail for user not in project ... === TestName: test_02_delete_aff_grp_by_id_another_user | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_01_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups with more vms than hosts. ... === TestName: test_02_deploy_vm_anti_affinity_group_fail_on_not_enough_hosts | Status : SUCCESS ===
ok
List affinity group for a vm for projects ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm for projects ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id for projects ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name for projects ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id for projects ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name for projects ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group for projects ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok

----------------------------------------------------------------------
Ran 16 tests in 581.706s

OK

Deploy vm as Admin in Affinity Group belonging to regular user (should fail) ... === TestName: test_01_deploy_vm_another_user | Status : SUCCESS ===
ok
Create Affinity Group as admin for regular user ... === TestName: test_02_create_aff_grp_user | Status : SUCCESS ===
ok
List Affinity Groups as admin for all the users ... === TestName: test_03_list_aff_grp_all_users | Status : SUCCESS ===
ok
List Affinity Groups belonging to admin user ... === TestName: test_04_list_all_admin_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing account id and domain id ... === TestName: test_05_list_all_users_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing group id ... === TestName: test_06_list_all_users_aff_grp_by_id | Status : SUCCESS ===
ok
Delete Affinity Group belonging to regular user ... === TestName: test_07_delete_aff_grp_of_other_user | Status : SUCCESS ===
ok
Test create affinity group as admin ... === TestName: test_01_admin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as domain admin ... === TestName: test_02_doadmin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as user ... === TestName: test_03_user_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) ... === TestName: test_04_user_create_aff_grp_existing_name | Status : SUCCESS ===
ok
Test create affinity group with existing name but within different account ... === TestName: test_05_create_aff_grp_same_name_diff_acc | Status : SUCCESS ===
ok
Test create affinity group of non-existing type ... === TestName: test_06_create_aff_grp_nonexisting_type | Status : SUCCESS ===
ok
Delete Affinity Group by name ... === TestName: test_01_delete_aff_grp_by_name | Status : SUCCESS ===
ok
Delete Affinity Group as admin for an account ... === TestName: test_02_delete_aff_grp_for_acc | Status : SUCCESS ===
ok
Delete Affinity Group which has vms in it ... === TestName: test_03_delete_aff_grp_with_vms | Status : SUCCESS ===
ok
Delete Affinity Group with id which does not belong to this user ... === TestName: test_05_delete_aff_grp_id | Status : SUCCESS ===
ok
Delete Affinity Group by name which does not belong to this user ... === TestName: test_06_delete_aff_grp_name | Status : SUCCESS ===
ok
Delete Affinity Group by id. ... === TestName: test_08_delete_aff_grp_by_id | Status : SUCCESS ===
ok
Root admin should be able to delete affinity group of other users ... === TestName: test_09_delete_aff_grp_root_admin | Status : SUCCESS ===
ok
Deploy VM without affinity group ... === TestName: test_01_deploy_vm_without_aff_grp | Status : SUCCESS ===
ok
Deploy VM by aff grp name ... === TestName: test_02_deploy_vm_by_aff_grp_name | Status : SUCCESS ===
ok
Deploy VM by aff grp id ... === TestName: test_03_deploy_vm_by_aff_grp_id | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_04_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
Deploy vms by affinity group id ... === TestName: test_05_deploy_vm_by_id | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by name ... === TestName: test_06_deploy_vm_aff_grp_of_other_user_by_name | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by id ... === TestName: test_07_deploy_vm_aff_grp_of_other_user_by_id | Status : SUCCESS ===
ok
Deploy vm in multiple affinity groups ... === TestName: test_08_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy multiple vms in multiple affinity groups ... === TestName: test_09_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy VM by aff grp name and id ... === TestName: test_10_deploy_vm_by_aff_grp_name_and_id | Status : SUCCESS ===
ok
List affinity group for a vm ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupnames ... === TestName: test_02_update_aff_grp_by_names | Status : SUCCESS ===
ok
Update the list of affinityGroups for vm which is not associated ... === TestName: test_03_update_aff_grp_for_vm_with_no_aff_grp | Status : SUCCESS ===
ok
Update the list of Affinity Groups to empty list ... SKIP: Skip - Failing - work in progress
Update the list of Affinity Groups on running vm ... === TestName: test_05_update_aff_grp_on_running_vm | Status : SUCCESS ===
ok

----------------------------------------------------------------------
Ran 42 tests in 976.432s

OK (SKIP=1)

* pr/1134:
  CLOUDSTACK-6276 Removing unused parameter in integration test for projects
  CLOUDSTACK-6276 Removing unused parameter in integration test
  CLOUDSTACK-6276 Fixing affinity groups for projects

Signed-off-by: Remi Bergsma <github@remi.nl>
2015-12-03 20:10:16 +01:00
Wei Zhou
9077c9a5b4 CLOUDSTACK-9022: keep Destroyed volumes for some time 2015-11-30 20:43:13 +01:00
Patrick Dube
c76d317150 CLOUDSTACK-6276 Fixing affinity groups for projects 2015-11-27 14:43:02 -05:00
Abhinandan Prateek
d09898553e CLOUDSTACK-9064: Users should be able to create multiple Guest Shared Networks in the same VLAN ID, same Physical Network and same network, just with different IP ranges. 2015-11-25 10:22:24 +05:30
Daan Hoogland
d6e77624d9 CLOUDSTACK-9057 remove old system vm upgrade code 2015-11-16 10:46:02 +00:00
Remi Bergsma
e0ac9df529 implemented upgrade path from 4.6.0 to 4.6.1 2015-11-15 14:43:22 +01:00
Remi Bergsma
b38c3bed0c Updating pom.xml version numbers for release 4.6.1-SNAPSHOT
Signed-off-by: Remi Bergsma <github@remi.nl>
2015-11-13 21:27:57 +01:00
Remi Bergsma
e31ade03c6 Updating pom.xml version numbers for release 4.6.0
Signed-off-by: Remi Bergsma <github@remi.nl>
2015-11-10 15:45:34 +01:00
Wilder Rodrigues
72e79bcaa6 CLOUDSTACK-9046 - Add new ACS systemVMs website
- Also change the URL in the SQL file.
2015-11-09 15:13:53 +01:00
Wilder Rodrigues
4b503b4582 CLOUDSTACK-9046 - Add SystemVM upgrade from 4.5 to 4.6 in the Upgrade452to460.java file 2015-11-09 10:06:19 +01:00
Remi Bergsma
0c52f70b45 Merge pull request #995 from kansal/CLOUDSTACK-9002
CLOUDSTACK-9002: VM deployment is successful even when dhcp entry command fails - Fixed

Reason: The return value of the call to the accept() function in the applyRules() function of BasicNetworkTopology.java was not checked for success or failure. As a result, even if it failed, no exception was thrown and VM deployment went ahead without any errors.

Fix: Added the necessary checks.
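
A minimal sketch of the shape of the fix, with hypothetical, simplified interfaces (the real change is in BasicNetworkTopology.applyRules()):

interface Visitor { }

interface Rule {
    boolean accept(Visitor v);
}

class RuleApplier {
    // Before the fix, the boolean result of accept() was silently dropped;
    // now a false result aborts the deployment with an exception.
    static void applyRules(Iterable<Rule> rules, Visitor visitor) {
        for (Rule rule : rules) {
            if (!rule.accept(visitor)) {
                throw new IllegalStateException("dhcp entry command failed; aborting VM deployment");
            }
        }
    }
}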

* pr/995:
  CLOUDSTACK-9002: VM deployment is successful even when dhcp entry command fails - Fixed

Signed-off-by: Remi Bergsma <github@remi.nl>
2015-11-02 14:18:21 +01:00
Remi Bergsma
a981d34f49 Merge pull request #787 from anshul1886/CLOUDSTACK-8824-8825
CLOUDSTACK-8825, CLOUDSTACK-8824: Fixed issues if vm.allocation.algorithm is set to firstfitleastconsumed
Fixed the following issues when vm.allocation.algorithm is set to firstfitleastconsumed:

1. VM deployment failure if there is only ZWPS (zone-wide primary storage) in the setup
2. VM migration is impossible from the UI

To test:

1. Create a setup with ZWPS only
2. Set vm.allocation.algorithm to firstfitleastconsumed in global settings
3. Deploy a virtual machine

Observation: VM deployment fails.

After this fix it passes.

Second scenario:

1. Create a CloudStack setup with two hosts (migration needs at least two hosts)
2. Try migrating a VM from the UI

Observation: there is an error response in the logs and nothing shown in the UI.

After the fix it passes.

Regarding BVT, I am not sure whether tests exist for the firstfitleastconsumed VM allocation algorithm.

* pr/787:
  CLOUDSTACK-8825, CLOUDSTACK-8824 : Fixed following issues if vm.allocation.algorithm is set to firstfitleastconsumed 1. VM deployment failure if there is only ZWPS in setup 2. VM migration is impossible from UI

Signed-off-by: Remi Bergsma <github@remi.nl>
2015-11-02 12:04:34 +01:00
Kshitij Kansal
e24ecccdea CLOUDSTACK-8844: Network Update from RVR offering to Standalone offering fails - Fixed 2015-10-30 10:54:45 +05:30
Kshitij Kansal
301ea330ce CLOUDSTACK-9002: VM deployment is successful even when dhcp entry command fails - Fixed 2015-10-28 11:51:25 +05:30
Wei Zhou
1efcc19dbd CLOUDSTACK-8941: fix NPE when migrate vm to other zone-wide pools the second time 2015-10-26 07:29:59 +01:00
Remi Bergsma
0fd3919e8a Merge pull request #964 from snuf/Ovm3NetLabelFix
FIX: Ovm3 physical network traffic labels to work.
The labeling was broken: only labels assigned at zone creation were used, and changing labels did not work. Tested by changing a label and checking it; labels at zone creation still work.

As a bonus, fixed the consistency of the KVM traffic label in Dutch compared to the other
traffic labels in Dutch, and copied in the OVM3 translated label
in other languages based on the other traffic labels in those languages.

* pr/964:
  FIX: Ovm3 physical network traffic labels to work.

Signed-off-by: Remi Bergsma <github@remi.nl>
2015-10-26 06:08:43 +01:00
Remi Bergsma
7d73e9bfaf Merge pull request #976 from ustcweizhou/CLOUDSTACK-8964-removed-volume
CLOUDSTACK-8964: Can't create volume from snapshot of a removed volume
This issue happens on KVM as well.
This is because the volume info is missing in the CopyCommand once the volume has been removed.
When the KVM agent tries to process the command, it throws an NPE.
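
A minimal sketch of the kind of guard implied here, with hypothetical names (illustration only, not the actual CopyCommand handling):

class SnapshotCopySketch {
    // Fail with a clear error, or fall back to the snapshot on secondary storage,
    // instead of dereferencing the removed volume's info and hitting an NPE.
    static String sourcePathFor(String volumePath, String snapshotPath) {
        if (volumePath == null) {
            if (snapshotPath == null) {
                throw new IllegalArgumentException("neither volume nor snapshot info present in command");
            }
            return snapshotPath;
        }
        return volumePath;
    }
}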

* pr/976:
  CLOUDSTACK-8964: Can't create volume from snapshot of a removed volume

Signed-off-by: Remi Bergsma <github@remi.nl>
2015-10-25 21:25:44 +01:00
Daan Hoogland
24500bd0e4 CID 1324349: conditionally return -1 or the dc id for the volume 2015-10-24 14:01:38 +02:00
Wei Zhou
bef92052ee CLOUDSTACK-8964: Can't create volume from snapshot of a removed volume
This issue happens on KVM as well.
This is because the volume info is missing in the CopyCommand once the volume has been removed.
When the KVM agent tries to process the command, it throws an NPE.
2015-10-23 22:02:59 +02:00
Funs Kessen
1022883749 FIX: Ovm3 physical network traffic labels to work.
The labeling was broken: only labels assigned at zone creation
were used, and changing labels did not work. Tested by changing
a label and checking it.

As a bonus, fixed the consistency of the KVM traffic label in Dutch compared to the other
traffic labels in Dutch, and copied in the OVM3 translated label
in other languages.
2015-10-22 11:57:42 +02:00
Daan Hoogland
b128e567c4 CLOUDSTACK-8848: added null pointer guard to new public method 2015-10-05 07:27:28 +02:00
Rene Moser
542880ae76 CLOUDSTACK-8848: ensure power state is up to date when handling missing VMs in powerReport
Two things have been changed:

* We look at power_state_update_time instead of update_time; it did not make sense to look at update_time.
* Due to a DB update optimisation, powerState is only updated while the consecutive same-state update count is below MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT. That is why we cannot rely on this information unless we make sure it is up to date.
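
A minimal sketch of the staleness check described above, with hypothetical field and constant names:

class PowerStateSketch {
    static final int MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT = 10; // assumed value

    // Only trust the cached power state if it was refreshed recently enough,
    // comparing against power_state_update_time rather than update_time.
    static boolean isPowerStateUpToDate(long powerStateUpdateTimeMillis, int sameStateUpdateCount,
                                        long nowMillis, long graceMillis) {
        if (sameStateUpdateCount >= MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT) {
            return false; // updates were suppressed by the same-state optimisation
        }
        return nowMillis - powerStateUpdateTimeMillis <= graceMillis;
    }
}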
2015-09-27 22:14:03 +02:00
Wido den Hollander
a0f8f56a5d Merge pull request #845 from borisroman/CLOUDSTACK-8763
[4.6][BLOCKER] CLOUDSTACK-8763: Resolved POD/ZONE deletion failure.
Instead of having both checkIfPodIsDeletable() and checkIfZoneIsDeletable() have their own SQL query, I've refactored them so they use DAO SQL queries.

This resolves the SQLException thrown by both methods.

Test to confirm working order:
- Deploy ACS
- Add zones / pods. -> Try to delete without resources. -> Direct success.
- Add resources to zones / pods. -> Try to delete with resources in the pod / zone. -> Correct exception thrown. (The error message states why it cannot remove the zone / pod, i.e. there are still hosts or VMs or .... )
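
A hedged sketch of the kind of DAO count query these commits introduce (see countIPs below), using plain JDBC for illustration; table and column names are assumptions, and the real code goes through the CloudStack DAO framework:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class IpAddressCountSketch {
    // Count IPs in a data center, optionally restricted to allocated ones.
    static long countIPs(Connection conn, long dcId, boolean onlyCountAllocated) throws SQLException {
        String sql = onlyCountAllocated
                ? "SELECT COUNT(*) FROM user_ip_address WHERE data_center_id = ? AND allocated IS NOT NULL"
                : "SELECT COUNT(*) FROM user_ip_address WHERE data_center_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, dcId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }
}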

* pr/845:
  Added unit tests for checkIfPodIsDeletable() and checkIfZoneIsDeletable().
  Updated Dao classes with correct field names.
  Refactored checkIfZoneIsDeletable().
  Added findByDc(long dcId) to VolumeDao and VolumeDaoImpl.
  Added countIPs(long dcId, boolean onlyCountAllocated) to IPAddressDao and IPAddressDaoImpl.
  Added countIPs(long dcId, boolean onlyCountAllocated) to DataCenterIpAddressDao and DataCenterIpAddressDaoImpl.
  Refactored checkIfPodIsDeletable().
  Added findByPodId(Long podId) to HostDao and HostDaoImpl.

Signed-off-by: Wido den Hollander <wido@widodh.nl>
2015-09-23 14:00:04 +02:00
Remi Bergsma
d543e2aa2c Merge pull request #839 from bvbharatk/CLOUDSTACK-8851
CLOUDSTACK-8851 Redundant VR getting started in the same cluster or host
We are not populating the deployment destination of the previous RVR in the avoid set of the RVR that is being created. This sometimes resulted in both RVRs getting deployed to the same host even when one could have gone to a different host.

Now we update the avoid set of the deployment plan to fix this issue.
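
A minimal sketch of the idea with hypothetical types (the real code updates the deployment plan's avoid set):

import java.util.HashSet;
import java.util.Set;

class RvrPlacementSketch {
    // Exclude the host of the already-running RVR when planning the second one.
    static Set<Long> avoidSetFor(Set<Long> plannedAvoids, Long hostOfExistingRvr) {
        Set<Long> avoids = new HashSet<>(plannedAvoids);
        if (hostOfExistingRvr != null) {
            avoids.add(hostOfExistingRvr); // previously this host was not excluded
        }
        return avoids;
    }
}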

* pr/839:
  CLOUDSTACK-8851 Redundant VR getting started in the same cluster or host even when there are suitable hosts available

Signed-off-by: Remi Bergsma <github@remi.nl>
2015-09-21 20:03:58 +02:00
Boris Schrijver
fa5f388fe9 Updated Dao classes with correct field names. 2015-09-17 15:54:29 +02:00
Boris Schrijver
0df3357cac Added findByDc(long dcId) to VolumeDao and VolumeDaoImpl. 2015-09-16 22:17:27 +02:00
Boris Schrijver
12fc2b4c26 Added countIPs(long dcId, boolean onlyCountAllocated) to IPAddressDao and IPAddressDaoImpl. 2015-09-16 22:15:53 +02:00
Boris Schrijver
473f1937e2 Added countIPs(long dcId, boolean onlyCountAllocated) to DataCenterIpAddressDao and DataCenterIpAddressDaoImpl. 2015-09-16 22:15:00 +02:00
Boris Schrijver
0648cb9804 Added findByPodId(Long podId) to HostDao and HostDaoImpl. 2015-09-16 22:13:10 +02:00
Rohit Yadav
5b5152b21b schema: add 4.5.3 to 4.6.0 upgrade path stubs
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-09-16 11:53:41 +05:30
Rohit Yadav
36a43abff4 schema: add 4.5.2 to 4.5.3 upgrade path stubs
(cherry picked from commit 17166eb631e4647be1a74970d0771b5add5f2ace)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-09-16 11:53:41 +05:30
Bharat Kumar
7439a9bbec CLOUDSTACK-8851 Redundant VR getting started in the same cluster or host even when there are suitable hosts available 2015-09-15 14:24:13 +05:30
Rajani Karuturi
ac80a2df50 Merge pull request #808 from karuturi/CLOUDSTACK-8835
CLOUDSTACK-8835: Added alerts in case of template download failure
Authored-By: @sanjaytripathi
Reviewed-By: @devdeep

* pr/808:
  CLOUDSTACK-8835: Added alerts in case of template download failure

Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>
2015-09-14 09:47:46 +05:30
Rajani Karuturi
f888e93e44 Merge pull request #782 from karuturi/CLOUDSTACK-8816
CLOUDSTACK-8816: entityuuid missing in some of the events
In some of the generated events, the entity UUID was missing, making it difficult to find the entity. Fixed the same.

Tested it on a RabbitMQ instance.
These are the events before and after the fix:

Before
--------------------------------------------------------------------------------

routing_key: management-server.ActionEvent.ACCOUNT-DELETE.Account.*
exchange: cloudstack-events
message_count: 2
payload:
{"eventDateTime":"2015-09-04 17:59:24 +0530","status":"Scheduled","description":"deleting User test4 (id: 28) and accountId \u003d 28","event":"ACCOUNT.DELETE","Account":"c09e2e81-8edc-4c27-b072-25005b522b63","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}

payload_bytes: 304
payload_encoding: string
redelivered: False

--------------------------------------------------------------------------------

routing_key: management-server.AsyncJobEvent.complete.Account.*
exchange: cloudstack-events
message_count: 0
payload: {"cmdInfo":"{\"id\":\"9dd3abc2-3f8b-4852-aa60-a74b234acb13\",\"response\":\"json\",\"sessionkey\":\"5ig1ItP2_5v-mgY4cVJbJN5hw_w\",\"ctxDetails\":\"
{\\\"interface com.cloud.user.Account\\\":\\\"9dd3abc2-3f8b-4852-aa60-a74b234acb13\\\"}

\",\"cmdEventType\":\"ACCOUNT.DELETE\",\"expires\":\"2015-09-07T11:11:56+0000\",\"ctxUserId\":\"2\",\"signatureversion\":\"3\",\"httpmethod\":\"GET\",\"uuid\":\"9dd3abc2-3f8b-4852-aa60-a74b234acb13\",\"ctxAccountId\":\"2\",\"ctxStartEventId\":\"447\"}","instanceType":"Account","jobId":"5004989d-0cde-4922-8afa-66bf38b75ea7","status":"SUCCEEDED","processStatus":"0","commandEventType":"ACCOUNT.DELETE","resultCode":"0","command":"org.apache.cloudstack.api.command.admin.account.DeleteAccountCmd","jobResult":"org.apache.cloudstack.api.response.SuccessResponse/null/
{\"success\":true}

","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 914
payload_encoding: string
redelivered: False

--------------------------------------------------------------------------------

After
--------------------------------------------------------------------------------

 routing_key: management-server.ActionEvent.ACCOUNT-DELETE.Account.e5e2db91-414d-484c-99d5-c4e265c14ad8
exchange: cloudstack-events
message_count: 13
payload: {"eventDateTime":"2015-09-07 17:32:26 +0530","status":"Completed","description":"Successfully completed deleting account. Account Id: 45","event":"ACCOUNT.DELETE","entityuuid":"e5e2db91-414d-484c-99d5-c4e265c14ad8","entity":"com.cloud.user.Account","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 344
payload_encoding: string
redelivered: True

--------------------------------------------------------------------------------

routing_key: management-server.AsyncJobEvent.complete.Account.e5e2db91-414d-484c-99d5-c4e265c14ad8
exchange: cloudstack-events
message_count: 12
payload: {"cmdInfo":"{\"id\":\"e5e2db91-414d-484c-99d5-c4e265c14ad8\",\"response\":\"json\",\"sessionkey\":\"8AJVbn8HIpg5LZ_VaVfSPs_QN2k\",\"ctxDetails\":\"{\\\"interface com.cloud.user.Account\\\":\\\"e5e2db91-414d-484c-99d5-c4e265c14ad8\\\"}\",\"cmdEventType\":\"ACCOUNT.DELETE\",\"expires\":\"2015-09-07T12:17:42+0000\",\"ctxUserId\":\"2\",\"signatureversion\":\"3\",\"httpmethod\":\"GET\",\"uuid\":\"e5e2db91-414d-484c-99d5-c4e265c14ad8\",\"ctxAccountId\":\"2\",\"ctxStartEventId\":\"465\"}","instanceType":"Account","instanceUuid":"e5e2db91-414d-484c-99d5-c4e265c14ad8","jobId":"0bb08486-6d9f-4e9f-bfef-b7463c42e71b","status":"SUCCEEDED","processStatus":"0","commandEventType":"ACCOUNT.DELETE","resultCode":"0","command":"org.apache.cloudstack.api.command.admin.account.DeleteAccountCmd","jobResult":"org.apache.cloudstack.api.response.SuccessResponse/null/{\"success\":true}","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 968
payload_encoding: string
redelivered: True

--------------------------------------------------------------------------------
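
A minimal sketch of what the fix amounts to (hypothetical helper, not the actual event bus code): always attach the entity type and UUID to the published event.

import java.util.HashMap;
import java.util.Map;

class EventPayloadSketch {
    static Map<String, String> describe(String event, String entityType, String entityUuid) {
        Map<String, String> payload = new HashMap<>();
        payload.put("event", event);            // e.g. ACCOUNT.DELETE
        payload.put("entity", entityType);      // e.g. com.cloud.user.Account
        payload.put("entityuuid", entityUuid);  // previously missing from some events
        return payload;
    }
}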

* pr/782:
  CLOUDSTACK-8816 Systemvm reboot event doesn't have uuids. Fixed the same
  CLOUDSTACK-8816: Project UUID is not showing for some operations in RabbitMQ.
  CLOUDSTACK-8816: entity uuid missing in create network event
  CLOUDSTACK-8816: instance uuid is missing in events for delete account
  CLOUDSTACK-8816 Fixed entityUuid missing in some cases in events

Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>
2015-09-14 09:42:44 +05:30
wilderrodrigues
68bf049106 Merge pull request #805 from ekholabs/improvement/callable_CLOUDSTACK-8822
CLOUDSTACK-8822 - Replacing Runnable by Callable
This is the first part of the refactor, which will touch ManagedContextRunnable and all its subclasses. However, in order to reduce impact, the first part comprises the Agent and Nio related classes (NioConnection, NioServer and NioClient).

* All the sub-classes were also updated according to the changes in the super-classes
* Improved exception handling
* There were also code formatting changes

Changes were structural and the NioTest covered them without needing to modify the unit test.
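
A minimal illustration of why Callable helps here (plain java.util.concurrent, not the actual Nio classes): a Callable can return a result and throw checked exceptions, which the caller observes through the Future instead of the error dying inside the thread.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class CallableSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Callable<Boolean> task = () -> {
            // connection handling work would go here; checked exceptions may propagate
            return true;
        };
        Future<Boolean> result = pool.submit(task);
        System.out.println("task succeeded: " + result.get()); // get() rethrows task failures
        pool.shutdown();
    }
}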

This PR is quite extensive. Please wait for the test report before proceeding with further review.

ping @remibergsma @miguelaferreira @bhaisaab @karuturi @wido @DaanHoogland @borisroman @K0zka

Cheers,
Wilder

* pr/805:
  CLOUDSTACK-8822 - Replacing Runnable by Callable in the Task and NioConnection classes

Signed-off-by: wilderrodrigues <wrodrigues@schubergphilis.com>
2015-09-11 14:52:36 +02:00
Sanjay Tripathi
9a35f87d37 CLOUDSTACK-8835: Added alerts in case of template download failure
Reviewed-By: Devdeep
2015-09-11 16:07:35 +05:30
wilderrodrigues
79a3f8c577 CLOUDSTACK-8822 - Replacing Runnable by Callable in the Task and NioConnection classes
- All the sub-classes were also updated according to the changes in the super-classes
- There were also code formatting changes
2015-09-11 11:28:40 +02:00
Wei Zhou
c15729e71a Fix coverity scan 1323172 2015-09-10 08:01:36 +02:00
Rajani Karuturi
b12d4730a6 CLOUDSTACK-8816: entity uuid missing in create network event
*Before*

|                      routing_key                       |     exchange      | message_count |                                                                                                                               payload                                                                                                                               | payload_bytes | payload_encoding | redelivered |
| management-server.ActionEvent.NETWORK-CREATE.Network.* | cloudstack-events | 0             | {"eventDateTime":"2015-09-09 09:35:02 +0530","status":"Completed","description":"Successfully completed creating network. Network Id: 206","event":"NETWORK.CREATE","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"} | 259           | string           | False       |

*After*

|                                                    routing_key                                                    |     exchange      | message_count |                                                                                                                                                                           payload                                                                                                                                                                            | payload_bytes | payload_encoding | redelivered |
| management-server.ActionEvent.NETWORK-CREATE.Network.c9ed8d0d-6e58-456e-be58-28bb809f0b3a                         | cloudstack-events | 52            | {"eventDateTime":"2015-09-09 10:12:37 +0530","status":"Completed","description":"Successfully completed creating network. Network Id: 210","event":"NETWORK.CREATE","entityuuid":"c9ed8d0d-6e58-456e-be58-28bb809f0b3a","entity":"com.cloud.network.Network","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"} | 348           | string           | False       |
2015-09-09 14:23:28 +05:30
Anshul Gangwar
a5555ed229 CLOUDSTACK-8825, CLOUDSTACK-8824 : Fixed following issues if vm.allocation.algorithm is set to firstfitleastconsumed
1. VM deployment failure if there is only ZWPS in setup
2. VM migration is impossible from UI
2015-09-08 17:09:36 +05:30
Wei Zhou
c0a0aec0f9 Merge pull request #732 from ustcweizhou/revert-volume-snapshot-master
Guys, can you review it? Things that need to be discussed:
(1) This supports KVM/QCOW2 only. Does anyone want to implement it for other hypervisors/formats?
(2) The original data volume (on primary storage) will be removed.
(3) The script uses the default timeout in LibvirtComputingResource. Do we need to add one in global configuration (like copy.volume.wait, backup.snapshot.wait or create.volume.from.snapshot.wait)?
(4) In scripts/storage/qcow2/managesnapshot.sh, I use "qemu-img convert -f qcow2 -O qcow2" to copy the snapshot from secondary to primary (hence there is no base image file) instead of "cp -f", because convert was faster than cp in my testing.
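
A hedged sketch of point (4) as seen from Java (paths are placeholders; the real call site is the managesnapshot.sh script):

import java.io.IOException;

class Qcow2CopySketch {
    // Copy a QCOW2 snapshot from secondary to primary storage with qemu-img convert,
    // which also flattens any backing file, instead of a plain "cp -f".
    static void copySnapshot(String src, String dst) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("qemu-img", "convert", "-f", "qcow2", "-O", "qcow2", src, dst)
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new IOException("qemu-img convert failed for " + src);
        }
    }
}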

* pr/732:
  CLOUDSTACK-5863: revert volume snapshot for KVM/QCOW2

Signed-off-by: Wei Zhou <w.zhou@tech.leaseweb.com>
2015-09-01 16:18:40 +02:00
Rajani Karuturi
8bc0294014 Revert "Merge pull request #714 from rafaelweingartner/master-lrg-cs-hackday-003"
This reverts commit cd7218e241a8ac93df7a73f938320487aa526de6, reversing
changes made to f5a7395cc2ec37364a2e210eac60720e9b327451.

Reason for Revert:

noredist build failed with the below error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project cloud-plugin-hypervisor-vmware: Compilation failure
[ERROR] /home/jenkins/acs/workspace/build-master-noredist/plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java:[484,12] error: non-static variable logger cannot be referenced from a static context
[ERROR] -> [Help 1]

Even the normal build is broken, as reported by @koushik-das on the dev list:
http://markmail.org/message/nngimssuzkj5gpbz
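
A minimal reproduction of that compile error (illustrative class, not the actual VMwareGuru code):

class LoggerScopeSketch {
    private final Object logger = new Object(); // instance field, as in VMwareGuru

    void instanceHelper() {
        System.out.println(logger); // fine: instance context
    }

    static void staticHelper() {
        // System.out.println(logger); // error: non-static variable logger
        //                             // cannot be referenced from a static context
    }
}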
2015-08-31 11:27:57 +05:30
Rafael Weingartner
3818257a68 Solved jira ticket: CLOUDSTACK-8750 2015-08-28 22:35:08 -03:00
Rajani Karuturi
46a5f4bc91 Merge pull request #745 from nlivens/CLOUDSTACK-8773
CLOUDSTACK-8773 : NPE in CheckRouterTask, when a DomainRouter happens to be expunged at the same time

* pr/745:
  CLOUDSTACK-8773 : NPE in CheckRouterTask, when a DomainRouter happens to be expunged at the same time

Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>
2015-08-26 15:54:43 +05:30
Likitha Shetty
f499281625 CLOUDSTACK-8602. MigrateVirtualMachineWithVolume leaves old chain data for volume. Update chain info of a volume after migration.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>

This closes #548
2015-08-26 15:15:53 +05:30