CLOUDSTACK-8908 After copying the template, charging for that template is getting stopped. This happens because the zone id is not part of the query. The zone id is added to the query, and unit tests are also added.
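A minimal sketch of the idea behind the fix, written against plain JDBC with assumed table and column names rather than CloudStack's actual usage parser: the lookup for an existing charging record now filters on the zone id as well, so the copied template in the destination zone gets its own record instead of matching the source zone's row.
```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical illustration only; the table and column names are assumptions.
public class TemplateUsageLookupSketch {
    /** Returns true if a charging row already exists for this template in this zone. */
    public boolean hasUsageRow(Connection conn, long accountId, long templateId, long zoneId)
            throws SQLException {
        String sql = "SELECT id FROM usage_storage"
                   + " WHERE account_id = ? AND entity_id = ? AND zone_id = ?"; // zone_id was the missing predicate
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, accountId);
            ps.setLong(2, templateId);
            ps.setLong(3, zoneId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```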
* pr/896:
CLOUDSTACK-8908 After copying the template charging for that template is stopped
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9626: Instance fails to start after unsuccessful compute offering upgrade
ISSUE
============
Instance fails to start after an unsuccessful compute offering upgrade.
TROUBLESHOOTING
==================
We observed that the compute values "cpuNumber", "cpuSpeed" and "memory" get removed from the "user_vm_details" table for the VM instance, which causes the instance to fail to start the next time on XenServer:
```
mysql> select * from user_vm_details where vm_id=10;
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| id  | vm_id | name                               | value                                           | display |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| 218 | 10    | platform                           | viridian:true;acpi:1;apic:true;pae:true;nx:true | 1       |
| 219 | 10    | hypervisortoolsversion             | xenserver56                                     | 1       |
| 220 | 10    | Message.ReservedCapacityFreed.Flag | true                                            | 1       |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
3 rows in set (0.00 sec)
```
```
2016-11-29 06:49:03,667 ERROR [c.c.a.ApiAsyncJobDispatcher] (API-Job-Executor-12:ctx-49c25b1d job-125) (logid:114a2f1b) Unexpected exception while executing org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin
java.lang.NullPointerException
at com.cloud.vm.UserVmManagerImpl.upgradeRunningVirtualMachine(UserVmManagerImpl.java:1664)
at com.cloud.vm.UserVmManagerImpl.upgradeVirtualMachine(UserVmManagerImpl.java:1631)
at com.cloud.vm.UserVmManagerImpl.upgradeVirtualMachine(UserVmManagerImpl.java:1561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:106)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:51)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at com.sun.proxy.$Proxy197.upgradeVirtualMachine(Unknown Source)
at org.apache.cloudstack.api.command.admin.vm.ScaleVMCmdByAdmin.execute(ScaleVMCmdByAdmin.java:48)
at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
at com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:554)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:502)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
```
REPRO STEPS
==================
1. Set the global setting enable.dynamic.scale.vm to true
2. Create a custom compute offering A
3. Create a VM instance applying A, i.e. cpuNumber=1, cpuSpeed=1000, memory=512M
4. Create another custom compute offering B
5. Change the service offering to B, i.e. cpuNumber=2, cpuSpeed=2000, memory=4096M (make sure the memory is more than 4 times the previous size); scaling will fail
6. Stop the VM instance; it will never start again
EXPECTED BEHAVIOR
==================
The VM instance starts successfully
ACTUAL BEHAVIOR
==================
The instance fails to start
RCA:
The rollback does not take care of restoring the old service offering details. On failure we remove the new service offering details, but restoring the old service offering details is missing (see the sketch below).
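A minimal, self-contained sketch of the missing rollback step, with the map standing in for the user_vm_details rows (names are illustrative, not CloudStack's API): snapshot the old details before applying the new offering and write them back if the upgrade fails, instead of only deleting the new values.
```
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; applyOnHypervisor stands in for the actual scale operation.
public class ScaleVmRollbackSketch {
    public void upgrade(Map<String, String> vmDetails, Map<String, String> newOfferingDetails,
                        Runnable applyOnHypervisor) {
        Map<String, String> oldDetails = new HashMap<>(vmDetails); // keep a copy for rollback
        vmDetails.putAll(newOfferingDetails);                      // write cpuNumber/cpuSpeed/memory for the new offering
        try {
            applyOnHypervisor.run();                               // may fail, e.g. not enough capacity
        } catch (RuntimeException e) {
            vmDetails.clear();
            vmDetails.putAll(oldDetails);                          // the missing step: restore, don't leave the row empty
            throw e;
        }
    }
}
```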
Before Fix:
user_vm_details before upgrade:
```
mysql> select * from user_vm_details where vm_id =9;
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| id | vm_id | name | value | display |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| 118 | 9 | platform | viridian:true;acpi:1;apic:true;pae:true;nx:true | 1 |
| 119 | 9 | hypervisortoolsversion | xenserver56 | 1 |
| 120 | 9 | Message.ReservedCapacityFreed.Flag | false | 1 |
| 121 | 9 | cpuNumber | 1 | 1 |
| 122 | 9 | cpuSpeed | 1000 | 1 |
| 123 | 9 | memory | 256 | 1 |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
6 rows in set (0.00 sec)
```
user_vm_details after unsuccessful upgrade:
```
mysql> select * from user_vm_details where vm_id =9;
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| id | vm_id | name | value | display |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| 133 | 9 | platform | viridian:true;acpi:1;apic:true;pae:true;nx:true | 1 |
| 134 | 9 | hypervisortoolsversion | xenserver56 | 1 |
| 135 | 9 | Message.ReservedCapacityFreed.Flag | false | 1 |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
3 rows in set (0.00 sec)
```
After fix:
```
mysql> select * from user_vm_details where vm_id =9;
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| id | vm_id | name | value | display |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
| 166 | 9 | cpuNumber | 1 | 1 |
| 167 | 9 | platform | viridian:true;acpi:1;apic:true;pae:true;nx:true | 1 |
| 168 | 9 | cpuSpeed | 1000 | 1 |
| 169 | 9 | Message.ReservedCapacityFreed.Flag | false | 1 |
| 170 | 9 | memory | 256 | 1 |
| 171 | 9 | hypervisortoolsversion | xenserver56 | 1 |
+-----+-------+------------------------------------+-------------------------------------------------+---------+
6 rows in set (0.00 sec)
```
* pr/1796:
CLOUDSTACK-9626: Instance fails to start after unsuccesful compute offering upgrade.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9654 Missing hypervisor mapping of various SUSE Linux guest os versions on VMware 6.0
**Jira**
CLOUDSTACK-9654 Missing hypervisor mapping of various SUSE Linux guest os versions on VMware 6.0
**Issue:**
Currently many versions of SUSE Linux do not have any hypervisor mapping entry in the guest_os_hypervisor table in the cloud database for VMware 6.0. We also observed that the guest_os_name field is incorrect for some SUSE Linux variants, which results in a deployed instance (with SUSE Linux) having its guest OS type set to "Other (64-bit)" on vCenter, which does not represent the guest OS accurately on the hypervisor.
**Fix:**
Add the missing hypervisor mappings
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
The current (4.9) list of SUSE Linux guest os entries in the database is shown below:
```
mysql> select id,display_name from guest_os where display_name like '%suse%';
+-----+----------------------------------------------+
| id | display_name |
+-----+----------------------------------------------+
| 40 | SUSE Linux Enterprise Server 9 SP4 (32-bit) |
| 41 | SUSE Linux Enterprise Server 10 SP1 (32-bit) |
| 42 | SUSE Linux Enterprise Server 10 SP1 (64-bit) |
| 43 | SUSE Linux Enterprise Server 10 SP2 (32-bit) |
| 44 | SUSE Linux Enterprise Server 10 SP2 (64-bit) |
| 45 | SUSE Linux Enterprise Server 10 SP3 (64-bit) |
| 46 | SUSE Linux Enterprise Server 11 (32-bit) |
| 47 | SUSE Linux Enterprise Server 11 (64-bit) |
| 96 | SUSE Linux Enterprise 8(32-bit) |
| 97 | SUSE Linux Enterprise 8(64-bit) |
| 107 | SUSE Linux Enterprise 9(32-bit) |
| 108 | SUSE Linux Enterprise 9(64-bit) |
| 109 | SUSE Linux Enterprise 10(32-bit) |
| 110 | SUSE Linux Enterprise 10(64-bit) |
| 151 | SUSE Linux Enterprise Server 10 SP3 (32-bit) |
| 152 | SUSE Linux Enterprise Server 10 SP4 (64-bit) |
| 153 | SUSE Linux Enterprise Server 10 SP4 (32-bit) |
| 154 | SUSE Linux Enterprise Server 11 SP1 (64-bit) |
| 155 | SUSE Linux Enterprise Server 11 SP1 (32-bit) |
| 185 | SUSE Linux Enterprise Server 11 SP2 (64-bit) |
| 186 | SUSE Linux Enterprise Server 11 SP2 (32-bit) |
| 187 | SUSE Linux Enterprise Server 11 SP3 (64-bit) |
| 188 | SUSE Linux Enterprise Server 11 SP3 (32-bit) |
| 202 | Other SUSE Linux(32-bit) |
| 203 | Other SUSE Linux(64-bit) |
| 244 | SUSE Linux Enterprise Server 12 (64-bit) |
+-----+----------------------------------------------+
26 rows in set (0.00 sec)
```
The current (4.9) hypervisor mappings for SUSE Linux guest os over VMware 6.0 in the database are shown below. The query result lists all hypervisor mappings for SUSE Linux guest OS over VMware 6.0; many of the guest os entries listed in the previous query result are missing mappings for VMware 6.0, hence the need to add the missing hypervisor mappings (a sketch of adding one such mapping row follows the query output).
```
mysql> select o.id,o.display_name, h.guest_os_name, h.hypervisor_version from guest_os as o, guest_os_hypervisor as h where o.id=h.guest_os_id and h.hypervisor_version='6.0' and h.hypervisor_type='vmware' and o.display_name like '%SUSE%';
+-----+----------------------------------+---------------+--------------------+
| id | display_name | guest_os_name | hypervisor_version |
+-----+----------------------------------+---------------+--------------------+
| 96 | SUSE Linux Enterprise 8(32-bit) | suseGuest | 6.0 |
| 97 | SUSE Linux Enterprise 8(64-bit) | suse64Guest | 6.0 |
| 107 | SUSE Linux Enterprise 9(32-bit) | suseGuest | 6.0 |
| 108 | SUSE Linux Enterprise 9(64-bit) | suse64Guest | 6.0 |
| 109 | SUSE Linux Enterprise 10(32-bit) | suseGuest | 6.0 |
| 110 | SUSE Linux Enterprise 10(64-bit) | suse64Guest | 6.0 |
| 202 | Other SUSE Linux(32-bit) | suseGuest | 6.0 |
| 203 | Other SUSE Linux(64-bit) | suse64Guest | 6.0 |
+-----+----------------------------------+---------------+--------------------+
8 rows in set (0.00 sec)
```
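For illustration only, a hedged JDBC sketch of inserting one missing mapping row; the actual fix ships the mappings through the schema upgrade SQL, and the column list and the VMware guest id value used here are assumptions.
```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical illustration; column names follow the query output above and may differ from the real schema.
public class GuestOsMappingSketch {
    public void addVmware60Mapping(Connection conn, long guestOsId, String vmwareGuestOsName)
            throws SQLException {
        String sql = "INSERT INTO guest_os_hypervisor"
                   + " (uuid, hypervisor_type, hypervisor_version, guest_os_name, guest_os_id, created)"
                   + " VALUES (UUID(), 'VMware', '6.0', ?, ?, NOW())";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, vmwareGuestOsName); // e.g. "sles11_64Guest" (assumed value) for SLES 11 (64-bit)
            ps.setLong(2, guestOsId);           // id from the guest_os table listed above
            ps.executeUpdate();
        }
    }
}
```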
* pr/1817:
CLOUDSTACK-9654 Missing hypervisor mapping of various SUSE Linux guest os versions on VMware 6.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9594: API "list templates templatefilter=all" reveals all templates of all domains
API "list templates templatefilter=all" reveals all templates.
Using a "list templates templatefilter=all" API call any domain admin can see all templates of all domains in ACS. Information returned includes the account and domain of the template's owner.
The template data shows what that VM is using and any hints from the label. This would give an advantage in what attack vectors to use. The account and domain can possibly be used in brute force attack to guess the password and login information.
Test Scenario:
Created two accounts in different domains.
```
mysql> select account_id,username,api_key from user where id in (4,5);
+------------+-----------+----------------------------------------------------------------------------------------+
| account_id | username | api_key |
+------------+-----------+----------------------------------------------------------------------------------------+
| 4 | sudadmin1 | 3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg |
| 5 | sudadmin | N5uHVOrg1Ek1F1a_5OXTz4WpLG3ewHqcbPUSBjQ-2CTJdxmUe2go0S8fyqH4Np0scYiehYg2KqthZXCWEyKx1A |
+------------+-----------+----------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
mysql> select account_name,domain_id from account where id in (4,5);
+--------------+-----------+
| account_name | domain_id |
+--------------+-----------+
| sudadmin | 2 |
| sudadmin1 | 3 |
+--------------+-----------+
2 rows in set (0.00 sec)
```
User sudadmin registered a private template named 'Debian'.
http://10.147.59.107:8080/client/api?apikey=N5uHVOrg1Ek1F1a_5OXTz4WpLG3ewHqcbPUSBjQ-2CTJdxmUe2go0S8fyqH4Np0scYiehYg2KqthZXCWEyKx1A&command=listTemplates&templatefilter=self&signature=ODt7zEWCLL20z1FT%2FIkd1molRaM%3D
listTemplates with "templatefilter=self" lists the newly registered template.
```
<listtemplatesresponse cloud-stack-version="4.8.0">
<count>1</count>
<template>
<id>51026d32-60ee-4e25-8ffd-3fa3c57fc14c</id>
<name>Debian</name>
<displaytext>Debian</displaytext>
<ispublic>false</ispublic>
<created>2016-11-10T17:18:00-0500</created>
<isready>true</isready>
<passwordenabled>false</passwordenabled>
<format>VHD</format>
<isfeatured>false</isfeatured>
<crossZones>false</crossZones>
<ostypeid>38c1fc84-a687-11e6-a8c8-06f654000053</ostypeid>
<ostypename>Debian GNU/Linux 7(64-bit)</ostypename>
<account>sudadmin</account>
<zoneid>25fa5b74-d4c2-4bad-8e3a-ceffcd10985e</zoneid>
<zonename>z1</zonename>
<status>Download Complete</status>
<size>2621440000</size>
<templatetype>USER</templatetype>
<hypervisor>XenServer</hypervisor>
<domain>SUDDOMAIN</domain>
<domainid>a350c00d-4048-4876-ae09-74ad4b7bb28c</domainid>
<isextractable>false</isextractable>
<checksum>e87a6d7291b999c92baa9623c9c3c207</checksum>
<details>{hypervisortoolsversion=xenserver61}</details>
<sshkeyenabled>false</sshkeyenabled>
<isdynamicallyscalable>false</isdynamicallyscalable>
</template>
</listtemplatesresponse>
```
User: sudadmin1
listTemplates with "templatefilter=self" does not list any template.
http://10.147.59.107:8080/client/api?apikey=3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg&command=listTemplates&templatefilter=self&signature=RfKsdg3RxDkqJotbTlHU2RdbdPA%3D
```
<listtemplatesresponse cloud-stack-version="4.8.0"/>
```
NO TEMPLATES
**listTemplates with "templatefilter=all" lists all templates**
http://10.147.59.107:8080/client/api?apikey=3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg&command=listTemplates&templatefilter=all&signature=l5tubfyABT67d1jY702dvtZODbc%3D
Result:
```
<listtemplatesresponse cloud-stack-version="4.8.0">
<count>3</count>
<template>
<id>38451a02-a687-11e6-a8c8-06f654000053</id>
<name>CentOS 5.6(64-bit) no GUI (XenServer)</name>
<displaytext>CentOS 5.6(64-bit) no GUI (XenServer)</displaytext>
<ispublic>true</ispublic>
....
</template>
<template>
<id>51026d32-60ee-4e25-8ffd-3fa3c57fc14c</id>
<name>Debian</name>
<displaytext>Debian</displaytext>
<ispublic>false</ispublic>
<created>2016-11-10T17:18:00-0500</created>
<isready>true</isready>
<passwordenabled>false</passwordenabled>
<format>VHD</format>
<isfeatured>false</isfeatured>
<crossZones>false</crossZones>
<ostypeid>38c1fc84-a687-11e6-a8c8-06f654000053</ostypeid>
<ostypename>Debian GNU/Linux 7(64-bit)</ostypename>
<account>sudadmin</account>
<zoneid>25fa5b74-d4c2-4bad-8e3a-ceffcd10985e</zoneid>
<zonename>z1</zonename>
<size>2621440000</size>
<templatetype>USER</templatetype>
<hypervisor>XenServer</hypervisor>
<domain>SUDDOMAIN</domain>
<domainid>a350c00d-4048-4876-ae09-74ad4b7bb28c</domainid>
<isextractable>false</isextractable>
<checksum>e87a6d7291b999c92baa9623c9c3c207</checksum>
<details>{hypervisortoolsversion=xenserver61}</details>
<sshkeyenabled>false</sshkeyenabled>
<isdynamicallyscalable>false</isdynamicallyscalable>
</template>
<template>
<id>5f6af7bb-d965-4b9b-ab45-6d455b0d6bbe</id>
<name>SystemVM Template (XenServer)</name>
<displaytext>SystemVM Template (XenServer)</displaytext>
<ispublic>false</ispublic>
.....
</template>
</listtemplatesresponse>
```
**After Fix:**
http://10.147.59.107:8080/client/api?apikey=3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg&command=listTemplates&templatefilter=all&signature=l5tubfyABT67d1jY702dvtZODbc%3D
```
<listtemplatesresponse cloud-stack-version="4.8.0">
<count>1</count>
<template>
<id>38451a02-a687-11e6-a8c8-06f654000053</id>
<name>CentOS 5.6(64-bit) no GUI (XenServer)</name>
<displaytext>CentOS 5.6(64-bit) no GUI (XenServer)</displaytext>
<ispublic>true</ispublic>
<created>2016-11-10T09:32:44-0500</created>
<isready>true</isready>
<passwordenabled>false</passwordenabled>
<format>VHD</format>
<isfeatured>true</isfeatured>
<crossZones>true</crossZones>
<ostypeid>38a2bfd6-a687-11e6-a8c8-06f654000053</ostypeid>
<ostypename>CentOS 5.6 (64-bit)</ostypename>
<account>system</account>
<zoneid>25fa5b74-d4c2-4bad-8e3a-ceffcd10985e</zoneid>
<zonename>z1</zonename>
<size>21474836480</size>
<templatetype>BUILTIN</templatetype>
<hypervisor>XenServer</hypervisor>
<domain>ROOT</domain>
<domainid>383e0ea6-a687-11e6-a8c8-06f654000053</domainid>
<isextractable>true</isextractable>
<checksum>905cec879afd9c9d22ecc8036131a180</checksum>
<sshkeyenabled>false</sshkeyenabled>
<isdynamicallyscalable>true</isdynamicallyscalable>
</template>
</listtemplatesresponse>
```
The bug has been fixed considering the points below (a sketch of the visibility check follows this list):
1. templatefilter=all or isofilter=all is applicable only to admin and domain admin.
2. With templatefilter=all or isofilter=all, the visibility of templates in the system is as follows:
- admin should be able to see all templates/ISOs in the system.
- domain admin should be able to see all public templates and templates under its domain tree (including sub-domains).
- domain admin in a project context should be able to see all public templates, templates registered as the project account, and templates which are shared (using the updateTemplatePermissions API) with the project account.
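A hedged sketch of the visibility check described in the list above, using hypothetical types and parameters; the real change is implemented in the template list query builder rather than as a per-template predicate.
```
import java.util.Set;

// Illustrative only; the flags below are assumed inputs, not CloudStack classes.
public class TemplateAllFilterSketch {
    enum CallerType { ROOT_ADMIN, DOMAIN_ADMIN }

    /** Decides whether one template should be listed for the caller under templatefilter=all. */
    public boolean isVisible(CallerType caller,
                             boolean templateIsPublic,
                             long templateDomainId,
                             Set<Long> callerDomainTree,        // caller's domain plus all sub-domains
                             boolean inProjectContext,
                             boolean ownedByProjectAccount,
                             boolean sharedWithProjectAccount) { // via updateTemplatePermissions
        if (caller == CallerType.ROOT_ADMIN) {
            return true;                                         // admin sees every template/ISO
        }
        if (templateIsPublic) {
            return true;                                         // public templates are always visible
        }
        if (inProjectContext) {
            return ownedByProjectAccount || sharedWithProjectAccount;
        }
        return callerDomainTree.contains(templateDomainId);      // domain admin: own domain tree only
    }
}
```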
Also modified "test/integration/component/test_escalation_listTemplateDomainAdmin.py".
This marvin test was written for this scenario, but "templatefilter=all" was not used for the second account.
* pr/1763:
CLOUDSTACK-9594: reverted changes introduced in CLOUDSTACK-9376
CLOUDSTACK-9594: API "list templates templatefilter=all" reveals all templates of all domains
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9269: Missing field for Switch type for Management and Storage traffic types
# Repro Steps:
1. Create an Advanced zone (VMware).
2. Configure the physical network.
3. Edit a traffic type (Management or Storage).
4. Observe that the switch type dropdown list is missing.
If Guest or Public is chosen, the dropdown is visible. See the snapshots below.
# Expected Result:
The switch type list should be shown for Management and Storage.
# Actual Result:
The list is missing for Management and Storage.
# Fix:
Show the vSwitch type for all traffic types in the case of VMware.
# Traffic Type - Guest:
(screenshot)
# Traffic Type - Management (Issue):
(screenshot)
* pr/1396:
CLOUDSTACK-9269: Missing field for Switch type for Management and Storage traffic types
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9656 Preventing autoboxing NPE in Usage by setting a default role when not found
https://issues.apache.org/jira/browse/CLOUDSTACK-9656
This is a workaround to avoid an NPE when using the usage server with Projects.
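A minimal sketch of the workaround with hypothetical names (not the actual Usage server code): the NPE comes from auto-unboxing a null role id for project accounts, so a default is substituted when the lookup finds nothing.
```
import java.util.Map;

// Illustrative only; the default role id is an assumed placeholder.
public class DefaultRoleSketch {
    static final long DEFAULT_ROLE_ID = 4L; // assumed stand-in for a plain "User" role

    public long roleIdFor(Map<Long, Long> roleIdByAccount, long accountId) {
        Long roleId = roleIdByAccount.get(accountId);  // null for a project's internal account
        // 'return roleId;' would auto-unbox and throw a NullPointerException when roleId is null
        return roleId != null ? roleId : DEFAULT_ROLE_ID;
    }
}
```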
To reproduce the bug:
1. Create a project
2. Add an account to the project
3. Create a VM within that project
4. Run the usage server
The same steps test the resolution.
* pr/1820:
CLOUDSTACK-9656: Preventing autoboxing NPE in Usage by setting a default role when not found
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9584: Fix intermittent test failure in `test_volumes`
The component/test_volume failures happen when the disk offering is randomly selected to be a custom one. This fixes that.
* pr/1822:
CLOUDSTACK-9584: Fix intermittent test failure in `test_volumes`
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9659: mismatch in traffic type in ip_associations.json and ips.json
As part of the bug 'CLOUDSTACK-9339 Virtual Routers don't handle Multiple Public Interfaces correctly', the mismatch between the traffic type represented by 'nw_type' in the config sent by the management server in ip_associations.json and how it is persisted in the ips.json data bag was addressed, but the change was missed in the final merge.
This bug adds the functionality in cs_ip.py to lower-case the traffic type sent by the management server before persisting it in the ips.json databag.
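The change itself lives in cs_ip.py; purely to illustrate the normalisation, here is a small Java sketch with assumed key and value names.
```
import java.util.HashMap;
import java.util.Map;

// Illustrative only: lower-case 'nw_type' before persisting, so the value written to
// ips.json matches the form used when the databag is read back.
public class TrafficTypeNormaliseSketch {
    public Map<String, Object> toDatabagEntry(Map<String, Object> ipEntryFromMgmtServer) {
        Map<String, Object> databagEntry = new HashMap<>(ipEntryFromMgmtServer);
        Object nwType = databagEntry.get("nw_type");                      // e.g. "Public" in ip_associations.json
        if (nwType instanceof String) {
            databagEntry.put("nw_type", ((String) nwType).toLowerCase()); // stored as "public" in ips.json
        }
        return databagEntry;
    }
}
```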
* pr/1821:
CLOUDSTACK-9659: mismatch in traffic type in ip_associations.json and ips.json
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9637: Template create from snapshot does not populate vm_template_details
**ISSUE**
============
Template create from snapshot does not populate vm_template_details
**REPRO STEPS**
==================
1. Register a template A and specify property:
Root disk controller: scsi
NIC adapter type: E1000
Keyboard type: us
2. Create a vm instance from template A
3. Take volume snapshot for vm instance
4. Delete VM instance
5. Switch to "Storage->Snapshots", convert snapshot to a template B
6. Observe that template B does not inherit the properties from template A; the vm_template_details table is empty
**SOLUTION**: Retrieve and add source template details to VMTemplateVO.
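A hedged, self-contained sketch of the solution (types and names are illustrative, not the actual VMTemplateVO handling): look up the source template's detail rows and copy them onto the template created from the snapshot.
```
import java.util.HashMap;
import java.util.Map;

// Illustrative only; the outer map stands in for vm_template_details keyed by template id.
public class TemplateFromSnapshotDetailsSketch {
    public Map<String, String> detailsForNewTemplate(Long sourceTemplateId,
                                                     Map<Long, Map<String, String>> detailsByTemplateId) {
        if (sourceTemplateId == null) {
            return new HashMap<>();                  // snapshot cannot be traced back to a template
        }
        Map<String, String> sourceDetails = detailsByTemplateId.get(sourceTemplateId);
        // e.g. rootDiskController=scsi, nicAdapter=E1000, keyboard=us
        return sourceDetails == null ? new HashMap<>() : new HashMap<>(sourceDetails);
    }
}
```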
Before Fix:
```
mysql> select id,name,source_template_id from vm_template where id=202;
+-----+--------+--------------------+
| id | name | source_template_id |
+-----+--------+--------------------+
| 202 | Debian | NULL |
+-----+--------+--------------------+
1 row in set (0.00 sec)
mysql> select * from vm_template_details where template_id=202;
+----+-------------+--------------------+-------+---------+
| id | template_id | name | value | display |
+----+-------------+--------------------+-------+---------+
| 1 | 202 | keyboard | us | 1 |
| 2 | 202 | nicAdapter | E1000 | 1 |
| 3 | 202 | rootDiskController | scsi | 1 |
+----+-------------+--------------------+-------+---------+
3 rows in set (0.00 sec)
mysql> select id,name,source_template_id from vm_template where source_template_id=202;
+-----+----------------+--------------------+
| id | name | source_template_id |
+-----+----------------+--------------------+
| 203 | derived-debian | 202 |
+-----+----------------+--------------------+
1 row in set (0.00 sec)
mysql> select * from vm_template_details where template_id=203;
Empty set (0.00 sec)
```
After Fix:
```
mysql> select id,name,source_template_id from vm_template where source_template_id=202;
+-----+--------------------------+--------------------+
| id | name | source_template_id |
+-----+--------------------------+--------------------+
| 203 | derived-debian | 202 |
| 204 | debian-derived-after-fix | 202 |
+-----+--------------------------+--------------------+
2 rows in set (0.00 sec)
mysql> select * from vm_template_details where template_id=204;
+----+-------------+--------------------+-------+---------+
| id | template_id | name | value | display |
+----+-------------+--------------------+-------+---------+
| 4 | 204 | keyboard | us | 1 |
| 5 | 204 | nicAdapter | E1000 | 1 |
| 6 | 204 | rootDiskController | scsi | 1 |
+----+-------------+--------------------+-------+---------+
3 rows in set (0.00 sec)
```
**Marvin Test :** test_template_from_snapshot_with_template_details.py
**Result:**
```
test_01_create_template_snampshot (integration.component.test_template_from_snapshot_with_template_details.TestCreateTemplate) ... === TestName: test_01_create_template_snampshot | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 864.523s
OK
```
* pr/1805:
CLOUDSTACK-9637: Template create from snapshot does not populate vm_template_details
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
As part of the bug 'CLOUDSTACK-9339 Virtual Routers don't handle Multiple Public Interfaces correctly', the mismatch between the traffic type represented by 'nw_type' in the config sent by the management server in ip_associations.json and how it is persisted in the ips.json data bag was addressed, but the change was missed in the final merge.
This bug adds the functionality in cs_ip.py to lower-case the traffic type sent by the management server before persisting it in the ips.json databag.
CLOUDSTACK-9646: No usage is generated for uploaded templates/volumes from local
Published usage events on successful upload of a template or volume.
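A hedged sketch of the idea with a hypothetical publisher interface; the event type strings are assumptions, not necessarily the exact constants used by the fix. Once the upload completes successfully, a create/upload usage event is published, just as the download path already does.
```
// Illustrative only; UsageEventPublisher is a made-up interface for the sketch.
public class UploadUsageSketch {
    interface UsageEventPublisher {
        void publish(String eventType, long accountId, long zoneId, long entityId, long sizeBytes);
    }

    public void onUploadCompleted(UsageEventPublisher events, boolean isTemplate,
                                  long accountId, long zoneId, long entityId, long sizeBytes) {
        String eventType = isTemplate ? "TEMPLATE.CREATE" : "VOLUME.UPLOAD"; // assumed event names
        events.publish(eventType, accountId, zoneId, entityId, sizeBytes);   // recorded by the usage server
    }
}
```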
* pr/1809:
CLOUDSTACK-9646: No usage is generated for uploaded templates/volumes from local
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9339 Virtual Routers don't handle Multiple Public Interfaces correctly
As pointed out in CLOUDSTACK-9339, when multiple public IPs from different public IP ranges are associated with a VR, VR functionality is broken from 4.6 onwards. Below is a brief list of the problems specific to non-VPC networks addressed in this PR; the PR handles both VPC and non-VPC scenarios.
- Reverse traffic for connections accepted on the eth3 and higher public interfaces is blocked. A rule such as "-A FORWARD -i eth3 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT" is needed in the FORWARD chain of the filter table to permit reverse-path traffic for established connections.
- Outbound public traffic from eth0 to eth3 (or higher interfaces such as eth4, eth5, etc.) needs a rule to run through the FW_OUTBOUND chain in the filter table.
- Network stats on public interfaces such as eth3 are not getting gathered.
- The default gateway is missing in the device-specific routing table, resulting in traffic being looked up in the main routing table.
- Creating a device-specific route table generates "from all lookup Table_eth3" in the ip rules, resulting in the rest of the traffic getting blocked.
Picked a few commits from PR #1519 by dsclose (https://github.com/apache/cloudstack/pull/1519) submitted for 4.7.
Marvin tests are added to verify the following:
- Static NAT works on the public interfaces above eth2, in the case of non-VPC networks
- Port forwarding works on the public interfaces above eth2, in the case of non-VPC networks
- Route tables are configured as expected in the device-specific table for the public interfaces above eth2, in the case of non-VPC networks
- iptables rules are as expected for the traffic from and to the public interfaces above eth2, in the case of non-VPC networks
* pr/1659:
CLOUDSTACK-9339 Virtual Routers don't handle Multiple Public Interfaces correctly
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Issue: Currently many versions of SUSE Linux do not have any hypervisor mapping entry in the guest_os_hypervisor table in the cloud database for VMware 6.0. We also observed that the guest_os_name field is incorrect for some SUSE Linux variants, which results in a deployed instance (with SUSE Linux) having its guest OS type set to "Other (64-bit)" on vCenter, which does not represent the guest OS accurately on the hypervisor.
Fix: Add the missing hypervisor mappings
- When processing a static NAT rule, add a mangle table rule to mark the traffic from the guest VM when it has an associated static NAT rule, so that traffic gets routed using the route table of the device which has the public IP associated.
- Fix the case where nic_device_id is empty when an IP is getting disassociated, resulting in an empty deviceid in ips.json.
- Add utility methods in CsRule and CsRoute to add 'ip rule' and 'ip route' rules respectively.
- Ensure traffic from all public interfaces is connection-marked with the device number and restored for the reverse traffic; use the connection mark number to look up the device-specific routing table, and fill the device-specific routing table with the default route.
- Component tests for testing multiple public interfaces of the VR.
CLOUDSTACK-9632: Upgrade bouncy castle to version 1.55
- Upgrades Maven dependency version to v1.55
- Fixes bouncycastle usages and issues
- Adds timeout to jetty/annotation scanning
- Picks up PR #1510 by Daan
* pr/1799:
CLOUDSTACK-9632: Upgrade bouncy castle to version 1.55
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Upgrades Maven dependency version to v1.55
- Fixes bouncycastle usages and issues
- Adds timeout to jetty/annotation scanning
- Fixes servlet issue, uses servlet 3.1.0
- Downgrade javassist used by reflections to fix annotation process errors
- Make console-proxy-rdp bc dependency same as rest of the codebase
- Picks up PR #1510 by Daan
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9564: Fix NPE due to intermittent test assertion
The test assertion on a pool object may return a null object, as objects can be randomly expired/tombstoned. This fixes an NPE sometimes seen due to the recently merged fix for CLOUDSTACK-9564.
(we can merge this if Travis passes)
/cc @abhinandanprateek @murali-reddy
* pr/1816:
CLOUDSTACK-9564: Fix NPE due to intermittent test assertion
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The test assertion on a pool object may return a null object, as objects can be randomly expired/tombstoned. This fixes an NPE sometimes seen due to the recently merged fix for CLOUDSTACK-9564.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Cloudstack 9586: When using local storage with XenServer, prepareTemplate does not work with multiple primary stores
The race condition happens whenever there are multiple primary storages and CloudStack tries to mount the secondary store to the XenServer host simultaneously.
Due to the synchronised block, one mount is successful and the other thread gets the already mounted SR. Without the fix, the two threads try to mount it in parallel and one fails on XenServer.
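A hedged, self-contained sketch of the serialisation idea (not the actual XenServer resource code): only one mount attempt per (host, secondary store) key runs at a time, and later callers reuse the SR the first thread mounted.
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only; mountSrOnXenServer is a placeholder for the real SR mount call.
public class SecondaryStorageMountSketch {
    private final Map<String, Object> locks = new ConcurrentHashMap<>();
    private final Map<String, String> mountedSrByKey = new ConcurrentHashMap<>();

    public String getOrMountSr(String hostUuid, String secondaryStoreUrl) {
        String key = hostUuid + "|" + secondaryStoreUrl;
        Object lock = locks.computeIfAbsent(key, k -> new Object());
        synchronized (lock) { // only one mount attempt per key; others wait and reuse the result
            return mountedSrByKey.computeIfAbsent(key,
                    k -> mountSrOnXenServer(hostUuid, secondaryStoreUrl));
        }
    }

    private String mountSrOnXenServer(String hostUuid, String secondaryStoreUrl) {
        return "sr-uuid-for-" + secondaryStoreUrl; // placeholder for the actual SR create/introduce call
    }
}
```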
* pr/1765:
Cloudstack 9586: When using local storage with XenServer, prepareTemplate does not work with multiple primary stores. The race condition happens whenever there are multiple primary storages and CloudStack tries to mount the secondary store to the XenServer host simultaneously. Due to the synchronised block, one mount is successful and the other thread gets the already mounted SR. Without the fix, the two threads try to mount it in parallel and one fails on XenServer.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool
In a recent management server crash, it was found that the largest contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.
This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.
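A hedged sketch of the bounded pool idea (simplified, not the actual VmwareContextPool implementation): there is no global registry list at all, and the number of idle contexts kept per pool key is capped.
```
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative only; the cap value is an assumption for the sketch.
public class BoundedContextPoolSketch<T> {
    private static final int MAX_IDLE_PER_KEY = 10;
    private final Map<String, Queue<T>> idleByKey = new ConcurrentHashMap<>();

    /** Return a context to the pool; reject it if the per-key list is already full. */
    public boolean release(String poolKey, T context) {
        Queue<T> idle = idleByKey.computeIfAbsent(poolKey, k -> new ConcurrentLinkedQueue<>());
        if (idle.size() >= MAX_IDLE_PER_KEY) {
            return false; // caller should close/discard the context (best-effort check, fine for a sketch)
        }
        return idle.offer(context);
    }

    /** Borrow an idle context for this key, or null if none is cached. */
    public T acquire(String poolKey) {
        Queue<T> idle = idleByKey.get(poolKey);
        return idle == null ? null : idle.poll();
    }
}
```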
* pr/1729:
CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9635: fix test_privategw_acl.py
Ensure the VLAN used for createPrivateGateway is determined after the guest networks in the VPC are created, so that we skip the VLAN allocated for a guest network when choosing the private network of the VPC gateway.
* pr/1802:
CLOUDSTACK-9635: fix test_privategw_acl.py
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Ensure the VLAN used for createPrivateGateway is determined after the guest networks in the VPC are created, so that we skip the VLAN allocated for a guest network when choosing the private network of the VPC gateway.
The race condition happens whenever there are multiple primary storages and CloudStack tries to mount the secondary store to the XenServer host simultaneously.
Due to the synchronised block, one mount is successful and the other thread gets the already mounted SR. Without the fix, the two threads try to mount it in parallel and one fails on XenServer.
This fixes the build_asf.sh release script to update the checkstyle pom.xml with the provided new version.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9636: The host alerts box should be named as hosts in Alerts.
The host Alerts box shows hosts in Alerts. The name "host Alerts" is misleading; it should be changed to "hosts in alerts".
For the rest of the languages, it should be modified accordingly. As I am not familiar with other languages, contributors familiar with them can suggest the change or open a new PR.
* pr/1803:
CLOUDSTACK-9636: The host alerts box should be named as hosts in Alerts.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8854: Apple Mac OS/X VM gets created without USB controller in ESXi hypervisors
Problem Description: CloudStack doesn't add a USB controller to Apple Mac OS X VMs created on ESXi hypervisors, whereas vSphere Client, by default, adds a USB controller to Mac OS VMs. Mac OS X machines require a USB controller for USB mouse and keyboard access.
Root Cause: The guest OS details are specified in the Virtual Machine Configuration Spec for creating the VM (using the SDK API) on the ESXi hypervisor, but no USB controller is added to the spec. Since the guest OS identification details are specified in the VM Configuration Spec, it was assumed that the Create VM SDK API would create the same defaults in the VM as vSphere Client does. However, as observed, no USB controller is added to a Mac OS VM created through the SDK API.
Resolution: When the guest OS is Apple Mac OS, add a USB controller (EHCI+UHCI, as supported by Mac) to the Virtual Machine Configuration Spec before creating or starting the VM. For any existing Mac OS VMs, stop and start them to add the USB controller. For new VMs with Mac OS, the USB controller is added automatically.
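A hedged sketch with simplified local types (the real change builds the vSphere SDK's VirtualMachineConfigSpec, which is not reproduced here): when the guest OS id indicates Mac OS X, an EHCI+UHCI USB controller device spec is added before the VM is created or started.
```
import java.util.ArrayList;
import java.util.List;

// Illustrative only; DeviceSpec is a stand-in for the SDK's device config spec type.
public class MacOsUsbControllerSketch {
    static class DeviceSpec {
        final String device;
        DeviceSpec(String device) { this.device = device; }
    }

    public List<DeviceSpec> deviceSpecsFor(String guestOsId, List<DeviceSpec> existingSpecs) {
        List<DeviceSpec> specs = new ArrayList<>(existingSpecs);
        if (guestOsId != null && guestOsId.toLowerCase().startsWith("darwin")) {
            specs.add(new DeviceSpec("usb-controller-ehci+uhci")); // what vSphere Client adds to Mac OS VMs by default
        }
        return specs;
    }
}
```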
* pr/828:
CLOUDSTACK-8854: Apple Mac OS/X VM get created without USB controller in ESXi hypervisors
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
In a recent management server crash, it was found that the largest contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.
This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>