Steps to reproduce issue
Deploy a VM
Take snapshot of the root volume
Delete the snapshot
Before the garbage collector has run, shut down the VM and assign it to another user.
When the garbage collector executes, an NPE shows up in the logs.
This happens when the root disk size is overridden. The primary storage limit check should be performed based on the overridden size instead of the template size. Root disk resize tests are now enabled to run on the simulator as well.
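The gist of the corrected check, as a minimal sketch (the class, method, and parameter names are illustrative, not CloudStack's actual code):

```java
final class RootDiskLimitCheck {
    // Returns the primary storage amount (in bytes) to count against the
    // account's limit: the overridden root disk size when one was supplied,
    // otherwise the template's size.
    static long sizeToCheck(long templateSizeBytes, Long overriddenSizeBytes) {
        return overriddenSizeBytes != null ? overriddenSizeBytes : templateSizeBytes;
    }
}
```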
This feature allows admins to dedicate a range of public IP addresses to the SSVM and CPVM, such that they can be subject to specific external firewall rules. The option to dedicate a public IP range to the System VMs (SSVM & CPVM) is added to the createVlanIpRange API method and the UI.
Solution:
A global setting, 'system.vm.public.ip.reservation.mode.strictness', is added to determine whether the use of the system VM reservation is strict (when true) or preferred (when false); it is false by default.
When a range has been dedicated to System VMs, CloudStack should apply IPs from that range to the public interfaces of the CPVM and the SSVM, depending on the global setting's value:
- If the global setting is false: CloudStack will use any unused and unreserved public IP addresses for system VMs only when the pool of reserved IPs has been exhausted.
- If the global setting is true: CloudStack will fail to deploy the system VM when the pool of reserved IPs has been exhausted, citing the lack of available IPs.
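A minimal sketch of this strict vs. preferred selection (the method and pool names are illustrative, not CloudStack's actual code):

```java
import java.util.Deque;

final class SystemVmIpPicker {
    static String pickPublicIp(Deque<String> reservedPool, Deque<String> freePool, boolean strict) {
        String ip = reservedPool.poll();      // always prefer the range dedicated to system VMs
        if (ip != null) {
            return ip;
        }
        if (strict) {                         // strict mode: fail rather than fall back
            throw new IllegalStateException("No IPs left in the system VM reserved range");
        }
        return freePool.poll();               // preferred mode: fall back to any free public IP
    }
}
```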
UI Changes
Under Infrastructure -> Zone -> Physical Network -> Public -> IP Ranges, the 'Account' button label is changed to 'Set reservation'.
When that button is clicked, the dialog displayed is also reworked: it includes a new 'System VMs' checkbox, which indicates whether the range should be dedicated to the CPVM and SSVM, and a note describing its usage.
When clicking the button for any created range, the dialog displayed indicates whether the IP range is dedicated to system VMs or not.
The first PR (#1176) intended to solve CLOUDSTACK-9025 only tackled the problem for CloudStack deployments that use a single hypervisor type (restricted to XenServer). Additionally, the lack of information regarding that solution (poor documentation, test cases, and descriptions in the PRs and the Jira ticket) led to the code being removed in #1124 after a long discussion and analysis in #1056. That piece of code seemed logicless (and it was!). It would receive a hostId and then change that hostId for another hostId of the zone without doing any check; it was not even checking the hypervisor and the storage the host was plugged into.
The problem reported in CLOUDSTACK-9025 is caused by partial snapshots taken in XenServer. That is, we do not take a complete snapshot, but a partial one that contains only the modified data. This requires rebuilding the VHD hierarchy when creating a template out of the snapshot. The point is that the first hostId received is not a hostId at all, but a system VM ID (the SSVM's). That is why the code in #1176 fixed the problem for some deployment scenarios, but caused problems in scenarios with multiple hypervisors in the same zone. We need to execute the creation of the VHD that represents the template on the hypervisor, so the VHD chain can be built using the parent links.
This commit changes the method com.cloud.hypervisor.XenServerGuru.getCommandHostDelegation(long, Command). From now on, we replace the hostId that is intended to execute the "copy command" that will create the template's VHD, according to some conditions that were already in place. The idea is that starting with XenServer 6.2.0 hotfix ESP1004 we need to execute the command on the hypervisor host and not from the SSVM. Moreover, the method was made more readable and understandable, and test cases were created to ensure that from XenServer 6.2.0 hotfix ESP1004 onward we change the hostId that will be used to execute the "copy command".
Furthermore, we no longer select a random host from the zone. A new method called "findHostConnectedToSnapshotStoragePoolToExecuteCommand" was introduced in the HostDao; using it, we look for a host in the cluster that uses the storage pool holding the volume from which the snapshot was taken. By doing this, we guarantee that the host connected to the primary storage where all of the snapshot's parent VHDs are stored is used to create the template.
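To make the delegation condition concrete, here is a self-contained sketch of the version gate (the constant and helper names are illustrative; in the real code the delegate host is then looked up via findHostConnectedToSnapshotStoragePoolToExecuteCommand):

```java
final class XenServerDelegationSketch {
    // XenServer 6.2.0 with hotfix ESP1004 is the first version for which the
    // template-from-snapshot copy must run on a hypervisor host, not the SSVM.
    static final String MIN_VERSION = "6.2.0";

    static boolean runCopyOnHypervisorHost(String hostVersion, boolean hasEsp1004Hotfix) {
        return compareVersions(hostVersion, MIN_VERSION) >= 0 && hasEsp1004Hotfix;
    }

    static int compareVersions(String a, String b) {
        String[] pa = a.split("\\.");
        String[] pb = b.split("\\.");
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int x = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int y = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (x != y) {
                return Integer.compare(x, y);
            }
        }
        return 0;
    }
}
```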
Consider using Disabled hosts when no Enabled hosts are found
This also closes #2317
The fix for CLOUDSTACK-7931 enforces that a valid integer value be configured for integration.api.port and usage.sanity.check.interval. As a result, these global configs can't be reset back to null (the default).
While creating the response object for the 'listDomain' API, several database calls are triggered to fetch details like the parent domain, project limit, IP limit, etc. These calls are made for each record found by the main fetch query, which slows down the response.
Fix:
The number of database transactions is reduced to improve the response time of the listDomain API.
This extends the work presented in #2048, which provided the ability to extend the management range.
Aim
This PR allows separating the management network subnet used by the SSVM and CPVM from the virtual routers' management subnet.
Detailed use case
PCI compliance requires that network elements be defined as 'in scope' or 'out of scope' for compliance purposes. The SSVM and CPVM are both in scope, as they accept public HTTP or HTTPS connections. The virtual routers have been defined as out of scope, as they are placed entirely in a firewalled network segment. However, all of the system VM types share the management network. Since the SSVM and CPVM are both in scope, this would bring the virtual routers into scope as well, requiring individual audits of every virtual router. As this is not practical, the management network the SSVM and CPVM are on and the management network the virtual routers are on must be separated by a firewall.
Description
This feature makes it possible to dedicate a created range to the SSVM and CPVM (system VMs) and provide a VLAN ID for that range.
A new boolean global configuration is added: system.vm.management.ip.reservation.mode.strictness. If enabled, the use of System VM management IP reservation is strict; otherwise it is preferred. The default value is false (preferred).
Strict reservation: system VMs try to get a private IP from a range marked for system VMs. If none is available, deployment fails.
Preferred reservation: system VMs try to get a private IP from a range marked for system VMs. If none is available, an IP from a range not marked for system VMs is taken.
The DB queries in the listTemplate API could be optimized to fetch unique results from the database, which helps reduce the listTemplate API response time.
In CLOUDSTACK-9886, ping.interval and ping.timeout were read using ConfigurationDao, which reads the DB directly. This has been replaced with a ConfigKey-based approach.
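A sketch of the ConfigKey-based read, following CloudStack's ConfigKey pattern (the description text and default shown here are illustrative):

```java
import org.apache.cloudstack.framework.config.ConfigKey;

public class PingSettingsSketch {
    static final ConfigKey<Integer> PingInterval = new ConfigKey<>("Advanced", Integer.class,
            "ping.interval", "60", "Interval (in seconds) between agent pings", false);

    int pingInterval() {
        // Resolved through the config framework (cached, scope-aware) instead
        // of a direct DB query such as configDao.getValue("ping.interval").
        return PingInterval.value();
    }
}
```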
A snapshot on primary storage is not cleaned up after storage migration. This happens in the following scenario:
Steps To Reproduce
Create an instance on the local storage on any host
Create a scheduled snapshot of the volume:
Wait until ACS has created the snapshot. ACS creates the snapshot on local storage and transfers it to secondary storage, but the latest snapshot stays on local storage. This is as expected.
Migrate the instance to another XenServer host with ACS UI and Storage Live Migration
The snapshot on the old host's local storage is not cleaned up and stays there, so local storage fills up with unneeded snapshots.
Using cloudmonkey, when invoking the update template API call, the isdynamicallyscalable field is not displayed as part of the template response.
Fix done:
The org.apache.cloudstack.api.response.TemplateResponse isdynamicallyscalable field is now populated in the newUpdateResponse method of server/src/com/cloud/api/query/dao/TemplateJoinDaoImpl.java.
Unit test:
The unit test server/test/com/cloud/api/query/dao/TemplateJoinDaoImplTest.java testNewUpdateResponse() verifies that the TemplateResponse is populated correctly.
Marvin test:
The Marvin nosetest integration/smoke/test_templates.py test_02_edit_template(self) confirms that the template_response.isdynamicallyscalable field gets populated with the correct user data.
Test scenario:
Using cloudmonkey, when invoking the 'update template' API call, it should now display the isdynamicallyscalable field as part of its template response.
Consider this scenario:
1. User launches a VM from a template and keeps it running
2. Admin logs in and deletes that template [CloudPlatform does not check for existing/running VMs when the deletion is done]
3. User resets the VM
4. CloudPlatform fails to start the VM, as it cannot find the corresponding template.
It throws an error such as:
java.lang.RuntimeException: Job failed due to exception Resource [Host:11] is unreachable: Host 11: Unable to start instance due to can't find ready template: 209 for data center 1
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:113)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:495)
The client is requesting better handling of this scenario. We need to check for existing/running VMs when a template is deleted and warn the admin about the possible issue that may occur.
REPRO STEPS
==================
1. Launch a VM from a template and keep it running
2. Now delete that template
3. Reset the VM
4. CloudPlatform fails to start the VM, as it cannot find the corresponding template.
EXPECTED BEHAVIOR
==================
CloudPlatform should throw a warning message when a template being deleted is in use by existing/running VMs.
ACTUAL BEHAVIOR
==================
CloudPlatform does not throw any warning.
* Cleanup and Improve NetUtils
This class had many unused methods, inconsistent names and redundant code.
This commit cleans up code, renames a few methods and constants.
The global/account setting 'api.allowed.source.cidr.list' is set
to 0.0.0.0/0,::/0 by default to preserve the current behavior and thus
allow API calls for accounts from all IPv4 and IPv6 subnets.
Users can set it to a comma-separated list of IPv4/IPv6 subnets to
restrict API calls for Admin accounts to certain parts of their network(s).
This improves security: should an attacker steal the Access/Secret key
of an account, he/she would still need to be in a subnet from which accounts
are allowed to perform API calls.
This is a good security measure for APIs which are connected to the public internet.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Ran into an issue today where we passed both the "id" and "domainid" parameters to "listAccounts" and received a response even though the account id passed did not belong to the domainid passed.
Allow usage of "domainid" AND "id" in "listAccounts"
- Adding "AccountDoa::findActiveAccountById"
- Adding "AccountDaoImpl::findActiveAccountById"
- Removing seemingly pointless "listForDomain" parameter
- Updating "typeNEQ" value from "5" to "Account.ACCOUNT_TYPE_PROJECT"
(which is "5")
- Only attempt to load domain for "path" query parameter once
"searchForAccountsInternal" input validation logic pseudo-code:
- If "domainid" set, check immediately
- If "id" not set:
- and user is admin and "listall" is true
- if "domainid" not set, use caller domain id
- force "isrecursive" true
- else use caller account id
- Else if "domainid" and "name" set
- verify existence of account and that user has access
- Else:
- if "domainid" not set, locate account by "id"
- else, locate account by "id" and "domainid"
- verify account found and caller has access rights
The createSnapshotPolicy API creates multiple entries in the DB for the same parameters if multiple threads are executed in parallel.
STEPS TO REPRODUCE:
Create a new machine having a root and a data disk.
Make sure that no existing snapshot policy is present for the volume.
Execute multiple threads in parallel for the createSnapshotPolicy API, with all required parameters exactly the same.
Check the snapshot_policy table in the DB; it will contain multiple entries for the same policy.
Execute the same threads once again, changing any API parameter; the existing entries get modified in the DB and no new entries are added.
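A self-contained sketch of one way to make such concurrent calls safe: serialize policy creation per volume and re-check for an existing policy before inserting. The names here are illustrative, not the actual CloudStack fix:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class SnapshotPolicyGuard {
    private final Map<Long, Object> volumeLocks = new ConcurrentHashMap<>();
    private final Map<Long, String> policies = new ConcurrentHashMap<>(); // stand-in for snapshot_policy

    String createOrUpdatePolicy(long volumeId, String schedule) {
        Object lock = volumeLocks.computeIfAbsent(volumeId, id -> new Object());
        synchronized (lock) {                       // one creator per volume at a time
            String existing = policies.get(volumeId);
            if (schedule.equals(existing)) {
                return existing;                    // identical policy exists: no new row
            }
            policies.put(volumeId, schedule);       // create new or update the existing entry
            return schedule;
        }
    }
}
```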
Add the resource type name to the request and response of the listResources API call.
This adds to the response a new attribute, typename, carrying the String value of the corresponding resource enum:
{
"capacitytotal": 0,
"capacityused": 0,
"percentused": "0",
"type": 19,
"typename": "gpu",
"zoneid": "381d0a95-ed4a-4ad9-b41c-b97073c1a433",
"zonename": "ch-dk-2"
}
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
Python's base64 module requires that the string be a multiple of 4 characters, but
the Apache codec does not. The RFC states this is not mandatory, so such data should
not fail the VR script (vmdata.py).
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR fixes an issue where there is a leak when making an API call that lists VPCs with a domain account and projectId=-1.
Note for reviewers: the code formatting changed many lines in the commit, but the actual change is in lines 2467-2471.
This fixes the issue that a non-root user cannot tag a network ACL item.
After the fix, a non-root user still cannot tag a globally defined
ACL item, only the ACLs they have access to.
This includes test related fixes and code review fixes based on
reviews from @rafaelweingartner, @marcaurele, @wido and @DaanHoogland.
This also includes VMware disk-resize limitation bug fix based on comments
from @sateesh-chodapuneedi and @priyankparihar.
This also includes the final changes to systemvmtemplate and fixes to
code based on issues found via test failures.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
In a VMware 5.5u3 environment it was found that the CPVM and SSVM would
get the same public IP. After another investigative review of the
fetchNewPublicIp method, it was found that it would always pick up the
first IP from the SQL query result.
The cause was that, with the new changes, no table/row locks are taken
and the first item is used without looping through the list of
available free IPs. The previous implementation, which put the IP
address in the Allocating state, did not check that it was a free IP.
In this refactoring/fix, the first free IP is first marked as Allocating
and, if assignment is requested, it is then changed to the Allocated state.
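A self-contained sketch of the corrected selection: loop over the candidates and mark the first genuinely free IP as Allocating, instead of blindly taking the first row of the query result (the type and method names are illustrative):

```java
import java.util.Map;

final class PublicIpSelectorSketch {
    enum State { FREE, ALLOCATING, ALLOCATED }

    static synchronized String fetchNewPublicIp(Map<String, State> candidates) {
        for (Map.Entry<String, State> e : candidates.entrySet()) {
            if (e.getValue() == State.FREE) {   // verify the IP really is free
                e.setValue(State.ALLOCATING);   // mark it before handing it out
                return e.getKey();              // a later assign flips it to ALLOCATED
            }
        }
        return null;                            // pool exhausted
    }
}
```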
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes incorrect total host memory in listHosts and related host
responses, a regression introduced in #2120.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes test failures around VMware with the new systemvmtemplate.
In addition:
- Does not skip rVR related test cases for VMware
- Removes rc.local
- Processes unprocessed cmd_line.json
- Fixed NPEs around VMware tests/code
- On VMware, use udevadm to reconfigure the nic/mac address rather than rebooting
- Fix proper acpi shutdown script for faster systemvm shutdowns
- Give at least 256MB of swap for VRs to avoid OOM on VMware
- Fixes smoke tests for environment related failures
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
On XenServer, both redundant routers' VIFs were being deleted when any
PF rule was removed from any of the acquired public IPs. This fix
ensures that lastIp is set to `false` when processed by hypervisor
resources, to avoid removing VIFs when VPCs have any source NAT IP.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Several systemvmtemplate optimizations
- Uses new macchinina template for running smoke tests
- Switch to latest Debian 9.3.0 release for systemvmtemplate
- Introduce a new `get_test_template` that uses a tiny test template
such as macchinina, as defined in test_data.py
- rVR related fixes and improvements
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Fixes timezone issue where dates show up as invalid in the UI
- Introduces new event timeline listing/filtering of events
- Several UI improvements to add columns in list views
- Bulk operations support in the instance list view to shut down and destroy
multiple selected VMs (limitation: after the operation, redundant entries
may show up in the list view; refreshing the VM list view fixes that)
- Align table thead/tbody to avoid splitting of tables
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The `assignDedicateIpAddress` method previously marked the newly fetched
IP as Allocated, but now it does not. This fails for VPCs, where source
NAT IPs are retained in the Allocating state and never become Allocated
after creation.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem:
When a VM is deployed in an affinity group whose cluster is dedicated to a subdomain (the zone is dedicated to the parent domain), the deployment succeeds. We can also stop the VM and remove the affinity group, but adding the affinity group back fails.
Root cause:
During VM deployment there is a clear check on the affinity type (account/domain). Since the acl_type is "domain", it does not expect the entities to have the same owner.
But when updating a VM's affinity there is no such check for acl_type "domain".
Solution:
The fix makes the access check similar to the one at VM deployment, which does not expect the entities to have the same owner if the acl_type is "domain".
This feature allows CloudStack administrators to create layer 2 networks in CloudStack. As these networks are purely layer 2, they don't require IP addresses or a Virtual Router; only a VLAN is necessary (provided by the administrator or assigned by CloudStack). Network services such as DNS and DHCP must be handled externally, as they are not provided by L2 networks.
As a consequence, a new Guest Network type is created within CloudStack: L2
Description:
Network offerings and networks support a new guest type: L2.
L2 network offering creation allows the administrator to select Specify VLAN or let CloudStack assign it dynamically.
L2 network creation allows the administrator to specify a VLAN tag (if the network offering allows it) or simply create the network.
VM deployments on L2 networks:
VMs do not get IP addresses or any network service
No Virtual Router is deployed on the network
If the network offering does not allow specifying a VLAN, the network gets implemented using a dynamically assigned VLAN
UI changes
A new button is added to the Networks tab, available to admins, to allow L2 network creation.
At present, the management IP range can only be expanded within the same subnet: relative to the existing range, either the last IP can be extended forward or the first IP can be extended backward. We cannot add an entirely different range from the same subnet, so the expansion is bound to the existing, fixed range. When the range is exhausted and a user wants to deploy more system VMs, the operation fails. The purpose of this feature is to expand the range of management network IPs within the existing subnet. It also supports deleting and listing the IP ranges.
Please refer to the FS here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Expansion+of+Management+IP+Range
ISSUE:
Duplicate public VLAN for two different admin accounts.
STEPS TO REPRODUCE:
Start multiple threads for executing createVlanIpRange API.
Make sure multiple threads run in parallel.
Check the vlan table in the DB; duplicate entries for the same VLAN and IP address range will be found. Only the id and uuid differ; all other fields have identical values.
The following entries will be observed in the vlan table:
mysql> select * from vlan where vlan_id like 'vlan://77' and removed is null;
+----+--------------------------------------+-----------+--------------+-----------------+---------------------------+----------------+----------------+------------+---------------------+-------------+----------+-----------+---------+---------------------+
| id | uuid | vlan_id | vlan_gateway | vlan_netmask | description | vlan_type | data_center_id | network_id | physical_network_id | ip6_gateway | ip6_cidr | ip6_range | removed | created |
+----+--------------------------------------+-----------+--------------+-----------------+---------------------------+----------------+----------------+------------+---------------------+-------------+----------+-----------+---------+---------------------+
| 15 | 6a205b78-d162-43e3-8da9-86a3ff60f40e | vlan://77 | 10.112.63.65 | 255.255.255.192 | 10.112.63.66-10.112.63.70 | VirtualNetwork | 1 | 200 | 200 | NULL | NULL | NULL | NULL | 2017-12-13 12:55:51 |
| 17 | ff8b5175-b247-45a5-b8d3-feb6a1ca64d0 | vlan://77 | 10.112.63.65 | 255.255.255.192 | 10.112.63.66-10.112.63.70 | VirtualNetwork | 1 | 200 | 200 | NULL | NULL | NULL | NULL | 2017-12-13 12:55:51 |
+----+--------------------------------------+-----------+--------------+-----------------+---------------------------+----------------+----------------+------------+---------------------+-------------+----------+-----------+---------+---------------------+
MySQLTransactionRollbackException is seen frequently in logs
Root Cause
Attempts to lock rows in the core data access layer of the database fail if there is a possibility of deadlock. However, operations are not retried in case of a deadlock, so retries are introduced here.
Solution
Operations are retried after some wait time when a deadlock exception occurs.
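A generic sketch of this retry-on-deadlock behavior; the attempt count and back-off here are illustrative, not the values used in the fix (MySQL's MySQLTransactionRollbackException is a subclass of java.sql.SQLTransactionRollbackException):

```java
import java.sql.SQLTransactionRollbackException;
import java.util.concurrent.Callable;

final class DeadlockRetry {
    static <T> T withRetry(Callable<T> op, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (SQLTransactionRollbackException e) {
                if (attempt >= maxAttempts) {
                    throw e;                  // give up after the final attempt
                }
                Thread.sleep(100L * attempt); // wait a bit, then retry the operation
            }
        }
    }
}
```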
This change adds allocatediops to the ListStoragePool API response. It applies to managed storage, where a guaranteed minimum IOPS is set, and is useful for monitoring whether the IOPS limit on a storage cluster has been reached.
Related to 86bbe211f2 and CLOUDSTACK-494. Currently we cannot have several VPCs in one account with different VPN customer gateway configurations for the same gateway IP.
Root Cause:
Earlier, it failed with an ArrayIndexOutOfBoundsException when the list was empty and the first element was accessed.
The error was only observed in the log and was not shown in the UI, as no exception was thrown.
Hence the API call was, in turn, successful.
Solution:
Added an empty check before sending device details, throwing an exception which says either the required GPU device is not available or it is out of capacity.
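A minimal sketch of the guard (the class and method names are illustrative):

```java
import java.util.List;

final class GpuDeviceCheck {
    static String firstDevice(List<String> devices) {
        if (devices == null || devices.isEmpty()) {
            // Fail loudly instead of hitting an ArrayIndexOutOfBoundsException
            // that only shows up in the log.
            throw new IllegalStateException(
                    "Required GPU device is either not available or out of capacity");
        }
        return devices.get(0);
    }
}
```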
Create Snapshot with quiesce option set to true fails with InvalidParameterValueException
The --quiescevm option is only supported with the NetApp plugin on VMware. When NetApp is not present, snapshots would be stuck in the Allocated state, and deletion was not supported in that state. To fix this issue, deletion is now supported in the Allocated state as well.
Depending on the timezone in which you're running CS (timezones behind GMT), you could find that some jobs are marked as failed because the parent job got a null result despite its child job having successfully done the work. The child job got deleted by the CleanupTask ahead of time, due to a missing datetime conversion to the GMT timezone.
Jobs are failing with this message: Job failed with un-handled exception
The fix corrects any datetime used in the code that should use the GMT timezone instead of the local one, since all DB datetimes should be stored in GMT.
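A sketch of the kind of conversion this implies: format job datetimes in GMT so that comparisons (e.g. in the CleanupTask) match what the DB stores (the format string is illustrative):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

final class GmtDates {
    static String toDbDateString(Date d) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        fmt.setTimeZone(TimeZone.getTimeZone("GMT")); // always GMT, never the JVM's local zone
        return fmt.format(d);
    }
}
```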