As requested here: https://github.com/apache/cloudstack/pull/1495
No scripts are using Perl, so that install requirement can be removed.
The new scripts use standard Python packages only.
Includes extensive unit tests.
CLOUDSTACK-9348: NioConnection improvements
Reopened PR with squashed changes for re-review and testing after https://github.com/apache/cloudstack/pull/1493 and subsequent PRs were reverted
* pr/1549:
CLOUDSTACK-9348: NioConnection improvements
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9365: updateVirtualMachine with userdata should not error when a VM is attached to multiple networks of which one or more don't support userdata
* pr/1523:
Marvin script for cloudstack-9365
CLOUDSTACK-9365: updateVirtualMachine with userdata should not error when a VM is attached to multiple networks of which one or more don't support userdata
Signed-off-by: Will Stevens <williamstevens@gmail.com>
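For illustration, a minimal self-contained sketch of the intended semantics, with stand-in types; the names and the zero-networks behavior are assumptions, not the actual patch:

```java
import java.util.List;

// Sketch only: apply userdata per network and quietly skip networks whose
// offering lacks the UserData service, instead of failing the whole
// updateVirtualMachine call. Types and names here are stand-ins.
public class UserDataUpdateSketch {
    static class Network {
        final long id;
        final boolean supportsUserData;   // true if the offering includes the UserData service
        Network(long id, boolean supportsUserData) { this.id = id; this.supportsUserData = supportsUserData; }
    }

    static int applyUserData(List<Network> vmNetworks, String userData) {
        int applied = 0;
        for (Network n : vmNetworks) {
            if (!n.supportsUserData) {
                continue;                 // skip instead of throwing for the whole VM update
            }
            // ... push userData to this network's userdata provider here ...
            applied++;
        }
        return applied;                   // caller may still warn if this is 0 (assumption)
    }
}
```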
Taking fast and efficient volume snapshots with XenServer (and your storage provider)
A XenServer storage repository (SR) and virtual disk image (VDI) each have a UUID that is immutable.
This poses a problem for SAN snapshots if you intend to mount the underlying snapshot SR alongside the source SR (duplicate UUIDs).
VMware has a solution for this called re-signaturing (in other words, the snapshot UUIDs can be changed).
This PR only deals with the CloudStack side of things, but it works in concert with a new XenServer storage manager created by CloudOps (this storage manager enables re-signaturing of XenServer SR and VDI UUIDs).
I have written Marvin integration tests to go along with this, but cannot yet check those into the CloudStack repo as they rely on SolidFire hardware.
If anyone would like to see these integration tests, please let me know.
JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-9281
Here's a video I made that shows this feature in action:
https://www.youtube.com/watch?v=YQ3pBeL-WaA&list=PLqOXKM0Bt13DFnQnwUx8ZtJzoyDV0Uuye&index=13
* pr/1403:
Faster logic to see if a cluster supports resigning
Support for backend snapshots with XenServer
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9366: Capacity of one zone-wide primary storage ignored
The Disable Host and Remove Host operations also disable primary storage capacity.
Steps to replicate:
Base Condition: there exists a host and a storage pool with the same id
Steps:
1. Find a host and a storage pool having the same id
2. Disable the host
3. CPU (1) and MEMORY (0) capacity in op_host_capacity for the above host is disabled
4. STORAGE (3) capacity in op_host_capacity for the storage pool with the same id as the above host is also disabled
RCA:
The 'host_id' column in the 'op_host_capacity' table is used to store both the storage pool id (for STORAGE capacity) and the host id (for MEMORY and CPU). While disabling a host we also disable the capacity associated with it.
Ideally, while disabling a host we should only disable its MEMORY and CPU capacity, but we are not doing so.
Code Path:
ResourceManagerImpl.doDeleteHost() -> ResourceManagerImpl.resourceStateTransitTo() -> CapacityDaoImpl.updateCapacityState(null, null, null, host.getId(), capacityState.toString())
updateCapacityState disables all entries that match the host_id. This also disables an entry whose storage pool id is the same as the host id.
Changes:
Introduced a new capacityType parameter in the updateCapacityState method, with the necessary changes to add a capacity_type clause to the SQL.
Also fixed incorrect SQL builder logic (an unused code path, so the bug never surfaced).
Added a Marvin test to check host and storage pool capacity when a host is disabled.
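For illustration, a minimal self-contained sketch of the SQL-building change (the statement shape and method name follow the description above but are assumptions): with the capacity_type filter, disabling host 3 no longer touches the STORAGE row that shares its id.

```java
// Sketch only, not the actual CapacityDaoImpl code.
public class CapacitySqlSketch {
    static String buildUpdateSql(long hostId, short[] capacityTypes) {
        StringBuilder sql = new StringBuilder(
                "UPDATE op_host_capacity SET capacity_state = ? WHERE host_id = " + hostId);
        // Append the clause only when capacity types were actually passed,
        // mirroring the "length is greater than 0" guard described below.
        if (capacityTypes != null && capacityTypes.length > 0) {
            sql.append(" AND capacity_type IN (");
            for (int i = 0; i < capacityTypes.length; i++) {
                sql.append(i == 0 ? "" : ",").append(capacityTypes[i]);
            }
            sql.append(")");
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        // Disable only MEMORY (0) and CPU (1) for host 3; STORAGE (3) rows stay enabled.
        System.out.println(buildUpdateSql(3L, new short[] {0, 1}));
    }
}
```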
Test Result:
```
Before Fix:
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Enabled | MEMORY | 8589934592 | HOST |
| 3 | Enabled | CPU | 32000 | HOST |
| 3 | Enabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
9 rows in set (0.00 sec)
Disable Host 3 from UI.
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Disabled | MEMORY | 8589934592 | HOST |
| 3 | Disabled | CPU | 32000 | HOST |
| 3 | Disabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
After Fix:
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Enabled | MEMORY | 8589934592 | HOST |
| 3 | Enabled | CPU | 32000 | HOST |
| 3 | Enabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
3 rows in set (0.01 sec)
Disable Host 3 from UI.
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Disabled | MEMORY | 8589934592 | HOST |
| 3 | Disabled | CPU | 32000 | HOST |
| 3 | Enabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
3 rows in set (0.00 sec)
Sudhansus-MAC:cloudstack sudhansu$ nosetests-2.7 --with-marvin --marvin-config=setup/dev/advanced.cfg test/integration/component/maint/test_capacity_host_delete.py
==== Marvin Init Started ====
=== Marvin Parse Config Successful ===
=== Marvin Setting TestData Successful===
==== Log Folder Path: /tmp//MarvinLogs//Apr_22_2016_22_42_27_X4VBWD. All logs will be available here ====
=== Marvin Init Logging Successful===
==== Marvin Init Successful ====
===final results are now copied to: /tmp//MarvinLogs/test_capacity_host_delete_9RHSNB===
Sudhansus-MAC:cloudstack sudhansu$ cat /tmp//MarvinLogs/test_capacity_host_delete_9RHSNB/results.txt
test_01_op_host_capacity_disable_host (integration.component.maint.test_capacity_host_delete.TestHosts) ... === TestName: test_01_op_host_capacity_disable_host | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 0.168s
OK
```
* pr/1516:
CLOUDSTACK-9366: Capacity of one zone-wide primary storage ignored
Signed-off-by: Will Stevens <williamstevens@gmail.com>
dynamic-roles: packaging improvements
On some MySQL server environments, the 'order by id' may cause a SQL statement error, though
I was unable to reproduce it. Since it's not needed and ordering by 'sort_order'
is enough, we can safely remove it.
/cc @swill @anshulgangwar @DaanHoogland and others
* pr/1551:
migrate-dynamicroles: use mysql.connector due to #1054
packaging: don't bundle systemvm.zip in rpms
packaging: backup commands.properties as it does not exist in new rpms
dynamic-roles: remove unnecessary order by ID
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9377: Fix metrics pagesize issue
Fixes the listing of clusters and hosts to list all clusters/hosts by passing
pagesize=-1.
/cc @koushik-das @swill @DaanHoogland -- fixes the issue raised on dev@ ML by Rashmi
* pr/1540:
CLOUDSTACK-9377: Fix metrics pagesize issue
Signed-off-by: Will Stevens <williamstevens@gmail.com>
SystemVM cleanups
From the logrotate docs:
> size - With this, the log file is rotated when the specified size is reached. Size may be specified in bytes (default), kilobytes (sizek), or megabytes (sizem).
> Note: If the size and time interval options are specified at the same time, only the size option takes effect: it causes log files to be rotated without regard for the last rotation time. If both the log size and the timestamp of a log file need to be considered by logrotate, the minsize option should be used. logrotate will then rotate the log file when it grows bigger than minsize, but not before the additionally specified time interval.
* pr/1414:
systemvm, logrotate: remove daily explicitly as it is ignored
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Introduced a new capacityType parameter in the updateCapacityState method, with the necessary changes to add a capacity_type clause to the SQL.
Also fixed incorrect SQL builder logic (an unused code path, so the bug never surfaced).
Added a Marvin test to check host and storage pool capacity when a host is disabled.
Added conditions to ensure the capacity_type clause is added only when the capacity_type list is non-empty.
Added checks in the Marvin test to ensure the capacity exists for a host before disabling it.
Added checks to avoid an index-out-of-range exception.
- Unit test to demonstrate denial of service attack
The NioConnection uses blocking handlers for various events such as connect,
accept, read, write. In case a client connects to NioServer (used by the
agent manager to service agents on port 8250) but fails to participate in the SSL
handshake or just sits idle, it would block the main IO/selector loop in
NioConnection. Such a client could be either malicious or aggressive.
This unit test demonstrates such a malicious client, which can perform a
denial-of-service attack on NioServer that blocks it from serving any other client.
- Use non-blocking SSL handshake
- Uses non-blocking socket configuration in NioClient and NioServer/NioConnection
- Scalable connectivity from agents and peer clustered management servers
- Replaces the blocking SSL handshake code with non-blocking code
- Protects against denial-of-service issues that can degrade mgmt server responsiveness
due to an aggressive/malicious client
- Uses separate executor services for handling SSL handshakes
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
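As an illustration of the non-blocking approach, here is a reduced sketch using the standard SSLEngine state machine (not the actual NioConnection code): the IO thread only reacts to handshake states and hands CPU-bound delegated tasks to a separate executor, so a stalled client can no longer block the selector loop.

```java
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult.HandshakeStatus;

// Sketch only: one non-blocking handshake step driven from a selector loop.
public class NonBlockingHandshakeSketch {
    static boolean handshakeStep(SocketChannel ch, SSLEngine engine,
                                 ByteBuffer netIn, ByteBuffer netOut,
                                 ByteBuffer appIn, ExecutorService sslExecutor) throws Exception {
        HandshakeStatus status = engine.getHandshakeStatus();
        switch (status) {
        case NEED_UNWRAP:
            if (ch.read(netIn) < 0) return false;    // peer closed the connection
            netIn.flip();
            engine.unwrap(netIn, appIn);             // non-blocking: may consume nothing yet
            netIn.compact();
            return true;
        case NEED_WRAP:
            netOut.clear();
            engine.wrap(ByteBuffer.allocate(0), netOut);
            netOut.flip();
            ch.write(netOut);                        // partial writes retried on the next OP_WRITE
            return true;
        case NEED_TASK:
            Runnable task;
            while ((task = engine.getDelegatedTask()) != null) {
                sslExecutor.submit(task);            // keep CPU-heavy work off the IO thread
            }
            return true;
        default:
            return true;                             // FINISHED / NOT_HANDSHAKING
        }
    }
}
```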
Seems to have a license issue, so reverting for now.
This reverts commit 9a20ab8bcbbd39aa012a0ec5a65e66bcc737ee0e, reversing
changes made to 7a0b37a29a8be14011427dcf61bf3ea86e47dbf4.
Due to PR #1054, this patch fixes the dynamic-roles migration script
to use the mysql-connector-python dependency.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Removes bundling of systemvm.zip in cloudstack-common rpms. This is not
done in Debian packaging either, so we remove it for rpms as well, as this
file is not used by any subsystem; systemvm.iso is used instead.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
In the case of rpms, the commands.properties file is bundled at
/usr/share/cloudstack-management/webapps/client/WEB-INF/classes/commands.properties
In the case of an rpm upgrade, new rpms won't ship with the commands.properties file. For
existing installations this copies the commands.properties file to
/etc/cloudstack/management
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9362: Skip VXLANs when rewriting the bridge name for migrations (4.8-2)
From the [JIRA issue](https://issues.apache.org/jira/browse/CLOUDSTACK-9362):
> bb8f7c652e
>
> The above commit introduces rewriting of bridge device names when migrating a virtual machine from one host to another. However, it also matches bridges called "brvx-1234" and rewrites them to (in my case) "brem1-1234" - this doesn't match the bridge name on the destination and causes the migration to fail with the error:
>
> error : virNetDevGetMTU:397 : Cannot get interface MTU on 'brem1-1234': No such device
>
> I have flagged this as major because it's not possible to migrate VMs using VXLANs for maintenance, which seems important (it's certainly important to me!).
This is a version of #1508 based against 4.8 (sorry!)
* pr/1513:
Skip VXLANs when rewriting the bridge name for migrations
Signed-off-by: Will Stevens <williamstevens@gmail.com>
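The gist of the fix can be sketched as follows (hypothetical helper; the real change lives in the KVM code that rewrites bridge names in the domain XML during migration):

```java
// Sketch only: leave VXLAN bridges ("brvx-...") alone when rewriting
// bridge names, since their names are identical on source and destination.
public class BridgeRenameSketch {
    static String migrateBridgeName(String sourceBridge, String destPrefix) {
        if (sourceBridge.startsWith("brvx-")) {
            return sourceBridge;                           // VXLAN: same name on both hosts
        }
        int dash = sourceBridge.indexOf('-');
        if (dash < 0) {
            return sourceBridge;                           // not a "brX-vnet" style name
        }
        return destPrefix + sourceBridge.substring(dash);  // e.g. "breth0-1234" -> "brem1-1234"
    }

    public static void main(String[] args) {
        System.out.println(migrateBridgeName("breth0-1234", "brem1")); // brem1-1234
        System.out.println(migrateBridgeName("brvx-1234", "brem1"));   // brvx-1234 (unchanged)
    }
}
```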
* 4.7:
Fix Sync of template.properties in Swift
Configure rVPC for router.redundant.vrrp.interval advert_int setting
Have rVPCs use the router.redundant.vrrp.interval setting
Resolve conflict as forceencap is already in master
Split the cidr lists so we won't hit the iptables-restore limits
Check the existence of 'forceencap' parameter before use
Do not load previous firewall rules as we replace everything anyway
Wait for dnsmasq to finish restart
Remove duplicate spaces, and thus duplicate rules.
Restore iptables at once using iptables-restore instead of calling iptables numerous times
Add iptables conversion script.
Fix Sync of template.properties in Swift
When using Swift as secondary storage, we create a template.properties file which stores the metadata about the template. Currently the data present in the file is incorrect, which leads to templates becoming unavailable after they are downloaded. This fix makes sure that template.properties has the correct "path" set so that templates are available.
I've also done a bit of cleanup and made the code a bit cleaner.
* pr/1331:
Fix Sync of template.properties in Swift
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Reimplement router.redundant.vrrp.interval setting
The global setting `router.redundant.vrrp.interval` is no longer used; the value is now hardcoded to 1.
This results in a failover from master to backup when the backup doesn't hear from the master within ~3.6 sec. This is a bit too tight, as we've seen failovers during live migrations. We could reproduce it in about half of the cases. Setting this to 2 (tested by hardcoding it in the systemvms) gives twice as much time, and we didn't see issues any more. Instead of updating the hardcoded value from 1 to 2, I reimplemented the global setting by sending it to the router on the cmd_line, as the non-VPC router already does.
Background:
Why is the maximum failover time in the example 3.6 seconds? It comes from the advertisement interval and the skew time. The default advertisement interval is 1 second (configurable in keepalived.conf). The skew time helps keep everyone from trying to transition at once. It is a number between 0 and 1, based on the formula (256 - priority) / 256.
As defined in the RFC, the backup must receive an advertisement from the master every (3 * advert_int) + skew_time seconds. If it doesn't hear anything from the master, it takes over. With a backup router priority of 100 (as in the example), the failover will happen at most 3.6 seconds after the master goes down.
Source: http://www.hollenback.net/KeepalivedForNetworkReliability
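Plugging the example's numbers into that formula as a worked check:

```latex
\mathrm{skew\_time} = \frac{256 - 100}{256} \approx 0.61\,\mathrm{s}, \qquad
(3 \times \mathrm{advert\_int}) + \mathrm{skew\_time} = 3 \times 1 + 0.61 \approx 3.6\,\mathrm{s}
```

With the setting raised to 2, the bound becomes 3 × 2 + 0.61 ≈ 6.6 s, which is the extra headroom described above.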
* pr/1486:
Configure rVPC for router.redundant.vrrp.interval advert_int setting
Have rVPCs use the router.redundant.vrrp.interval setting
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Restore iptables at once using iptables-restore instead of calling iptables numerous times
This makes handling the firewall rules about 50-60 times faster, because the ruleset is generated in memory and then loaded once. It's work by @borisroman, see PR #1400. Reopened it here because I think this is a great improvement.
* pr/1482:
Resolve conflict as forceencap is already in master
Split the cidr lists so we won't hit the iptables-restore limits
Check the existence of 'forceencap' parameter before use
Do not load previous firewall rules as we replace everything anyway
Wait for dnsmasq to finish restart
Remove duplicate spaces, and thus duplicate rules.
Restore iptables at once using iptables-restore instead of calling iptables numerous times
Add iptables conversion script.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
On some MySQL server environments, the 'order by id' may cause a SQL statement error, though
I was unable to reproduce it. Since it's not needed and ordering by 'sort_order'
is enough, we can safely remove it.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This reverts commit 7ce0e10fbcd949375e43535aae168421ecdaa562, reversing
changes made to 29ba71f2db3a3b7dcdad3a7c1f83725cffd56261.
This was reverted because it seemed to be related to an issue
when doing a DeployDC, causing an `addHost` error.
This reverts commit 9f970f28b18534dffe33196ead60ea861f501fa9, reversing
changes made to 6d0c92be7257afcf5f1a9f4453e3470e0083125d.
This was reverted because it seemed to be related to an issue
when doing a DeployDC, causing an `addHost` error.
This reverts commit f88cb880974fa56866492c437af291e40bd1a4f6, reversing
changes made to 688522ecd41d5e2bf09467aa236deb50f7826982.
This was reverted because it seemed to be related to an issue
when doing a DeployDC, causing an `addHost` error.
This reverts commit 540d9572fd491db3ce182d26636fc74ada4e171c.
This was reverted because it seemed to be related to an issue
when doing a DeployDC, causing an `addHost` error.
Add perl-modules as install dependency for cloudstack-agent
Required to run the Perl scripts that configure networking for VMs.
* pr/1495:
Add perl-modules as install dependency for cloudstack-agent
Signed-off-by: Will Stevens <williamstevens@gmail.com>
DAO: Hit the cache for entities flagged as removed too
I came across this part of the code and I don't see any reason why the cache should not be used when also fetching the "removed" entities. It will help decrease the number of DB queries.
*It can be merged in many CS versions*
* pr/1532:
DAO: Rewrite change for method findByIdIncludingRemoved(ID id)
dao: Hit the cache for entities flagged as removed too, since they are put in the cache afterwards.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
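The change reduces to a cache-aside lookup; a minimal sketch with stand-in types (the real method is findByIdIncludingRemoved(ID id), as noted above):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch only, not the actual DAO code: consult the same cache even when
// fetching entities flagged as removed, since fetched entities are put
// into the cache afterwards anyway.
public class DaoCacheSketch<ID, T> {
    private final ConcurrentMap<ID, T> cache = new ConcurrentHashMap<>();

    public T findByIdIncludingRemoved(ID id) {
        T entity = cache.get(id);          // previously this lookup was skipped
        if (entity != null) {
            return entity;                 // cache hit: no DB query needed
        }
        entity = loadFromDb(id);           // hypothetical DB fetch, removed rows included
        if (entity != null) {
            cache.put(id, entity);         // entities were already cached here before the change
        }
        return entity;
    }

    protected T loadFromDb(ID id) { return null; } // stub for the sketch
}
```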
CPU socket count reporting correction
From https://github.com/MissionCriticalCloud/cosmic-plugin-hypervisor-kvm/pull/16
* pr/1520:
Remove empty spaces causing the build to fail
CPU socket count reporting correction
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Fix Nio/CPU issue and CI failures
- Reverts ea2286, which introduced a wakeup on each connection loop run.
- In the SSL handshake code, removes running delegated tasks in separate threads.
/cc @kiwiflyer @swill @jburwell and others for review
@kiwiflyer please help me test this fix and share if it makes the NioConnection robust now, without having the selector consume a lot of CPU. Thanks.
* pr/1543:
test: fix cleanup sequence for test_acl_listvolume test
CLOUDSTACK-9299: Fix test failures on CI
CLOUDSTACK-9348: Make NioConnection loop less aggressive
Signed-off-by: Will Stevens <williamstevens@gmail.com>
It defaults to 1, which is hardcoded in the template:
./cosmic/cosmic-core/systemvm/patches/debian/config/opt/cloud/templates/keepalived.conf.templ
As non-VPC redundant routers use this setting, I think it makes sense to use it for rVPCs as well.
We also need a change to pick up the cmd_line parameter and use it in the Python code that configures the router.
- Fixes the oobm integration test to skip if a known ipmitool bug is hit
- Fixes the ProcessTest unit test case to use sleep
- Removes a redundant unit test that covers code in ProcessTest
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Reverts ea2286, which introduced a wakeup on each connection loop run.
- In the SSL handshake code, removes running delegated tasks in separate threads.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
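As an illustration of what the revert restores (a reduced sketch, not the actual NioConnection code): the selector blocks in select() and is woken only when new work is queued, rather than on every loop run, which had kept the selector spinning and burning CPU.

```java
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch only: wakeup() is invoked when a registration is enqueued,
// not unconditionally on each loop iteration.
public class SelectorLoopSketch {
    private final Selector selector;
    private final Queue<SocketChannel> pendingRegistrations = new ConcurrentLinkedQueue<>();

    SelectorLoopSketch(Selector selector) { this.selector = selector; }

    void register(SocketChannel ch) {
        pendingRegistrations.add(ch);
        selector.wakeup();                 // wake only when there is real work
    }

    void loop() throws Exception {
        while (!Thread.currentThread().isInterrupted()) {
            selector.select(50);           // block up to 50 ms; no unconditional wakeup
            SocketChannel ch;
            while ((ch = pendingRegistrations.poll()) != null) {
                ch.configureBlocking(false);
                ch.register(selector, SelectionKey.OP_READ);
            }
            // ... dispatch selected keys here ...
            selector.selectedKeys().clear();
        }
    }
}
```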
Honour GS use_ext_dns and redundant VR VIP
This patch addresses two issues:
On redundant VR setups, the primary resolver handed out to instances is the guest_ip (the primary IP of the VR). This might lead to problems upon failover, at least until the DHCP lease updates (because the primary resolver will be tried first until it times out, but it will be gone after failover).
If Global Setting use_ext_dns is true, we don't want the VR to be the primary resolver at all.
* pr/1536:
This patch addresses two issues:
Signed-off-by: Will Stevens <williamstevens@gmail.com>
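A minimal sketch of the intended resolver ordering, with hypothetical names (the actual change is in the VR configuration code, and advertising the VIP for redundant setups is an inference from the title above):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: honour use_ext_dns, and for redundant VRs prefer the VIP
// (which survives failover) over the primary VR's guest IP (which does not).
public class ResolverOrderSketch {
    static List<String> buildResolvers(boolean useExternalDns, boolean redundantRouter,
                                       String routerVip, String guestIp,
                                       List<String> externalDns) {
        List<String> resolvers = new ArrayList<>();
        if (!useExternalDns) {
            resolvers.add(redundantRouter ? routerVip : guestIp);
        }
        resolvers.addAll(externalDns);     // external resolvers always included
        return resolvers;
    }

    public static void main(String[] args) {
        System.out.println(buildResolvers(false, true, "10.1.1.1", "10.1.1.2",
                List.of("8.8.8.8")));      // [10.1.1.1, 8.8.8.8]
        System.out.println(buildResolvers(true, true, "10.1.1.1", "10.1.1.2",
                List.of("8.8.8.8")));      // [8.8.8.8] only: VR is not a resolver
    }
}
```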