2172 Commits

Author SHA1 Message Date
Murali Reddy
6749785cab CLOUDSTACK-9339 Virtual Routers don't handle Multiple Public Interfaces correctly
-when processing a static NAT rule, add a mangle table rule to mark the traffic
   from the guest VM that has the static NAT rule associated, so that the traffic gets
   routed using the routing table of the device which has the public IP associated

  -fix the case where nic_device_id is empty when an IP is getting disassociated,
   resulting in an empty deviceid in ips.json

  -add utility methods in CsRule and CsRoute to add 'ip rule' and 'ip route' rules respectively

  -ensure traffic from all public interfaces is connection-marked with the device number, and restored
   for the reverse traffic; use the connection-mark number to do a device-specific routing table lookup;
   fill the device-specific routing table with a default route (sketched after this entry)

  -component tests for testing multiple public interfaces of the VR
2016-12-07 14:33:24 +05:30
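
A rough Python sketch of the marking-and-routing scheme described in the commit above, in the spirit of the VR's CsRule/CsRoute helpers — the command strings, table naming, and the execute() wrapper are illustrative assumptions, not the actual implementation:

```
import subprocess

def execute(cmd):
    # stand-in for the VR scripts' shell-execution helper (assumption)
    subprocess.check_call(cmd, shell=True)

def setup_public_device_routing(device, device_id, gateway):
    table = "Table_%s" % device  # per-device routing table name (assumption)
    # route traffic of connections marked with this device number
    # through the device-specific routing table
    execute("ip rule add fwmark %s table %s" % (device_id, table))
    # fill the device-specific routing table with a default route
    execute("ip route add default via %s dev %s table %s" % (gateway, device, table))
    # connection-mark new traffic arriving on this public interface
    # with the device number ...
    execute("iptables -t mangle -A PREROUTING -i %s -m state --state NEW"
            " -j CONNMARK --set-mark %s" % (device, device_id))
    # ... and restore the mark on the reverse traffic so the 'ip rule' applies
    execute("iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark")
```
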
Mike Tutkowski
06806a6523 CLOUDSTACK-9619: Updates for SAN-assisted snapshots 2016-12-06 17:32:56 -07:00
Rohit Yadav
f82873ebf8
Merge branch '4.9' 2016-12-05 15:32:11 +05:30
Rohit Yadav
48b28f7d6e
Merge branch '4.8' into 4.9 2016-12-05 15:32:03 +05:30
Rohit Yadav
20aea27dc0
Merge pull request #1765 from shapeblue/CLOUDSTACK-9586
Cloudstack 9586: When using local storage with XenServer, prepareTemplate does not work with multiple primary stores

The race condition happens whenever there are multiple primary storages and CloudStack tries to mount the secondary store to the XenServer host simultaneously.

Due to the synchronised block, one mount will succeed and the other thread will get the already-mounted SR. Without the fix, the two threads would try to mount it in parallel and one would fail on XenServer.

* pr/1765:
  Cloudstack 9586: When using local storage with XenServer, prepareTemplate does not work with multiple primary stores

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-05 15:31:25 +05:30
Rohit Yadav
45a746ad2e
Merge branch '4.9' 2016-12-05 15:16:22 +05:30
Rohit Yadav
8d14e8e8b5 Merge pull request #1729 from shapeblue/vmware-memleak-fix
CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest contributor
to the memory leak was VmwareContextPool, where a registry (an ArrayList) is held
that grows indefinitely. The list itself is never used or consumed anywhere. There
exists a HashMap (pool) that returns the list of contexts for an existing poolkey
(address/username), which is used instead.

This fixes the issue by removing the ArrayList registry and limiting the
length of the context list for a given poolkey (a sketch follows this entry).

@blueorangutan package

* pr/1729:
  CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-05 15:15:58 +05:30
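
The resulting data-structure shape can be pictured with a small sketch (Python here for brevity; the real class is Java, and the names and per-key limit below are assumptions): contexts live only in a per-poolkey map whose lists are capped, with no separate grow-forever registry.

```
from collections import defaultdict, deque

class ContextPool(object):
    def __init__(self, max_per_key=10):
        self.max_per_key = max_per_key
        self.pool = defaultdict(deque)  # poolkey -> idle contexts

    @staticmethod
    def poolkey(address, username):
        return "%s@%s" % (username, address)

    def release(self, address, username, context):
        contexts = self.pool[self.poolkey(address, username)]
        contexts.append(context)
        # cap the per-key list so idle contexts cannot accumulate indefinitely
        while len(contexts) > self.max_per_key:
            contexts.popleft()

    def acquire(self, address, username):
        contexts = self.pool[self.poolkey(address, username)]
        return contexts.pop() if contexts else None
```
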
Abhinandan Prateek
ba32ac1a7c Cloudstack 9586: When using local storage with XenServer, prepareTemplate does not work with multiple primary stores
The race condition happens whenever there are multiple primary storages and CloudStack tries to mount the secondary store to the XenServer host simultaneously.
Due to the synchronised block, one mount will succeed and the other thread will get the already-mounted SR. Without the fix, the two threads would try to mount it in parallel and one would fail on XenServer.
2016-12-02 13:37:47 +05:30
Rohit Yadav
e091e82a7f
Merge branch '4.9' 2016-12-02 10:42:31 +05:30
Rohit Yadav
90a3d97c5e CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool
In a recent management server crash, it was found that the largest contributor
to the memory leak was VmwareContextPool, where a registry (an ArrayList) is held
that grows indefinitely. The list itself is never used or consumed anywhere. There
exists a HashMap (pool) that returns the list of contexts for an existing poolkey
(address/username), which is used instead.

This fixes the issue by removing the ArrayList registry and limiting the
length of the context list for a given poolkey.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-01 15:27:29 +05:30
Suresh Kumar Anaparti
309da6a57f CLOUDSTACK-8854: Apple Mac OS X VMs get created without a USB controller on ESXi hypervisors 2016-12-01 14:32:18 +05:30
Rohit Yadav
d8c038e5b2
Merge branch '4.9' 2016-11-25 13:10:56 +05:30
Rohit Yadav
50f80cc2a0
Merge branch '4.8' into 4.9 2016-11-25 13:03:04 +05:30
Rohit Yadav
020606ec31 Merge pull request #1738 from SudharmaJain/cs-9566
CLOUDSTACK-9566 instance-id metadata for baremetal VM returns ID

There is a difference in the instance-id metadata between baremetal and other hypervisors.

On Baremetal:
[root@ip-172-17-0-144 ~]# curl http://8.37.203.221/latest/meta-data/instance-id
6021

On Xen:
[root@ip-172-17-2-103 ~]# curl http://172.17.0.252/latest/meta-data/instance-id
cbeb517a-e833-4a0c-b1e8-9ed70200fbbf

In both cases it should be the VM's UUID.

* pr/1738:
  CLOUDSTACK-9566 instance-id metadata for baremetal VM returns ID

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-11-25 12:52:10 +05:30
Wido den Hollander
2a5f37c1b1
CLOUDSTACK-8715: Add channel to Instances for Qemu Guest Agent
This commit adds an additional VirtIO channel with the name
'org.qemu.guest_agent.0' to all Instances.

With the Qemu Guest Agent the Hypervisor gains more control over the Instance if
these tools are present inside the Instance, for example:

* Power control
* Flushing filesystems
* Fetching Network information

In the future this should allow safer snapshots on KVM since we can instruct the
Instance to flush the filesystems prior to snapshotting the disk.

More information: http://wiki.qemu.org/Features/QAPI/GuestAgent

Keep in mind that on Ubuntu, AppArmor still needs to be disabled, since the default
AppArmor profile doesn't allow libvirt to write into /var/lib/libvirt/qemu.

This commit does not add any communication methods through API calls; it merely
adds the channel to the Instances and installs the Guest Agent in the SSVMs.

With the addition of the Qemu Guest Agent channel, a second channel appears in /dev
on an SSVM as a VirtIO port.

The order in which the ports are defined in the XML matters for the naming inside
the SSVM, so by not relying on /dev/vportXX but looking for a static name, the
SSVM still boots properly if the order in the XML definition is changed
(a lookup sketch follows this entry).

An SSVM with both ports attached will have something like this:

  root@v-215-VM:~# ls -l /dev/virtio-ports
  total 0
  lrwxrwxrwx 1 root root 11 May 13 21:41 org.qemu.guest_agent.0 -> ../vport0p2
  lrwxrwxrwx 1 root root 11 May 13 21:41 v-215-VM.vport -> ../vport0p1
  root@v-215-VM:~# ls -l /dev/vport*
  crw------- 1 root root 251, 1 May 13 21:41 /dev/vport0p1
  crw------- 1 root root 251, 2 May 13 21:41 /dev/vport0p2
  root@v-215-VM:~#

In this case the SSVM port points to /dev/vport0p1, but if the order in the XML
is different it might point to /dev/vport0p2.

By looking for a portname with a pre-defined pattern in /dev/virtio-ports we
do not rely on the order in the XML definition.

Signed-off-by: Wido den Hollander <wido@widodh.nl>
2016-11-23 16:01:08 +01:00
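
The name-based lookup the commit relies on can be sketched in a few lines of Python (the function and its usage are illustrative, not code from the SSVM):

```
import os

def find_virtio_port(name):
    # resolve a VirtIO port by its stable name instead of relying on the
    # XML-order-dependent /dev/vportXpY device nodes
    path = os.path.join("/dev/virtio-ports", name)
    return os.path.realpath(path) if os.path.exists(path) else None

# e.g. find_virtio_port("org.qemu.guest_agent.0") might return "/dev/vport0p2"
```
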
Rohit Yadav
feaeed7b16
Merge pull request #1542 from nvazquez/nestedv
CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor

## Introduction

[JIRA TICKET](https://issues.apache.org/jira/browse/CLOUDSTACK-9379)

It is desired to support nested virtualization at the VM level for the VMware hypervisor. The current behaviour supports enabling/disabling nested virtualization globally by modifying the global config `'vmware.nested.virtualization'`. The goal is to improve this feature by providing control at the VM level instead of only a global control.

A new global configuration is added to enable/disable VM-level nested virtualization control: `'vmware.nested.virtualization.perVM'`. Default value = false.

After a VM deployment or start command, the VM params include the `'nestedVirtualizationFlag'` key, whose value is:
- true -> nested virtualization enabled
- false -> nested virtualization disabled

**Whether nested virtualization is enabled or disabled is determined by examining these 3 values:**
- **(1)** global configuration `'vmware.nested.virtualization'` value
- **(2)** global configuration `'vmware.nested.virtualization.perVM'` value
- **(3)** `'nestedVirtualizationFlag'` value in `user_vm_details` if present, `null` if not.

Using these 3 values, there are different use cases (a sketch of the resulting decision logic follows this entry):
- **(1)** = TRUE, **(2)** = TRUE, **(3)** is null -> _ENABLED_
- **(1)** = TRUE, **(2)** = TRUE, **(3)** = TRUE -> _ENABLED_
- **(1)** = TRUE, **(2)** = TRUE, **(3)** = FALSE -> _DISABLED_
- **(1)** = TRUE, **(2)** = FALSE, **(3)** indifferent -> _ENABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** is null -> _DISABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** = TRUE -> _ENABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** = FALSE -> _DISABLED_
- **(1)** = FALSE, **(2)** = FALSE, **(3)** indifferent -> _DISABLED_

* pr/1542:
  CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-11-23 15:00:36 +05:30
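
The eight cases above collapse to a simple rule: when per-VM control is on and a VM-level value exists, the VM-level value wins; otherwise the global flag decides. A minimal Python sketch (the function name is illustrative):

```
def nested_virtualization_enabled(global_nv, per_vm_control, vm_flag):
    # (1) global 'vmware.nested.virtualization'
    # (2) global 'vmware.nested.virtualization.perVM'
    # (3) 'nestedVirtualizationFlag' from user_vm_details, or None if absent
    if per_vm_control and vm_flag is not None:
        return vm_flag      # per-VM value takes precedence
    return global_nv        # otherwise the global flag decides

# spot-check against the table above
assert nested_virtualization_enabled(True, True, None) is True
assert nested_virtualization_enabled(True, True, False) is False
assert nested_virtualization_enabled(False, True, True) is True
assert nested_virtualization_enabled(False, False, True) is False
```
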
Rohit Yadav
35803805c7
Merge branch '4.9' 2016-11-23 14:28:31 +05:30
Rohit Yadav
1f2184800b Merge pull request #1681 from murali-reddy/router_eth_device_index
CLOUDSTACK-9491: incorrect parsing of device list to find ethernet index of plugged NIC

In VmwareResource, the findRouterEthDeviceIndex() method finds the ethernet interface index for a
given MAC address. It is used once a NIC is plugged, to determine the ethernet interface. It reads
"/proc/sys/net/ipv4/conf" from the VR and loops through the devices to find the right
ethernet interface. However, the current logic reads it only once before looping through the device list,
and it is observed that a device may not show up in '/proc/sys/net/ipv4/conf' immediately once the NIC
is plugged into the VM from vCenter.

The fix ensures that, while waiting for 15 sec in the loop, the latest content of
/proc/sys/net/ipv4/conf is re-read, so that the right device list is processed (see the sketch after this entry).

Manually tested VPC scenarios of adding new tiers, which use findRouterEthDeviceIndex to find the guest/public network ethernet index.

* pr/1681:
  CLOUDSTACK-9491: incorrect parsing of device list to find ethernet index of plugged NIC

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-11-23 14:28:12 +05:30
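
The essence of the fix — re-reading the device list on every pass of the wait loop — looks roughly like this. This is a Python sketch; the real logic lives in Java in VmwareResource and runs commands inside the VR, and the eth-name-to-index parsing is an assumption:

```
import os
import time

def find_router_eth_device_index(mac_address, timeout=15):
    deadline = time.time() + timeout
    while time.time() < deadline:
        # read the latest device list on every iteration; a freshly plugged
        # NIC may not appear in /proc/sys/net/ipv4/conf immediately
        for dev in os.listdir("/proc/sys/net/ipv4/conf"):
            try:
                with open("/sys/class/net/%s/address" % dev) as f:
                    if f.read().strip().lower() == mac_address.lower():
                        return int(dev.lstrip("eth"))  # e.g. "eth2" -> 2
            except (IOError, ValueError):
                continue
        time.sleep(1)
    return -1
```
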
Rohit Yadav
0642a6982f
Merge branch '4.9' 2016-11-23 14:22:15 +05:30
Rohit Yadav
55b918076f
Merge branch '4.8' into 4.9 2016-11-23 13:50:15 +05:30
Rohit Yadav
ff616e700b Merge pull request #1745 from shapeblue/CLOUDSTACK-9503
CLOUDSTACK-9503: Increased the VR script timeout. Most of the changes are about converting int/long time values to joda Duration.

* pr/1745:
  CLOUDSTACK-9503: Increased the VR script timeout. Most of the changes are about converting int/long time values to joda Duration.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-11-23 13:41:52 +05:30
Koushik Das
027409d9bc Merge release branch 4.9 to master
* 4.9:
  CLOUDSTACK-9410: Data Disk shown as detached in XS
2016-11-21 11:25:38 +05:30
Koushik Das
bdc806e315 Merge release branch 4.8 to 4.9
* 4.8:
  CLOUDSTACK-9410: Data Disk shown as detached in XS
2016-11-21 11:12:09 +05:30
nvazquez
cebee7cbda CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor 2016-11-17 17:51:50 -03:00
John Burwell
5cc9070b0b Cleanup merge mistakes 2016-11-17 00:09:56 -05:00
John Burwell
c66cf1c60d Merge release branch 4.9 to master
* 4.9:
  CLOUDSTACK-9502: DS template copies don’t get deleted in VMware ESXi with multiple clusters and zone wide storage
2016-11-16 23:23:09 -05:00
John Burwell
20b43767d7 Merge pull request #1676 from nvazquez/dstemplates49
CLOUDSTACK-9502: DS template copies don't get deleted in VMware ESXi with multiple clusters and zone wide storage (include CLOUDSTACK-9386 into 4.9 release branch)

Include #1560 into the 4.9 release branch.

* pr/1676:
  CLOUDSTACK-9502: DS template copies don’t get deleted in VMware ESXi with multiple clusters and zone wide storage

Signed-off-by: John Burwell <meaux@cockamamy.net>
2016-11-16 22:15:50 -05:00
John Burwell
becec33c2e Merge release branch 4.9 to master
* 4.9:
  CLOUDSTACK-8830: Fix for VM snapshots in VMware; a VM snapshot could not be created until 12 minutes after VM creation because vCenter sent a null name on the snapshot recent task
2016-11-16 09:45:46 -05:00
Rohit Yadav
b59db0dc06 Merge pull request #1705 from nemo9cby/CLOUDSTACK-9465
Made the changes to improve logging.

CLOUDSTACK-9465: Several log refactoring/improvement suggestions.

There are two scenarios of logging which need refactoring/improvement (both are sketched after this entry):

Method invocation replaced by variable

In the logging code, the result of the method invocation is already stored in a variable; for simplicity, the method invocation should be replaced by the variable.

Delete variable which must be null

The variable in the logging code must be null at that point, so there is no need to put the variable in the message.

* pr/1705:
  Made the changes to improve logging.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-11-03 16:48:21 +05:30
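
Both scenarios are easier to see in a toy example (a Python sketch of the pattern; the actual changes are in the Java codebase, and the names here are made up):

```
import logging

log = logging.getLogger(__name__)

def attach(volume):
    vol_name = volume.name
    # scenario 1: the value is already held in a variable, so log the
    # variable instead of repeating the method/attribute access
    log.debug("Attaching volume %s", vol_name)

    error = None
    # scenario 2: 'error' is provably None at this point, so including it
    # in the message adds nothing -- it is dropped from the log call
    log.debug("Attach requested for %s", vol_name)
```
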
Abhinandan Prateek
83b5a8b2b2 CLOUDSTACK-9503: Increased the VR script timeout. Most of the changes are about converting int/long time values to joda Duration. 2016-11-01 16:14:23 +05:30
Syed
f46651e672 Support Backup of Snapshots for Managed Storage
This PR adds the ability to pass a new parameter, locationType,
    to the “createSnapshot” API command. Depending on the locationType,
    we decide where the snapshot should go in the case of managed storage.

    There are two possible values for the locationType param:

    1) `Standard`: The standard operation for managed storage is to
    keep the snapshot on the device. For non-managed storage, this will
    be to upload it to secondary storage. This option will be the
    default.

    2) `Archive`: Applicable only to managed storage. This will
    keep the snapshot on the secondary storage. For non-managed
    storage, this will result in an error.

    The reason for implementing this feature is to avoid a single
    point of failure for primary storage. Right now, in the case of managed
    storage, if the primary storage goes down there is no easy way
    to recover data, as all snapshots are also stored on the primary.
    This feature allows us to mitigate that risk (an illustrative
    request is sketched after this entry).
2016-10-30 23:19:58 -06:00
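
As an illustration, an unsigned createSnapshot request with the new parameter might be built like this — the endpoint, the volume id placeholder, and the omission of API-key signing are all assumptions for brevity:

```
from urllib.parse import urlencode

def create_snapshot_url(api_url, volume_id, location_type="Standard"):
    # location_type: 'Standard' keeps a managed-storage snapshot on the
    # device (the default); 'Archive' sends it to secondary storage
    params = {
        "command": "createSnapshot",
        "volumeid": volume_id,
        "locationtype": location_type,
        "response": "json",
    }
    return "%s?%s" % (api_url, urlencode(params))

# e.g. create_snapshot_url("http://mgmt:8080/client/api", "<vol-uuid>", "Archive")
```
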
Murali Reddy
b449351a9f CLOUDSTACK-9491: incorrect parsing of device list to find ethernet index of plugged NIC
In VmwareResource, the findRouterEthDeviceIndex() method finds the ethernet interface index for a
  given MAC address. It is used once a NIC is plugged, to determine the ethernet interface. It reads
  "/proc/sys/net/ipv4/conf" from the VR and loops through the devices to find the right
  ethernet interface. However, the current logic reads it only once before looping through the device list,
  and it is observed that a device may not show up in '/proc/sys/net/ipv4/conf' immediately once the NIC
  is plugged into the VM from vCenter. The fix ensures that, while waiting for 15 sec in the loop, the latest
  content of /proc/sys/net/ipv4/conf is re-read, so that the right device list is processed.
2016-10-28 17:50:36 +05:30
Sudharma Jain
759a92d251 CLOUDSTACK-9566 instance-id metadata for baremetal VM returns ID 2016-10-27 13:50:39 +05:30
subhash yedugundla
38c56bdf44 CLOUDSTACK-9410: Data Disk shown as detached in XS 2016-10-25 14:57:33 +05:30
nvazquez
94222b1356 CLOUDSTACK-8830: Fix for VM snapshots in VMware; a VM snapshot could not be created until 12 minutes after VM creation because vCenter sent a null name on the snapshot recent task 2016-10-24 13:26:45 -03:00
Nemo
cd9c7737d1 Made the changes to improve logging. 2016-10-11 12:58:02 -04:00
Wido den Hollander
0beb41b6e7 CLOUDSTACK-9395: Add Virtio RNG device to Instances when configured
By adding a Random Number Generator device to Instances we can prevent
entropy starvation inside the guest.

The default source is /dev/random on the host, but this can be configured
to another source when present, for example a hardware RNG.

When enabled it will add the following to the Instance's XML definition:

  <rng model='virtio'>
    <rate period='1000' bytes='2048' />
    <backend model='random'>/dev/random</backend>
  </rng>

If the Instance has the proper support, which most modern distributions have,
it will have a /dev/hwrng device which it can use for gathering entropy.

More information: https://libvirt.org/formatdomain.html#elementsRng
2016-10-04 12:44:55 +02:00
Rohit Yadav
3a82636b90
Merge pull request #1602 from nvazquez/clonegranular
CLOUDSTACK-9422: Granular 'vmware.create.full.clone' as Primary Storage setting

### Introduction

For VMware, it is possible to decide whether VMs are created as full clones on the ESX hypervisor by adjusting the `vmware.create.full.clone` global setting. We would like to introduce this property as a primary storage detail and use its value instead of the global setting's value.

We propose introducing `fullCloneFlag` on the `PrimaryDataStoreTO` sent on `CopyCommand`. This way we can reconfigure `VmwareStorageProcessor` and `VmwareStorageSubsystemCommandHandler` similarly to what was done for `nfsVersion`, but refactoring it to be more general.

* pr/1602:
  CLOUDSTACK-9422: Granular VMware vms creation as full clones on HV

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-09-21 16:41:35 +05:30
nvazquez
4104cea300 CLOUDSTACK-9502: DS template copies don’t get deleted in VMware ESXi with multiple clusters and zone wide storage 2016-09-19 07:49:31 -07:00
Rajani Karuturi
3f6faeb4c3 Merge pull request #1645 from myENA/feature/convert_rbd_to_qcow
On snapshot backup, this converts the rbd raw format on disk to qcow2 for compression.

* pr/1645:
  CLOUDSTACK-9461

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2016-09-16 15:08:12 +05:30
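
The conversion described above boils down to a qemu-img invocation; a hedged Python sketch (the paths and the choice of the -c compression flag are illustrative):

```
import subprocess

def backup_snapshot_as_qcow2(src, dest):
    # convert the raw (RBD) snapshot image to a compressed qcow2 backup
    subprocess.check_call([
        "qemu-img", "convert",
        "-O", "qcow2",   # output format
        "-c",            # compress the output
        src,             # e.g. "rbd:pool/volume@snapshot"
        dest,            # e.g. "/mnt/secondary/snapshots/<id>.qcow2"
    ])
```
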
nvazquez
bb275a5ad1 CLOUDSTACK-9422: Granular VMware vms creation as full clones on HV 2016-09-13 09:59:04 -07:00
Rajani Karuturi
f21477a178 Merge pull request #1671 from mike-tutkowski/copy-vol-migration
Adding support for cross-cluster storage migration for managed storage when using XenServer

This PR adds support for cross-cluster storage migration of VMs that make use of managed storage with XenServer.

Managed storage is when you have a 1:1 mapping between a virtual disk and a volume on a SAN (in the case of XenServer, an SR is placed on this SAN volume and a single virtual disk placed in the SR).

Managed storage allows features such as storage QoS and SAN-side snapshots to work (sort of analogous to VMware VVols).

This PR focuses on enabling VMs that are using managed storage to be migrated across XenServer clusters.

I have successfully run the following tests on this branch:

TestVolumes.py
TestSnapshots.py
TestVMSnapshots.py
TestAddRemoveHosts.py
TestVMMigrationWithStorage.py (which is a new test that is being added with this PR)

* pr/1671:
  Adding support for cross-cluster storage migration for managed storage when using XenServer

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2016-09-13 17:40:12 +05:30
Mike Tutkowski
b508fb8692 Adding support for cross-cluster storage migration for managed storage when using XenServer 2016-09-12 07:39:13 -06:00
nvazquez
1384d748a7 CLOUDSTACK-9386: Find vm on datacenter instead of randomly choosing a cluster 2016-09-11 20:56:09 -03:00
Rafael Weingärtner
744cb2c502 Merge pull request #1605 from nvazquez/fixVram
CLOUDSTACK-9428: Fix for CLOUDSTACK-9211 - Improve performance of 3D GPU support in cloud-plugin-hypervisor-vmware

JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9428

### Introduction

In #1310, the problem of passing vRAM size to support 3D GPU was addressed on VMware. It was found that this could be improved to increase performance by reducing extra API calls, as described below.

### Improvement
On VMware, `VmwareResource` manages the execution of `StartCommand`. Before sending the power-on command to the ESXi hypervisor, the VM is configured by calling the `reconfigVMTask` web method on vSphere's `VimPortType` web service client.
It was found that this method was being called twice when passing vRAM size, as it implied creating a new VM config spec editing only the video card specs and making an extra call to `reconfigVMTask`.

We propose removing the extra web service call by adjusting the VM's config spec, so that the video card gets properly configured (when passing vRAM size) in the same configure call, increasing performance.

### Use case (passing vRAM size)
* Deploy a new VM; let its id be X
* Stop the VM
* Execute the following SQL, where X is the VM's id and Z is the vRAM size (in kB):
````
INSERT INTO cloud.user_vm_details (vm_id, name, value) VALUES (X, 'mks.enable3d', 'true');
INSERT INTO cloud.user_vm_details (vm_id, name, value) VALUES (X, 'mks.use3dRenderer', 'automatic');
INSERT INTO cloud.user_vm_details (vm_id, name, value) VALUES (X, 'svga.autodetect', 'false');
INSERT INTO cloud.user_vm_details (vm_id, name, value) VALUES (X, 'svga.vramSize', Z);
````
* Start VM

* pr/1605:
  CLOUDSTACK-9428: Add marvin test
  CLOUDSTACK-9428: Fix for CLOUDSTACK-9211 - Improve performance

Signed-off-by: Rafael Weingärtner <rafael@apache.org>
2016-09-11 08:16:11 -03:00
John Burwell
8d11511b1f Adds support for four position versions and optional db upgrades
Often, patch and security releases do not require schema migrations or
data migrations.  However, if an empty upgrade class and associated
scripts are not defined, the upgrade process will break.  With this
change, if a release does not have an upgrade, a noop DbUpgrade is added
to the upgrade path.  This approach allows the upgrade to proceed and
for the database to properly reflect the installed version.  This change
should make the release process simpler as RMs no longer need to
remember to create this boilerplate code when starting a new release.

Beginning with the 4.8.2.0 and 4.9.1.0 releases, the project will
formally adopt a four (4) position release number to properly accommodate
releases that contain only CVE fixes.  The DatabaseUpgradeChecker and
Version classes made assumptions that they would always parse and
compare three (3) position version numbers.  This change adds the
CloudStackVersion value object that supports both three (3) and four (4)
position version numbers.  It encapsulates version comparison logic, as well as
the rules that allow three (3) and four (4) position versions to interoperate.

  * Modifies DatabaseUpgradeChecker to derive an upgrade path for
  a version that was not explicitly specified.  It determines the
  first release before it that has database migrations and uses
  that release's upgrade list as the basis for the version being calculated.  A
  noop upgrade is then added to the list which causes no schema changes
  or data migrations, but will update the database to the version.
  * Adds unit tests for the upgrade path calculation logic in
  DatabaseUpgradeChecker
  * Removes dummy upgrade logic for the 4.8.2.0 introduced in previous
  versions of this patch
  * Introduces the CloudStackVersion value object which parses and
  compares three (3) and four (4) position version numbers.  This class
  is intended to replace com.cloud.maint.Version.
  * Adds the junit-dataprovider dependency -- allowing test data to be
  concisely generated separately from the execution of a test case.
  Used extensively in the CloudStackVersionTest.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-08-30 13:32:32 +05:30
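
The comparison rule can be sketched briefly (Python for brevity; CloudStackVersion itself is a Java value object, and the normalisation shown — treating a missing fourth position as zero — is one plausible reading of the interoperation rule, not the confirmed one):

```
def parse_version(version):
    parts = [int(p) for p in version.split(".")]
    if len(parts) not in (3, 4):
        raise ValueError("expected a 3- or 4-position version: %r" % version)
    # normalise: a three-position version is treated as x.y.z.0
    return tuple(parts + [0] * (4 - len(parts)))

assert parse_version("4.9.1") == parse_version("4.9.1.0")
assert parse_version("4.8.2.0") < parse_version("4.9.1.0")
```
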
Nathan Johnson
46df85c5bf CLOUDSTACK-9461
This converts the rbd raw format on disk to qcow2 for compression.
2016-08-26 09:52:24 -05:00
nvazquez
4297857e22 CLOUDSTACK-9428: Fix for CLOUDSTACK-9211 - Improve performance 2016-08-25 11:31:11 -03:00
Rohit Yadav
3a81a4498f Merge branch '4.9' 2016-08-24 12:15:24 +05:30
Rohit Yadav
fa3fe7bb05 Merge pull request #1634 from shapeblue/patchviasocket-49-py26fix
[blocker] CLOUDSTACK-9452: add python-argparse dependency on el6,7 rpms

The patchviasocket script was rewritten in Python in PR #1533 and made
the assumption that Python 2.7 would be available. On CentOS, Python 2.7
may not be available or installed. This change ensures that python-argparse,
which is used by this script, is installed (a minimal parser sketch follows this entry).

/cc @wido @sverrirab @karuturi @jburwell

@blueorangutan package

* pr/1634:
  CLOUDSTACK-9452: add python-argparse dependency on el6,7 rpms

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-08-24 12:14:02 +05:30
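
For context: argparse entered the standard library in Python 2.7; on el6's Python 2.6 it is a separate python-argparse package, which is what the RPM dependency provides. A minimal sketch of the kind of parser such a script needs (the flags shown are illustrative, not patchviasocket's actual options):

```
import argparse  # stdlib on Python >= 2.7; the python-argparse RPM on el6

parser = argparse.ArgumentParser(
    description="write patch data to a VM's VirtIO socket")
parser.add_argument("-n", "--name", required=True,
                    help="system VM name (illustrative flag)")
parser.add_argument("-p", "--payload", required=True,
                    help="payload to send (illustrative flag)")
args = parser.parse_args()
```
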