2152 Commits

Author SHA1 Message Date
Jayapal
e3ae08b3ee CLOUDSTACK-9709: Updated the VM IP fetch task to use the correct thread 2017-03-07 09:50:18 +05:30
Sateesh Chodapuneedi
d171bb7857 CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug configurable
Currently ACS waits for 15 seconds (hard-coded) for a hot-plugged NIC in the VR to be detected by the guest OS.
The time taken to detect a hot-plugged NIC in the guest OS depends on the type of NIC adapter (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable helps admins in such scenarios.

Also, if VMware introduces new NIC adapter types in the future that take longer to be detected by the guest OS, it is good to have the flexibility of
configuring the wait timeout as a fallback mechanism.
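For illustration, a minimal sketch of a configurable wait, assuming a hypothetical `nic.hotplug.wait.timeout.ms` setting and a stubbed detection check (neither is the actual ACS configuration key or detection code):

```
// Sketch only: poll for the hot-plugged NIC until a configurable timeout expires,
// instead of a hard-coded 15-second wait. All names here are illustrative.
public final class NicHotplugWait {

    /** Returns true if the NIC was detected by the guest OS before the timeout. */
    static boolean waitForNic(long timeoutMs, long pollIntervalMs,
                              java.util.function.BooleanSupplier nicDetected)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (nicDetected.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollIntervalMs);
        }
        return nicDetected.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        // With an admin-tunable timeout, slow adapter/guest OS combinations no longer
        // cause a spurious hotplug failure after exactly 15 seconds.
        long configuredTimeoutMs = Long.parseLong(
                System.getProperty("nic.hotplug.wait.timeout.ms", "15000"));
        boolean ok = waitForNic(configuredTimeoutMs, 1000, () -> false /* stub detection */);
        System.out.println("NIC detected: " + ok);
    }
}
```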

Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
2017-03-02 03:04:53 +05:30
Rajani Karuturi
fa85151be9 Merge release branch 4.9 to master
* 4.9:
  CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
  CLOUDSTACK-9788: Fix exception listNetworks with pagesize=0
  CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
  Fix HVM VM restart bug in XenServer
2017-02-28 05:47:06 +05:30
Rajani Karuturi
d9bd01266f Merge pull request #1829 from syed/hvm-volume-attach-restart-fix
CLOUDSTACK-9363: Fix HVM VM restart bug in XenServer

Here is the longer description of the problem:

By default XenServer limits HVM guests to only 4 disks. Two of those are reserved for the ROOT disk (deviceId=0) and the CD-ROM (deviceId=3), which means that we can only attach 2 data disks. This limit, however, is removed when Xentools is installed on the guest. The information that a guest has Xentools installed and can handle more than 4 disks is stored in the VM metadata on XenServer. When a VM is shut down, CloudStack removes the VM and all the metadata associated with it from XenServer. Now, when you start the VM again, even if it has Xentools installed, it will default to only 4 attachable disks.

This problem manifests itself when you have an HVM VM and you stop and start it with more than 2 data disks attached. The VM fails to start, and the only way to start it is to detach the extra disks and then reattach them after the VM starts.

In this fix, I am removing the check, done before creating a `VBD`, that enforces this limit. This will not affect the current workflow and will fix the HVM issue.
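As a rough sketch of the idea (not the actual CitrixResourceBase code), the device slot can simply be taken from whatever the hypervisor reports as free, e.g. something like XenServer's VM.get_allowed_VBD_devices(), instead of capping HVM guests at 4 disks:

```
// Illustrative only: choose the next free userdevice for a VBD without enforcing
// a hard 4-disk HVM cap; the hypervisor's allowed-device list is the authority.
import java.util.List;

public final class VbdDeviceChooser {

    /** allowedDevices would come from a call such as VM.get_allowed_VBD_devices() on XenServer. */
    static String pickDevice(List<String> allowedDevices) {
        if (allowedDevices.isEmpty()) {
            throw new IllegalStateException("No free VBD device slots reported by XenServer");
        }
        // Previously a guard here rejected device numbers above 3 for HVM guests even
        // when the hypervisor (with Xentools installed) would have accepted them.
        return allowedDevices.get(0);
    }

    public static void main(String[] args) {
        System.out.println(pickDevice(List.of("4", "5", "6")));
    }
}
```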

@koushik-das this is related to the "autodetect" feature that you introduced a while back (https://issues.apache.org/jira/browse/CLOUDSTACK-8826). I would love your review on this fix.

* pr/1829:
  Fix HVM VM restart bug in XenServer

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-02-28 05:42:59 +05:30
Kishan Kavala
9a021904af Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying a VM. free memory = total memory - (sum of all VM memory) 2017-02-20 11:32:48 +05:30
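A tiny sketch of that arithmetic (illustrative names, not the actual capacity-manager code):

```
// Sketch: free = host total memory minus the memory of all VMs already placed on the host.
import java.util.List;

public final class FreeMemoryCheck {

    static long freeMemoryBytes(long hostTotalBytes, List<Long> vmMemoryBytes) {
        long used = vmMemoryBytes.stream().mapToLong(Long::longValue).sum();
        return hostTotalBytes - used;
    }

    public static void main(String[] args) {
        long free = freeMemoryBytes(64L << 30, List.of(8L << 30, 16L << 30));
        System.out.println("Free bytes: " + free); // deploy only if free >= requested VM memory
    }
}
```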
Anshul Gangwar
ca84fd4ffd CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume
snapshots to exist together

Reverting a VM to a disk-only snapshot in XenServer corrupts the VM

Stale NFS secondary storage on XS leads to volume creation failure from snapshot
2017-02-15 12:56:39 +05:30
nvazquez
49dadc5505 CLOUDSTACK-9752: [Vmware] Optimize volume attachment to VM 2017-02-10 11:02:44 -03:00
nvazquez
6ce6cf67f0 CLOUDSTACK-9738: [Vmware] Optimize vm expunge process for instances with vm snapshots 2017-02-06 23:39:01 -03:00
Rajani Karuturi
7233ac37cd Merge pull request #977 from ustcweizhou/vm-snapshot
[4.10] CLOUDSTACK-8746: VM Snapshotting implementation for KVM

* pr/977:
  Fixes for testing VM Snapshots on KVM. Related to PR 977
  CLOUDSTACK-8746: vm snapshot implementation for KVM

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-01-31 05:58:56 +05:30
Rajani Karuturi
f10c8bfe0c Merge pull request #1700 from wido/ipv6-basic-networking
CLOUDSTACK-9359: IPv6 for Basic Networking

This PR is a proposal for adding very basic IPv6 to Basic Networking. The main goal of this PR is that the API returns a valid IPv6 address over which the Instance is reachable.

The GUI will show the IPv6 address after deployment of the Instance.

![screenshot from 2016-10-03 16 34 56](https://cloud.githubusercontent.com/assets/326786/19070024/b06d2de6-8a29-11e6-8fe7-4902e2801ada.png)

If the vlan table has a proper IPv6 CIDR configured, the DirectPodBasedNetworkGuru will calculate the IPv6 address the Instance will obtain using EUI-64 and SLAAC: https://tools.ietf.org/search/rfc4862

In this case the _vlan_ table contained:

<pre>mysql> select * from vlan \G
*************************** 1. row ***************************
                 id: 1
               uuid: 90e0716c-5261-4992-bb9d-0afd3006f476
            vlan_id: vlan://untagged
       vlan_gateway: 172.16.0.1
       vlan_netmask: 255.255.255.0
        description: 172.16.0.10-172.16.0.250
          vlan_type: DirectAttached
     data_center_id: 1
         network_id: 204
physical_network_id: 200
        ip6_gateway: 2001:980:7936:112::1
           ip6_cidr: 2001:980:7936:112::/64
          ip6_range: NULL
            removed: NULL
            created: 2016-07-19 20:39:41
1 row in set (0.00 sec)

mysql></pre>

It will then log:

<pre>2016-10-04 11:42:44,998 DEBUG [c.c.n.g.DirectPodBasedNetworkGuru] (Work-Job-Executor-1:ctx-1975ec54 job-186/job-187 ctx-0d967d88) (logid:275c4961) Found IPv6 CIDR 2001:980:7936:112::/64 for VLAN 1
2016-10-04 11:42:45,009 INFO  [c.c.n.g.DirectPodBasedNetworkGuru] (Work-Job-Executor-1:ctx-1975ec54 job-186/job-187 ctx-0d967d88) (logid:275c4961) Calculated IPv6 address 2001:980:7936:112:4ba:80ff:fe00:e9 using EUI-64 for NIC 6a05deab-b5d9-4116-80da-c94b48333e5e</pre>
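For reference, a small stand-alone sketch of the EUI-64 derivation behind that log line (not the DirectPodBasedNetworkGuru code itself): split the MAC in half, insert ff:fe, flip the universal/local bit of the first octet, and append the result to the /64 prefix.

```
// Sketch of SLAAC/EUI-64 interface-identifier derivation (RFC 4291 / RFC 4862).
// The MAC and prefix below are reverse-engineered from the log line above.
public final class Eui64 {

    static String ipv6FromMac(String prefix64, String mac) {
        String[] o = mac.split(":");
        int first = Integer.parseInt(o[0], 16) ^ 0x02; // flip the universal/local bit
        // insert ff:fe between the two MAC halves and append to the /64 prefix
        return String.format("%s%02x%s:%sff:fe%s:%s%s",
                prefix64, first, o[1], o[2], o[3], o[4], o[5]);
    }

    public static void main(String[] args) throws Exception {
        String addr = ipv6FromMac("2001:980:7936:112:", "06:ba:80:00:00:e9");
        // prints 2001:980:7936:112:4ba:80ff:fe00:e9, matching the log line above
        System.out.println(java.net.InetAddress.getByName(addr).getHostAddress());
    }
}
```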

The template has to be configured accordingly:
- No IPv6 Privacy Extensions
- Use SLAAC
- Follow RFC4862

This is also described in: https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+in+Basic+Networking

The next steps after this will be:
- Security Grouping to prevent IPv6 Address Spoofing
- Security Grouping to filter ICMP, UDP and TCP traffic

* pr/1700:
  CLOUDSTACK-676: IPv6 In -and Egress filtering for Basic Networking
  CLOUDSTACK-676: IPv6 Basic Security Grouping for KVM
  CLOUDSTACK-9359: IPv6 for Basic Networking with KVM

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-01-27 05:42:44 +05:30
Rajani Karuturi
4721c53ea0 Merge pull request #1749 from mike-tutkowski/archived_snapshots
CLOUDSTACK-9619: Updates for SAN-assisted snapshots

This PR is to address a few issues in #1600 (which was recently merged to master for 4.10).

In StorageSystemDataMotionStrategy.performCopyOfVdi we call getSnapshotDetails. In one such scenario, the source snapshot in question is coming from secondary storage (when we are creating a new volume on managed storage from a snapshot of ours that's on secondary storage).

This usually worked in the regression tests due to a bit of "luck": We retrieve the ID of the snapshot (which is on secondary storage) and then try to pull out its StorageVO object (which is for primary storage). If you happen to have a primary storage that matches the ID (which is the ID of a secondary storage), then getSnapshotDetails populates its Map<String, String> with inapplicable data (that is later ignored) and you don't easily see a problem. However, if you don't have a primary storage that matches that ID (which I didn't today because I had removed that primary storage), then a NullPointerException is thrown.

I have fixed that issue by skipping getSnapshotDetails if the source is coming from secondary storage.

While fixing that, I noticed a couple more problems:

1) We can invoke grantAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
2) We can invoke revokeAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).

I have corrected those issues, as well.

I then came across one more problem:
When using a SAN snapshot and copying it to secondary storage or creating a new managed-storage volume from a snapshot of ours on secondary storage, we attach to the SR in the XenServer code, but detach from it in the StorageSystemDataMotionStrategy code (by sending a message to the XenServer code to perform an SR detach). Since we know to detach from the SR after the copy is done, we should detach from the SR in the XenServer code (without that code having to be explicitly called from outside of the XenServer logic).

I went ahead and changed that, as well.
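A minimal sketch of the first fix, with made-up types standing in for the real StorageSystemDataMotionStrategy ones:

```
// Illustrative guard: only look up SAN snapshot details when the snapshot
// actually lives on primary (managed) storage; secondary-storage sources skip it.
import java.util.Collections;
import java.util.Map;

public final class SnapshotDetailsGuard {

    enum DataStoreRole { Primary, Image } // Image stands in for secondary storage here

    static Map<String, String> getSnapshotDetails(long snapshotId) {
        // stand-in for the real lookup against primary storage
        return Map.of("sanSnapshotId", "snap-" + snapshotId);
    }

    static Map<String, String> detailsForCopy(long snapshotId, DataStoreRole sourceRole) {
        if (sourceRole != DataStoreRole.Primary) {
            // Before the fix this lookup ran anyway and could NPE (or pick up an
            // unrelated primary store whose ID happened to collide).
            return Collections.emptyMap();
        }
        return getSnapshotDetails(snapshotId);
    }

    public static void main(String[] args) {
        System.out.println(detailsForCopy(42, DataStoreRole.Image));   // {}
        System.out.println(detailsForCopy(42, DataStoreRole.Primary)); // {sanSnapshotId=snap-42}
    }
}
```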

JIRA Ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-9619

* pr/1749:
  CLOUDSTACK-9619: Updates for SAN-assisted snapshots

Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
2017-01-27 05:35:06 +05:30
Wido den Hollander
84e496b4f9
CLOUDSTACK-676: IPv6 Basic Security Grouping for KVM
This commit implements basic Security Grouping for KVM in
Basic Networking.

It does not implement full Security Grouping yet, but it does:
- Prevent IP-Address source spoofing
- Allow DHCPv6 clients, but disallow DHCPv6 servers
- Disallow Instances from sending out Router Advertisements

The Security Grouping allows ICMPv6 packets as described by RFC4890
as they are essential for IPv6 connectivity.

Following RFC4890 it allows:
- Router Solicitations
- Router Advertisements (incoming only)
- Neighbor Advertisements
- Neighbor Solicitations
- Packet Too Big
- Time Exceeded
- Destination Unreachable
- Parameter Problem
- Echo Request

ICMPv6 is an essential part of IPv6; without it, connectivity will break or be very
unreliable.

For now it allows any UDP and TCP packet to be sent in to the Instance, which
effectively opens up the firewall completely.

Future commits will implement Security Grouping further, allowing UDP and TCP
ports for IPv6 to be controlled as can be done with IPv4.

Regardless of the egress filtering (which can't be done yet) it will always allow outbound DNS
to port 53 over UDP or TCP.
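Purely as an illustration of the allow-list above (the real rules are applied by the KVM security-group scripts on the host, and direction handling such as incoming-only Router Advertisements is omitted), the ICMPv6 types could be turned into ip6tables rules like this:

```
// Sketch: build ip6tables ACCEPT rules for the RFC4890 ICMPv6 types listed above.
import java.util.List;

public final class Icmp6Rules {

    static final List<String> ALLOWED_TYPES = List.of(
            "router-solicitation", "router-advertisement",
            "neighbour-solicitation", "neighbour-advertisement",
            "packet-too-big", "time-exceeded",
            "destination-unreachable", "parameter-problem", "echo-request");

    public static void main(String[] args) {
        String chain = "sg-ipv6-icmp"; // hypothetical chain name
        for (String type : ALLOWED_TYPES) {
            System.out.printf("ip6tables -A %s -p ipv6-icmp --icmpv6-type %s -j ACCEPT%n",
                    chain, type);
        }
    }
}
```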

Signed-off-by: Wido den Hollander <wido@widodh.nl>
2017-01-26 15:36:08 +01:00
Wei Zhou
a2428508e2 CLOUDSTACK-8746: vm snapshot implementation for KVM
(1) add support to create/delete/revert VM snapshots on running VMs with QCOW2 format
(2) add a new API to create a volume snapshot from a VM snapshot
(3) delete metadata of VM snapshots before stopping/migrating and recover VM snapshots after starting/migrating
(4) enable deleting of a VM snapshot when the VM is stopped or the snapshot is not listed in the QCOW2 image
(5) enable smoke tests for VM snapshots on KVM
2017-01-24 21:47:30 +01:00
Rohit Yadav
8b6e96bca9 Updating pom.xml version numbers for release 4.9.3.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-01-06 10:40:15 +05:30
Rohit Yadav
dfc39c1f08 Updating pom.xml version numbers for release 4.9.2.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-01-03 12:28:47 +05:30
Sateesh Chodapuneedi
7ea41a5641 CLOUDSTACK-9704 Remove dependency on VmwareContext object to fetch system VM key file
While remotely executing commands/scripts through SSH in the VR, ACS uses the system VM key file.
ACS fetches this key file using a VMwareContext object, which encapsulates a vCenter connection handle.
This is inefficient because of the dependency on getServiceContext(), which implies a vCenter connection
handle that is not required just to fetch a file from the management server's namespace.

Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
2016-12-30 02:25:54 +05:30
Sateesh Chodapuneedi
0ef1c17541 CLOUDSTACK-9684 Invalid zone id error while listing vmware zone
Issue
=====
While listing datacenters associated with a zone, only zone ID validation is required.
There is no need for additional checks such as whether the zone is a legacy zone.

Fix
===
Removed the unnecessary checks on the zone ID; now only checking whether a zone with the specified ID exists.
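In outline (illustrative names only, not the actual API-layer code):

```
// Sketch: validate only that the zone exists; legacy-zone checks are not needed here.
import java.util.Map;

public final class ZoneValidation {

    static void validateZoneId(long zoneId, Map<Long, String> zonesById) {
        if (!zonesById.containsKey(zoneId)) {
            throw new IllegalArgumentException("Invalid zone id " + zoneId);
        }
        // Previously, additional checks (e.g. whether the zone is a legacy zone)
        // rejected otherwise valid list requests.
    }

    public static void main(String[] args) {
        validateZoneId(1L, Map.of(1L, "zone1"));
        System.out.println("zone ok");
    }
}
```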

Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
2016-12-30 02:14:33 +05:30
Rohit Yadav
ec847a890e Merge pull request #1638 from shapeblue/spring4-java8-only
CLOUDSTACK-9456: Migrate master to Spring 4.x

This change makes CloudStack use Spring 4:

```
- Bump spring-framework version to 4.x and Jetty to version that runs with JDK7
- Bump servlet dependency version
- Migrates various XMLs to use version-independent schema URIs
```

Outstanding issue:
    - Testing of various non-standard plugins such as network and storage plugins etc.

Since this is a big change, pinging for review -- @jburwell @karuturi @wido @murali-reddy @abhinandanprateek @DaanHoogland @GaborApatiNagy @JayapalUradi @kishankavala @K0zka @nvazquez @rafaelweingartner @pyr and others

@blueorangutan package

* pr/1638:
  CLOUDSTACK-9456: Update Spring version in maven poms

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-24 09:39:18 +05:30
Rohit Yadav
a9f45dfc5f
Merge branch '4.9' 2016-12-23 17:50:42 +05:30
Wei Zhou
714221234d CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent 2016-12-22 12:00:10 +01:00
Rohit Yadav
342162bad7 Merge branch '4.9'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-22 11:59:02 +05:30
Rohit Yadav
a0e36b73ae Merge pull request #1711 from syed/xenserver7
XenServer 7 Support

This PR adds support for XenServer 7. I have manually done the following tests:
- Create a new cluster with XenServer7
- Add Primary storage: Should create an SR on XS7
- Add another XS7 host to the Pool
- Add host2 to Cloudstack
- Create VM1 from template
- Create VM2 from template
- Ping/SSH VM1 to VM2 and vice-versa
- Stop/Delete/Expunge VM2
- Create Data disk
- Attach it to VM1
- Create VM snapshot of VM1
- Restore VM snapshot of VM1
- Delete VM snapshot of VM1
- Create Volume snapshot of Datadisk
- Create volume snapshot of Root disk
- Create new template from snapshot of root disk
- Create volume from snapshot of datadisk
- Detach datadisk volume
- Delete datadisk volume
- Acquire a public IP
- Create a static nat to VM1
- Live migrate VM1 while traffic on VM
- Delete VM1

* pr/1711:
  [CLOUDSTACK-9662] Add support for XenServer 7

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-22 11:18:57 +05:30
Syed
eabf862ba9 [CLOUDSTACK-9662] Add support for XenServer 7 2016-12-21 16:58:10 -05:00
Sateesh Chodapuneedi
32e9e29a17 CLOUDSTACK-9673 Exception occurred while creating the CPVM in the VMware setup over standard vSwitches
Issue
====
Exception occurred while creating the CPVM in the VMware setup using standard vSwitches.
StartCommand failed due to Exception: com.vmware.vim25.AlreadyExists
message: [] com.vmware.vim25.AlreadyExistsFaultMsg: The specified key, name, or identifier already exists

Fix
===
Ensure synchronization while attempting to create a port group, so that simultaneous attempts are not made with the same port group name on the same ESXi host.
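A minimal sketch of the synchronization idea, using a hypothetical per-host/per-port-group lock key rather than the actual VMware resource code:

```
// Sketch: serialize port group creation per (host, port group name) so concurrent
// CPVM/SSVM starts cannot race into a vim25 AlreadyExists fault.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class PortGroupSync {

    private static final Map<String, Object> LOCKS = new ConcurrentHashMap<>();

    static void ensurePortGroup(String hostName, String portGroupName, Runnable createIfAbsent) {
        Object lock = LOCKS.computeIfAbsent(hostName + "/" + portGroupName, k -> new Object());
        synchronized (lock) {
            // check-then-create now happens atomically per host + port group name
            createIfAbsent.run();
        }
    }

    public static void main(String[] args) {
        ensurePortGroup("esxi-1", "cloud.public.untagged",
                () -> System.out.println("created (or already present)"));
    }
}
```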

Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
2016-12-22 01:35:10 +05:30
Anshul Gangwar
336df84f17 CLOUDSTACK-9685: delete the snapshot on primary storage associated with a volume when that volume is deleted,
as that snapshot will never be used again and would otherwise fill up primary storage
2016-12-20 15:21:04 +05:30
Rohit Yadav
0dce1c50c1 CLOUDSTACK-9456: Update Spring version in maven poms
- Bump spring-framework version to 4.x and Jetty to version that runs with JDK8
- Bump servlet dependency version
- Migrate Spring XMLs to version 4; fixes schema locations that are 3.0-dependent
  in various XMLs.
- Fix failing tests due to spring upgrade
  (Thanks @marcaurele Marc-Aurèle Brothier for fixing them)
    * Fix test DeploymentPlanningManagerImplTest
    * Fix GloboDNS test

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-16 21:21:20 +05:30
Rohit Yadav
5e19e64f2f Updating pom.xml version numbers for release 4.9.2.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-16 20:48:16 +05:30
Syed
966814b07d Fix HVM VM restart bug in XenServer 2016-12-15 14:12:59 -05:00
Rohit Yadav
af2679959b Updating pom.xml version numbers for release 4.9.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-10 08:38:03 +05:30
Rohit Yadav
6bdc411ff2
Merge branch '4.9' 2016-12-08 00:04:26 +05:30
Murali Reddy
6749785cab CLOUDSTACK-9339 Virtual Routers don't handle Multiple Public Interfaces correctly
-when processing a static NAT rule, add a mangle table rule to mark the traffic
   from the guest VM when it has an associated static NAT rule, so that traffic gets
   routed using the routing table of the device which has the public IP associated

  -fix the case where nic_device_id is empty when the IP is getting disassociated,
   resulting in an empty deviceid in ips.json

  -add utility methods in CsRule and CsRoute to add 'ip rule' and 'ip route' rules respectively

  -ensure traffic from all public interfaces is connection-marked with the device number, and restored
   for the reverse traffic. Use the connection mark to do a device-specific routing table lookup and
   fill the device-specific routing table with a default route

  -component tests for testing multiple public interfaces of VR
2016-12-07 14:33:24 +05:30
Mike Tutkowski
06806a6523 CLOUDSTACK-9619: Updates for SAN-assisted snapshots 2016-12-06 17:32:56 -07:00
Rohit Yadav
f82873ebf8
Merge branch '4.9' 2016-12-05 15:32:11 +05:30
Rohit Yadav
48b28f7d6e
Merge branch '4.8' into 4.9 2016-12-05 15:32:03 +05:30
Rohit Yadav
20aea27dc0
Merge pull request #1765 from shapeblue/CLOUDSTACK-9586
Cloudstack 9586: When using local storage with Xenserver prepareTemplate does not work with multiple primary store

The race condition happens whenever there are multiple primary storages and CS tries to mount the secondary store to the XenServer host simultaneously.

Due to the synchronized block, one mount will be successful and the other thread will get the already-mounted SR. Without the fix, the two threads will try to mount it in parallel and one will fail on XenServer.

* pr/1765:
Cloudstack 9586: When using local storage with Xenserver prepareTemplate does not work with multiple primary store

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-05 15:31:25 +05:30
Rohit Yadav
45a746ad2e
Merge branch '4.9' 2016-12-05 15:16:22 +05:30
Rohit Yadav
8d14e8e8b5 Merge pull request #1729 from shapeblue/vmware-memleak-fix
CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, it was found that the largest contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.

@blueorangutan package

* pr/1729:
  CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-05 15:15:58 +05:30
Abhinandan Prateek
ba32ac1a7c Cloudstack 9586: When using local storage with Xenserver prepareTemplate does not work with multiple primary store
The race condition happens whenever there are multiple primary storages and CS tries to mount the secondary store to the XenServer host simultaneously.
Due to the synchronized block, one mount will be successful and the other thread will get the already-mounted SR. Without the fix, the two threads will try to mount it in parallel and one will fail on XenServer.
2016-12-02 13:37:47 +05:30
Rohit Yadav
e091e82a7f
Merge branch '4.9' 2016-12-02 10:42:31 +05:30
Rohit Yadav
90a3d97c5e CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool
In a recent management server crash, it was found that the largest contributor
to memory leak was in VmwareContextPool where a registry is held (arraylist)
that grows indefinitely. The list itself is not used anywhere or consumed. There
exists a hashmap (pool) that returns a list of contexts for existing poolkey
(address/username) that is used instead.

This fixes the issue by removing the arraylist registry, and limiting the
length of the context list for a given poolkey.
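In outline, the bounded pool looks something like this (illustrative names, not the actual VmwareContextPool fields):

```
// Sketch: a per-key pool with a hard cap, and no separate global registry list
// that would grow without bound.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public final class BoundedContextPool<T> {

    private final int maxPerKey;
    private final Map<String, Deque<T>> pool = new HashMap<>();

    BoundedContextPool(int maxPerKey) {
        this.maxPerKey = maxPerKey;
    }

    synchronized void returnToPool(String poolKey, T context) {
        Deque<T> contexts = pool.computeIfAbsent(poolKey, k -> new ArrayDeque<>());
        if (contexts.size() >= maxPerKey) {
            contexts.removeFirst(); // evict the oldest instead of growing forever
                                    // (the real fix would also release that context)
        }
        contexts.addLast(context);
    }

    synchronized T borrow(String poolKey) {
        Deque<T> contexts = pool.get(poolKey);
        return (contexts == null || contexts.isEmpty()) ? null : contexts.removeLast();
    }

    public static void main(String[] args) {
        BoundedContextPool<String> p = new BoundedContextPool<>(2);
        p.returnToPool("vcenter/admin", "ctx1");
        p.returnToPool("vcenter/admin", "ctx2");
        p.returnToPool("vcenter/admin", "ctx3"); // ctx1 evicted
        System.out.println(p.borrow("vcenter/admin")); // ctx3
    }
}
```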

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-12-01 15:27:29 +05:30
Suresh Kumar Anaparti
309da6a57f CLOUDSTACK-8854: Apple Mac OS X VMs get created without a USB controller on ESXi hypervisors 2016-12-01 14:32:18 +05:30
Rohit Yadav
d8c038e5b2
Merge branch '4.9' 2016-11-25 13:10:56 +05:30
Rohit Yadav
50f80cc2a0
Merge branch '4.8' into 4.9 2016-11-25 13:03:04 +05:30
Rohit Yadav
020606ec31 Merge pull request #1738 from SudharmaJain/cs-9566
CLOUDSTACK-9566 instance-id metadata for baremetal VM returns ID

There is a difference in instance-id metadata between baremetal and other hypervisors.

On Baremetal
[root@ip-172-17-0-144 ~]# curl http://8.37.203.221/latest/meta-data/instance-id
6021

on Xen
[root@ip-172-17-2-103 ~]# curl http://172.17.0.252/latest/meta-data/instance-id
cbeb517a-e833-4a0c-b1e8-9ed70200fbbf

In both cases it should be the VM's UUID.

* pr/1738:
  CLOUDSTACK-9566 instance-id metadata for baremetal VM returns ID

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-11-25 12:52:10 +05:30
Wido den Hollander
2a5f37c1b1
CLOUDSTACK-8715: Add channel to Instances for Qemu Guest Agent
This commit adds an additional VirtIO channel with the name
'org.qemu.guest_agent.0' to all Instances.

With the Qemu Guest Agent the Hypervisor gains more control over the Instance if
these tools are present inside the Instance, for example:

* Power control
* Flushing filesystems
* Fetching Network information

In the future this should allow safer snapshots on KVM since we can instruct the
Instance to flush the filesystems prior to snapshotting the disk.

More information: http://wiki.qemu.org/Features/QAPI/GuestAgent

Keep in mind that on Ubuntu AppArmor still needs to be disabled since the default
AppArmor profile doesn't allow libvirt to write into /var/lib/libvirt/qemu

This commit does not add any communication methods through API-calls, it merely
adds the channel to the Instances and installs the Guest Agent in the SSVMs.

With the addition of the Qemu Guest Agent channel a second channel appears in /dev
on an SSVM as a VirtIO port.

The order in which the ports are defined in the XML matters for the naming inside
the SSVM, so by not relying on /dev/vportXX but looking for a static name, the
SSVM still boots properly if the order in the XML definition is changed.

An SSVM with both ports attached will have something like this:

  root@v-215-VM:~# ls -l /dev/virtio-ports
  total 0
  lrwxrwxrwx 1 root root 11 May 13 21:41 org.qemu.guest_agent.0 -> ../vport0p2
  lrwxrwxrwx 1 root root 11 May 13 21:41 v-215-VM.vport -> ../vport0p1
  root@v-215-VM:~# ls -l /dev/vport*
  crw------- 1 root root 251, 1 May 13 21:41 /dev/vport0p1
  crw------- 1 root root 251, 2 May 13 21:41 /dev/vport0p2
  root@v-215-VM:~#

In this case the SSVM port points to /dev/vport0p1, but if the order in the XML
is different it might point to /dev/vport0p2

By looking for a port name with a pre-defined pattern in /dev/virtio-ports we
do not rely on the order in the XML definition.
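A small sketch of that name-based lookup (paths as in the listing above; the real consumer is the systemvm-side logic):

```
// Sketch: resolve the agent's VirtIO port by its stable name under
// /dev/virtio-ports instead of guessing /dev/vportXpY from XML ordering.
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class VirtioPortLookup {

    /** Returns the first port whose name matches the given suffix, e.g. ".vport". */
    static Path findPort(String suffix) throws IOException {
        Path dir = Paths.get("/dev/virtio-ports");
        if (!Files.isDirectory(dir)) {
            return null; // not running inside a system VM with VirtIO channels
        }
        try (DirectoryStream<Path> ports = Files.newDirectoryStream(dir)) {
            for (Path port : ports) {
                if (port.getFileName().toString().endsWith(suffix)) {
                    return port; // symlink such as v-215-VM.vport -> ../vport0p1
                }
            }
        }
        return null;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(findPort(".vport"));
    }
}
```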

Signed-off-by: Wido den Hollander <wido@widodh.nl>
2016-11-23 16:01:08 +01:00
Rohit Yadav
feaeed7b16
Merge pull request #1542 from nvazquez/nestedv
CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor

## Introduction

[JIRA TICKET](https://issues.apache.org/jira/browse/CLOUDSTACK-9379)

It is desired to support nested virtualization at the VM level for the VMware hypervisor. Current behaviour supports enabling/disabling nested virtualization globally by modifying the global config `'vmware.nested.virtualization'`. The aim is to improve this feature by having control at the VM level instead of only a global control.

A new global configuration is added to enable/disable per-VM nested virtualization control: `'vmware.nested.virtualization.perVM'`. Default value: false

After a VM deployment or start command, the VM params include the `'nestedVirtualizationFlag'` key and its value is:
- true -> nested virtualization enabled
- false -> nested virtualization disabled

**We determine whether nested virtualization is enabled or disabled by examining these 3 values:**
- **(1)** global configuration `'vmware.nested.virtualization'` value
- **(2)** global configuration `'vmware.nested.virtualization.perVM'` value
- **(3)** `'nestedVirtualizationFlag'` value in `user_vm_details` if present, `null` if not.

Using these 3 values, there are different use cases:
- **(1)** = TRUE, **(2)** = TRUE, **(3)** is null -> _ENABLED_
- **(1)** = TRUE, **(2)** = TRUE, **(3)** = TRUE -> _ENABLED_
- **(1)** = TRUE, **(2)** = TRUE, **(3)** = FALSE -> _DISABLED_
- **(1)** = TRUE, **(2)** = FALSE, **(3)** indifferent  -> _ENABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** is null -> _DISABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** = TRUE -> _ENABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** = FALSE -> _DISABLED_
- **(1)** = FALSE, **(2)** = FALSE, **(3)** indifferent -> _DISABLED_
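The table above reduces to a small predicate; a sketch with illustrative names:

```
// Sketch of the per-VM nested virtualization decision described above.
public final class NestedVirtDecision {

    /**
     * @param globalNV      value of vmware.nested.virtualization            (1)
     * @param globalNVPerVM value of vmware.nested.virtualization.perVM      (2)
     * @param vmFlag        nestedVirtualizationFlag from user_vm_details, null if absent (3)
     */
    static boolean nestedVirtualizationEnabled(boolean globalNV, boolean globalNVPerVM, Boolean vmFlag) {
        if (!globalNVPerVM) {
            return globalNV;                       // per-VM control off: global value wins
        }
        return vmFlag != null ? vmFlag : globalNV; // per-VM control on: VM flag overrides
    }

    public static void main(String[] args) {
        System.out.println(nestedVirtualizationEnabled(true,  true,  null));  // ENABLED
        System.out.println(nestedVirtualizationEnabled(true,  true,  false)); // DISABLED
        System.out.println(nestedVirtualizationEnabled(false, true,  true));  // ENABLED
        System.out.println(nestedVirtualizationEnabled(false, false, null));  // DISABLED
    }
}
```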

* pr/1542:
  CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-11-23 15:00:36 +05:30
Rohit Yadav
35803805c7
Merge branch '4.9' 2016-11-23 14:28:31 +05:30
Rohit Yadav
1f2184800b Merge pull request #1681 from murali-reddy/router_eth_device_index
CLOUDSTACK-9491: incorrect parsing of device list to find ethernet index of plugged NIC

In VmwareResource, the findRouterEthDeviceIndex() method finds the ethernet interface index for a given
MAC address. This method is used, once a NIC is plugged, to determine the ethernet interface. It reads
"/proc/sys/net/ipv4/conf" from the VR and loops through the devices to find the right
ethernet interface. However, the current logic reads it once and then loops through the device list.
It is observed that the device may not show up in '/proc/sys/net/ipv4/conf' immediately once the NIC is
plugged into the VM from vCenter.

The fix ensures that, while waiting for 15 sec in the loop, the latest content of /proc/sys/net/ipv4/conf
is re-read, so that the right device list is processed.
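A sketch of the loop after the fix, with the directory re-read on every iteration (the real method runs these reads inside the VR over SSH; the MAC matching below is simplified):

```
// Sketch: re-list /proc/sys/net/ipv4/conf on every poll so a freshly hot-plugged
// device is picked up within the 15-second wait window.
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class RouterEthDeviceIndex {

    static int findEthDeviceIndex(String macAddress, long timeoutMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            File[] devices = new File("/proc/sys/net/ipv4/conf").listFiles(); // fresh read each loop
            if (devices != null) {
                for (File dev : devices) {
                    String name = dev.getName();
                    if (!name.startsWith("eth")) {
                        continue;
                    }
                    Path addrFile = Paths.get("/sys/class/net/" + name + "/address");
                    if (!Files.exists(addrFile)) {
                        continue;
                    }
                    String mac = Files.readString(addrFile).trim();
                    if (mac.equalsIgnoreCase(macAddress)) {
                        return Integer.parseInt(name.substring(3));
                    }
                }
            }
            Thread.sleep(1000);
        }
        return -1;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(findEthDeviceIndex("02:00:4b:a8:00:09", 15_000));
    }
}
```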

Manually tested VPC scenarios of adding new tiers, which use findRouterEthDeviceIndex to find the guest/public network ethernet index.

* pr/1681:
  CLOUDSTACK-9491: incorrect parsing of device list to find ethernet index of plugged NIC

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2016-11-23 14:28:12 +05:30
Rohit Yadav
0642a6982f
Merge branch '4.9' 2016-11-23 14:22:15 +05:30
Rohit Yadav
55b918076f
Merge branch '4.8' into 4.9 2016-11-23 13:50:15 +05:30