1) VXLAN will use the bridge naming scheme 'brvx-<vni>'. Multiple physical networks can host
the guest traffic type with VXLAN isolation, as long as they don't use the same VNI range.
2) A guest traffic label can be a physical interface if no bridge by the given name is found.
Normally we take the traffic label name, find the matching bridge, then resolve that to a
physical interface, and create the guest bridges on that interface. Now the interface can
simply be specified directly.
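A minimal Python sketch of that resolution order (illustrative, not the actual BridgeVifDriver code; it assumes the Linux sysfs bridge layout):

    import os

    def vxlan_bridge_name(vni):
        return "brvx-%s" % vni                       # bridge scheme 'brvx-<vni>'

    def resolve_physical_interface(traffic_label):
        # Normal path: the label names a bridge; resolve it to one of its ports.
        if os.path.isdir("/sys/class/net/%s/bridge" % traffic_label):
            ports = os.listdir("/sys/class/net/%s/brif" % traffic_label)
            if ports:
                return ports[0]
        # Fallback: no bridge by that name, so the label is the interface itself.
        return traffic_label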
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI
in cloudstack-agent (kvm)
- Add modifyvxlan.sh, a script that handles bridge/VXLAN interface manipulation (a conceptual sketch follows this note)
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation
Database changes:
- No change in database structure.
- VXLAN isolation uses the same tables that VLAN uses to store vNet allocation status.
Known Issue and/or TODO:
- Some resources still say 'VLAN' in the log even when VXLAN is used
- in UI, "Network - GuestNetworks" doesn't display VNI
-- VLAN ID field displays "N/A"
- Documentation!
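For illustration, a minimal Python sketch of the kind of bridge/VXLAN interface manipulation such a script performs (not the actual modifyvxlan.sh; the multicast group and device names are assumptions, and iproute2 is called directly):

    import subprocess

    def add_vxlan_bridge(vni, pif, mcast_group="239.0.0.1"):
        vx = "vxlan%d" % vni          # VXLAN device carrying the VNI
        br = "brvx-%d" % vni          # guest bridge, named after the VNI
        subprocess.check_call(["ip", "link", "add", vx, "type", "vxlan",
                               "id", str(vni), "group", mcast_group, "dev", pif])
        subprocess.check_call(["ip", "link", "add", br, "type", "bridge"])
        subprocess.check_call(["ip", "link", "set", vx, "master", br])
        subprocess.check_call(["ip", "link", "set", vx, "up"])
        subprocess.check_call(["ip", "link", "set", br, "up"])

    # e.g. add_vxlan_bridge(1001, "eth1") creates vxlan1001 enslaved to brvx-1001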
Signed-off-by: Toshiaki Hatano <haeena@haeena.net>
There still exist two issues after Edison's commits.
(1) Migration from new hosts to old hosts fails.
The bridge name on an old host is set to cloudVirBr* if network.bridge.name.schema is set to 3.0 in /etc/cloudstack/agent/agent.properties, but the actual bridge name is breth*-* after running cloudstack-agent-upgrade.
(2) All ports of VMs (Basic zone, or Advanced zone with security groups) on old hosts are open, because the iptables rules are bound to the device (bridge) name, which is changed by cloudstack-agent-upgrade.
After this, the KVM upgrade steps are:
a. Install the 4.2 cloudstack agent on each KVM host
b. Run "cloudstack-agent-upgrade". This script will upgrade all the existing bridge names to the new naming scheme, and update the related firewall rules.
c. Install a libvirt hook:
c1. mkdir /etc/libvirt/hooks
c2. cp /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
c3. chmod +x /etc/libvirt/hooks/qemu
c4. service libvirtd restart
c5. service cloudstack-agent restart
Signed-off-by: Wei Zhou <w.zhou@leaseweb.com>
When 'security_group.py cleanup_rules' is called by the KVM Agent, it cleans up all Instances
not in the "running" state according to libvirt.
However, when a snapshot is created of an Instance, it goes into the "paused" state while the
snapshot is created.
This leads to Security Rules being removed when an Instance is being snapshotted and the
cleanup process is initiated.
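A minimal sketch of the idea behind the fix (assumes the libvirt-python bindings; not the script's exact code): treat paused domains as alive so a VM that is being snapshotted keeps its rules.

    import libvirt

    def domains_to_keep():
        conn = libvirt.open("qemu:///system")
        alive = (libvirt.VIR_DOMAIN_RUNNING, libvirt.VIR_DOMAIN_PAUSED)
        try:
            # dom.state() returns (state, reason); keep running *and* paused domains.
            return [dom.name() for dom in conn.listAllDomains()
                    if dom.state()[0] in alive]
        finally:
            conn.close()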
Start/stop of VMs and the DHCP server are done. VM migration is not done yet.
A new command (PvlanSetupCommand) is sent for setting up PVLAN for VMs. Currently it
focuses on the OVS implementation. It needs to be made more abstract, and the vSwitch part still needs to be added.
Detail: Added exception handling around iptables chain flushing, along
with a call to default_network_rules() to re-initialize.
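A minimal sketch of the approach (names are illustrative; the re-initialization callback stands in for the script's default_network_rules()):

    import logging
    import subprocess

    def flush_vm_chain(vm_chain, reinitialize):
        try:
            subprocess.check_call(["iptables", "-F", vm_chain])
        except Exception as e:
            # Don't leave the VM without rules if the flush blows up;
            # fall back to rebuilding the default ruleset.
            logging.warning("Failed to flush chain %s, re-initializing: %s", vm_chain, e)
            reinitialize()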
Testing:
On the agent, ls /var/run/cloud and pick one of the VMs to test with. Make a
backup of its logfile (e.g. cp /var/run/cloud/i-2-1722.log /tmp).
Destroy the firewall ruleset for that VM with
/usr/lib64/cloud/common/scripts/vm/network/security_group.py destroy_network_rules_for_vm --vmname i-2-1722-VM --vif vnet10
Now copy the log file back, edit the file, and decrement the last field by 1.
ACS should notice the out-of-date sequence ID and push a new ruleset for
the VM within 60 seconds.
BUG-ID: CLOUDSTACK-1685
Bugfix-for: John Kinsella
Reviewed-by:
Reported-by:
Signed-off-by: John Kinsella <jlk@stratosec.co> 1363286927 -0700
Detail: A grep in security_group.py wasn't specific enough and could
potentially delete rules for VMs other than the intended one.
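A minimal sketch of the tightened match (illustrative): anchor on the exact chain name so, for example, rules for 'i-2-17-VM' are not caught while handling 'i-2-1-VM'.

    import re

    def rules_for_vm(iptables_save_output, vm_chain):
        pattern = re.compile(r"(^|\s)-A %s(\s|$)" % re.escape(vm_chain))
        return [line for line in iptables_save_output.splitlines()
                if pattern.search(line)]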
BUG-ID: CLOUDSTACK-309
Bugfix-for: master
Reviewed-by:
Reported-by: Francois Scala
Signed-off-by: John Kinsella <jlk@stratosec.co> 1363222521 -0700
Detail: Code was attempting to concatenate an exception to a string.
Updated to convert it to text and concatenate that.
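A minimal sketch of the fix (the message text is illustrative):

    import logging

    try:
        raise RuntimeError("example failure")
    except Exception as e:
        # Before: "failed to program rules: " + e   -> TypeError
        logging.debug("failed to program rules: " + str(e))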
BUG-ID: CLOUDSTACK-1052
Bugfix-for: master
Reported-by: Noa Resare
Signed-off-by: John Kinsella <jlk@stratosec.co> 1363218769 -0700
Checks the args length and doesn't throw IndexError when no args are
passed. Also logs to security_group.log when executed with no args or an unknown
command.
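A minimal sketch of the guard (the command names are taken from the script as referenced elsewhere in these notes; the log location is illustrative):

    import logging
    import sys

    logging.basicConfig(filename="security_group.log", level=logging.DEBUG)
    known = ("default_network_rules", "destroy_network_rules_for_vm", "cleanup_rules")

    if len(sys.argv) < 2:
        logging.error("security_group.py called with no arguments")
        sys.exit(1)
    if sys.argv[1] not in known:
        logging.error("security_group.py called with unknown command: %s", sys.argv[1])
        sys.exit(1)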
Review: https://reviews.apache.org/r/9588
Reviewed-by: Rohit Yadav <bhaisaab@apache.org>
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Detail: Users can experience long delays during VM migration, because the
Linux bridge has a forwarding delay set by default. This means that the
network will likely miss any gratuitous ARP from qemu notifying the network that
the MAC has moved. This change is a common recommendation for virtualization
running on Linux bridges.
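A minimal sketch of the change (the bridge name is an example; the sysfs value is in 1/100ths of a second): a forwarding delay of 0 lets the gratuitous ARP out as soon as the port is attached, instead of being dropped while the port is in the listening/learning state.

    def set_forward_delay(bridge, delay=0):
        with open("/sys/class/net/%s/bridge/forward_delay" % bridge, "w") as f:
            f.write(str(delay))

    # e.g. set_forward_delay("cloudbr0")   # equivalent to 'brctl setfd cloudbr0 0'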
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1357259186 -0700
- Since we're always getting the first item from the list, use head -1 to get the first
of the results instead of processing the list again
- Remove the unnecessary pop (it served no purpose)
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
CloudStack seems to let you create guest traffic types on multiple
physical networks. However, when I try this with KVM I always end up
bridging to whatever device is used for guest.network.device. This change pulls
the traffic label (NicTO.getName()) and uses the corresponding bridge to ensure that
we get onto the correct physical network, rather than always using
guest.network.device.
This also changes the bridge naming scheme from cloudVirBr + vlanid to
br + physicalinterface + "-" + vlanid, because we should be able
to support the same VLAN numbers per physical network, and the previous
bridge names would not support this and would collide.
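A minimal sketch of the naming idea (illustrative; the real change is in the Java agent code):

    def guest_bridge_name(physical_interface, vlan_id):
        # New scheme: one bridge per (physical interface, vlan) pair, so the same
        # VLAN id on two physical networks yields two distinct bridges.
        return "br%s-%s" % (physical_interface, vlan_id)

    # e.g. guest_bridge_name("eth1", 100) -> "breth1-100"
    # old scheme: "cloudVirBr100", which collides across physical networks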
Signed-off-by: Edison Su <sudison@gmail.com>
The default_network_rules_systemvm method in security_group.py only created the appropriate rules for
a single bridge.
This, however, leads to traffic not being forwarded to the virtual machine when the system VMs
(both console and storage) have different bridges in basic networking.
This patch makes sure rules are generated for all target devices based on their source device/bridge.
It excludes the LinkLocalBridge, since no filtering is needed on that bridge.
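A minimal sketch of the selection logic (variable and default names are illustrative):

    def devices_needing_rules(interfaces, link_local_bridge="cloud0"):
        # interfaces: (device, bridge) pairs, e.g. [("vnet3", "cloudbr0"), ("vnet4", "cloudbr1")]
        # Rules are generated per target device from its source bridge; the
        # link-local bridge is skipped since no filtering is needed there.
        return [(device, bridge) for device, bridge in interfaces
                if bridge != link_local_bridge]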