Use the SRX firewall filter as the SRX firewall. The old security policy
mechanism cannot be used for IP-based control. This enables the SRX to control
traffic for the F5 behind it.
Detail: Users can experience long delays during VM migration because the
Linux bridge has a forwarding delay set by default. This means the network
will likely miss any gratuitous ARP from qemu notifying it that the MAC has
moved. This change is a common recommendation for virtualization running on
Linux bridges.
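For illustration only (not part of this change), the forwarding delay can be
inspected and cleared with brctl; cloudbr0 is just a placeholder bridge name:

    # Show the bridge's current forward delay
    brctl showstp cloudbr0 | grep -i "forward delay"
    # Set the forward delay to 0 so newly attached ports start forwarding immediately
    brctl setfd cloudbr0 0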
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1357259186 -0700
- Since we're always taking the first item from the list, use head -1 to get the first
result instead of processing the list again (see the sketch below)
- Remove unnecessary pop (why was it even there)
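As a sketch only, with a made-up list_storage_pools command standing in for the
real pipeline:

    # Illustration: pipe straight into head -1 rather than capturing the whole
    # list and picking the first entry in a second step
    first_pool=$(list_storage_pools | head -1)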
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Detail: If the source image is qcow2 and we want a qcow2 image, then doing a
convert strips off compression and any snapshots the user had in that image. If
a backing file exists, we stick with convert so we can pull in both the backing
file and the COW image; otherwise we just cp the qcow2 file. This is also faster.
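A rough sketch of that decision, with placeholder $src/$dst paths rather than
the script's actual code:

    # Keep compression and snapshots by copying when there is no backing file;
    # otherwise convert so the backing file and COW image are flattened together
    if qemu-img info "$src" | grep -q '^backing file:'; then
        qemu-img convert -f qcow2 -O qcow2 "$src" "$dst"
    else
        cp "$src" "$dst"
    fi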
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1354755241 -0700
Detail: createvolume.sh had '$qemu-img' in one spot instead of '$qemu_img',
which it uses everywhere else
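For illustration (not from the patch itself), this is how the shell expands the
two spellings, assuming $qemu is unset:

    qemu_img="qemu-img"
    echo "$qemu_img create vol.qcow2 1G"   # expands to: qemu-img create vol.qcow2 1G
    echo "$qemu-img create vol.qcow2 1G"   # $qemu is empty, so this becomes: -img create vol.qcow2 1G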
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1354754792 -0700
work)
CloudStack seems to let you create guest traffic types on multiple
physical networks. However, when I try this with KVM I end up always
bridging to whatever device is used for guest.network.device. This pulls
the traffic label (NicTO.getName()) and uses that bridge to ensure we
get on the correct physical network, rather than always using
guest.network.device.
This also changes the bridge naming scheme from cloudVirBr + vlanid to
br + physicalinterface + "-" + vlanid, because we should be able to
support the same VLAN numbers on different physical networks, and the
previous bridge names would collide in that case.
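As an example of the naming change (eth2 and VLAN 100 are placeholders):

    phys_if=eth2
    vlan_id=100
    old_name="cloudVirBr${vlan_id}"      # cloudVirBr100, same name on every physical network
    new_name="br${phys_if}-${vlan_id}"   # breth2-100, unique per physical interface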
Signed-off-by: Edison Su <sudison@gmail.com>
host if there are multiple storage pools in a cluster.
The issue is as follows:
1. When CloudStack detects that a host is not responding to ping
requests, it'll send a fence command for this host to another host in the
cluster.
2. The agent takes a long time to respond to this check if the storage
is fenced. This is because the agent checks whether the first host is
writing to its heartbeat file on all pools in the cluster, and it does
this sequentially for every storage pool.
Making a fix to get rid of the sleep/wait during HA. The behavior is now
similar to XenServer.
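A loose sketch of a single, bounded heartbeat check; the path, host IP, and
60-second window are made up, and the agent's real logic differs:

    # Report whether the host's heartbeat file on one pool was touched recently
    hb_file="/mnt/pool/KVMHA/hb-192.168.0.10"
    now=$(date +%s)
    mtime=$(stat -c %Y "$hb_file" 2>/dev/null || echo 0)
    if [ $((now - mtime)) -lt 60 ]; then
        echo "alive"
    else
        echo "dead"
    fi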
RB: https://reviews.apache.org/r/6133/
Send-by: devdeep.singh@citrix.com
The default_network_rules_systemvm method in security_group.py only created the appropriate rules for
one bridge.
This leads to traffic not being forwarded to the virtual machine when the system VMs
(console & storage) have different bridges in basic networking.
This patch makes sure rules are generated for all target devices based on their source device/bridge.
It excludes the LinkLocalBridge, however, since no filtering is needed on that bridge.
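A hedged sketch of the idea in shell (security_group.py itself is Python); the
device/bridge pairs, the cloud0 link-local name, and the iptables rule shown
are only illustrative:

    link_local_bridge="cloud0"
    # Program default rules for every (device, bridge) pair of the system VM,
    # skipping the link-local bridge
    for pair in "vnet0:cloudbr0" "vnet1:cloudbr1"; do
        dev=${pair%%:*}
        bridge=${pair##*:}
        [ "$bridge" = "$link_local_bridge" ] && continue
        iptables -A FORWARD -m physdev --physdev-is-bridged --physdev-out "$dev" -j ACCEPT
    done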