The basic idea: deploy a fixed-size thread pool for updating RvR status,
using a producer/consumer model. A global configuration option,
router.check.poolsize (10 by default), controls the pool size.
A pool size of 100 for 1000 RvRs was tested with the simulator and works well.
We can also adjust the global configuration option router.check.interval, e.g.
to 60s from the default 30s, to mitigate the issue.
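A minimal sketch of the pool layout described above; the class and method
names (RvrStatusChecker, checkRedundantState) are illustrative, not the
committed code:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class RvrStatusChecker {
        // Fixed-size pool; the size comes from router.check.poolsize (default 10).
        private final ExecutorService checkPool;

        public RvrStatusChecker(int poolSize) {
            this.checkPool = Executors.newFixedThreadPool(poolSize);
        }

        // Producer: invoked every router.check.interval seconds (default 30s);
        // enqueues one status check per redundant router. The executor's
        // internal queue forms the consumer side of the producer/consumer model.
        public void scheduleChecks(List<Long> routerIds) {
            for (final long id : routerIds) {
                checkPool.submit(() -> checkRedundantState(id));
            }
        }

        // Worker: queries the router and records its MASTER/BACKUP state.
        private void checkRedundantState(long routerId) {
            // ... ask the router for its redundant state and persist it ...
        }
    }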
For an LB device in inline mode, the IP deployer (the owner of the public IP)
is the firewall in front of it, not the LB itself. So check whether the device
is inline; if it is, return the firewall as the IP deployer.
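A sketch of that check, with placeholder types; only the control flow is taken
from the text above:

    // Minimal stand-in for whatever deploys public IPs.
    interface IpDeployer { }

    class LbElement implements IpDeployer {
        private final boolean inline;      // true when a firewall fronts the LB
        private final IpDeployer firewall; // the firewall device in front of it

        LbElement(boolean inline, IpDeployer firewall) {
            this.inline = inline;
            this.firewall = firewall;
        }

        // In inline mode the firewall owns the public IP, so it is the
        // deployer; otherwise the LB deploys its own public IPs.
        public IpDeployer getIpDeployer() {
            return inline ? firewall : this;
        }
    }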
Use the SRX firewall filter as the SRX firewall mechanism. The old security
policy mechanism cannot be used, as it is not IP-based. This enables the SRX
to control traffic for an F5 behind it.
If you update your build to produce a version whose name does not end in
-SNAPSHOT, you are required to declare versions on all your dependencies.
There is already a cs.mysql.version property; this patch makes sure it is used
where appropriate.
Signed-off-by: Chip Childers <chip.childers@gmail.com>
I added this in commit bc94948e0604e0e5931759be3c3d3155e84686f6 to be able to
bind VNC on KVM to the private IP address of the hypervisor.
This got (accidentally) reverted in commit 110903a91a21c04b931a26354a04bd7f534ba050 breaking
this behaviour with KVM.
By passing the destination host again in StartCommand we are able to bind the VNC to the private
IP address of the hypervisor.
This makes sure VNC is not open to the world, so users don't have to firewall
these ports, nor do they have to change "vnc_listen" in libvirt's qemu.conf.
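A sketch of the effect on the libvirt domain XML, assuming the destination
host's private IP arrives via StartCommand; the helper below is hypothetical:

    public class VncDef {
        // Bind the VNC <graphics> element to the hypervisor's private IP
        // instead of 0.0.0.0, so the console is not reachable from outside.
        public static String graphicsXml(String hostPrivateIp) {
            // Produces e.g.:
            //   <graphics type='vnc' port='-1' autoport='yes' listen='10.0.0.5'/>
            return String.format(
                "<graphics type='vnc' port='-1' autoport='yes' listen='%s'/>",
                hostPrivateIp);
        }
    }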
In the past we used the same MAC address on both routers, so once the MASTER
went down, packets to that MAC would reach the BACKUP right away.
Now we also send an arping after the BACKUP becomes MASTER, which should
update the ARP cache of the public gateway router quickly. Although this adds
a small delay (likely less than one second), that is acceptable for different
MACs.
Using different MACs would also solve some cache issues seen with the same MAC
on different vSwitch ports.
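A sketch of the gratuitous-ARP step, shelling out to iputils arping; the
interface name and announcement count are illustrative:

    import java.io.IOException;

    public class GarpSender {
        // After the BACKUP becomes MASTER, announce the public IP from the
        // new MAC so the public gateway router refreshes its ARP cache fast.
        public static void announce(String publicIp, String iface)
                throws IOException, InterruptedException {
            // -U: unsolicited (gratuitous) ARP; -c 3: send three announcements
            new ProcessBuilder("arping", "-U", "-I", iface, "-c", "3", publicIp)
                    .inheritIO().start().waitFor();
        }
    }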
1. Modified create-schema.sql to set the version to 4.1.0 instead of 4.0.0
2. Removed schema-40to41.sql and moved its content to schema-40to410.sql
3. Added schema-40to410.sql to Upgrade40to41.java
This is an improvement of:
commit 1ca493e4facf190a288012bf9b888f90e2bc2855
Author: Sheng Yang <sheng.yang@cloud.com>
Date: Wed Feb 29 17:43:50 2012 -0800
bug 14042: Don't set dhcp:router option on DHCP server for non-default
network on CentOS/RHEL
The old solution only works on CentOS/RHEL; this one extends the ability to
more guest OSes and lets the user choose what the policy should be for each
guest OS type.
As noted in the bug, several of the API commands in question
are async calls. I've added a simple regex-based string cleaning
function, and the request and response strings are run through
it prior to being appended to the audit log.
Unit tests were added for the new cleaning function as well.
The call to skip logging the createSSHKeyPair response remains intact
for now, although it should probably be scrubbed similarly to the
password fields.
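A minimal sketch of the cleaning function; the method name and exact pattern
are assumptions about the shape of the change, not the committed code:

    import java.util.regex.Pattern;

    public final class AuditLogCleaner {
        // Masks the value of sensitive key=value (or "key":"value") pairs in
        // request/response strings before they reach the audit log.
        private static final Pattern SENSITIVE = Pattern.compile(
            "(password|secretkey)(['\"]?[=:]\\s*['\"]?)([^,&'\"\\s]*)",
            Pattern.CASE_INSENSITIVE);

        public static String cleanString(String raw) {
            return SENSITIVE.matcher(raw).replaceAll("$1$2*****");
        }
    }

For example, cleanString("password=s3cret&name=vm1") yields
"password=*****&name=vm1".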
Signed-off-by: Chip Childers <chip.childers@gmail.com>
Detail: Instead of using LibvirtStorageAdaptor for everything, you can create
your own storage adaptor and use it. We select the storage adaptor based on
storage pool type, so we needed to adjust LibvirtComputingResource to pass the
pool type to everything in KVMStoragePoolManager. This in turn required passing
the necessary info to LibvirtComputingResource as well, so a few agent Commands
were modified.
Note this patch in and of itself shouldn't change any existing behavior, just
allow for new storage adaptors to be selected based on storage pool type.
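A sketch of the pool-type dispatch with a libvirt fallback; the minimal types
here are stand-ins for the real ones, and the map-based lookup is illustrative:

    import java.util.HashMap;
    import java.util.Map;

    public class StorageAdaptorSelector {
        enum StoragePoolType { NetworkFilesystem, CLVM, RBD }

        interface StorageAdaptor { /* createPool, copyTemplate, ... */ }

        static class LibvirtStorageAdaptor implements StorageAdaptor { }

        private final Map<StoragePoolType, StorageAdaptor> adaptors = new HashMap<>();
        private final StorageAdaptor defaultAdaptor = new LibvirtStorageAdaptor();

        public void register(StoragePoolType type, StorageAdaptor adaptor) {
            adaptors.put(type, adaptor);
        }

        // Pools without a dedicated adaptor still go through the libvirt
        // adaptor, which is how existing behavior stays unchanged.
        public StorageAdaptor getAdaptor(StoragePoolType type) {
            StorageAdaptor a = adaptors.get(type);
            return a != null ? a : defaultAdaptor;
        }
    }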
Reviewed-by: Edison Su
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
These unit tests do not depend on the ComponentLocator; instead they are
completely dependent on mock objects. This ensures that they can be run
standalone without any requirements on the environment.
Includes some fixes to NiciraNvpGuestNetworkGuru and GuestNetworkGuru
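A sketch of the ComponentLocator-free style, using Mockito to stub a DAO
dependency; the interface and test names are illustrative:

    import static org.junit.Assert.assertNull;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class GuestNetworkGuruTest {
        // Minimal stand-in for a DAO the guru depends on.
        interface PhysicalNetworkDao {
            Object findById(long id);
        }

        @Test
        public void unknownPhysicalNetworkIsNotDesigned() {
            // Every collaborator is a mock, so the test runs standalone with
            // no ComponentLocator and no environment requirements.
            PhysicalNetworkDao physNetDao = mock(PhysicalNetworkDao.class);
            when(physNetDao.findById(42L)).thenReturn(null);
            assertNull(physNetDao.findById(42L));
        }
    }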
When zoneid is passed and no state is specified, listVirtualMachines does
not return the destroyed VMs. This patch fixes the issue.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
- introduces a capability in the network offering which decides, when the EIP
  service is enabled, whether a public IP should be assigned to the VM by
  default or not (see the sketch after this list)
- the default network offering with EIP/ELB service will still work with the
  old EIP semantics, i.e. assign a public IP to each VM on start
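A sketch of consulting the new capability; the capability key below is a
hypothetical name, and only the decision it controls (assign a public IP at
VM start or not) comes from the text:

    import java.util.Map;

    public class EipPolicy {
        // Hypothetical capability key on the offering's EIP service.
        static final String ASSOCIATE_PUBLIC_IP = "AssociatePublicIP";

        // Old EIP semantics assigned a public IP to every VM on start; with
        // the capability present, the network offering decides.
        public static boolean assignPublicIpOnStart(Map<String, String> eipCapabilities) {
            // Default to the old behavior when the capability is absent.
            return Boolean.parseBoolean(
                eipCapabilities.getOrDefault(ASSOCIATE_PUBLIC_IP, "true"));
        }
    }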
Details:
1). Added validation to check that the VLAN id specified in createNetwork()
    does not overlap with any of the VLANs used by isolated or shared
    networks in the zone (see the sketch after this list)
2). Changed the state of a shared network with services to go to 'Setup'
    on network shutdown instead of 'Allocated'
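A sketch of the validation in item 1), assuming the VLAN ids already in use in
the zone have been collected; the names are illustrative:

    import java.util.Set;

    public class VlanValidator {
        // Reject a createNetwork() VLAN that any isolated or shared network
        // in the zone already uses.
        public static void checkVlanNotInUse(int requestedVlan, Set<Integer> usedVlans) {
            if (usedVlans.contains(requestedVlan)) {
                throw new IllegalArgumentException(
                    "VLAN " + requestedVlan + " is already in use in this zone");
            }
        }
    }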
Bug ID:CLOUDSTACK-312 enable L4-L7 network services in the shared network in the advanced zone
Details: ensure that the CIDR specified for a shared network does not overlap
with any CloudStack-generated CIDRs for isolated guest networks when using
external networking devices
Bug ID:CLOUDSTACK-312 enable L4-L7 network services in the shared network in the advanced zone
Conflicts:
server/src/com/cloud/network/NetworkManagerImpl.java
Summary: change 'shared network' in the advanced zone with L4-L7 services to
go through the network implement phase. Add ACL checks for associating an IP
with a shared network in the advanced zone
Bug ID:CLOUDSTACK-312 enable L4-L7 network services in the shared network in the advanced zone
Conflicts:
server/src/com/cloud/network/NetworkManagerImpl.java
Details:
- changed the associateIPAddr API to accept a shared network Id and account Id; the IP will be owned by the tuple (account Id, network Id)
- changed the createNetwork API to accept a CIDR when the network offering has external networking device providers
Bug ID:CLOUDSTACK-312 enable L4-L7 network services in the shared
network in the advanced zone
Detail: Because of the way most other primary storage types work with CloudStack
(i.e. backing stores), CLVM actually copies the template to a local logical
volume on primary storage, then uses that. This litters all of your primary
storage with a copy of every template used. Since we're not using these
backing stores, dump the template directly to the newly created logical volume.
This is faster as well, since the template is sparse; we're not creating a fat
template on primary storage and then copying it to a logical volume when we
deploy from template.
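A sketch of the direct dump, shelling out to qemu-img so the sparse template
is written straight onto the new logical volume; the raw output format and
paths are assumptions:

    import java.io.IOException;

    public class ClvmTemplateCopy {
        // Convert the template directly onto the LV device node, skipping
        // the intermediate fat template copy on primary storage.
        public static void dumpTemplateToLv(String templatePath, String lvPath)
                throws IOException, InterruptedException {
            new ProcessBuilder("qemu-img", "convert", "-O", "raw",
                    templatePath, lvPath)
                    .inheritIO().start().waitFor();
        }
    }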
BUG-ID: CLOUDSTACK-508
Bugfix-for: 4.1
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
* enable the SG provider if the zone is SG enabled
* don't create a public traffic type for the zone if no public network exists in the zone in 2.2.x