The basic idea is to deploy a fixed-size thread pool for updating RvR
status, using a producer/consumer model. A global configuration option,
router.check.poolsize (10 by default), controls the pool size.
A pool size of 100 for 1000 RvRs was tested with the simulator and works well.
The global configuration option router.check.interval can also be raised,
e.g. from the default 30s to 60s, to mitigate the issue.
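A minimal sketch of the pattern, in Python for illustration only (the actual change lives in the Java management server); the router names, the update_rvr_status body, and the wiring are assumptions, not the real code:

from concurrent.futures import ThreadPoolExecutor

# stand-ins for the global configuration options
POOL_SIZE = 10        # router.check.poolsize
CHECK_INTERVAL = 30   # router.check.interval, in seconds (not used in this sketch)

def update_rvr_status(router):
    # placeholder: query the router pair and persist its MASTER/BACKUP state
    print("checked %s" % router)

def run_check_cycle(routers):
    # producer: the periodic check task submits every redundant router;
    # consumers: the fixed-size pool updates each router's status in parallel
    with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
        for router in routers:
            pool.submit(update_rvr_status, router)

run_check_cycle(["r-%d-VM" % i for i in range(1, 6)])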
For an LB device in inline mode, the IP deployer (the owner of the public IP)
is the firewall in front of it, not the LB device itself. So check whether the
device is inline, and if it is, return the firewall as the IP deployer.
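A rough sketch of that check, for illustration only; the function and field names are assumptions, not the actual provider code:

# Illustrative sketch: choose the IP deployer for an external LB device.
# In inline mode the firewall placed in front of the LB owns the public IP.
def get_ip_deployer(lb_device):
    if lb_device.get("inline"):
        return lb_device["firewall_in_front"]   # firewall deploys the public IP
    return lb_device["name"]                    # the LB device deploys it itself

lb = {"name": "netscaler-1", "inline": True, "firewall_in_front": "fw-1"}
print(get_ip_deployer(lb))   # fw-1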
These columns were not added by the upgrade from 4.0 to 4.1, causing SQL errors.
They are present in create-schema.sql, so adding them here makes the upgrade to 4.1 work as well.
Multiple fixes:
1. Changes to the mvn configuration
   a. Include the simulator in client.war
   b. Activate the simulator via a profile
2. Templates for the simulator
3. Developer prefill for the simulator
   a. Use deploydb-simulator to set up the simulator db
4. Inherit components-simulator.xml from components.xml
5. ListVolumesCommand was missing from MockStorageManager
6. Include the simulator properties in utils/db.properties
TODO:
Secondary storage VMs don't come up because ComponentLocator doesn't
retain a unique set of adapters by name. Fix this in a subsequent
checkin.
BootArgs carry the router priority for master and backup, and the simulator
reuses that priority to decide RvR status. This deprecates the odd/even logic.
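A minimal sketch of the idea, for illustration only; the 'router_pr' boot-arg key and the "higher priority wins" rule are assumptions, not the simulator's actual code:

# Decide RvR state from the priority carried in the boot args rather than the
# old odd/even trick. The key name and comparison rule are assumptions.
def parse_priority(bootargs):
    args = dict(kv.split("=", 1) for kv in bootargs.split() if "=" in kv)
    return int(args.get("router_pr", 0))

def rvr_state(my_bootargs, peer_bootargs):
    mine, peer = parse_priority(my_bootargs), parse_priority(peer_bootargs)
    return "MASTER" if mine > peer else "BACKUP"

print(rvr_state("name=r-4-VM router_pr=100", "name=r-5-VM router_pr=99"))  # MASTER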
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
1. Modified create-schema.sql to set the version to 4.1.0 instead of 4.0.0
2. Removed schema-40to41.sql and moved the content to schema-40to410.sql
3. Added schema-40to410.sql to Upgrade40to41.java
This is an improvement of:
commit 1ca493e4facf190a288012bf9b888f90e2bc2855
Author: Sheng Yang <sheng.yang@cloud.com>
Date: Wed Feb 29 17:43:50 2012 -0800
bug 14042: Don't set dhcp:router option on DHCP server for non-default
network on CentOS/RHEL
The old solution only worked on CentOS/RHEL; this one extends the ability to
more guest OS types and lets the user choose what the policy should be for
each guest OS type.
- introduces a Capability in the network offering which decides, when the EIP
  service is enabled, whether a public IP should be assigned to the VM by
  default or not
- the default network offering with EIP/ELB service will still work with the
  old EIP semantics, i.e. assign a public IP to each VM on start
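A hedged illustration of how such a capability might be passed when creating a network offering through the API; the capability name and the map-style parameter layout are assumptions, not taken from this change:

# Illustrative request parameters only; names are assumptions.
params = {
    "command": "createNetworkOffering",
    "name": "EIP-on-request",
    "displaytext": "EIP/ELB offering that does not auto-assign a public IP",
    "supportedservices": "StaticNat,Lb",
    "servicecapabilitylist[0].service": "StaticNat",
    "servicecapabilitylist[0].capabilitytype": "associatePublicIP",
    "servicecapabilitylist[0].capabilityvalue": "false",
}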
Make cloud-setup-databases compatible with Python 2.4 and earlier.
Add code by Prasanna Santhanam <tsp@apache.org>
Partially revert a6dcd7af4962a584f46d9b7271e2c225618624ed, which removed
the fix for CLOUDSTACK-199: fix how cloud-setup-databases parses its
arguments. The patch splits on the right-most @ in the supplied argument
to get the user:password and host substrings.
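A small illustration of that splitting (the sample value is made up); rfind is used so the snippet also runs on old Python versions:

arg = "cloud:p@ssw0rd@db-host"            # made-up user:password@host argument
at = arg.rfind("@")                       # right-most '@'
userpass, host = arg[:at], arg[at + 1:]
user, password = userpass.split(":", 1)
print(user, password, host)               # cloud p@ssw0rd db-host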
In Python 2.4 and earlier the following construct is unsupported and
produces a SyntaxError:
try:
    ... code ...
except ValueError:
    ... code ...
finally:
    ... code ...
The workaround is the following:
try:
    try:
        ... code ...
    except ValueError:
        ... code ...
finally:
    ... code ...
Credits to Prasanna Santhanam <tsp@apache.org>
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
This commit merges the nicira-l3support branch with master, effectively
adding Nicira NVP L3 support to master. With this modification the NiciraNVP
Provider can support the following services: Connectivity, SourceNat,
StaticNat and PortForwarding.
Testing done:
Create, Delete network offerings with Nicira Element
Use the GUI to add, modify, and remove the Nicira Element and Provider
Provision, deprovision SourceNat networks
Provision, deprovision Portforwarding and StaticNat rules
Tested with Nicira NVP release 2.1.0, 2.2.0 and 2.2.1 (2.2.x recommended)
Support for local data disks. Currently the enable/disable config is at the zone level; in subsequent checkins it can be made more granular.
The following changes are made:
- The create disk offering API now takes an extra parameter to denote the storage type (local or shared), similar to the storage type in the service offering (see the sketch below).
- Create/delete of data volumes on local storage.
- Attach/detach for local data volumes. Re-attach is allowed as long as the VM host and the data volume's storage pool host are the same.
- Migration of a VM instance is not supported if it uses local root or data volumes.
- Migrate is not supported for local volumes.
- Zone-level config to enable/disable local storage usage for service and disk offerings.
- Local storage is discovered when a host is added/reconnected, if the zone-level config is enabled. When disabled, existing local storage is not removed, but new local storage is not added.
- The deploy VM command validates service and disk offerings based on the local storage config.
- Upgrade uses the global config 'use.local.storage' to set the zone-level config for local storage.
(cherry picked from commit 62710aed37606168012a0ed255a876c8e7954010)
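A hedged example of the new disk offering parameter mentioned in the first bullet above; the exact parameter name is an assumption, not confirmed by this commit message:

# Illustrative request parameters for a disk offering on host-local storage.
params = {
    "command": "createDiskOffering",
    "name": "local-data-disk",
    "displaytext": "20 GB data disk on host-local storage",
    "disksize": "20",
    "storagetype": "local",   # assumed parameter: 'local' or 'shared'
}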