Check the existence of the 'forceencap' parameter before use.
Error seen:
```
Traceback (most recent call last):
File "/opt/cloud/bin/update_config.py", line 140, in <module>
process_file()
File "/opt/cloud/bin/update_config.py", line 54, in process_file
finish_config()
File "/opt/cloud/bin/update_config.py", line 44, in finish_config
returncode = configure.main(sys.argv)
File "/opt/cloud/bin/configure.py", line 1003, in main
vpns.process()
File "/opt/cloud/bin/configure.py", line 488, in process
self.configure_ipsec(self.dbag[vpn])
File "/opt/cloud/bin/configure.py", line 544, in configure_ipsec
file.addeq(" forceencaps=%s" % CsHelper.bool_to_yn(obj['encap']))
KeyError: 'encap'
```
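The fix boils down to not assuming the optional key is present. A minimal sketch of the guard, using a plain dict for the VPN entry and a stand-in for CsHelper.bool_to_yn (illustrative only, not the actual configure.py change):
```python
def bool_to_yn(value):
    # stand-in for CsHelper.bool_to_yn
    return "yes" if value else "no"

def forceencaps_line(obj):
    # Fall back to False when the 'encap' parameter was never passed,
    # instead of raising a KeyError as in the traceback above.
    return " forceencaps=%s" % bool_to_yn(obj.get('encap', False))

# An ipsec entry without the optional parameter no longer crashes:
print(forceencaps_line({'peer_gateway': '10.0.0.1'}))  # " forceencaps=no"
```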
* pr/1402:
Check the existence of 'forceencap' parameter before use
Signed-off-by: Will Stevens <williamstevens@gmail.com>
They might never appear; for example, when we have entries in
/etc/cloudstack/ips.json that haven't been plugged yet. Waiting
this long makes everything horribly slow (every VM, interface,
static route, etc. will hit this wait, for every device).
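A rough sketch of the idea, assuming a simple poll on /sys/class/net (the helper name, timeout and interval are illustrative, not the actual VR code):
```python
import os
import time

def wait_for_device(dev, timeout=2.0, interval=0.1):
    # Poll briefly for the network device and give up quickly, so an
    # unplugged entry (e.g. a leftover in /etc/cloudstack/ips.json) does
    # not stall every VM, interface and static route behind a long wait.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists("/sys/class/net/%s" % dev):
            return True
        time.sleep(interval)
    return False  # device was never plugged; carry on without it
```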
[4.7] Critical VPCVR issues fixed: CLOUDSTACK-9154; CLOUDSTACK-9187; and CLOUDSTACK-9188
This PR applies the same fixes as PR #1259, but against branch 4.7.
Please refer to PR #1259 for the tests results and all the comments already made there.
Issues fixed are:
* CLOUDSTACK-9154: rVPC doesn't recover from cleaning up of network garbage collector
* CLOUDSTACK-9187: rVPC routers in Master/Master due to a concurrency problem when writing keepalived.conf
* CLOUDSTACK-9188: NetworkGarbageCollector is not using gc.interval and gc.wait from settings
Those changes have been covered by 2 new tests added to ```smoke/test_vpc_redundant.py```:
* test_04_rvpc_network_garbage_collector_nics
* test_05_rvpc_multi_tiers
The test ```test_04_rvpc_network_garbage_collector_nics``` depends on the global settings network.gc.interval and network.gc.wait. If one wants the test to run quicker, please change those settings (the default is 600 seconds for each) and restart the Management Server before running the tests. I would suggest setting them to 60 seconds.
In addition, the NetworkGarbageCollector was redefining the settings mentioned above instead of reading their values through ConfigDao. As a result, the settings were not being applied properly and the test was waiting too long before checking the VPC routers.
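For reference, the waiting the test depends on amounts to something like this minimal sketch (the function name and the interval+wait sum are assumptions; only the 3x multiplier and the 600-second defaults come from the description above):
```python
import time

def wait_for_network_gc(gc_interval, gc_wait, multiplier=3):
    # Sleep a multiple of network.gc.interval + network.gc.wait (seconds)
    # so the NetworkGarbageCollector has certainly run before the state of
    # the redundant VPC routers is checked.
    time.sleep(multiplier * (gc_interval + gc_wait))

# With the 600-second defaults this is a one-hour wait, which is why
# lowering both settings to 60 seconds is suggested before running.
wait_for_network_gc(600, 600)
```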
* pr/1277:
CLOUDSTACK-9154 - Sets the pub interface down when all guest nets are gone
CLOUDSTACK-9187 - Makes code ready for something like ethXXXX, if we ever get that far
CLOUDSTACK-9188 - Reads network GC interval and wait from configDao
CLOUDSTACK-9187 - Fixes interface allocation to VRRP instances
CLOUDSTACK-9187 - Adds test to cover multiple nics and nic removal
CLOUDSTACK-9154 - Adds test to cover nics state after GC
CLOUDSTACK-9154 - Returns the guest interface that is marked as added
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-9222 Prevent cloud.log.1 filling up the disk
The delaycompress option results in more space usage than needed; since we use copytruncate, we don't need it.
* pr/1329:
CLOUDSTACK-9222 Prevent cloud.log.1 filling up the disk
Signed-off-by: Remi Bergsma <github@remi.nl>
[4.7] FIX Site2SiteVPN on redundant VPC
This PR:
- fixes the inability to setup more than one Site2Site VPN connection from a VPC
- fixes starting of Site2Site VPN on redundant VPC
- fixes Site2Site VPN state checking on redundant VPC (only the router in state MASTER is checked; a sketch follows the test results below)
- improves the vpc_vpn test to allow multiple hypervisors
- adds an integration test for Site2Site VPN on redundant VPC
Tested it on 4.7 single Xen server zone:
command:
```
nosetests --with-marvin --marvin-config=/data/shared/marvin/mct-zone1-xen1.cfg -a tags=advanced,required_hardware=true /tmp/test_vpc_vpn.py
```
results:
```
Test Site 2 Site VPN Across redundant VPCs ... === TestName: test_01_redundant_vpc_site2site_vpn | Status : SUCCESS ===
ok
Test Remote Access VPN in VPC ... === TestName: test_01_vpc_remote_access_vpn | Status : SUCCESS ===
ok
Test Site 2 Site VPN Across VPCs ... === TestName: test_01_vpc_site2site_vpn | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 3 tests in 1490.076s
OK
```
Also performed numerous manual inspections of the state of the VPN connections and of the connectivity between the VPCs.
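The MASTER-only state check mentioned above boils down to something like the following marvin-style sketch (with an assumed helper for the actual ipsec assertion; not the literal test code):
```python
from marvin.lib.base import Router  # marvin test library

def check_vpn_on_master_only(apiclient, vpc_id, check_ipsec_state):
    # On a redundant VPC only the MASTER router runs the ipsec tunnel, so
    # the BACKUP router must be skipped when asserting the S2S VPN state.
    routers = Router.list(apiclient, vpcid=vpc_id, listall=True)
    for router in routers:
        if router.redundantstate != 'MASTER':
            continue  # BACKUP router: no active tunnel expected
        check_ipsec_state(router)  # hypothetical callback doing the real check
```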
* pr/1276:
Fix unable to setup more than one Site2Site VPN Connection
FIX S2S VPN rVPC: Check only redundant routers in state MASTER
PEP8 of integration/smoke/test_vpc_vpn
Add S2S VPN test for Redundant VPC
Make integration/smoke/test_vpc_vpn hypervisor independent
FIX VPN: non-working ipsec commands
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-9181 Prevent syntax error in checkrouter.sh
Added quotes to prevent syntax errors in weird situations.
Error seen in mgt server:
```
2015-12-15 14:30:32,371 DEBUG [c.c.a.m.AgentManagerImpl] (RedundantRouterStatusMonitor-7:ctx-0dd8ef3e) Details from executing class com.cloud.agent.api.CheckRouterCommand: Status: UNKNOWN
/opt/cloud/bin/checkrouter.sh: line 28: [: =: unary operator expected
/opt/cloud/bin/checkrouter.sh: line 31: [: =: unary operator expected
```
Cause:
```
root@r-1191-VM:/opt/cloud/bin# ./checkrouter.sh
./checkrouter.sh: line 28: [: =: unary operator expected
./checkrouter.sh: line 31: [: =: unary operator expected
Status: UNKNOWN
```
Somehow a nic was missing.
After fix the script can handle this:
```
root@r-1191-VM:/opt/cloud/bin# ./checkrouter.sh
Status: UNKNOWN
```
The other states are also reported fine:
```
root@r-1191-VM:/opt/cloud/bin# ./checkrouter.sh
Status: MASTER
```
```
root@r-1192-VM:/opt/cloud/bin# ./checkrouter.sh
Status: BACKUP
```
While I was at it, I also removed the INTERFACES variable/constant, as it was only used once and hardcoded the second time. Now both places are hardcoded and easier to read.
* pr/1296:
make both check lines consistent
CLOUDSTACK-9181 Prevent syntax error in checkrouter.sh
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-9204 Do not error when staticroute is already gone
When deleting a static route fails because it isn't there any more (KeyError), it should succeed instead.
Error seen:
```
[INFO] Processing JSON file static_routes.json.1451560145
Traceback (most recent call last):
File "/opt/cloud/bin/update_config.py", line 140, in <module>
process_file()
File "/opt/cloud/bin/update_config.py", line 52, in process_file
qf.load(None)
File "/opt/cloud/bin/merge.py", line 258, in load
proc = updateDataBag(self)
File "/opt/cloud/bin/merge.py", line 91, in _init_
self.process()
File "/opt/cloud/bin/merge.py", line 131, in process
dbag = self.process_staticroutes(self.db.getDataBag())
File "/opt/cloud/bin/merge.py", line 179, in process_staticroutes
return cs_staticroutes.merge(dbag, self.qFile.data)
File "/opt/cloud/bin/cs_staticroutes.py", line 26, in merge
del dbag[key]
KeyError: u'192.168.0.3'
```
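The fix amounts to treating an already-removed route as success. A minimal sketch of the guard (not the actual cs_staticroutes.py code):
```python
def remove_route(dbag, key):
    # Deleting a route that is already gone should be a no-op, not a
    # KeyError: the desired end state (route absent) is already reached.
    dbag.pop(key, None)
    return dbag
```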
* pr/1298:
CLOUDSTACK-9204 Do not error when staticroute is already gone
Signed-off-by: Remi Bergsma <github@remi.nl>
- Refactors the set_backup, set_master and set_fault methods to use better variable names
- Increases the sleep in the test in order to wait for the routers to be ready; it is now 3 times the GC settings
CLOUDSTACK-9155 make sure logrotate is effective for cloud.log
Many processes on the VRs log to cloud.log. When logrotate kicks in, the file is rotated but the scripts still write to the old inode (cloud.log.1 after rotation). This quickly fills up the tiny log partition.
Using 'copytruncate' is a small tradeoff: there is a slight chance of missing a log entry, but in the old situation nothing ended up in cloud.log after rotation (except for whatever was (re)started), so I think this is the best solution until we properly rewrite the scripts to use either their own logging or syslog.
More details: https://issues.apache.org/jira/browse/CLOUDSTACK-9155
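To illustrate why copytruncate keeps cloud.log usable, here is a toy sketch of the rotation style (illustrative only, not the logrotate implementation):
```python
import shutil

def rotate_copytruncate(path, rotated):
    # Copy the live file aside, then truncate it in place. Processes that
    # keep the old file descriptor open continue writing to the same inode,
    # which is now the fresh, empty cloud.log.
    shutil.copy2(path, rotated)        # cloud.log -> cloud.log.1
    with open(path, 'r+') as f:
        f.truncate(0)

# A rename-based rotation (rename cloud.log to cloud.log.1 and recreate it)
# would leave those writers appending to cloud.log.1 forever, which is
# exactly the behaviour described above. The window between copy and
# truncate is the "slight chance of missing a log entry".
```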
* pr/1235:
CLOUDSTACK-9155 make sure logrotate is effective
Signed-off-by: Remi Bergsma <github@remi.nl>
Many processes on the VRs log to cloud.log. When logrotate
kicks in, the file is rotated but the scripts still write
to the old inode (cloud.log.1 after rotate). This quickly
fills up the tiny log partition.
Using 'copytruncate' is a tradeoff: there is a slight
chance of missing a log entry, but in the old situation
we were missing all of them after logrotate.