When the power state of a vm changes (outside cloudstack), the state of the
vm is not updated in the cloudstack db. The ping task was not checking the
resource (host) status by default, and the power state of the vms is returned
as part of the resource status. Fixed the issue by making sure the ping task
tries at least once to get the resource status.
Separate global config to enable/disable Storage Migration during normal deployment
Introduced a configuration parameter named enable.storage.migration
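Assuming the new parameter is exposed through CloudStack's standard
updateConfiguration API, it could be toggled with cloudmonkey, for example
(a sketch; some settings also need a management server restart to take effect):

    # disable storage migration during normal deployment
    cloudmonkey update configuration name=enable.storage.migration value=false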
Every time the RvR status is checked, you must wait some time for the RvR to
update its status. The polling thread updates the status only every 30
seconds by default.
1. getRouters() doesn't handle RestartNetwork with cleanup=true for a basic
zone, because the pod wouldn't be specified at that time.
2. A regression caused by the following fix: the variable "routers" was
overwritten with local values, so only the routers in one of the multiple
pods would be returned, and thus only one router would be started (see the
sketch below).
commit 6dd5c3fd42c70855d75156243dddc4933436baaf
Author: Rohit Yadav <bhaisaab@apache.org>
Date: Thu Oct 11 18:30:00 2012 +0530
CLOUDSTACK-70: Improve restart network behaviour for basic network
This closes #16
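The shape of that bug, sketched in shell for illustration (the actual code
is Java, and list_routers_in_pod is a hypothetical helper):

    # buggy: '=' overwrites, so only the last pod's routers survive the loop
    routers=""
    for pod in $pods; do
        routers="$(list_routers_in_pod "$pod")"
    done

    # fixed: append, so routers from every pod are returned
    routers=""
    for pod in $pods; do
        routers="$routers $(list_routers_in_pod "$pod")"
    done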
Pull request summary:
E-mail thread:
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201407.mbox/%3C7A6CF878-7A28-4D4A-BCD2-0C264F8C90B7%40schubergphilis.com%3E
This started out as wanting the systemvm build to take
systemvm/patches/debian/{debian,vpn} from the local machine/branch,
rather than downloading from the apache git master [1]. In working out
how on earth to get veewee to do that cleanly (hint: you can’t, hence
resorting to shar usage), I got quite frustrated with the image rebuild
times.
It so happens that veewee has a --skip-to-postinstall instruction which
is quite useful while debugging these scripts. Getting that to work
requires the post-install steps to be retryable/convergent. Of course,
our existing scripts weren’t set up for that, so I had to add a bunch
of tests for whether changes had already been applied, which implied a
pretty significant refactor (see the sketch below).
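As an illustration, a convergent post-install step checks whether its change
is already in place before applying it; a minimal shell sketch (the file and
setting are hypothetical):

    # only append the setting if it is not already present, so that
    # re-running the script converges instead of duplicating the line
    if ! grep -q '^net.ipv4.ip_forward=1$' /etc/sysctl.conf; then
        echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
    fi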
Summarizing this kind of thing is always hard... it’s many little
things... the interesting stuff is at the end/bottom, in particular
the two main improvements:
schubergphilis@142d087
When working on the systemvm in isolation, or using vagrant or
similar tools, it can be useful to inject a custom SSH key before
merging a management server systemvm.iso into it. This option
allows that. It should have no effect on management-server-managed
vms, which always get their SSH keys injected.
schubergphilis@e2240ea
The current build downloads its scripts from master by fetching a
cloudstack tarball. Besides being an unneeded load on the apache
git server, this is a problem when working on a branch and
wanting to inject a different set of scripts. It also makes it
pretty likely that the injected copy of the scripts will not match
what a production release wants, so there is very little chance of
not needing to overwrite the scripts.
Ideally we would just rsync over some files. However, veewee does
not provide an option to do that. In order to keep a 'cleanly
veewee-only' build possible, and work with any recent veewee
version, in this change we resort to using shar
(http://en.wikipedia.org/wiki/Shar) to produce an archive which
can execute as a script, which we feed to veewee to execute.
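For reference, shar packs files into a shell script that extracts them when
run; a minimal sketch (the file names are illustrative):

    # pack the patch scripts into a self-executing archive
    shar config.sh vpn.sh > scripts.shar
    # unpacking is just running the archive through a shell
    sh scripts.shar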
To avoid having to do this cleanup twice, I also ended up
merging the systemvm and systemvm64 template definitions, factoring out
their small differences by inspecting the os architecture.
schubergphilis@f570b39
schubergphilis@50e9121
Everything else… well, it pretty much falls into two categories:
* general code cleanup without functional changes
* general code defensiveness to survive various jenkins build scenarios
All in all it should help with ongoing maintenance, I think.
Most of these commits are a while old by now, but I wanted to wait to
send this upstream until we had sufficiently tested the systemvms
built with this changed approach locally.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
It is simpler to expect that rvm setup is done outside of this build.
The buildacloud.org jenkins has rvm installed/enabled by default, so
the build does not invoke rvm itself.
Running --export creates the .ovf and the .vmdk files referenced
from that .ovf in one go. Guessing/predicting the names of the .vmdk
files is not fool-proof.
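Presumably that is VirtualBox's own export; a minimal sketch (the VM name and
output path are illustrative):

    # one export produces the .ovf plus the .vmdk disks it references
    VBoxManage export systemvm64template --output dist/systemvmtemplate.ovf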
The backticks in the Vagrantfile template were getting evaluated by bash.
This caused some harmless but confusing error messages to appear when
running the build. The easy fix is to remove them.
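For example, when the template is written out through an unquoted heredoc,
bash command-substitutes any backticks before the file is written (a sketch,
not the actual build code):

    # with an unquoted delimiter, `hostname` below runs as a command
    cat > Vagrantfile <<EOF
    # box built on `hostname`
    EOF

    # quoting the delimiter would also keep the backticks literal
    cat > Vagrantfile <<'EOF'
    # box built on `hostname`
    EOF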
* bundle install needs to run before running the vbox cleaning scripts,
so move the prepare step before the clean step
* feature branches have / in their name, which is a bad character to
put into filenames (see the sketch below)
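A typical way to sanitize such a branch name before using it in a filename
(the variable names are illustrative):

    # replace every / in the branch name so it is safe in a filename
    branch="feature/fix-build"
    safe_branch="${branch//\//-}"   # -> feature-fix-build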
Veewee supports exporting vagrant boxes out of virtualbox, out of the box.
However, it assumes that it can export a disk if the shutdown of the vm that
is using that disk has succeeded. This assumption is not strictly always true
(see previous commit). So, we replicate the bit of logic in veewee for making
vagrant boxes.
This has the added side benefit of creating an .ovf export only once, rather
than once for vmware and then again for vagrant.
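A vagrant box for the virtualbox provider is essentially a tar archive of the
exported .ovf/.vmdk plus a small metadata.json; a rough sketch of the
replicated packaging step (file names are illustrative):

    # minimal metadata vagrant needs to pick the provider
    echo '{"provider": "virtualbox"}' > metadata.json
    # reuse the .ovf/.vmdk from the single export done earlier
    tar czf systemvm.box metadata.json box.ovf box-disk1.vmdk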
Having experimented with many edge cases of running multiple build.sh
commands in parallel / against busy virtualbox setups, I found the only
really reliable way to produce consistent images is to not run these
commands in parallel and to not run them while the machine is doing many
other things.
If virtualbox or the machine that hosts it is very busy, and/or it has
a lot of disks it knows/knew about, and/or it's Tuesday, behavior may
be a bit different.
Accepting this reality, this commit adds some scripts that try really
hard to set virtualbox back to a known/healthy state before building.
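The general shape of such a reset, sketched with standard VBoxManage commands
(the name filter is illustrative, and the actual scripts likely do more):

    # power off any leftover build vms, then unregister them and delete their disks
    for vm in $(VBoxManage list runningvms | awk -F'"' '/systemvm/ {print $2}'); do
        VBoxManage controlvm "$vm" poweroff
    done
    for vm in $(VBoxManage list vms | awk -F'"' '/systemvm/ {print $2}'); do
        VBoxManage unregistervm "$vm" --delete
    done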