Modify the spec file to package the agent files and the scripts
Some changes to the poms to put the Java dependencies in the right place.
Move the agent script to the dedicated OS dir in packaging.
Detail: a new script called cloud-ssh replaces the long
'ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.0.12';
users can now just run 'cloud-ssh 169.254.0.12'. Also adds it to the deb and
rpm builds.
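A minimal sketch of what such a wrapper could look like (the key path and port
come from the old invocation above; everything else is an assumption):

    #!/bin/bash
    # cloud-ssh: wrapper around the long ssh invocation above (sketch)
    # usage: cloud-ssh <link-local-ip> [extra ssh arguments...]
    if [ -z "$1" ]; then
        echo "usage: cloud-ssh <ip> [ssh args...]" >&2
        exit 1
    fi
    host=$1
    shift
    exec ssh -i /root/.ssh/id_rsa.cloud -p 3922 "root@$host" "$@"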
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
The agent does not need Jetty for anything, so remove the dependency.
Console-proxy should only depend on agent; agent will pull in the other
dependency.
The patches module does not require any dependencies.
* send StartupAnswer right after StartupCommand is received
* if post-processing goes wrong, send a ReadyCommand with an error message to the agent; the agent will then exit
The management server also depends on a couple of these scripts, so renaming
to cloud-scripts makes more sense than installing cloud-agent-scripts.
In the future we might want to split this up into two packages.
The example configuration file said 'workers' was the directive, but the code
said 'threads'.
Now we accept both to prevent configuration errors, but the example config
keeps 'workers'.
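For example, either key would now be accepted in the agent's properties file
(the file name and value here are assumptions):

    # both keys now map to the same setting; the shipped example keeps 'workers'
    workers=5
    #threads=5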
This is a very dangerous file. We do not package it, and it would be very
dangerous to do so.
If this file were present on a hypervisor, it would cause all instances to be
stopped on a libvirt restart.
There is no need to stop all instances when libvirt is stopped (or restarted).
This patch adds RBD (RADOS Block Device) support for primary storage in combination with KVM.
To get this patch working you need:
- libvirt-java 0.4.8
- libvirt with RBD storage pool support (>0.9.13)
- Qemu with RBD support (>0.14)
The primary storage does not support all the functions of CloudStack yet; for
example, snapshotting is disabled because backing up an RBD snapshot is not
possible in the way CloudStack wants to do it.
Creating templates from RBD volumes works well; creating a VM from a template,
however, is still hit-and-miss.
NFS primary storage is also still required: you are not able to run your
system VMs from RBD, they will need to run on NFS.
Other than these points you can run instances with RBD-backed disks.
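For reference, such an RBD pool can be defined by hand through libvirt; a
minimal sketch (the pool name, monitor host, and secret UUID are placeholders,
and cephx auth is assumed):

    cat > /tmp/rbd-pool.xml <<'EOF'
    <pool type='rbd'>
      <name>cloudstack-rbd</name>
      <source>
        <name>rbd</name>
        <host name='mon1.example.com' port='6789'/>
        <auth username='admin' type='ceph'>
          <secret uuid='00000000-0000-0000-0000-000000000000'/>
        </auth>
      </source>
    </pool>
    EOF
    virsh pool-define /tmp/rbd-pool.xml
    virsh pool-start cloudstack-rbd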
Fix the long time it takes to fence a host if there are multiple storage pools
in a cluster.
The issue is as follows:
1. When CloudStack detects that a host is not responding to ping
requests it'll send a fence command for this host to another host in the
cluster.
2. The agent takes a long time to respond to this check if the storage
is fenced. This is because the agent checks whether the first host is writing
to its heartbeat file on all pools in the cluster, and it does this
sequentially on every storage pool.
Make a fix to get rid of the sleep/wait during HA. The behavior is now
similar to XenServer.
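A minimal sketch of the kind of per-pool check involved, assuming heartbeat
files named hb-<host-ip> under each pool's mount point (the path layout and
the 60-second freshness window are assumptions):

    #!/bin/bash
    # check whether a host's heartbeat file on one pool was touched recently
    pool_mount=$1   # e.g. the pool's NFS mount point
    host_ip=$2
    hb_file="$pool_mount/KVMHA/hb-$host_ip"
    now=$(date +%s)
    last=$(stat -c %Y "$hb_file" 2>/dev/null || echo 0)
    if [ $((now - last)) -lt 60 ]; then
        echo "alive"    # heartbeat is fresh, host still writes to this pool
    else
        echo "dead"     # stale or missing heartbeat on this pool
    fi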
RB: https://reviews.apache.org/r/6133/
Send-by: devdeep.singh@citrix.com
We also exit earlier and don't display that we are even trying to start:
when we detect the agent is already running, we exit right away with a message.
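A minimal sketch of such an early-exit check, assuming a pidfile at
/var/run/cloud-agent.pid (the path and messages are assumptions):

    # bail out before printing the "Starting..." banner
    pidfile=/var/run/cloud-agent.pid
    if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo "cloud-agent is already running (pid $(cat "$pidfile"))"
        exit 0
    fi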
With LSB there is no need to have different init scripts for different
distributions.
This init script should be fully LSB compliant and should work on all Linux
platforms we support.
RHEL, (Open)SUSE, Debian and Ubuntu all support at least LSB 3.1.
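The distribution-independent part is the LSB header; a sketch of what it could
look like (the service name, dependencies and runlevels here are assumptions):

    ### BEGIN INIT INFO
    # Provides:          cloud-agent
    # Required-Start:    $network $local_fs $syslog
    # Required-Stop:     $network $local_fs $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: CloudStack agent
    # Description:       Starts and stops the CloudStack KVM agent
    ### END INIT INFO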
[Problem]
CloudStack uses a significant amount of third-party software. As part of the move to the ASF, there is a certain set of licenses that are compatible with ASF policy. We need to make sure that every dependency we have is in that set. If it's not, we have to remove it.
[Solution]
First set: Removing JnetPcap.
[Reviewers]
Edison Su, David Nalley
[Testing]
[Test Cases]
Executed 'ant build-all' successfully after removing JnetPcap and its respective dependencies.
[Platform]
Fedora release
Signed-off-by: Pradeep <pradeep.soundararajan@citrix.com>
The init script was waiting for cloudbr0 to come up, but it is not mandatory
that this bridge be available.
We now wait for at least one bridge to be up before starting the cloud agent.
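A sketch of such a wait loop, assuming brctl is available (the 60-second
timeout is an assumption):

    # wait until at least one bridge exists before starting the agent
    tries=0
    while [ "$(brctl show | sed 1d | wc -l)" -eq 0 ]; do
        tries=$((tries + 1))
        if [ $tries -ge 60 ]; then
            echo "no bridge came up, starting anyway" >&2
            break
        fi
        sleep 1
    done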
The default_network_rules_systemvm method in security_group.py created the
appropriate rules for just one bridge.
This leads to traffic not being forwarded to the virtual machine when the two
system VMs (console & storage) have different bridges in basic networking.
This patch makes sure rules are generated for all target devices based on
their source device/bridge.
It excludes the LinkLocalBridge, however, since no filtering is needed on that
bridge.
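The effect is roughly the following, sketched in shell (the real work happens
in security_group.py; the bridge names and the ebtables rule are illustrative
only, with cloud0 standing in for the link-local bridge):

    # install rules on every bridge the system VM is attached to,
    # skipping the link-local bridge where no filtering is needed
    bridges="cloudbr0 cloudbr1 cloud0"   # example list of attached bridges
    for dev in $bridges; do
        if [ "$dev" = "cloud0" ]; then
            continue
        fi
        ebtables -t nat -A PREROUTING -i "$dev" -j ACCEPT
    done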