For an LB device in inline mode, the IP deployer (the owner of the public IP) is the
firewall in front of it, not the LB device itself. So check whether the device is
inline and, if it is, return the firewall as the IP deployer.
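A minimal sketch of that check (the type and method names here are illustrative, not the actual CloudStack element interfaces):

    // Sketch only: illustrative types, not the real element interfaces.
    interface IpDeployer { }

    class FirewallElement implements IpDeployer { }

    class LoadBalancerElement implements IpDeployer {
        private final boolean inline;
        private final FirewallElement firewallInFront;

        LoadBalancerElement(boolean inline, FirewallElement firewallInFront) {
            this.inline = inline;
            this.firewallInFront = firewallInFront;
        }

        // In inline mode the firewall in front owns the public IP, so it
        // must act as the IP deployer instead of the LB itself.
        IpDeployer getIpDeployer() {
            return inline ? firewallInFront : this;
        }
    }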
Use SRX firewall filters as the SRX firewall. The old security policy mechanism
cannot be used, since it cannot work on an IP basis. Using firewall filters enables
the SRX to control traffic for the F5 behind it.
Change the access modifier of canHandle so it's easier to unit test.
Make a note that answers can be null if the host is down. There should
be a way to deal with this, but for now an NPE is an adequate indication
that something is wrong.
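Roughly, with stand-in types (the real agent API differs):

    // Sketch only: stand-in types for the real agent interfaces.
    class AnswerNullability {
        interface Agent { Answer[] send(long hostId, Command[] cmds); }
        static class Answer { }
        static class Command { }

        Answer firstAnswer(Agent agent, long hostId, Command[] cmds) {
            Answer[] answers = agent.send(hostId, cmds); // null if host is down
            // There should be a real error path here; for now the NPE thrown
            // below is an adequate indication that something is wrong.
            return answers[0];
        }
    }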
Multiple fixes:
1. changes to the mvn configuration
a. include the simulator in client.war
b. activate simulator by profile
2. templates for simulator
3. developer prefill for simulator
a. Use deploydb-simulator to set up the simulator db
4. Inherit components-simulator.xml from components.xml
5. ListVolumesCommand was missing from MockStorageManager (see the sketch after this list)
6. Include simulator properties into utils/db.properties
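For item 5, a rough sketch of the kind of handler that was missing (stub types; the real simulator classes differ in detail):

    // Sketch with stub types standing in for the simulator classes.
    class MockStorageManagerSketch {
        static class Command { }
        static class ListVolumesCommand extends Command {
            String getStoragePoolUuid() { return "pool-uuid"; }
        }
        static class Answer {
            static Answer unsupported(Command cmd) { return new Answer(); }
        }
        static class ListVolumesAnswer extends Answer {
            ListVolumesAnswer(ListVolumesCommand cmd, java.util.List<String> vols) { }
        }

        java.util.List<String> volumesOnPool(String poolUuid) {
            return java.util.Collections.emptyList(); // simulator db lookup in reality
        }

        Answer handle(Command cmd) {
            if (cmd instanceof ListVolumesCommand) {
                ListVolumesCommand list = (ListVolumesCommand) cmd;
                return new ListVolumesAnswer(list, volumesOnPool(list.getStoragePoolUuid()));
            }
            return Answer.unsupported(cmd); // what ListVolumesCommand hit before
        }
    }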
TODO:
Secondary storage VMs don't come up because ComponentLocator doesn't
retain a unique set of adapters by name. Fix this in a subsequent
checkin.
BootArgs carry the router priority for master and backup, and the simulator will
reuse the priority to decide RvR status. This deprecates the odd/even logic.
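A sketch of the idea, assuming the priority arrives as a router_pr=<n> boot-arg token and that a higher value means master (both assumptions):

    // Sketch: derive RvR status from the boot-arg priority instead of the
    // old odd/even rule. Key name and threshold are assumptions.
    static boolean isMaster(String bootArgs) {
        for (String token : bootArgs.trim().split("\\s+")) {
            if (token.startsWith("router_pr=")) {
                int priority = Integer.parseInt(token.substring("router_pr=".length()));
                return priority >= 100; // backup routers boot with a lower priority
            }
        }
        return false; // no priority present: not a redundant router
    }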
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Detail:
To induce latency for a command you have to use an API call like so
http://localhost:8096/client/api?command=configureSimulator&zoneid=1&podid=1&name=CheckRouterCommand&value=wait:80|timeout:0
(This is a hidden API command just for the simulator)
You will see the configuration reflected in the mockconfiguration table of the
simulator db. You can introduce the latency at runtime without restarting the
management server.
mysql> select * from mockconfiguration;
+----+----------------+--------+------------+---------+--------------------+-------------------+
| id | data_center_id | pod_id | cluster_id | host_id | name               | values            |
+----+----------------+--------+------------+---------+--------------------+-------------------+
|  1 |              1 |      1 | NULL       | NULL    | CheckRouterCommand | wait:80|timeout:0 |
+----+----------------+--------+------------+---------+--------------------+-------------------+
1 row in set (0.00 sec)
By providing the optional zoneid, podid, clusterid, or hostid you can induce the
latency at various levels. The delay happens before the command is processed
and, post-execution, before the Command's Answer is returned to the management
server.
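A sketch of how such a configured value could be applied before dispatching the command (parsing only; the scope lookup against mockconfiguration is assumed to have happened already, and the units of wait are assumed to be seconds):

    // Sketch: apply a "wait:<seconds>|timeout:<seconds>" value before the
    // mock resource processes the command.
    static void applyLatency(String values) throws InterruptedException {
        int waitSeconds = 0;
        for (String part : values.split("\\|")) {
            String[] kv = part.split(":");
            if ("wait".equals(kv[0])) {
                waitSeconds = Integer.parseInt(kv[1]);
            }
        }
        if (waitSeconds > 0) {
            Thread.sleep(waitSeconds * 1000L); // delay before processing
        }
    }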
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
The simulator, just like any hypervisor, should be a plugin.
Resurrecting it to aid API refactoring tests. WIP
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Detail: Instead of using LibvirtStorageAdaptor for everything, you can create
your own storage adaptor and use it. We select the storage adaptor based on
storage pool type, so we needed to adjust LibvirtComputingResource to pass the
pool type to everything in KVMStoragePoolManager. This in turn required passing
the necessary info to LibvirtComputingResource as well, so a few agent Commands
were modified.
Note this patch in and of itself shouldn't change any existing behavior; it just
allows new storage adaptors to be selected based on storage pool type.
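In outline, the selection could look like this (names illustrative; the real KVMStoragePoolManager differs):

    // Sketch: a registry of adaptors keyed by pool type, falling back to
    // the libvirt adaptor.
    import java.util.HashMap;
    import java.util.Map;

    class StoragePoolManagerSketch {
        enum StoragePoolType { NetworkFilesystem, CLVM, RBD }

        interface StorageAdaptor { /* createStoragePool, getPhysicalDisk, ... */ }

        private final Map<StoragePoolType, StorageAdaptor> adaptors = new HashMap<>();
        private final StorageAdaptor defaultAdaptor;

        StoragePoolManagerSketch(StorageAdaptor libvirtAdaptor) {
            this.defaultAdaptor = libvirtAdaptor;
        }

        void register(StoragePoolType type, StorageAdaptor adaptor) {
            adaptors.put(type, adaptor);
        }

        // Callers now pass the pool type through from LibvirtComputingResource.
        StorageAdaptor getAdaptor(StoragePoolType type) {
            StorageAdaptor adaptor = adaptors.get(type);
            return adaptor != null ? adaptor : defaultAdaptor;
        }
    }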
Reviewed-by: Edison Su
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1355769696 -0700
These unit tests do not depend on the ComponentLocator; instead they rely
entirely on mock objects. This ensures that they can be run standalone
without any requirements on the environment.
Includes some fixes to NiciraNvpGuestNetworkGuru and GuestNetworkGuru
Note to self: Surefire actually runs with assertions enabled, whereas JUnit
inside Eclipse doesn't. So Surefire will mark tests as failed when an
assertion is triggered. Take care when mocking stuff.
Added some unit tests for the NiciraNvpApi. These tests mainly validate
the logic of the execute methods, which are the main thing in this
class. Other methods are basically wrappers around these functions.
Changed NiciraNvpApi to have a factory method for obtaining the
HttpMethod. This makes it easier to mock it.
Changed the execute methods in NiciraNvpApi to protected so the unit tests
have access.
Fixed a bug in NiciraNvpApi where releaseConnection was not called in
some cases.
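Taken together, the changed pattern looks roughly like this (Commons HttpClient 3.x style to match NiciraNvpApi; the method names are illustrative):

    // Sketch: protected factory so a test subclass can return a mock
    // HttpMethod, protected execute method so tests can call it, and
    // releaseConnection guaranteed via finally.
    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.HttpMethod;
    import org.apache.commons.httpclient.methods.GetMethod;

    public class NvpApiSketch {
        private final HttpClient client = new HttpClient();

        // Factory method: override in a unit test to inject a mock.
        protected HttpMethod createMethod(String uri) {
            return new GetMethod(uri);
        }

        protected String executeRetrieveObject(String uri) throws Exception {
            HttpMethod method = createMethod(uri);
            try {
                client.executeMethod(method);
                return method.getResponseBodyAsString();
            } finally {
                method.releaseConnection(); // previously skipped on some paths
            }
        }
    }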
Nicira NVP can't handle a range of ports when implementing port
forwarding, so return an error message when a rule is being implemented
that uses a port range.
Includes a unit test to verify this behaviour.
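The guard amounts to something like this (the rule accessors are illustrative):

    // Sketch: reject port ranges up front instead of creating a wrong rule.
    class PortRangeGuard {
        interface PortForwardingRule {
            int getSourcePortStart();
            int getSourcePortEnd();
        }

        void validate(PortForwardingRule rule) {
            if (rule.getSourcePortStart() != rule.getSourcePortEnd()) {
                throw new IllegalArgumentException(
                        "Nicira NVP does not support port ranges for port forwarding");
            }
        }
    }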
Rewrote the handling for static NAT and port forwarding; it should make
more sense now, and the complex functions are split into smaller units.
Fixed a small bug in Match.
Added an equals function to NatRule that ignores the uuid field.
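A sketch of that equals (field names illustrative): equality is defined on the rule's semantic fields, so a rule fetched back from NVP, which carries a uuid, still compares equal to the locally built one.

    import java.util.Objects;

    class NatRuleSketch {
        String uuid;          // assigned by NVP, deliberately ignored below
        String matchIp;
        int matchPort;
        String translatedIp;

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof NatRuleSketch)) return false;
            NatRuleSketch other = (NatRuleSketch) o;
            return matchPort == other.matchPort
                    && Objects.equals(matchIp, other.matchIp)
                    && Objects.equals(translatedIp, other.translatedIp);
        }

        @Override
        public int hashCode() {
            return Objects.hash(matchIp, matchPort, translatedIp); // uuid excluded
        }
    }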
Changing the F5, NetScaler, and SRX network elements to handle both 'isolated networks'
and 'shared networks' in the advanced zone
Bug ID: CLOUDSTACK-312 enable L4-L7 network services in the shared network in the advanced zone
Detail: If the source image is qcow2 and we want a qcow2 image, then doing a
convert strips off compression and any snapshots the user had in that image. If
a backing file exists, we stick with convert so we can pull in both the backing
file and the COW image; otherwise we just cp the qcow2 file. This is also faster.
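The decision boils down to something like this (command construction is illustrative, not the actual LibvirtStorageAdaptor code):

    // Sketch: convert only when a backing file must be flattened in;
    // otherwise cp preserves compression and internal snapshots, and is faster.
    static String[] copyTemplateCommand(String src, String dst, boolean hasBackingFile) {
        if (hasBackingFile) {
            return new String[] { "qemu-img", "convert", "-f", "qcow2",
                                  "-O", "qcow2", src, dst };
        }
        return new String[] { "cp", src, dst };
    }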
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1354755241 -0700
Changed the creation of the NiciraNvpApi to a factory method that can be
overridden to return a mock object.
Set up two tests for the configure function of the NiciraNvpResource to
exercise this factory method.
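Such a test could look roughly like this, assuming the factory is named createNiciraNvpApi and using Mockito for the mock (both assumptions):

    import static org.mockito.Mockito.mock;
    import org.junit.Before;
    import org.junit.Test;

    public class NiciraNvpResourceSketchTest {
        NiciraNvpApi api = mock(NiciraNvpApi.class);
        NiciraNvpResource resource;

        @Before
        public void setUp() {
            resource = new NiciraNvpResource() {
                @Override
                protected NiciraNvpApi createNiciraNvpApi() { // assumed factory name
                    return api; // inject the mock instead of a live controller
                }
            };
        }

        @Test
        public void configureTalksToMockApi() throws Exception {
            // resource.configure(...) now hits the mock api, so the test can
            // verify behaviour without an NVP controller.
        }
    }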
Detail: This patch deletes any patch disk found when deleting the root volume
for a system VM.
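A sketch of the cleanup, assuming the patch disk lives next to the root volume and is named '<volume>-patchdisk' (naming and location are assumptions):

    // Sketch only: naming and location of the patch disk are assumptions.
    static void deletePatchDiskIfPresent(java.io.File rootVolume) {
        java.io.File patchDisk = new java.io.File(
                rootVolume.getParent(), rootVolume.getName() + "-patchdisk");
        if (patchDisk.exists() && !patchDisk.delete()) {
            System.err.println("Failed to delete patch disk " + patchDisk);
        }
    }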
BUG-ID: CLOUDSTACK-566
Bugfix-for: 4.0.1
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1354222335 -0700
Detail: This patch fixed an issue with hosts trying to stop system VMs that were
already not running, which deleted the patch disk of the system VM running on
another host. It was applied to 4.0 but not master.
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1354222160 -0700
Detail: Because of the way most other primary storage types work with CloudStack
(i.e. backing stores), CLVM actually copies the template to a local logical
volume on primary storage, then uses that. This causes all of your primary
storage to be littered with a copy of every template used. Since we're not
using these, dump the template directly to the newly created logical volume.
This is faster as well, since the template is sparse; we're not creating a fat
template on primary storage and then copying that to a logical volume when we
deploy from a template.
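The direct dump amounts to expanding the template straight onto the device node of the new LV (illustrative command construction):

    // Sketch: no intermediate "fat" template copy on primary storage; the
    // sparse qcow2 template is expanded directly onto the logical volume.
    static String[] dumpTemplateToLv(String templatePath, String lvDevicePath) {
        return new String[] { "qemu-img", "convert", "-f", "qcow2",
                              "-O", "raw", templatePath, lvDevicePath };
    }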
BUG-ID: CLOUDSTACK-508
Bugfix-for: 4.1
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1353221260 -0700