Detail: Because of the way most other primary storage types work with
CloudStack (i.e. via backing stores), CLVM actually copies the template
to a local logical volume on primary storage, then uses that. This
litters all of your primary storage with a copy of every template ever
used. Since we aren't using backing stores, dump the template directly
to the newly created logical volume instead. This is faster as well:
because the template is sparse, we aren't creating a fat template on
primary storage and then copying that to a logical volume on every
deploy from template.
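A minimal sketch of the direct dump, assuming qemu-img is on the host's
PATH; the paths and class name are illustrative, not the actual agent
code:

    import java.io.IOException;

    public class ClvmTemplateDump {
        // Write the sparse template straight onto the raw block device
        // backing the new logical volume, skipping the intermediate
        // template copy on primary storage.
        public static void dump(String templatePath, String logicalVolume)
                throws IOException, InterruptedException {
            Process p = new ProcessBuilder(
                    "qemu-img", "convert", "-O", "raw",
                    templatePath, logicalVolume)
                    .inheritIO()
                    .start();
            if (p.waitFor() != 0) {
                throw new IOException("qemu-img convert failed: " + logicalVolume);
            }
        }

        public static void main(String[] args) throws Exception {
            dump("/mnt/secondary/template.qcow2", "/dev/vg_primary/lv_i-2-10");
        }
    }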
BUG-ID: CLOUDSTACK-508
Bugfix-for: 4.1
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1353221260 -0700
Detail: In com.cloud.hypervisor.kvm.resource.BridgeVifDriver.java, two
if blocks should have evaluated to true when trafficLabel was null, but
instead they threw a NullPointerException.
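The failing pattern, reconstructed from the description (names are
illustrative, not the exact BridgeVifDriver code):

    public class TrafficLabelCheck {
        // Before: the label was dereferenced before the null check,
        // so a null trafficLabel threw a NullPointerException instead
        // of taking the intended branch, e.g.:
        //   if (trafficLabel.equals("") || trafficLabel == null) { ... }

        // After: test for null first, short-circuiting before any
        // dereference can happen.
        public static boolean useDefaultBridge(String trafficLabel) {
            return trafficLabel == null || trafficLabel.isEmpty();
        }
    }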
BUG-ID: NONE
Bugfix-for: 4.0
Reviewed-by: Marcus Sorensen
Reported-by: Dave Cahill
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1352307750 -0700
The authenticators now have an encode function that CloudStack uses to encode the user-supplied password before storing it in the database. This makes it easier to add other authenticators with other hashing algorithms. It requires a two-step approach to creating the admin account at first start, as the authenticators are only present in the management-server component locator.
The SHA256 salted authenticator makes use of this new system and adds a hashing algorithm based on SHA256 with a salt. This type of hash is far less susceptible to rainbow table attacks.
To make use of these new features, the user's password is sent over the wire exactly as typed; it is transformed into a hash on the server and compared with the stored password. This means the hash no longer goes over the wire.
The default authenticator in components.xml is still set to md5 for backwards compatibility. For new installations the sha256 authenticator can be enabled.
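A minimal sketch of the salted encode step, assuming the authenticator
exposes a single encode(password) hook; the salt length and storage
format here are illustrative, not the exact CloudStack implementation:

    import java.io.UnsupportedEncodingException;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.security.SecureRandom;

    public class Sha256SaltedEncoder {
        // Encode the user-supplied plaintext into "salt:hash". The salt
        // makes precomputed rainbow tables useless, since every user's
        // hash is derived from a different prefix.
        public static String encode(String password)
                throws NoSuchAlgorithmException, UnsupportedEncodingException {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);

            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);
            byte[] digest = md.digest(password.getBytes("UTF-8"));

            // Store the salt next to the hash so the server can
            // re-derive and compare at login time.
            return toHex(salt) + ":" + toHex(digest);
        }

        private static String toHex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }
    }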
Detail: There was a regression in functionality introduced by
915babd970a9b4f209deceb3c4973b7d1c9c0c12 where the public
bridge could not also be the private bridge, which had several
additional consequences. This patch reverts that behavior while
keeping the functionality enhancements introduced by the same
commit.
BUG-ID: NONE
Reviewed-by: Dave Cahill
Reported-by: Dave Cahill via cloudstack-dev
Signed-off-by: Marcus Sorensen <shadowsor@gmail.com> 1351574006 -0600
VifDriver.unplug must be called in MigrateCommand, which hooks VM
migration on the source host, because plug is called in
PrepareForMigration on the destination host. That unplug call is
missing from the current LibvirtComputingResource.
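A sketch of where the missing call belongs; VifDriver here is a
stand-in interface, not the full CloudStack type:

    import java.util.List;

    public class MigrateCleanupSketch {
        interface VifDriver {
            void unplug(String nicDevice);
        }

        // After a successful migration the source host must unplug each
        // vif itself; the destination host plugged its own copies in
        // PrepareForMigration.
        static void cleanupSourceVifs(VifDriver driver, List<String> nics) {
            for (String nic : nics) {
                driver.unplug(nic);
            }
        }
    }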
Signed-off-by: Edison Su <sudison@gmail.com>
On a KVM computing host, VifDriver.unplug always fails (throws
LibvirtException) and network cleanup is never called. This is because
the code first undefines the computing domain and then tries to query
the destroyed machine's definition to fetch NIC information. IMHO, the
KVM plugin code swallows LibvirtException too readily.
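A minimal sketch of the corrected ordering, assuming the libvirt Java
bindings; the point is simply that the domain XML must be read before
the domain is undefined:

    import org.libvirt.Connect;
    import org.libvirt.Domain;
    import org.libvirt.LibvirtException;

    public class StopOrderingSketch {
        // Fetch NIC information while the definition still exists,
        // and only then destroy/undefine the domain.
        public static String stopAndGetXml(Connect conn, String vmName)
                throws LibvirtException {
            Domain dom = conn.domainLookupByName(vmName);
            String xml = dom.getXMLDesc(0); // <interface> elements live here
            dom.destroy();
            dom.undefine();
            return xml; // parse NICs from this for network cleanup
        }
    }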
Signed-off-by: Edison Su <sudison@gmail.com>
The vmware modules should be listed as provided so they are never
packaged. However, this also means that you have to put them in the
web-inf/lib directory by hand.
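For illustration, a provided-scope dependency entry looks like this
(the coordinates and version property are placeholders, not the actual
vmware artifact names):

    <!-- groupId, artifactId and the version property are placeholders -->
    <dependency>
      <groupId>com.vmware</groupId>
      <artifactId>vim25</artifactId>
      <version>${cs.vmware.api.version}</version>
      <scope>provided</scope>
    </dependency>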
Set the version of the API in the central pom for easy reference.
Add wsdl4j as a runtime requirement. It is actually required by the
vmware implementation, but it is easier to list it as a requirement
for the component here since vmware is not in any maven repo.
Put the dependency on vim back in the dependencies. It is not required
for compile, but is required at runtime by apputils.
vmware-lib-jaxrpc is now provided by axis-jaxrpc-1.4.jar; the former
is identical to the latter bit for bit, the only difference being the
file name.
- Fix dependency in vmware-base's pom.xml
- Fix dependency in hypervisor-plugin-vmware's pom.xml
- Fix install-non-oss.sh by reverting commit:
2e6ddc6c36f4ce79e67ad223647071bccfc41c52.
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
This commit merges the nicira-l3support branch into master, effectively
adding Nicira NVP L3 support to master. With this modification the
NiciraNVP provider can support the following services: Connectivity,
SourceNat, StaticNat and PortForwarding.
Testing done:
Create, Delete network offerings with Nicira Element
Use the GUI to add, modify, remove Nicira Element and Provider
Provision, deprovision SourceNat networks
Provision, deprovision Portforwarding and StaticNat rules
Tested with Nicira NVP release 2.1.0, 2.2.0 and 2.2.1 (2.2.x recommended)
Since only the cephx user (e.g. 'admin') was passed, we couldn't define
two RBD storage pools using the same cephx user, even if they were
running on different Ceph clusters. By adding the monitor hostname and
pool name to the secret's usage (which we don't even use otherwise) it
becomes unique.
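A minimal sketch of the idea, assuming the libvirt Java bindings; the
host, pool and usage format are illustrative, not the exact CloudStack
code:

    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;
    import org.libvirt.Secret;

    public class RbdSecretSketch {
        // Fold the monitor host and pool name into the secret's usage
        // name so two pools with the same cephx user on different Ceph
        // clusters no longer collide on plain "admin".
        public static Secret defineSecret(Connect conn, String cephxUser,
                String monHost, String pool) throws LibvirtException {
            String usage = cephxUser + "@" + monHost + "/" + pool;
            String xml = "<secret ephemeral='no' private='no'>"
                       + "<usage type='ceph'><name>" + usage + "</name></usage>"
                       + "</secret>";
            // The cephx key itself is set separately via setValue().
            return conn.secretDefineXML(xml);
        }
    }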
Fixes the hard-coded path in the vmware plugin. The systemvm.iso copies
the script only to /opt/cloud/bin, and the same path is used for
vpc_netusage.sh.
Signed-off-by: Rohit Yadav <rohit.yadav@citrix.com>