Update the ImageFormat enum to include the VHDX format introduced with
Hyper-V Server 2012.
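A minimal sketch of the enum addition (simplified; the real ImageFormat enum carries more entries and metadata):

    // Simplified sketch of the ImageFormat enum; the real enum holds additional formats and metadata.
    public enum ImageFormat {
        QCOW2,
        RAW,
        VHD,
        OVA,
        ISO,
        VHDX   // Hyper-V Server 2012 virtual disk format
    }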
Remove the existing Hyper-V plugin, because it does not work and is dead
code.
Remove references to the existing Hyper-V plugin from config files.
Remove Hypervisor.HypervisorType.Hyperv special cases from manager code
that are unused or unsupported.
Specifically, there is no CIFS secondary storage class
"CifsSecondaryStorageResource". Also, the Hyper-V plugin's
ServerResource is contacted by the management server and not the other
way around.
Add Hyper-V support to the ListHypervisorsCmd API call
Signed-off-by: Edison Su <sudison@gmail.com>
- writeToFile removed since there are no references to it
- readFileAsString replaced with FileUtils.readFileToString (see the sketch after this list)
- minor code duplication removed in the dependent method getNicStats
- unit test added
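A minimal sketch of the readFileAsString replacement, assuming the file is read as UTF-8 via Apache Commons IO (the wrapping class and method names are illustrative):

    import java.io.File;
    import java.io.IOException;

    import org.apache.commons.io.FileUtils;

    public class FileReadExample {
        // Before: a hand-rolled readFileAsString(path) loop over a reader.
        // After: delegate to Commons IO, which handles buffering and closing.
        public static String readFileAsString(String path) throws IOException {
            return FileUtils.readFileToString(new File(path), "UTF-8");
        }
    }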
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
instead of a dynamically configured one.
Refactor bits of the HypervisorHostHelper to better
support multiple BroadcastDomainTypes
Clean up some parts of the code that were using tab indents and remove some dead code.
Read the global configuration setting while configuring VmwareManager.
Also enable the autoExpand feature for supported distributed virtual switch versions.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Stratosphere SSP is an SDN solution that creates virtual L2
networks backed by VXLAN and VLAN. SSP asks the hypervisor to set a
specific VLAN, then interacts with OpenFlow switches to install
VXLAN/VLAN translation flow rules.
This plugin provides SSP as a "Connectivity" service provider.
Signed-off-by: Hiroaki KAWAI <kawai@stratosphere.co.jp>
The file cloudstack-oss.asf/client/target/cloud-client-ui-4.2.0-SNAPSHOT/WEB-INF/classes/scripts/util/ipmi.py
is not marked as executable, so we cannot execute it directly.
Invoke it through "/usr/bin/python" instead. The location of the Python
interpreter may differ between machines.
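A minimal sketch of invoking the script through the interpreter so the missing executable bit does not matter; ProcessBuilder is used purely for illustration, and the interpreter and script paths are assumptions:

    import java.io.IOException;
    import java.util.Arrays;

    public class IpmiInvoker {
        public static void main(String[] args) throws IOException, InterruptedException {
            // "/usr/bin/python" and the relative script path are assumptions; both can vary per machine.
            ProcessBuilder pb = new ProcessBuilder(
                    Arrays.asList("/usr/bin/python", "scripts/util/ipmi.py"));
            pb.inheritIO();
            int exit = pb.start().waitFor();
            System.out.println("ipmi.py exited with " + exit);
        }
    }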
1. Baremetal doesn't have secondary storage, so we don't need to check it.
2. The new "AddBaremetalHostCmd" isn't used by the UI yet, so keep the validity
checking out for now. "AddHostCmd" still works.
3. Baremetal hasn't implemented the multiple IP range feature (CLOUDSTACK-702);
return true for now for a single range.
Introduced the SimulatorStorageProcessor to handle image store related
commands. Right now only mock implementations are provided: there is no
error handling or logging, and runtime exception scenarios are limited.
SystemVMs are able to start up, but the default built-in template has
trouble going to the Ready state.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
CLOUDSTACK-3042 - handle scaling up of VM memory/CPU based on the presence of XS tools in the template
This also takes care of updating the VM after XS tools are installed in it, and of setting memory values accordingly to support dynamic scaling after a stop/start of the VM
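A rough sketch of the guard this implies, using hypothetical names (the real checks live in the VM manager and rely on the dynamically-scalable flags on the template and VM):

    // Hypothetical guard: only allow a live scale-up when the VM was marked
    // dynamically scalable, i.e. XS tools are present in its template.
    public class ScaleUpGuard {
        static class Vm {
            boolean dynamicallyScalable;  // derived from the template, refreshed after XS tools install
            boolean running;
        }

        static void checkScaleUpAllowed(Vm vm) {
            if (vm.running && !vm.dynamicallyScalable) {
                throw new IllegalStateException(
                    "Cannot scale a running VM without XS tools; install the tools and stop/start the VM first.");
            }
        }
    }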
Signed-off-by: Abhinandan Prateek <aprateek@apache.org>
for a number of commands participating in the VM deployment process, as parallel deployment is supported on the hypervisor side.
The behavior is controlled by global config variables (a simplified sketch follows the command lists below):
"execute.in.sequence.hypervisor.commands" (false by default) sets/resets the synchronization for commands:
=========================
StartCommand
StopCommand
CreateCommand
CopyVolumeCommand
"execute.in.sequence.network.element.commands" (false by default) sets/resets the synchronization for commands:
==========================
DhcpEntryCommand
SavePasswordCommand
UserDataCommand
VmDataCommand
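A simplified sketch of how such a flag typically feeds into the commands listed above; the types and wiring are illustrative, not the exact CloudStack signatures:

    // Illustrative only: commands carry an execute-in-sequence flag sourced from the global setting.
    public class ExecuteInSequenceSketch {
        interface ConfigLookup {                      // stand-in for the global configuration lookup
            String getValue(String key);
        }

        abstract static class Command {
            private final boolean executeInSequence;
            Command(boolean executeInSequence) { this.executeInSequence = executeInSequence; }
            // false => the agent layer may dispatch this command in parallel with others
            boolean executeInSequence() { return executeInSequence; }
        }

        static class StartCommand extends Command {
            StartCommand(boolean executeInSequence) { super(executeInSequence); }
        }

        static StartCommand buildStart(ConfigLookup config) {
            boolean inSequence = Boolean.parseBoolean(
                    config.getValue("execute.in.sequence.hypervisor.commands"));
            return new StartCommand(inSequence);
        }
    }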
As part of the fix, increased the global lock timeout to 30 minutes in several VR scripts:
===========================
edithosts.sh
savepassword.sh
userdata.sh
to support situations where multiple concurrent calls to the script are made.
has dedicated resources and the dedicated resources have all been consumed - use.system.public.ips and use.system.guest.vlans
Both configs are configurable at the account level too.
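A rough sketch of the intended fallback, with hypothetical helper names (the real logic lives in the public IP and guest VLAN allocators):

    // Hypothetical names throughout; shows the fallback decision only.
    public class DedicatedResourceFallback {
        interface Config {
            boolean useSystemPublicIps(long accountId);  // account-level value if set, else the global one
        }

        interface IpPool {
            Long allocateFromDedicatedRange(long accountId);  // null when the dedicated range is exhausted
            Long allocateFromSystemPool();
        }

        static Long allocatePublicIp(long accountId, Config config, IpPool pool) {
            Long ip = pool.allocateFromDedicatedRange(accountId);
            if (ip == null && config.useSystemPublicIps(accountId)) {
                // Dedicated range exhausted; fall back to the system pool only when
                // use.system.public.ips permits it for this account.
                ip = pool.allocateFromSystemPool();
            }
            return ip;
        }
    }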
During scale-up of a VM, memory/CPU calculations should consider the memory/CPU overprovisioning factors, which are set per cluster.
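A worked sketch of the capacity math, assuming the usual convention that the factor inflates the cluster's allocatable capacity (all numbers are illustrative):

    // Illustrative only: with cpu.overprovisioning.factor = 2.0, 16000 MHz of physical CPU
    // is treated as 32000 MHz allocatable, and the scale-up is checked against that figure.
    public class OverprovisioningExample {
        public static void main(String[] args) {
            double physicalCpuMhz = 16000;
            double cpuOverprovisioningFactor = 2.0;   // set per cluster
            double allocatable = physicalCpuMhz * cpuOverprovisioningFactor;

            double alreadyAllocated = 28000;          // MHz reserved by other VMs
            double currentVm = 2000;                  // MHz before scale-up
            double newVm = 4000;                      // MHz after scale-up

            double afterScaleUp = alreadyAllocated - currentVm + newVm;
            System.out.println("fits = " + (afterScaleUp <= allocatable)); // true: 30000 <= 32000
        }
    }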
CLOUDSTACK-2939: CPU limit is not getting set for a VM after scale-up to a service offering which has CPU cap enabled
Signed-off-by: Abhinandan Prateek <aprateek@apache.org>