Discovery of modules is based on classpath scanning. In some situations it
may not be possible or desirable to change the classpath. To force a
module not to load, you can create a file called modules.properties on the
classpath that excludes specific modules from loading. The same file can
also be used to exclude a specific extension. Extension loading is typically
controlled through global configuration, but if you are setting up an
environment and don't want the extension/module loaded even on the first
start, using the config file is appropriate.
Example: modules.properties
modules.exclude=storage-image-s3,storage-volume-solidfire
extensions.exclude=ClusterScopeStoragePoolAllocator,ZoneWideStoragePoolAllocator
Typically you would want to place this file in /etc/cloudstack/management
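For illustration only, here is a minimal sketch of how such an exclusion file could be consumed during discovery. The property keys match the example above, but the ModuleExclusions class and its methods are hypothetical, not the actual CloudStack implementation.

    // Hypothetical sketch: read modules.properties from the classpath and
    // filter discovered modules/extensions against the exclusion lists.
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.HashSet;
    import java.util.Properties;
    import java.util.Set;

    public class ModuleExclusions {
        private final Set<String> excludedModules = new HashSet<>();
        private final Set<String> excludedExtensions = new HashSet<>();

        public ModuleExclusions() throws IOException {
            Properties props = new Properties();
            try (InputStream in = getClass().getResourceAsStream("/modules.properties")) {
                if (in != null) {
                    props.load(in);
                }
            }
            excludedModules.addAll(split(props.getProperty("modules.exclude", "")));
            excludedExtensions.addAll(split(props.getProperty("extensions.exclude", "")));
        }

        private static Set<String> split(String csv) {
            Set<String> names = new HashSet<>();
            for (String name : csv.split(",")) {
                if (!name.trim().isEmpty()) {
                    names.add(name.trim());
                }
            }
            return names;
        }

        public boolean isModuleExcluded(String moduleName) {
            return excludedModules.contains(moduleName);
        }

        public boolean isExtensionExcluded(String extensionName) {
            return excludedExtensions.contains(extensionName);
        }
    }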
Added assertElementInList to cloudstackTestCase. Users can use this new
helper to add such assertions, replacing the current multiple separate
assertions
Signed-off-by: Santhosh Edukulla <Santhosh.Edukulla@citrix.com>
If the VM has snapshots, the chain_info of a volume can be longer than 255 characters.
Increasing the column length of chain_info in VolumeVO to match the maximum length of the text type (the db schema type)
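As a rough illustration, assuming a JPA-style mapping, the change amounts to raising the mapped length of the chain_info field; the surrounding class and the 65535 length below are assumptions for this sketch, not the actual VolumeVO code.

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // Illustrative sketch only: widen the mapped length of chain_info beyond
    // 255 characters so a long snapshot chain fits without truncation.
    @Entity
    @Table(name = "volumes")
    public class VolumeVOSketch {
        @Id
        private long id;

        @Column(name = "chain_info", length = 65535)
        private String chainInfo;

        public String getChainInfo() {
            return chainInfo;
        }

        public void setChainInfo(String chainInfo) {
            this.chainInfo = chainInfo;
        }
    }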
Changes:
- Set the total capacity of a host in the CapacityChecker thread if it has changed
- Fix a bug while setting the reserved/used cpu/mem capacity: only one of them used to get set
Changes:
- Consider whether the VM requires local storage, shared storage, or both for its disks
- Accordingly, the allocator should consider local pools, shared pools, or both in the cluster (a rough sketch follows below)
Conflicts:
server/src/com/cloud/agent/manager/allocator/HostAllocator.java
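A rough sketch of the allocation idea described in the changes above, assuming the allocator can tell which of the VM's disks want local storage and which want shared storage; all class and method names here are made up for illustration.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: decide whether the VM needs local pools, shared
    // pools, or both, based on its disks, and filter the cluster's pools
    // accordingly. Not the actual HostAllocator/StoragePoolAllocator code.
    public class PoolScopeFilterSketch {

        public static class Disk {
            final boolean useLocalStorage;
            public Disk(boolean useLocalStorage) { this.useLocalStorage = useLocalStorage; }
        }

        public static class Pool {
            final String name;
            final boolean local;
            public Pool(String name, boolean local) { this.name = name; this.local = local; }
        }

        public List<Pool> filter(List<Disk> vmDisks, List<Pool> clusterPools) {
            boolean needsLocal = vmDisks.stream().anyMatch(d -> d.useLocalStorage);
            boolean needsShared = vmDisks.stream().anyMatch(d -> !d.useLocalStorage);

            List<Pool> candidates = new ArrayList<>();
            for (Pool pool : clusterPools) {
                // Keep a pool only if at least one of the VM's disks can use its scope.
                if ((pool.local && needsLocal) || (!pool.local && needsShared)) {
                    candidates.add(pool);
                }
            }
            return candidates;
        }
    }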
Fixing rebase issues after integrating with wmi v2 implementation.
Removing the executable attribute from some files.
Remove the unused wmi v1 interface file.
Unit test for DestroyCommand implementation in hyperv agent.
Fixed VM state changes w.r.t wmi version 2 changes
If a VM is already running, deploy virtual machine shouldn't fail and throw an exception.
Don't run vhd-util on templates that are present on CIFS. Hyperv uses CIFS as secondary storage
Add a SCSI controller by default. This is needed so that data volumes can be added/removed
on a running vm.
Remove the hard coded path in the agent code.
RAT fixes for the hyperv agent. Added the missing license headers in files where they were missing.
Copy the iso to the secondary storage and let the hypervisor agent know of its
location during setup. The agent will copy it over once it handles the setup
command.
Changes for attaching the systemvm iso to the virtual router while booting it -
part 2. The agent copies over the systemvm iso during setup. When a
virtual router is being booted it attaches the iso to it.
Hyperv unit tests for the agent. Unit tests are written using NSubstitute and XUnit and
they test the create, stop and start commands in the agent.
Fix to make sure the hyperv agent and the functional tests are working after the unit tests update.
Fixing the warnings while running unit tests for the hyperv agent.
Added a new switch for functional tests.
Update the unit test to create a fake vhd file on the fly and run the test. The file is removed when the test completes.
Fix for functional tests. The test was failing to build on java 1.6.
Fix to bring up SSVM and Console Proxy systemvms
Fix to discover the seeded template to bring up the systemvms for the first startup and fixed UNC path issues
Fixed the UNC path for copying the files from CIFS and from the seeded template
Fixed the issues for SSVM and CPVM so they wait until they get configured and then return the status. Made the checksum method return true.
Fixed the HypervDirectConnect resource to figure out the status of systemvms. Need to fix this by connecting to the public/control IP instead of the local IP.
Checksum is failing for the copied system VM images; currently bypassing it.
Implemented commands that are required for VR to bootup and Vm deployment to work
Modified hyperv agent code to deploy the VR with boot args; boot args are passed to the VR using the KVP Exchange component.
Fix for the VR to boot up and get configured with boot args; fixed an issue in VolumeOrchestrator
Implemented SetFirewallRulesCommand in HyperV Resource
Implemented VR network commands to provide the necessary services from VR
Fixed the hyperv local storage path URL-encoding issue; encoding was converting spaces to '+'
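For context, the pitfall is that form-style URL encoding turns spaces into '+', which is wrong inside a path segment. A small standalone illustration (not the agent's actual code):

    import java.io.UnsupportedEncodingException;
    import java.net.URLEncoder;

    public class PathEncodingExample {
        public static void main(String[] args) throws UnsupportedEncodingException {
            String path = "C:\\Local Storage\\disk.vhd";

            // URLEncoder is meant for form data, so the space becomes '+':
            String formEncoded = URLEncoder.encode(path, "UTF-8");
            System.out.println(formEncoded);   // C%3A%5CLocal+Storage%5Cdisk.vhd

            // One common workaround when a path is wanted instead of form data:
            String pathEncoded = formEncoded.replace("+", "%20");
            System.out.println(pathEncoded);   // C%3A%5CLocal%20Storage%5Cdisk.vhd
        }
    }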
architecture allows additional functionality to be easily added. Incorporating the plugin in CloudStack will allow
the community to participate in improving the features available with Hyper-V. The plugin uses a Direct Connect
Agent architecture described here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Progress
Add ability to pass kvp data via the key cloudstack-vm-userdata
Rearrange code to make it clearer what .NET objects are being used.
Test failures are easier to deal with if test key is not deleted.
Acquire management/pod ip for control ip when VR deploys in HyperV
Fixed deletion of VMs on the hyperv host when the mgmt server gets restarted due to HA
Implementation for attach iso command. Attaches an iso to a given vm.
Cloudstack sends requests to directly managed HV hosts (direct agents) using the direct agent thread pool. The size of the pool is determined by the global config direct.agent.pool.size, which defaults to 500.
Currently there is no restriction on the number of threads a direct agent can use from this shared thread pool to send requests to its host. This is fine as long as the host responds in a reasonable amount of time. But if there is a considerable delay in getting a response, the thread remains blocked for that long; as more commands are sent to the slow host, more threads get blocked. This can eventually lead to a situation where requests to healthy hosts cannot be processed because there are not enough free threads.
The problem being addressed here is to localize the impact of a few bad hosts, so that the entire management server is not affected.
One such way is to throttle based on the number of outstanding requests on a per-host basis. The outstanding requests to a host are limited to a percentage of the direct agent pool size, configurable via
direct.agent.thread.cap. The default value is 0.1 (10%); a value of 1 reproduces the old behavior with no upper cap. This ensures that an impacted host is bound by an upper cap on the number of threads it can use to process requests, rather than consuming the entire pool.
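As a rough sketch of the throttling arithmetic: with direct.agent.pool.size = 500 and direct.agent.thread.cap = 0.1, a single host can hold at most 50 threads of the shared pool at a time. Only those two global settings come from the text above; the class below and its names are purely illustrative.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Semaphore;

    // Hypothetical sketch of per-host throttling on top of a shared
    // direct-agent pool; not the actual management server code.
    public class DirectAgentThrottleSketch {
        private final int perHostLimit;
        private final Map<Long, Semaphore> outstandingByHost = new ConcurrentHashMap<>();

        public DirectAgentThrottleSketch(int poolSize, double threadCap) {
            // A cap of 1.0 reproduces the old behavior: one host may use the whole pool.
            this.perHostLimit = Math.max(1, (int) (poolSize * threadCap));
        }

        // Returns true if a worker thread may be used for this host right now.
        public boolean tryAcquire(long hostId) {
            return outstandingByHost
                    .computeIfAbsent(hostId, id -> new Semaphore(perHostLimit))
                    .tryAcquire();
        }

        // Must be called when the request to the host completes (or times out).
        public void release(long hostId) {
            Semaphore s = outstandingByHost.get(hostId);
            if (s != null) {
                s.release();
            }
        }
    }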