This adds support for virtio-scsi on KVM hosts, either
for guests that are associated with a new os_type of 'Other PV Virtio-SCSI (64-bit)',
or when a VM or template is registered with the detail parameter rootDiskController=scsi.
Update CloudStack add template dialog to allow selecting rootDiskController with KVM
Update CloudStack KVM virtio-scsi to enable discard=unmap
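As a hedged illustration of the detail-parameter route, the sketch below shows (in Python) roughly what registering a template with rootDiskController=scsi could look like; the cloudstack_api helper, the placeholder values and the exact encoding of the details map are assumptions, and only the detail key itself comes from this change.
# Hedged sketch only: register a template whose root disk should use virtio-scsi on KVM.
# cloudstack_api() is hypothetical; real clients must sign requests and may encode
# the details map differently.
template_params = {
    "name": "centos7-virtio-scsi",              # placeholder name
    "format": "QCOW2",
    "hypervisor": "KVM",
    "url": "http://example.com/templates/centos7.qcow2",  # placeholder URL
    # Detail described above: switch the root disk controller to SCSI (virtio-scsi).
    "details": {"rootDiskController": "scsi"},
}
# cloudstack_api("registerTemplate", **template_params)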
CLOUDSTACK-9821: Fixed issue in deploying VM in basic zone.
There is an issue with the ipset command on XenServer 6.5: in util.pread2, ipset and -N are passed as a single string, which causes the command to fail.
util.pread2(['/bin/bash', '-c', 'ipset', '-N ', tmpname , type])
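With '/bin/bash -c', only the single string that follows '-c' is treated as the command and the remaining list items become positional parameters, so the ipset options never reach ipset. A hedged sketch of the two usual ways to express this call correctly (the actual fix may differ), assuming the XenServer plugin's util helper:
import util  # XenServer host-plugin helper module that provides pread2

# Either drop the shell wrapper and pass the argv list directly ...
util.pread2(['ipset', '-N', tmpname, type])
# ... or, if a shell is really wanted, give bash -c one complete command string.
util.pread2(['/bin/bash', '-c', 'ipset -N %s %s' % (tmpname, type)])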
* pr/1991:
CLOUDSTACK-9821: Fixed issue in deploying vm in basic zone
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9784 : GPU detail not displayed in GPU tab of management server UI.
ISSUE
==================
When GPU tab of the host is selected on the management server UI, no GPU detail is displayed.
RESOLUTION
==================
In the JavaScript file "system.js", while fetching the GPU details, the sort functionality in the dataProvider returns undefined and hence throws an exception. The undefined output is now handled gracefully to avoid the exception.
* pr/1942:
CLOUDSTACK-9784 : GPU detail not displayed in GPU tab of management server UI.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9601: Upgrade: change logic for update path for files
For going from version A to version D, it used to run the SQL files in
this order: A -> B -> C -> D -> A-cleanup -> B-cleanup -> C-cleanup -> D-cleanup.
If you had upgraded each version separately you would have run
A -> A-cleanup -> B -> B-cleanup -> C -> C-cleanup -> D -> D-cleanup.
This changes the logic to follow the same path when jumping over versions.
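A minimal sketch of the ordering change (illustrative Python, not the actual upgrade code):
versions = ["A", "B", "C", "D"]
# Old behaviour when jumping several versions in one upgrade:
old_order = versions + [v + "-cleanup" for v in versions]
# -> A, B, C, D, A-cleanup, B-cleanup, C-cleanup, D-cleanup
# New behaviour, matching what stepwise upgrades would have run:
new_order = [step for v in versions for step in (v, v + "-cleanup")]
# -> A, A-cleanup, B, B-cleanup, C, C-cleanup, D, D-cleanup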
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
* pr/1768:
Upgrade: change logic for update path for files
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8841: Storage XenMotion from XS 6.2 to XS 6.5 fails.
Removed the host version check in the API, because:
Case 1 (lower to higher version):
Migration from a lower version to a higher version is valid.
Case 2 (higher to lower version):
In this case the system (host) will not allow it.
So there is no need to check the version in the API. Additionally, the CloudStack user interface (UI) does not allow migration between hypervisors of different versions, but sometimes a user wants to migrate from a lower to a higher version; this is now possible via the API.
ACS Link ==>
https://issues.apache.org/jira/browse/CLOUDSTACK-8841
* pr/815:
CLOUDSTACK-8841: Storage XenMotion from XS 6.2 to XS 6.5 fails.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Security group ingress/egress issues with XenServer 6.2
There is an issue with the ipset type nethash on XenServer 6.2. Fixed it by using nethash for ipset version 6 (XenServer 6.5) and iptreemap for ipset version 4.x, as sketched below.
1. Tested configuring egress/ingress rules.
2. Tested the traffic for the configured rules from the VM.
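A hedged sketch of the version-based selection described above (illustrative only, not the exact script code):
def ipset_set_type(ipset_major_version):
    # nethash is available from ipset v6 (XenServer 6.5); older ipset 4.x
    # installations (XenServer 6.2) fall back to iptreemap.
    return "nethash" if ipset_major_version >= 6 else "iptreemap"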
* pr/843:
CLOUDSTACK-8871: fixed issue with the xenserver 6.2 ipset nethash
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests
The NPE is seen when the VM destroy and storage cleanup threads try to remove the same root volume. The fix is to handle
only non-root volumes in the storage cleanup thread; root volumes are handled as part of VM destroy.
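A minimal sketch of the described behaviour (hypothetical names, illustrative only):
# Hypothetical volume records, for illustration only.
destroyed_volumes = [
    {"name": "ROOT-42", "volume_type": "ROOT"},
    {"name": "DATA-42", "volume_type": "DATADISK"},
]
# The storage cleanup thread now skips ROOT volumes; those are removed on the
# VM destroy path, so the two threads no longer race on the same root volume.
volumes_to_cleanup = [v for v in destroyed_volumes if v["volume_type"] != "ROOT"]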
* pr/1825:
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Fix public IPs not being removed from the VR when deprovisioned
This PR replaces #1706. It does not remove the IP from the database, but it does deprovision the IP correctly from the VR when the public IP is removed.
* pr/1907:
Fix public IPs not being removed from the VR when deprovisioned
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9757: Fixed issue in traffic from additional public subnet
Acquire an IP from the additional public subnet and configure NAT on that IP.
After this, pick any VM from that network and access the additional public subnet from it. Traffic is supposed to go via the additional public subnet interface in the VR.
* pr/1922:
CLOUDSTACK-9757: Fixed issue in traffic from additional public subnet
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
CLOUDSTACK-9788: Fix exception listNetworks with pagesize=0
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Fix HVM VM restart bug in XenServer
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
* rotate both daily and by size by using maxsize instead of size
* decrease the max size to 10M for rsyslog files
* remove delaycompress for rsyslog files
* increase rotate to 10 for cloud.log
* pr/1915:
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9363: Fix HVM VM restart bug in XenServer
Here is the longer description of the problem:
By default XenServer limits HVM guests to only 4 disks. Two of those are reserved for the ROOT disk (deviceId=0) and CD ROM (device ID=3) which means that we can only attach 2 data disks. This limit however is removed when Xentools is installed on the guest. The information that a guest has Xentools installed and can handle more than 4 disks is stored in the VM metadata on XenServer. When a VM is shut down, Cloudstack removes the VM and all the metadata associated with the VM from XenServer. Now, when you start the VM again, even if it has Xentools installed, it will default to only 4 attachable disks.
Now this problem manifests itself when you have a HVM VM and you stop and start it with more than 2 data disks attached. The VM fails to start and the only way to start the VM is to detach the extra disks and then reattach them after the VM start.
In this fix, I am removing the check which is done before creating a `VBD` which enforces this limit. This will not affect current workflow and will fix the HVM issue.
@koushik-das this is related to the "autodetect" feature that you introduced a while back (https://issues.apache.org/jira/browse/CLOUDSTACK-8826). I would love your review on this fix.
* pr/1829:
Fix HVM VM restart bug in XenServer
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Reverting VM to disk only snapshot in Xenserver corrupts VM
Stale NFS secondary storage on XS leads to volume creation failure from snapshot
Fixed various concerns raised in #672
* pr/1941:
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
[CLOUDSTACK-9793] Faster IP in subnet check
This change removes the conversion from IPNetwork to list in one of the router scripts. This makes the router faster at processing static NAT rules, which can prevent timeouts when attaching or detaching IPs.
With the `list` conversion, it potentially has to check a list of 65536 IP strings multiple times. We assume that the comparison implemented by IPNetwork is far more efficient: we have seen enabling static NAT with 18 IPs on the router speed up from 218 seconds to 2 or 3 seconds by removing this cast. This also fixes a potential bug where adding IPs to a router times out because the scripts take too long; 218 seconds, for example, is beyond the KVM agent's timeout for script execution, at which point all enableStaticNat operations fail.
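A minimal illustration of the difference, assuming the netaddr library that provides IPNetwork:
from netaddr import IPAddress, IPNetwork

network = IPNetwork("10.0.0.0/16")
ip = IPAddress("10.0.123.45")
# Fast: IPNetwork implements the membership test as an integer range comparison.
print(ip in network)
# Slow: materializes up to 65536 IPAddress objects on every check.
print(ip in list(network))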
* pr/1948:
CLOUDSTACK-9793: Faster ip in subnet check
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9795: moved logrotate from cron.daily to cron.hourly for vpcrouter
Moved logrotate from cron.daily to cron.hourly for vpcrouter in cloud-early-config. This brings 'vpcrouter' in line with 'router'. We are having issues with cloud.log not rotating fast enough, which filled up /var/log and ultimately caused the VR to stop functioning in such a way that it prevented new VMs from being deployed.
* pr/1954:
moved logrotate from cron.daily to cron.hourly for vpcrouter in cloud-early-config
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
ipv6: Set IPv6 CIDR and Gateway in 'nic' profile
Without this information an NPE might be triggered when starting a VR, SSVM or CP,
as this information is read from the 'nics' table.
During deployment we should set the IPv6 Gateway and CIDR for the NIC object so that
it is persisted to the database.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* pr/1927:
ipv6: Set IPv6 CIDR and Gateway in 'nic' profile
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8857 listProjects doesn't return tags vmstopped or vmrunning when their value is zero
Added the appropriate tags to the response.
Tested this manually by creating projects, launching VMs from project accounts and then listing the projects.
* pr/838:
CLOUDSTACK-8857 listProjects doesn't return tags vmstopped or vmrunning when their value is zero
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8856 Primary Storage Used (type tag with value 2) related tag is not showing in listCapacity API response
* pr/865:
CLOUDSTACK-8856 Primary Storage Used(type tag with value 2) related tag is not showing in listCapacity api response.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
CLOUDSTACK-9789: Fix releasing secondary guest IP fails with associated static nat which is actually not used
CLOUDSTACK-9628: Use correct virtualsize with Swift as secondary storage
CLOUDSTACK-9789: Fix releasing secondary guest IP fails with associated static nat which is actually not used
* pr/1947:
CLOUDSTACK-9789: Fix releasing secondary guest IP fails with associated static nat which is actually not used
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9628: Fix Template Size in Swift as Secondary Storage
CloudStack incorrectly uses the physical size as the size of the
template. Ideally, the size should reflect the virtual size. This
PR fixes that issue.
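For a QCOW2 template the two sizes can be inspected with qemu-img; a hedged illustration (not CloudStack's actual code), assuming qemu-img is installed and template.qcow2 is a local file:
import json
import subprocess

info = json.loads(subprocess.check_output(
    ["qemu-img", "info", "--output=json", "template.qcow2"]))
physical_size = info["actual-size"]   # bytes used on disk: what was wrongly recorded
virtual_size = info["virtual-size"]   # provisioned disk size: what should be recorded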
* pr/1770:
CLOUDSTACK-9628: Use correct virtualsize with Swift as secondary storage
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8324: config drive data set/get scripts for the guest vm
Added the guest VM scripts to set/get the VM data, password and SSH keys.
* pr/1379:
CLOUDSTACK-8324: updated the mount directory name and kvm virt device
CLOUDSTACK-8324: config drive data set/get scripts for the guest vm
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9724: Fixed missing additional public ip on tier network with cleanup
In a VPC tier network, acquire an IP and configure the PF service on it. The VR will now have two IP addresses on the interface.
Now restart the VPC tier network with the cleanup option. After the router comes up, the public interface has only one IP (the source NAT IP).
Fixed the above issue.
* pr/1885:
CLOUDSTACK-9724: Fixed missing additional public ip on tier network with cleanup
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required based on persistent VR changes.
* pr/1882:
CLOUDSTACK-8737: Removed the missed out-of-band VR reboot code, not required based on persistent VR changes.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9768: Time displayed for events in UI is incorrect
The time displayed for events in the UI is incorrect. For example, when we log in using the Japanese language, the time displayed for events is GMT instead of JST. However, with the English language the time is JST, as expected.
Example:
The time displayed for an event is 10:40 if you are logged in using the English language, whereas the same event shows 19:40 if you log in with Japanese.
* pr/1926:
CLOUDSTACK-9768: Time displayed for events in UI is incorrect
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9766 : Executing deleteSnapshot API with an already deleted snapshot does not throw any exception or failure message
If we try to delete a snapshot that is already deleted, no proper error appears in the log; it just tries to delete the already deleted snapshot again.
Steps to reproduce :
-------
1. Create a snapshot
2. Delete the snapshot
3. Try to delete the snapshot that was deleted in step 2
Expected Result
-------------
The result should show a proper error message. A request to delete an already deleted snapshot should not be placed.
* pr/1924:
CLOUDSTACK-9766 : Executing deleteSnapshot api with already deleted snapshot does not throw any exception or failure message
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9711: Fixed error reporting while adding vpn user
If configuring a VPN user on one of the networks fails, the failure is ignored; the failure should be shown in the API response.
* pr/1874:
CLOUDSTACK-9711: Fixed error reporting while adding vpn user
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9228: Network update with mismatch in services requires forced option
# Steps to reproduce:
1. Bring up CloudStack in an advanced zone
2. Create an isolated network with sourcenat, pf, lb, firewall services
3. Deploy a VM in the above network
4. Create another network offering with sourcenat, pf, firewall services
5. Try to update the network with the offering created in step 4
# Result:
The update fails with: "The new offering: DefaultIsolatedNetworkOfferingForVpcNetworksNoLB will remove the following services [Lb] along with all the related configuration currently in use. Will not proceed with the network update. Set forced parameter to true for forcing an update."
# Workaround:
Use the API with forced=true (see the sketch below).
# Fix:
Added a confirmation dialog box asking whether to force the update.
The dialog appears only for the admin; only the admin can force an update.
The new dialog appears after the first CIDR-unchanged confirmation dialog.
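A hedged sketch of the API workaround above; the parameter names follow the standard updateNetwork call, while the request helper, endpoint and IDs are placeholders and real requests must also be signed:
params = {
    "command": "updateNetwork",
    "id": "<network-uuid>",                      # placeholder
    "networkofferingid": "<new-offering-uuid>",  # placeholder
    "forced": "true",   # needed when the new offering drops services that are in use
    "response": "json",
}
# send_signed_request("https://management-server:8080/client/api", params)  # hypothetical helper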
* pr/1333:
CLOUDSTACK-9228: Network update with mismatch in services requires forced option
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9588: Add Load Balancer functionality in Network page is redundant.
Steps to Reproduce:
Network -> Select any network -> Observe the Add Load Balancer tab
The "Add Load Balancer" functionality is redundant.
It is used to create an LB rule without any public IP.
Resolution:
Similar functionality already exists in Network -> Any Network -> Details Tab -> View IP Addresses -> Any public IP -> Configuration Tab -> Observe Load Balancing.
That path creates an LB rule with a public IP, which is a more convenient way of creating an LB rule as the IP is involved.
* pr/1758:
CLOUDSTACK-9588: Add Load Balancer functionality in Network page is redundant. The "Add Load Balancer" functionality is redundant. The above is used to create LB rule without any public IP. This commit removes the tab from network page.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9618: Load Balancer configuration page does not have "Source" method in the drop down list.
If we create an isolated network with a NetScaler-published service offering for the Load Balancing service, the load balancing configuration UI does not show "Source" as one of the supported LB methods in the drop-down list. It only shows the "Round-Robin" and "LeastConnection" methods. However, an LB rule with "Source" as the LB method is successfully created via the API.
* pr/1786:
CLOUDSTACK-9618: Load Balancer configuration page does not have "Source" method in the drop down list
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
CLOUDSTACK-9691: Added test list_snapshots_with_removed_data_store
CLOUDSTACK-9691: Fixed unhandled exception in list snapshot command when a primary store related to it is deleted