346 Commits

Author SHA1 Message Date
Marcus Sorensen
dcdcd09058
Randomize managed volume copy host (#5789)
* Randomize managed volume copy host

* Managed volume copy was always returning the first host that could see the storage pools

* Fix null value in logging for ScaleIOPrimaryDataStoreDriver due to if/else logic error

Signed-off-by: Marcus Sorensen <mls@apple.com>

* Use String.format for ScaleIO debug message

Signed-off-by: Marcus Sorensen <mls@apple.com>

* Update debug message for ScaleIO copy methods

Signed-off-by: Marcus Sorensen <mls@apple.com>

Co-authored-by: Marcus Sorensen <mls@apple.com>
2021-12-30 16:11:00 +05:30
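
A minimal sketch of the randomized pick this commit describes, with all names hypothetical; the point is replacing a deterministic hosts.get(0) with a random index so copy load spreads across hosts:

```java
import java.security.SecureRandom;
import java.util.List;

public class ManagedVolumeCopyHostPicker {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Hypothetical helper: pick a host from those that can see the storage
    // pools. Previously the first match was always returned, concentrating
    // copy traffic on one host; a random pick spreads it out.
    public static <T> T pickHost(List<T> hostsThatSeePools) {
        if (hostsThatSeePools == null || hostsThatSeePools.isEmpty()) {
            return null;
        }
        return hostsThatSeePools.get(RANDOM.nextInt(hostsThatSeePools.size()));
    }
}
```
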
nicolas
93c3c3b9ac
Updating pom.xml version numbers for release 4.16.1.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:50:22 -03:00
nicolas
44c08b5acc
Updating pom.xml version numbers for release 4.16.0.0
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-04 14:14:57 -03:00
Harikrishna
cd4e7e031a
Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster (#5539)
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Change in constructors

* Naming changes

* Remove commented code

* Refactor code for more readability

* Addressed review comments on code refactor
2021-10-04 20:58:25 -03:00
Peinthor Rene
66c39c1589
storage: Linstor volume plugin (#4994)
This adds a volume (primary) storage plugin for the Linstor SDS.
Currently it can create/delete/migrate volumes; snapshots should be possible,
but currently don't work for RAW volume types in CloudStack.

* plugin-storage-volume-linstor: notify libvirt guests about the resize
2021-09-16 10:50:58 +05:30
Abhishek Kumar
56f4da6dce Merge remote-tracking branch 'apache/4.15' into main 2021-09-02 16:13:33 +05:30
Wei Zhou
4e53997ca2
server: do not remove volume from DB if fail to expunge it from primary storage or secondary storage (#5373)
* server: do not remove volume from DB if fail to expunge it from primary storage or secondary storage

* server/VolumeApiServiceImpl.java: move to method

* update #5373
2021-08-31 13:48:58 -03:00
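
A hedged sketch of the ordering this commit enforces, with hypothetical DAO and helper names: the database row is only removed once the storage-side expunge has actually succeeded, so a failed expunge leaves the volume visible for retry:

```java
public class VolumeExpungeSketch {
    interface VolumeDao { void remove(long id); }

    private final VolumeDao volumeDao;

    VolumeExpungeSketch(VolumeDao volumeDao) { this.volumeDao = volumeDao; }

    /** Remove the DB record only after storage-side expunge succeeds. */
    public boolean expunge(long volumeId) {
        boolean expunged = expungeFromPrimaryStorage(volumeId)
                && expungeFromSecondaryStorage(volumeId);
        if (!expunged) {
            return false; // keep the row so cleanup can be retried later
        }
        volumeDao.remove(volumeId);
        return true;
    }

    private boolean expungeFromPrimaryStorage(long volumeId) { return true; }   // stub
    private boolean expungeFromSecondaryStorage(long volumeId) { return true; } // stub
}
```
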
Rohit Yadav
a1a3aff2b5 Merge remote-tracking branch 'origin/4.15' into main 2021-08-31 14:29:30 +05:30
slavkap
961e85eb60
Fix of creating volumes from snapshots without backup to secondary storage (#5349)
* Fix of creating volumes from snapshots without backup

When a few snapshots are created only on primary storage and you try to create
a volume or a template from one of them, only the first operation is
successful. This is because the snapshot is backed up to secondary storage
with a wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage,
and it didn't cover the functionality to create a volume from a snapshot
that is kept only on Ceph

* Address review
2021-08-31 12:46:57 +05:30
Spaceman1984
96c9c5a5e2
Added disk provisioning type support for VMWare (#4640)
* Added disk provisioning type support for VMWare

* Review changes

* Fixed unit test

* Review changes

* Added missing licenses

* Review changes

* Update StoragePoolInfo.java

Removed white space

* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id

* Delete __init__.py

* Merge fix

* Fixed failing test

* Added comment about parameters

* Added error log when update fails

* Added exception when using API

* Ordering storage pool selection to prefer thick disk capable pools if available

* Removed unused parameter

* Reordering changes

* Returning storage pool details after update

* Removed multiple pool update, updated marvin test, removed duplicate enum

* Removed comment

* Removed unused import

* Removed for loop

* Added missing return statements for failed checks

* Class name change

* Null pointer

* Added more info when a deployment fails

* Null pointer

* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

* Small bug fix on API response and added missing bracket

* Removed datastore cluster code

* Removed unused imports, added missing signature

* Removed duplicate config key

* Revert "Added more info when a deployment fails"

This reverts commit 2486db78dca8e034d8ad2386174dfb11004ce654.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2021-07-16 22:37:42 -03:00
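
The "prefer thick disk capable pools" ordering could look like this sketch; Pool and supportsThickProvisioning are illustrative stand-ins, not the actual CloudStack types:

```java
import java.util.Comparator;
import java.util.List;

public class PoolOrderingSketch {
    record Pool(String name, boolean supportsThickProvisioning) {}

    /** Order candidate pools so thick-provisioning-capable ones come first. */
    public static void preferThickCapable(List<Pool> candidates) {
        candidates.sort(
            Comparator.comparing(Pool::supportsThickProvisioning).reversed());
    }
}
```
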
Rohit Yadav
d916e416ec Updating pom.xml version numbers for release 4.15.2.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-07-02 22:59:07 +05:30
Rohit Yadav
379454caae Updating pom.xml version numbers for release 4.15.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-06-28 15:27:27 +05:30
sureshanaparti
07cabbe7ac
scaleio: Updated PowerFlex/ScaleIO gateway client with some improvements. (#5037)
- Added connection manager to the gateway client.
 - Renew the client session on '401 Unauthorized' response.
 - Refactored the gateway client calls, for GET and POST methods.
 - Consume the http entity content after login/(re)authentication and close the content stream if exists.
 - Updated storage pool client connection timeout configuration 'storage.pool.client.timeout' to non-dynamic.
 - Added storage pool client max connections configuration 'storage.pool.client.max.connections' (default: 100) to specify the maximum connections for the ScaleIO storage pool client.
 - Updated unit tests.
 - Blocked the attach volume operation for uploaded volumes on ScaleIO/PowerFlex storage pools
2021-06-16 12:45:27 +05:30
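
A minimal sketch of the "401 Unauthorized → renew session" pattern described above, using java.net.http rather than whatever HTTP client the plugin actually uses; the header name and token handling here are assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GatewaySessionSketch {
    private final HttpClient client = HttpClient.newHttpClient();
    private volatile String sessionToken = "initial-token"; // placeholder

    /** GET with one re-authentication retry on 401, as the commit describes. */
    public HttpResponse<String> get(URI uri) throws Exception {
        HttpResponse<String> resp = client.send(request(uri),
                HttpResponse.BodyHandlers.ofString());
        if (resp.statusCode() == 401) {   // session expired: renew, retry once
            sessionToken = renewSession();
            resp = client.send(request(uri),
                    HttpResponse.BodyHandlers.ofString());
        }
        return resp;
    }

    private HttpRequest request(URI uri) {
        return HttpRequest.newBuilder(uri)
                .header("Authorization", "Basic " + sessionToken)
                .GET().build();
    }

    private String renewSession() { return "renewed-token"; } // stub login call
}
```
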
Abhishek Kumar
426f14b6ed Merge remote-tracking branch 'apache/4.15'
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-05-18 15:19:20 +05:30
Abhishek Kumar
dc91a1fd4d
server: destroy ssvm, cpvm on last host maintenance (#4644)
* server: destroy ssvm, cpvm on last host maintenance

When a single or last UP host enters maintenance, just stopping the SSVM and CPVM will leave VMs behind on the hypervisor side. As these system VMs will be recreated, they can be destroyed.
Fixes #3719

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix methods

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* immediately destroy systemvms

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix destroy

Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance.
The flag is set to true only for the delete commands for the SSVM and CPVM.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* unit test fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix missing return statement

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix

The VM should be stopped with cleanup before calling expunge, else the server may throw an error while the host is in the PrepareForMaintenance state.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* rename

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-05-14 23:16:15 +05:30
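
A hedged sketch of the dispatch rule the bypassHostMaintenance flag introduces; the interface and states are simplified stand-ins for the real Command and resource-state types:

```java
public class MaintenanceDispatchSketch {
    interface Command { boolean isBypassHostMaintenance(); }

    enum ResourceState { ENABLED, PREPARE_FOR_MAINTENANCE, MAINTENANCE }

    /** Only let bypass-flagged commands through to a host in maintenance. */
    static boolean canDispatch(Command cmd, ResourceState hostState) {
        boolean inMaintenance = hostState == ResourceState.MAINTENANCE
                || hostState == ResourceState.PREPARE_FOR_MAINTENANCE;
        return !inMaintenance || cmd.isBypassHostMaintenance();
    }
}
```
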
sureshanaparti
eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist on the datastore and cannot be deleted from it

- Check whether the router filesystem is writable before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
	=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable

- Fixed an NPE issue where the template is null for DATA disks: copy the template to target storage for the ROOT disk (with template id), and skip DATA disk(s)

* Addressed some issues for few operations on PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM snapshot naming, so ScaleIO volume names are unique when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

- Minor improvements in the PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
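
Given the documented token lifetime above (valid 8 hours from creation, unless idle for 10 minutes), a client-pool entry might track validity as in this sketch; field and method names are illustrative:

```java
import java.time.Duration;
import java.time.Instant;

public class ScaleIoTokenSketch {
    static final Duration MAX_LIFETIME = Duration.ofHours(8);
    static final Duration IDLE_TIMEOUT = Duration.ofMinutes(10);

    private final Instant createdAt = Instant.now();
    private volatile Instant lastUsed = Instant.now();

    /** Valid for 8h from creation, unless there was no activity for 10 min. */
    public boolean isValid(Instant now) {
        return Duration.between(createdAt, now).compareTo(MAX_LIFETIME) < 0
            && Duration.between(lastUsed, now).compareTo(IDLE_TIMEOUT) < 0;
    }

    /** Record activity so the idle timeout is pushed out. */
    public void touch() { lastUsed = Instant.now(); }
}
```
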
Rohit Yadav
b482da8c91 Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-01-11 13:58:30 +05:30
Daan Hoogland
280c13a4bb Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-05 15:51:02 +00:00
Daan Hoogland
81e9e6809b Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:34:46 +00:00
Daan Hoogland
e26202f23e Updating pom.xml version numbers for release 4.16.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:32:10 +00:00
Daan Hoogland
01b3e361c7 Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2020-12-23 16:32:25 +00:00
Alexandru Bagu
fdb2ee3165
storage: Fix hypervisor type cast to string (#4516)
This PR addresses an error that appears when you try to add a new host. I don't even understand why there was a cast to String in the first place. I will assume some classes send HypervisorType and some send a string (empty or otherwise). Shouldn't this be addressed to use the same type everywhere? With this fix, adding a new XenServer host works fine.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2020-12-14 11:56:44 +05:30
lujiefsi
2aa7fac9ac
CLOUDSTACK-10423: Potential sensitive information disclosure (#4536)
* fixing CLOUDSTACK-10423

* make the message clear

Co-authored-by: lujie <lujie@foxmail.com>
2020-12-14 11:40:23 +05:30
Rakesh
735b6de296
Cleanup download urls when SSVM destroyed (#4078)
Co-authored-by: Rakesh Venkatesh <r.venkatesh@global.leaseweb.com>
2020-11-18 14:01:31 +01:00
nvazquez
d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Harikrishna Patnala
40934ba9ff Fix travis failures by removing dependency of vmware from storage.
Added a new command class to verify the vCenter details provided while adding primary storage
2020-10-19 14:57:16 +05:30
Harikrishna Patnala
570f3214b8 Handle VMFS6 sesparse format disk files 2020-10-19 14:57:16 +05:30
Harikrishna Patnala
d4d372a9a4 Fix addition of datastores with invalid vCenter server details 2020-10-19 14:57:16 +05:30
Harikrishna Patnala
48786b2d31 DataStore Clusters addition as a storage pool 2020-10-19 14:57:15 +05:30
Harikrishna Patnala
6df819028e UI changes and accept any type of datastore as presetup in vmware 2020-10-19 14:57:15 +05:30
Spaceman1984
b586eb22f1
Human readable sizes in logs (#4207)
This PR adds output of human-readable byte sizes in the management server logs, agent logs, and usage records. A non-dynamic global setting (display.human.readable.sizes) is added to switch this feature on and off. This setting is sent to the agent on connection and is only read from the database when the management server starts up. The setting is kept in memory via a static field on the NumbersUtil class and is available throughout the codebase.

Instead of seeing things like:
2020-07-23 15:31:58,593 DEBUG [c.c.a.t.Request] (AgentManager-Handler-12:null) (logid:) Seq 8-1863645820801253428: Processing: { Ans: , MgmtId: 52238089807, via: 8, Ver: v1, Flags: 10, [{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-224-VM","bytesSent":"106496","bytesReceived":"0","result":"true","details":"","wait":"0",}}] }

The KB, MB, and GB values will be printed out:

2020-07-23 15:31:58,593 DEBUG [c.c.a.t.Request] (AgentManager-Handler-12:null) (logid:) Seq 8-1863645820801253428: Processing: { Ans: , MgmtId: 52238089807, via: 8, Ver: v1, Flags: 10, [{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-224-VM","bytesSent":"(104.00 KB) 106496","bytesReceived":"(0 bytes) 0","result":"true","details":"","wait":"0",}}] }

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Human+Readable+Byte+sizes
2020-08-13 15:55:16 +05:30
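
A sketch of the formatting the log example shows, with an assumed static flag standing in for the NumbersUtil field the PR describes:

```java
public class HumanReadableSketch {
    /** Stand-in for the switchable static flag on NumbersUtil. */
    public static volatile boolean humanReadableSizes = true;

    /** Render "(104.00 KB) 106496"-style output when the flag is on. */
    public static String toHumanReadableSize(long bytes) {
        if (!humanReadableSizes) {
            return Long.toString(bytes);
        }
        double value = bytes;
        String[] units = {"bytes", "KB", "MB", "GB", "TB"};
        int unit = 0;
        while (value >= 1024 && unit < units.length - 1) {
            value /= 1024;
            unit++;
        }
        String pretty = unit == 0
                ? String.format("%d bytes", bytes)
                : String.format("%.2f %s", value, units[unit]);
        return String.format("(%s) %d", pretty, bytes);
    }
}
```

For example, toHumanReadableSize(106496) yields "(104.00 KB) 106496", matching the second log line above.
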
Wido den Hollander
c3554ec31d
kvm: For Ceph, define the port in the XML only if a port number has been specified (#4231)
Ceph used to use port 6789 (no need to specify it), but with the messenger v2
protocol Ceph switched to port 3300, while 6789 still works.

librados/librbd/libvirt will automatically figure out the ports to use if none
are specified.

Therefore there is no need for CloudStack to explicitly define the port in the
XML passed to libvirt or QEMU.

Leave it blank if no port number has been defined by the user.
2020-08-05 13:44:40 +05:30
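
The conditional port emission might reduce to something like this sketch; hostElement is a hypothetical helper, not the actual libvirt XML builder:

```java
public class CephDiskXmlSketch {
    /** Emit a <host> element, adding the port attribute only when set. */
    static String hostElement(String monHost, Integer port) {
        if (port == null) {
            // librados/librbd will probe both 3300 (msgr2) and 6789 (msgr1)
            return String.format("<host name='%s'/>", monHost);
        }
        return String.format("<host name='%s' port='%d'/>", monHost, port);
    }
}
```
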
Sid Kattoju
8dd6cef9a6
create Volume Access Groups per cluster instead of CloudStack-RandomUUID() (#3794)
* create vags per cluster

* vagname in solidfire utils vag object

* fix string compare

* refactor to make use of existing map

* fix typos

* rebuild vag to iqn map after creating cluster vag

* refactor loop using java 8 stream api

* update null entry in vag to iqn map

* remove null vag to iqn mapping when creating cluster id vag

* add initiator to sf vag when adding hosts

* use cluster uuid instead of cluster id and refactor

* update null entry in vagtoiqnmap

* update sfvag list after creating new vag

* pass clusterDao to handleVagForHost

* check if initiator is not already added to the vag

* factor logic into methods

* fix typo and camel case

* fix listing clusters by zone id

Co-authored-by: Sid Kattoju <siddharthakattoju@gmail.com>
2020-06-02 12:58:20 -06:00
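
The "refactor loop using java 8 stream api" bullet suggests building the VAG-to-IQN map roughly like this sketch; VolumeAccessGroup is an illustrative stand-in for the SolidFire utils type:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class VagMapSketch {
    record VolumeAccessGroup(String name, List<String> initiatorIqns) {}

    /** Build the VAG-name -> initiator-IQN map with the Stream API. */
    static Map<String, List<String>> vagToIqns(List<VolumeAccessGroup> vags) {
        return vags.stream().collect(Collectors.toMap(
                VolumeAccessGroup::name,
                VolumeAccessGroup::initiatorIqns));
    }
}
```
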
andrijapanicsb
5f926c3353 Updating pom.xml version numbers for release 4.15.0.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 10:18:39 +01:00
andrijapanicsb
05e9b11694 Updating pom.xml version numbers for release 4.14.1.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 09:59:32 +01:00
andrijapanicsb
6f96b3b2b3 Updating pom.xml version numbers for release 4.14.0.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-11 15:03:14 +01:00
Anurag Awasthi
c0abfce8fa
Health check feature for virtual router (#3575) 2020-01-30 12:39:03 +01:00
Gabriel Beims Bräscher
93aad24bbb storage: Handle RBD snapshot deletion (#3615)
Previously, when deleting volume snapshots, only the records in the database were deleted; the snapshots were not deleted on the primary storage.

Fixes: #3586
2019-12-08 14:48:51 +05:30
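
A hedged sketch of deleting the snapshot on the Ceph pool itself, using the rados-java bindings CloudStack's KVM plugin relies on; exact method names may differ by version:

```java
import com.ceph.rados.IoCTX;
import com.ceph.rados.Rados;
import com.ceph.rbd.Rbd;
import com.ceph.rbd.RbdImage;

public class RbdSnapshotDeleteSketch {
    /** Delete a snapshot on the Ceph pool itself, not just the DB record. */
    static void deleteSnapshot(Rados rados, String pool, String image,
                               String snapshotName) throws Exception {
        IoCTX io = rados.ioCtxCreate(pool);
        try {
            Rbd rbd = new Rbd(io);
            RbdImage rbdImage = rbd.open(image);
            try {
                rbdImage.snapRemove(snapshotName);
            } finally {
                rbd.close(rbdImage);
            }
        } finally {
            rados.ioCtxDestroy(io);
        }
    }
}
```
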
Paul Angus
50fc045f36 Updating pom.xml version numbers for release 4.14.0.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-07 09:57:46 +01:00
manojkverma
e3d70b7dcc storage: Datera storage plugin (#3470)
Features:

Zone-wide and cluster-wide primary storage support
Automatic VM template caching on Datera; subsequent VMs can be created instantaneously by fast-cloning the root volume.
Rapid storage-native snapshot
Multiple managed primary storages can be created with a single Datera cluster to provide better management of:
Total provisioned capacity
Default storage QoS values
Replica size ( 1 to 5 )
IP pool assignment for iSCSI target
Volume Placement ( hybrid, single_flash, all_flash )
Volume snapshot to VM template
Volume to VM template
Volume size increase using service policy
Volume QoS change using service policy
Enabled KVM support
New Datera app_instance name format to include ACS volume name
VM live migration
2019-07-25 14:13:04 +05:30
skattoju4
4c60a5b1ff Fix slow VM creation when there is a large SolidFire snapshot count (#3282)
* skip getting used bytes for volumes that are not in Ready state
* updated log message
* filter snapshots by state backedup
* removed * import
* filter templates by state 'DOWNLOADED'
* refactored getUsedBytes to use O(1) queries
* querying for ready volumes instead of filtering in memory
* make listByStoreIdInReadyState more generic, e.g. listByStoreIdAndState
* updated snapshot search criteria for listByStoreIdAndState
* updated template search criteria for listByPoolIdAndState
* fixed typo in search criteria for listByTemplateAndState
* fixed typo in search criteria for templates in listByPoolIdAndState
2019-05-11 16:02:52 +02:00
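
The O(1)-query refactor replaces in-memory filtering with a state-filtered lookup; a sketch in CloudStack's SearchBuilder style, with illustrative VO, column, and class names:

```java
// Hedged sketch: push the state filter into SQL rather than fetching all
// rows for the store and filtering in memory. Assumes CloudStack's
// GenericDaoBase/SearchBuilder machinery; names are illustrative.
public class SnapshotDataStoreDaoSketch
        extends GenericDaoBase<SnapshotDataStoreVO, Long> {

    private final SearchBuilder<SnapshotDataStoreVO> storeStateSearch;

    public SnapshotDataStoreDaoSketch() {
        storeStateSearch = createSearchBuilder();
        storeStateSearch.and("storeId",
                storeStateSearch.entity().getDataStoreId(),
                SearchCriteria.Op.EQ);
        storeStateSearch.and("state",
                storeStateSearch.entity().getState(),
                SearchCriteria.Op.EQ);
        storeStateSearch.done();
    }

    public List<SnapshotDataStoreVO> listByStoreIdAndState(long storeId,
            ObjectInDataStoreStateMachine.State state) {
        SearchCriteria<SnapshotDataStoreVO> sc = storeStateSearch.create();
        sc.setParameters("storeId", storeId);
        sc.setParameters("state", state);
        return listBy(sc); // single filtered query instead of a scan
    }
}
```
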
GabrielBrascher
8d3feb100a Updating pom.xml version numbers for release 4.13.0.0-SNAPSHOT
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-20 18:47:35 -03:00
GabrielBrascher
a137398bf1 Updating pom.xml version numbers for release 4.12.0.0
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-14 10:11:46 -03:00
Gabriel Beims Bräscher
7c5eca9481
Copy template to target KVM host if needed when migrating local <> local storage (#3154)
* Migrate template to target host if needed.

Fix KVM VM local storage live migration by migrating its template to the
target host if needed.

* Address reviewer and add method that updates the DB template reference

* Remove deprecated Config.PrimaryStorageDownloadWait

* Code formating of @Inject to follow checkstyle
2019-02-05 00:18:29 -02:00
Gabriel Beims Bräscher
bf209405e7 Allow KVM VM live migration with ROOT volume on file storage type (#2997)
* Allow KVM VM live migration with ROOT volume on file

* Allow KVM VM live migration with ROOT volume on file
- Add JUnit tests

* Address reviewers and change some variable names to ease future
implementation (developers can easily guess the name and use
autocomplete)
2018-12-14 09:01:28 -02:00
Mike Tutkowski
3db33b7385 Support online migration of a virtual disk on XenServer from non-managed storage to managed storage 2018-08-12 00:23:36 -06:00
Khosrow Moossavi
7c6630bca7 Cleanup POMs (#2613)
* Cleanup and code-formatting of POM files

* Remove obsolete mycila license-maven-plugin

* Remove obsolete console-proxy/plugin project

* Move console-proxy-rdbconsole under console-proxy parent

* Use correct parent path for rdpconsole

* Order items alphabetically in setnextversion.sh

* Unify license header in POMs

* Alphabetic order of modules definition

* Extract all defined versions into parent pom

* Remove obsolete files: version-info.in, configure-info.in

* Remove redundant defaultGoal

* Remove useless checkstyle plugin from checkstyle project

* Order items alphabetically in pom.xml

* Add additional SPACEs to fix the Debian build

* Don't execute checkstyle on parent projects

* Use UTF-8 encoding in building checkstyle project

* Extract plugin versions into properties

* Execute PMD plugin on all the projects with -Penablefindbugs

* Upgrade maven plugins to latest version

* Make sure to always look for apache parent pom from repository

* Fix incorrect version grep in debian packaging

* Fix rebase conflicts

* Fix rebase conflicts

* Remove PMD for now to be fixed on another PR
2018-07-25 14:39:37 -03:00
Mike Tutkowski
73608dec28 Support multiple volume access groups per compute cluster 2018-07-16 15:13:16 -06:00
Kui LIU
951f73b107 CLOUDSTACK-10362: Change the "getXXX" method names to "isXXX" (#2600)
These boolean-returning methods are named "getXXX", while other boolean-returning methods are named "isXXX". Since they return boolean values, renaming them to "isXXX" is clearer than "getXXX".
2018-05-09 21:44:40 +05:30
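
For example (FeatureFlag and enabled are illustrative, not names from the actual change):

```java
public class FeatureFlag {
    private boolean enabled;

    // Before the rename: public boolean getEnabled() { return enabled; }
    // After: the conventional JavaBeans name for a boolean accessor.
    public boolean isEnabled() {
        return enabled;
    }
}
```
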
Rohit Yadav
e7bd73e72b Merge branch '4.11' 2018-05-04 12:39:53 +05:30