2784 Commits

Rohit Yadav
d916e416ec Updating pom.xml version numbers for release 4.15.2.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-07-02 22:59:07 +05:30
Rohit Yadav
379454caae Updating pom.xml version numbers for release 4.15.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-06-28 15:27:27 +05:30
DK101010
53963256d8
server: Bug/false positive success message vm start (#5148)
* add throws statement during the retry process

* Update engine/orchestration/src/main/java/org/apache/cloudstack/engine/cloud/entity/api/VMEntityManagerImpl.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: dahn <daan.hoogland@gmail.com>
2021-06-27 06:40:30 +05:30
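A minimal sketch of the retry pattern the entry above fixes, assuming the loop in VMEntityManagerImpl previously swallowed the exception on the last attempt and let the caller report success; all names here are illustrative, not the actual CloudStack code.

```
// Hypothetical sketch: rethrow on the last retry instead of swallowing the
// exception, so a failed VM start is never reported as a success.
public class RetrySketch {
    static void startVm(int maxAttempts) throws Exception {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                deployOnHost(attempt);
                return; // genuine success
            } catch (Exception e) {
                if (attempt == maxAttempts) {
                    throw e; // the fix: propagate instead of falling through
                }
            }
        }
    }

    // Stand-in for the real deployment call; always fails for the demo.
    static void deployOnHost(int attempt) throws Exception {
        throw new Exception("deployment failed on attempt " + attempt);
    }

    public static void main(String[] args) {
        try {
            startVm(3);
            System.out.println("VM started");
        } catch (Exception e) {
            System.out.println("VM start failed: " + e.getMessage());
        }
    }
}
```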
slavkap
d82909318f
server: Fix of delete of Ceph's snapshots from secondary storage (#5130)
This PR fixes the deletion of Ceph snapshots from secondary storage; the deletion will be handled by DefaultSnapshotStrategy::deleteSnapshot (#4797)
2021-06-25 12:04:36 +05:30
Harikrishna
12b2e80d82
vmware: Fix fetching chain_info of the volumes. It used to assume datastore names are in the form of UUIDs, but they can be any name, so fetch chain_info based on the datastore name. (#5097)
This PR fixes the problem of not updating the chain info, or setting chain info to null, after volume migrations.

Problem: While fetching the volume chain info, the management server assumes the datastore name to be a UUID (this is true only for NFS storages added by CloudStack), but a datastore can have any name.
Solution: To fetch the volume chain info, use the datastore name instead of the UUID.

The fix is made in the flow of the following API operations (sketched after this entry):

migrateVirtualMachine
migrateVirtualMachineWithVolume
migrateVolume
2021-06-11 20:06:06 +05:30
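A hedged sketch of the idea in the entry above: key the datastore lookup on the pool's actual name rather than a UUID derived from it. The types and the map standing in for the vCenter inventory are assumptions for illustration.

```
// Illustrative only: resolve the datastore by the storage pool's real name,
// which is a UUID only for NFS pools added by CloudStack.
import java.util.Map;

public class ChainInfoSketch {
    static String volumeDatastorePath(Map<String, String> datastoreMorsByName,
                                      String datastoreName, String volumeFileName) {
        if (!datastoreMorsByName.containsKey(datastoreName)) {
            throw new IllegalStateException("Datastore not found: " + datastoreName);
        }
        // Chain info paths reference the datastore by name: "[dsName] path/to.vmdk"
        return "[" + datastoreName + "] " + volumeFileName;
    }

    public static void main(String[] args) {
        Map<String, String> inventory = Map.of("my-nfs-datastore", "datastore-42");
        System.out.println(volumeDatastorePath(inventory, "my-nfs-datastore",
                "i-2-10-VM/ROOT-10.vmdk"));
    }
}
```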
Gabriel Beims Bräscher
3ee563905d
kvm: Check for VLAN or VXLAN in NetworkDaoImpl.listByPhysicalNetworkPvlan (#5074)
This PR fixes #5071, where an issue was reported when creating a network with VXLAN.
2021-06-05 22:25:01 +05:30
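A minimal sketch of the check named in the title above, under the assumption that the lookup previously matched only `vlan://` broadcast URIs; method names are illustrative, not the actual NetworkDaoImpl code.

```
// Accept both VLAN and VXLAN isolation when matching networks for the
// PVLAN overlap check (illustrative standalone model).
import java.net.URI;
import java.util.List;

public class PvlanCheckSketch {
    static boolean isVlanOrVxlan(URI broadcastUri) {
        String scheme = broadcastUri.getScheme();
        return "vlan".equals(scheme) || "vxlan".equals(scheme);
    }

    public static void main(String[] args) {
        for (String uri : List.of("vlan://100", "vxlan://1001", "lswitch://abc")) {
            System.out.println(uri + " -> " + isVlanOrVxlan(URI.create(uri)));
        }
    }
}
```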
Pearl Dsilva
d04fa0201d
server: usage generated for destroyed VMs with no backups (#5017)
Fixes: #4990
When a VM associated with a backup offering is destroyed/expunged, the backup offering isn't unassigned, and despite the VM having no backups present, backup usage is generated. This PR prevents usage record generation when there are no backups present for a VM with a backup offering associated with it. This is done by ensuring that a usage event for backups is generated only when the backup size is > 0.
2021-05-31 18:59:48 +05:30
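A sketch of the guard described in the entry above, with illustrative names (the real change sits in the usage/backup code paths): only emit a backup usage event when the accumulated backup size is positive.

```
// Hypothetical sketch: skip usage records for VMs that have a backup
// offering assigned but no actual backups.
public class BackupUsageSketch {
    static void recordBackupUsage(long vmId, long backupSizeBytes) {
        if (backupSizeBytes <= 0) {
            return; // destroyed VM with an offering but no backups: no usage
        }
        System.out.printf("usage event: vm=%d backupSize=%d%n", vmId, backupSizeBytes);
    }

    public static void main(String[] args) {
        recordBackupUsage(19, 0);              // skipped
        recordBackupUsage(11, 5_368_709_120L); // recorded
    }
}
```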
Rohit Yadav
fbc8610f6e Merge remote-tracking branch 'origin/4.14' into 4.15 2021-05-31 15:54:56 +05:30
Gabriel Beims Bräscher
a78f676037
engine: fix network with SG disabled still has security group script adding rules on KVM (#5049)
This PR fixes #5047, which can be reproduced on Zones with _(I) Advanced Networks, (II) Security Groups enabled for the Zone, (III) a network offering without Security Groups_; for instance, `DefaultSharedNetworkOffering`, which does not list Security Group as a supported service.

The issue is due to the following code inside the method `VirtualMachineManagerImpl.orchestrateReboot`:
[VirtualMachineManagerImpl.java#L3340](280c13a4bb/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java (L3340)).

```
  final Answer rebootAnswer = cmds.getAnswer(RebootAnswer.class);
  if (rebootAnswer != null && rebootAnswer.getResult()) {
      if (dc.isSecurityGroupEnabled() && vm.getType() == VirtualMachine.Type.User) {
          List<Long> affectedVms = new ArrayList<Long>();
          affectedVms.add(vm.getId());
          _securityGroupManager.scheduleRulesetUpdateToHosts(affectedVms, true, null);
      }
      return;
  }
```
2021-05-31 15:52:26 +05:30
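A sketch of how the condition shown above can be tightened, assuming the fix also verifies that the network offering actually provides the SecurityGroup service; this is a simplified standalone model, not the real CloudStack classes.

```
// Illustrative: schedule a ruleset update only when the zone has SG enabled,
// the VM is a user VM, and the offering lists SecurityGroup as a service.
import java.util.Set;

public class RebootSgSketch {
    static boolean shouldScheduleRulesetUpdate(boolean zoneSgEnabled, boolean isUserVm,
                                               Set<String> offeringServices) {
        return zoneSgEnabled && isUserVm && offeringServices.contains("SecurityGroup");
    }

    public static void main(String[] args) {
        // DefaultSharedNetworkOffering does not list SecurityGroup: no update.
        System.out.println(shouldScheduleRulesetUpdate(true, true, Set.of("Dhcp", "Dns")));
        System.out.println(shouldScheduleRulesetUpdate(true, true, Set.of("SecurityGroup")));
    }
}
```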
Pearl Dsilva
2eae0f5385
SystemVM: Set agent state to disconnected on Stopping the systemVM (#5010)
Fixes: #4972
This PR sets a system VM's agent state to Disconnected when it is stopped. Currently, when a system VM (Console Proxy VM / Secondary Storage VM) is stopped, the agent state still appears to be 'Up'.
2021-05-19 13:00:17 +05:30
Rohit Yadav
2286c8d2bf Merge remote-tracking branch 'origin/4.14' into 4.15 2021-05-14 23:19:06 +05:30
Abhishek Kumar
dc91a1fd4d
server: destroy ssvm, cpvm on last host maintenance (#4644)
* server: destroy ssvm, cpvm on last host maintenance

When the single or last UP host enters maintenance, just stopping the SSVM and CPVM will leave the VMs behind on the hypervisor side. As these system VMs will be recreated, they can be destroyed.
Fixes #3719

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix methods

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* immediately destroy systemvms

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix destroy

Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance.
The flag is set to true only for delete commands for the SSVM and CPVM (see the sketch after this entry).

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* unit test fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix missing return statement

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix

The VM should be stopped with cleanup before calling expunge, else the server may throw an error with the host in the PrepareForMaintenance state.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* rename

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-05-14 23:16:15 +05:30
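A standalone sketch of the bypassHostMaintenance flag described above, assuming the agent consults the flag before accepting a command for a host in maintenance; the classes here are simplified stand-ins, not the real agent API.

```
// Illustrative model: maintenance normally rejects commands, except those
// explicitly flagged (destroying SSVM/CPVM during last-host maintenance).
public class MaintenanceBypassSketch {
    static class Command {
        private boolean bypassHostMaintenance = false;
        boolean isBypassHostMaintenance() { return bypassHostMaintenance; }
        void setBypassHostMaintenance(boolean bypass) { bypassHostMaintenance = bypass; }
    }

    static boolean agentAccepts(Command cmd, boolean hostInMaintenance) {
        return !hostInMaintenance || cmd.isBypassHostMaintenance();
    }

    public static void main(String[] args) {
        Command destroySystemVm = new Command();
        destroySystemVm.setBypassHostMaintenance(true); // set only for SSVM/CPVM deletes
        System.out.println(agentAccepts(new Command(), true));   // false: rejected
        System.out.println(agentAccepts(destroySystemVm, true)); // true: handled
    }
}
```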
Wei Zhou
e2183ed666
forceha: fix two issues when (1)stop vm from inside (2) force remove host (#4647)
* forceha: fix vm is not started if it is powered off from inside

steps to reproduce the issue
(1) make sure force.ha is true in global settings; if not, change it to true and restart the management server
(2) create a service offering with HA not enabled
(3) create a vm
(4) log into the vm, and power it off via the cli.

expected result: the vm is started again by cloudstack
actual result: the vm is not started.

* forceha: fix vms are still running if host is force-removed

a host can be force-removed; however, the VMs are stopped in CloudStack but not stopped on the host
```
(localcloud) 🐱 > delete host id="a5625393-444d-4d0a-b31d-62baf88a8be1" forced=true
{
  "success": true
}
```

after some minutes, the VMs are still running on the host
```
root@mgt01:~# ssh node63 virsh list
 Id   Name        State
---------------------------
 1    i-2-19-VM   running
 2    i-2-11-VM   running
```

the error messages are
```
Cannot transmit host 2 to Enabled state
com.cloud.utils.fsm.NoTransitionException: No next resource state found for current state = Enabled event = DeleteHost
        at com.cloud.resource.ResourceManagerImpl.resourceStateTransitTo(ResourceManagerImpl.java:1216)
        at com.cloud.resource.ResourceManagerImpl$1.doInTransactionWithoutResult(ResourceManagerImpl.java:907)
```

* forceha: Make ForceHA dynamic
2021-05-14 23:14:39 +05:30
Wei Zhou
1b28ea1ebb
network: fix dhcp/password/metadata issues on shared networks with multiple subnets (#5013)
* #4943: apply iptables for password and metadata

* #4943: fix wrong ip alias

* #4943: revert previous change and add ip_aliases

Co-authored-by: Wei Zhou <weizhouapache@gmail.com>
2021-05-13 14:31:47 +05:30
Harikrishna
32e3bbdcc5
VMware Datastore Cluster primary storage pool synchronisation (#4871)
Support for a datastore cluster as primary storage is already there, but any changes to the datastore cluster at vCenter, like addition/removal of a datastore, are not synchronised with CloudStack directly. That required removing the primary storage from CloudStack and adding it again.

Here, synchronisation of the datastore cluster is fixed without the need to remove or re-add the datastore cluster (see the sketch after this entry).
1. A new API is introduced, syncStoragePool, which takes the datastore cluster storage pool UUID as the parameter. This API checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, the management server will create a new child storage pool in the database under the datastore cluster. If the new child storage pool is already added as an individual storage pool, then the existing storage pool entry will be converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with vCenter's behaviour when adding and removing child datastores.
2021-05-07 16:30:54 +05:30
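A sketch of the reconciliation in points 1-3 above, as a standalone model (the real logic lives in the management server's storage pool code): diff vCenter's child datastores against CloudStack's child pools.

```
// Illustrative only: add or convert children that appeared in vCenter, and
// turn children that disappeared back into individual pools.
import java.util.HashSet;
import java.util.Set;

public class DatastoreClusterSyncSketch {
    static void syncStoragePool(Set<String> vcenterChildren, Set<String> cloudstackChildren) {
        Set<String> toAdd = new HashSet<>(vcenterChildren);
        toAdd.removeAll(cloudstackChildren);
        toAdd.forEach(ds -> System.out.println("add child pool (or convert existing): " + ds));

        Set<String> toDetach = new HashSet<>(cloudstackChildren);
        toDetach.removeAll(vcenterChildren);
        toDetach.forEach(ds -> System.out.println("make individual pool: " + ds));
    }

    public static void main(String[] args) {
        syncStoragePool(Set.of("ds-1", "ds-2", "ds-3"), Set.of("ds-1", "ds-4"));
    }
}
```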
Pearl Dsilva
bc80815cf5
server: Adding VPN options for IKE version and IKE split connections (#4953)
IKE version allows selecting ike (autoselect), ikev1, or ikev2.
Split connections gives an option of separating the first right subnet from the rest, and kicking out individual statements for each right subnet for better cross-compatibility.

Backported from PR: #4137
update per PR suggestion

Fixes #3138

Co-authored-by: Greg Goodrich <ggoodrich@ippathways.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-05-05 12:54:23 +05:30
Gabriel Beims Bräscher
ab790c11d5
server: Allow to upgrade service offerings from local <> shared storage pools (#4915)
This PR addresses the issue raised at #4545 (Fail to change Service offering from local <> shared storage).

When upgrading a VM service offering, it is validated that the new offering has the same storage scope (local or shared) as the current offering. I think that the validation makes sense as a way of preventing running Root disks with an offering that does not match the current storage pool. However, the validation only compares both offerings and does not consider that it is possible to migrate Volumes between local <> shared storage pools.

The idea behind this implementation is that CloudStack should check the scope of the current storage pool on which the ROOT volume is allocated; this way, it is possible to migrate the volume between storage pools and list/upgrade according to the offerings that are supported for such a pool.

This PR also fixes an issue where the API command that lists offerings for a VM should follow the same idea and list based on the storage pool on which the volume is allocated, not on the previous offering.

Fixes: #4545
2021-04-30 11:59:50 +05:30
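A hedged sketch of the revised validation described above: compare the new offering's scope against the pool the ROOT volume actually sits on, rather than against the old offering. Types are illustrative.

```
// Illustrative: the check follows the volume's current pool, so a volume
// migrated from local to shared storage can take a shared-storage offering.
public class OfferingScopeSketch {
    enum Scope { LOCAL, SHARED }

    static void validateUpgrade(Scope rootVolumePoolScope, Scope newOfferingScope) {
        if (rootVolumePoolScope != newOfferingScope) {
            throw new IllegalArgumentException("Offering requires " + newOfferingScope
                    + " storage but the ROOT volume is on " + rootVolumePoolScope
                    + " storage; migrate the volume first");
        }
    }

    public static void main(String[] args) {
        validateUpgrade(Scope.SHARED, Scope.SHARED); // fine after volume migration
        try {
            validateUpgrade(Scope.LOCAL, Scope.SHARED);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```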
Olivier Lemasle
72f6612971
server: Increase max length for VMInstanceVO.backupVolumes (#4967)
The default length is 255, which caused truncation of data if
the JSON object representing the backup volumes was too big.
This caused errors when backups were made on VMs with 3 or more
volumes.

`vm_instance.backup_volumes` has the type TEXT, which has a
maximum length of 65535 characters.

Fixes #4965
2021-04-30 11:57:56 +05:30
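The change above is a column-length bump; a sketch of what that looks like on the JPA side, shown in isolation (the real field is in VMInstanceVO, and the exact annotation is an assumption here).

```
// Illustrative fragment: widen the mapped column so the backup-volumes JSON
// is not truncated at the default 255 characters.
import javax.persistence.Column;

public class BackupVolumesSketch {
    // TEXT holds up to 65535 characters, enough for several volumes' metadata.
    @Column(name = "backup_volumes", length = 65535)
    private String backupVolumes;
}
```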
dahn
be255e4203
server: protect against stray snapshot-details without snapshot (#4924)
This PR makes sure no orphaned snapshot details are considered in the cleanup-at-startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed, this would be a bit more complex.

Co-authored-by: Daan Hoogland <dahn@onecht.net>
2021-04-29 20:40:29 +05:30
Nicolas Vazquez
f728287aa2
server: Fix template garbage collection cleanup (#4944) 2021-04-24 18:57:47 +05:30
Abhishek Kumar
a30d518e8a
vmware: fix stopped VM volume migration (#4758)
* prevent other vm disks getting deleted

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* vmware: fix inter-cluster stopped vm migration

Fixes #4838

For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix detached volume inter-cluster migration

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* cleanup unused method

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* review changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* vmware: allow attached volume migration using VmwareStorageMotionStrategy

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* find vm clusterid with multiple ROOT volumes

A VM can have multiple ROOT volumes, and some can be on a zone-wide store; therefore, iterate over all of them till a cluster ID is found.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix successive storage migration

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix intercluster check

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor vm cluster, host method

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* remove inter-pod check

Added by mistake; VMware won't have pods.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* address review comment

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-04-24 18:55:25 +05:30
slavkap
b4ee4acaf3
server: Fix volume state on migrate with migrateVirtualMachineWithVolume API call (#4934)
When the migrateVirtualMachineWithVolume API call is invoked and a strategy isn't found, the volumes are left in the Migrating state.

This PR puts the volumes back into the Ready state (see the sketch after this entry).
2021-04-22 14:30:18 +05:30
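A minimal sketch of the rollback referenced above, assuming the volumes are flipped to Migrating before a strategy is selected; the enum and method names are illustrative.

```
// Illustrative: if no strategy can handle the migration, restore the volumes
// to Ready instead of leaving them stuck in Migrating.
import java.util.List;

public class VolumeStateSketch {
    enum State { Ready, Migrating }
    static class Volume { State state = State.Ready; }

    static void migrateWithVolumes(List<Volume> volumes, boolean strategyFound) {
        volumes.forEach(v -> v.state = State.Migrating);
        if (!strategyFound) {
            volumes.forEach(v -> v.state = State.Ready); // the fix: roll the state back
            throw new IllegalStateException("no strategy can handle this migration");
        }
        // ... perform the migration ...
    }

    public static void main(String[] args) {
        List<Volume> vols = List.of(new Volume(), new Volume());
        try {
            migrateWithVolumes(vols, false);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage() + "; volume state: " + vols.get(0).state);
        }
    }
}
```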
Rohit Yadav
0302750aac
vmware: Add support for VMware 7 (#4300) 2021-04-15 16:10:14 +05:30
Harikrishna
f00b5fc7ac
server: Fix for the issue of a recovered VM not able to attach the data disks it had before destroy, in case of VMware (#4493)
This PR fixes: #4462

Problem Statement:
In the case of VMware, when a VM having multiple data disks is destroyed (without expunge) and the VM is then recovered, the previous data disks are not attached to the VM like before the destroy. Only the root disk is attached to the VM.

Root cause:
All data disks were removed as part of VM destroy, whereas only the volumes selected for deletion (while destroying the VM) are supposed to be detached and destroyed.

Solution:
During VM destroy, detach and destroy only the volumes selected during VM destroy. Detach the other volumes during expunge of the VM.
2021-04-15 12:50:53 +05:30
Pearl Dsilva
a64ad9d9b7
server: Prevent vm snapshots being indefinitely stuck in Expunging state on deletion failure (#4898)
Fixes #4201

This PR addresses the issue of a vm snapshot being indefinitely stuck in the Expunging state in case deletion fails.

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-04-12 08:09:37 +05:30
Abhishek Kumar
fdefee75ff
vmware: fix inter-cluster stopped vm and volume migration (#4895)
Fixes #4838

For inter-cluster migration without shared storage, VMware needs a host to be specified. The fix is to specify an appropriate host in the target cluster during a stopped VM migration. Also, find the target datastore using the host in the target cluster.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-04-10 13:22:28 +05:30
Rohit Yadav
ca8920dd36 Merge remote-tracking branch 'origin/4.14' into 4.15 2021-04-09 13:17:39 +05:30
Abhishek Kumar
d8c6e00498
hypervisor: XCP-ng 8.2 support (#4672)
Adds new/missing guest OS mappings for XCP-ng/XenServer 8.1
Copies guest OS mappings from XCP-ng/XenServer 8.1 for XCP-ng/XenServer 8.2
Adds Ubuntu 20.04 guest OS mapping for XCP-ng/XenServer 8.2

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-04-09 13:12:06 +05:30
Abhishek Kumar
cd60b8d97d
host-allocator: check capacity for suitable hosts (#4884)
Fixes #4517

Adds capacity checks for RandomAllocator (host allocator)

Factors out the host CPU capability and capacity check, w.r.t. the service offering code, into CapacityManager.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-04-09 12:35:58 +05:30
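A sketch of the capacity check added to the random allocator, as a simplified standalone model (the real check goes through CapacityManager and the service offering):

```
// Illustrative: only hosts with enough spare CPU and RAM for the offering
// are suitable; previously the random allocator skipped this filter.
import java.util.List;
import java.util.stream.Collectors;

public class RandomAllocatorSketch {
    record Host(String name, int freeCpuMhz, long freeRamMb) {}
    record Offering(int cpuMhz, long ramMb) {}

    static List<Host> suitableHosts(List<Host> candidates, Offering offering) {
        return candidates.stream()
                .filter(h -> h.freeCpuMhz() >= offering.cpuMhz()
                          && h.freeRamMb() >= offering.ramMb())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Host> hosts = List.of(new Host("h1", 4000, 8192), new Host("h2", 500, 1024));
        System.out.println(suitableHosts(hosts, new Offering(1000, 2048))); // only h1
    }
}
```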
Rohit Yadav
7270ca7e25 Merge remote-tracking branch 'origin/4.14' into 4.15 2021-04-06 12:51:26 +05:30
Gabriel Beims Bräscher
cb91a769d3
Fix npe when migrating vm with volume (#4698) (#4775)
Cherry-pick commit 59fba4916b and fix conflict.

Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
2021-04-06 11:54:29 +05:30
Wei Zhou
63c91c1458
server: Fix network statistics for vpc (#3944)
This contains 3 main changes
(1) add NETWORK_STATS_ethX for all nics with public ips in VPC VRs (current: NETWORK_STATS_eth1)
(2) DO NOT create records in user_statistics for each VPC tier (only one record per public nic per VPC VR)
(3) send NetworkUsageCommand before unplugging a NIC with public IPs from VPC VR
2021-04-01 12:43:06 +05:30
Rakesh
76ba5c62d9
server: Fix displaying public IP address of shared networks (#4675)
Public IP addresses dedicated to one domain should not be accessible
to other domains. Also, the root admin should be able to display all
public IP addresses in the system.

Currently, the following issues exist:

1. A public IP address assigned to one domain can be accessed by
other sibling domains.

2. If use.system.public.ip is false, then child domains should not
see the public IPs of the ROOT domain.

Before fix
```
(test1) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
  "count": 59,
  "publicipaddress": [
```

After fix

```
(test) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
  "count": 10,
```
2021-04-01 12:39:01 +05:30
Wei Zhou
b8884efa7f
server: create DB entry for storage pool capacity when create storage pool (#4805)
* server: create DB entry for storage pool capacity when create storage pool

* Revert "server: create DB entry for storage pool capacity when create storage pool"

This reverts commit e790167bfe8cdebc80c8a51cb0191184edc40afd.

* server: create DB entry for storage pool capacity when create zone-wide storage pools
2021-03-29 16:21:24 +05:30
Rohit Yadav
9b1d1e6de3
systemvmtemplate: new template for 4.15.1 (#4793)
Update new systemvmtemplate for 4.15.1.0; synced:
http://download.cloudstack.org/systemvm/4.15/

A new template is necessary due to many security fixes over the last year; the 4.15.0 systemvmtemplate was created about a year ago.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-03-24 17:08:46 +05:30
Pearl Dsilva
546bf3d5a2
server: Update vm_template table to set template as removed on deletion (#4748)
* Update vm_template table removed field when template is deleted

* Update method name

* address comment

* Extracted code to separate methods

* Address test failure

* refactor test cleanup

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-03-24 12:41:03 +05:30
Pearl Dsilva
136252d65d
server: Maintain order of project owners added to account (#4822)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-03-17 17:36:46 +05:30
Michael
1cfb44994f
db: add schema upgrade from 4.15.0.0 to 4.15.1.0 (#4574) 2021-03-11 13:24:29 +05:30
Rohit Yadav
fa067e02a7 Updating pom.xml version numbers for release 4.14.2.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-03-02 12:32:27 +05:30
Abhishek Kumar
88337bdea4
server: fix finding pools for volume migration (#4693)
While finding pools for volume migration, list the following compatible storages (sketched after this entry):
- all zone-wide storages of the same hypervisor.
- when the volume is attached to a VM, all storages from the same cluster as that of the VM.
- for a detached volume, all storages that belong to clusters of the same hypervisor.

Fixes #4692 
Fixes #4400
2021-02-25 22:13:50 +05:30
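The sketch referenced above: the three listing rules as a standalone filter (types and field names are illustrative, not the management server's allocator API).

```
// Illustrative: zone-wide pools of the same hypervisor always qualify;
// cluster-scoped pools qualify per the attached/detached rules above.
import java.util.List;
import java.util.stream.Collectors;

public class MigrationPoolsSketch {
    record Pool(String name, String hypervisor, boolean zoneWide, String clusterId) {}

    static List<Pool> poolsForVolume(List<Pool> pools, String hypervisor,
                                     String vmClusterId /* null when detached */) {
        return pools.stream()
                .filter(p -> p.hypervisor().equals(hypervisor))
                .filter(p -> p.zoneWide()
                          || vmClusterId == null                  // detached: any cluster
                          || vmClusterId.equals(p.clusterId()))   // attached: VM's cluster
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Pool> pools = List.of(
                new Pool("zw1", "KVM", true, null),
                new Pool("c1-primary", "KVM", false, "c1"),
                new Pool("c2-primary", "KVM", false, "c2"));
        System.out.println(poolsForVolume(pools, "KVM", "c1")); // zw1 + c1-primary
        System.out.println(poolsForVolume(pools, "KVM", null)); // all three
    }
}
```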
Rakesh
787491871a
server: Look for active templates for VR deployment (#4047)
If the template from which a VR was created got deleted, its state
is set to inactive and removed to null.
Since the template is already deleted, the VR can't be created
using this template again.

If someone restarts a network with cleanup, it will try to
deploy the VR from the old, non-existing template again.
So search only for active templates which are not yet deleted (see the sketch after this entry).
2021-02-25 22:05:31 +05:30
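The sketch referenced above, with illustrative types standing in for the template DAO: select only templates that are Active and not removed.

```
// Illustrative: a deleted template must never be picked for VR deployment,
// even when a network is restarted with cleanup.
import java.util.List;
import java.util.Optional;

public class RouterTemplateSketch {
    record Template(long id, String state, boolean removed) {}

    static Optional<Template> findRouterTemplate(List<Template> templates) {
        return templates.stream()
                .filter(t -> "Active".equals(t.state()) && !t.removed())
                .findFirst();
    }

    public static void main(String[] args) {
        List<Template> all = List.of(new Template(201, "Inactive", true),
                                     new Template(202, "Active", false));
        System.out.println(findRouterTemplate(all)); // picks template 202
    }
}
```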
Wei Zhou
5a3ae159ca
upgrade: check systemvm template before db changes (#4582)
* Upgrade: check systemvm template before db changes

* Upgrade: move some code to a separate method

* #4582 add txn.commit()
2021-02-24 16:26:31 +05:30
Rohit Yadav
66f0beda5f Updating pom.xml version numbers for release 4.14.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-08 16:24:09 +05:30
Rohit Yadav
6bde1384ff Merge remote-tracking branch 'origin/4.14' into 4.15
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-05 16:01:01 +05:30
Wei Zhou
78f73c1bc6
server: Fix updating capacity for hosts taking a long time if there are many service offerings (#4623)
Steps to reproduce the issue:

(1) Create 10000 service offerings (via the DB changes below, or CloudMonkey).

```
DROP PROCEDURE IF EXISTS cloud.insert_service_offering;

DELIMITER $$
CREATE PROCEDURE cloud.insert_service_offering()
BEGIN
  DECLARE count INT DEFAULT 10000;
  SET @offeringid = (select max(id)+1 from disk_offering);

  WHILE count > 0 DO
    INSERT INTO disk_offering (id,name,uuid,display_text,disk_size,type,created) values (@offeringid,'test-offering-wei',uuid(), 'test-offering-wei',0,'Service',now());
    INSERT INTO service_offering (id,cpu,speed,ram_size) values (@offeringid, 1, 500,256);
    SET @offeringid = @offeringid + 1;
    SET count = count - 1;
  END WHILE;
END $$
DELIMITER ;

CALL cloud.insert_service_offering();

mysql> CALL cloud.insert_service_offering();
Query OK, 0 rows affected (2 min 30.85 sec)
```

(2) Check the total time of the periodic capacity check in CloudStack.

Without this patch, it took 2.5 seconds (2 hosts):
```
2021-01-15 16:10:12,793 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Running Capacity Checker ...
2021-01-15 16:10:15,287 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Done running Capacity Checker ...
```

With this patch, it took 1.3 seconds (2 hosts):
```
2021-01-15 16:12:43,604 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Running Capacity Checker ...
2021-01-15 16:12:44,927 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Done running Capacity Checker ...
```

If there are 100 hosts, the total time will be reduced from 100+ seconds to around 10 seconds.
2021-02-04 14:43:57 +05:30
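A hedged sketch of the general optimization (the exact change is inside the capacity checker): load the service offerings once into a map, then do constant-time lookups per VM instead of one query per VM.

```
// Illustrative model of batching the offering lookups during a capacity scan.
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class CapacityScanSketch {
    record Offering(long id, int cpuMhz, long ramMb) {}
    record Vm(long id, long offeringId) {}

    public static void main(String[] args) {
        List<Offering> offerings = List.of(new Offering(1, 500, 256), new Offering(2, 1000, 512));
        List<Vm> vms = List.of(new Vm(10, 1), new Vm(11, 2), new Vm(12, 1));

        // One pass over offerings, then O(1) lookups per VM.
        Map<Long, Offering> byId = offerings.stream()
                .collect(Collectors.toMap(Offering::id, Function.identity()));
        long usedCpu = vms.stream().mapToLong(vm -> byId.get(vm.offeringId()).cpuMhz()).sum();
        System.out.println("used CPU MHz: " + usedCpu); // 500 + 1000 + 500
    }
}
```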
Daan Hoogland
b6b778f003 Merge release branch 4.14 to 4.15
* 4.14:
  server: select root disk based on user input during vm import (#4591)
  kvm: Use Q35 chipset for UEFI x86_64 (#4576)
  server: fix wrong error message when create isolated network without SourceNat (#4624)
  server: add possibility to scale vm to current custom offerings (#4622)
  server: keep networks order and ips while move a vm with multiple networks (#4602)
  server: throw exception when update vm nic on L2 network (#4625)
  doc: fix typo in install notes (#4633)
2021-02-01 09:57:35 +00:00
Wei Zhou
313ae1f449
server: fix wrong error message when create isolated network without SourceNat (#4624)
This PR fixes the wrong error message shown when creating an isolated network without SourceNat.
2021-02-01 14:15:47 +05:30
Wei Zhou
1913c6854e
server: keep networks order and ips while move a vm with multiple networks (#4602)
This PR fixes an issue when moving a vm from one account to another.

Steps to reproduce the issue
(1) create a vm with multiple shared networks (in advanced zone, or advanced zone with security groups)
(2) create another account (in same domain who can also access the shared networks)
(3) move vm to new account, with a list of networkid

expected result: the vm has nics on the networks in the same order as specified in the API request, and the nics have the same ips as before.
actual result: the network order is not the same as specified, and the ips are changed.
2021-02-01 14:14:20 +05:30
Rohit Yadav
74bae56642 Merge remote-tracking branch 'origin/4.14' into 4.15
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-01-28 14:24:25 +05:30
Wei Zhou
182cea79b5
server: fix cannot create vm if another vm with same name has been added and removed on the network (#4600)
* server: fix cannot create vm if another vm with same name has been added and removed on the network

steps to reproduce the issue
(1) create vm-1 on network-1
(2) add vm-1 to network-2
(3) remove vm-1 from network-2
(4) create another vm with same name vm-1 on network-2

expected result: operation succeeds
actual result: operation fails.

* #4600: add back a removed line
2021-01-27 19:28:52 +05:30