Changes:
- This is a race condition between the deleteDomain thread and the AccountChecker thread: deleteDomain marks the domain as inactive and proceeds with cleanup, while an AccountChecker run at the same time cleans up any domains marked as inactive.
- When the deleteDomain thread finds that the domain has already been removed, it need not error out, since the deletion has already happened (see the sketch below).
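A minimal sketch of that idempotent delete; the DAO, logger and method names here are illustrative stand-ins, not the actual CloudStack classes:

```java
// Hypothetical sketch: if the AccountChecker thread already cleaned up the inactive
// domain, deleteDomain treats that as success instead of raising an error.
public boolean deleteDomain(long domainId) {
    DomainRecord domain = domainDao.findById(domainId);  // null once the row is removed
    if (domain == null) {
        log.debug("Domain " + domainId + " is already removed; nothing left to clean up");
        return true;  // the deletion has effectively already happened
    }
    domain.setState(DomainState.Inactive);  // mark inactive, then proceed with cleanup
    domainDao.update(domain);
    return cleanupDomain(domain);
}
```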
Changes:
- The event of deleting an affinity group is published on the MessageBus so that the IAM service can listen for and process it. However, the publish should not happen inside a DB transaction, since it may take a while and would hold the transaction open unnecessarily.
- Publish any events to the MessageBus outside of the transaction (see the sketch below).
Conflicts:
server/src/org/apache/cloudstack/affinity/AffinityGroupServiceImpl.java
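A rough sketch of the pattern; the Transaction, MessageBus and DAO calls below approximate CloudStack's utility APIs and should be read as illustrative, not exact call sites:

```java
// Do only the DB work inside the transaction; publish to the bus after it completes.
final long affinityGroupId = groupId;
boolean removed = Transaction.execute(new TransactionCallback<Boolean>() {
    @Override
    public Boolean doInTransaction(TransactionStatus status) {
        return _affinityGroupDao.remove(affinityGroupId);  // DB-only work stays inside
    }
});
if (removed) {
    // Publishing here means a slow subscriber (e.g. the IAM service) can no longer
    // hold the DB transaction open while it processes the delete event.
    _messageBus.publish(AffinityGroupService.class.getName(),
            "AffinityGroupDeleted", PublishScope.LOCAL, affinityGroupId);
}
```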
Changes:
- During HA we try to redeploy the affected VM using the regular planners and, if that fails, we retry with the special HA planner (which skips the disable-threshold check).
Because of the job framework, the InsufficientCapacityException gets masked and the special planners are never called; the job framework needs to be fixed to rethrow the correct exception.
- Also, the VM work job framework was not setting the DeploymentPlanner on the VmWorkJob, so the HA planner passed in by HAMgr was not being used.
- The job framework now sets the planner passed in by any caller of the VM start operation on the job (see the sketch below).
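A hedged sketch of both fixes, with made-up job and manager names standing in for the real VM work job framework classes:

```java
// 1) When the start job is created, record the planner chosen by the caller
//    (e.g. the HA manager) on the job instead of dropping it.
private void submitVmStartJob(VirtualMachine vm, DeploymentPlanner planner) {
    VmStartWorkJob job = new VmStartWorkJob(vm.getId());
    job.setDeploymentPlanner(planner == null ? null : planner.getName());
    jobManager.submitAsyncJob(job);
}

// 2) When the caller unpacks the async job result, rethrow the original
//    InsufficientCapacityException instead of masking it behind a generic failure,
//    so HA can retry with the planner that skips the disable-threshold check.
private void unwrapJobResult(Object jobResult) throws InsufficientCapacityException {
    if (jobResult instanceof InsufficientCapacityException) {
        throw (InsufficientCapacityException) jobResult;
    }
    if (jobResult instanceof Throwable) {
        throw new RuntimeException("VM start job failed", (Throwable) jobResult);
    }
}
```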
This makes sure dom0 on XenServer doesn't get hammered
when copying templates. It doesn't make sense to use
dom0's cache, as the template does not fit in
memory. The directIO flags prevent it from trying.
(cherry picked from commit 4e1527e87aaaa87d14d3c7d3a6782b80cbf36a8c)
Upgrade fails if the value is stored using plain-text encoding; the value needs to
be encrypted (if a key was provided when the db was set up). See the sketch below.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
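A small sketch of the idea, assuming the usual upgrade-path JDBC connection and a DBEncryptionUtil-style helper; dbEncryptionEnabled() and the column names are illustrative assumptions:

```java
// Store the configuration value encrypted when a DB key was provided at setup time,
// so later upgrade steps that expect an encrypted value do not fail on plain text.
void updateConfigValue(Connection conn, String name, String plainValue) throws SQLException {
    String toStore = dbEncryptionEnabled() ? DBEncryptionUtil.encrypt(plainValue) : plainValue;
    try (PreparedStatement pstmt = conn.prepareStatement(
            "UPDATE `cloud`.`configuration` SET value = ? WHERE name = ?")) {
        pstmt.setString(1, toStore);
        pstmt.setString(2, name);
        pstmt.executeUpdate();
    }
}
```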
Steps to reproduce if you have this issue:
- Create a VM's volume snapshot
- Remove the VM's template and mark it as removed with a timestamp in the DB
- Restart the mgmt server and create a volume from the snapshot; you should get an NPE
Fix: In `storagePoolHasEnoughSpace`, we search for a VM's volume's
snapshot's template by id only, without including removed templates. This is a
corner case: the NPE hits when that template has been marked removed, so we
should search including removed templates (see the sketch after this message).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit f189c105d8dde5491697b171b969e757f8f15858)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
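The change amounts to switching the DAO lookup to one that includes removed rows; a sketch with CloudStack-style names (the exact call site may differ):

```java
// Before: findById() returns null once the template's removed timestamp is set,
// which later dereferences into an NPE inside storagePoolHasEnoughSpace().
// VMTemplateVO template = _templateDao.findById(volume.getTemplateId());

// After: include removed templates so the size calculation still has a row to read.
VMTemplateVO template = _templateDao.findByIdIncludingRemoved(volume.getTemplateId());
if (template == null) {
    throw new CloudRuntimeException("Unable to find template " + volume.getTemplateId());
}
```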
When adding a VM, an entry is added to the /etc/hosts file on the VR, but any older
entries for a VM with the same name are not cleared. The fix uncomments the
command that removes any old entries for the VM.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 63298d9b742811919717ffd6303c8a2e9d37a3dd)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
By default the iptables rules are updated to ACCEPT egress traffic.
If the network egress default policy is false, CS removes the ACCEPT rule and adds the DROP rule, which
is the default egress rule when there are no other egress rules.
If the CS network egress default policy is true, CS won't configure any default egress rule, because the
router already came up accepting egress traffic. If there are already egress rules for the network, then those
egress rules get applied on the VR.
For an isolated network without the firewall service, the VR allows egress traffic by default (guest network --> public network)
Added destination and source definitions. The -S flag can be used
to ignore this. It's the new default, as it is more secure
and does not impact the way things work (backwards compatible).
(cherry picked from commit ef3b4bb4e3342f166489034fa7149540d2ef1383)
If connecting the VPN takes some time, for example because
the other end is not (yet) up, CloudStack will delete
the VPN because ipsectunnel.sh does not return in time.
The VPN connection then enters the Error state.
This change makes sure ipsectunnel.sh returns in time
and lets ipsec connect in the background. If it all fails,
the connection enters the Disconnected state.
(cherry picked from commit 7f33f7c3969d3b217ad6977f01bb487ebeee665d)
Changed the default to no, as the other side may not be up yet.
If this check fails, the VPN enters the Error state and will not
work. It's safe to just let it connect on its own, so it will
connect when it can.
(cherry picked from commit f8d718e3e31ad517969663d24647fcbd9b50cc3d)
Changed 'auto=add' to 'auto=start' to make sure the tunnel starts.
When both sides are up they will connect. This resolves the
issue that the VPN would only connect within a small time frame.
(cherry picked from commit b95addd3efb45f61b129584ade49bad7bbaa16f8)
The biglock breaks creating VPNs when other scripts that also use
the same biglock run at the same time. These other scripts
do nothing that could harm our deployment, and even multiple
VPNs can safely be created simultaneously.
(cherry picked from commit 8b412ce194eaf195dc77531379687de43e14a088)
The last commit 513adab51b53ba1acdea908225cfffab90ca1595 didn't fully fix it: only scenario #2 was handled, not #1.
#1 is fixed as part of this commit (see the sketch below).
1. The VM is in the Stopped state in CS due to a 'PowerMissing' report from the old host (hostId is null) and then there is a 'PowerOn' report from a new host
2. The VM is in the Running state in CS and there is a 'PowerOn' report from a new host
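A hedged sketch of the handling described above; the names are illustrative only, assuming the power report carries the reporting host's id, and state changes in the real sync code go through the VM state machine:

```java
// On a 'PowerOn' report from a host other than the recorded one, move the VM to the
// reporting host and mark it Running. This covers both scenarios above: the VM was
// Stopped with hostId null (1) or Running on another host (2).
private void handlePowerOnReport(VMInstanceVO vm, long reportingHostId) {
    Long recordedHostId = vm.getHostId();  // null after a 'PowerMissing' report (scenario 1)
    if (recordedHostId == null || recordedHostId.longValue() != reportingHostId) {
        vm.setHostId(reportingHostId);
        vm.setState(VirtualMachine.State.Running);
        vmInstanceDao.update(vm.getId(), vm);
    }
}
```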
- Increases the disable thresholds for developers
- Removes the use of local storage for systemvms
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 314e2daceeeecccdbdc34973d039d16817d2d166)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>