This PR fixes the test failures with the CKS HA-cluster upgrade.
In production, a CKS HA cluster should also have at least 3 control VMs.
The etcd cluster requires 3 members for reliable HA. The etcd daemon on the control VMs uses the Raft protocol to determine node roles, and with only 2 control VMs etcd loses quorum and becomes unreliable during an upgrade of a CKS HA cluster.
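For context, a minimal sketch of the etcd/Raft quorum arithmetic behind the 3-control-VM requirement (illustrative only, not CloudStack code):

```java
// Illustrative only: etcd/Raft quorum arithmetic, not part of CloudStack.
public class EtcdQuorumExample {
    // Quorum for an etcd cluster of n members is floor(n/2) + 1.
    static int quorum(int members) {
        return members / 2 + 1;
    }

    // Number of members that can be lost while the cluster stays writable.
    static int faultTolerance(int members) {
        return members - quorum(members);
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 2, 3}) {
            System.out.printf("members=%d quorum=%d faultTolerance=%d%n",
                    n, quorum(n), faultTolerance(n));
        }
        // members=2 -> quorum=2, faultTolerance=0: taking one control VM down
        // during a rolling upgrade leaves etcd without quorum, hence 3 control VMs.
    }
}
```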
Sometimes the hostStats object of an agent becomes null in the management server. It is a rare situation and we haven't found the root cause yet, but it occurs occasionally in our CloudStack deployments with many hosts.
The hostStats object is null even though the agent is Up and hosting multiple VMs; the VM consoles are still accessible and tasks can still be executed on them.
This pull request doesn't address the issue directly; rather, it exposes those hosts in Prometheus so we can restart the agent and recover the missing information.
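As a rough illustration of the intended behaviour (a sketch with hypothetical names; HostEntry, MetricSink and the metric name are assumptions, not the actual exporter API), the idea is to still emit something for a host whose stats object is null instead of silently skipping it:

```java
// Hypothetical sketch of the exporter behaviour, not the real CloudStack classes.
import java.util.List;

class HostEntry {
    String name;
    String state;      // e.g. "Up"
    Object hostStats;  // null in the problematic cases
}

interface MetricSink {
    void add(String metric, String hostName, double value);
}

class HostStatsCollector {
    void collect(List<HostEntry> hosts, MetricSink sink) {
        for (HostEntry host : hosts) {
            if (host.hostStats == null) {
                // Previously such hosts were effectively invisible; exporting a
                // marker metric lets operators spot them and restart the agent.
                sink.add("cloudstack_host_stats_missing", host.name, 1);
                continue;
            }
            // ... export the usual CPU/memory metrics from host.hostStats ...
        }
    }
}
```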
* NSX: Add ALL LB IP to the list of route advertisements in tier1
* NSX: Support Source NAT on NSX Isolated networks
* NSX: CKS support
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Add support for routed NSX Isolated networks and non-RFC 1918 compliant IPs
* Add Firewall rules
* build failure - fix unit test
* fix NPEs
* Add support to delete firewall rules
* update nsx cks offering
* add license
* update order of ports in PF & FW rules
* fix filter for getting transport zones
* CKS support changed - MTU updated, etc
* add LB for CKS on VPC
* address comments
* adapt upstream cks logic for vpc
* revert MTU hack
* update UI changes as per upstream fix
* change display text for CKS network offerings for isolated and VPC tiers
* add extra line for linter
* address comment
* revert list change
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
There are tools, like cluster-api, which create and manage Kubernetes clusters on CloudStack. This PR adds the option to add unmanaged Kubernetes clusters, which are not managed by the CKS plugin. This helps provide a consolidated view of such clusters on CloudStack. The changes ensure that operations meant for managed clusters are not executed on unmanaged clusters.
Two new APIs have also been added:
1. addVirtualMachinesToKubernetesCluster - to add VMs to unmanaged clusters.
2. removeVirtualMachinesFromKubernetesCluster - to remove VMs from unmanaged clusters.
Two APIs have been updated:
1. createKubernetesCluster - made KUBERNETES_VERSION_ID, SERVICE_OFFERING_ID and SIZE not required for unmanaged clusters, and added an additional parameter, managed, which is true by default.
2. listKubernetesClusters - added a managed parameter to filter on the managed field (see the usage sketch below).
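A hedged usage sketch of the new and updated commands. Parameter names other than managed (e.g. id, virtualmachineids, name, zoneid) are assumptions for illustration, and request signing / session authentication is omitted; the snippet only prints the query strings that would be sent to the API.

```java
// Hedged sketch: builds the API commands touched by this PR.
// Parameter names other than "managed" are assumptions; auth/signing omitted.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class UnmanagedCksApiExamples {
    static String buildQuery(String command, Map<String, String> params) {
        Map<String, String> all = new LinkedHashMap<>();
        all.put("command", command);
        all.putAll(params);
        all.put("response", "json");
        return all.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        // Register an externally created (unmanaged) cluster.
        System.out.println(buildQuery("createKubernetesCluster", Map.of(
                "name", "capi-cluster-1",
                "zoneid", "<zone-uuid>",
                "managed", "false")));

        // Attach existing VMs to the unmanaged cluster.
        System.out.println(buildQuery("addVirtualMachinesToKubernetesCluster", Map.of(
                "id", "<cluster-uuid>",
                "virtualmachineids", "<vm-uuid-1>,<vm-uuid-2>")));

        // Detach them again.
        System.out.println(buildQuery("removeVirtualMachinesFromKubernetesCluster", Map.of(
                "id", "<cluster-uuid>",
                "virtualmachineids", "<vm-uuid-1>")));

        // List only unmanaged clusters.
        System.out.println(buildQuery("listKubernetesClusters", Map.of("managed", "false")));
    }
}
```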
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
This change allows CKS to be enabled by default on new installs. It does not affect the server or performance, but helps highlight Kubernetes (k8s) support in CloudStack.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
When a host is not tagged, its maintenance status is reported in the
cloudstack_hosts_total metric: a host with maintenance enabled is counted
as OFFLINE, and a host with maintenance disabled as ONLINE.
When a host is tagged, its maintenance status is now also verified to
ensure consistent behaviour.
In the Prometheus exporter, maintenance status is not checked for the cloudstack_hosts_total_by_tag metric, while it is checked for the cloudstack_hosts_total metric.
Whether hosts are classified by tag or not, the metrics should be computed the same way.
Fixes: #7470
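A minimal sketch of the intended behaviour, assuming hypothetical Host and collector names (the real exporter classes and label names differ): the same maintenance rule is applied whether hosts are counted in total or grouped by tag.

```java
// Illustrative sketch only; classes and label names are assumptions.
// Point of the fix: one maintenance rule for both cloudstack_hosts_total
// and cloudstack_hosts_total_by_tag.
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class Host {
    String tag;            // null or empty when the host is untagged
    boolean inMaintenance;
}

class HostTotalsCollector {
    // Returns counts keyed by "online"/"offline", optionally grouped by tag.
    Map<String, Integer> countHosts(List<Host> hosts, boolean byTag) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Host host : hosts) {
            // Same rule for both metrics: a host in maintenance counts as offline.
            String state = host.inMaintenance ? "offline" : "online";
            String key = byTag && host.tag != null && !host.tag.isEmpty()
                    ? host.tag + "/" + state
                    : state;
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }
}
```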
* 4.17:
api: fix new password is applied on host when update host password with update_passwd_on_host=false (#7092)
CKS: remove details when delete a cks cluster (#7104)
api/server: add project id/name in ssh keypair response (#7100)