This feature allows OVN-Kubernetes to be used as the CNI plugin with AKO on OpenShift. Starting with AKO 1.10.1, the OVN-Kubernetes Container Network Interface (CNI) plugin is supported on OpenShift; prior to 1.10.1, OpenShift SDN was the only supported CNI plugin on OpenShift.
To use OVN-Kubernetes as the CNI plugin with AKO, set the AKOSettings.cniPlugin value in the AKO Helm chart's values.yaml to ovn-kubernetes. The sample values.yaml can be found at https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes/blob/master/helm/ako/values.yaml, and the description of the AKOSettings.cniPlugin field can be found at https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes/blob/master/docs/values.md#akosettingscniplugin.
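For example, the relevant portion of values.yaml would look like the following (all other AKOSettings fields omitted for brevity):

```yaml
AKOSettings:
  cniPlugin: ovn-kubernetes
```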
AKO needs to read the pod CIDR subnets configured on Kubernetes or OpenShift nodes in order to create static routes on the Avi Controller, so that the pool backend servers (pods) are reachable from the Service Engines. The OVN-Kubernetes CNI stores the pod CIDR subnet(s) of each node in the k8s.ovn.org/node-subnets annotation. AKO reads the default pod CIDR subnet value(s) from this annotation on each node and configures the required static routes on the Avi Controller. Sample annotation values are shown below.
For OpenShift versions earlier than 4.13:
k8s.ovn.org/node-subnets: '{"default":"10.128.0.0/23"}'
For OpenShift 4.13 and later:
k8s.ovn.org/node-subnets: '{"default":["10.128.0.0/23"]}'
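The version difference above is only in the JSON shape of the "default" value: a plain string before 4.13, a list from 4.13 onward. The following sketch (illustrative only, not AKO's actual code) shows how both forms can be normalized to a list of CIDR strings:

```python
import json

def default_pod_cidrs(node_subnets_annotation: str) -> list:
    """Return the pod CIDR subnet(s) stored under the "default" key of the
    k8s.ovn.org/node-subnets annotation, handling both formats."""
    value = json.loads(node_subnets_annotation)["default"]
    # OpenShift < 4.13 stores a single CIDR string; 4.13+ stores a list.
    return value if isinstance(value, list) else [value]

# Pre-4.13 form
print(default_pod_cidrs('{"default":"10.128.0.0/23"}'))    # ['10.128.0.0/23']
# 4.13+ form
print(default_pod_cidrs('{"default":["10.128.0.0/23"]}'))  # ['10.128.0.0/23']
```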
With the OVN-Kubernetes CNI, some OpenShift installations are set up such that, by default, the routing gateway (OVS) performs source NAT on pod traffic leaving the nodes. This source NAT causes the health monitoring performed by the Avi Controller to fail, so the pool servers, i.e., pods, are marked down on the Avi Controller. This issue occurs only when AKO runs with the service type configured as ClusterIP (ClusterIP mode); in NodePort mode the pools come up normally. To learn more about the serviceType configuration, see https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes/blob/master/docs/values.md#l7settingsservicetype.
To use ClusterIP mode, source NAT has to be disabled. However, disabling SNAT breaks the ability of pods to route externally using the node's IP address, which in turn breaks NodePort mode. Use NodePort mode if disabling SNAT is not desired. The changes below disable SNAT for the namespaces that require Ingress/Route support.
- Create a ConfigMap to set disable-snat-multiple-gws for the cluster network operator. Create a file named cm_gateway-mode-config.yaml with the following content:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-mode-config
  namespace: openshift-network-operator
data:
  disable-snat-multiple-gws: "true"
  mode: "shared"
immutable: true
- Create the ConfigMap with:
oc apply -f cm_gateway-mode-config.yaml
- Add the k8s.ovn.org/routing-external-gws annotation to the namespaces that require Ingress/Route support.
- Edit each such namespace with oc edit namespace <name-of-namespace> and add the k8s.ovn.org/routing-external-gws annotation as shown below:
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    k8s.ovn.org/routing-external-gws: <ip-of-node-gateway>
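Equivalently, the annotation can be applied without opening an editor; the namespace name and gateway IP below are placeholders to be substituted for your environment:

```shell
oc annotate namespace <name-of-namespace> \
  k8s.ovn.org/routing-external-gws=<ip-of-node-gateway>
```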