
Installing Kubernetes with Contrail

Ovidiu Valeanu edited this page Feb 2, 2021 · 22 revisions

Installing Kubernetes on Master and Worker nodes

Since Kubernetes 1.5, container runtimes are integrated through the Container Runtime Interface (CRI). The CRI is a gRPC API that allows the kubelet to interface with a container runtime. Kubernetes can be deployed with various container runtimes; this guide covers only Docker, containerd, and CRI-O. Read a comparison of them here.

For an HA deployment, you will need a load balancer in front of the K8s API. An HAProxy node can be configured easily.

This is an example of an HAProxy config.
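The exact config depends on your environment, but a minimal sketch of load-balancing the K8s API across three masters might look like the following (the IPs are the example values used later on this page; backend and frontend names are my own):

```
frontend k8s-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend k8s-api-backend

backend k8s-api-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server k8s-master1 172.16.125.115:6443 check
    server k8s-master2 172.16.125.116:6443 check
    server k8s-master3 172.16.125.117:6443 check
```

TCP mode is used (rather than HTTP) because the API server terminates its own TLS.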

Choose which container runtime you would like to use.

On all nodes

Prepare the nodes and install Kubernetes components.

Use any of these scripts for CentOS or these scripts for Ubuntu.
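Whichever scripts you use, node preparation typically includes disabling swap and letting iptables see bridged traffic. A common sysctl fragment for the latter (an assumption about what such prep scripts do, not a copy of the linked ones) is:

```
# /etc/sysctl.d/k8s.conf -- allow iptables to see bridged traffic
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
```

Apply it with `sysctl --system` after loading the `br_netfilter` kernel module.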

On the first master

Create the K8s cluster:

# kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs

In my case, LOAD_BALANCER_DNS:LOAD_BALANCER_PORT is 172.16.125.120:6443. This is the IP of the HAProxy node.

If you are using containerd or cri-o, you need to specify the container runtime endpoint.

# kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs --cri-socket /run/containerd/containerd.sock

or

# kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs --cri-socket /var/run/crio/crio.sock

Once "kubeadm init" completes, save the "join" commands that are printed to the shell:

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.16.125.120:6443 --token 6uxu5v.48n4vc7phxb8mkcr \
    --discovery-token-ca-cert-hash sha256:176c11030c253e58cfdce1637da308260e7632153c49777d005c08f519eab120 \
    --control-plane --certificate-key 7eeae4ad3ba23ce59878eaa3821513b4aaf6d7fc3ca6d98cafe4eb30712118d6

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.125.120:6443 --token 6uxu5v.48n4vc7phxb8mkcr \
    --discovery-token-ca-cert-hash sha256:176c11030c253e58cfdce1637da308260e7632153c49777d005c08f519eab120
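If you lose the printed join command, its pieces can be regenerated. The discovery hash is just the SHA-256 of the cluster CA's public key; the helper below recomputes it (the function name `ca_cert_hash` is mine, but the openssl pipeline is the standard kubeadm recipe):

```shell
#!/bin/sh
# Recompute kubeadm's --discovery-token-ca-cert-hash from a CA certificate.
# On a master the cluster CA lives at /etc/kubernetes/pki/ca.crt by default.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# Usage: echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
```

A complete worker join command (with a freshly created token) can also be printed on a master with `kubeadm token create --print-join-command`.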

Run the following commands to set up the kubectl CLI:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

On the additional master nodes

Join the other two master nodes by running this command as root:

# kubeadm join 172.16.125.120:6443 --token 6uxu5v.48n4vc7phxb8mkcr \
    --discovery-token-ca-cert-hash sha256:176c11030c253e58cfdce1637da308260e7632153c49777d005c08f519eab120 \
    --control-plane --certificate-key 7eeae4ad3ba23ce59878eaa3821513b4aaf6d7fc3ca6d98cafe4eb30712118d6

On the Workers

Join the cluster by running this command on each worker as root:

# kubeadm join 172.16.125.120:6443 --token 6uxu5v.48n4vc7phxb8mkcr \
    --discovery-token-ca-cert-hash sha256:176c11030c253e58cfdce1637da308260e7632153c49777d005c08f519eab120

On the master

Check that the nodes have joined. They will show NotReady until the Contrail CNI plugin is installed:

$ kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-master1   NotReady    master   18h   v1.18.9   172.16.125.115   <none>        Ubuntu 18.04.5 LTS   4.15.0-118-generic   docker://18.9.9
k8s-master2   NotReady    master   18h   v1.18.9   172.16.125.116   <none>        Ubuntu 18.04.5 LTS   4.15.0-118-generic   docker://18.9.9
k8s-master3   NotReady    master   18h   v1.18.9   172.16.125.117   <none>        Ubuntu 18.04.5 LTS   4.15.0-118-generic   docker://18.9.9
k8s-node1     NotReady    <none>   18h   v1.18.9   172.16.125.118   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   docker://18.9.9
k8s-node2     NotReady    <none>   18h   v1.18.9   172.16.125.119   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   docker://18.9.9

Create a secret for downloading the Contrail Docker images:

$ kubectl create secret docker-registry contrail-registry --docker-server=hub.juniper.net/contrail --docker-username=JNPR-FieldUserXXX --docker-password=XXXXXXXXXXX [email protected] -n kube-system

Install Contrail by applying the single yaml file. There is an example here. Before applying the yaml file, replace the %MASTER_IP% variable with the masters' IP addresses separated by commas (e.g. 172.16.125.115,172.16.125.116,172.16.125.117) and %K8S_API_IP% with the K8s API address you used for kubeadm init.
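The substitution can be scripted; a sketch follows (the `render` helper is my own, and the IP values are the example ones from this guide -- adjust them to your environment):

```shell
#!/bin/sh
# Fill in the template placeholders before running "kubectl apply".
MASTER_IPS="172.16.125.115,172.16.125.116,172.16.125.117"
K8S_API_IP="172.16.125.120"
render() {
  sed -e "s/%MASTER_IP%/${MASTER_IPS}/g" \
      -e "s/%K8S_API_IP%/${K8S_API_IP}/g" "$1"
}
# Usage: render contrail_single.yaml > contrail_single.rendered.yaml
```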

$ kubectl apply -f contrail_single.yaml

You can also install Contrail without the optional analytics components. There is an example here.

$ kubectl apply -f contrail_single_wo_analytics.yaml

Watch the Contrail pods being created.

$ watch -n5 kubectl get pods -n kube-system

Once finished, all the pods should be up and running. Note: this is an example of the cluster running without the optional analytics components.

$ kubectl get pods -n kube-system -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
config-zookeeper-4klts                1/1     Running   0          18h   172.16.125.117   k8s-master3   <none>           <none>
config-zookeeper-cs2fk                1/1     Running   0          18h   172.16.125.116   k8s-master2   <none>           <none>
config-zookeeper-wgrtb                1/1     Running   0          18h   172.16.125.115   k8s-master1   <none>           <none>
contrail-agent-ch8kv                  3/3     Running   2          18h   172.16.125.118   k8s-node1     <none>           <none>
contrail-agent-kh9cf                  3/3     Running   1          18h   172.16.125.117   k8s-master3   <none>           <none>
contrail-agent-kqtmz                  3/3     Running   0          18h   172.16.125.115   k8s-master1   <none>           <none>
contrail-agent-m6nrz                  3/3     Running   1          18h   172.16.125.116   k8s-master2   <none>           <none>
contrail-agent-qgzxt                  3/3     Running   3          18h   172.16.125.119   k8s-node2     <none>           <none>
contrail-analytics-6666s              4/4     Running   1          18h   172.16.125.115   k8s-master1   <none>           <none>
contrail-analytics-jrl5x              4/4     Running   4          18h   172.16.125.117   k8s-master3   <none>           <none>
contrail-analytics-x756g              4/4     Running   4          18h   172.16.125.116   k8s-master2   <none>           <none>
contrail-configdb-2h7kd               3/3     Running   4          18h   172.16.125.116   k8s-master2   <none>           <none>
contrail-configdb-d57tb               3/3     Running   4          18h   172.16.125.117   k8s-master3   <none>           <none>
contrail-configdb-zpmsq               3/3     Running   4          18h   172.16.125.115   k8s-master1   <none>           <none>
contrail-controller-config-c2226      6/6     Running   9          18h   172.16.125.116   k8s-master2   <none>           <none>
contrail-controller-config-pbbmz      6/6     Running   5          18h   172.16.125.115   k8s-master1   <none>           <none>
contrail-controller-config-zqkm6      6/6     Running   4          18h   172.16.125.117   k8s-master3   <none>           <none>
contrail-controller-control-2kz4c     5/5     Running   2          18h   172.16.125.116   k8s-master2   <none>           <none>
contrail-controller-control-k522d     5/5     Running   0          18h   172.16.125.115   k8s-master1   <none>           <none>
contrail-controller-control-nr54m     5/5     Running   2          18h   172.16.125.117   k8s-master3   <none>           <none>
contrail-controller-webui-5vxl7       2/2     Running   0          18h   172.16.125.115   k8s-master1   <none>           <none>
contrail-controller-webui-mzpdv       2/2     Running   1          18h   172.16.125.116   k8s-master2   <none>           <none>
contrail-controller-webui-p8rc2       2/2     Running   1          18h   172.16.125.117   k8s-master3   <none>           <none>
contrail-kube-manager-88c4f           1/1     Running   0          18h   172.16.125.115   k8s-master1   <none>           <none>
contrail-kube-manager-fsz2z           1/1     Running   0          18h   172.16.125.116   k8s-master2   <none>           <none>
contrail-kube-manager-qc27b           1/1     Running   0          18h   172.16.125.117   k8s-master3   <none>           <none>
coredns-684f7f6cb4-4mmgc              1/1     Running   0          59m   10.47.255.251    k8s-master2   <none>           <none>
coredns-684f7f6cb4-dvpjk              1/1     Running   0          73m   10.47.255.252    k8s-master1   <none>           <none>
coredns-684f7f6cb4-m6sj7              1/1     Running   0          49m   10.47.255.249    k8s-node1     <none>           <none>
coredns-684f7f6cb4-nfkfh              1/1     Running   0          49m   10.47.255.248    k8s-node1     <none>           <none>
coredns-684f7f6cb4-tk48d              1/1     Running   0          52m   10.47.255.250    k8s-master3   <none>           <none>
etcd-k8s-master1                      1/1     Running   0          60m   172.16.125.115   k8s-master1   <none>           <none>
etcd-k8s-master2                      1/1     Running   0          61m   172.16.125.116   k8s-master2   <none>           <none>
etcd-k8s-master3                      1/1     Running   0          58m   172.16.125.117   k8s-master3   <none>           <none>
kube-apiserver-k8s-master1            1/1     Running   0          60m   172.16.125.115   k8s-master1   <none>           <none>
kube-apiserver-k8s-master2            1/1     Running   0          61m   172.16.125.116   k8s-master2   <none>           <none>
kube-apiserver-k8s-master3            1/1     Running   0          58m   172.16.125.117   k8s-master3   <none>           <none>
kube-controller-manager-k8s-master1   1/1     Running   0          60m   172.16.125.115   k8s-master1   <none>           <none>
kube-controller-manager-k8s-master2   1/1     Running   0          61m   172.16.125.116   k8s-master2   <none>           <none>
kube-controller-manager-k8s-master3   1/1     Running   0          58m   172.16.125.117   k8s-master3   <none>           <none>
kube-proxy-975tn                      1/1     Running   0          74m   172.16.125.119   k8s-node2     <none>           <none>
kube-proxy-9qzc9                      1/1     Running   0          74m   172.16.125.118   k8s-node1     <none>           <none>
kube-proxy-fgwqt                      1/1     Running   0          75m   172.16.125.115   k8s-master1   <none>           <none>
kube-proxy-n6nnq                      1/1     Running   0          75m   172.16.125.116   k8s-master2   <none>           <none>
kube-proxy-wf289                      1/1     Running   0          74m   172.16.125.117   k8s-master3   <none>           <none>
kube-scheduler-k8s-master1            1/1     Running   0          60m   172.16.125.115   k8s-master1   <none>           <none>
kube-scheduler-k8s-master2            1/1     Running   0          61m   172.16.125.116   k8s-master2   <none>           <none>
kube-scheduler-k8s-master3            1/1     Running   0          56m   172.16.125.117   k8s-master3   <none>           <none>
rabbitmq-82lmk                        1/1     Running   0          18h   172.16.125.116   k8s-master2   <none>           <none>
rabbitmq-b2lz8                        1/1     Running   0          18h   172.16.125.115   k8s-master1   <none>           <none>
rabbitmq-f2nfc                        1/1     Running   0          18h   172.16.125.117   k8s-master3   <none>           <none>
redis-42tkr                           1/1     Running   0          18h   172.16.125.115   k8s-master1   <none>           <none>
redis-bj76v                           1/1     Running   0          18h   172.16.125.116   k8s-master2   <none>           <none>
redis-ctzhg                           1/1     Running   0          18h   172.16.125.117   k8s-master3   <none>           <none>

crictl is a tool installed alongside the Kubernetes components. For clusters using the containerd or CRI-O container runtime, use crictl to pull images and check the status of containers and pods.
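crictl finds the runtime via the `--runtime-endpoint` flag or via `/etc/crictl.yaml`. For containerd, the file would look like this (swap in the CRI-O socket path, `/var/run/crio/crio.sock`, if you use CRI-O):

```
# /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
```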

To pull an image from a private Docker repository, use:

# crictl pull --creds JNPR-FieldUserXXX:XXXXXXXXXXX hub.juniper.net/contrail-nightly/contrail-status:master.latest

To check the status of images and containers:

# crictl images

# crictl ps

Check crictl help for more options.

You can try the Contrail Kubernetes use cases here, such as Load Balancing, Namespace and Custom Isolation, and Pod Multi-Interface.

If you need to add persistent volumes (PV) using local-storage, follow the installation guide here.
