
Installing Kubernetes with Contrail

Ovidiu Valeanu edited this page Sep 22, 2020 · 22 revisions

Installing Kubernetes on Master and Worker nodes

Since Kubernetes 1.5, container runtimes are integrated through the Container Runtime Interface (CRI). The CRI is a gRPC API that allows the kubelet to interface with a container runtime. Kubernetes can be deployed using various container runtimes; I will cover only Docker, containerd, and CRI-O here. Read a comparison of them here.

For an HA deployment, you will need a load balancer in front of the Kubernetes API. An HAProxy node can be configured easily.

This is an example of an HAProxy config.
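A minimal sketch of such a config, terminating TCP on the load-balancer IP and balancing across the three masters (IPs taken from this walkthrough; tune timeouts and health checks for your environment):

```
frontend k8s-api
    bind 172.16.125.120:6443
    mode tcp
    option tcplog
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    option tcp-check
    balance roundrobin
    server k8s-master1 172.16.125.115:6443 check
    server k8s-master2 172.16.125.116:6443 check
    server k8s-master3 172.16.125.117:6443 check
```

TCP mode is used so HAProxy passes the TLS connection through to the API servers untouched.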

Choose which container runtime you would like to use.

On all nodes

Prepare the nodes and install Kubernetes components.

Use any of these scripts for CentOS or these scripts for Ubuntu.

On the first master

Create K8s cluster

# kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs

In my case, LOAD_BALANCER_DNS:LOAD_BALANCER_PORT is 172.16.125.120:6443. This is the IP of the HAProxy node.

If you are using containerd or CRI-O, you need to specify the container runtime socket.

# kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs --cri-socket /run/containerd/containerd.sock

or

# kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs --cri-socket /var/run/crio/crio.sock
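The same settings can also be expressed in a kubeadm config file instead of flags. A hedged sketch for kubeadm v1.18 (the `v1beta2` config API), reusing the containerd socket and load-balancer endpoint from this walkthrough:

```yaml
# kubeadm-config.yaml -- flag-equivalent configuration (sketch)
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "172.16.125.120:6443"
```

Apply it with `kubeadm init --config kubeadm-config.yaml --upload-certs`.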

Once "kubeadm init" completes, save the "join" commands printed to the shell.

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.16.125.120:6443 --token 6uxu5v.48n4vc7phxb8mkcr \
    --discovery-token-ca-cert-hash sha256:176c11030c253e58cfdce1637da308260e7632153c49777d005c08f519eab120 \
    --control-plane --certificate-key 7eeae4ad3ba23ce59878eaa3821513b4aaf6d7fc3ca6d98cafe4eb30712118d6

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.125.120:6443 --token 6uxu5v.48n4vc7phxb8mkcr \
    --discovery-token-ca-cert-hash sha256:176c11030c253e58cfdce1637da308260e7632153c49777d005c08f519eab120

Run the following commands to set up the kubectl CLI:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join the other two master nodes by running this command as root:

# kubeadm join 172.16.125.120:6443 --token 6uxu5v.48n4vc7phxb8mkcr \
    --discovery-token-ca-cert-hash sha256:176c11030c253e58cfdce1637da308260e7632153c49777d005c08f519eab120 \
    --control-plane --certificate-key 7eeae4ad3ba23ce59878eaa3821513b4aaf6d7fc3ca6d98cafe4eb30712118d6

On the Workers

Join the cluster by running the following as root:

# kubeadm join 172.16.125.120:6443 --token 6uxu5v.48n4vc7phxb8mkcr \
    --discovery-token-ca-cert-hash sha256:176c11030c253e58cfdce1637da308260e7632153c49777d005c08f519eab120

On the master

Check that the nodes have joined:

$ kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-master1   NotReady    master   18h   v1.18.9   172.16.125.115   <none>        Ubuntu 18.04.5 LTS   4.15.0-118-generic   docker://18.9.9
k8s-master2   NotReady    master   18h   v1.18.9   172.16.125.116   <none>        Ubuntu 18.04.5 LTS   4.15.0-118-generic   docker://18.9.9
k8s-master3   NotReady    master   18h   v1.18.9   172.16.125.117   <none>        Ubuntu 18.04.5 LTS   4.15.0-118-generic   docker://18.9.9
k8s-node1     NotReady    <none>   18h   v1.18.9   172.16.125.118   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   docker://18.9.9
k8s-node2     NotReady    <none>   18h   v1.18.9   172.16.125.119   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   docker://18.9.9
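The nodes report NotReady because no CNI plugin is installed yet; Contrail will provide it in the next steps. As an aside, a tiny helper (hypothetical, not part of any tooling used here) can count how many nodes are still not Ready by parsing the output above:

```shell
# count_not_ready: reads `kubectl get nodes` output on stdin and prints
# the number of nodes whose STATUS column is not "Ready".
# (Hypothetical helper; the STATUS column is field 2 of the table.)
count_not_ready() {
  awk 'NR > 1 && $2 != "Ready" { n++ } END { print n + 0 }'
}

# Usage:
#   kubectl get nodes | count_not_ready
```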

Remove the taints on the master nodes, so you can schedule pods on them.

$ kubectl taint nodes --all node-role.kubernetes.io/master-

Create secret for downloading Contrail docker images

$ kubectl create secret docker-registry contrail-registry --docker-server=hub.juniper.net/contrail-nightly --docker-username=JNPR-FieldUserXXX --docker-password=XXXXXXXXXXX [email protected] -n kube-system

Install Contrail by applying the single YAML file, [contrail_single.yaml](https://github.com/ovaleanujnpr/kubernetes/blob/master/single_yaml/contrail_single.yaml). Replace the %MASTER_IP% placeholder with the master IP address before applying.

$ kubectl apply -f contrail_single.yaml
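The %MASTER_IP% substitution can be scripted; a small sketch (the function name is mine, and it assumes the placeholder appears literally in the manifest):

```shell
# substitute_master_ip FILE IP -- replace every %MASTER_IP% placeholder
# in FILE with IP, in place. (Hypothetical helper; keep a backup of the
# manifest if you want to re-run it with a different address.)
substitute_master_ip() {
  sed -i "s/%MASTER_IP%/$2/g" "$1"
}

# Usage:
#   substitute_master_ip contrail_single.yaml 172.16.125.115
```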

You can also install Contrail without the optional analytics components. In this case, apply [contrail_single_wo_analytics.yaml](https://github.com/ovaleanujnpr/kubernetes/blob/master/single_yaml/contrail_single_wo_analytics.yaml) instead:

$ kubectl apply -f contrail_single_wo_analytics.yaml

Watch contrail pods being created.

$ watch -n5 kubectl get pods -n kube-system

Once it finishes, all the pods should be up and running. Note: this is an example of the cluster running without the optional analytics components.

$ kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
config-zookeeper-4klts                1/1     Running   0          18h
config-zookeeper-cs2fk                1/1     Running   0          18h
config-zookeeper-wgrtb                1/1     Running   0          18h
contrail-agent-ch8kv                  3/3     Running   2          18h
contrail-agent-kh9cf                  3/3     Running   1          18h
contrail-agent-kqtmz                  3/3     Running   0          18h
contrail-agent-m6nrz                  3/3     Running   1          18h
contrail-agent-qgzxt                  3/3     Running   3          18h
contrail-analytics-6666s              4/4     Running   1          18h
contrail-analytics-jrl5x              4/4     Running   4          18h
contrail-analytics-x756g              4/4     Running   4          18h
contrail-configdb-2h7kd               3/3     Running   4          18h
contrail-configdb-d57tb               3/3     Running   4          18h
contrail-configdb-zpmsq               3/3     Running   4          18h
contrail-controller-config-c2226      6/6     Running   9          18h
contrail-controller-config-pbbmz      6/6     Running   5          18h
contrail-controller-config-zqkm6      6/6     Running   4          18h
contrail-controller-control-2kz4c     5/5     Running   2          18h
contrail-controller-control-k522d     5/5     Running   0          18h
contrail-controller-control-nr54m     5/5     Running   2          18h
contrail-controller-webui-5vxl7       2/2     Running   0          18h
contrail-controller-webui-mzpdv       2/2     Running   1          18h
contrail-controller-webui-p8rc2       2/2     Running   1          18h
contrail-kube-manager-88c4f           1/1     Running   0          18h
contrail-kube-manager-fsz2z           1/1     Running   0          18h
contrail-kube-manager-qc27b           1/1     Running   0          18h
coredns-684f7f6cb4-4mmgc              1/1     Running   0          39m
coredns-684f7f6cb4-dvpjk              1/1     Running   0          53m
coredns-684f7f6cb4-m6sj7              1/1     Running   0          30m
coredns-684f7f6cb4-nfkfh              1/1     Running   0          30m
coredns-684f7f6cb4-tk48d              1/1     Running   0          32m
etcd-k8s-master1                      1/1     Running   0          40m
etcd-k8s-master2                      1/1     Running   0          41m
etcd-k8s-master3                      1/1     Running   0          38m
kube-apiserver-k8s-master1            1/1     Running   0          40m
kube-apiserver-k8s-master2            1/1     Running   0          41m
kube-apiserver-k8s-master3            1/1     Running   0          38m
kube-controller-manager-k8s-master1   1/1     Running   0          40m
kube-controller-manager-k8s-master2   1/1     Running   0          41m
kube-controller-manager-k8s-master3   1/1     Running   0          38m
kube-proxy-975tn                      1/1     Running   0          54m
kube-proxy-9qzc9                      1/1     Running   0          54m
kube-proxy-fgwqt                      1/1     Running   0          55m
kube-proxy-n6nnq                      1/1     Running   0          55m
kube-proxy-wf289                      1/1     Running   0          54m
kube-scheduler-k8s-master1            1/1     Running   0          40m
kube-scheduler-k8s-master2            1/1     Running   0          41m
kube-scheduler-k8s-master3            1/1     Running   0          36m
rabbitmq-82lmk                        1/1     Running   0          18h
rabbitmq-b2lz8                        1/1     Running   0          18h
rabbitmq-f2nfc                        1/1     Running   0          18h
redis-42tkr                           1/1     Running   0          18h
redis-bj76v                           1/1     Running   0          18h
redis-ctzhg                           1/1     Running   0          18h
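A small helper (hypothetical, not part of any tooling used here) can tally the pods that are not yet Running from a listing like the one above:

```shell
# pods_not_running: reads `kubectl get pods` output on stdin and prints
# the number of pods whose STATUS column is not "Running".
# (Hypothetical helper; the STATUS column is field 3 of the table.)
pods_not_running() {
  awk 'NR > 1 && $3 != "Running" { n++ } END { print n + 0 }'
}

# Usage:
#   kubectl get pods -n kube-system | pods_not_running
```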

If you need to add a PV using local-storage, follow the installation guide here.

crictl is a tool installed alongside the Kubernetes components. For clusters using the containerd or CRI-O container runtime, use crictl to pull images and to check the status of containers or pods.

To pull an image from a private Docker repo, use:

# crictl pull --creds JNPR-FieldUserXXX:XXXXXXXXXXX hub.juniper.net/contrail-nightly/contrail-status:master.latest

To check the status of images and containers:

# crictl images

# crictl ps

Check crictl help for more options.
