
Software Architecture with C++, Second Edition

Software Architecture with C++: Designing Robust C++ Systems with Modern Architectural Practices, Second Edition, published by Packt

Chapter 18: Cloud Native Design

Important: The deployment was tested mainly with MicroK8s on Linux. It requires enabling the dns, ingress, and helm addons, checking the status, building the customer image, and deploying the Aspire dashboard and the customer application. The domain names were resolved to the loopback IP address 127.0.0.1. Your configuration may differ.
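The steps in the note above can be sketched as one script. It only restates commands that appear later in this chapter (addon names, the Helm chart, and the file names values.yaml and manifest.yaml), so treat it as a summary rather than a tested installer:

```shell
#!/bin/sh
# Sketch of the full MicroK8s deployment sequence described in this chapter.
# Run on a Linux host with MicroK8s installed; the guard lets the script
# exit cleanly on machines without MicroK8s.
if command -v microk8s >/dev/null 2>&1; then
  microk8s enable dns ingress helm   # mandatory addons
  microk8s status --wait-ready       # wait until the cluster is up

  # Deploy Aspire Dashboard with Helm
  microk8s helm repo add aspire-dashboard https://kube-the-home.github.io/aspire-dashboard-helm/
  microk8s helm install -f values.yaml aspire-dashboard aspire-dashboard/aspire-dashboard

  # Load the customer image and deploy the application
  docker save customer > customer.tar
  microk8s ctr image import customer.tar
  microk8s kubectl apply -f manifest.yaml
else
  echo "microk8s not installed; skipping"
fi
```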

MicroK8s is a production-grade, conformant Kubernetes distribution. This manual describes several Kubernetes tools and ways to configure a development environment.

MicroK8s

The examples were tested with MicroK8s (getting started) on Linux. Using this tool on macOS and Windows is similar.

Add your user to the microk8s group to avoid prefixing every command with sudo (log out and back in for the change to take effect):

sudo usermod -a -G microk8s $USER

MicroK8s addons (how to manage addons):

Mandatory addons (for the examples):

  • dns - deploys CoreDNS
  • helm - installs the Helm package manager
  • ingress - a simple ingress controller for external access
microk8s enable dns
microk8s enable ingress
microk8s enable helm

Optional addons:

microk8s enable dashboard
microk8s enable registry
microk8s enable community

Check the status:

microk8s status --wait-ready

You can get the IP address of your Kubernetes node with this command and provide that address to MetalLB to assign external IPs:

microk8s kubectl get nodes -o wide
microk8s enable metallb

The alternative is to provide the address as the IP address pool parameter:

microk8s enable metallb <node-ip>-<node-ip>

Or enable the addon host-access instead of retrieving a node IP address. The default IP address is 10.0.1.1:

microk8s enable host-access

Alternatively, you can provide a different IP address when enabling the addon:

microk8s enable host-access:ip=<desired-ip>

Using these addons may result in connection errors to the dashboard, because the IP addresses used may not be in the list of allowed addresses in the SSL certificate.

To open the Kubernetes dashboard:

microk8s dashboard-proxy


Aspire Dashboard was chosen as a simple 3-in-1 solution for local development: metrics, logs, and traces. Deploy the dashboard on Kubernetes with Helm.

Helm charts and alternatives (vendors and integrations) are available. This Helm chart deploys the aspire-dashboard to Kubernetes:

microk8s helm repo add aspire-dashboard https://kube-the-home.github.io/aspire-dashboard-helm/
microk8s helm install -f values.yaml aspire-dashboard aspire-dashboard/aspire-dashboard

MicroK8s runs on Multipass on macOS and Windows, and as a deployment option on Linux. MicroK8s can also be installed inside an LXD VM and on WSL2 (Windows Subsystem for Linux 2).

Transfer the file values.yaml to the virtual machine (VM) if MicroK8s runs on Multipass and Helm complains that the file is not found:
Error: INSTALLATION FAILED: open values.yaml: no such file or directory

multipass transfer values.yaml microk8s-vm:

To open the shell on the VM in Multipass:

multipass shell microk8s-vm

To uninstall the dashboard:

microk8s helm uninstall aspire-dashboard

MicroK8s provides built-in kubectl and helm commands. The name aspire-dashboard is important because it is used in kubernetes/manifest.yaml as the server name of the OpenTelemetry collector.

To list resources in all namespaces:

microk8s kubectl get all -A

The customer application

Build the Docker image

There are two options to load the image into Kubernetes in MicroK8s: import a saved image tarball with ctr (shown below), or push the image to the built-in registry addon.

mkdir -p build && cd build
docker save customer > customer.tar
microk8s ctr image import customer.tar

Transfer the file customer.tar to the virtual machine (VM) if MicroK8s runs on Multipass and ctr complains that the file is not found:
ctr: open customer.tar: no such file or directory

multipass transfer customer.tar microk8s-vm:
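The second option is the built-in registry addon, which MicroK8s exposes on port 32000. The tag localhost:32000/customer below is an example name (manifest.yaml would then need to reference the same name), so treat this as a hedged sketch:

```shell
#!/bin/sh
# Alternative image loading: push to the MicroK8s built-in registry (port 32000).
# The image name localhost:32000/customer is illustrative; adjust manifest.yaml
# to match. The guard exits cleanly where docker or microk8s is unavailable.
if command -v docker >/dev/null 2>&1 && command -v microk8s >/dev/null 2>&1; then
  microk8s enable registry                      # built-in registry addon
  docker tag customer localhost:32000/customer  # example image name
  docker push localhost:32000/customer
else
  echo "docker or microk8s not installed; skipping"
fi
```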

Deploy the app on Kubernetes with kubectl

microk8s kubectl apply -f manifest.yaml

To inspect specific namespaces:

microk8s kubectl get all -n default
microk8s kubectl get all -n ingress

To delete the app:

microk8s kubectl delete -f manifest.yaml

kubectl can be executed directly if you export the MicroK8s kubeconfig. Export it again when the node IP address changes:

mkdir -p $HOME/.kube
microk8s config > $HOME/.kube/config

Accessing the customer app and Aspire Dashboard

Get the IP address of your Kubernetes node by using the command above, or use these commands:

ip route | grep default
hostname -I

Alternatively, use this recipe if the jq command is installed:

microk8s kubectl get node -o json | jq '.items[].status.addresses[] | select(.type=="InternalIP") | .address'
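The jq filter itself can be checked without a cluster by feeding it a hand-written sample of the kubectl output; the address 192.168.1.50 below is invented for illustration:

```shell
#!/bin/sh
# Exercise the jq filter from above against a minimal sample of
# `kubectl get node -o json` output. The IP 192.168.1.50 is made up.
if command -v jq >/dev/null 2>&1; then
  cat <<'EOF' | jq -r '.items[].status.addresses[] | select(.type=="InternalIP") | .address'
{"items":[{"status":{"addresses":[
  {"type":"InternalIP","address":"192.168.1.50"},
  {"type":"Hostname","address":"node-1"}]}}]}
EOF
else
  echo "jq not installed; skipping"
fi
```

Against this sample the filter prints only the InternalIP entry, 192.168.1.50.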

The host names are specified in kubernetes/manifest.yaml and kubernetes/values.yaml. MicroK8s redirects all requests sent to 127.0.0.1 to its Kubernetes node if no host names are set in the Ingress configuration. Replace <node-ip> with your node address and run this command in the console:

curl --header "Host: customer.local" http://<node-ip>/customer/v1?name=anonymous

The address is static if the addon host-access is enabled:

curl --header "Host: customer.local" http://10.0.1.1/customer/v1?name=anonymous

To open the app and dashboard in a browser, you need to resolve the DNS domain names. The simplest solution is to edit /etc/hosts, because Kubernetes Ingress works as a router when the HTTP Host header is provided, and a browser sets that header automatically:

<node-ip> customer.local
<node-ip> opentelemetry.local  # opentelemetry-collector
<node-ip> dashboard.local      # aspire-dashboard

On Windows, the default path to the file looks like this: %SystemRoot%\system32\drivers\etc\hosts
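A small script can generate these entries before you paste them into /etc/hosts. The NODE_IP variable and the output file name hosts.local are illustrative assumptions:

```shell
#!/bin/sh
# Generate the three /etc/hosts entries into a scratch file.
# NODE_IP defaults to 127.0.0.1; hosts.local is an illustrative file name.
# Append the generated lines to /etc/hosts yourself (requires root).
NODE_IP="${NODE_IP:-127.0.0.1}"
{
  printf '%s customer.local\n' "$NODE_IP"
  printf '%s opentelemetry.local\n' "$NODE_IP"   # opentelemetry-collector
  printf '%s dashboard.local\n' "$NODE_IP"       # aspire-dashboard
} > hosts.local
cat hosts.local
```

Run it as, for example, NODE_IP=10.0.1.1 sh gen-hosts.sh to use the real node address.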

The node IP changes when DHCP (Dynamic Host Configuration Protocol) assigns a new address, and you must then update these fully qualified domain names (FQDNs) in /etc/hosts. The address stays static if the host-access addon is enabled, or you can assign static DHCP addresses (DHCP reservations) specific to your environment:

10.0.1.1 customer.local
10.0.1.1 opentelemetry.local
10.0.1.1 dashboard.local

Network redirection is configuration-sensitive. Using 127.0.0.1 also works locally:

127.0.0.1 customer.local
127.0.0.1 opentelemetry.local
127.0.0.1 dashboard.local

Open the customer app and dashboard in a browser.

Tutorials and guides for MicroK8s are available.

Refresh the certificates if microk8s dashboard-proxy fails to verify the certificate:
error: error upgrading connection: error dialing backend: tls: failed to verify certificate: x509: certificate is valid for
Unable to connect to the server: x509: certificate has expired or is not yet valid
The addresses are set in /var/snap/microk8s/current/certs/csr.conf.template. This does not always help, so you may need to investigate further depending on your environment.

sudo microk8s refresh-certs --cert ca.crt
sudo microk8s refresh-certs --cert front-proxy-client.crt
sudo microk8s refresh-certs --cert server.crt

Clear browser data if Aspire Dashboard partially stops working after redeployment:
The key {#} was not found in the key ring. (check the Kubernetes logs)

minikube

This tool offers features similar to MicroK8s:

Running the Kubernetes cluster

Start your cluster:

minikube start

To choose the driver (for example, KVM2 on Linux):

minikube start --driver=kvm2

minikube supports static IPs:

minikube start --driver docker --static-ip 192.168.200.200

Interact with your cluster:

kubectl get po -A

Alternatively:

minikube kubectl -- get po -A

Open the Kubernetes Dashboard:

minikube dashboard

Building the customer image in minikube

Run this command before building Docker images so that the images are built directly inside the minikube Docker daemon and are immediately available to the cluster:

eval $(minikube -p minikube docker-env)

The command exports environment variables similar to these, pointing the Docker CLI at minikube (the exact values differ per machine):

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://127.0.0.1:36195"
export DOCKER_CERT_PATH="/home/user/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"

Deploying Aspire Dashboard and the customer application

Enable the Ingress and Ingress DNS addons:

minikube addons enable ingress
minikube addons enable ingress-dns

To list the addons:

minikube addons list

To deploy the dashboard:

helm repo add aspire-dashboard https://kube-the-home.github.io/aspire-dashboard-helm/
helm install -f values.yaml aspire-dashboard aspire-dashboard/aspire-dashboard

Building the image looks like this:

cd ../cmake-build-release
eval $(minikube -p minikube docker-env)
cmake --build . --target docker

To deploy the application:

kubectl apply -f manifest.yaml

List all Kubernetes (K8S) resources:

kubectl get all -A
kubectl get ingress -A

minikube supports NodePort and LoadBalancer access.

NodePort access:

kubectl get svc -A
minikube service customer --url
minikube service aspire-dashboard-ui-clusterip --url

LoadBalancer access:

Adding FQDN entries to /etc/hosts also works with the minikube ip address. Start the tunnel for LoadBalancer services:

minikube tunnel

You will see something like this when the tunnel works (tested with the KVM2 driver):

Status:
    machine: minikube
    pid: 3510325
    route: 10.96.0.0/12 -> 192.168.39.166
    minikube: Running
    services: []
    errors:
        minikube: no errors
        router: no errors
        loadbalancer emulator: no errors

Open the customer app and dashboard in a browser.

The functionality of minikube depends on the driver and operating system, so check the instructions in its tutorials and handbook.

Docker Desktop and Rancher Desktop

Turn on Kubernetes

To deploy the Ingress-NGINX Controller:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

To deploy the dashboard:

helm repo add aspire-dashboard https://kube-the-home.github.io/aspire-dashboard-helm/
helm install -f values.yaml aspire-dashboard aspire-dashboard/aspire-dashboard

Building the image looks like this:

cd ../cmake-build-release
cmake --build . --target docker

To deploy the application:

kubectl apply -f manifest.yaml

Using FQDN entries in /etc/hosts also works:

127.0.0.1 customer.local
127.0.0.1 opentelemetry.local
127.0.0.1 dashboard.local

Open the customer app and dashboard in a browser.

Rancher Desktop supports Kubernetes in a similar way.

There are many Ingress Controllers. You may deploy any number of ingress controllers within a cluster.