This is my personal Kubernetes setup for my home lab, running on a Raspberry Pi 4 cluster.
- K3s - Lightweight Kubernetes distribution perfect for IoT & Edge computing
- Cilium - eBPF-based networking, observability & security
- Traefik - Cloud native ingress controller for handling incoming traffic and routing requests
- Prometheus - Metrics collection and storage
- Grafana - Metrics visualization and dashboarding
- AlertManager - Alerting and notifications
- Argo CD - Declarative continuous delivery
- Argo Workflows - Kubernetes-native workflow engine
- Argo Rollouts - Progressive delivery controller
- CloudNativePG - Kubernetes operator to manage PostgreSQL clusters
For this setup I've disabled the `wlan0` interface and use only `eth0`, for performance reasons. I also disable the `brcmfmac_wcc`, `brcmfmac`, `brcmutil` and `cfg80211` modules to prevent the `wlan0` interface from being used.
```sh
modprobe -r brcmfmac_wcc
modprobe -r brcmfmac
modprobe -r brcmutil
modprobe -r cfg80211
echo "blacklist brcmfmac_wcc" > /etc/modprobe.d/blacklist-brcmfmac.conf
echo "blacklist brcmfmac" >> /etc/modprobe.d/blacklist-brcmfmac.conf
```
```sh
shutdown -r now
```

```sh
export K3S_KUBECONFIG_MODE="644"
export INSTALL_K3S_EXEC=" --flannel-backend=none --disable-network-policy --disable servicelb --disable traefik"
curl -sfL https://get.k3s.io | sh -
```

After everything was up and running, I changed k3s to use etcd as its datastore.
I've added `--cluster-init` to /etc/systemd/system/k3s.service:
```sh
ExecStart=/usr/local/bin/k3s \
    server \
        '--cluster-init' \
        '--flannel-backend=none' \
        '--disable-network-policy' \
        '--disable' \
        'servicelb' \
        '--disable' \
        'traefik' \
```

```sh
systemctl daemon-reload
systemctl restart k3s
```

To install additional agent nodes and add them to the cluster, run the installation script with the K3S_URL and K3S_TOKEN environment variables. Here is an example showing how to join an agent:
```sh
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
```

Setting the K3S_URL parameter causes the installer to configure K3s as an agent instead of a server. The K3s agent will register with the K3s server listening at the supplied URL. The value to use for K3S_TOKEN is stored at /var/lib/rancher/k3s/server/node-token on your server node.
https://docs.k3s.io/quick-start
```sh
helm repo add cilium https://helm.cilium.io/
helm repo update
helm upgrade --install cilium cilium/cilium --version v1.15.6 \
  --set operator.replicas=1 \
  --set ipam.operator.clusterPoolIPv4PodCIDRList=10.42.0.0/16 \
  --set ipv4NativeRoutingCIDR=10.42.0.0/16 \
  --set ipv4.enabled=true \
  --set loadBalancer.mode=dsr \
  --set routingMode=native \
  --set autoDirectNodeRoutes=true \
  --set l2announcements.enabled=true \
  --set kubeProxyReplacement=true \
  --set k8sClientRateLimit.qps=50 \
  --set k8sClientRateLimit.burst=100 \
  --set k8sServiceHost=192.168.1.106 \
  --set k8sServicePort=6443 \
  --set l2announcements.leaseDuration=3s \
  --set l2announcements.leaseRenewDeadline=1s \
  --set l2announcements.leaseRetryPeriod=200ms \
  --set ingressController.enabled=true \
  --set bgpControlPlane.enabled=true
```

- Pay attention to `k8sServiceHost` and `k8sServicePort`: they must match the API server address and port advertised by your k3s server.
```sh
kubectl edit cm -n kube-system cilium-config
```

```yaml
bpf-lb-sock-hostns-only: "true"
enable-host-legacy-routing: "true"
devices: eth0
enable-bpf-masquerade: "true"
```

```sh
kubectl -n kube-system rollout restart ds/cilium
kubectl create -f cilium/CiliumL2AnnouncementPolicy-IPPool.yaml
```

https://docs.cilium.io/en/stable/installation/k8s-install-helm/
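The cilium/CiliumL2AnnouncementPolicy-IPPool.yaml file isn't included in this snippet; a minimal sketch of what such a manifest can look like (the policy/pool names, interface and pool CIDR are assumptions, adjust them to your network):

```yaml
# Announce service IPs over L2 on eth0 and allocate them from a local pool
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-policy
spec:
  interfaces:
    - eth0
  externalIPs: true
  loadBalancerIPs: true
---
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lb-pool
spec:
  blocks:
    - cidr: 192.168.1.240/28
```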
Run certificate.sh in the certs folder:

```sh
./certificate.sh
```

```
Certificate request self-signature ok
subject=C = BR, ST = SP, L = Sao Paulo, O = MyKubernetes, CN = traefik.mykubernetes.com
secret/traefik-dashboard-cert created
```

Add your ca.crt to the system keychain. If you are using macOS:
```sh
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ./ca.crt
```

If you need to update the certificate and there are already old certificates in the keychain:
- Find existing certificates:

```sh
security find-certificate -a -c "MyKubernetes CA" -Z /Library/Keychains/System.keychain | grep "SHA-1 hash" | awk '{print $3}'
```

- Remove old certificates by SHA-1 hash (repeat for each hash found in the previous step):

```sh
sudo security delete-certificate -Z <SHA-1_HASH> /Library/Keychains/System.keychain
```

- Add the new certificate:

```sh
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ./ca.crt
```

- Verify the certificate was added:

```sh
security find-certificate -c "MyKubernetes CA" /Library/Keychains/System.keychain -Z | grep "SHA-1 hash"
```

The SHA-1 hash should match your new certificate:

```sh
openssl x509 -in ca.crt -noout -fingerprint -sha1 | cut -d= -f2 | tr -d ':'
```

- Clear browser HSTS cache (Chrome):
  - Open chrome://net-internals/#hsts
  - In "Delete domain security policies", enter: traefik.mykubernetes.com
  - Click "Delete"
  - Or clear all browser cache and restart the browser
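The certificate.sh script itself is not shown in this section; a minimal sketch of an equivalent script, assuming a self-signed CA and the hostname from the output above (paths, subject fields and validity periods are illustrative):

```sh
#!/bin/sh
# Sketch: create a self-signed CA, then a server certificate for the
# Traefik dashboard signed by it. Output lands in ./out.
set -e
mkdir -p out

# CA key and self-signed CA certificate
openssl genrsa -out out/ca.key 4096
openssl req -x509 -new -key out/ca.key -sha256 -days 3650 -out out/ca.crt \
  -subj "/C=BR/ST=SP/L=Sao Paulo/O=MyKubernetes/CN=MyKubernetes CA"

# Server key and CSR for the Traefik hostname
openssl genrsa -out out/tls.key 2048
openssl req -new -key out/tls.key -out out/tls.csr \
  -subj "/C=BR/ST=SP/L=Sao Paulo/O=MyKubernetes/CN=traefik.mykubernetes.com"

# Sign the CSR with the CA, adding the SAN modern browsers require
printf "subjectAltName=DNS:traefik.mykubernetes.com\n" > out/san.ext
openssl x509 -req -in out/tls.csr -CA out/ca.crt -CAkey out/ca.key \
  -CAcreateserial -sha256 -days 825 -extfile out/san.ext -out out/tls.crt

# Finally, store the pair as the TLS secret Traefik mounts (needs a cluster):
# kubectl create secret tls traefik-dashboard-cert -n traefik \
#   --cert=out/tls.crt --key=out/tls.key
```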
```sh
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace monitoring
helm install my-kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 68.3.0 -n monitoring -f monitoring/prometheus-values.yaml
```

The monitoring stack is configured to be reachable at /prometheus, /grafana and /alertmanager.
See monitoring/README.md for more details.
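The monitoring/prometheus-values.yaml file isn't shown here; a sketch of the chart values that typically put kube-prometheus-stack behind those sub-paths (the domain is an assumption, and the real file likely sets more):

```yaml
prometheus:
  prometheusSpec:
    routePrefix: /prometheus
    externalUrl: https://traefik.mykubernetes.com/prometheus
alertmanager:
  alertmanagerSpec:
    routePrefix: /alertmanager
    externalUrl: https://traefik.mykubernetes.com/alertmanager
grafana:
  grafana.ini:
    server:
      root_url: https://traefik.mykubernetes.com/grafana
      serve_from_sub_path: true
```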
```sh
# Add the secret to the monitoring namespace
kubectl get secret traefik-dashboard-cert -n traefik -o yaml | sed 's/namespace: traefik/namespace: monitoring/' | kubectl apply -f -

# Create the IngressRoute for the monitoring namespace
kubectl create -f monitoring/ingressroute.yaml
```

```sh
kubectl create namespace traefik
```
```sh
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik --namespace traefik --values traefik/values.yaml
kubectl create -f traefik/dashboard.yaml
```

https://doc.traefik.io/traefik/getting-started/install-traefik/#use-the-helm-chart
https://github.com/traefik/traefik-helm-chart/blob/master/traefik/values.yaml
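The traefik/values.yaml used above isn't included; a minimal sketch of the kind of overrides this setup implies (exposing the dashboard and getting an IP from the Cilium L2 pool; both keys are assumptions against the upstream chart defaults):

```yaml
# Get an external IP announced by Cilium L2
service:
  type: LoadBalancer
# Expose the built-in dashboard IngressRoute
ingressRoute:
  dashboard:
    enabled: true
```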
```sh
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.5.8/install.yaml
```

In this setup I've changed BASE_HREF to /argo/ so the Workflows UI is reachable at /argo/.
```sh
kubectl edit deploy/argo-server -n argo
```

```yaml
- args:
  - server
  - --auth-mode=server
  env:
  - name: BASE_HREF
    value: /argo/
```

Argo Workflows needs a service account in each namespace where its workloads run. This service account needs permissions to manage workflows, interact with pods, and so on. You can find more info here.
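The argo/rbac.yaml manifest isn't shown; a minimal sketch of a service account with workflow-related permissions (the names and the exact rule set are assumptions, check the official executor RBAC docs):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argo-workflow
  namespace: argo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-workflow
  namespace: argo
rules:
# The executor watches and patches its own pods
- apiGroups: [""]
  resources: [pods]
  verbs: [get, watch, patch]
- apiGroups: [""]
  resources: [pods/log]
  verbs: [get, watch]
# Needed to report step results back to the controller
- apiGroups: [argoproj.io]
  resources: [workflowtaskresults]
  verbs: [create, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-workflow
  namespace: argo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argo-workflow
subjects:
- kind: ServiceAccount
  name: argo-workflow
  namespace: argo
```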
```sh
kubectl get secret traefik-dashboard-cert -n traefik -o yaml | sed 's/namespace: traefik/namespace: argo/' | kubectl apply -f -
kubectl create -f argo/rbac.yaml
```

To install Argo CD I've followed https://argo-cd.readthedocs.io/en/stable/getting_started/.
```sh
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

For this installation the UI is exposed through Traefik at traefik.mykubernetes.com/argocd.

```sh
kubectl patch deployment argocd-server -n argocd --type='json' -p='[{"op":"replace","path":"/spec/template/spec/containers/0/args","value":["/usr/local/bin/argocd-server","--insecure","--basehref=/argocd","--rootpath=/argocd"]}]'
```

```sh
kubectl get deploy argocd-server -n argocd -o jsonpath='{.spec.template.spec.containers[0].args}' | jq
```

```sh
kubectl create -f argo-stack/argo-cd/ingressroute.yaml
```

Reference: https://argo-cd.readthedocs.io/en/latest/operator-manual/ingress/
UI password:

```sh
kubectl get secret/argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 --decode
```

-> argo-cd has an issue using basehref with /argocd; bug fix:

```sh
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.14.10/manifests/install.yaml
```

-> Don't forget to patch the deployment after the upgrade.
To enable Argo Rollouts in the UI I've used this extension: https://github.com/argoproj-labs/rollout-extension
```sh
kubectl create namespace argo-rollouts
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argo-rollouts argo/argo-rollouts -n argo-rollouts --set dashboard.enabled=true
```

```sh
kubectl patch deployment argocd-server -n argocd --patch "$(cat argo-stack/argo-rollouts/patch-argocd-server.yaml)"
```

```sh
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj-labs/argocd-image-updater/master/manifests/install.yaml
```

We have to create a secret with the Docker Hub registry credentials for the image updater.
```sh
kubectl create secret docker-registry regcred \
  --namespace argocd \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=ambrosiaaaaa \
  --docker-password=token-xyz \
  --docker-email=myemail@gmail.com
```
We also need credentials for the GitHub repository.
```sh
kubectl create secret generic git-creds \
  -n argocd \
  --from-literal=username=myusername \
  --from-literal=password=token-xyz
```
Now we just need to annotate the application for the image updater.
```sh
kubectl get application -n argocd
```

```
NAME     SYNC STATUS   HEALTH STATUS
foobar   Synced        Healthy
```

```sh
kubectl annotate application foobar \
  argocd-image-updater.argoproj.io/credentials="docker.io=secret:regcred" \
  argocd-image-updater.argoproj.io/image-list="ambrosiaaaaa/foobar-api" \
  argocd-image-updater.argoproj.io/update-strategy="semver" \
  argocd-image-updater.argoproj.io/write-back-method="git:secret:argocd/git-creds" \
  -n argocd
```

- Argo CD Image Updater
- Update Strategies
- Git Write-back Method
- Examples
