Homelab setup based on Omni and Talos.
This repository contains the configuration files for my homelab: a collection of servers and services that I run at home or in the cloud, used for learning, testing, and hosting personal projects.
To avoid headaches and keep things simple, I use Talos to manage the Kubernetes clusters (don't hesitate to check out a little article I wrote about it). More specifically, I run a self-hosted Omni instance to manage all clusters from a single endpoint and secure them with SSO.
- Omni (self-hosted): manages all nodes across clusters and regions.
- Cilium as CNI and LB (ARP mode; see the sketch after this list).
- ArgoCD to manage the GitOps workflow
- Nginx Ingress Controller for Ingress management (and Istio deployed on some clusters).
- Traefik Ingress Controller for Ingress management (as well as for middleware management).
- Cert Manager for TLS certificates.
- Storage:
  - Rook for multi-node clusters.
  - OpenEBS + LVM (or ZFS) for single-node clusters.
  - ZFS + Local-Path-Provisioner (only on the Cortado cluster).
- Reflector to sync secrets across namespaces (requirement for External Secrets + Vault). (Removed 16/12/2024)
- External Secrets to fetch secrets from a remote store (see the sketch after this list).
- Vault as the remote secret store.
- Cloudflare Tunnels to expose services to the internet (only on the `turing` cluster).
- VolSync to create backups and send them (using restic) to a MinIO server (only on the Cortado cluster).
- Cortado: a single-node bare-metal cluster hosted by OVH, mainly used for backups and small applications (gaming, small sites, etc.).
- Mocha: another single-node bare-metal cluster hosted by OVH; the production cluster (128 GB RAM, 8 CPUs, 2x512 GB NVMe).
- Turing: a cluster of small devices (ARM and x86) at home, used for local hosting and testing. (Prometheus not yet available.)
While this repository primarily contains three physical clusters (`mocha`, `cortado`, and `turing`), a fourth cluster configuration exists in the `kubevirt` directory. This is a virtual cluster that runs as workloads on top of another cluster (primarily hosted on `mocha` due to its greater resource capacity).
This virtual cluster is provisioned using KubeVirt integrated with the Omni Infrastructure Provider. For a detailed exploration of this setup, you can read my article about Omni and KubeVirt integration.
There are two primary motivations behind maintaining a separate virtual cluster:
- Network Isolation: the virtual cluster can use different pod CIDR and service CIDR ranges than the host cluster, preventing network overlaps that could cause routing issues (see the patch sketch after this list).
- Configuration Testing: it provides an isolated environment that mirrors my production clusters with the same core components (metrics server, ArgoCD, ApplicationSet, etc.), making it perfect for safely testing configurations and upgrades.
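To illustrate the network isolation point, here is a minimal sketch of the kind of Talos config patch (applied through the Omni cluster template) that gives the virtual cluster its own ranges; the CIDRs below are assumptions, not the actual values used here.

```yaml
# Hypothetical Talos patch: pick pod/service CIDRs that do not overlap the host cluster.
cluster:
  network:
    podSubnets:
      - 10.100.0.0/16 # assumed; must differ from the host cluster's pod CIDR
    serviceSubnets:
      - 10.101.0.0/16 # assumed; must differ from the host cluster's service CIDR
```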
To use this repository, you need to have the Omni CLI installed. You can find the installation instructions here.
Download the `omniconfig` file from the Omni instance and merge it with the one in your home directory.
```bash
omnictl config merge ./omniconfig.yaml
```
Then, you can deploy the cluster based on the MachineClass you have configured.
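For reference, a machine-class-based cluster template roughly follows this shape; the versions, MachineClass names, and sizes below are placeholders rather than the actual `lungo` configuration.

```yaml
# Hypothetical template.yaml: versions, names, and sizes are placeholders.
kind: Cluster
name: lungo
kubernetes:
  version: v1.31.0
talos:
  version: v1.8.0
---
kind: ControlPlane
machineClass:
  name: lungo-control-planes # assumed MachineClass defined in Omni
  size: 1
---
kind: Workers
machineClass:
  name: lungo-workers # assumed MachineClass defined in Omni
  size: 2
```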
```bash
cd lungo
omnictl cluster template sync -f template.yaml
```
This will create a new cluster based on the configuration you have set in the `template.yaml` file. You can download the kubeconfig file using the following command:
```bash
omnictl kubeconfig --cluster lungo
```
Example of a kubeconfig file:
```yaml
apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: https://omni.home.une-tasse-de.cafe:8100/
    name: omni-lungo
contexts:
  - context:
      cluster: omni-lungo
      namespace: default
      user: [email protected]
    name: omni-lungo
current-context: omni-lungo
users:
  - name: [email protected]
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://omni.home.une-tasse-de.cafe/oidc
          - --oidc-client-id=native
          - --oidc-extra-scope=cluster:lungo
        command: kubectl
        env: null
        provideClusterInfo: false
```