Terraform module to deploy your own self-hosted platform on Kubernetes on Raspberry Pi.
- Configure Kubernetes cluster
- Self-host password manager: Bitwarden
- Self-host IoT dev platform: Node-RED
- Self-host home cloud: NextCloud
- Self-host home Media Center
- Transmission
- Flaresolverr
- Jackett
- Sonarr
- Radarr
- Plex
- Self-host ads/trackers protection: Pi-Hole
- Accessible K8s/K3s cluster on your Pi.
- With the cert-manager `CustomResourceDefinition`s installed: `kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.16.0/cert-manager.crds.yaml`
- For the Transmission BitTorrent client, an OpenVPN config file stored in `openvpn.ignore.ovpn`, with `auth-user-pass` set to `/config/openvpn-credentials.txt` (auto auth), including cert and key.
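As a sketch of what that OpenVPN config might look like (the `remote` endpoint below is a placeholder; the real file, certs, and keys come from your VPN provider), with `/config/openvpn-credentials.txt` containing just the username on the first line and the password on the second:

```
# openvpn.ignore.ovpn -- sketch only; your provider's file will differ
client
dev tun
proto udp
remote vpn.example.com 1194          # placeholder endpoint
auth-user-pass /config/openvpn-credentials.txt
<ca>
# ...CA certificate omitted...
</ca>
<cert>
# ...client certificate omitted...
</cert>
<key>
# ...client key omitted...
</key>
```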
Configure your environment:

```shell
$ mv terraform.tfvars.template terraform.tfvars
$ vim terraform.tfvars
```

Once that's done, you can start deploying resources:

```shell
$ source scripts/init.sh # Generates service passwords
$ terraform init
$ terraform plan
$ terraform apply --auto-approve
... output omitted ...
Apply complete! Resources: 32 added, 0 changed, 0 destroyed.
```

To destroy all the resources:

```shell
$ terraform destroy --auto-approve
... output omitted ...
Destroy complete! Resources: 0 added, 0 changed, 32 destroyed.
```

Note: here we'll set up `pi-master`, i.e. our master Pi. If you have additional workers (optional), repeat the following steps for each of them, replacing references to `pi-master` with `pi-worker-1`, `pi-worker-2`, etc.
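For reference, `terraform.tfvars` might look like the sketch below. The variable names here are purely illustrative (hypothetical); check `terraform.tfvars.template` for the ones this module actually defines:

```hcl
# terraform.tfvars -- illustrative values only; the real variable names
# are listed in terraform.tfvars.template
domain    = "example.com"       # hypothetical: base domain for the services
master_ip = "192.168.1.100"     # hypothetical: static IP of pi-master
nfs_path  = "/mnt/hdd"          # hypothetical: NFS share path on pi-master
```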
- Connect via SSH to the Pi:

```shell
user@workstation $ ssh pi@<PI_IP>
... output omitted ...
pi@raspberrypi:~ $
```

- Change password:

```shell
pi@raspberrypi:~ $ passwd
... output omitted ...
passwd: password updated successfully
```

- Change hostname:

```shell
pi@raspberrypi:~ $ sudo -i
root@raspberrypi:~ $ echo "pi-master" > /etc/hostname
root@raspberrypi:~ $ sed -i "s/$HOSTNAME/pi-master/" /etc/hosts
```
- Enable container features:

```shell
root@raspberrypi:~ $ sed -i 's/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
```
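Note that the `sed` command above appends the flags unconditionally, so running it twice duplicates them. A minimal idempotent sketch, demonstrated on a temporary file standing in for `/boot/cmdline.txt` so it can be tried safely off-device:

```shell
# On the Pi, set CMDLINE=/boot/cmdline.txt instead of a temp copy.
CMDLINE=$(mktemp)
echo "console=serial0,115200 root=/dev/mmcblk0p2 rootwait" > "$CMDLINE"
FLAGS="cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
# Append only if the flags are not already present; safe to re-run.
grep -q "cgroup_enable=memory" "$CMDLINE" || sed -i "s/\$/ $FLAGS/" "$CMDLINE"
# A second run is a no-op:
grep -q "cgroup_enable=memory" "$CMDLINE" || sed -i "s/\$/ $FLAGS/" "$CMDLINE"
cat "$CMDLINE"
```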
- Make sure the system is up-to-date:

```shell
root@raspberrypi:~ $ apt update && apt upgrade -y
```
- Configure a static IP. Note that this could also be done at the network level via the router's admin interface (DHCP reservation):

```shell
root@raspberrypi:~ $ cat <<EOF >> /etc/dhcpcd.conf
interface eth0
static ip_address=<YOUR_STATIC_IP_HERE>/24
static routers=192.168.1.1
static domain_name_servers=1.1.1.1
EOF
```
- Reboot:

```shell
root@raspberrypi:~ $ reboot
```

- Wait a few seconds, then connect via SSH to the Pi using the new static IP you've just configured:

```shell
user@workstation $ ssh pi@<PI_IP>
... output omitted ...
pi@pi-master:~ $
```
- On the master Pi, run `fdisk -l` to list all the disks connected to the system (including RAM disks) and identify your external disk:

```shell
pi@pi-master:~ $ sudo fdisk -l
```

- If your disk is new and fresh out of the package, you will need to create a filesystem on it:

```shell
pi@pi-master:~ $ sudo mkfs.ext4 /dev/sda
```

- You can manually mount the disk to the directory `/mnt/hdd`:

```shell
pi@pi-master:~ $ sudo mkdir /mnt/hdd
pi@pi-master:~ $ sudo chown -R pi:pi /mnt/hdd/
pi@pi-master:~ $ sudo mount /dev/sda /mnt/hdd
```
- To automatically mount the disk on startup, you first need to find the unique ID (UUID) of the disk using the `blkid` command:

```shell
pi@pi-master:~ $ sudo blkid
... output omitted ...
/dev/sda: UUID="0ac98c2c-8c32-476b-9009-ffca123a2654" TYPE="ext4"
```

- Edit the file `/etc/fstab` and add the following line (using your disk's UUID) to configure auto-mount of the disk on startup:

```shell
pi@pi-master:~ $ sudo -i
root@pi-master:~ $ echo "UUID=0ac98c2c-8c32-476b-9009-ffca123a2654 /mnt/hdd ext4 defaults 0 0" >> /etc/fstab
root@pi-master:~ $ exit
```
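Scripted, the same fstab entry can be built from the UUID. A sketch using the example UUID from the `blkid` output above, written to a temporary file so it can be tried safely; on the Pi you would capture the real UUID with `sudo blkid -s UUID -o value /dev/sda` and append to `/etc/fstab`:

```shell
# On the Pi: UUID=$(sudo blkid -s UUID -o value /dev/sda) and FSTAB=/etc/fstab.
# Here we use the example UUID from above and a temp file standing in for fstab.
UUID="0ac98c2c-8c32-476b-9009-ffca123a2654"
FSTAB=$(mktemp)
echo "UUID=$UUID /mnt/hdd ext4 defaults 0 0" >> "$FSTAB"
cat "$FSTAB"
```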
- Reboot the system:

```shell
pi@pi-master:~ $ sudo reboot
```

- Verify the disk is correctly mounted on startup with the following command:

```shell
pi@pi-master:~ $ df -ha /dev/sda
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda        458G   73M  435G   1% /mnt/hdd
```

- Install the required dependencies:
```shell
pi@pi-master:~ $ sudo apt install nfs-kernel-server -y
```

- Edit the file `/etc/exports` by running the following commands:

```shell
pi@pi-master:~ $ sudo -i
root@pi-master:~ $ echo "/mnt/hdd *(rw,no_root_squash,insecure,async,no_subtree_check,anonuid=1000,anongid=1000)" >> /etc/exports
root@pi-master:~ $ exit
```
- Start the NFS server:

```shell
pi@pi-master:~ $ sudo exportfs -ra
```
Note: repeat the following steps for each of the workers pi-worker-1, pi-worker-2, etc.
- Install the necessary dependencies:

```shell
pi@pi-worker-x:~ $ sudo apt install nfs-common -y
```

- Create the directory to mount the NFS share:

```shell
pi@pi-worker-x:~ $ sudo mkdir /mnt/hdd
pi@pi-worker-x:~ $ sudo chown -R pi:pi /mnt/hdd
```

- Configure auto-mount of the NFS share by adding the following line to `/etc/fstab`, where `<MASTER_IP>:/mnt/hdd` is the IP of `pi-master` followed by the NFS share path:

```shell
pi@pi-worker-x:~ $ sudo -i
root@pi-worker-x:~ $ echo "<MASTER_IP>:/mnt/hdd /mnt/hdd nfs rw 0 0" >> /etc/fstab
root@pi-worker-x:~ $ exit
```

- Reboot the system:

```shell
pi@pi-worker-x:~ $ sudo reboot
```

- Optional: to mount manually, run the following command, where `<MASTER_IP>:/mnt/hdd` is the IP of `pi-master` followed by the NFS share path:

```shell
pi@pi-worker-x:~ $ sudo mount -t nfs <MASTER_IP>:/mnt/hdd /mnt/hdd
```
- Run the K3s installer on the master Pi:

```shell
pi@pi-master:~ $ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik" sh -
```

- Get the K3s token on the master Pi and copy the result:

```shell
pi@pi-master:~ $ sudo cat /var/lib/rancher/k3s/server/node-token
K103166a17...eebca269271
```

- Run the K3s installer on each worker:

```shell
pi@pi-worker-x:~ $ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" K3S_URL="https://<MASTER_IP>:6443" K3S_TOKEN="K103166a17...eebca269271" sh -
```
- Copy the kube config file from the master Pi:

```shell
user@workstation:~ $ scp pi@<MASTER_IP>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
```

- Edit the kube config file to replace `127.0.0.1` with `<MASTER_IP>`:

```shell
user@workstation:~ $ vim ~/.kube/config
```

- Test everything by running a `kubectl` command:

```shell
user@workstation:~ $ kubectl get nodes -o wide
```
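The `127.0.0.1` → `<MASTER_IP>` edit can also be done with `sed` instead of `vim`. A sketch on a temporary copy so it can be tried safely (`192.168.1.100` is a placeholder for your master's IP; on the workstation you would target `~/.kube/config`):

```shell
# A temp file stands in for ~/.kube/config; the IP below is a placeholder.
KUBECFG=$(mktemp)
echo "    server: https://127.0.0.1:6443" > "$KUBECFG"
MASTER_IP="192.168.1.100"
sed -i "s/127\.0\.0\.1/$MASTER_IP/" "$KUBECFG"
cat "$KUBECFG"
```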
- Worker(s):

```shell
pi@pi-worker-x:~ $ sudo /usr/local/bin/k3s-agent-uninstall.sh
```

- Master:

```shell
pi@pi-master:~ $ sudo /usr/local/bin/k3s-uninstall.sh
```

Node-RED authentication isn't set up by default at the moment. You can set it up by scaling the deployment down, editing the `settings.js` file to enable authentication, and scaling the deployment back up:
```shell
pi@pi-master:~ $ kubectl scale deployment/node-red --replicas=0 -n node-red
pi@pi-master:~ $ vim /path/to/node-red/settings.js
pi@pi-master:~ $ kubectl scale deployment/node-red --replicas=1 -n node-red
```
You can either set up authentication through GitHub (see the Node-RED documentation):

```javascript
// settings.js
// ... omitted ...
adminAuth: require('node-red-auth-github')({
    clientID: "<GITHUB_CLIENT_ID>",
    clientSecret: "<GITHUB_CLIENT_SECRET>",
    baseURL: "https://node-red.<DOMAIN>/",
    users: [
        { username: "<GITHUB_USERNAME>", permissions: ["*"] }
    ]
}),
// ... omitted ...
```

Or classic username/password authentication (generate a password hash using `node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" <your-password-here>`):
```javascript
// settings.js
// ... omitted ...
adminAuth: {
    type: "credentials",
    users: [
        {
            username: "admin",
            password: "$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN.",
            permissions: "*"
        },
        {
            username: "guest",
            password: "$2b$08$wuAqPiKJlVN27eF5qJp.RuQYuy6ZYONW7a/UWYxDTtwKFCdB8F19y",
            permissions: "read"
        }
    ]
},
// ... omitted ...
```

More information in the docs: Securing Node-RED.