
Commit 18d37b2 (1 parent: d945237)

Add a terraform deployment for a VM

10 files changed: +620 −0 lines

.gitattributes (+2)

@@ -2,3 +2,5 @@ client/container_preparation/input_logic/age filter=lfs diff=lfs merge=lfs -text
 client/container_preparation/input_logic/curl filter=lfs diff=lfs merge=lfs -text
 client/container_preparation/input_logic/jq filter=lfs diff=lfs merge=lfs -text
 client/container_preparation/input_logic/tar filter=lfs diff=lfs merge=lfs -text
+# encrypted terraform secrets
+terraform/secrets/** filter=git-crypt diff=git-crypt

.gitignore (+9)

@@ -10,3 +10,12 @@
 
 # Undo-tree save-files
 *.~undo-tree
+
+# openrc configs
+*-openrc.sh
+
+# terraform
+.terraform*
+## user specific secrets
+terraform/secrets/public_keys
+terraform/secrets/tunnel_keys

terraform/README.md (+97, new file)

# kind VM recipe

Recipe to deploy a simple VM running [kind](https://kind.sigs.k8s.io/) in Pouta.

## VM deployment

The VM is defined in [Terraform](https://www.terraform.io/), with state stored in the `<project name>-terraform-state` bucket deployed under your project in Allas.

To deploy/update, download a config file from Pouta for authentication (the `<project name>-openrc.sh`).
You will also need `S3` credentials for accessing the bucket; the recipe below assumes you have them stored in [pass](https://www.passwordstore.org/).
Currently the VM also needs 2 secrets:
- host SSH private key
- host SSH public key (not really secret, but we have it classified as such)

The code looks for them in the following locations:
- `secrets/ssh_host_ed25519_key`
- `secrets/ssh_host_ed25519_key.pub`

After cloning the repository, unlock the secrets with

    -> git-crypt unlock

Put public SSH keys with admin access into the `secrets/public_keys` file.
If you want some users to only have access to tunnel ports from the VM, add their keys to the `secrets/tunnel_keys` file; if not, just `touch secrets/tunnel_keys`.
Once both of those files are present, you should be able to deploy the VM:

    # authenticate
    -> source project_2007468-openrc.sh
    # for simplicity of this example we just export S3 credentials
    -> export AWS_ACCESS_KEY_ID=$(pass fancy_project/aws_key)
    -> export AWS_SECRET_ACCESS_KEY=$(pass fancy_project/aws_secret)
    # init
    -> terraform init
    # apply
    -> terraform apply

Then wait for things to finish, including package updates and installations on the VM.
One of the outputs should be the address of your VM, e.g.:

    Outputs:

    address = "128.214.254.127"

## Connecting to kind

It takes a few moments for everything to finish setting up on the VM.
Once it finishes, the VM should be running a configured `kind` cluster with a dashboard.
You can download your config file and access the cluster; note that access to the API is restricted to trusted networks only:

    -> scp ubuntu@128.214.254.127:.kube/remote-config .
    -> export KUBECONFIG=$(pwd)/remote-config
    -> kubectl auth whoami
    ATTRIBUTE   VALUE
    Username    kubernetes-admin
    Groups      [kubeadm:cluster-admins system:authenticated]

To, for example, check if the dashboard is ready:

    -> kubectl get all --namespace kubernetes-dashboard
    NAME                                                       READY   STATUS    RESTARTS   AGE
    pod/kubernetes-dashboard-api-5cd64dbc99-xjbj8              1/1     Running   0          2m54s
    pod/kubernetes-dashboard-auth-5c8859fcbd-zt2lm             1/1     Running   0          2m54s
    pod/kubernetes-dashboard-kong-57d45c4f69-5gv2d             1/1     Running   0          2m54s
    pod/kubernetes-dashboard-metrics-scraper-df869c886-chxx4   1/1     Running   0          2m54s
    pod/kubernetes-dashboard-web-6ccf8d967-fsctp               1/1     Running   0          2m54s

    NAME                                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/kubernetes-dashboard-api               ClusterIP   10.96.149.208   <none>        8000/TCP   2m55s
    service/kubernetes-dashboard-auth              ClusterIP   10.96.140.195   <none>        8000/TCP   2m55s
    service/kubernetes-dashboard-kong-proxy        ClusterIP   10.96.35.136    <none>        443/TCP    2m55s
    service/kubernetes-dashboard-metrics-scraper   ClusterIP   10.96.222.176   <none>        8000/TCP   2m55s
    service/kubernetes-dashboard-web               ClusterIP   10.96.139.1     <none>        8000/TCP   2m55s

    NAME                                                   READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/kubernetes-dashboard-api               1/1     1            1           2m54s
    deployment.apps/kubernetes-dashboard-auth              1/1     1            1           2m54s
    deployment.apps/kubernetes-dashboard-kong              1/1     1            1           2m54s
    deployment.apps/kubernetes-dashboard-metrics-scraper   1/1     1            1           2m54s
    deployment.apps/kubernetes-dashboard-web               1/1     1            1           2m54s

    NAME                                                             DESIRED   CURRENT   READY   AGE
    replicaset.apps/kubernetes-dashboard-api-5cd64dbc99              1         1         1       2m54s
    replicaset.apps/kubernetes-dashboard-auth-5c8859fcbd             1         1         1       2m54s
    replicaset.apps/kubernetes-dashboard-kong-57d45c4f69             1         1         1       2m54s
    replicaset.apps/kubernetes-dashboard-metrics-scraper-df869c886   1         1         1       2m54s
    replicaset.apps/kubernetes-dashboard-web-6ccf8d967               1         1         1       2m54s

The dashboard in this setup is, by default, not overly secure, so no external route is set up. To access it:

    # Generate a token to log in to the dashboard with
    -> kubectl -n kubernetes-dashboard create token admin-user
    # Forward the dashboard to your machine
    -> kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
    Forwarding from 127.0.0.1:8443 -> 8443
    Forwarding from [::1]:8443 -> 8443

Then view the dashboard in your browser at `https://localhost:8443`, using the generated token to log in.
Note that the cluster and the dashboard use a self-signed certificate, so your browser is not going to like it.
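State in an S3-compatible bucket like the one the README mentions is normally wired up with a `backend "s3"` block. A minimal sketch, assuming Allas' S3 endpoint and the README's placeholder bucket name (neither the endpoint nor any of these argument values are taken from this commit):

```hcl
terraform {
  backend "s3" {
    bucket = "<project name>-terraform-state" # placeholder, matches the README's naming
    key    = "terraform.tfstate"
    # Allas speaks the S3 protocol but is not AWS, so AWS-specific checks are skipped
    endpoint                    = "https://a3s.fi" # assumed Allas S3 endpoint
    region                      = "us-east-1"      # dummy value, required by the backend
    skip_credentials_validation = true
    skip_region_validation      = true
  }
}
```

Credentials are then picked up from `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`, as exported in the recipe above.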

terraform/cloud-config.yaml (+109, new file)

#cloud-config
package_update: true
package_upgrade: true
package_reboot_if_required: true
apt:
  sources:
    docker.list:
      source: deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable
      keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
    helm.list:
      source: deb [arch=amd64] https://baltocdn.com/helm/stable/debian/ all main
      keyid: 81BF832E2F19CD2AA0471959294AC4827C1A168A # https://baltocdn.com/helm/signing.asc
packages:
  - ca-certificates
  - containerd.io
  - curl
  - docker-ce
  - docker-ce-cli
  - gnupg
  - helm
  - lsb-release
  - uidmap
  - net-tools
  - yq
  # fun utils
  - git
  - tmux
  - wget
groups:
  - docker
users:
  - name: ubuntu
    lock_passwd: true
    shell: /bin/bash
    ssh_authorized_keys:
    %{ for key in public_keys ~}
      - ${key}
    %{ endfor ~}
    groups:
      - docker
      - sudo
    sudo:
      - ALL=(ALL) NOPASSWD:ALL
  - name: k8s-api
    lock_passwd: true
    shell: /usr/sbin/nologin
    ssh_authorized_keys:
    %{ for key in public_keys ~}
      - ${key}
    %{ endfor ~}
    %{ for key in tunnel_keys ~}
      - ${key}
    %{ endfor ~}
ssh_genkeytypes:
  - ed25519
ssh_keys:
  ed25519_private: |
    ${ed25519_private}
  ed25519_public: ${ed25519_public}
runcmd:
  - systemctl disable --now docker.service docker.socket
  - rm -f /var/run/docker.sock
  - loginctl enable-linger ubuntu
  - chown ubuntu:root /home/ubuntu # in some versions docker setup has problems without it
  - su - ubuntu -c '/usr/local/sbin/setup.sh'
write_files:
  - encoding: b64
    content: ${setup_sha512}
    owner: root:root
    path: /etc/setup-sha512
  - content: net.ipv4.ip_unprivileged_port_start=80
    path: /etc/sysctl.d/unprivileged_port_start.conf
  - encoding: b64
    content: ${setup_sh}
    owner: root:root
    path: /usr/local/sbin/setup.sh
    permissions: '0755'
  - encoding: b64
    content: ${hpcs_cluster_yaml}
    owner: root:root
    path: /etc/hpcs/hpcs-cluster.yaml
    permissions: '0644'
  - encoding: b64
    content: ${kind_dashboard_admin_yaml}
    owner: root:root
    path: /etc/hpcs/admin-user.yaml
    permissions: '0644'
  - source:
      uri: https://kind.sigs.k8s.io/dl/v0.24.0/kind-Linux-amd64
    owner: root:root
    path: /usr/bin/kind
    permissions: '0755'
  - source:
      uri: https://dl.k8s.io/v1.31.2/bin/linux/amd64/kubectl
    owner: root:root
    path: /usr/bin/kubectl
    permissions: '0755'
fs_setup:
  - label: data
    filesystem: 'ext4'
    device: /dev/vdb
    overwrite: false
  - label: docker
    filesystem: 'ext4'
    device: /dev/vdc
    overwrite: false
mounts:
  - ['LABEL=data', /var/lib/data, "ext4", "defaults"]
  - ['LABEL=docker', /var/lib/docker, "ext4", "defaults"]
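The `%{ for … }` / `${…}` constructs in cloud-config.yaml are Terraform template directives, so the file is presumably rendered with `templatefile()` before being handed to the instance as user data. A sketch of what that rendering might look like; the variable names and file paths here are assumptions inferred from the placeholders, not taken from this commit:

```hcl
# hypothetical rendering of the cloud-config template into user data
user_data = templatefile("${path.module}/cloud-config.yaml", {
  public_keys               = var.public_keys # admin SSH keys from secrets/public_keys
  tunnel_keys               = var.tunnel_keys # keys limited to port tunnelling
  ed25519_private           = indent(4, file("secrets/ssh_host_ed25519_key"))
  ed25519_public            = file("secrets/ssh_host_ed25519_key.pub")
  setup_sh                  = filebase64("files/setup.sh")
  setup_sha512              = base64encode(filesha512("files/setup.sh"))
  hpcs_cluster_yaml         = filebase64("files/hpcs-cluster.yaml")
  kind_dashboard_admin_yaml = filebase64("files/admin-user.yaml")
})
```

The `filebase64()` calls line up with the `encoding: b64` entries under `write_files`, which let arbitrary file contents pass through cloud-init safely.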

terraform/files/admin-user.yaml (+18, new file)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

terraform/files/hpcs-cluster.yaml (+39, new file)

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: hpcs
networking:
  apiServerAddress: 0.0.0.0
  apiServerPort: 6444
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
        authorization-mode: "AlwaysAllow"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
  - containerPort: 30001
    hostPort: 30001
  - containerPort: 30002
    hostPort: 30002
  - containerPort: 30003
    hostPort: 30003
  - containerPort: 30004
    hostPort: 30004
  kubeadmConfigPatchesJSON6902:
  - group: kubeadm.k8s.io
    version: v1beta3
    kind: ClusterConfiguration
    patch: |
      - op: add
        path: /apiServer/certSANs/-
        value: MY_PUBLIC_IP
      - op: add
        path: /apiServer/certSANs/-
        value: MY_PUBLIC_HOSTNAME

terraform/files/setup.sh (+18, new file)

#!/bin/bash -eu
export XDG_RUNTIME_DIR=/run/user/1000

/usr/bin/dockerd-rootless-setuptool.sh install -f

MY_PUBLIC_IP=$(curl ifconfig.io 2> /dev/null)
export MY_PUBLIC_IP=${MY_PUBLIC_IP}
MY_PUBLIC_HOSTNAME=$(host "${MY_PUBLIC_IP}" | rev | cut -d " " -f 1 | tail -c +2 | rev)
export MY_PUBLIC_HOSTNAME=${MY_PUBLIC_HOSTNAME}
sed -e "s/MY_PUBLIC_IP/${MY_PUBLIC_IP}/" /etc/hpcs/hpcs-cluster.yaml > "${HOME}/hpcs-cluster.yaml"
sed -i -e "s/MY_PUBLIC_HOSTNAME/${MY_PUBLIC_HOSTNAME}/" "${HOME}/hpcs-cluster.yaml"
/usr/bin/kind create cluster --config "${HOME}/hpcs-cluster.yaml"

yq --yaml-output ".clusters[0].cluster.server = \"https://${MY_PUBLIC_HOSTNAME}:6444\"" "${HOME}/.kube/config" > "${HOME}/.kube/remote-config"

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
kubectl apply -f /etc/hpcs/admin-user.yaml
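The reverse-DNS pipeline in setup.sh is a bit cryptic: `host <ip>` ends its PTR answer with `... pointer <name>.`, so the script takes the last space-separated field and strips the trailing dot by reversing the string twice. A standalone illustration (the PTR line below is a made-up example, not output from this deployment):

```shell
# Example PTR answer in the format printed by `host`:
ptr_line="1.0.0.127.in-addr.arpa domain name pointer localhost."

# rev + cut -f 1 picks the *last* field (reversed), tail -c +2 drops the
# first byte (the reversed trailing dot), and the final rev restores order.
hostname=$(echo "${ptr_line}" | rev | cut -d " " -f 1 | tail -c +2 | rev)
echo "${hostname}"   # localhost
```

This avoids awk and handles the variable number of fields in `host` output, at the cost of readability.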
2 binary files changed (421 Bytes, 113 Bytes), not shown.
