# kind VM recipe

Recipe to deploy a simple VM with a running [kind](https://kind.sigs.k8s.io/) in Pouta.

## VM deployment

The VM is defined in [Terraform](https://www.terraform.io/), with its state stored in the `<project name>-terraform-state` bucket under your project in Allas.

To deploy or update, download a config file from Pouta for authentication (the `<project name>-openrc.sh` file).
You will also need `S3` credentials for accessing the bucket; the recipe below assumes you have them stored in [pass](https://www.passwordstore.org/).
Currently the VM also needs two secrets:
- the host SSH private key
- the host SSH public key (not really a secret, but we classify it as one)
The code looks for them in the following locations:
- `secrets/ssh_host_ed25519_key`
- `secrets/ssh_host_ed25519_key.pub`
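
If you need to generate a fresh host key pair, one way is the standard `ssh-keygen` invocation below, a sketch writing straight into the locations listed above (the comment string is arbitrary):

```shell
# Generate an ed25519 host key pair with no passphrase into the
# paths the Terraform code expects (see the list above).
mkdir -p secrets
ssh-keygen -t ed25519 -N '' -C 'kind-vm host key' -f secrets/ssh_host_ed25519_key
```

This produces both `secrets/ssh_host_ed25519_key` and `secrets/ssh_host_ed25519_key.pub` in one go.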

After cloning the repository, unlock the secrets with

    -> git-crypt unlock

Put the public SSH keys that should have admin access into the `secrets/public_keys` file.
If you want some users to only have access to tunnel ports from the VM, add their keys to the `secrets/tunnel_keys` file; if there are none, just `touch secrets/tunnel_keys`.
Once both files are present, you should be able to deploy the VM:

    # authenticate
    -> source project_2007468-openrc.sh
    # for simplicity of this example we just export the S3 credentials
    -> export AWS_ACCESS_KEY_ID=$(pass fancy_project/aws_key)
    -> export AWS_SECRET_ACCESS_KEY=$(pass fancy_project/aws_secret)
    # init
    -> terraform init
    # apply
    -> terraform apply

Then wait for things to finish, including package updates and installations on the VM.
As one of the outputs you should see the address of your VM, e.g.:

    Outputs:

    address = "128.214.254.127"

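The address can also be read back from the Terraform state at any time, which is handy for scripting the later `ssh`/`scp` commands. A sketch, assuming the `apply` above has completed:

```shell
# Read the "address" output from the Terraform state and reuse it.
VM_ADDR=$(terraform output -raw address)
ssh "ubuntu@${VM_ADDR}" hostname
```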
## Connecting to kind

It takes a few moments for everything to finish setting up on the VM.
Once that is done, the VM should be running a configured `kind` cluster with a dashboard.
You can download your config file and access the cluster; note that access to the API is restricted to trusted networks only:

    -> scp [email protected]:.kube/remote-config .
    -> export KUBECONFIG=$(pwd)/remote-config
    -> kubectl auth whoami
    ATTRIBUTE   VALUE
    Username    kubernetes-admin
    Groups      [kubeadm:cluster-admins system:authenticated]

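If `kubectl` cannot reach the API yet, it is usually just the VM still finishing its setup. Rather than retrying by hand, you can block until the cluster's nodes report ready (standard `kubectl` commands, nothing recipe-specific):

```shell
# Wait until every node in the kind cluster is Ready (up to 5 minutes).
kubectl wait --for=condition=Ready nodes --all --timeout=300s
```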
To check, for example, whether the dashboard is ready:

    -> kubectl get all --namespace kubernetes-dashboard
    NAME                                                       READY   STATUS    RESTARTS   AGE
    pod/kubernetes-dashboard-api-5cd64dbc99-xjbj8              1/1     Running   0          2m54s
    pod/kubernetes-dashboard-auth-5c8859fcbd-zt2lm             1/1     Running   0          2m54s
    pod/kubernetes-dashboard-kong-57d45c4f69-5gv2d             1/1     Running   0          2m54s
    pod/kubernetes-dashboard-metrics-scraper-df869c886-chxx4   1/1     Running   0          2m54s
    pod/kubernetes-dashboard-web-6ccf8d967-fsctp               1/1     Running   0          2m54s

    NAME                                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/kubernetes-dashboard-api               ClusterIP   10.96.149.208   <none>        8000/TCP   2m55s
    service/kubernetes-dashboard-auth              ClusterIP   10.96.140.195   <none>        8000/TCP   2m55s
    service/kubernetes-dashboard-kong-proxy        ClusterIP   10.96.35.136    <none>        443/TCP    2m55s
    service/kubernetes-dashboard-metrics-scraper   ClusterIP   10.96.222.176   <none>        8000/TCP   2m55s
    service/kubernetes-dashboard-web               ClusterIP   10.96.139.1     <none>        8000/TCP   2m55s

    NAME                                                   READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/kubernetes-dashboard-api               1/1     1            1           2m54s
    deployment.apps/kubernetes-dashboard-auth              1/1     1            1           2m54s
    deployment.apps/kubernetes-dashboard-kong              1/1     1            1           2m54s
    deployment.apps/kubernetes-dashboard-metrics-scraper   1/1     1            1           2m54s
    deployment.apps/kubernetes-dashboard-web               1/1     1            1           2m54s

    NAME                                                             DESIRED   CURRENT   READY   AGE
    replicaset.apps/kubernetes-dashboard-api-5cd64dbc99              1         1         1       2m54s
    replicaset.apps/kubernetes-dashboard-auth-5c8859fcbd             1         1         1       2m54s
    replicaset.apps/kubernetes-dashboard-kong-57d45c4f69             1         1         1       2m54s
    replicaset.apps/kubernetes-dashboard-metrics-scraper-df869c886   1         1         1       2m54s
    replicaset.apps/kubernetes-dashboard-web-6ccf8d967               1         1         1       2m54s

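Rather than polling `get all` until everything shows `Running`, you can also block until all the dashboard deployments report available (again a plain `kubectl` command, not specific to this recipe):

```shell
# Wait until every deployment in the dashboard namespace is Available.
kubectl -n kubernetes-dashboard wait --for=condition=Available deployment --all --timeout=300s
```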
By default the dashboard in this setup is not particularly secure, so no external route is set up. To access it:

    # Generate a token to log in to the dashboard with
    -> kubectl -n kubernetes-dashboard create token admin-user
    # Forward the dashboard to your machine
    -> kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
    Forwarding from 127.0.0.1:8443 -> 8443
    Forwarding from [::1]:8443 -> 8443

Then view the dashboard in your browser at `https://localhost:8443`, using the generated token to log in.
Note that the cluster and the dashboard use a self-signed certificate, so your browser is going to complain about it.
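
The `create token admin-user` command above assumes an `admin-user` ServiceAccount already exists in the cluster. If the recipe does not provision it, a minimal sketch following the usual dashboard setup would be:

```shell
# Hypothetical setup for the admin-user ServiceAccount used above;
# skip this if the recipe already creates it.
kubectl -n kubernetes-dashboard create serviceaccount admin-user
kubectl create clusterrolebinding admin-user \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:admin-user
```

Binding `cluster-admin` gives the token full cluster access, which is fine for a throwaway `kind` VM but not something to copy to a shared cluster.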