- To debug the templates, you can use the following command:
  ```shell
  helm template prod -f values.prod.yaml . --debug > tmp/templates.yaml
  ```
- This will dump the rendered manifests into a file called `templates.yaml` inside the `tmp` directory
- You can then use this file to debug the templates
- Installation on the Cloud Provider is a 3-part process:
  - Ingress Controller - A cluster-wide ingress controller for all namespaces
  - Cert-Manager - A cluster-wide cert-manager for all namespaces
  - App Installation - Installation of the app in a specific namespace
- The first 2 steps are one-time setup & can be skipped if already done
- The first 2 steps are also OPTIONAL for testing on a local machine
- `Ingress Controller` is a combination of resources that routes traffic from the cloud provider to the application
- To understand it better, refer to the documentation
- There are 2 major ingresses available:
| Ingress Name | Description | URL | Cost |
|---|---|---|---|
| Kubernetes/ingress-nginx | Open Source Ingress managed by K8s Community. | Link | Totally Free |
| Nginx-Ingress | From Nginx Inc. Free with limited features. | Link | Free (Limited) / Nginx+ |
- Going with `Kubernetes/ingress-nginx`, as it's free & does not require any additional setup
- The `Ingress Controller` would be installed at the `Cluster Level` as a `Shared Resource`
- This shared ingress controller would manage the routing for all the namespaces in the cluster
- All a namespace needs to do is to create an `Ingress resource` with the required routes
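As an illustration, a namespace-level ingress resource routed through the shared controller might look like the sketch below. The host, service name & port here are hypothetical placeholders, not values from this project:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                  # hypothetical name
spec:
  ingressClassName: nginx            # matches the shared ingress-nginx controller
  rules:
    - host: api.example.com          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-backend    # placeholder service in this namespace
                port:
                  number: 80
```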
- To begin, create a namespace for the ingress controller:
  ```shell
  kubectl create namespace ingress
  ```
- Add the ingress-nginx Helm repository:
  ```shell
  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  helm repo update
  ```
- Install the ingress controller in the namespace:
  ```shell
  helm install cluster-ingress ingress-nginx/ingress-nginx -n ingress
  ```
- This will also provision an `External Load Balancer (ELB)` in the cloud provider, which will be used to route traffic to the k8s cluster
- To see all resources created by the Ingress Controller & to debug:
  ```shell
  kubectl get all,configmaps,secrets -n ingress
  ```
- Cert-Manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources.
- It is used to manage SSL certificates for the application
- `SSL Termination` in k8s can be done at 2 levels:
  - `External Load Balancer` - provisioned by the Cloud Provider
  - `Ingress Controller`
- The following table explains the pros & cons of both approaches:
| Feature/Aspect | External Load Balancer Termination | Ingress Controller Termination |
|---|---|---|
| SSL Processing Overhead | Offloaded to Load Balancer | Handled by Kubernetes nodes |
| SSL Certificate Management | Managed externally (manual or specific integrations) | Managed within the cluster with cert-manager |
| Connection Encryption | Encrypted only up to the Load Balancer | Encrypted end-to-end up to the pod |
| Centralized SSL Management | Yes (all certs managed in one place) | No (each ingress might have its own certs) |
| Cost | Potential extra costs for LB-based SSL processing | Might save on LB costs but use more node resources |
| Ease of Setup | Varies & depends on Cloud Provider | Consistent with cert-manager |
| End-to-end Encryption | No (traffic decrypted at LB) | Yes (fully encrypted up to the pod) |
| Integration with Let's Encrypt | Manual or specific integrations | Direct (e.g., cert-manager) |
| Auto Renewal of Certificates | Depends on provider | Generally automated with cert-manager |
| Latency | Potential reduction as SSL processing is offloaded | Slight increase due to processing within the cluster |
- For this project, we'll go with `SSL Termination at Ingress Controller` for the following reasons:
  - Fully Automated Setup with `cert-manager`
  - Auto Renewal of Certificates
  - Consistent across all Cloud Providers
  - Integration with `Let's Encrypt`
- Installation Process:
  - Add the Jetstack Helm repository:
    ```shell
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    ```
  - Install the Cert-Manager Helm chart:
    ```shell
    helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.13.1 --set installCRDs=true
    ```
- Docs for Reference: the following docs illustrate the steps for SSL management at the ELB & the Ingress Controller separately:
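Once cert-manager is installed, certificates are requested through an issuer resource. A sketch of a Let's Encrypt `ClusterIssuer` using the HTTP-01 solver is shown below; the issuer name & email are placeholders, and the actual issuer used by this chart may differ:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod                  # assumed name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod              # secret that stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx                  # solve challenges via the ingress-nginx controller
```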
- Installation would vary based on the underlying infra
- For a `cloud provider`, please ensure the deployment of `Ingress Controller` & `Cert-Manager` is complete
- Use the following values.yaml file, based on the infra:
  - `values.yaml` - For local installation (Docker Desktop)
  - `values.dev.yaml` - For the Development Environment on the Cloud Provider
  - `values.prod.yaml` - For the Production Environment on the Cloud Provider
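To make the difference concrete, here is a rough sketch of how the files might differ. The keys are assumptions based on the chart options referenced elsewhere in this doc (`ingress.enabled`, `deployment.service.type`); the exact structure depends on the chart:

```yaml
# values.yaml (local / Docker Desktop) - illustrative only
ingress:
  enabled: false
deployment:
  service:
    type: NodePort
    nodePort: 30080        # hypothetical port
---
# values.prod.yaml (cloud provider) - illustrative only
ingress:
  enabled: true
deployment:
  service:
    type: ClusterIP
```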
- The image architecture should match the node architecture
- To check the node architecture, run the following command:
  ```shell
  kubectl get nodes -o=jsonpath='{.items[0].status.nodeInfo.architecture}'
  ```
- Then, ensure that you have a docker image with the same architecture
- To create an image with a specific architecture, you can use the following commands:
  ```shell
  docker buildx create --use
  docker buildx build --platform linux/amd64 -t varunbotiga/botiga-server:1.0.0-amd64 . --push
  ```
- Platform can have multiple values like `linux/amd64`, `linux/arm64`, `linux/arm/v7`
- The recommended image name format is `<docker-username>/<image-name>:<version>-<architecture>`
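The naming scheme can be composed mechanically in a script; a trivial illustration (all values here are just examples):

```shell
# Compose an image tag following <docker-username>/<image-name>:<version>-<architecture>
DOCKER_USER=varunbotiga
IMAGE=botiga-server
VERSION=1.0.0
ARCH=amd64

TAG="${DOCKER_USER}/${IMAGE}:${VERSION}-${ARCH}"
echo "$TAG"   # varunbotiga/botiga-server:1.0.0-amd64
```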
- As this cluster could be used for multiple applications with different environments, please create a namespace for the application
- Name the namespace as `<env>-<app-name>`, e.g. `dev-botiga`, `prod-botiga`:
  ```shell
  kubectl create namespace prod-botiga
  ```
- This step is optional
- It simply avoids the need of adding `-n prod-botiga` to every `kubectl` command:
  ```shell
  kubectl config set-context --current --namespace=prod-botiga
  ```
- Create a secret for pulling images from the docker registry, of type `docker-registry`:
  ```shell
  kubectl create secret docker-registry docker-registry-secret \
    --docker-server=docker.io \
    --docker-username=varunbotiga \
    --docker-password=<Your-Docker-Registry-Token> \
    --docker-email=varun@botiga.app
  ```
- Upload app confidential information from the `.env` file to a secret of type `generic`
- This approach gives us the flexibility to set custom secret values based on the environment:
  ```shell
  kubectl create secret generic app-secret --from-env-file=.env.prod
  ```
- To verify the secrets, you can use the following command:
  ```shell
  kubectl describe secrets app-secret
  ```
- This will not show the actual secret values, but will give you other metadata like when the secret was created.
- To verify the secret values, you can use the following commands:
  ```shell
  kubectl get secret docker-registry-secret -o jsonpath="{.data.\.dockerconfigjson}" | base64 --decode
  kubectl get secret app-secret -o jsonpath="{.data.NODE_ENV}" | base64 --decode
  ```
- The Firebase SDK file is required to access Firebase services from the application
- As it is a JSON file, it needs to be `mounted as a volume`, & the path of the volume should be set as our `GOOGLE_APPLICATION_CREDENTIALS` environment variable
- To mount this file, create a secret from the JSON file:
  ```shell
  kubectl create secret generic firebase-sdk --from-file=firebase-sdk.json=<path-to-firebase-sdk-json-file>
  ```
- Please ensure that the key name in the secret is `firebase-sdk.json`, as it is referenced in the deployment template
- Accessing in Template:
- Note: The following 2 code pieces are already embedded in the deployment template & are here only for clarity:
- Mount the secret as a volume in a pod:
  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  spec:
    template:
      spec:
        containers:
          - name: my-nodejs-container
            image: my-nodejs-image
            volumeMounts:
              - name: firebase-sdk-volume
                mountPath: "/etc/firebase-sdk"
        volumes:
          - name: firebase-sdk-volume
            secret:
              secretName: firebase-sdk
  ```
- Set the variable path to this file:
  ```yaml
  env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: "/etc/firebase-sdk/firebase-sdk.json"
  ```
- For local installation:
  - `ingress.enabled` is set to `false` in `values.yaml`
  - `deployment.service.type` is set to `NodePort` with a `nodePort` value
  ```shell
  helm install prod . -f values.yaml
  ```
- If the installation is successful and `service.type` is `NodePort`, then the service on Docker Desktop can be tested at `http://localhost:<node-port>`
- To install the app, run the following command:
  ```shell
  helm install prod . -f values.prod.yaml
  ```
- This will install the app in the `prod-botiga` namespace
- The next step is to create an `A Record` with the DNS provider, pointing the domain name at the `ELB IP Address`
- Also, check if the `SSL Certificate` is provisioned by `Cert-Manager` for the domain name:
  ```shell
  kubectl get certificates
  kubectl describe certificate prod-botiga-tls
  ```
- The following resources would be created in the app namespace:
  ```shell
  kubectl get all,configmaps,secrets
  ```
  ```
  NAME                                      READY   STATUS    RESTARTS   AGE
  pod/dev-botiga-backend-77fbd9b886-pfh4l   1/1     Running   0          29m
  pod/dev-botiga-backend-77fbd9b886-whwc7   1/1     Running   0          23h

  NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
  service/dev-botiga-backend   ClusterIP   10.245.245.161   <none>        80/TCP    23h

  NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/dev-botiga-backend   2/2     2            2           23h

  NAME                                            DESIRED   CURRENT   READY   AGE
  replicaset.apps/dev-botiga-backend-77fbd9b886   2         2         2       23h

  NAME                         DATA   AGE
  configmap/kube-root-ca.crt   1      24h

  NAME                               TYPE                             DATA   AGE
  secret/app-secret                  Opaque                           20     24h
  secret/dev-botiga-tls              kubernetes.io/tls                2      28m
  secret/docker-registry-secret      kubernetes.io/dockerconfigjson   1      24h
  secret/firebase-sdk                Opaque                           1      24h
  secret/letsencrypt-dev             Opaque                           1      29m
  secret/sh.helm.release.v1.dev.v1   helm.sh/release.v1               1      23h
  ```
- To uninstall the app:
  ```shell
  helm uninstall prod
  ```
- Check all the resources created:
  ```shell
  kubectl get all
  ```
- You can also check the status of the pods:
  ```shell
  kubectl get pods
  ```
- Describe a pod for details:
  ```shell
  kubectl describe pod <pod-name>
  ```
- Check the logs of the pod:
  ```shell
  kubectl logs <pod-name>
  ```
- Check the first 100 lines of logs:
  ```shell
  kubectl logs <pod-name> | head -n 100
  ```
- Access the shell of the container:
  ```shell
  kubectl exec -it <pod-name> -n <namespace> -- /bin/sh
  ```