varmeh/helm-botiga-server
Helm Chart for Botiga Backend

Debug Templates

  • To debug the templates, you can render them with the following command:
helm template prod -f values.prod.yaml . --debug > tmp/templates.yaml
  • This will dump the rendered manifests into tmp/templates.yaml
  • You can then inspect this file to debug the templates

Installation

  • Installation on a Cloud Provider is a 3-part process:
    1. Ingress Controller - A cluster-wide Ingress Controller for all namespaces
    2. Cert-Manager - A cluster-wide Cert-Manager for all namespaces
    3. App Installation - Installation of the app in a specific namespace
  • The first 2 steps are a one-time setup & can be skipped if already done
  • The first 2 steps are also OPTIONAL for testing on a local machine

1. Ingress Controller

  • An Ingress Controller is a combination of resources that routes traffic from the cloud provider to the application

  • To understand it better, refer to the documentation

  • There are 2 major ingress controllers available:

| Ingress Name | Description | URL | Cost |
| --- | --- | --- | --- |
| Kubernetes/ingress-nginx | Open source ingress managed by the K8s community | Link | Totally free |
| Nginx-Ingress | From Nginx Inc. Free with limited features. | Link | Free (Limited) / Nginx+ |

  • Going with Kubernetes/ingress-nginx, as it's free & does not require any additional setup
  • The Ingress Controller will be installed at the cluster level as a shared resource

  • This shared ingress controller would manage the routing for all the namespaces in the cluster

  • All a namespace needs to do is to create an ingress resource with the required routes

  • To begin, create a namespace for the ingress controller:

kubectl create namespace ingress
  • Add the ingress-nginx Helm repository:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
  • Install ingress controller in the namespace:
helm install cluster-ingress ingress-nginx/ingress-nginx -n ingress
  • This will also provision an External Load Balancer (ELB) in the cloud provider, which will be used to route traffic to the k8s cluster

  • To see all resources created by Ingress Controller & debug:

kubectl get all,configmaps,secrets -n ingress
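
As an illustration of the "all a namespace needs is an ingress resource" point above, a per-namespace Ingress routed through the shared controller might look roughly like this sketch (the resource name, host, and backend service name are hypothetical; the chart's own ingress template generates the real resource):

```yaml
# Hypothetical per-namespace Ingress, routed via the shared ingress-nginx controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: botiga-backend            # hypothetical name
  namespace: prod-botiga
spec:
  ingressClassName: nginx         # matches the shared ingress-nginx controller
  rules:
  - host: api.example.com         # hypothetical domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: botiga-backend  # hypothetical service in this namespace
            port:
              number: 80
```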

2. Cert-Manager

  • Cert-Manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources.
  • It is used to manage SSL Certificates for the application
  • SSL Termination in k8s can be done at 2 levels:
    • External Load Balancer - Provisioned by Cloud Provider
    • Ingress Controller

The following table explains the pros & cons of both approaches:

| Feature/Aspect | External Load Balancer Termination | Ingress Controller Termination |
| --- | --- | --- |
| SSL Processing Overhead | Offloaded to Load Balancer | Handled by Kubernetes nodes |
| SSL Certificate Management | Managed externally (manual or specific integrations) | Managed within the cluster with cert-manager |
| Connection Encryption | Encrypted only up to the Load Balancer | Encrypted end-to-end up to the pod |
| Centralized SSL Management | Yes (all certs managed in one place) | No (each ingress might have its own certs) |
| Cost | Potential extra costs for LB-based SSL processing | Might save on LB costs but use more node resources |
| Ease of Setup | Varies & depends on Cloud Provider | Consistent with cert-manager |
| End-to-end Encryption | No (traffic decrypted at LB) | Yes (fully encrypted up to the pod) |
| Integration with Let's Encrypt | Manual or specific integrations | Direct (e.g., cert-manager) |
| Auto Renewal of Certificates | Depends on provider | Generally automated with cert-manager |
| Latency | Potential reduction as SSL processing is offloaded | Slight increase due to processing within the cluster |
  • For this project, we'll go with SSL termination at the Ingress Controller for the following reasons:

    • Fully Automated Setup with cert-manager
    • Auto Renewal of Certificates
    • Consistent across all Cloud Providers
    • Integration with Let's Encrypt
  • Installation Process:

  • Add the Jetstack Helm repository:

helm repo add jetstack https://charts.jetstack.io
helm repo update
  • Install the Cert-Manager Helm chart:
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.13.1 --set installCRDs=true 
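
Before cert-manager can request certificates, it needs an issuer resource. A minimal ClusterIssuer sketch for Let's Encrypt over HTTP-01 (the issuer name and email are placeholders; the actual issuer used by this project is defined in the chart's templates):

```yaml
# Sketch of a Let's Encrypt ClusterIssuer; names and email are placeholders
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod                # placeholder issuer name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod            # secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx                  # solve challenges via the shared ingress
```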

3. App Installation

  • Installation varies based on the underlying infra
  • For a cloud provider, please ensure the deployment of the Ingress Controller & Cert-Manager is complete
  • Use the following values.yaml file, based on the infra:
    • values.yaml - For local installation (Docker Desktop)
    • values.dev.yaml - For Development Environment on Cloud Provider
    • values.prod.yaml - For Production Environment on Cloud Provider

Image Architecture

  • The image architecture should match the node architecture
  • To check the node architecture, run the following command:
kubectl get nodes -o=jsonpath='{.items[0].status.nodeInfo.architecture}'
  • Then, ensure that you have a docker image with the same architecture
  • To create an image with a specific architecture, you can use the following command:
docker buildx create --use
docker buildx build --platform linux/amd64 -t varunbotiga/botiga-server:1.0.0-amd64 . --push
  • Platform can take multiple values, e.g. linux/amd64, linux/arm64, linux/arm/v7
  • The recommended image name format is <docker-username>/<image-name>:<version>-<architecture>

Create Namespace

  • As this cluster could be used for multiple applications with different environments, please create a namespace for the application
  • Name the namespace as <env>-<app-name>, e.g. dev-botiga, prod-botiga
kubectl create namespace prod-botiga

Set Default Namespace

  • This step is optional
  • It simply avoids the need to add -n prod-botiga to every kubectl command
kubectl config set-context --current --namespace=prod-botiga

Create Secrets

Docker Registry Secret
  • Secret for pulling images from the Docker registry, of type docker-registry
kubectl create secret docker-registry docker-registry-secret \
  --docker-server=docker.io \
  --docker-username=varunbotiga \
  --docker-password=<Your-Docker-Registry-Token> \
  --docker-email=varun@botiga.app
App Secrets
  • Upload app confidential information from .env file to a secret of type - generic
  • This approach gives us flexibility to set custom secret values based on environments
kubectl create secret generic app-secret --from-env-file=.env.prod
Verifying Secrets
  • To verify the secrets, you can use the following command:
kubectl describe secrets app-secret
  • This will not show the actual secret values, but will give you other metadata like when the secret was created.

  • To verify the secret values, you can use the following command:

kubectl get secret docker-registry-secret -o jsonpath="{.data.\.dockerconfigjson}" | base64 --decode
kubectl get secret app-secret -o jsonpath="{.data.NODE_ENV}" | base64 --decode
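
The --decode step above is needed because Kubernetes stores every value under a secret's .data field base64-encoded, and jsonpath returns that encoded form. A quick local illustration (no cluster required; "production" is an example NODE_ENV value):

```shell
# Kubernetes stores secret values base64-encoded:
echo -n 'production' | base64
# → cHJvZHVjdGlvbg==

# jsonpath returns the encoded form, hence the decode step:
echo -n 'cHJvZHVjdGlvbg==' | base64 --decode
# → production
```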
Mounting Firebase SDK File
  • Firebase SDK File is required to access Firebase Services from the application
  • As it is a JSON file, it needs to be mounted as a volume, & the path of the mounted file should be set as our GOOGLE_APPLICATION_CREDENTIALS environment variable
  • To mount this file, Create a Secret from JSON File:
kubectl create secret generic firebase-sdk --from-file=firebase-sdk.json=<path-to-firebase-sdk-json-file>
  • Please ensure that the key name in the secret is firebase-sdk.json, as it is referenced in the deployment template

  • Accessing in Template:

Note: The following 2 snippets are already embedded in the deployment template & are shown here only for clarity:

  1. Mount the Secret as a Volume in a Pod:

      apiVersion: apps/v1
      kind: Deployment
      spec:
        template:
          spec:
            containers:
            - name: my-nodejs-container
              image: my-nodejs-image
              volumeMounts:
              - name: firebase-sdk-volume
                mountPath: "/etc/firebase-sdk"
            volumes:
            - name: firebase-sdk-volume
              secret:
                secretName: firebase-sdk
  2. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the mounted file path:

      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: "/etc/firebase-sdk/firebase-sdk.json"

Install App

Local Installation
  • For local installation, ingress.enabled is set to false in values.yaml
  • deployment.service.type is set to NodePort with a nodePort value.
helm install prod . -f values.yaml
  • If installation is successful and service.type is NodePort, the service on Docker Desktop can be tested at http://localhost:<node-port>
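
The local overrides described above might look roughly like this sketch (key names follow the bullets above; the exact structure and the nodePort value depend on the chart's actual values.yaml):

```yaml
# Sketch of local-install overrides; exact keys live in the chart's values.yaml
ingress:
  enabled: false        # no ingress controller needed locally
deployment:
  service:
    type: NodePort      # expose the app on a fixed node port
    nodePort: 30080     # hypothetical port; app reachable at http://localhost:30080
```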
Cloud Installation
  • To install the app, run the following command:
helm install prod . -f values.prod.yaml
  • This will install the app in the prod-botiga namespace
  • The next step is to create an A record with the DNS provider for the domain name, to direct traffic to the ELB IP address
  • Also, check that the SSL certificate has been provisioned by Cert-Manager for the domain name
kubectl get certificates
kubectl describe certificate prod-botiga-tls
  • The following resources will be created in the app namespace:
kubectl get all,configmaps,secrets

NAME                                      READY   STATUS    RESTARTS   AGE
pod/dev-botiga-backend-77fbd9b886-pfh4l   1/1     Running   0          29m
pod/dev-botiga-backend-77fbd9b886-whwc7   1/1     Running   0          23h

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/dev-botiga-backend   ClusterIP   10.245.245.161   <none>        80/TCP    23h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dev-botiga-backend   2/2     2            2           23h

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/dev-botiga-backend-77fbd9b886   2         2         2       23h

NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      24h

NAME                               TYPE                             DATA   AGE
secret/app-secret                  Opaque                           20     24h
secret/dev-botiga-tls              kubernetes.io/tls                2      28m
secret/docker-registry-secret      kubernetes.io/dockerconfigjson   1      24h
secret/firebase-sdk                Opaque                           1      24h
secret/letsencrypt-dev             Opaque                           1      29m
secret/sh.helm.release.v1.dev.v1   helm.sh/release.v1               1      23h

Uninstall Chart

helm uninstall prod

Debug Installation

  • Check all the resources created:
kubectl get all
  • You can also check the status of the pods:
kubectl get pods
  • Describe Pod for details:
kubectl describe pod <pod-name>
  • Check the logs of the pod:
kubectl logs <pod-name>
  • Check the first 100 lines of logs:
kubectl logs <pod-name> | head -n 100
  • Access the shell of the container:
kubectl exec -it <pod-name> -n <namespace> -- /bin/sh

About

Helm Charts for all Products
