Creating Deployments, Using EKS and Ingress Controller

Standard "Infrastructure as Code" (IaC) approach using eksctl (the official CLI for Amazon EKS). Building a Kubernetes cluster in two distinct phases: first the Control Plane, then the Worker Nodes.

Step 1: Configure AWS CLI

Command:

aws configure

What it does: It stores your credentials for the AWS CLI. It creates local files (~/.aws/credentials and ~/.aws/config) holding your access keys and default region, which later aws and eksctl commands use to authenticate against the AWS API.

Context: In a real project, you rarely type these keys manually. Instead, CI/CD pipelines (like Jenkins or GitHub Actions) inject these credentials as environment variables so automation scripts can talk to the AWS API securely.
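
For example, a pipeline step might export the credentials as environment variables instead of running aws configure. A minimal sketch (the variable values are placeholders injected by your CI secret store):

```bash
# Hypothetical CI/CD step: the AWS CLI and SDKs pick these variables up automatically,
# so no ~/.aws/credentials file is needed on the build agent.
export AWS_ACCESS_KEY_ID="<injected-by-ci-secret-store>"
export AWS_SECRET_ACCESS_KEY="<injected-by-ci-secret-store>"
export AWS_DEFAULT_REGION="us-east-1"

# Quick sanity check that the credentials work
aws sts get-caller-identity
```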

Step 2: Creating EKS Cluster (The Control Plane)

Command:

eksctl create cluster --name=<cluster-name> --without-nodegroup

This provisions the EKS Control Plane. This includes the Kubernetes API Server, etcd database (storage), Scheduler, and Controller Manager.

Resources Created: A CloudFormation stack that builds a VPC (Virtual Private Cloud), Subnets, Route Tables, Internet Gateway, and the EKS Control Plane endpoints.

Why --without-nodegroup? By default, eksctl tries to create worker nodes immediately. Using this flag tells AWS: "Just build the master server for now; I will configure the worker servers specifically later."

Context: Separating the Control Plane from the Node Groups is best practice. It allows you to upgrade the Kubernetes version of the Control Plane without immediately forcing an update on your worker nodes, reducing downtime risk.
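
To confirm the Control Plane came up, a quick check (assuming the cluster name used above; eksctl names the cluster stack eksctl-&lt;cluster-name&gt;-cluster):

```bash
# Should report "ACTIVE" once the Control Plane is ready
aws eks describe-cluster --name <cluster-name> --query "cluster.status" --output text

# The CloudFormation stack eksctl created for the cluster
aws cloudformation describe-stacks --stack-name eksctl-<cluster-name>-cluster --query "Stacks[0].StackStatus"
```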

Step 3: Associate IAM OIDC Provider

Command:

eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --approve

This creates an OpenID Connect (OIDC) identity provider in AWS IAM. It links your Kubernetes cluster’s RBAC (Role-Based Access Control) system to AWS IAM.

Context (Crucial): IRSA (IAM Roles for Service Accounts): Without this, if a specific Pod needs to access an S3 bucket, you have to give permission to the entire Node. With OIDC, you can give permission only to that specific Pod. This follows the Principle of Least Privilege.
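
As an illustration of IRSA (not part of this project's setup), eksctl can create a Kubernetes ServiceAccount backed by an IAM role. A sketch where the service-account name and attached policy are just examples:

```bash
# Hypothetical example: a ServiceAccount named "s3-reader" whose Pods can read S3,
# instead of granting S3 access to the entire node's IAM role.
eksctl create iamserviceaccount \
  --cluster <cluster-name> \
  --namespace default \
  --name s3-reader \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve
```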

Step 4: Create Node Group (The Workers)

Command:

eksctl create nodegroup --cluster=<cluster-name> --name=<node-group-name> --node-type=t2.small --nodes=2 --nodes-min=2 --nodes-max=4 --node-volume-size=8 --ssh-access --ssh-public-key=<ssh-key-name> --managed --asg-access --external-dns-access --full-ecr-access --appmesh-access --alb-ingress-access

This command provisions the EC2 instances where your actual applications (Pods) will run.

| Flag | Technical Meaning | Real Project Usage |
| --- | --- | --- |
| `--managed` | Creates an AWS Managed Node Group. | AWS handles patching and updating the EC2 operating system (AMI) for you; you just click "Update" in the console. |
| `--node-type=t2.small` | Defines the CPU/RAM size of each worker. | Warning: instances this small (t2.micro has 1 vCPU / 1 GB RAM, t2.small only 2 GB) are usually too tight for EKS. The system components (DaemonSets) eat much of the RAM, leaving little room for your app. In real projects, we usually start with t3.medium or m5.large. |
| `--nodes-min=2 --nodes-max=4` | Configures an Auto Scaling Group (ASG). | If your app gets popular and traffic spikes, AWS automatically adds servers (up to 4). When traffic drops, it removes servers (down to 2) to save money. |
| `--external-dns-access` | Attaches the IAM policy for Route53. | Allows the cluster to automatically create DNS records (like myapp.example.com) when you deploy services. |
| `--alb-ingress-access` | Attaches the IAM policy for Load Balancers. | Allows the cluster to automatically provision an Application Load Balancer (ALB) to route internet traffic to your pods. |
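
Once the node group exists, you can confirm what eksctl built. A quick check using the cluster name from the command above:

```bash
# List the node group and its scaling configuration (min/max/desired)
eksctl get nodegroup --cluster <cluster-name>

# The underlying Auto Scaling Group is also visible via the AWS CLI
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[].{Name:AutoScalingGroupName,Min:MinSize,Max:MaxSize,Desired:DesiredCapacity}" \
  --output table
```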

Verifications

View all EKS Clusters in your AWS Account:

aws eks list-clusters

If using profiles or specific regions, you may need to add flags:

aws eks list-clusters --region us-east-1

View Clusters saved in your local configuration:

kubectl config get-clusters

Step 1: Connect your terminal to the cluster

Even though you created the cluster, your local kubectl tool might not know which cluster to talk to yet. You need to update your "kubeconfig" file.

Command:

aws eks update-kubeconfig --region us-east-1 --name <cluster-name>

Success Indicator: It should say Updated context arn:aws:eks:us-east-1:XXXX:cluster/<cluster-name> in /Users/yourname/.kube/config.

Step 2: Check the Nodes

Command:

kubectl get nodes

STATUS: This is the most important column.

  • Ready: Success. The node is healthy, the kubelet is running, and it has networking.
  • NotReady: The node is online, but the Kubernetes networking (CNI) plugin hasn't started yet (see the describe check below).
  • Unknown or (Empty): If the list is empty, your Node Group failed to create (likely an IAM issue, or the undersized t2 instances timed out).
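
A minimal way to dig into a NotReady node (the node name is whatever kubectl get nodes printed):

```bash
# The "Conditions" section shows why the kubelet/CNI is unhappy,
# and "Events" shows recent failures (image pulls, networking, etc.)
kubectl describe node <node-name>
```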

Step 3: Check the System Pods (The "Core Components")

If nodes are in NotReady status, or you just want to be 100% sure everything is healthy, check the kube-system namespace. This is where AWS runs its networking and proxy tools.

Command:

kubectl get pods -n kube-system

Troubleshooting: If any of these pods show errors, go to AWS Console -> CloudFormation and review the events on the eksctl stacks.
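
If something in kube-system is crashing, these are the usual next steps. A sketch: the pod name comes from the previous command, and the stack name follows eksctl's eksctl-&lt;cluster&gt;-nodegroup-&lt;node-group&gt; naming convention:

```bash
# Inspect a failing system pod (aws-node, coredns, kube-proxy, ...)
kubectl describe pod <pod-name> -n kube-system
kubectl logs <pod-name> -n kube-system

# Check the node-group CloudFormation stack for failed events
aws cloudformation describe-stack-events \
  --stack-name eksctl-<cluster-name>-nodegroup-<node-group-name> \
  --max-items 20
```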

Step 4: Check for Worker Nodes

Command:

kubectl get nodes

1. Type: NodePort - Creating Namespaces and Deployments

By now we have the EKS Control Plane and a managed node group with a minimum of 2 and a maximum of 4 worker nodes.

You can organize files by environment or by resource type. Since this is for practice, we will organize by resource type.

- `namespaces/`: Defines the virtual boundaries (e.g., dev, prod).
- `deployments/`: Defines the applications (the "software").
- `services/`: Defines the networking (how to talk to the software).

**Commands to create directory structure:**

```bash
mkdir -p k8s-practice/00-namespaces
mkdir -p k8s-practice/01-deployments
mkdir -p k8s-practice/02-services
cd k8s-practice
```

1. Creating Namespace yaml

File: 00-namespaces/production.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: prod-apps
```

2. Deployments

This tells Kubernetes what to run.

  • Labels (app: name): This is the most critical part. This is how the Service (networking) finds the Pods later.
  • Replicas: We want 2 copies of each for High Availability.

File: 01-deployments/all-apps.yaml (Note: in a real project these would be separate files, but for practice we can put them in one file using --- as a separator)

```yaml
# App 1: Portfolio App
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <portfolio-app>-deploy
  namespace: prod-apps # <--- Puts it in our custom namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: <portfolio-app> # <--- The ID tag
  template:
    metadata:
      labels:
        app: <portfolio-app> # <--- Stamping the Pods with the ID
    spec:
      containers:
        - name: portfolio-container
          image: <docker-hub-username>/<portfolio-image>:latest
          ports:
            - containerPort: 80

---
# App 2: Restaurant App
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <restaurant-app>-deploy
  namespace: prod-apps
spec:
  replicas: 2
  selector:
    matchLabels:
      app: <restaurant-app>
  template:
    metadata:
      labels:
        app: <restaurant-app>
    spec:
      containers:
        - name: <restaurant-app>-container
          image: <docker-hub-username>/<restaurant-image>:12
          ports:
            - containerPort: 80

---
# App 3: Apache Web Server (Simple Test)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deploy
  namespace: prod-apps
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache-container
          image: httpd:alpine
          ports:
            - containerPort: 80
```
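
After applying these (step 4 below), you can verify that the label "stamping" worked, because selecting by label should return exactly the Pods from one Deployment. A quick check (apache is the only non-placeholder label above):

```bash
# Pods carrying the app=apache label are the ones the apache Service will target
kubectl get pods -n prod-apps -l app=apache

# Each Deployment should report 2/2 ready replicas
kubectl get deployments -n prod-apps
```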

3. Services (Type: NodePort Networking)

[Image of Kubernetes NodePort Service diagram]

Why: Pods are ephemeral (they die and get new IPs), so Services provide a stable address. The NodePort strategy: Kubernetes opens the same port (in the 30000-32767 range) on every worker node and forwards traffic arriving there to the Service.

  • Why use it here? It proves the app is running and reachable.
  • Why NOT in the Real World?
    • Security: You are opening ports directly on your servers. (Security Groups can restrict who reaches them, but managing those rules is an ongoing inconvenience.)
    • Inconvenience: Users have to type http://192.168.1.50:30001. They can't remember that.
    • Scale: You only have ~2700 ports available.

File: 02-services/nodeport-services.yaml

```yaml
# Service for Portfolio (Port 30001)
apiVersion: v1
kind: Service
metadata:
  name: <portfolio-app>-svc
  namespace: prod-apps
spec:
  type: NodePort # <--- The Strategy
  selector:
    app: <portfolio-app> # <--- Must match Deployment labels exactly!
  ports:
    - port: 80 # Port the Service listens on internally
      targetPort: 80 # Port the Container is running on
      nodePort: 30001 # External Port (We hardcode it for practice)

---
# Service for Restaurant App (Port 30002)
apiVersion: v1
kind: Service
metadata:
  name: <restaurant-app>-svc
  namespace: prod-apps
spec:
  type: NodePort
  selector:
    app: <restaurant-app>
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30002

---
# Service for Apache (Port 30003)
apiVersion: v1
kind: Service
metadata:
  name: apache-svc
  namespace: prod-apps
spec:
  type: NodePort
  selector:
    app: apache
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30003
```
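
Once these Services exist, the selector -> Pod wiring can be verified directly; a Service whose selector matches nothing shows an empty ENDPOINTS column. A quick check:

```bash
# Each Service should list the IPs of its 2 backing Pods
kubectl get endpoints -n prod-apps

# The PORT(S) column shows the internal port and the assigned NodePort (80:3000X/TCP)
kubectl get services -n prod-apps
```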

4. Execute and Verify

Now we apply the files in order:

```bash
kubectl apply -f 00-namespaces/
kubectl apply -f 01-deployments/
kubectl apply -f 02-services/
kubectl get all -n prod-apps
```

5. Access the Apps

We need the Public IP of one of your worker nodes.

  1. Get the nodes:
    kubectl get nodes -o wide
  2. Copy the EXTERNAL-IP of any node.
  3. Open your browser and visit:
    • http://<NODE-EXTERNAL-IP>:30001 (<portfolio-app>)
    • http://<NODE-EXTERNAL-IP>:30002 (<restaurant-app>)
    • http://<NODE-EXTERNAL-IP>:30003 (Apache)

Note: Since you are on AWS EKS, by default, the AWS Security Group (Firewall) might block ports 30001-30003. If the browser spins and times out, we will need to add a rule to the Security Group to allow "Inbound Custom TCP 30000-30003".
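
If you prefer the CLI over the console, the same rule can be added with aws ec2. A sketch: the security-group ID is the worker nodes' group, and restricting the source to your own IP is safer than opening it to the world:

```bash
# Allow inbound TCP 30000-30003 to the worker-node security group from your IP only
aws ec2 authorize-security-group-ingress \
  --group-id <worker-node-sg-id> \
  --protocol tcp \
  --port 30000-30003 \
  --cidr <your-public-ip>/32
```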

2. Type: LoadBalancers

[Image of Kubernetes LoadBalancer diagram]

Instead of punching holes in your nodes and hoping the firewall lets you in, we tell Kubernetes: "Please ask AWS to build me a real Load Balancer for this specific app."

  • Pros: It is incredibly stable. AWS gives you a nice URL (e.g., my-loadbalancer.us-east-1.elb.amazonaws.com). It handles traffic distribution automatically.
  • Cons: Cost. AWS charges (roughly $15-$20/month) for each Load Balancer. If you have 100 microservices, that’s $2,000/month just for networking. This is why we eventually move to Phase 3 (Ingress), but Phase 2 is standard for simple setups.

Clean Up (Remove NodePorts)

We need to remove the old networking rules so they don't conflict. The Apps (Deployments) stay running.

```bash
# Delete the old NodePort services
kubectl delete -f 02-services/nodeport-services.yaml

# Verify they are gone (Deployments should still be there)
kubectl get all -n prod-apps
```

1. Create the LoadBalancer Manifest

File: 02-services/loadbalancer-services.yaml

  • type: Changed from NodePort to LoadBalancer.
  • nodePort: Removed. Kubernetes still assigns one internally, but AWS now handles the external entry point, so we no longer pin it ourselves.

```yaml
# Load Balancer for Portfolio
apiVersion: v1
kind: Service
metadata:
  name: <portfolio-app>-lb
  namespace: prod-apps
  annotations: # <--- NEW SECTION
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: <portfolio-app>
  ports:
    - port: 80
      targetPort: 80

---
# Load Balancer for Restaurant App
apiVersion: v1
kind: Service
metadata:
  name: <restaurant-app>-lb
  namespace: prod-apps
  annotations: # <--- NEW SECTION
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: <restaurant-app>
  ports:
    - port: 80
      targetPort: 80

---
# Load Balancer for Apache
apiVersion: v1
kind: Service
metadata:
  name: apache-lb
  namespace: prod-apps
  annotations: # <--- NEW SECTION
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: apache
  ports:
    - port: 80
      targetPort: 80
```

2. Apply and Provision

kubectl apply -f 02-services/loadbalancer-services.yaml

Unlike NodePort (which is instant), this step takes 2-5 minutes. Kubernetes calls the AWS API, and AWS starts spinning up infrastructure in the background.

Watch:

kubectl get services -n prod-apps --watch

  1. Initially, the EXTERNAL-IP will say <pending>.
  2. After a few minutes, it will change to a long DNS name (e.g., a8b9c...us-east-1.elb.amazonaws.com).

Once you see the DNS names:

  1. Copy the DNS address for <portfolio-app>-lb.
  2. Paste it into your browser.
  3. Do the same for <restaurant-app>-lb and apache-lb.

Notice: You do not need to specify a port (like :30001) anymore. The Load Balancer accepts traffic on standard Port 80.
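
To grab a Load Balancer's address without scanning the table output, jsonpath works well. A sketch using the portfolio Service name from above:

```bash
# Print just the DNS name AWS assigned to the Load Balancer
kubectl get service <portfolio-app>-lb -n prod-apps \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Then test it from the terminal (it may time out for a minute while targets register)
curl -I http://$(kubectl get service <portfolio-app>-lb -n prod-apps \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
```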

Important: The Load Balancer still forwards traffic to NodePorts on the worker nodes behind the scenes, so make sure the worker nodes' Security Group (AWS Console -> EC2 -> Security Groups) allows inbound traffic on Port range: 30000-32767 (the full range Kubernetes uses).

Trouble I had: "Internal" vs "Internet-facing"

Issue: The Load Balancer has its own Security Group (separate from the Worker Node one), yet it still wasn't reachable. Looking at the "Scheme" field in the AWS Console (EC2 -> Load Balancers), it said: Internal.

The Diagnosis: You created a Load Balancer, but AWS created it as a Private (Internal) Load Balancer.

  • Internal: Only accessible by other servers inside your AWS network (VPC).
  • Internet-facing: Accessible by you, me, and the rest of the world.

Why did this happen? Since you didn't specify what kind of Load Balancer you wanted, AWS EKS looked at your subnets (which are likely private by default in your setup) and decided: "Safe bet, let's make this private."

The Fix: That is why we added the annotations section in the YAML above.
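
You can also confirm the scheme from the CLI. With the in-tree controller used here, these Services are typically backed by Classic ELBs, so aws elb (not elbv2) lists them. A quick check:

```bash
# "internet-facing" is what we want; "internal" means only the VPC can reach it
aws elb describe-load-balancers \
  --query "LoadBalancerDescriptions[].{Name:LoadBalancerName,Scheme:Scheme,DNS:DNSName}" \
  --output table
```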

3. Type: Ingress

There are two main ways to do Ingress on AWS, and we will learn both.

1. Nginx Ingress Controller (The Universal Way)

  • How it works: You run "Nginx" pods inside your cluster. You have one Classic Load Balancer that sends all traffic to these Nginx pods. Nginx then looks at the URL (e.g., /portfolio) and routes it to the right app.
  • Pros: It works exactly the same on AWS, Azure, Google, or your laptop. It's the industry standard for learning.
  • Cons: You manage the Nginx configuration (via YAML).

2. AWS Load Balancer Controller (The AWS Native Way)

  • How it works: A controller watches your Ingress YAML and automatically provisions a real AWS Application Load Balancer (ALB).
  • Pros: Deep integration with AWS (WAF, SSL certificates, etc.).
  • Cons: Setup is complex (requires OIDC, IAM Policies).

Since the Ingress Controller sits at the edge, your apps don't need to talk to the internet directly anymore. They can hide inside the cluster.

We will create ClusterIP services. This is the default service type that is only accessible from inside Kubernetes.

File: 02-services/clusterip-services.yaml

```yaml
# Portfolio (Internal)
apiVersion: v1
kind: Service
metadata:
  name: <portfolio-app>-svc
  namespace: prod-apps
spec:
  type: ClusterIP # <--- Internal Only
  selector:
    app: <portfolio-app>
  ports:
    - port: 80
      targetPort: 80

---
# Restaurant App (Internal)
apiVersion: v1
kind: Service
metadata:
  name: <restaurant-app>-svc
  namespace: prod-apps
spec:
  type: ClusterIP
  selector:
    app: <restaurant-app>
  ports:
    - port: 80
      targetPort: 80

---
# Apache (Internal)
apiVersion: v1
kind: Service
metadata:
  name: apache-svc
  namespace: prod-apps
spec:
  type: ClusterIP
  selector:
    app: apache
  ports:
    - port: 80
      targetPort: 80
```

Apply:

kubectl apply -f 02-services/clusterip-services.yaml
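
To prove a ClusterIP Service is reachable from inside the cluster (but not from your laptop), you can fetch it from a throwaway Pod. A sketch; busybox's wget stands in for curl:

```bash
# Runs a temporary Pod in the same namespace and fetches the apache Service by name
kubectl run tmp-test --rm -it --restart=Never --image=busybox -n prod-apps \
  -- wget -qO- http://apache-svc
```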

1. Nginx Ingress Setup

1. Install the Nginx Ingress Controller

This is a piece of software we install into the cluster. It creates the One Ring (Load Balancer) to rule them all.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/aws/deploy.yaml

Verify it installed: It installs into its own namespace, ingress-nginx. In this setup, the ingress-nginx-controller Service (which requests the Load Balancer from AWS) came up as a private (internal) Load Balancer by default.

Action Required:

  1. Edit the service:
    kubectl edit service ingress-nginx-controller -n ingress-nginx
  2. Add the annotation under metadata: annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
  3. Save and exit (Standard Vim commands: Esc, :wq, Enter).
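
If you'd rather not open an editor, the same annotation can be applied non-interactively. A sketch using the annotation key from step 2:

```bash
# Adds the scheme annotation; --overwrite replaces any existing value.
# (If the Load Balancer was already created as internal, you may need to delete
# and re-create the Service for the new scheme to take effect.)
kubectl annotate service ingress-nginx-controller -n ingress-nginx \
  service.beta.kubernetes.io/aws-load-balancer-scheme=internet-facing --overwrite
```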

Verify:

kubectl get all -n ingress-nginx

2. The Ingress Resource (The Traffic Rules)

Now we tell Nginx how to route traffic. We will use Host-Based Routing.

  • portfolio.<your-domain>.com -> <portfolio-app>
  • raindrops.<your-domain>.com -> <restaurant-app>
  • www.<your-domain>.com -> Apache

3. Ingress Manifest

Create directory:

mkdir 03-ingress

File: 03-ingress/my-ingress.yaml

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  namespace: prod-apps
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    # 1. Portfolio Subdomain
    - host: portfolio.<your-domain>.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <portfolio-app>-svc
                port:
                  number: 80

    # 2. Restaurant App Subdomain
    - host: raindrops.<your-domain>.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <restaurant-app>-svc
                port:
                  number: 80

    # 3. Apache (WWW) Subdomain
    - host: www.<your-domain>.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-svc
                port:
                  number: 80
```

Apply:

kubectl apply -f 03-ingress/my-ingress.yaml
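
After applying, check that the controller picked the rules up. A quick check:

```bash
# ADDRESS should eventually show the ingress-nginx Load Balancer's DNS name
kubectl get ingress -n prod-apps

# Shows each host rule and which backend Service/port it maps to
kubectl describe ingress main-ingress -n prod-apps
```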

4. Verify It Works

Get the URL: Find the external address of the Ingress Controller (not your apps).

kubectl get service ingress-nginx-controller -n ingress-nginx

⚠️ Prerequisite: DNS Configuration. For this to work, your DNS provider (GoDaddy, AWS Route53, Cloudflare, etc.) must have records pointing these subdomains at the Ingress Controller's Load Balancer. Because AWS gives you a DNS name rather than a fixed IP, use a CNAME record (or a Route53 Alias record) rather than an A record:

  • portfolio.<your-domain>.com -> CNAME -> [Ingress Load Balancer DNS name]
  • raindrops.<your-domain>.com -> CNAME -> [Ingress Load Balancer DNS name]
  • www.<your-domain>.com -> CNAME -> [Ingress Load Balancer DNS name]
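
Before DNS is set up (or while it propagates), you can still test the routing by sending the Host header manually. A sketch; substitute the Ingress Load Balancer's DNS name from the command above:

```bash
# Nginx routes on the Host header, so this should return the Apache welcome page
curl -H "Host: www.<your-domain>.com" http://<ingress-nginx-lb-dns-name>/
```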

The "Price Menu" (What you are paying right now)

  • Load Balancers: You likely have 3 of them running. (~$0.025/hr each = $0.075/hr)
  • EC2 Nodes: You have 2 t3.small instances. (~$0.04/hr total)
  • EKS Control Plane: This is the hidden cost. Just having a cluster exist costs $0.10/hr (approx. $72/month).
  • NAT Gateway: If your cluster created a private subnet, this costs $0.045/hr.

Option 1: "I'm coming back in a few hours" (The Pause)

If you are just grabbing dinner and coming back, you want to delete the expensive Load Balancers but keep the cluster alive so you don't have to wait 20 minutes to rebuild it.

Step 1: Delete the Load Balancers (Crucial)

Run this immediately. This deletes the AWS ELBs (the $0.075/hr cost).

kubectl delete service --all -n <namespace>
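
Run it for every namespace that owns a Load Balancer (prod-apps, and ingress-nginx if you installed the Ingress Controller). To confirm AWS actually tore them down (billing only stops once they are gone), a quick check assuming Classic ELBs as above:

```bash
# Should return an empty list once Kubernetes has cleaned up the ELBs
aws elb describe-load-balancers --query "LoadBalancerDescriptions[].DNSName"
```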

Step 2: Scale Nodes to Zero (Optional)

To save the EC2 cost ($0.04/hr), you can tell the Auto Scaling Group to shrink to 0.

Warning: Do NOT stop them in the EC2 Console. If you click "Stop" in the console, the Auto Scaling Group sees a "dead" node and immediately launches a new one to replace it. You will fight a losing battle.

Use this command instead:

eksctl scale nodegroup --cluster <cluster-name> --name <node-group-name> --nodes 0 --nodes-min 0

(Replace <node-group-name> with the specific name if it's different, e.g., standard-workers).


Option 2: "I'm done for the day" (The Nuke)

If you are finished until tomorrow, delete everything.

Why? The EKS Control Plane costs $2.40/day even if you have 0 nodes running. It is cheaper to delete and recreate it tomorrow.

Step 1: The Master Delete Command

This one command deletes the Services, the EC2 nodes, the VPC, and the Control Plane.

eksctl delete cluster --name <cluster-name>

Step 2: Verify

Go to the AWS Console -> CloudFormation. Watch the stack eksctl-<cluster-name>-cluster status. Once it says DELETE_COMPLETE (or disappears), you are 100% safe from costs.
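
The same check can be done from the terminal. A quick check:

```bash
# The cluster should no longer be listed
aws eks list-clusters

# Once the stack is fully deleted, this returns a "does not exist" error
aws cloudformation describe-stacks --stack-name eksctl-<cluster-name>-cluster
```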


Summary Recommendation

Since you are learning: Use Option 2. It is good practice to tear down and rebuild clusters. It reinforces the "Infrastructure as Code" mindset—your cluster should be disposable, not a pet you have to keep alive.

Action: Run the delete command now. When you return, run your create command again, and you'll be back in business in ~15 minutes:

eksctl delete cluster --name <cluster-name>

To Restore (When you return):

eksctl create cluster --name <cluster-name> --node-type t3.small --nodes 2 --nodes-min 2 --nodes-max 3 --managed
