This project sets up an AWS EKS cluster to run a full-stack Spotify clone application. The repository includes:

- Terraform configurations for provisioning the EKS infrastructure (located in the `terraform` directory with its own README).
- Kubernetes manifests for deploying the backend and frontend services.
- AWS Load Balancer Controller installed within the cluster.
- Nginx configuration to proxy frontend requests to the backend service.
- Architecture
- Prerequisites
- Setup Instructions
- Nginx Configuration
- Environment Variables and Secrets
- Notes
- Architecture Diagrams

## Architecture
The application consists of a React frontend and a Node.js backend, both deployed on AWS EKS and exposed via a Network Load Balancer (NLB). Nginx is used in the frontend container to proxy API requests to the backend service.

## Prerequisites
- AWS Account with permissions to create EKS clusters and related resources.
- Terraform installed on your local machine.
- kubectl configured to interact with your EKS cluster.
- AWS Load Balancer Controller installed in the cluster.
- AWS IAM permissions for necessary services.
- AWS ACM Certificate ARN for SSL termination.
## Setup Instructions

### Step 1: Provision the EKS Cluster with Terraform

Navigate to the `terraform` directory and follow the instructions in its README to set up the AWS EKS cluster.

Link to the folder: https://github.com/barmoshe/Wix-devops-workshop-final-project/tree/deploy-fullstack-to-kuberntes/terraform

```
cd terraform
# Follow the instructions in terraform/README.md
```
### Step 2: Install the AWS Load Balancer Controller

Ensure that the AWS Load Balancer Controller is installed in your EKS cluster; the Terraform code for installing it is included in the repository.
### Step 3: Create the Namespace

Create the `spotify` namespace where all the resources will be deployed:

```
kubectl create namespace spotify
```
### Step 4: Create the Secrets

Before deploying the backend, create the necessary secrets in the `spotify` namespace:

```
kubectl create secret generic db-url-secret -n spotify --from-literal=DB_URL='your_database_url'
kubectl create secret generic open-ai-api-key -n spotify --from-literal=OPEN_AI_API_KEY='your_openai_api_key'
```
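
For reference, the same two secrets can be expressed declaratively. This is a sketch equivalent to the `kubectl create secret` commands above — the values are placeholders you must replace, exactly as in the imperative commands:

```yaml
# Declarative equivalent of the two `kubectl create secret` commands.
# Replace the placeholder values with your real connection string and API key.
apiVersion: v1
kind: Secret
metadata:
  name: db-url-secret
  namespace: spotify
type: Opaque
stringData:
  DB_URL: your_database_url
---
apiVersion: v1
kind: Secret
metadata:
  name: open-ai-api-key
  namespace: spotify
type: Opaque
stringData:
  OPEN_AI_API_KEY: your_openai_api_key
```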
### Step 5: Deploy the Backend

Apply the Kubernetes manifests for the backend deployment and service:

```
kubectl apply -f spotify-yamls/backend-deployment.yaml
kubectl apply -f spotify-yamls/backend-service.yaml
```
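
The manifests in `spotify-yamls/` are the source of truth; as a rough sketch, `backend-service.yaml` likely resembles a ClusterIP Service named `backend-service` (the selector label and ports here are assumptions, not the repository's exact values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: spotify
spec:
  type: ClusterIP           # internal only; Nginx in the frontend proxies to it
  selector:
    app: backend            # assumed pod label
  ports:
    - port: 80              # lets Nginx reach it as http://backend-service
      targetPort: 3000      # assumed Node.js container port
```

Exposing port 80 matters here because the frontend's `proxy_pass http://backend-service;` directive uses the default HTTP port.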
### Step 6: Deploy the Frontend

Apply the Kubernetes manifests for the frontend deployment and service:

```
kubectl apply -f spotify-yamls/frontend-deployment.yaml
kubectl apply -f spotify-yamls/frontend-service.yaml
```

Note: Replace `"your_acm_certificate_arn"` in `frontend-service.yaml` with your own AWS ACM Certificate ARN.
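
To show where that ARN goes, here is a hedged sketch of what `frontend-service.yaml` may look like. The annotations are the standard AWS Load Balancer Controller Service annotations; the selector label and ports are assumptions, and the repository's actual file may differ:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vite-react-service
  namespace: spotify
  annotations:
    # Provision an NLB via the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    # TLS termination at the NLB using your ACM certificate
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "your_acm_certificate_arn"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: frontend           # assumed pod label
  ports:
    - name: https
      port: 443
      targetPort: 80        # assumed Nginx container port
```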
### Step 7: Access the Application

Retrieve the DNS name of the Load Balancer:

```
kubectl get svc -n spotify
```

Look for the `EXTERNAL-IP` associated with `vite-react-service`. Open this address in your browser to access the Spotify clone application.
## Nginx Configuration

The frontend application uses Nginx to proxy API requests to the backend service. The Nginx configuration is as follows:

```
location /api {
    proxy_pass http://backend-service;
}
```

This configuration ensures that any requests to `/api` on the frontend are forwarded to the backend service.
The Dockerfiles and full Nginx configuration can be found in the repository.
Note: This is a temporary routing workaround until an Ingress Controller is installed to handle routing inside the cluster.
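
For context, the `location /api` block above typically sits inside a server block along these lines. This is a hypothetical fragment, not the repository's exact file — the listen port, document root, and fallback rule are assumptions:

```nginx
# Hypothetical nginx.conf fragment for the frontend container.
server {
    listen 80;

    # Serve the built React app
    root /usr/share/nginx/html;
    index index.html;

    # Client-side routing: fall back to index.html for unknown paths
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Proxy API calls to the backend ClusterIP Service
    location /api {
        proxy_pass http://backend-service;
    }
}
```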
## Environment Variables and Secrets

The backend deployment uses the following environment variables:

- `NODE_ENV`: Set to `production`.
- `DB_URL`: Database connection string, retrieved from a Kubernetes secret named `db-url-secret`.
- `OPEN_AI_API_KEY`: API key for OpenAI, retrieved from a Kubernetes secret named `open-ai-api-key`.

As shown in Step 4, make sure to create the necessary secrets before deploying the backend.
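
To illustrate how these variables are wired up, here is a hedged sketch of the container spec inside `backend-deployment.yaml`. The image name and container port are assumptions; the secret names and keys match the commands in Step 4:

```yaml
# Fragment of the Deployment's pod template showing the env wiring.
containers:
  - name: backend
    image: your-registry/spotify-backend:latest   # assumed image name
    ports:
      - containerPort: 3000                       # assumed port
    env:
      - name: NODE_ENV
        value: "production"
      - name: DB_URL
        valueFrom:
          secretKeyRef:
            name: db-url-secret
            key: DB_URL
      - name: OPEN_AI_API_KEY
        valueFrom:
          secretKeyRef:
            name: open-ai-api-key
            key: OPEN_AI_API_KEY
```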
## Notes

- The services are configured to use an AWS Network Load Balancer (NLB) and are internet-facing.
- SSL termination is handled by specifying an AWS ACM certificate ARN in the service annotations. Update the ARN in `frontend-service.yaml` with your own certificate ARN.
- Ensure that the `spotify` namespace exists in your cluster.
The next step is to add an Nginx Ingress Controller within the cluster to improve traffic routing. Currently, the frontend Nginx image includes a reverse proxy configuration to route `/api` requests to the backend service. While functional, this approach is not ideal, as it adds extra complexity within the frontend image itself.

With an Ingress Controller, routing can be managed directly in the cluster, allowing for more maintainable and centralized traffic control:

- Requests to the root path (`/`) will be directed to the frontend service.
- Requests to `/api` will be routed to the backend service.
This setup will eliminate the need for custom Nginx configurations in the frontend image. The AWS Network Load Balancer (NLB) will direct external traffic to the Ingress Controller, which will handle routing to the appropriate services within the cluster.
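
That routing could be expressed with an Ingress resource along these lines — a sketch assuming the NGINX Ingress Controller with the `nginx` ingress class, and assuming both Services expose port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spotify-ingress
  namespace: spotify
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          # API traffic goes to the backend
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 80      # assumed service port
          # Everything else goes to the frontend
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vite-react-service
                port:
                  number: 80      # assumed service port
```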
## Architecture Diagrams

```mermaid
graph TD
    subgraph Client
        User
    end
    subgraph AWS
        NLB["AWS Network Load Balancer"]
    end
    subgraph Nginx
        IngressController["Nginx Ingress Controller"]
        subgraph Frontend
            FrontendService["Frontend Service<br/>(vite-react-service)"]
            FrontendPods["Frontend Pods<br/>Nginx:<br/> serves React app <br/> proxies /api requests"]
        end
        subgraph Backend
            BackendService["Backend Service<br/>(backend-service)"]
            BackendPods["Backend Pods<br/>(Node.js)"]
        end
    end
    subgraph External_Services
        Database["Database"]
        External_APIs["External_APIs"]
    end
    User --> NLB
    NLB --> IngressController
    IngressController -->|"Path: /"| FrontendService
    IngressController -->|"Path: /api"| BackendService
    FrontendService --> FrontendPods
    BackendService --> BackendPods
    FrontendPods -->|"Serve React App"| User
    BackendPods -->|"API Responses"| IngressController
    BackendPods -->|"Database Connection"| Database
    BackendPods -->|"External API Calls"| External_APIs
```