App Deployment on AWS EKS with Terraform & CI/CD #13

HARSH-Sehrawat wants to merge 44 commits into `iemafzalhassan:main`
Conversation
- /infra/ecr/backend.tf
- /infra/ecr/main.tf
- /infra/ecr/outputs.tf
- /infra/eks/main.tf
- /infra/eks/outputs.tf
- /.gitlab-ci.infra.yml
- /k8s/backend-deployment.yaml
- /k8s/frontend-deployment.yaml
- /k8s/mongodb-configmap.yaml
- /k8s/mongodb-secret.yaml
- /k8s/mongodb-deployment.yaml
- /k8s/frontend-ingress.yaml
- /k8s/mongo-pv.yaml
- /k8s/mongo-pvc.yaml
- /.gitlab-ci.deploy.yml
Walkthrough

This update transitions the project from a local Docker/Kubernetes setup to a cloud-native AWS EKS deployment, managed via Terraform and automated with GitLab CI/CD. It introduces comprehensive Terraform configurations for AWS infrastructure, refactors Kubernetes manifests for cloud compatibility, adds GitLab pipeline definitions, and removes legacy Jenkins and Kind cluster files. Documentation and .gitignore are updated to reflect the new workflow.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant GitLab as GitLab CI/CD
    participant AWS as AWS (S3, DynamoDB, EKS, ECR)
    participant K8s as Kubernetes Cluster
    Dev->>GitLab: Push code/trigger pipeline
    GitLab->>AWS: Provision infra via Terraform (infra jobs)
    GitLab->>AWS: Build & push Docker images to ECR
    GitLab->>K8s: Update kubeconfig, apply manifests
    K8s->>AWS: Pull images from ECR
    K8s->>Dev: Application running on EKS (via ALB Ingress)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 21
🔭 Outside diff range comments (4)
.gitlab-ci.yml (1)
1-4: Convert CRLF line endings to LF to satisfy YAML linters
yamllint flags “wrong new line character” on line 1. Ensure the file is saved with Unix LF endings; Git can enforce this with `.gitattributes`:

```diff
+*.yml text eol=lf
```

Also validate the pipeline after the conversion; GitLab sometimes mis-parses files that silently contain CRLF.
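As a one-off cleanup, the conversion can also be done from the shell before committing (a sketch; the file name is illustrative, and `git add --renormalize` assumes the `.gitattributes` rule above is already in place):

```shell
# Simulate a file saved with Windows CRLF line endings (illustrative name).
printf 'stages:\r\n  - build\r\n' > sample.yml

# Strip the trailing CR from every line, converting CRLF to LF in place.
sed -i 's/\r$//' sample.yml

# After committing the .gitattributes rule, re-apply it to all tracked files:
# git add --renormalize . && git commit -m "Normalize line endings"
```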
k8s/mongodb-deployment.yaml (1)
39-43: Volume declared but not mounted – data will be ephemeral
`mongo-pvc` is defined in `volumes`, yet `/data/db` is no longer mounted after removing the `volumeMounts` block. MongoDB will write to the container filesystem and lose data on restart.

```diff
       - name: mongodb
         image: mongo:6.0
         ports:
         - containerPort: 27017
+        volumeMounts:
+        - name: mongo-data
+          mountPath: /data/db
 ...
       volumes:
       - name: mongo-data
         persistentVolumeClaim:
           claimName: mongo-pvc
```

k8s/frontend-deployment.yaml (1)
40-55: Indentation errors break YAML – `volumeMounts` and `volumes` are mis-aligned

`- name:` lines must be indented two spaces deeper than their parent keys. The current layout renders invalid YAML and `kubectl apply` will fail.

```diff
       volumeMounts:
-      - name: nginx-config
-      mountPath: /etc/nginx/conf.d/default.conf
-      subPath: nginx.conf
+      - name: nginx-config
+        mountPath: /etc/nginx/conf.d/default.conf
+        subPath: nginx.conf
 ...
       volumes:
-      - name: nginx-config
-      configMap:
-      name: nginx-config
+      - name: nginx-config
+        configMap:
+          name: nginx-config
```

.gitlab-ci.infra.yml (1)
25-35: Pipeline file not yet included

README says `.gitlab-ci.yml` includes this file, but that inclusion is not shown in the PR. Without it, the infra jobs will never run. Please ensure the root pipeline adds:

```yaml
include:
  - local: ".gitlab-ci.infra.yml"
```
♻️ Duplicate comments (2)
k8s/frontend-hpa.yaml (2)
1-20: Same CRLF newline issue as backend HPA

Please normalise to LF to keep linters and `kubectl diff` happy.
7-19: Mirror memory / custom metric advice from backend HPA

Autoscaling logic should be symmetrical across tiers unless there is a strong reason otherwise.
🧹 Nitpick comments (20)
k8s/metrics-server.yaml (2)
19-19: Switch to the new Metrics-Server registry domain
`k8s.gcr.io` is deprecated. The project was migrated to `registry.k8s.io`, which also avoids pull-rate throttling.

```diff
-        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.3
+        image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
```
1-1: Convert CRLF to LF to satisfy YAML-lint

The file is committed with Windows line endings, producing `new-lines` errors. Re-save with Unix line endings to keep CI green.

k8s/mongo-pvc.yaml (1)
6-13: Use a cloud-native storage class instead of binding to a hostPath PV

Hard-binding the claim to `mongo-pv` (likely a hostPath PV) breaks portability and cannot be scheduled on EKS worker node replacements. Prefer dynamic provisioning with the EBS CSI driver:

```yaml
spec:
  storageClassName: gp2   # or gp3 / ebs-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Then remove the static PV entirely.
infra/eks/versions.tf (1)
4-9: Consider pinning the provider minor version to avoid accidental upgrades

`~> 5.0` picks any 5.x, which may introduce breaking changes (the AWS provider is aggressive). If stability is a concern, lock to the latest known-good minor, e.g. `~> 5.47`.

k8s/mongodb-configmap.yaml (1)
1-7: Normalize line endings and ensure trailing newline
`yamllint` reports CRLF usage and a missing final newline. Convert to LF and add a newline at EOF to keep linters green.

No functional issues with the key/value pair itself.
k8s/backend-hpa.yaml (2)
1-20: Convert Windows CRLF to LF to satisfy kubelint / CI linters
`yamllint` flags line 1 for “wrong new line character”. While the Kubernetes API happily accepts CRLF, most CI pipelines (and git diff tooling) expect LF. Re-save the file with Unix line endings to silence the linter and keep consistency.
7-19: Consider adding memory or custom metrics for more balanced autoscalingScaling solely on CPU may under- or over-provision the backend if it is I/O-bound. At minimum, include memory utilisation:
```yaml
metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 75
```

or wire in custom metrics if request rate is the bottleneck.
k8s/mongodb-secret.yaml (1)
1-1: Trailing newline missing & CRLF line endings

Add a final LF and convert to Unix line endings to satisfy `yamllint` and keep consistency.

k8s/mongo-pv.yaml (1)
1-1: Use LF line endings to satisfy kubectl & linters
YAMLlint flags CR-LF (`\r\n`) at line 1. Convert the file to Unix LF (`\n`) to avoid apply failures in *nix CI runners.

k8s/frontend-ingress.yaml (2)
1-1: Normalize line endings

Same CR-LF issue as above; convert to LF for consistent diff and tooling compatibility.
13-22: Consider adding a `host` rule to avoid catching every domain

With no `host` specified the ALB rule forwards all requests it receives, which can lead to shadowing when multiple ingresses share the same ALB. Define an explicit host (e.g., `chat.example.com`) to isolate traffic.

infra/eks/variables.tf (1)
1-11: Document & type your variables for maintainability

Add `type`, `description`, and (optionally) `validation` blocks so that future contributors know the intent and Terraform can catch bad values earlier.

```diff
-variable "aws_region" {
-  default = "us-east-2"
+variable "aws_region" {
+  type        = string
+  description = "AWS region where all resources will be created"
+  default     = "us-east-2"
 }
```

Replicate for the other variables.
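A `validation` block can additionally reject malformed values before `plan` even runs (a sketch; the regex is illustrative and only covers the common region naming pattern):

```hcl
variable "aws_region" {
  type        = string
  description = "AWS region where all resources will be created"
  default     = "us-east-2"

  validation {
    # Matches region names like us-east-2, eu-west-1, ap-southeast-3.
    condition     = can(regex("^[a-z]{2}-[a-z]+-\\d$", var.aws_region))
    error_message = "aws_region must look like a valid AWS region, e.g. us-east-2."
  }
}
```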
.gitignore (1)
26-39: Broaden the `.env` ignore pattern

You ignore only the root-level `.env`; tools often generate `.env.local`, `.env.prod`, etc. Consider:

```diff
-.env
+.env*
```

to ensure secrets never slip into the repo.
k8s/frontend-configmap.yaml (1)
7-19: Use a block-scalar for `nginx.conf` to regain readability and avoid escape-sequence pitfalls

Storing the entire NGINX config as a single quoted string with `\n` escapes makes the manifest hard to review, increases the risk of accidental escaping/indent errors, and drives noisy diffs for every whitespace change. YAML already supports multi-line literals – use them.

```diff
 data:
-  nginx.conf: "server {\n  listen 80; ... }\n"
+  nginx.conf: |
+    server {
+      listen 80;
+      server_name localhost;
+
+      root /usr/share/nginx/html;
+      index index.html;
+
+      location / {
+        try_files $uri $uri/ /index.html;
+      }
+
+      location /api/ {
+        proxy_pass http://backend.chatapp.svc.cluster.local:5001/api/;
+        proxy_http_version 1.1;
+        proxy_set_header Upgrade $http_upgrade;
+        proxy_set_header Connection "upgrade";
+        proxy_set_header Host $host;
+        proxy_cache_bypass $http_upgrade;
+      }
+
+      location /socket.io/ {
+        proxy_pass http://backend.chatapp.svc.cluster.local:5001/socket.io/;
+        proxy_http_version 1.1;
+        proxy_set_header Upgrade $http_upgrade;
+        proxy_set_header Connection "Upgrade";
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_cache_bypass $http_upgrade;
+      }
+    }
```

k8s/README.md (1)
142-145: Clarify dependency on `kubens`

The step assumes the `kubens` plugin is installed. Add a note or install instruction to avoid confusion for users on fresh environments.

infra/eks/outputs.tf (1)
1-19: Describe outputs & mark sensitive values

All five outputs are missing a `description` argument, which makes `terraform output -json` harder to read and violates the HashiCorp style guide. Additionally, `cluster_endpoint` and possibly `node_group_role_arn` can be considered sensitive; mark them with `sensitive = true` to avoid leaking them in logs.

```diff
 output "cluster_endpoint" {
+  description = "Public endpoint URL of the EKS cluster"
+  sensitive   = true
   value = module.eks.cluster_endpoint
 }
```

Repeat for the other outputs as appropriate.

infra/backend/variables.tf (1)
infra/backend/variables.tf (1)
1-11: Make the backend module reusable – avoid hard-coded defaultsProviding environment-specific defaults (“chatapp-terraform-state-harshsehrawat-dev”, etc.) ties the module to a single project and risks accidental state sharing. Prefer mandatory variables (no default) or at least add
description&typeplus a simple, generic default.-variable "s3_bucket_name" { - default = "chatapp-terraform-state-harshsehrawat-dev" +variable "s3_bucket_name" { + description = "Unique S3 bucket name for Terraform state" + type = string }Do the same for
dynamodb_table_name, and add a description foraws_region.README.md (2)
1-3: Hyphenate compound adjective

“Full-Stack Chat Application” needs a hyphen to comply with standard style guides.

```diff
-# Full Stack Chat Application on AWS EKS using Terraform, GitLab CI/CD, and ALB
+# Full-Stack Chat Application on AWS EKS using Terraform, GitLab CI/CD, and ALB
```
23-35: Specify language for fenced code block

Markdown-lint rule MD040: add a language identifier to enable syntax highlighting.

````diff
-```
+```text
 .
 ├── infra/
````

infra/eks/main.tf (1)
39-41: Hard-coding AZs reduces portability

Embedding `"us-east-2a", "us-east-2b"` couples the module to a single region/account. Prefer `data.aws_availability_zones` or pass AZs via a variable to keep the Terraform reusable.
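A minimal sketch of the data-source approach (the local name and the number of AZs taken are illustrative; wiring into the VPC/EKS modules is left out):

```hcl
data "aws_availability_zones" "available" {
  state = "available"
}

locals {
  # Take the first two AZs of whatever region the provider targets,
  # instead of hard-coding "us-east-2a"/"us-east-2b".
  azs = slice(data.aws_availability_zones.available.names, 0, 2)
}
```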
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (33)
- .gitignore (2 hunks)
- .gitlab-ci.deploy.yml (1 hunks)
- .gitlab-ci.infra.yml (1 hunks)
- .gitlab-ci.yml (1 hunks)
- Jenkinsfile (0 hunks)
- README.md (1 hunks)
- infra/backend/main.tf (1 hunks)
- infra/backend/outputs.tf (1 hunks)
- infra/backend/variables.tf (1 hunks)
- infra/eks/backend.tf (1 hunks)
- infra/eks/main.tf (1 hunks)
- infra/eks/outputs.tf (1 hunks)
- infra/eks/variables.tf (1 hunks)
- infra/eks/versions.tf (1 hunks)
- k8s/README.md (6 hunks)
- k8s/backend-deployment.yaml (2 hunks)
- k8s/backend-hpa.yaml (1 hunks)
- k8s/backend-secrets.yaml (1 hunks)
- k8s/backend-service.yaml (1 hunks)
- k8s/frontend-configmap.yaml (1 hunks)
- k8s/frontend-deployment.yaml (3 hunks)
- k8s/frontend-hpa.yaml (1 hunks)
- k8s/frontend-ingress.yaml (1 hunks)
- k8s/frontend-service.yaml (1 hunks)
- k8s/kind-config.yaml (0 hunks)
- k8s/metrics-server.yaml (1 hunks)
- k8s/mongo-pv.yaml (1 hunks)
- k8s/mongo-pvc.yaml (1 hunks)
- k8s/mongodb-configmap.yaml (1 hunks)
- k8s/mongodb-deployment.yaml (2 hunks)
- k8s/mongodb-secret.yaml (1 hunks)
- k8s/mongodb-service.yaml (1 hunks)
- k8s/namespace.yaml (1 hunks)
💤 Files with no reviewable changes (2)
- k8s/kind-config.yaml
- Jenkinsfile
🧰 Additional context used
🪛 Gitleaks (8.27.2)
k8s/backend-secrets.yaml
8-8: Uncovered a JSON Web Token, which may lead to unauthorized access to web applications and sensitive user data.
(jwt)
🪛 YAMLlint (1.37.1)
.gitlab-ci.yml
[error] 1-1: wrong new line character: expected \n
(new-lines)
k8s/frontend-hpa.yaml
[error] 1-1: wrong new line character: expected \n
(new-lines)
k8s/frontend-ingress.yaml
[error] 1-1: wrong new line character: expected \n
(new-lines)
k8s/mongodb-configmap.yaml
[error] 1-1: wrong new line character: expected \n
(new-lines)
[error] 7-7: no new line character at the end of file
(new-line-at-end-of-file)
k8s/mongodb-secret.yaml
[error] 1-1: wrong new line character: expected \n
(new-lines)
[error] 9-9: no new line character at the end of file
(new-line-at-end-of-file)
k8s/metrics-server.yaml
[error] 1-1: wrong new line character: expected \n
(new-lines)
k8s/backend-hpa.yaml
[error] 1-1: wrong new line character: expected \n
(new-lines)
.gitlab-ci.deploy.yml
[error] 1-1: wrong new line character: expected \n
(new-lines)
k8s/mongo-pv.yaml
[error] 1-1: wrong new line character: expected \n
(new-lines)
.gitlab-ci.infra.yml
[error] 1-1: wrong new line character: expected \n
(new-lines)
[warning] 10-10: too many blank lines (3 > 2)
(empty-lines)
🪛 Checkov (3.2.334)
k8s/backend-deployment.yaml
[MEDIUM] 1-40: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 1-40: Minimize the admission of root containers
(CKV_K8S_23)
[MEDIUM] 24-25: Basic Auth Credentials
(CKV_SECRET_4)
k8s/frontend-deployment.yaml
[MEDIUM] 1-54: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 1-54: Minimize the admission of root containers
(CKV_K8S_23)
k8s/metrics-server.yaml
[MEDIUM] 1-44: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 1-44: Minimize the admission of root containers
(CKV_K8S_23)
k8s/mongodb-deployment.yaml
[MEDIUM] 1-42: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 1-42: Minimize the admission of root containers
(CKV_K8S_23)
infra/backend/main.tf
[MEDIUM] 28-43: Ensure S3 lifecycle configuration sets period for aborting failed uploads
(CKV_AWS_300)
[HIGH] 45-59: Ensure DynamoDB point in time recovery (backup) is enabled
(CKV_AWS_28)
infra/eks/main.tf
[MEDIUM] 1-30: Ensure Terraform module sources use a commit hash
(CKV_TF_1)
[MEDIUM] 32-50: Ensure Terraform module sources use a commit hash
(CKV_TF_1)
[HIGH] 52-54: Ensure ECR image scanning on push is enabled
(CKV_AWS_163)
[HIGH] 56-58: Ensure ECR image scanning on push is enabled
(CKV_AWS_163)
🪛 LanguageTool
README.md
[uncategorized] ~1-~1: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: # Full Stack Chat Application on AWS EKS using Terra...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
🪛 markdownlint-cli2 (0.17.2)
README.md
23-23: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (7)
k8s/namespace.yaml (1)
1-4: Namespace rename looks good—ensure manifests are applied in order

All resources must reference `chatapp`, and the namespace object must be created before other manifests. CI/CD pipeline step ordering should guarantee this.

infra/eks/versions.tf (1)
12-14: Verify that `var.aws_region` is defined and passed in every Terraform workflow

The provider relies on `var.aws_region`, but this file doesn’t declare it. Double-check that `variables.tf` (or the invoking CI job) defines it with a sensible default or sets it via `TF_VAR_aws_region`. If missing, Terraform will fail during `init`/`plan`.

k8s/backend-service.yaml (1)
9-13: Indentation error identical to MongoDB service

List items under `ports:` need two extra spaces.

```diff
   ports:
-  - protocol: TCP
-  port: 5001
-  targetPort: 5001
-  nodePort: 30501
+  - name: http
+    protocol: TCP
+    port: 5001
+    targetPort: 5001
+    nodePort: 30501
```

Bad indentation will cause `kubectl apply` to reject or silently ignore the ports definition.

Likely an incorrect or invalid review comment.
infra/backend/outputs.tf (1)
1-7: Outputs look good

Straightforward and correctly reference the resource attributes.
.gitlab-ci.infra.yml (1)
17-23: The tfplan artifact may expose secrets
`terraform plan` can contain rendered secrets (e.g., random passwords). The artifact is stored for an hour and downloadable by anyone with project access. Consider:

- adding `when: on_failure` so the plan is kept only for debugging, or
- masking it with `terraform show -json | jq` filtering, or
- securing the artifact with `public: false`.

.gitlab-ci.deploy.yml (1)
40-67: Branch filter & AWS region variables – verify correctness
- The `only: DevOps` filter is case-sensitive. Confirm the branch name exactly matches (`DevOps` vs `devops`).
- `$AWS_REGION` is used in push jobs but hard-coded as `us-east-2` in the deploy job. Keep them consistent:

```diff
   script:
-    - aws eks update-kubeconfig --region us-east-2 --name chatapp-cluster
+    - aws eks update-kubeconfig --region $AWS_REGION --name chatapp-cluster
```

infra/eks/main.tf (1)
6-6: Verify region support for Kubernetes 1.29

EKS 1.29 is still rolling out region-by-region. Double-check `us-east-2` availability before merging to avoid a failed apply. Would you run

```shell
aws eks describe-addon-versions --kubernetes-version 1.29 --region us-east-2
```

and confirm it returns data?
```yaml
stages:
  - build
  - push
  - deploy
```
Normalize line endings to LF
Same CRLF issue as .gitlab-ci.infra.yml; re-commit with Unix line endings to silence YAML lint errors and avoid cross-platform diffs.
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 1-1: wrong new line character: expected \n
(new-lines)
🤖 Prompt for AI Agents
In .gitlab-ci.deploy.yml at lines 1 to 4, the file uses CRLF line endings
causing YAML lint errors and cross-platform diffs. Convert all line endings in
this file to Unix-style LF line endings and recommit the file to resolve these
issues.
```yaml
stages:
  - plan
  - apply
```
Convert CRLF to LF – current line endings break linters
YAMLlint flags “wrong new line character” because the file is committed with Windows CRLF. While GitLab CI is tolerant, many tools and reviewers aren’t. Re-commit with Unix LF to keep the repo consistent.
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 1-1: wrong new line character: expected \n
(new-lines)
🤖 Prompt for AI Agents
In .gitlab-ci.infra.yml at lines 1 to 4, the file uses Windows-style CRLF line
endings which cause YAMLlint to flag errors. Convert all line endings in this
file from CRLF to Unix-style LF and recommit to ensure compatibility with
linters and maintain consistency in the repository.
```hcl
resource "aws_s3_bucket" "terraform_state" {
  bucket = var.s3_bucket_name

  tags = {
    Name        = "Terraform State Bucket"
    Environment = "dev"
  }
}
```
🛠️ Refactor suggestion
Block public access on the state bucket
The bucket is intended for sensitive Terraform state yet no aws_s3_bucket_public_access_block is defined. Anyone who later enables ACLs or policies could accidentally expose state files.
```diff
 resource "aws_s3_bucket" "terraform_state" {
   bucket = var.s3_bucket_name
+
+  # Block every form of public access
+  lifecycle {
+    prevent_destroy = true
+  }
 }
+
+resource "aws_s3_bucket_public_access_block" "state_block" {
+  bucket                  = aws_s3_bucket.terraform_state.id
+  block_public_acls       = true
+  block_public_policy     = true
+  ignore_public_acls      = true
+  restrict_public_buckets = true
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```hcl
resource "aws_s3_bucket" "terraform_state" {
  bucket = var.s3_bucket_name

  # Block every form of public access
  lifecycle {
    prevent_destroy = true
  }

  tags = {
    Name        = "Terraform State Bucket"
    Environment = "dev"
  }
}

resource "aws_s3_bucket_public_access_block" "state_block" {
  bucket                  = aws_s3_bucket.terraform_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```
🤖 Prompt for AI Agents
In infra/backend/main.tf around lines 1 to 8, the aws_s3_bucket resource for the
Terraform state bucket lacks a public access block, which risks accidental
exposure of sensitive state files. Add an aws_s3_bucket_public_access_block
resource referencing this bucket to explicitly block all public access settings,
including blocking public ACLs and policies, to ensure the bucket remains
private and secure.
```hcl
resource "aws_s3_bucket_lifecycle_configuration" "prevent_destroy" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    id     = "retain-state"
    status = "Enabled"

    filter {
      prefix = ""
    }

    noncurrent_version_expiration {
      noncurrent_days = 3650 # Retain old versions for 10 years
    }
  }
}
```
🛠️ Refactor suggestion
Abort incomplete multipart uploads to avoid unbounded storage costs
Checkov CKV_AWS_300 flags the absence of an abort_incomplete_multipart_upload rule.
```diff
     noncurrent_version_expiration {
       noncurrent_days = 3650 # Retain old versions for 10 years
     }
+
+    abort_incomplete_multipart_upload {
+      days_after_initiation = 7
+    }
```

📝 Committable suggestion
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| resource "aws_s3_bucket_lifecycle_configuration" "prevent_destroy" { | |
| bucket = aws_s3_bucket.terraform_state.id | |
| rule { | |
| id = "retain-state" | |
| status = "Enabled" | |
| filter { | |
| prefix = "" | |
| } | |
| noncurrent_version_expiration { | |
| noncurrent_days = 3650 # Retain old versions for 10 years | |
| } | |
| } | |
| } | |
| resource "aws_s3_bucket_lifecycle_configuration" "prevent_destroy" { | |
| bucket = aws_s3_bucket.terraform_state.id | |
| rule { | |
| id = "retain-state" | |
| status = "Enabled" | |
| filter { | |
| prefix = "" | |
| } | |
| noncurrent_version_expiration { | |
| noncurrent_days = 3650 # Retain old versions for 10 years | |
| } | |
| abort_incomplete_multipart_upload { | |
| days_after_initiation = 7 | |
| } | |
| } | |
| } |
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 28-43: Ensure S3 lifecycle configuration sets period for aborting failed uploads
(CKV_AWS_300)
🤖 Prompt for AI Agents
In infra/backend/main.tf between lines 28 and 43, the
aws_s3_bucket_lifecycle_configuration resource lacks an
abort_incomplete_multipart_upload rule, which is needed to prevent unbounded
storage costs from incomplete multipart uploads. Add an
abort_incomplete_multipart_upload block inside the rule with a suitable
days_after_initiation value (e.g., 7) to automatically abort incomplete
multipart uploads after that many days.
```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = var.dynamodb_table_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name        = "Terraform Lock Table"
    Environment = "dev"
  }
}
```
Enable point-in-time recovery on the DynamoDB lock table
Losing this table means Terraform cannot guarantee state-lock integrity.
Add:
```diff
 resource "aws_dynamodb_table" "terraform_locks" {
   name         = var.dynamodb_table_name
   billing_mode = "PAY_PER_REQUEST"
   hash_key     = "LockID"
+
+  point_in_time_recovery {
+    enabled = true
+  }
```

📝 Committable suggestion
```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = var.dynamodb_table_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  point_in_time_recovery {
    enabled = true
  }

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name        = "Terraform Lock Table"
    Environment = "dev"
  }
}
```
🧰 Tools
🪛 Checkov (3.2.334)
[HIGH] 45-59: Ensure DynamoDB point in time recovery (backup) is enabled
(CKV_AWS_28)
🤖 Prompt for AI Agents
In infra/backend/main.tf around lines 45 to 59, the DynamoDB table resource for
terraform_locks lacks point-in-time recovery enabled, which is critical for
state-lock integrity. Add the point_in_time_recovery block with enabled set to
true inside the aws_dynamodb_table resource to enable this feature.
```yaml
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/mongo"
  persistentVolumeReclaimPolicy: Retain
```
hostPath will not persist on AWS EKS – switch to an EBS/EFS StorageClass
hostPath ties the volume to a single node’s filesystem, which is ephemeral in an autoscaling EKS cluster. Pods rescheduled to another node will lose data.
Use a dynamic provisioned gp3 (EBS CSI driver) or an EFS CSI-backed StorageClass instead:
```diff
-  hostPath:
-    path: "/mnt/data/mongo"
+  storageClassName: gp3
```

…and remove the manual PV; let the PVC create it automatically.
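Under that PV-less approach, the claim alone is enough (a sketch; `gp3` assumes an EBS-CSI-backed StorageClass of that name exists in the cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
  namespace: chatapp
spec:
  storageClassName: gp3       # dynamically provisioned EBS volume (assumed class name)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```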
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In k8s/mongo-pv.yaml lines 5 to 12, the use of hostPath for persistent volume
will not work reliably on AWS EKS because it ties storage to a single node's
ephemeral filesystem. To fix this, remove the manual PersistentVolume definition
using hostPath and instead configure a PersistentVolumeClaim that uses a dynamic
provisioned StorageClass such as gp3 (EBS CSI driver) or an EFS CSI-backed
StorageClass. This change ensures data persistence across node rescheduling in
the autoscaling EKS environment.
```yaml
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: mongo-pvc
    namespace: chatapp
```
🛠️ Refactor suggestion
Hard-binding with claimRef defeats PVC scheduling flexibility
Embedding a claimRef statically binds the PV to one PVC/namespace and blocks the scheduler from selecting a more appropriate volume. Omit this stanza and rely on label selectors or dynamic provisioning.
🤖 Prompt for AI Agents
In k8s/mongo-pv.yaml around lines 12 to 15, remove the entire claimRef section
to avoid hard-binding the PersistentVolume to a specific PersistentVolumeClaim
and namespace. This will allow the scheduler to flexibly bind the PV to any
suitable PVC based on label selectors or dynamic provisioning, improving
scheduling flexibility.
```yaml
          envFrom:
          env:
```
There was a problem hiding this comment.
envFrom: key is empty → manifest will fail validation
`envFrom` expects an array; leaving it without list items results in:

```text
error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): missing required field "configMapRef"...
```

Remove the key or populate it.

```diff
-          envFrom:
```

📝 Committable suggestion
```yaml
          env:
```
🤖 Prompt for AI Agents
In k8s/mongodb-deployment.yaml around lines 21 to 22, the envFrom key is present
but empty, causing manifest validation to fail. To fix this, either remove the
envFrom key entirely if not used, or provide a valid array with required items
such as configMapRef or secretRef to satisfy the schema requirements.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
  namespace: chatapp
type: Opaque
stringData:
  MONGO_INITDB_ROOT_USERNAME: root
  MONGO_INITDB_ROOT_PASSWORD: admin
```
Plaintext credentials committed—rotate & externalise immediately
Root username/password (root/admin) are now in git history. Even though Kubernetes encodes secrets, anyone with repo access can read them. Actions:
- Rotate the MongoDB root creds in all environments.
- Replace this manifest with a templated secret (e.g., sealed-secrets, SOPS, AWS Secrets Manager CSI driver) containing no real credentials in the repo.
- Force-push removal or use GitHub/GitLab secret-scanning to mitigate exposure.
```yaml
stringData:
  MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
  MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASS}
```

and inject via CI/CD variables.
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 1-1: wrong new line character: expected \n
(new-lines)
[error] 9-9: no new line character at the end of file
(new-line-at-end-of-file)
🤖 Prompt for AI Agents
In k8s/mongodb-secret.yaml lines 1 to 9, the MongoDB root username and password
are stored in plaintext, which exposes sensitive credentials in the git history.
To fix this, rotate the MongoDB root credentials in all environments
immediately, then replace this manifest with a templated secret that does not
contain real credentials in the repo, such as using sealed-secrets, SOPS, or a
cloud secrets manager CSI driver. Update the manifest to reference environment
variables or CI/CD injected secrets instead of hardcoded values, and remove the
plaintext credentials from git history by force-pushing or using secret-scanning
tools.
```yaml
  ports:
  - protocol: TCP
  port: 27017
  targetPort: 27017
  type: ClusterIP
```
Fix incorrect indentation under ports list
`protocol`, `port`, and `targetPort` must be indented two spaces deeper than `ports:`; otherwise `ports` is interpreted as null, making the manifest invalid at apply-time.
Add a port name while touching the block.
```diff
   ports:
-  - protocol: TCP
-  port: 27017
-  targetPort: 27017
+  - name: mongodb
+    protocol: TCP
+    port: 27017
+    targetPort: 27017
```

📝 Committable suggestion
```yaml
  ports:
  - name: mongodb
    protocol: TCP
    port: 27017
    targetPort: 27017
  type: ClusterIP
```
🤖 Prompt for AI Agents
In k8s/mongodb-service.yaml around lines 9 to 13, the indentation of the port
configuration under the `ports` list is incorrect, causing the manifest to be
invalid. Fix this by indenting `- protocol`, `port`, and `targetPort` two spaces
deeper than `ports:` and add a `name` field to the port block to comply with
Kubernetes best practices.
App Deployment on AWS EKS (Terraform + GitLab CI/CD)
• Provisioned and deployed a full-stack chat application on AWS EKS with complete infrastructure automation using Terraform and GitLab CI/CD
• Used Terraform to provision AWS resources: EKS, VPC, ECR, S3, IAM, and DynamoDB
• Configured S3 + DynamoDB for Terraform remote state management
• Dockerized frontend and backend apps; pushed images to Amazon ECR
• Deployed workloads to EKS using Kubernetes manifests
• Set up self-hosted GitLab Runner on EC2 for secure CI/CD pipeline
• Automated entire deployment: infrastructure provisioning → image builds → app deployment
• Used ALB Ingress Controller to expose the frontend via public URL
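The automation described above could be stitched together in a root pipeline along these lines (a sketch; the include file names match this PR, but the stage wiring and job split are illustrative):

```yaml
# .gitlab-ci.yml – root pipeline tying infra provisioning to app deployment
include:
  - local: ".gitlab-ci.infra.yml"   # terraform plan/apply for EKS, ECR, S3, DynamoDB
  - local: ".gitlab-ci.deploy.yml"  # docker build/push to ECR + kubectl apply to EKS

stages:
  - plan     # terraform plan (infra)
  - apply    # terraform apply (infra)
  - build    # docker build frontend/backend
  - push     # push images to ECR
  - deploy   # update kubeconfig, apply k8s manifests
```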
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores
- Updated `.gitignore` for Terraform and development environment files.