Himanshu2561/k3s-infra-using-terraform-and-ansible

OCI K3s Infrastructure via Terraform & Ansible

This repository provides an end-to-end Infrastructure-as-Code (IaC) solution for provisioning and managing a K3s cluster on Oracle Cloud Infrastructure (OCI). Terraform handles infrastructure provisioning, and Ansible (with a dynamic inventory) handles configuration management.

The setup is optimized for the OCI Always Free Tier (ARM-based Ampere A1 instances).

🏗️ Architecture Overview

The project automates the deployment of:

  • Network:
    • 1 Virtual Cloud Network (VCN) & Public Subnet.
    • Custom Security Lists for K3s (6443), SSH (22), and Web (80/443).
  • Compute (K3s Cluster):
    • 3 ARM-based Instances (VM.Standard.A1.Flex):
      • k3s-master: 2 OCPUs, 12 GB RAM.
      • k3s-worker-1: 1 OCPU, 6 GB RAM.
      • k3s-worker-2: 1 OCPU, 6 GB RAM.
    • Distributed across Fault Domains for resilience.
  • Load Balancer:
    • 1 Flexible Load Balancer serving as the cluster entry point.
  • Configuration Management:
    • Ansible Dynamic Inventory: Real-time querying of Terraform state to manage nodes without static host files.
  • Governance:
    • A budget safety net ($1 threshold) with email alerts.
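Once the stack is up, the ports opened by the security lists above can be spot-checked from a workstation. The sketch below is illustrative and not part of the repository; the port list is taken from the rules described above, and any host IP you pass in is your own.

```python
import socket

# Ports opened by the custom security lists: SSH, HTTP, HTTPS, K3s API
K3S_PORTS = [22, 80, 443, 6443]


def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def report(host: str) -> dict:
    """Probe every expected port on a node and return a {port: open?} map."""
    return {port: check_port(host, port) for port in K3S_PORTS}
```

Running `report("<master-public-ip>")` after `terraform apply` gives a quick sanity check that the security lists match the table above.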

📋 Prerequisites

  1. OCI Account & API Credentials.
  2. Terraform (>= 1.5.0).
  3. Ansible (>= 2.10).
  4. Python 3 (for the dynamic inventory script).
  5. OCI CLI Configured.

🚀 Getting Started

1. Provision Infrastructure with Terraform

```sh
# Initialize and apply
terraform init
terraform apply
```

2. Verify Ansible Dynamic Inventory

The project includes a dynamic inventory script that reads live data from Terraform outputs.

Note: If your SSH key has a passphrase, you must add it to your session's SSH agent first:

```sh
# Start the agent and add your private key
eval $(ssh-agent -s)
ssh-add ~/.ssh/oci_key
```

```sh
# Test connectivity to all nodes
ansible all -m ping
```

3. Deploy Playbooks

```sh
# Example: Run the connectivity test playbook
ansible-playbook ansible/ping.yml

# Install and configure the K3s cluster
ansible-playbook ansible/site.yml
```

4. Accessing the Cluster

The k3s_server role automatically writes a kubeconfig to your local ~/.kube/k3s.yaml, pointed at the master node's public IP, so you can use kubectl directly from your local machine.

```sh
# Set KUBECONFIG
export KUBECONFIG=~/.kube/k3s.yaml

# Verify nodes
kubectl get nodes
```

🛠️ Ansible Configuration

  • Dynamic Inventory: Located at ansible/terraform_inventory.py. It parses terraform output -json and groups hosts into [master], [workers], and [load_balancer].
  • Configuration: ansible.cfg is pre-configured to use the dynamic inventory and set the default SSH user to ubuntu.
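A minimal version of such a script might look like the sketch below. This is illustrative only: the group names mirror those listed above, but the exact shape of the repository's terraform_inventory.py and of your terraform output -json will differ.

```python
#!/usr/bin/env python3
"""Illustrative sketch of a Terraform-backed Ansible dynamic inventory.

Assumes Terraform outputs named node_public_ips (a map of node name -> IP)
and load_balancer_ip, as described in the Outputs section.
"""
import json  # used when emitting the inventory for Ansible (see bottom)


def build_inventory(outputs):
    """Convert `terraform output -json` data into Ansible's --list format."""
    ips = outputs["node_public_ips"]["value"]
    lb_ip = outputs["load_balancer_ip"]["value"]
    return {
        "master": {"hosts": [h for h in ips if "master" in h]},
        "workers": {"hosts": [h for h in ips if "worker" in h]},
        "load_balancer": {"hosts": ["lb"]},
        "_meta": {
            "hostvars": {
                **{h: {"ansible_host": ip} for h, ip in ips.items()},
                "lb": {"ansible_host": lb_ip},
            }
        },
    }


# When invoked by Ansible as `terraform_inventory.py --list`, the real
# script would run something like:
#   raw = subprocess.check_output(["terraform", "output", "-json"])
#   print(json.dumps(build_inventory(json.loads(raw))))
```

Pointing the inventory setting in ansible.cfg at an executable script of this shape is what lets `ansible all -m ping` resolve hosts without any static hosts file.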

📊 Outputs

  • node_public_ips: Map of node names to public IPs.
  • load_balancer_ip: Public IP of the entry point.

💰 Budget Safety

A budget named Always-Free-Safety-Budget is created at the tenancy level. If spend reaches $1, an email alert is sent to the address configured in the alert_email variable.

🧹 Cleanup

```sh
terraform destroy
```

🛡️ License

This project is licensed under the MIT License.
