A comprehensive Terraform module for provisioning Amazon Elastic Kubernetes Service (EKS) clusters with integrated platform services, pod identity management, and networking capabilities.
This module provides a production-ready EKS cluster with integrated platform services and best practices. It includes:
- EKS Cluster Management: Full EKS cluster provisioning with configurable versions and logging
- Platform Integrations: Built-in support for ArgoCD, cert-manager, External DNS, External Secrets, and more
- Pod Identity Management: AWS Pod Identity for secure workload-to-AWS service authentication
- Networking: Optional VPC creation with transit gateway support
- Security: Configurable security groups, access entries, and KMS encryption
- Cross-Account Support: Hub-spoke architecture for multi-account deployments
- Kubernetes version 1.32+ support
- Configurable cluster logging (API, audit, authenticator, controller manager, scheduler)
- Public/private endpoint access control (see the configuration sketch after this list)
- KMS encryption for cluster secrets
- Auto-scaling with Karpenter integration
- AWS Pod Identity for secure workload authentication
- Configurable access entries for cluster access
- Security group management with customizable rules
- Cross-account role support for hub-spoke architectures
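These options map onto the module inputs documented in the Inputs table below. A minimal illustrative sketch (all values are examples, not defaults):

```hcl
module "eks" {
  source  = "appvia/eks/aws"
  version = "1.0.0"

  cluster_name       = "hardened-eks" # illustrative name
  kubernetes_version = "1.34"

  vpc_id             = "vpc-1234567890"
  private_subnet_ids = ["subnet-1234567890", "subnet-0987654321"]

  # Control-plane logging and endpoint exposure
  cluster_enabled_log_types = ["api", "audit", "authenticator"]
  enable_private_access     = true
  enable_public_access      = false

  # KMS encryption for cluster secrets
  create_kms_key         = true
  kms_key_administrators = ["arn:aws:iam::123456789012:role/KeyAdmin"] # illustrative ARN

  tags = {
    Environment = "Production"
    Product     = "EKS"
    Owner       = "Engineering"
  }
}
```

The platform integrations below are each enabled through their own input block: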
- ArgoCD: GitOps deployment platform
- cert-manager: Automated certificate management
- External DNS: Route53 integration for service discovery
- External Secrets: AWS Secrets Manager and SSM Parameter Store integration
- Terranetes: Terraform-as-a-Service platform
- AWS ACK IAM: AWS Controllers for Kubernetes
- CloudWatch Observability: Monitoring and logging
- Kubecost: Cost monitoring and optimization with AWS CUR integration
module "eks" {
source = "appvia/eks/aws"
version = "1.0.0"
cluster_name = "my-eks-cluster"
tags = {
Environment = "Production"
Product = "EKS"
Owner = "Engineering"
}
}module "eks" {
  source  = "appvia/eks/aws"
  version = "1.0.0"

  cluster_name = "production-eks"

  tags = {
    Environment = "Production"
    Product     = "EKS"
    Owner       = "Engineering"
  }

  # Enable platform services
  argocd = {
    enabled         = true
    namespace       = "argocd"
    service_account = "argocd"
  }

  cert_manager = {
    enabled           = true
    namespace         = "cert-manager"
    service_account   = "cert-manager"
    route53_zone_arns = ["arn:aws:route53:::hostedzone/Z1234567890"]
  }

  external_dns = {
    enabled           = true
    namespace         = "external-dns"
    service_account   = "external-dns"
    route53_zone_arns = ["arn:aws:route53:::hostedzone/Z1234567890"]
  }

  external_secrets = {
    enabled              = true
    namespace            = "external-secrets"
    service_account      = "external-secrets"
    secrets_manager_arns = ["arn:aws:secretsmanager:*:*"]
    ssm_parameter_arns   = ["arn:aws:ssm:*:*:parameter/eks/*"]
  }
}
```

With custom pod identities:

```hcl
module "eks" {
  source  = "appvia/eks/aws"
  version = "1.0.0"

  cluster_name = "pod-identity-eks"

  tags = {
    Environment = "Production"
    Product     = "EKS"
    Owner       = "Engineering"
  }

  # Custom pod identities
  pod_identity = {
    my-app = {
      enabled         = true
      name            = "my-app-pod-identity"
      namespace       = "my-app"
      service_account = "my-app-sa"

      managed_policy_arns = {
        "S3ReadOnly"        = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
        "DynamoDBReadWrite" = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
      }

      policy_statements = [
        {
          sid       = "CustomPolicy"
          effect    = "Allow"
          actions   = ["s3:GetObject"]
          resources = ["arn:aws:s3:::my-bucket/*"]
        }
      ]
    }
  }
}
```

For a spoke cluster in a hub-spoke (cross-account) architecture:

```hcl
module "eks" {
  source  = "appvia/eks/aws"
  version = "1.0.0"

  cluster_name = "spoke-eks"

  tags = {
    Environment = "Production"
    Product     = "EKS"
    Owner       = "Engineering"
  }

  # Hub account configuration
  hub_account_id           = "123456789012"
  hub_account_role         = "argocd-pod-identity-hub"
  hub_account_roles_prefix = "argocd-cross-account-*"

  # Enable ArgoCD for GitOps
  argocd = {
    enabled         = true
    namespace       = "argocd"
    service_account = "argocd"
  }
}
```

The module assumes the account already has an existing VPC to provision the cluster within; you must supply the VPC ID and the IDs of the private subnets where the cluster should be placed.
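If the IDs are not known ahead of time, they can be discovered with Terraform data sources. A minimal sketch, assuming the VPC carries a hypothetical `Name` tag and the private subnets a hypothetical `Tier = private` tag:

```hcl
# Look up the existing VPC by its Name tag (tag values are illustrative)
data "aws_vpc" "selected" {
  tags = {
    Name = "my-vpc"
  }
}

# Collect the private subnet IDs within that VPC
data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.selected.id]
  }

  tags = {
    Tier = "private"
  }
}
```

The resulting `data.aws_vpc.selected.id` and `data.aws_subnets.private.ids` can be passed to the module in place of the literal IDs shown below.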
```hcl
# Use existing VPC
vpc_id             = "vpc-1234567890"
private_subnet_ids = ["subnet-1234567890", "subnet-0987654321"]
```

Configure cluster access using access entries:
```hcl
access_entries = {
  admin = {
    principal_arn = "arn:aws:iam::123456789012:role/AdminRole"

    policy_associations = {
      cluster_admin = {
        policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
        access_scope = {
          type = "cluster"
        }
      }
    }
  }
}
```

Pod identities provide secure workload-to-AWS service authentication:
```hcl
pod_identity = {
  my-workload = {
    enabled         = true
    name            = "my-workload-identity"
    namespace       = "my-namespace"
    service_account = "my-service-account"

    managed_policy_arns = {
      "S3Access" = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
    }
  }
}
```

Customize security group rules for the cluster and nodes:
```hcl
cluster_security_group_additional_rules = {
  custom_ingress = {
    description = "Custom ingress rule"
    protocol    = "tcp"
    from_port   = 8080
    to_port     = 8080
    type        = "ingress"
    cidr_blocks = ["10.0.0.0/8"]
  }
}

node_security_group_additional_rules = {
  custom_egress = {
    description = "Custom egress rule"
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    type        = "egress"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

ArgoCD is a GitOps deployment platform for Kubernetes applications:
```hcl
argocd = {
  enabled         = true
  namespace       = "argocd"
  service_account = "argocd"
}
```

cert-manager provides automated certificate management for Kubernetes:
```hcl
cert_manager = {
  enabled           = true
  namespace         = "cert-manager"
  service_account   = "cert-manager"
  route53_zone_arns = ["arn:aws:route53:::hostedzone/Z1234567890"]
}
```

External DNS provides Route53 integration for automatic DNS record management:
```hcl
external_dns = {
  enabled           = true
  namespace         = "external-dns"
  service_account   = "external-dns"
  route53_zone_arns = ["arn:aws:route53:::hostedzone/Z1234567890"]
}
```

External Secrets integrates with AWS Secrets Manager and SSM Parameter Store:
```hcl
external_secrets = {
  enabled              = true
  namespace            = "external-secrets"
  service_account      = "external-secrets"
  secrets_manager_arns = ["arn:aws:secretsmanager:*:*"]
  ssm_parameter_arns   = ["arn:aws:ssm:*:*:parameter/eks/*"]
}
```

Terranetes is a Terraform-as-a-Service platform for infrastructure management:
```hcl
terranetes = {
  enabled         = true
  namespace       = "terraform-system"
  service_account = "terranetes-executor"

  managed_policy_arns = {
    "AdministratorAccess" = "arn:aws:iam::aws:policy/AdministratorAccess"
  }
}
```

AWS ACK IAM provides AWS Controllers for Kubernetes IAM management:
```hcl
aws_ack_iam = {
  enabled             = true
  namespace           = "ack-system"
  service_account     = "ack-iam-controller"
  managed_policy_arns = {}
}
```

CloudWatch Observability provides monitoring and logging with CloudWatch:
```hcl
cloudwatch_observability = {
  enabled         = true
  namespace       = "cloudwatch-observability"
  service_account = "cloudwatch-observability"
}
```

Kubecost provides comprehensive cost monitoring and optimization for Kubernetes clusters, with advanced AWS integration capabilities.
Kubecost offers three main deployment modes:
- Standalone: Single cluster cost monitoring
- Federated Storage: Multi-cluster aggregation for centralized monitoring
- Cloud Costs: Integration with AWS Cost and Usage Reports (CUR) via Athena
Before setting up Kubecost, ensure you have:
- An active AWS account with appropriate permissions
- S3 buckets for data storage and Athena query results (a Terraform sketch follows this list)
- AWS Cost and Usage Report (CUR) configured (for cloud costs feature)
- Amazon Athena setup with Glue database and table (for cloud costs feature)
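As a starting point for the storage prerequisites, a minimal sketch that creates the two buckets (bucket names are illustrative; versioning, encryption, and lifecycle policies are left to your own standards):

```hcl
# Bucket for Kubecost federated/CUR data storage (name is illustrative)
resource "aws_s3_bucket" "kubecost_data" {
  bucket = "my-kubecost-data-bucket"
}

# Bucket for Athena query results (name is illustrative)
resource "aws_s3_bucket" "athena_results" {
  bucket = "my-athena-results-bucket"
}
```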
To enable cloud costs analysis, you need to set up AWS Cost and Usage Reports:

1. Create a CUR in the AWS Billing Console:
   - Navigate to the AWS Billing Dashboard
   - Create a new Cost and Usage Report with daily granularity
   - Enable Resource IDs and Athena integration
   - Specify an S3 bucket for CUR data storage
2. Set up the Athena integration:
   - Use the AWS-provided CloudFormation template to create the Athena resources
   - Create an S3 bucket for Athena query results
   - Configure the Athena workgroup and database
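The console steps above can also be expressed in Terraform. A minimal sketch using the AWS provider's `aws_cur_report_definition` resource, which must be created in us-east-1 (names are illustrative):

```hcl
# CUR report definitions are only available in us-east-1
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_cur_report_definition" "kubecost" {
  provider = aws.us_east_1

  report_name                = "kubecost-cur"     # illustrative name
  time_unit                  = "DAILY"            # daily granularity
  format                     = "Parquet"          # required for Athena
  compression                = "Parquet"
  additional_schema_elements = ["RESOURCES"]      # enable Resource IDs
  additional_artifacts       = ["ATHENA"]         # Athena integration
  report_versioning          = "OVERWRITE_REPORT" # required with ATHENA

  s3_bucket = "my-cur-bucket"                     # illustrative bucket
  s3_region = "us-east-1"
  s3_prefix = "cur"
}
```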
The module automatically provisions the necessary IAM roles and policies:
- S3 Access: Read/write access to federated and CUR buckets
- Athena Operations: Query execution, monitoring, and result retrieval
- Glue Metadata: Database and table schema access for CUR data
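For reference, the shape of those permissions looks roughly like the following policy document (a hand-written approximation, not the module's exact policy; bucket names are illustrative):

```hcl
data "aws_iam_policy_document" "kubecost_cloud_costs" {
  # S3: read CUR data and read/write federated storage
  statement {
    sid     = "S3Access"
    effect  = "Allow"
    actions = ["s3:GetObject", "s3:ListBucket", "s3:PutObject"]
    resources = [
      "arn:aws:s3:::my-cur-bucket",
      "arn:aws:s3:::my-cur-bucket/*",
      "arn:aws:s3:::kubecost-federated-bucket",
      "arn:aws:s3:::kubecost-federated-bucket/*",
    ]
  }

  # Athena: run queries, monitor execution, and fetch results
  statement {
    sid    = "AthenaOperations"
    effect = "Allow"
    actions = [
      "athena:StartQueryExecution",
      "athena:GetQueryExecution",
      "athena:GetQueryResults",
    ]
    resources = ["*"]
  }

  # Glue: read the CUR database and table metadata
  statement {
    sid       = "GlueMetadata"
    effect    = "Allow"
    actions   = ["glue:GetDatabase", "glue:GetTable", "glue:GetPartitions"]
    resources = ["*"]
  }
}
```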
Standalone mode provides basic cost monitoring for a single cluster:
```hcl
kubecosts = {
  enabled = true
}
```

Federated storage aggregates cost data from multiple clusters into a primary cluster for centralized monitoring:
```hcl
# Primary cluster (aggregates data from all clusters)
kubecosts = {
  enabled = true

  federated_storage = {
    federated_bucket_arn = "arn:aws:s3:::kubecost-federated-bucket"
    create_bucket        = true
    allowed_principals = [
      "ACCOUNT_ID"
    ]
  }
}

# Secondary clusters (send data to primary)
kubecosts_agent = {
  enabled               = true
  federated_bucket_name = "kubecost-federated-bucket"
}
```

Cloud costs mode integrates with AWS Cost and Usage Reports (CUR) via Amazon Athena for comprehensive cloud cost analysis:
```hcl
kubecosts = {
  enabled = true

  federated_storage = {
    federated_bucket_arn = "arn:aws:s3:::my-kubecost-bucket"
  }

  # Cloud costs integration with AWS CUR via Athena
  cloud_costs = {
    enable               = true
    cur_bucket_name      = "my-cur-bucket"
    athena_bucket_arn    = "arn:aws:s3:::my-athena-results-bucket"
    athena_database_name = "cost_and_usage_data"
    athena_table_name    = "cur_table"
  }
}
```

After deployment, verify Kubecost is working correctly:
1. Access the dashboard:

   ```bash
   kubectl port-forward -n kubecost svc/kubecost-cost-analyzer 9090:9090
   ```

2. Check the cloud integration:
   - Navigate to Settings → Cloud Cost Settings
   - Verify the AWS integration is active
   - Check for any error messages

3. Monitor the logs:

   ```bash
   kubectl logs -n kubecost deployment/kubecost-cost-analyzer
   ```
Additional resources:

- Kubecost Documentation
- AWS Cloud Billing Integration
- Multi-Cluster Setup
- AWS CUR Setup Guide
- Athena Integration
See the examples directory for complete usage examples:
- Basic EKS Cluster - Simple EKS cluster setup
- Platform Services - EKS with integrated platform services
- Custom Networking - EKS with custom VPC and transit gateway
- Pod Identity - EKS with custom pod identities
- Kubecost Cost Monitoring - EKS with Kubecost cost monitoring and AWS integration
Requirements:

| Name | Version |
|---|---|
| terraform | >= 1.0 |
| aws | >= 5.34 |
Providers:

| Name | Version |
|---|---|
| aws | >= 5.34 |
The terraform-docs utility is used to generate this README. Follow the steps below to update:

- Make changes to the `.terraform-docs.yml` file
- Fetch the `terraform-docs` binary (https://terraform-docs.io/user-guide/installation/)
- Run `terraform-docs markdown table --output-file ${PWD}/README.md --output-mode inject .`
Inputs:

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| cluster_name | Name of the Kubernetes cluster | string | n/a | yes |
| private_subnet_ids | List of private subnet IDs, if you want to use existing subnets | list(string) | n/a | yes |
| tags | Tags to apply to all resources | map(string) | n/a | yes |
| vpc_id | ID of the VPC where the EKS cluster will be created | string | n/a | yes |
| access_entries | Map of access entries to add to the cluster. This is required if you use a different IAM Role for Terraform Plan actions. | map(object({…})) | null | no |
| addons | Map of EKS addons to enable | map(object({…})) | null | no |
| argocd | The ArgoCD configuration | object({…}) | {} | no |
| aws_ack_iam | The AWS ACK IAM configuration | object({…}) | {} | no |
| aws_eks_ack | The AWS EKS ACK Controller configuration | object({…}) | {} | no |
| aws_prometheus | The AWS Prometheus configuration | object({…}) | {} | no |
| cert_manager | The cert-manager configuration | object({…}) | {} | no |
| cloudwatch_observability | The CloudWatch Observability configuration | object({…}) | {} | no |
| cluster_enabled_log_types | List of log types to enable for the EKS cluster | list(string) | […] | no |
| create_kms_key | Whether to create a KMS key for the EKS cluster | bool | true | no |
| ebs_csi_driver | The EBS CSI driver configuration | object({…}) | {} | no |
| efs_csi_driver | The EFS CSI driver configuration | object({…}) | {} | no |
| enable_cluster_creator_admin_permissions | Whether to enable cluster creator admin permissions (else create access entries for the cluster creator) | bool | false | no |
| enable_irsa | Whether to enable IRSA for the EKS cluster | bool | true | no |
| enable_private_access | Whether to enable private access to the EKS API server endpoint | bool | true | no |
| enable_public_access | Whether to enable public access to the EKS API server endpoint | bool | false | no |
| endpoint_public_access_cidrs | List of CIDR blocks which can access the Amazon EKS API server endpoint | list(string) | […] | no |
| external_dns | The External DNS configuration | object({…}) | {} | no |
| external_secrets | The External Secrets configuration | object({…}) | {} | no |
| hub_account_id | The AWS account ID of the hub account | string | null | no |
| hub_account_role | Indicates we should create a cross-account role for the hub to assume | string | "argocd-pod-identity-hub" | no |
| hub_account_roles_prefix | The prefix of the roles we are permitted to assume via the ArgoCD pod identity | string | "argocd-cross-account-*" | no |
| kms_key_administrators | A list of IAM ARNs for EKS key administrators. If no value is provided, the current caller identity is used to ensure at least one key admin is available. | list(string) | [] | no |
| kms_key_service_users | A list of IAM ARNs for EKS key service users | list(string) | [] | no |
| kms_key_users | A list of IAM ARNs for EKS key users | list(string) | [] | no |
| kubecosts | The Kubecost configuration | object({…}) | null | no |
| kubecosts_agent | The Kubecost Agent configuration | object({…}) | null | no |
| kubernetes_version | Kubernetes version for the EKS cluster | string | "1.34" | no |
| node_pools | Collection of node pools to create via Auto Mode Karpenter | list(string) | […] | no |
| node_security_group_additional_rules | List of additional security group rules to add to the node security group created. Set source_cluster_security_group = true inside rules to set the cluster_security_group as source. | any | {} | no |
| pod_identity | The pod identity configuration | map(object({…})) | {} | no |
| registries | Provision pull-through cache for registries | map(object({…})) | {} | no |
| security_group_additional_rules | List of additional security group rules to add to the cluster security group created | any | {} | no |
| terranetes | The Terranetes platform configuration | object({…}) | {} | no |
Outputs:

| Name | Description |
|---|---|
| account_id | The AWS account ID. |
| cluster_arn | The ARN of the EKS cluster |
| cluster_certificate_authority_data | The base64 encoded certificate data for the EKS cluster |
| cluster_endpoint | The endpoint for the EKS Kubernetes API |
| cluster_name | The name of the EKS cluster. |
| cluster_oidc_provider_arn | The ARN of the OIDC provider for the EKS cluster |
| cross_account_role_arn | The cross account arn when we are using a hub |
| ebs_csi_driver_pod_identity_arn | The ARN of the EBS CSI driver pod identity |
| efs_csi_driver_pod_identity_arn | The ARN of the EFS CSI driver pod identity |
| region | The AWS region in which the cluster is provisioned |
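The cluster outputs can be wired into downstream providers. A minimal sketch, assuming the Kubernetes provider is to be configured from the module outputs:

```hcl
# Fetch a short-lived auth token for the provisioned cluster
data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name
}

# Configure the Kubernetes provider from the module outputs
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```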