This repository provides a set of reusable, self-contained Terraform modules to deploy Materialize on the AWS cloud platform. You can use these modules individually or combine them to create your own custom infrastructure stack.
> **Note:** We recommend pinning your module sources to specific tags to avoid unexpected breaking changes in future versions. When updating Materialize versions, update your module source tags as well, taking care to follow any instructions in the release notes.
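For example, a pinned module source might look like the following (the repository URL and tag here are placeholders; substitute this repository's actual URL and the release tag you have tested):

```hcl
module "networking" {
  # Pin to an exact release tag via the ?ref= query parameter.
  # github.com/ORG/REPO is a placeholder -- use this repository's URL.
  source = "git::https://github.com/ORG/REPO.git//modules/networking?ref=v1.2.3"

  name_prefix = "mz"
}
```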
Before using these modules, ensure you have the necessary tooling installed, including Terraform, the AWS CLI (used by the provider `exec` blocks to fetch EKS tokens), kubectl, and Helm.
Each module is designed to be used independently. You can compose them in any way that fits your use case.
See examples/simple/ for a working example that ties the modules together into a complete environment.
AWS-Specific Modules:
| Module | Description |
|---|---|
| `modules/networking` | VPC, subnets, NAT gateways, and networking resources |
| `modules/eks` | EKS cluster with OIDC provider and security groups |
| `modules/eks-node-group` | EKS managed node groups for base workloads |
| `modules/karpenter` | Karpenter for advanced node autoscaling |
| `modules/karpenter-ec2nodeclass` | EC2NodeClass for Karpenter provisioning |
| `modules/karpenter-nodepool` | NodePool for Karpenter workload scheduling |
| `modules/database` | RDS PostgreSQL database for Materialize metadata |
| `modules/storage` | S3 bucket with IRSA for Materialize persistence |
| `modules/aws-lbc` | AWS Load Balancer Controller for NLB management |
| `modules/nlb` | Network Load Balancer for Materialize instance access |
| `modules/operator` | Materialize Kubernetes operator installation |
Cloud-Agnostic Kubernetes Modules:
For Kubernetes-specific modules (cert-manager, Materialize instance, etc.) that work across all cloud providers, see the kubernetes/ directory.
See the Kubernetes Modules README for details on:
- cert-manager installation
- Self-signed certificate issuer
- Materialize instance deployment
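As a sketch, wiring one of the cloud-agnostic modules into an AWS stack might look like the following (the module path and inputs are illustrative assumptions; consult the Kubernetes Modules README for the actual interface):

```hcl
# Illustrative only: the path and inputs may differ from the actual
# kubernetes/ modules -- check the Kubernetes Modules README.
module "cert_manager" {
  source = "../../kubernetes/cert-manager"

  # Install into the EKS cluster created by the AWS modules above.
  depends_on = [module.eks]
}
```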
You can use the modules individually or combine them into a setup that fits your use case.
To deploy a simple end-to-end environment, see the examples/simple folder.
```hcl
module "networking" {
  source      = "../../modules/networking"
  name_prefix = "mz"
  # ... networking vars
}

module "eks" {
  source             = "../../modules/eks"
  name_prefix        = "mz"
  vpc_id             = module.networking.vpc_id
  private_subnet_ids = module.networking.private_subnet_ids
  # ... eks vars
}

# See full working setup in the examples/simple/main.tf file
```

Ensure you configure the AWS, Kubernetes, and Helm providers. Here's a minimal setup:
```hcl
provider "aws" {
  region = var.aws_region
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    command     = "aws"
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
      command     = "aws"
    }
  }
}
```
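Alongside pinned module tags, it is good practice to pin provider versions as well. A minimal `required_providers` block might look like the following (the version constraints are illustrative; match the versions you have tested):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # illustrative constraint
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0" # illustrative constraint
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0" # illustrative constraint
    }
  }
}
```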