AICR provides tooling for deploying optimized, validated GPU-accelerated AI runtimes in Kubernetes. It captures known-good combinations of drivers, operators, kernels, and system configurations to create reproducible artifacts for common Kubernetes deployment frameworks such as Helm and ArgoCD.
Running GPU-accelerated Kubernetes clusters reliably is hard. Small differences in kernel versions, drivers, container runtimes, operators, and Kubernetes releases can cause failures that are difficult to diagnose and expensive to reproduce.
Historically, this knowledge has lived in internal validation pipelines, playbooks, and tribal knowledge. AICR exists to externalize that experience. Its goal is to make validated configurations visible, repeatable, and reusable across environments.
AICR is a source of validated configuration knowledge for NVIDIA-accelerated Kubernetes environments.
It is:
- A curated set of tested and validated component combinations
- A reference for how NVIDIA-accelerated Kubernetes clusters are expected to be configured
- A foundation for generating reproducible deployment artifacts
- Designed to integrate with existing provisioning, CI/CD, and GitOps workflows
It is not:
- A Kubernetes distribution
- A cluster provisioning or lifecycle management system
- A managed control plane or hosted service
- A replacement for cloud provider or OEM platforms
AICR separates validated configuration knowledge from how that knowledge is consumed.
- Human-readable documentation lives under `docs/`.
- Version-locked configuration definitions ("recipes") capture known-good system states.
- Those definitions can be rendered into concrete artifacts such as Helm values, Kubernetes manifests, or install scripts.
- Recipes can be validated against actual system configurations to verify compatibility.
This separation allows the same validated configuration to be applied consistently across different environments and automation systems.
For example, a configuration validated for GB200 on Ubuntu 22.04 with Kubernetes 1.34 can be rendered into Helm values and manifests suitable for use in an existing GitOps pipeline.
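To make the idea of a version-locked recipe concrete, the sketch below shows what such a definition *might* look like for the GB200 example. This is an illustration only: the actual recipe schema is defined by the project, and every field name and version pin shown here is a hypothetical placeholder, not a validated combination.

```yaml
# Hypothetical recipe sketch -- field names and version pins are illustrative,
# not the project's real schema or validated values.
accelerator: gb200
os: ubuntu-22.04
kubernetes: "1.34"
intent: training
components:
  gpu-operator: "vX.Y"        # placeholder pin
  driver: "NNN.x"             # placeholder pin
  container-toolkit: "N.x"    # placeholder pin
```

The value of the format is the pinning itself: every component version is recorded together, so the same known-good state can be re-rendered and re-validated later.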
Some tooling and APIs are under active development; documentation reflects current and near-term capabilities.
Install the latest version using the installation script:
```bash
curl -sfL https://raw.githubusercontent.com/NVIDIA/aicr/main/install | bash -s --
```

See the Installation Guide for manual installation, building from source, and container images.
Get started quickly with AICR:
- Review the documentation under `docs/` to understand supported platforms and required components.
- Identify your target environment:
- GPU architecture
- Operating system and kernel
- Kubernetes distribution and version
- Workload intent (for example, training or inference)
- Apply the validated configuration guidance using your existing tools (Helm, kubectl, CI/CD, or GitOps).
- Validate and iterate as platforms and workloads evolve.
Example: Generate a validated configuration for GB200 on EKS with Ubuntu, optimized for Kubeflow training:
```bash
# Generate a recipe for your environment
aicr recipe --service eks --accelerator gb200 --os ubuntu --intent training --platform kubeflow -o recipe.yaml

# Render the recipe into Helm values for your GitOps pipeline
aicr bundle --recipe recipe.yaml -o ./bundles
```

The generated `bundles/` directory contains a per-component Helm bundle ready to deploy or commit to your GitOps repository. See the CLI Reference for more options.
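Once rendered bundles are committed to a Git repository, any standard GitOps tool can deploy them. As one hedged illustration of that last step, an Argo CD `Application` tracking a committed bundle might look like the following. The repository URL, path, and bundle directory layout are assumptions for illustration; adapt them to the layout `aicr bundle` actually produces in your setup.

```yaml
# Illustrative Argo CD Application -- repoURL, path, and bundle layout are assumed.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gpu-operator-bundle
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/gitops-repo.git  # placeholder repository
    targetRevision: main
    path: bundles/gpu-operator                        # assumed per-component path
  destination:
    server: https://kubernetes.default.svc
    namespace: gpu-operator
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With automated sync enabled, updating the committed bundle (for example, after regenerating it from a newer recipe) rolls the change out without a manual deploy step.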
Choose the documentation path that matches how you'll use AICR.
User – Platform and Infrastructure Operators
You deploy and operate GPU-accelerated Kubernetes clusters using validated configurations.
- Installation Guide – Install the aicr CLI (automated script, manual, or build from source)
- CLI Reference – Complete command reference with examples
- API Reference – REST API quick start
- Agent Deployment – Deploy the Kubernetes agent for automated snapshots
Contributor – Developers and Maintainers
You contribute code, extend functionality, or work on AICR internals.
- Agent Instructions – Core coding-agent guidance for Codex/Copilot (mirrors `.claude/CLAUDE.md`)
- Contributing Guide – Development setup, testing, and PR process
- Development Guide – Local development, Make targets, and tooling
- Architecture Overview – System design and components
- Bundler Development – How to create new bundlers
- Data Architecture – Recipe data model and query matching
Integrator – Automation and Platform Engineers
You integrate AICR into CI/CD pipelines, GitOps workflows, or larger platforms.
- API Reference – REST API endpoints and usage examples
- Data Flow – Understanding snapshots, recipes, and bundles
- Automation Guide – CI/CD integration patterns
- Kubernetes Deployment – Self-hosted API server setup
- Recipe Development – Adding and modifying recipe metadata
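As a starting point for the CI/CD integration patterns above, a pipeline step can regenerate bundles and commit them back to the GitOps repository. The sketch below uses GitHub Actions purely as an example; the workflow structure, install method, and commit step are assumptions, while the `aicr` flags match the quick-start example earlier in this document.

```yaml
# Hypothetical GitHub Actions workflow -- install path and repo layout are assumed.
name: regenerate-bundles
on:
  workflow_dispatch:
jobs:
  bundle:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install aicr
        run: curl -sfL https://raw.githubusercontent.com/NVIDIA/aicr/main/install | bash -s --
      - name: Generate recipe and render bundles
        run: |
          aicr recipe --service eks --accelerator gb200 --os ubuntu \
            --intent training --platform kubeflow -o recipe.yaml
          aicr bundle --recipe recipe.yaml -o ./bundles
      - name: Commit rendered bundles
        run: |
          git config user.name "ci-bot"               # placeholder identity
          git config user.email "ci-bot@example.com"  # placeholder identity
          git add bundles/ recipe.yaml
          git commit -m "chore: refresh validated bundles" || echo "no changes"
          git push
```

Committing the rendered artifacts (rather than generating them at deploy time) keeps the GitOps repository the single source of truth for what is actually running.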
- Documentation – Guides, references, and examples
- Roadmap – Feature priorities and development timeline
- Overview – Detailed system overview and glossary
- Security – Security-related resources
- Releases – Binaries, SBOMs, and other artifacts
- Issues – Bugs, feature requests, and questions
Contributions are welcome. See CONTRIBUTING.md for development setup, contribution guidelines, and the pull request process.