At Red Hat, we provide an internal OpenShift AI environment known as the RHOAI BU Cluster, where "BU" stands for our AI Business Unit. This cluster provides a centralized platform for experimentation, prototyping, and the scalable deployment of AI solutions across the company.
Its operations are managed through this GitOps repository, which uses declarative configuration to maintain and evolve the entire infrastructure behind Red Hat OpenShift AI.
Check the Fully GitOpsified implementation of a RHOAI platform blog post to learn more!
- Two OpenShift clusters: development (rhoaibu-cluster-dev, on the dev branch) and production (rhoaibu-cluster-prod, on the main branch)
- Complete AI/ML platform infrastructure using GitOps practices
- Models as a Service (MaaS) platform with 15+ AI models
- Working example of GitOps for AI infrastructure
- Reference architecture for organizations implementing AI/ML platforms
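With a branch-per-environment model like the one above, each cluster is typically driven by an Argo CD Application pinned to its branch. The sketch below shows what the dev cluster's Application might look like; the repository URL, paths, and namespaces are assumptions for illustration, not the repository's actual layout.

```yaml
# Illustrative Argo CD Application mapping the dev branch to the dev cluster.
# repoURL and path are hypothetical placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rhoaibu-cluster-dev
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/rhoai-gitops.git  # hypothetical
    targetRevision: dev        # dev branch -> rhoaibu-cluster-dev
    path: clusters/dev         # assumed directory layout
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-gitops
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert out-of-band changes on the cluster
```

A second Application with `targetRevision: main` would serve the production cluster the same way, so promoting a change is just a merge from dev to main.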
NOTE: Want to accelerate your AI infrastructure deployment? Check the AI-Accelerators repository!
- OpenShift cert-manager Operator
- OpenShift Data Foundation Operator
- NVIDIA GPU Operator
- Node Feature Discovery (NFD) Operator
- Kiali Operator
- OpenShift Dev Spaces Operator
- OpenShift GitOps Operator
- OpenShift Pipelines Operator
- OpenShift Serverless Operator
- OpenShift Service Mesh
- RHOAI Operator
- Web Terminal
- Authorino Operator
- OpenShift Lightspeed
- OpenShift Virtualization
- Red Hat Build of OpenTelemetry Operator
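In a GitOps-managed cluster, operators like those above are usually installed declaratively through OLM Subscription manifests committed to the repository rather than through the web console. As a hedged example, a Subscription for the RHOAI operator might look like this; the channel value should be checked against the current catalog.

```yaml
# Illustrative OLM Subscription installing the RHOAI operator via GitOps.
# Channel is an assumption; verify against the operator catalog.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator
spec:
  channel: stable
  name: rhods-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic  # upgrades apply without manual approval
```

One such manifest per operator, synced by Argo CD, keeps the full operator inventory reproducible from Git alone.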
- AIKit Operator Instance
- OVMS Operator Instance
- Redis Enterprise Operator Instance
- Starburst Operator Instance
- CodeFlare Operator Instance
- Pachyderm Operator Instance
- Run:ai Operator Instance
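An "operator instance" here is the custom resource an installed operator reconciles to actually deploy its workload. As one hedged illustration, RHOAI components such as CodeFlare are enabled through the operator's DataScienceCluster resource; the component set shown is a partial sketch, not the cluster's actual configuration.

```yaml
# Illustrative DataScienceCluster custom resource reconciled by the RHOAI
# operator; the components listed are a partial, assumed selection.
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    dashboard:
      managementState: Managed
    workbenches:
      managementState: Managed
    kserve:
      managementState: Managed
    codeflare:
      managementState: Managed
```

Committing such instance manifests alongside the Subscriptions means both the operators and their configuration live in Git.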