Complete automation framework for deploying NVIDIA DPF (DPU Platform Framework) on Red Hat OpenShift clusters with NVIDIA BlueField-3 DPUs.
One command does everything:

```bash
make all
```

This handles the complete deployment lifecycle:
- Creates OpenShift clusters using Red Hat Assisted Installer
- Deploys NVIDIA DPF operator with all prerequisites
- Sets up DPU-accelerated networking with OVN-Kubernetes
- Provisions worker nodes automatically via Bare Metal Operator
- Host: RHEL 8/9, 64GB+ RAM, 16+ CPU cores
- Hardware: NVIDIA BlueField-3 DPUs on worker nodes
- Network: Internet access, management and high-speed networks
- OpenShift CLI (`oc`)
- Red Hat Assisted Installer CLI (`aicli`)
- Helm 3.x
- Standard tools: `jq`, `git`, `curl`
- Red Hat Pull Secret: Download from Red Hat Console
- Red Hat Offline Token: Generate at cloud.redhat.com/openshift/token
- NVIDIA NGC API Key: Create at NGC Portal → Account → Setup
```bash
git clone https://github.com/rh-ecosystem-edge/openshift-dpf.git
cd openshift-dpf
cp .env.example .env

# Add Red Hat offline token
mkdir -p ~/.aicli
echo "YOUR_OFFLINE_TOKEN" > ~/.aicli/offlinetoken.txt

# Add OpenShift pull secret (downloaded from Red Hat)
cp ~/Downloads/openshift-pull-secret.json openshift_pull.json

# Create NGC pull secret
cat > pull-secret.txt << 'EOF'
{
  "auths": {
    "nvcr.io": {
      "username": "$oauthtoken",
      "password": "YOUR_NGC_API_KEY",
      "auth": "BASE64_ENCODED_CREDENTIALS"
    }
  }
}
EOF

# Edit .env file with your settings
nano .env
```
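The `auth` field in the NGC pull secret is the base64 encoding of `username:password`. A minimal sketch that fills it in automatically (the `$oauthtoken` username is NGC's fixed convention; the API key value here is a placeholder you must replace):

```shell
#!/usr/bin/env bash
# Sketch: build pull-secret.txt with the "auth" field computed from
# the fixed $oauthtoken username and your NGC API key (placeholder).
NGC_API_KEY="YOUR_NGC_API_KEY"

# base64 of "username:password"; tr strips newlines for portability
AUTH=$(printf '%s:%s' '$oauthtoken' "$NGC_API_KEY" | base64 | tr -d '\n')

cat > pull-secret.txt <<EOF
{
  "auths": {
    "nvcr.io": {
      "username": "\$oauthtoken",
      "password": "${NGC_API_KEY}",
      "auth": "${AUTH}"
    }
  }
}
EOF
```

Note that `username` is literally the string `$oauthtoken`, not a variable to expand.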
```bash
# Essential settings:
CLUSTER_NAME=my-dpf-cluster
BASE_DOMAIN=example.com
VM_COUNT=1           # 1=SNO, 3+=Multi-node
DPF_VERSION=v25.7.1
```

```bash
make all
```

⏱️ Takes 2-3 hours: fully automated, no user interaction needed.
All scripts and Make targets read configuration from a single `.env` file at the repo root. This file is generated: never edit the source files in `ci/` for your local setup.
| File | Role |
|---|---|
| `ci/env.defaults` | Default values for every optional variable. User environment variables always override these. |
| `ci/env.required` | Variables that have no default and must be provided. Generation fails immediately if these are not set in the user's environment. |
| `ci/env.template` | The canonical set of variables used by the scripts, and the order in which they appear in the generated `.env`. |
1. Export your required variables (and any optional overrides):

   ```bash
   export CLUSTER_NAME=my-cluster
   export BASE_DOMAIN=example.com
   export API_VIP=10.1.150.100
   export INGRESS_VIP=10.1.150.101
   export DPU_HOST_CIDR=10.0.110.0/24
   export BFB_URL=https://content.mellanox.com/BlueField/...
   ```

   To keep overrides reusable, put them in a personal file (e.g. `user.env`) and source it first:

   ```bash
   source user.env
   make generate-env
   ```

2. Run the generator:

   ```bash
   make generate-env            # creates .env (fails if .env already exists)
   make generate-env FORCE=true # overwrites an existing .env
   ```
- `ci/env.defaults` is sourced: sets defaults for every variable, but does not overwrite anything already exported in your shell.
- `ci/env.required` is sourced: aborts with an error if any required variable is still unset.
- `envsubst` renders `ci/env.template` into `.env`, substituting every `${VAR}` with its resolved value.
A flat KEY=VALUE file at the repo root (`.env`) containing the merged result of your overrides plus the defaults. This is consumed by `make` and `scripts/env.sh` at runtime.
```bash
make validate-env-files
```

Checks that every variable in `ci/env.defaults` has a corresponding entry in `ci/env.template` so nothing is silently dropped, and reports template-only variables that have no default.
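Conceptually this check is a set comparison between variable names in the two files. A standalone sketch with standard tools, assuming `KEY=value` lines in the defaults file and `${KEY}` references in the template (not the repo's actual implementation):

```shell
#!/usr/bin/env bash
# Variables defined in ci/env.defaults but absent from ci/env.template
# would be silently dropped from the generated .env.
defaults_keys() { grep -oE '^[A-Za-z_][A-Za-z0-9_]*=' ci/env.defaults | tr -d '=' | sort -u; }
template_keys() { grep -oE '\$\{[A-Za-z_][A-Za-z0-9_]*\}' ci/env.template | tr -d '${}' | sort -u; }

# comm -23: names only in the first input (defaults with no template entry)
comm -23 <(defaults_keys) <(template_keys)
# comm -13: names only in the second input (template vars with no default)
comm -13 <(defaults_keys) <(template_keys)
```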
| Guide | Purpose |
|---|---|
| Getting Started | Step-by-step setup guide |
| Configuration | Environment variables |
| Worker Provisioning | Add physical worker nodes |
| Troubleshooting | Fix common issues |
| External storage | SKIP_DEPLOY_STORAGE and storage requirements |
Perfect for development and edge computing:

```bash
VM_COUNT=1
RAM=32768   # 32GB recommended
make all
```

For high-availability environments:

```bash
VM_COUNT=3
API_VIP=10.1.150.100      # Required for multi-node
INGRESS_VIP=10.1.150.101  # Required for multi-node
make all
```

Full DPU acceleration with worker nodes:

```bash
WORKER_COUNT=2  # Number of physical worker nodes
# Configure WORKER_*_BMC_IP, WORKER_*_BMC_USER, etc.
make all
```

To assign fixed IPs to cluster VMs (e.g. for predictable addressing or firewall rules), set `VM_STATIC_IP=true` and provide these required variables in `.env`:
| Variable | Description |
|---|---|
| `VM_EXT_IPS` | Comma-separated list of static IPs, one per VM (must have at least `VM_COUNT` entries). Example: `10.8.2.110,10.8.2.111,10.8.2.112` |
| `VM_EXT_PL` | Prefix length for the subnet (e.g. `24` for /24) |
| `VM_GW` | Default gateway IP |
| `VM_DNS` | DNS server IP(s), comma-separated |
Example for 3 VMs:

```bash
VM_STATIC_IP=true
VM_EXT_IPS=10.8.2.110,10.8.2.111,10.8.2.112
VM_EXT_PL=24
VM_GW=10.8.2.1
VM_DNS=8.8.8.8,4.4.4.4
```

Optional: `PRIMARY_IFACE` (default `enp1s0`) is the interface name used for static config on the VMs.
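Since `VM_EXT_IPS` must carry at least `VM_COUNT` entries, a quick sanity check before running `make all` can catch a mismatch early. A standalone sketch (not part of the repo's own validation):

```shell
#!/usr/bin/env bash
VM_COUNT=3
VM_EXT_IPS=10.8.2.110,10.8.2.111,10.8.2.112

# Split the comma-separated list into an array and compare counts
IFS=',' read -r -a ips <<< "$VM_EXT_IPS"
if [ "${#ips[@]}" -lt "$VM_COUNT" ]; then
  echo "Error: VM_EXT_IPS has ${#ips[@]} entries, need at least $VM_COUNT" >&2
  exit 1
fi
echo "OK: ${#ips[@]} static IPs for $VM_COUNT VMs"
```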
```bash
make all              # Full automated deployment

make create-cluster   # Create OpenShift cluster
make deploy-dpf       # Deploy DPF operator
make add-worker-nodes # Add worker nodes

make worker-status    # Check worker status
make run-dpf-sanity   # Health checks
make clean-all        # Complete cleanup
```

- OpenShift: 4.20 (the only supported version)
- DPF: v25.7+ (production), v25.4+ (legacy support)
- Hardware: NVIDIA BlueField-3 DPUs on Dell/HPE/Supermicro servers
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Issues: Report problems in GitHub Issues
- Documentation: Complete user guides in `docs/user-guide/`
- Community: Join discussions in repository discussions
Get started with your first deployment: Getting Started Guide