46 changes: 40 additions & 6 deletions Makefile
@@ -1,5 +1,5 @@
# Include environment variables (skip for generate-env so it starts clean)
ifeq ($(filter generate-env,$(MAKECMDGOALS)),)
# Include environment variables (skip for targets that don't need a .env)
ifeq ($(filter generate-env validate-env-files help,$(MAKECMDGOALS)),)
Comment on lines +1 to +2
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "Makefile" -type f | head -20

Repository: rh-ecosystem-edge/openshift-dpf



🏁 Script executed:

cat -n Makefile | head -30



🏁 Script executed:

# Search for uses of MAKECMDGOALS and .env handling
rg "MAKECMDGOALS|\.env|generate-env|validate-env" -A 2 -B 2



🏁 Script executed:

# Check if there are test files or documentation about Makefile usage
find . -type f \( -name "*test*" -o -name "*README*" -o -name "*.md" \) -exec grep -l "make.*validate-env\|make.*generate-env\|make.*deploy" {} \; | head -10



🏁 Script executed:

# Search for CI/CD configs and test patterns
find . -type f \( -name "*.yml" -o -name "*.yaml" -o -name "*.sh" \) -path "*/ci/*" -o -path "*/.github/*" | head -20



🏁 Script executed:

# Look for any shell scripts or docs that show example make invocations
rg "make\s+\w+\s+\w+" --type md --type sh | grep -E "make\s+[a-z-]+\s+[a-z-]+" | head -20



🏁 Script executed:

# Check if there's a .github directory or CI configuration
find . -type d \( -name ".github" -o -name "ci" -o -name ".gitlab-ci" \) 2>/dev/null



🏁 Script executed:

# Look at the actual Makefile targets and see which ones are typically run together
rg "^[a-z].*:" Makefile | head -30



🏁 Script executed:

# Check if there are any scripts that invoke make with multiple targets
rg "make\s+" --type sh --type bash | head -20



🏁 Script executed:

# Look for make invocations in shell scripts
rg "make\s+" --type sh | head -20



🏁 Script executed:

# Check the ci directory structure
ls -la ci/



🏁 Script executed:

# Look at ci scripts and configs
find ci -type f | head -20



🏁 Script executed:

# Check if generate-env is ever called with other targets
rg "generate-env|validate-env-files" --type md --type sh | grep -v "\.env" | head -15



🏁 Script executed:

# Verify the exact filter/filter-out logic behavior with a simple test
cat > /tmp/test_make_logic.mk << 'EOF'
EXEMPT := a b c
GOALS1 := a d
GOALS2 := a
GOALS3 := d
GOALS4 :=

test:
	@echo "Test 1: GOALS='a d' - should include .env"
	@echo "filter result: $(filter a b c,a d)"
	@echo "filter-out result: $(filter-out a b c,a d)"
	@echo ""
	@echo "Test 2: GOALS='a' - should skip .env"
	@echo "filter result: $(filter a b c,a)"
	@echo "filter-out result: $(filter-out a b c,a)"
	@echo ""
	@echo "Test 3: GOALS='d' - should include .env"
	@echo "filter result: $(filter a b c,d)"
	@echo "filter-out result: $(filter-out a b c,d)"
	@echo ""
	@echo "Test 4: GOALS='' - should include .env"
	@echo "filter result: $(filter a b c,)"
	@echo "filter-out result: $(filter-out a b c,)"
EOF
make -f /tmp/test_make_logic.mk test



🏁 Script executed:

# Let me verify the logic conceptually by examining how the filter function works
# Check if there are actual examples of how targets depend on each other
rg "^(generate-env|validate-env-files|deploy-dpf|all):" -A 5 Makefile



🏁 Script executed:

# Check for any documentation or comments about the include guard intention
rg -B2 -A2 "Include environment variables" Makefile



🏁 Script executed:

# Verify what variables deploy-dpf and other targets actually need
rg "\$\(.*\)" Makefile | grep -E "CLUSTER|API|BASE|DOMAIN|KUBECONFIG" | head -10



Fix .env loading for mixed-goal invocations.

Line 2 currently skips .env whenever any exempt goal is present. For example, `make validate-env-files deploy-dpf` skips .env for deploy-dpf, breaking the runtime variables that its scripts expect.

The filter function returns only the matching goals; when that result is non-empty, the ifeq condition is false and .env is excluded. The fix uses filter-out to detect non-exempt goals instead, so .env is skipped only when all goals are exempt.

Proposed fix
-ifeq ($(filter generate-env validate-env-files help,$(MAKECMDGOALS)),)
-include .env
-export
-endif
+EXEMPT_ENV_GOALS := generate-env validate-env-files help
+ifeq ($(MAKECMDGOALS),)
+include .env
+export
+else ifneq ($(filter-out $(EXEMPT_ENV_GOALS),$(MAKECMDGOALS)),)
+include .env
+export
+endif
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 1-2, the Makefile conditional mistakenly uses filter
to decide when to skip loading .env, causing .env to be skipped whenever any
exempt goal appears; change the conditional to use filter-out so it only skips
.env when ALL goals are exempt (i.e., replace the filter-based test with a
filter-out-based test), updating the existing conditional block that checks
"$(filter generate-env validate-env-files help,$(MAKECMDGOALS))" to use
"$(filter-out generate-env validate-env-files help,$(MAKECMDGOALS))" so
mixed-goal invocations like "make validate-env-files deploy-dpf" will still load
the .env for the non-exempt targets.

include .env
export
endif
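The proposed guard is easy to sanity-check with a scratch Makefile. Everything below is a demo: the .env contents and target bodies are made up, and `target: ; recipe` one-liners are used to avoid tab-sensitivity (GNU make assumed).

```shell
# Build a scratch project with a .env and the proposed include guard.
mkdir -p /tmp/envguard && cd /tmp/envguard
printf 'DEMO_VAR=loaded\n' > .env
cat > Makefile <<'EOF'
EXEMPT_ENV_GOALS := generate-env validate-env-files help
ifeq ($(MAKECMDGOALS),)
include .env
export
else ifneq ($(filter-out $(EXEMPT_ENV_GOALS),$(MAKECMDGOALS)),)
include .env
export
endif
help: ; @echo "DEMO_VAR=$(DEMO_VAR)"
deploy-dpf: ; @echo "DEMO_VAR=$(DEMO_VAR)"
EOF
make -s help              # exempt goal only: .env skipped, prints DEMO_VAR=
make -s help deploy-dpf   # mixed goals: .env loaded, prints DEMO_VAR=loaded twice
```

With the original filter-based test, the mixed invocation would have skipped .env because `help` appeared among the goals.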
@@ -35,7 +35,7 @@ WORKER_SCRIPT := scripts/worker.sh
delete-dpf-hcp-provisioner-operator \
verify-deployment verify-workers verify-dpu-nodes verify-dpudeployment \
run-traffic-flow-tests tft-setup tft-cleanup tft-show-config tft-results \
generate-env
validate-env-files generate-env

all:
@mkdir -p logs
@@ -268,8 +268,42 @@ verify-dpu-nodes:
verify-dpudeployment:
@$(VERIFY_SCRIPT) verify-dpudeployment

validate-env-files:
@bash -c '\
set -e; \
defaults=$$(grep -oP "^\w+" ci/env.defaults | sort); \
template=$$(grep -oP "^\w+" ci/env.template | sort); \
required=$$(grep -oP "\w+(?=:)" ci/env.required | sort); \
known=$$( echo "$$defaults"; echo "$$required" ); \
missing=""; \
for var in $$defaults; do \
if ! echo "$$template" | grep -qx "$$var"; then \
missing="$$missing $$var"; \
fi; \
done; \
extra=""; \
for var in $$template; do \
if ! echo "$$known" | grep -qx "$$var"; then \
extra="$$extra $$var"; \
fi; \
done; \
if [ -n "$$missing" ]; then \
echo "ERROR: variables in ci/env.defaults that are missing from ci/env.template:"; \
for var in $$missing; do echo " - $$var"; done; \
echo ""; \
echo "These variables will be silently dropped from .env."; \
echo "Fix: add a line VAR_NAME=\$${VAR_NAME} to ci/env.template for each."; \
exit 1; \
fi; \
if [ -n "$$extra" ]; then \
count=$$(echo $$extra | wc -w | tr -d " "); \
echo "OK $$count template-only variable(s) have no default (set per-environment):$${extra}"; \
fi; \
echo "OK all ci/env.defaults variables are present in ci/env.template"; \
'

FORCE ?= false
generate-env:
generate-env: validate-env-files
@if [ -f .env ] && [ "$(FORCE)" != "true" ]; then \
echo "ERROR: .env already exists. To overwrite, run: make generate-env FORCE=true"; \
exit 1; \
@@ -441,5 +475,5 @@ help:
@echo " TFT_DURATION - Duration per test in seconds (default: 10)"
@echo " TFT_CONNECTION_TYPE - Test type: iperf-tcp, iperf-udp, etc. (default: iperf-tcp)"
@echo " TFT_KUBECONFIG - Path to cluster kubeconfig"
@echo " TFT_SERVER_NODE - K8s node name for server (default: from HBN_HOSTNAME_NODE1)"
@echo " TFT_CLIENT_NODE - K8s node name for client (default: from HBN_HOSTNAME_NODE2)"
@echo " TFT_SERVER_NODE - K8s node name for server (default: from WORKER_1_NAME)"
@echo " TFT_CLIENT_NODE - K8s node name for client (default: from WORKER_2_NAME)"
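The validate-env-files recipe above leans on `grep -oP` (PCRE, so GNU grep is assumed) to pull variable names out of the env files. The same cross-check can be sketched standalone; the sample files here are invented stand-ins for ci/env.defaults and ci/env.template.

```shell
mkdir -p /tmp/envcheck && cd /tmp/envcheck
cat > env.defaults <<'EOF'
FOO=${FOO:-1}
BAR=${BAR:-2}
EOF
cat > env.template <<'EOF'
FOO=${FOO}
EOF
# "^\w+" grabs the variable name before '=' on each line.
defaults=$(grep -oP '^\w+' env.defaults | sort)
template=$(grep -oP '^\w+' env.template | sort)
for var in $defaults; do
  echo "$template" | grep -qx "$var" || echo "missing from template: $var"
done
# -> missing from template: BAR
```

This is exactly the failure mode the target guards against: a default that never makes it into the rendered .env.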
55 changes: 52 additions & 3 deletions ci/env.defaults
@@ -7,7 +7,8 @@

# Cluster Configuration
OPENSHIFT_VERSION=${OPENSHIFT_VERSION:-4.20.4}
USE_V419_WORKAROUND=${USE_V419_WORKAROUND:-false}
CATALOG_SOURCE_NAME=${CATALOG_SOURCE_NAME:-redhat-operators}
CATALOG_SOURCE_IMAGE=${CATALOG_SOURCE_IMAGE:-}

# Bridge Configuration
BRIDGE_NAME=${BRIDGE_NAME:-mgmt-br}
@@ -31,28 +32,58 @@ OVN_KUBERNETES_IMAGE_TAG=${OVN_KUBERNETES_IMAGE_TAG:-de9b16e0bb85c3b5727d5250124
OVN_KUBERNETES_UTILS_IMAGE_REPO=${OVN_KUBERNETES_UTILS_IMAGE_REPO:-ghcr.io/mellanox/ovn-kubernetes-dpf-utils}
OVN_KUBERNETES_UTILS_IMAGE_TAG=${OVN_KUBERNETES_UTILS_IMAGE_TAG:-v25.7.1-cff70b1}
INJECTOR_RESOURCE_NAME=${INJECTOR_RESOURCE_NAME:-openshift.io/bf3-p0-vfs}
INJECTOR_CHART_VERSION=${INJECTOR_CHART_VERSION:-v25.7.1-cff70b1}
OVNK_NAMESPACE=${OVNK_NAMESPACE:-openshift-ovn-kubernetes}
NUM_VFS=${NUM_VFS:-46}

# GitOps Operator Configuration
GITOPS_OPERATOR_CHANNEL=${GITOPS_OPERATOR_CHANNEL:-1.16}
GITOPS_OPERATOR_VERSION=${GITOPS_OPERATOR_VERSION:-v1.16.4}

# Maintenance Operator Configuration
MAINTENANCE_OPERATOR_VERSION=${MAINTENANCE_OPERATOR_VERSION:-0.2.0}

# Hypershift Configuration
HYPERSHIFT_IMAGE=${HYPERSHIFT_IMAGE:-quay.io/hypershift/hypershift-operator:latest}
HOSTED_CLUSTER_NAME=${HOSTED_CLUSTER_NAME:-doca}
CLUSTERS_NAMESPACE=${CLUSTERS_NAMESPACE:-clusters}
HOSTED_CONTROL_PLANE_NAMESPACE=${HOSTED_CONTROL_PLANE_NAMESPACE:-clusters-doca}
OCP_RELEASE_IMAGE=${OCP_RELEASE_IMAGE:-quay.io/openshift-release-dev/ocp-release:4.20.4-multi}
DISABLE_HCP_CAPS=${DISABLE_HCP_CAPS:-false}
ENABLE_HCP_MULTUS=${ENABLE_HCP_MULTUS:-true}
DPF_CLUSTER_TYPE=${DPF_CLUSTER_TYPE:-hypershift}

# Network Configuration
POD_CIDR=${POD_CIDR:-10.128.0.0/14}
SERVICE_CIDR=${SERVICE_CIDR:-172.30.0.0/16}
API_VIP=${API_VIP:-10.8.2.100}
INGRESS_VIP=${INGRESS_VIP:-10.8.2.101}
HBN_OVN_NETWORK=${HBN_OVN_NETWORK:-10.0.120.0/22}
NODES_MTU=${NODES_MTU:-1500}
PRIMARY_IFACE=${PRIMARY_IFACE:-enp1s0}

# MetalLB Configuration
HYPERSHIFT_API_IP=${HYPERSHIFT_API_IP:-}

# Pull Secrets
OPENSHIFT_PULL_SECRET=${OPENSHIFT_PULL_SECRET:-openshift_pull.json}
DPF_PULL_SECRET=${DPF_PULL_SECRET:-pull-secret.txt}

# NFD Configuration
NFD_OPERAND_IMAGE=${NFD_OPERAND_IMAGE:-quay.io/itsoiref/nfd:latest}

# HBN DPU Services Configuration
HBN_HELM_REPO_URL=${HBN_HELM_REPO_URL:-https://helm.ngc.nvidia.com/nvidia/doca}
HBN_HELM_CHART_VERSION=${HBN_HELM_CHART_VERSION:-1.0.3}
HBN_IMAGE_REPO=${HBN_IMAGE_REPO:-nvcr.io/nvidia/doca/doca_hbn}
HBN_IMAGE_TAG=${HBN_IMAGE_TAG:-3.2.0-doca3.2.0}

# DTS Service Configuration
DTS_HELM_REPO_URL=${DTS_HELM_REPO_URL:-https://helm.ngc.nvidia.com/nvidia/doca}
DTS_HELM_CHART_VERSION=${DTS_HELM_CHART_VERSION:-1.22.1}

# DMS Hostagent Configuration
DMS_HOSTAGENT_IMAGE=${DMS_HOSTAGENT_IMAGE:-ghcr.io/killianmuldoon/hostdriver:v25.10.1-patch.1}

# VM Configuration
VM_PREFIX=${VM_PREFIX:-vm-dpf}
VM_COUNT=${VM_COUNT:-3}
@@ -61,20 +92,27 @@ VCPUS=${VCPUS:-14}
DISK_SIZE1=${DISK_SIZE1:-120}
DISK_SIZE2=${DISK_SIZE2:-80}
MAC_PREFIX=${MAC_PREFIX:-52:54:00:ee:42}
VM_STATIC_IP=${VM_STATIC_IP:-false}

# Wait Configuration
MAX_RETRIES=${MAX_RETRIES:-90}
SLEEP_TIME=${SLEEP_TIME:-60}

# Paths
DISK_PATH=${DISK_PATH:-/var/lib/libvirt/images}
ISO_FOLDER=${ISO_FOLDER:-/var/lib/libvirt/images}
ISO_FOLDER=${ISO_FOLDER:-${DISK_PATH}}
ISO_TYPE=${ISO_TYPE:-minimal}
STATIC_NET_FILE=${STATIC_NET_FILE:-./configuration_templates/static_net.yaml}

# Storage
STORAGE_TYPE=${STORAGE_TYPE:-lvm}
SKIP_DEPLOY_STORAGE=${SKIP_DEPLOY_STORAGE:-false}
BFB_STORAGE_CLASS=${BFB_STORAGE_CLASS:-lvms-vg1}
BFB_URL=${BFB_URL:-http://10.8.2.236/bfb/rhcos_4.19.0-ec.4_installer_2025-04-23_07-48-42.bfb}

# NFS Configuration
NFS_SERVER_NODE_IP=${NFS_SERVER_NODE_IP:-}
NFS_PATH=${NFS_PATH:-/}

# Kubeconfig
KUBECONFIG=${KUBECONFIG:-./kubeconfig}
@@ -83,6 +121,7 @@ KUBECONFIG=${KUBECONFIG:-./kubeconfig}
AUTO_APPROVE_WORKER_CSR=${AUTO_APPROVE_WORKER_CSR:-false}
AUTO_APPROVE_DPUCLUSTER_CSR=${AUTO_APPROVE_DPUCLUSTER_CSR:-false}
WORKER_COUNT=${WORKER_COUNT:-0}
ENABLE_SHORT_WORKER_HOSTNAMES=${ENABLE_SHORT_WORKER_HOSTNAMES:-false}

# Verification
VERIFY_DEPLOYMENT=${VERIFY_DEPLOYMENT:-false}
@@ -100,3 +139,13 @@ SANITY_TESTS_PODS_WORKLOAD_FILE=${SANITY_TESTS_PODS_WORKLOAD_FILE:-manifests/pos
SANITY_TESTS_WORKLOAD_NAMESPACE=${SANITY_TESTS_WORKLOAD_NAMESPACE:-workload}
SANITY_TESTS_PING_COUNT=${SANITY_TESTS_PING_COUNT:-20}
SANITY_TESTS_PING_HBN_TO_HBN_PODS=${SANITY_TESTS_PING_HBN_TO_HBN_PODS:-false}

# DPF HCP Provisioner Operator Configuration
DPF_HCP_PROVISIONER_OPERATOR_CHART_URL=${DPF_HCP_PROVISIONER_OPERATOR_CHART_URL:-oci://quay.io/lhadad/charts/dpf-hcp-provisioner-operator}
DPF_HCP_PROVISIONER_OPERATOR_NAMESPACE=${DPF_HCP_PROVISIONER_OPERATOR_NAMESPACE:-dpf-hcp-provisioner-system}
DPF_HCP_PROVISIONER_OPERATOR_VERSION=${DPF_HCP_PROVISIONER_OPERATOR_VERSION:-0.1.2}
DPF_HCP_PROVISIONER_OPERATOR_IMAGE_REPO=${DPF_HCP_PROVISIONER_OPERATOR_IMAGE_REPO:-quay.io/lhadad/dpf-hcp-provisioner-operator}
DPF_HCP_PROVISIONER_OPERATOR_IMAGE_TAG=${DPF_HCP_PROVISIONER_OPERATOR_IMAGE_TAG:-v0.1.2}
DPFHCPPROVISIONER_PULL_SECRET_NAME=${DPFHCPPROVISIONER_PULL_SECRET_NAME:-my-pull-secret}
DPFHCPPROVISIONER_SSH_SECRET_NAME=${DPFHCPPROVISIONER_SSH_SECRET_NAME:-my-ssh-key}
ENABLE_BLUEFIELD_VALIDATION=${ENABLE_BLUEFIELD_VALIDATION:-false}
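Every line in this file uses the POSIX `${VAR:-default}` expansion, so a value already set in the caller's environment takes precedence over the default; that is also what lets ISO_FOLDER default to another variable. A quick shell illustration:

```shell
unset VM_COUNT
VM_COUNT=${VM_COUNT:-3}
echo "$VM_COUNT"    # -> 3 (default applied)

VM_COUNT=5
VM_COUNT=${VM_COUNT:-3}
echo "$VM_COUNT"    # -> 5 (preset value preserved)

# Nested default, as in ISO_FOLDER=${ISO_FOLDER:-${DISK_PATH}}:
DISK_PATH=/var/lib/libvirt/images
unset ISO_FOLDER
ISO_FOLDER=${ISO_FOLDER:-${DISK_PATH}}
echo "$ISO_FOLDER"  # -> /var/lib/libvirt/images
```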
3 changes: 0 additions & 3 deletions ci/env.required
@@ -14,7 +14,6 @@
# Network / DPU
: ${API_VIP:?API_VIP must be set}
: ${INGRESS_VIP:?INGRESS_VIP must be set}
: ${DPU_INTERFACE:?DPU_INTERFACE must be set (e.g. ens5f0np0)}
: ${DPU_HOST_CIDR:?DPU_HOST_CIDR must be set (e.g. 10.0.110.0/24)}
: ${HBN_OVN_NETWORK:?HBN_OVN_NETWORK must be set (e.g. 10.0.120.0/22)}

@@ -25,8 +24,6 @@
: ${BFB_URL:?BFB_URL must be set}

# HBN DPUServices
: ${HBN_HOSTNAME_NODE1:?HBN_HOSTNAME_NODE1 must be set}
: ${HBN_HOSTNAME_NODE2:?HBN_HOSTNAME_NODE2 must be set}
: ${HBN_HELM_REPO_URL:?HBN_HELM_REPO_URL must be set}
: ${HBN_HELM_CHART_VERSION:?HBN_HELM_CHART_VERSION must be set}
: ${HBN_IMAGE_REPO:?HBN_IMAGE_REPO must be set}
50 changes: 44 additions & 6 deletions ci/env.template
@@ -5,7 +5,8 @@
CLUSTER_NAME=${CLUSTER_NAME}
BASE_DOMAIN=${BASE_DOMAIN}
OPENSHIFT_VERSION=${OPENSHIFT_VERSION}
USE_V419_WORKAROUND=${USE_V419_WORKAROUND}
CATALOG_SOURCE_NAME=${CATALOG_SOURCE_NAME}
CATALOG_SOURCE_IMAGE=${CATALOG_SOURCE_IMAGE}

# Bridge Configuration
BRIDGE_NAME=${BRIDGE_NAME}
@@ -28,41 +29,60 @@ OVN_KUBERNETES_IMAGE_REPO=${OVN_KUBERNETES_IMAGE_REPO}
OVN_KUBERNETES_IMAGE_TAG=${OVN_KUBERNETES_IMAGE_TAG}
OVN_KUBERNETES_UTILS_IMAGE_REPO=${OVN_KUBERNETES_UTILS_IMAGE_REPO}
OVN_KUBERNETES_UTILS_IMAGE_TAG=${OVN_KUBERNETES_UTILS_IMAGE_TAG}
INJECTOR_RESOURCE_NAME=${INJECTOR_RESOURCE_NAME}
INJECTOR_CHART_VERSION=${INJECTOR_CHART_VERSION}
OVNK_NAMESPACE=${OVNK_NAMESPACE}
NUM_VFS=${NUM_VFS}

# GitOps Operator Configuration
GITOPS_OPERATOR_CHANNEL=${GITOPS_OPERATOR_CHANNEL}
GITOPS_OPERATOR_VERSION=${GITOPS_OPERATOR_VERSION}

# Maintenance Operator Configuration
MAINTENANCE_OPERATOR_VERSION=${MAINTENANCE_OPERATOR_VERSION}

# Hypershift Configuration
HYPERSHIFT_IMAGE=${HYPERSHIFT_IMAGE}
HOSTED_CLUSTER_NAME=${HOSTED_CLUSTER_NAME}
CLUSTERS_NAMESPACE=${CLUSTERS_NAMESPACE}
HOSTED_CONTROL_PLANE_NAMESPACE=${HOSTED_CONTROL_PLANE_NAMESPACE}
OCP_RELEASE_IMAGE=${OCP_RELEASE_IMAGE}
DISABLE_HCP_CAPS=${DISABLE_HCP_CAPS}
ENABLE_HCP_MULTUS=${ENABLE_HCP_MULTUS}
DPF_CLUSTER_TYPE=${DPF_CLUSTER_TYPE}

# Network Configuration
POD_CIDR=${POD_CIDR}
SERVICE_CIDR=${SERVICE_CIDR}
DPU_HOST_CIDR=${DPU_HOST_CIDR}
HBN_OVN_NETWORK=${HBN_OVN_NETWORK}
API_VIP=${API_VIP}
INGRESS_VIP=${INGRESS_VIP}
INJECTOR_RESOURCE_NAME=${INJECTOR_RESOURCE_NAME}
HBN_OVN_NETWORK=${HBN_OVN_NETWORK}
NODES_MTU=${NODES_MTU}
PRIMARY_IFACE=${PRIMARY_IFACE}

# MetalLB Configuration
HYPERSHIFT_API_IP=${HYPERSHIFT_API_IP}

# Pull Secret files
OPENSHIFT_PULL_SECRET=${OPENSHIFT_PULL_SECRET}
DPF_PULL_SECRET=${DPF_PULL_SECRET}

# NFD Configuration
NFD_OPERAND_IMAGE=${NFD_OPERAND_IMAGE}

# DPU Services Configuration
HBN_HOSTNAME_NODE1=${HBN_HOSTNAME_NODE1}
HBN_HOSTNAME_NODE2=${HBN_HOSTNAME_NODE2}
HBN_HELM_REPO_URL=${HBN_HELM_REPO_URL}
HBN_HELM_CHART_VERSION=${HBN_HELM_CHART_VERSION}
HBN_IMAGE_REPO=${HBN_IMAGE_REPO}
HBN_IMAGE_TAG=${HBN_IMAGE_TAG}

# DTS Service Configuration
DTS_HELM_REPO_URL=${DTS_HELM_REPO_URL}
DTS_HELM_CHART_VERSION=${DTS_HELM_CHART_VERSION}

# DMS Hostagent Configuration
DMS_HOSTAGENT_IMAGE=${DMS_HOSTAGENT_IMAGE}

# VM Configuration
VM_PREFIX=${VM_PREFIX}
VM_COUNT=${VM_COUNT}
@@ -71,6 +91,7 @@ VCPUS=${VCPUS}
DISK_SIZE1=${DISK_SIZE1}
DISK_SIZE2=${DISK_SIZE2}
MAC_PREFIX=${MAC_PREFIX}
VM_STATIC_IP=${VM_STATIC_IP}

# Wait Configuration
MAX_RETRIES=${MAX_RETRIES}
@@ -79,19 +100,26 @@ SLEEP_TIME=${SLEEP_TIME}
# Paths
DISK_PATH=${DISK_PATH}
ISO_FOLDER=${ISO_FOLDER}
ISO_TYPE=${ISO_TYPE}
STATIC_NET_FILE=${STATIC_NET_FILE}

# Storage
STORAGE_TYPE=${STORAGE_TYPE}
SKIP_DEPLOY_STORAGE=${SKIP_DEPLOY_STORAGE}
BFB_STORAGE_CLASS=${BFB_STORAGE_CLASS}
BFB_URL=${BFB_URL}

# NFS Configuration
NFS_SERVER_NODE_IP=${NFS_SERVER_NODE_IP}
NFS_PATH=${NFS_PATH}

# Kubeconfig
KUBECONFIG=${KUBECONFIG}
TARGETCLUSTER_API_SERVER_HOST=${TARGETCLUSTER_API_SERVER_HOST}

# Worker Node Provisioning
WORKER_COUNT=${WORKER_COUNT}
ENABLE_SHORT_WORKER_HOSTNAMES=${ENABLE_SHORT_WORKER_HOSTNAMES}

# Worker 1
WORKER_1_NAME=${WORKER_1_NAME}
@@ -131,3 +159,13 @@ SANITY_TESTS_PODS_WORKLOAD_FILE=${SANITY_TESTS_PODS_WORKLOAD_FILE}
SANITY_TESTS_WORKLOAD_NAMESPACE=${SANITY_TESTS_WORKLOAD_NAMESPACE}
SANITY_TESTS_PING_COUNT=${SANITY_TESTS_PING_COUNT}
SANITY_TESTS_PING_HBN_TO_HBN_PODS=${SANITY_TESTS_PING_HBN_TO_HBN_PODS}

# DPF HCP Provisioner Operator Configuration
DPF_HCP_PROVISIONER_OPERATOR_CHART_URL=${DPF_HCP_PROVISIONER_OPERATOR_CHART_URL}
DPF_HCP_PROVISIONER_OPERATOR_NAMESPACE=${DPF_HCP_PROVISIONER_OPERATOR_NAMESPACE}
DPF_HCP_PROVISIONER_OPERATOR_VERSION=${DPF_HCP_PROVISIONER_OPERATOR_VERSION}
DPF_HCP_PROVISIONER_OPERATOR_IMAGE_REPO=${DPF_HCP_PROVISIONER_OPERATOR_IMAGE_REPO}
DPF_HCP_PROVISIONER_OPERATOR_IMAGE_TAG=${DPF_HCP_PROVISIONER_OPERATOR_IMAGE_TAG}
DPFHCPPROVISIONER_PULL_SECRET_NAME=${DPFHCPPROVISIONER_PULL_SECRET_NAME}
DPFHCPPROVISIONER_SSH_SECRET_NAME=${DPFHCPPROVISIONER_SSH_SECRET_NAME}
ENABLE_BLUEFIELD_VALIDATION=${ENABLE_BLUEFIELD_VALIDATION}
1 change: 0 additions & 1 deletion docs/user-guide/configuration.md
@@ -73,7 +73,6 @@ OVN_CHART_VERSION=v25.7.1-f073927 # Matches DPF version

```bash
# DPU Interface Settings
DPU_INTERFACE=ens7f0np0 # Physical DPU interface
NUM_VFS=46 # Number of SR-IOV VFs
DPU_HOST_CIDR=10.6.130.0/24 # DPU host network
HBN_OVN_NETWORK=10.6.150.0/27 # HBN network range
1 change: 0 additions & 1 deletion docs/user-guide/deployment-scenarios.md
@@ -192,7 +192,6 @@ WORKER_2_BOOT_MAC=aa:bb:cc:dd:ee:02
WORKER_2_ROOT_DEVICE=/dev/sda

# DPU Configuration
DPU_INTERFACE=ens7f0np0 # Physical DPU interface
NUM_VFS=46 # Number of virtual functions
```

5 changes: 2 additions & 3 deletions docs/user-guide/troubleshooting.md
@@ -157,9 +157,8 @@ ip link show | grep ens7f0

# Common fixes:
# 1. Wait for SR-IOV operator to configure interfaces (10+ minutes)
# 2. Verify DPU_INTERFACE setting in .env
# 3. Check DPU hardware is properly installed
# 4. Verify NUM_VFS configuration
# 2. Check DPU hardware is properly installed
# 3. Verify NUM_VFS configuration
```

## Storage Issues
@@ -1,11 +1,11 @@
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
name: redhat-operators-v419
name: <CATALOG_SOURCE_NAME>
namespace: openshift-marketplace
spec:
displayName: Red Hat Operators v4.19
image: registry.redhat.io/redhat/redhat-operator-index:v4.19
displayName: <CATALOG_SOURCE_NAME>
image: <CATALOG_SOURCE_IMAGE>
priority: -100
publisher: Red Hat
sourceType: grpc
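The manifest now takes `<CATALOG_SOURCE_NAME>` and `<CATALOG_SOURCE_IMAGE>` placeholders, fed by the new ci/env.defaults entries. How the repo renders them is not shown in this diff; a plausible sed-based sketch, with an illustrative file path and values:

```shell
# Hypothetical template and values, for illustration only.
cat > /tmp/catalog-source.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: <CATALOG_SOURCE_NAME>
  namespace: openshift-marketplace
spec:
  displayName: <CATALOG_SOURCE_NAME>
  image: <CATALOG_SOURCE_IMAGE>
EOF
CATALOG_SOURCE_NAME=redhat-operators
CATALOG_SOURCE_IMAGE=registry.redhat.io/redhat/redhat-operator-index:v4.20
sed -e "s|<CATALOG_SOURCE_NAME>|${CATALOG_SOURCE_NAME}|g" \
    -e "s|<CATALOG_SOURCE_IMAGE>|${CATALOG_SOURCE_IMAGE}|g" \
    /tmp/catalog-source.yaml
```

Using `|` as the sed delimiter avoids escaping the slashes in the image reference.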