
draft: Add requirements tracking#92

Open
killianmuldoon wants to merge 12 commits into rh-ecosystem-edge:main from killianmuldoon:pr-requirements-tests

Conversation


@killianmuldoon killianmuldoon commented Mar 11, 2026

Summary by CodeRabbit

  • New Features

    • Added automated requirements sync workflow with GitHub Issues/Project integration and results reporting.
  • Documentation

    • Added a comprehensive requirements management guide describing schema, workflow, and usage.
  • Tests

    • Added many validation scripts for DPU detection/provisioning, installation health, control‑plane counts, metrics exposure, versions, mixed worker setups, per‑worker DPU checks, and end‑to‑end requirements testing.
  • Chores

    • Added a Makefile target to verify requirements and supporting automation scripts.

Signed-off-by: Killian Muldoon <kmuldoon@nvidia.com>
@openshift-ci openshift-ci Bot requested review from szigmon and wabouhamad March 11, 2026 10:22

openshift-ci Bot commented Mar 11, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: killianmuldoon
Once this PR has been reviewed and has the lgtm label, please assign szigmon for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci Bot commented Mar 11, 2026

Hi @killianmuldoon. Thanks for your PR.

I'm waiting for a rh-ecosystem-edge member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work.

Tip

We noticed you've done this a few times! Consider joining the org to skip this step and gain /lgtm and other bot rights. We recommend asking approvers on your previous PRs to sponsor you.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.


coderabbitai Bot commented Mar 11, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

Adds a requirements tracking system: a new requirements/requirements.json (58 requirements), CI workflow to sync requirements to GitHub Issues, scripts to sync/update/verify/report requirement tests, a Makefile target, documentation, and ~19 new test scripts implementing requirement checks.

Changes

  • GitHub Actions Workflow (.github/workflows/requirements-sync.yml): new "Sync Requirements to GitHub Issues" workflow triggered on changes to requirements/requirements.json and on manual dispatch; runs the sync script with minimal permissions and concurrency control.
  • Make Targets (Makefile): adds a verify-requirements target (documented in help) that invokes ./requirements/verify-requirements.sh, and adds it to .PHONY.
  • Documentation (requirements/README.md): new README describing the requirements tooling, JSON schema, local usage, CI workflows, and secret usage.
  • Requirements Data (requirements/requirements.json): new file defining 58 requirements (REQ-001..REQ-058) with ids, texts, descriptions, test mappings, labels, and subtasks.
  • Sync & Project Automation (requirements/sync-issues.sh): idempotent script to create/ensure labels, create or update issues and subtasks from the JSON, create and associate a GitHub Project, and map issue labels to project status; supports dry-run.
  • Test Result Reporting (requirements/update-results.sh): runs the test_implementation command for each requirement, captures outputs and exit codes, posts comments, and updates issue labels and project field statuses; supports filtering and dry-run.
  • Local Verification Tool (requirements/verify-requirements.sh): runs tests locally, emits per-requirement PASS/FAIL/SKIP, produces JSON or Markdown reports, and exits non-zero on failures.
  • Status Reporting (requirements/status.sh): correlates requirements.json with Issues and reports per-requirement status (table or CSV), with optional filters for status and area.
  • Tests, Cluster & DPU Checks (test/req-001-amd64-nodes.sh, test/req-002-install-success.sh, test/req-005-host-trusted-mode.sh, test/req-006-dpu-detection.sh, test/req-007-mixed-workers.sh, test/req-009-dpu-provisioned.sh, test/req-013-dpuservices-deployed.sh, test/req-019-dts-installed.sh, test/req-022-dts-metrics-endpoint.sh, test/req-023-dts-metrics-prometheus.sh, test/req-034-dpu-per-worker.sh, test/req-046-dpucluster-version.sh, test/req-047-dpucluster-managed.sh, test/req-049-control-plane-count.sh): adds ~19 new shell test scripts covering cluster topology, DPU detection/provisioning, services, metrics, versions, and control-plane validation; each exits non-zero on failure.
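The per-script contract described above (print a result, exit non-zero on failure) can be sketched roughly as follows; the helper name and the hard-coded counts are illustrative, not taken from the PR:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the test-script contract: each check prints
# PASS/FAIL and the script exits non-zero on the first failure.
set -euo pipefail

check_count() {
    local label="$1" expected="$2" actual="$3"
    if [[ "${actual}" -ne "${expected}" ]]; then
        echo "FAIL: ${label}: expected ${expected}, found ${actual}"
        return 1
    fi
    echo "PASS: ${label}: ${actual}"
}

# In the real scripts these counts would come from `oc get ...`;
# here they are hard-coded so the sketch is self-contained.
check_count "control-plane nodes" 3 3
```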

Sequence Diagram

sequenceDiagram
    participant GH as GitHub (Repo)
    participant WF as Workflow Runner
    participant Sync as sync-issues.sh
    participant Issues as GitHub Issues
    participant Project as GitHub Project

    GH->>WF: push to main / manual dispatch (requirements.json changed)
    WF->>Sync: run sync-issues.sh with GH_TOKEN & GITHUB_REPOSITORY
    Sync->>Issues: ensure status & per-REQ labels exist
    Sync->>Issues: list existing issues by REQ label
    Sync->>Issues: create or update issue per requirement (body, labels)
    Sync->>Issues: create sub-task issues as needed
    Sync->>Project: create or fetch repository project and "Requirement Status" field
    Sync->>Project: add/link issues as project items and set item status based on labels
    WF->>GH: workflow completes

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title 'draft: Add requirements tracking' is directly related to the main changeset, which introduces a requirements tracking system with a JSON schema, shell scripts for syncing and verifying requirements, a GitHub Actions workflow, and test scripts.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@@ -0,0 +1,678 @@
{
"repo": "killianmuldoon/openshift--dpf",
killianmuldoon (Contributor, Author) commented:

Need to update this if we get agreement.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 10

🧹 Nitpick comments (10)
test/req-007-mixed-workers.sh (2)

30-40: Negative count scenario masks data inconsistency.

If DPU_HOST_COUNT > WORKER_COUNT (e.g., DPUs reference nodes not labeled as workers, or control-plane nodes have DPUs), NON_DPU_COUNT becomes negative. The <= 0 check handles this silently but masks a potential configuration issue.

Consider adding explicit validation:

💡 Suggested improvement
 NON_DPU_COUNT=$((WORKER_COUNT - DPU_HOST_COUNT))

+if [[ "${NON_DPU_COUNT}" -lt 0 ]]; then
+    echo "WARNING: DPU host count (${DPU_HOST_COUNT}) exceeds worker count (${WORKER_COUNT})"
+    echo "         Some DPUs may be on non-worker nodes or node labeling is inconsistent"
+    NON_DPU_COUNT=0
+fi
+
 if [[ "${DPU_HOST_COUNT}" -eq 0 ]]; then
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-007-mixed-workers.sh` around lines 30 - 40, Check for an invalid
negative NON_DPU_COUNT by explicitly validating DPU_HOST_COUNT against
WORKER_COUNT before computing NON_DPU_COUNT: if DPU_HOST_COUNT is greater than
WORKER_COUNT (using the DPU_HOST_COUNT and WORKER_COUNT variables), emit a clear
FAIL message indicating a configuration mismatch (DPUs referenced on non-worker
nodes or control-plane nodes) and exit non-zero; otherwise compute NON_DPU_COUNT
and keep the existing checks for zero/non-zero to preserve current behavior.
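The guard in the suggested diff can be demonstrated standalone; the counts below are hard-coded for the example (three DPU hosts reported against only two labeled workers):

```shell
# Demo of the negative-count guard with illustrative hard-coded counts.
WORKER_COUNT=2
DPU_HOST_COUNT=3
NON_DPU_COUNT=$((WORKER_COUNT - DPU_HOST_COUNT))
if [[ "${NON_DPU_COUNT}" -lt 0 ]]; then
    echo "WARNING: DPU host count (${DPU_HOST_COUNT}) exceeds worker count (${WORKER_COUNT})"
    NON_DPU_COUNT=0
fi
echo "non-DPU worker count: ${NON_DPU_COUNT}"
```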

43-43: Custom-columns output may fail with master-labeled clusters.

The final oc get nodes command filters on !node-role.kubernetes.io/control-plane but doesn't fall back to the master label like the earlier node retrieval (lines 11-14). On older clusters using master labels, this could show no output or incorrect results.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-007-mixed-workers.sh` at line 43, The oc get nodes invocation that
currently uses the selector '!node-role.kubernetes.io/control-plane' should be
updated to also exclude the legacy master label so clusters labeled with master
are handled; locate the oc get nodes command containing the selector
'!node-role.kubernetes.io/control-plane' and change the label selector to
exclude both node-role.kubernetes.io/control-plane and
node-role.kubernetes.io/master (i.e., add the master label to the
negation/fallback) while preserving the custom-columns output format.
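A hedged sketch of the combined selector the prompt describes; since no cluster is available here, the oc invocation is only printed, not executed:

```shell
# Label selector excluding both the modern control-plane label and the
# legacy master label; kubectl/oc accept comma-separated negated labels.
SELECTOR='!node-role.kubernetes.io/control-plane,!node-role.kubernetes.io/master'
echo "oc get nodes -l '${SELECTOR}' -o custom-columns=NAME:.metadata.name"
```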
test/req-002-install-success.sh (2)

12-13: Hardcoded version expectations require manual updates.

The expected versions are hardcoded, meaning this test must be updated for each new DPF/OCP release. Consider parameterizing via environment variables with these as defaults:

♻️ Suggested improvement
-EXPECTED_DPF_VERSION="26.4"
-EXPECTED_OCP_VERSION="4.22"
+EXPECTED_DPF_VERSION="${EXPECTED_DPF_VERSION:-26.4}"
+EXPECTED_OCP_VERSION="${EXPECTED_OCP_VERSION:-4.22}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-002-install-success.sh` around lines 12 - 13, The test currently
hardcodes EXPECTED_DPF_VERSION and EXPECTED_OCP_VERSION which forces updates on
each release; change req-002-install-success.sh to read these values from
environment variables (e.g., DPF_EXPECTED_VERSION and OCP_EXPECTED_VERSION) with
the current hardcoded strings "26.4" and "4.22" used as defaults when the env
vars are not set, and update any references to EXPECTED_DPF_VERSION /
EXPECTED_OCP_VERSION to use the new variables so CI or callers can override
versions without editing the script.

63-63: jq dependency is not validated.

The script uses jq for formatting output on failure (line 63), but doesn't verify jq is available. While the || true prevents script failure, users may see unhelpful output if jq is missing.

💡 Suggested improvement
-    oc get dpfoperatorconfigs.operator.dpu.nvidia.com -A -o jsonpath='{.items[0].status.conditions}' 2>/dev/null | jq . 2>/dev/null || true
+    CONDITIONS=$(oc get dpfoperatorconfigs.operator.dpu.nvidia.com -A -o jsonpath='{.items[0].status.conditions}' 2>/dev/null || true)
+    if command -v jq &>/dev/null; then
+        echo "${CONDITIONS}" | jq .
+    else
+        echo "${CONDITIONS}"
+    fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-002-install-success.sh` at line 63, The script calls jq in the oc
command pipeline (oc get ... | jq .) but doesn't verify jq exists; add a check
using command -v or which for "jq" at the top of test/req-002-install-success.sh
(or just before the oc get line) and if missing either print a clear warning
instructing the user to install jq or fall back to outputting raw JSON (e.g.,
skip piping to jq), ensuring the oc get pipeline handles both cases so users see
useful output when jq is not installed.
requirements/sync-issues.sh (2)

233-235: Sub-issue linking uses beta GraphQL feature.

The sub_issues GraphQL feature requires the GraphQL-Features: sub_issues header and may not be available on all GitHub instances (e.g., GitHub Enterprise Server). The warning message on failure is appropriate, but consider documenting this limitation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/sync-issues.sh` around lines 233 - 235, The sub-issue linking
line uses the experimental GraphQL feature header "GraphQL-Features: sub_issues"
and the mutation addSubIssue (using PARENT_ID and CHILD_ID), which may fail on
GHES; update the script to document this limitation and make the failure message
explicit: add a brief comment above the gh api graphql call stating that
sub_issues is a beta GraphQL feature and may not be supported on GitHub
Enterprise Server, and enhance the existing echo "WARNING: failed to link
sub-issue" to include that the operation uses the sub_issues GraphQL feature and
may not be available on all GitHub instances (so operators know the cause).

85-86: Issue list limit of 500 may be insufficient.

The --limit 500 cap could miss existing requirement issues in repositories with extensive issue history, potentially causing duplicate issue creation.

💡 Consider paginating or increasing the limit
-ISSUE_LIST=$(gh issue list ${REPO_FLAG} --state all --limit 500 --json number,labels,title \
+ISSUE_LIST=$(gh issue list ${REPO_FLAG} --state all --limit 1000 --json number,labels,title \
     --jq '.[] | "\(.number)|\([.labels[].name] | join(","))|\(.title)"' 2>/dev/null || true)

Alternatively, filter by REQ- label prefix server-side to reduce result count.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/sync-issues.sh` around lines 85 - 86, The hardcoded `--limit
500` in the `gh issue list` call assigned to ISSUE_LIST can miss issues; remove
the fixed limit and either (A) implement pagination by repeatedly invoking `gh
issue list ${REPO_FLAG} --state all --json number,labels,title --jq ...` until
no more results, or (B) increase the limit (e.g., `--limit 1000`) and add
server-side filtering via `--label` if your requirements use a consistent label
(use `REPO_FLAG` and the same jq expression), so update the invocation of `gh
issue list` in the assignment to ISSUE_LIST accordingly.
.github/workflows/requirements-sync.yml (1)

25-29: Consider adding error visibility for project sync failures.

The fallback ${{ secrets.PROJECT_TOKEN || secrets.GITHUB_TOKEN }} will silently use GITHUB_TOKEN if PROJECT_TOKEN isn't configured, causing project operations to fail with only warnings in logs. Consider adding explicit feedback:

💡 Suggested improvement
       - name: Sync issues and project
         env:
           GH_TOKEN: ${{ secrets.PROJECT_TOKEN || secrets.GITHUB_TOKEN }}
           GITHUB_REPOSITORY: ${{ github.repository }}
+          HAS_PROJECT_TOKEN: ${{ secrets.PROJECT_TOKEN != '' }}
         run: ./requirements/sync-issues.sh

Then in sync-issues.sh, check HAS_PROJECT_TOKEN and warn early if project sync won't work.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/requirements-sync.yml around lines 25 - 29, Set an
explicit HAS_PROJECT_TOKEN indicator and surface a clear warning when only
GITHUB_TOKEN is present: update the "Sync issues and project" step to export an
env var (e.g., GH_TOKEN set from the secrets expression) and a boolean
HAS_PROJECT_TOKEN (using the GitHub Actions expression to detect if
secrets.PROJECT_TOKEN is non-empty), then update sync-issues.sh to read
HAS_PROJECT_TOKEN and, if false, emit a clear warning/error that project sync
will not work and skip or fallback the project-related operations; reference the
workflow step name "Sync issues and project", the env var GH_TOKEN, the new
HAS_PROJECT_TOKEN flag, and the sync-issues.sh script when making the change.
test/req-049-control-plane-count.sh (1)

28-33: Ready status detection relies on fragile text matching.

The pattern " Ready " depends on specific column spacing in oc get nodes output. If the output format changes or nodes have additional status conditions, this could produce false negatives.

Consider using jsonpath or a more robust approach:

♻️ Suggested improvement using jsonpath
-READY_COUNT=$(echo "${CP_NODES}" | grep -c " Ready " || true)
-if [[ "${READY_COUNT}" -ne 3 ]]; then
-    echo "FAIL: expected 3 Ready control plane nodes, found ${READY_COUNT}"
-    echo "${CP_NODES}"
-    exit 1
-fi
+READY_COUNT=$(oc get nodes -l node-role.kubernetes.io/control-plane= \
+    -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' 2>/dev/null | grep -c "True" || true)
+if [[ -z "${READY_COUNT}" || "${READY_COUNT}" -eq 0 ]]; then
+    # Fallback to master label
+    READY_COUNT=$(oc get nodes -l node-role.kubernetes.io/master= \
+        -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' 2>/dev/null | grep -c "True" || true)
+fi
+if [[ "${READY_COUNT}" -ne 3 ]]; then
+    echo "FAIL: expected 3 Ready control plane nodes, found ${READY_COUNT}"
+    oc get nodes -l node-role.kubernetes.io/control-plane= --no-headers 2>/dev/null || \
+        oc get nodes -l node-role.kubernetes.io/master= --no-headers
+    exit 1
+fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-049-control-plane-count.sh` around lines 28 - 33, The Ready
detection using CP_NODES and grep counting the literal " Ready " is fragile;
update the READY_COUNT calculation to query the API output (e.g., use oc get
nodes -o jsonpath=... or pipe to jq) to count nodes whose status.conditions has
type=="Ready" and status=="True" instead of grepping CP_NODES; change the
READY_COUNT assignment (and any use of CP_NODES) to parse JSON from oc get nodes
to produce an integer count of Ready control-plane nodes and keep the rest of
the if-check intact.
requirements/status.sh (1)

23-30: Consider validating filter values.

The --filter option accepts any value without validation. If an invalid status is passed, the script silently shows no results. Adding a quick validation could improve UX.

♻️ Proposed validation
 while [[ $# -gt 0 ]]; do
     case "${1}" in
-        --filter) FILTER_STATUS="${2}"; shift 2 ;;
+        --filter)
+            FILTER_STATUS="${2}"
+            if [[ ! "${FILTER_STATUS}" =~ ^(passing|failing|untested|no-issue)$ ]]; then
+                echo "ERROR: Invalid filter '${FILTER_STATUS}'. Use: passing, failing, untested, no-issue" >&2
+                exit 1
+            fi
+            shift 2 ;;
         --area)   FILTER_AREA="${2}"; shift 2 ;;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/status.sh` around lines 23 - 30, The --filter option currently
assigns any value to FILTER_STATUS without validation, causing silent no-results
for invalid statuses; add validation after parsing (e.g., where FILTER_STATUS is
set in the while/case block) to check the value against the allowed set of
statuses and, if invalid, print a clear error and exit non-zero. Specifically,
update the argument parsing that sets FILTER_STATUS to validate the provided
value (FILTER_STATUS) against your canonical list (e.g., "open", "closed",
"in-progress" or whatever your app uses) and handle invalid inputs by echoing
"Invalid --filter value: <value>" to stderr and exiting with status 1 so users
get immediate feedback.
requirements/verify-requirements.sh (1)

146-157: JSON construction via string concatenation is fragile.

The sed escaping on lines 150-151 handles double quotes but misses backslashes, newlines, and other special characters that would produce invalid JSON. Consider using jq for safe JSON construction.

♻️ Safer alternative using jq
 if [[ "${OUTPUT_JSON}" == "true" ]]; then
-    JSON_RESULTS="["
-    for idx in "${!RESULT_IDS[@]}"; do
-        [[ ${idx} -gt 0 ]] && JSON_RESULTS+=","
-        # Escape strings for JSON
-        DETAIL_ESC=$(echo "${RESULT_DETAILS[${idx}]}" | sed 's/"/\\"/g' | tr -d '\n')
-        TITLE_ESC=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/"/\\"/g')
-        JSON_RESULTS+=$(cat <<ENTRY
-{"id":"${RESULT_IDS[${idx}]}","requirement":"${TITLE_ESC}","status":"${RESULT_STATUSES[${idx}]}","detail":"${DETAIL_ESC}","labels":"${RESULT_LABELS[${idx}]}"}
-ENTRY
-)
-    done
-    JSON_RESULTS+="]"
+    JSON_RESULTS="[]"
+    for idx in "${!RESULT_IDS[@]}"; do
+        JSON_RESULTS=$(echo "${JSON_RESULTS}" | jq \
+            --arg id "${RESULT_IDS[${idx}]}" \
+            --arg req "${RESULT_TITLES[${idx}]}" \
+            --arg status "${RESULT_STATUSES[${idx}]}" \
+            --arg detail "${RESULT_DETAILS[${idx}]}" \
+            --arg labels "${RESULT_LABELS[${idx}]}" \
+            '. + [{"id": $id, "requirement": $req, "status": $status, "detail": $detail, "labels": $labels}]')
+    done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/verify-requirements.sh` around lines 146 - 157, The current JSON
building via string concatenation (variables JSON_RESULTS, DETAIL_ESC, TITLE_ESC
and the loop over RESULT_IDS) fails to safely escape backslashes, newlines and
other characters; replace this manual escaping with safe construction using jq:
for each iteration use jq -n --arg id "${RESULT_IDS[idx]}" --arg requirement
"${RESULT_TITLES[idx]}" --arg status "${RESULT_STATUSES[idx]}" --arg detail
"${RESULT_DETAILS[idx]}" --arg labels "${RESULT_LABELS[idx]}"
'{id:$id,requirement:$requirement,status:$status,detail:$detail,labels:$labels}'
and either stream objects into a temporary file and finally produce an array
with jq -s '.' or pipe them into jq -s to produce JSON_RESULTS, removing the
fragile DETAIL_ESC/TITLE_ESC string manipulation and direct concatenation.
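A minimal, runnable illustration of the jq-based approach (assumes jq is installed; the field name is illustrative): quotes, backslashes, and newlines in the input are all escaped correctly, which the sed-based escaping misses.

```shell
# Build a JSON array safely with jq instead of string concatenation.
RESULT='[]'
for detail in 'line with "quotes"' 'back\slash and
newline'; do
    RESULT=$(jq --arg d "${detail}" '. + [{detail: $d}]' <<<"${RESULT}")
done
echo "entries: $(jq 'length' <<<"${RESULT}")"
```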
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/requirements-sync.yml:
- Around line 10-12: The workflow is missing the projects: write permission
required by the gh project commands used in sync-issues.sh (item-add,
field-create, item-edit); update the permissions block in the workflow to
include "projects: write" so GITHUB_TOKEN can perform project operations, and/or
add a brief comment in the workflow/README clarifying that if using
PROJECT_TOKEN it must be a PAT with repo and project scopes as a fallback for
full functionality.

In `@requirements/README.md`:
- Line 9: The README line about update-results.sh is incorrect: the script does
not consume piped go test -json input but instead executes the
test_implementation commands defined in requirements.json and posts pass/fail
comments; update the sentence to state that update-results.sh reads
requirements.json, runs each test_implementation command directly, captures
their exit/results, and posts comments (remove the claim about taking go test
-json from stdin).
- Around line 21-23: Update the schema docs to stop claiming `issue_number` is
populated by `sync-issues.sh`: either remove the `issue_number` field entirely
from the table, or mark it clearly as optional/unused and note that
`sync-issues.sh` identifies issues via REQ-XXX labels (see `labels`) rather than
storing numbers; ensure the table row referencing `issue_number` and the
sentence about `sync-issues.sh` are edited accordingly so docs match actual
script behavior.
- Around line 52-57: The README's "Adding a new requirement" section is
incorrect: sync-issues.sh does not write the issue number back to
requirements.json; instead it uses labels to locate issues. Update the text in
requirements/README.md to reflect the actual workflow: instruct contributors to
add a requirement with "issue_number": null in requirements.json, explain that
sync-issues.sh (or the GitHub Action) will create or find issues by label but
will not modify requirements.json, and add a note that maintainers must manually
update the "issue_number" in requirements.json (or run a dedicated script) if
automatic writing is needed; reference requirements.json and sync-issues.sh by
name so reviewers can locate the relevant files.
- Around line 47-50: Replace the incorrect example that pipes `go test` into the
script with usage that invokes the update-results.sh script directly; update the
README example to show calling ./requirements/update-results.sh to run
requirement tests and post results, demonstrate calling it with specific
requirement IDs (e.g., REQ-001 REQ-002), and show the --dry-run option; ensure
the reference to the script name update-results.sh is used so readers run the
script itself rather than piping go test output into it.

In `@requirements/requirements.json`:
- Line 2: Update the repo field value currently set as
"killianmuldoon/openshift--dpf" to the correct repository
"rh-ecosystem-edge/openshift-dpf" so scripts that fallback to the repo value
(used by sync-issues.sh, update-results.sh, status.sh) will point to the right
owner and correct single-dash repo name.

In `@requirements/status.sh`:
- Around line 51-52: The current gh call assigning ISSUE_LIST uses a fixed
--limit 500 which can truncate results; modify the logic around ISSUE_LIST (the
gh issue list invocation that uses REPO_FLAG) to paginate through all pages or
detect when the 500 cap is hit and loop until no more results (or emit a clear
warning) so every issue is fetched; ensure the final ISSUE_LIST variable
concatenates all pages' outputs (or sets a flag/warning) before downstream
processing.

In `@requirements/update-results.sh`:
- Around line 87-88: The variable REQ_TEXT is read from REQUIREMENTS_FILE but
never used; either remove the assignment to REQ_TEXT to avoid dead code, or
incorporate REQ_TEXT into the comment body where TEST_IMPL is used (e.g.,
include the requirement description alongside the test implementation) so the
fetched requirement text is actually displayed; update the block that currently
sets REQ_TEXT and TEST_IMPL to either drop REQ_TEXT or reference REQ_TEXT when
constructing the comment message.

In `@requirements/verify-requirements.sh`:
- Around line 183-190: The table output can be broken if any requirement text
contains the pipe character; update the for-loop that iterates over RESULT_IDS
and the echo line so that before printing you sanitize RESULT_TITLES and
RESULT_LABELS by replacing or escaping '|' characters (e.g., replace '|' with
'\|' or '&#124;') using shell string replacement on the variables used in the
echo; apply this transformation to the specific variables referenced in the loop
(RESULT_TITLES[${idx}] and RESULT_LABELS[${idx}]) so the echoed table row always
has safe content.
- Around line 33-41: The argument parsing loop incorrectly uses a for loop and
unconditional shifting, causing values for --output to be skipped; replace it
with a proper while [ $# -gt 0 ] loop that inspects $1 and, for the --output
case, reads the next positional parameter into OUTPUT_FILE and shifts by 2
(i.e., handle "--output) set OUTPUT_FILE=\"$2\"; shift 2;;" and "--output=*) set
OUTPUT_FILE from $1;;"), keep the --json and REQ-* branches to set OUTPUT_JSON
and append to FILTER_IDS, and remove the unconditional extra shift at the end so
arguments are consumed only as intended.

---

Nitpick comments:
In @.github/workflows/requirements-sync.yml:
- Around line 25-29: Set an explicit HAS_PROJECT_TOKEN indicator and surface a
clear warning when only GITHUB_TOKEN is present: update the "Sync issues and
project" step to export an env var (e.g., GH_TOKEN set from the secrets
expression) and a boolean HAS_PROJECT_TOKEN (using the GitHub Actions expression
to detect if secrets.PROJECT_TOKEN is non-empty), then update sync-issues.sh to
read HAS_PROJECT_TOKEN and, if false, emit a clear warning/error that project
sync will not work and skip or fallback the project-related operations;
reference the workflow step name "Sync issues and project", the env var
GH_TOKEN, the new HAS_PROJECT_TOKEN flag, and the sync-issues.sh script when
making the change.

In `@requirements/status.sh`:
- Around line 23-30: The --filter option currently assigns any value to
FILTER_STATUS without validation, causing silent no-results for invalid
statuses; add validation after parsing (e.g., where FILTER_STATUS is set in the
while/case block) to check the value against the allowed set of statuses and, if
invalid, print a clear error and exit non-zero. Specifically, update the
argument parsing that sets FILTER_STATUS to validate the provided value
(FILTER_STATUS) against your canonical list (e.g., "open", "closed",
"in-progress" or whatever your app uses) and handle invalid inputs by echoing
"Invalid --filter value: <value>" to stderr and exiting with status 1 so users
get immediate feedback.

In `@requirements/sync-issues.sh`:
- Around line 233-235: The sub-issue linking line uses the experimental GraphQL
feature header "GraphQL-Features: sub_issues" and the mutation addSubIssue
(using PARENT_ID and CHILD_ID), which may fail on GHES; update the script to
document this limitation and make the failure message explicit: add a brief
comment above the gh api graphql call stating that sub_issues is a beta GraphQL
feature and may not be supported on GitHub Enterprise Server, and enhance the
existing echo "WARNING: failed to link sub-issue" to include that the operation
uses the sub_issues GraphQL feature and may not be available on all GitHub
instances (so operators know the cause).
- Around line 85-86: The hardcoded `--limit 500` in the `gh issue list` call
assigned to ISSUE_LIST can miss issues; remove the fixed limit and either (A)
implement pagination by repeatedly invoking `gh issue list ${REPO_FLAG} --state
all --json number,labels,title --jq ...` until no more results, or (B) increase
the limit (e.g., `--limit 1000`) and add server-side filtering via `--label` if
your requirements use a consistent label (use `REPO_FLAG` and the same jq
expression), so update the invocation of `gh issue list` in the assignment to
ISSUE_LIST accordingly.
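
One way to realize option (A) without hand-rolling a page loop is `gh api --paginate`, which follows the REST Link headers to the last page. A sketch with an illustrative endpoint builder (OWNER/REPO are placeholders); note the REST issues endpoint also returns pull requests, which the jq filter drops:

```shell
#!/usr/bin/env bash
# Sketch of pagination via gh api --paginate. build_issue_endpoint is a
# hypothetical helper; OWNER and REPO are placeholders for the target repo.
build_issue_endpoint() {
    local owner="$1" repo="$2"
    echo "repos/${owner}/${repo}/issues?state=all&per_page=100"
}

# The REST issues endpoint includes PRs, so select(.pull_request == null)
# keeps only real issues; the output format mirrors sync-issues.sh.
# ISSUE_LIST=$(gh api --paginate "$(build_issue_endpoint "${OWNER}" "${REPO}")" \
#     --jq '.[] | select(.pull_request == null)
#           | "\(.number)|\([.labels[].name] | join(","))|\(.title)"')
```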

In `@requirements/verify-requirements.sh`:
- Around line 146-157: The current JSON building via string concatenation
(variables JSON_RESULTS, DETAIL_ESC, TITLE_ESC and the loop over RESULT_IDS)
fails to safely escape backslashes, newlines and other characters; replace this
manual escaping with safe construction using jq: for each iteration use jq -n
--arg id "${RESULT_IDS[idx]}" --arg requirement "${RESULT_TITLES[idx]}" --arg
status "${RESULT_STATUSES[idx]}" --arg detail "${RESULT_DETAILS[idx]}" --arg
labels "${RESULT_LABELS[idx]}"
'{id:$id,requirement:$requirement,status:$status,detail:$detail,labels:$labels}'
and either stream objects into a temporary file and finally produce an array
with jq -s '.' or pipe them into jq -s to produce JSON_RESULTS, removing the
fragile DETAIL_ESC/TITLE_ESC string manipulation and direct concatenation.
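
A sketch of that jq-based assembly: `jq -n --arg` escapes quotes, backslashes, and newlines correctly, and `jq -s` wraps the object stream in an array. The sample data is illustrative; the array names mirror the script's:

```shell
#!/usr/bin/env bash
# Build JSON_RESULTS safely with jq instead of manual sed/tr escaping.
RESULT_IDS=("REQ-001")
RESULT_TITLES=('Title with "quotes"')
RESULT_STATUSES=("PASS")
RESULT_DETAILS=('path\to\file')   # backslashes that broke the old escaping
RESULT_LABELS=("dpu,install")

JSON_RESULTS=$(for idx in "${!RESULT_IDS[@]}"; do
    jq -n \
        --arg id "${RESULT_IDS[idx]}" \
        --arg requirement "${RESULT_TITLES[idx]}" \
        --arg status "${RESULT_STATUSES[idx]}" \
        --arg detail "${RESULT_DETAILS[idx]}" \
        --arg labels "${RESULT_LABELS[idx]}" \
        '{id: $id, requirement: $requirement, status: $status, detail: $detail, labels: $labels}'
done | jq -s '.')
echo "${JSON_RESULTS}"
```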

In `@test/req-002-install-success.sh`:
- Around line 12-13: The test currently hardcodes EXPECTED_DPF_VERSION and
EXPECTED_OCP_VERSION which forces updates on each release; change
req-002-install-success.sh to read these values from environment variables
(e.g., DPF_EXPECTED_VERSION and OCP_EXPECTED_VERSION) with the current hardcoded
strings "26.4" and "4.22" used as defaults when the env vars are not set, and
update any references to EXPECTED_DPF_VERSION / EXPECTED_OCP_VERSION to use the
new variables so CI or callers can override versions without editing the script.
- Line 63: The script calls jq in the oc command pipeline (oc get ... | jq .)
but doesn't verify jq exists; add a check using command -v or which for "jq" at
the top of test/req-002-install-success.sh (or just before the oc get line) and
if missing either print a clear warning instructing the user to install jq or
fall back to outputting raw JSON (e.g., skip piping to jq), ensuring the oc get
pipeline handles both cases so users see useful output when jq is not installed.
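
A sketch covering both suggestions, using the env var names proposed above with today's hardcoded strings as defaults (the resource name in the final comment is hypothetical):

```shell
#!/usr/bin/env bash
# Environment-driven version defaults plus a jq presence check.
EXPECTED_DPF_VERSION="${DPF_EXPECTED_VERSION:-26.4}"
EXPECTED_OCP_VERSION="${OCP_EXPECTED_VERSION:-4.22}"

# Pretty-print with jq when available, otherwise pass raw JSON through.
if command -v jq >/dev/null 2>&1; then
    render_json() { jq .; }
else
    echo "WARNING: jq not found; printing raw JSON" >&2
    render_json() { cat; }
fi

# Later in the script (hypothetical resource name):
#   oc get dpfoperatorconfig -n dpf-operator-system -o json | render_json
```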

In `@test/req-007-mixed-workers.sh`:
- Around line 30-40: Check for an invalid negative NON_DPU_COUNT by explicitly
validating DPU_HOST_COUNT against WORKER_COUNT before computing NON_DPU_COUNT:
if DPU_HOST_COUNT is greater than WORKER_COUNT (using the DPU_HOST_COUNT and
WORKER_COUNT variables), emit a clear FAIL message indicating a configuration
mismatch (DPUs referenced on non-worker nodes or control-plane nodes) and exit
non-zero; otherwise compute NON_DPU_COUNT and keep the existing checks for
zero/non-zero to preserve current behavior.
- Line 43: The oc get nodes invocation that currently uses the selector
'!node-role.kubernetes.io/control-plane' should be updated to also exclude the
legacy master label so clusters labeled with master are handled; locate the oc
get nodes command containing the selector
'!node-role.kubernetes.io/control-plane' and change the label selector to
exclude both node-role.kubernetes.io/control-plane and
node-role.kubernetes.io/master (i.e., add the master label to the
negation/fallback) while preserving the custom-columns output format.
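
A sketch of the count guard (counts are illustrative), with the widened label selector shown as a comment:

```shell
#!/usr/bin/env bash
# Fail fast when DPU_HOST_COUNT exceeds WORKER_COUNT rather than computing
# a negative NON_DPU_COUNT.
check_counts() {
    local dpu="$1" workers="$2"
    if [ "${dpu}" -gt "${workers}" ]; then
        echo "FAIL: DPU host count (${dpu}) exceeds worker count (${workers});" \
            "DPUs may be referenced on control-plane or non-worker nodes" >&2
        return 1
    fi
    echo $(( workers - dpu ))
}

# Worker selection that also excludes the legacy master label:
#   oc get nodes -l '!node-role.kubernetes.io/control-plane,!node-role.kubernetes.io/master' \
#       -o custom-columns=NAME:.metadata.name --no-headers
```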

In `@test/req-049-control-plane-count.sh`:
- Around line 28-33: The Ready detection using CP_NODES and grep counting the
literal " Ready " is fragile; update the READY_COUNT calculation to query the
API output (e.g., use oc get nodes -o jsonpath=... or pipe to jq) to count nodes
whose status.conditions has type=="Ready" and status=="True" instead of grepping
CP_NODES; change the READY_COUNT assignment (and any use of CP_NODES) to parse
JSON from oc get nodes to produce an integer count of Ready control-plane nodes
and keep the rest of the if-check intact.
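
A sketch of the JSON-based Ready count; `ready_count` only needs node-list JSON on stdin, so the `oc` invocation is shown as a comment:

```shell
#!/usr/bin/env bash
# Count nodes whose conditions include type=="Ready" with status=="True",
# instead of grepping the " Ready " column of tabular output.
ready_count() {
    jq '[.items[]
         | select(any(.status.conditions[]; .type == "Ready" and .status == "True"))]
        | length'
}

# READY_COUNT=$(oc get nodes -l node-role.kubernetes.io/control-plane -o json | ready_count)
```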

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0bd3d834-afdb-4634-a098-62887092aaec

📥 Commits

Reviewing files that changed from the base of the PR and between e8baecd and 75f9490.

📒 Files selected for processing (13)
  • .github/workflows/requirements-sync.yml
  • Makefile
  • requirements/README.md
  • requirements/requirements.json
  • requirements/status.sh
  • requirements/sync-issues.sh
  • requirements/update-results.sh
  • requirements/verify-requirements.sh
  • test/req-001-amd64-nodes.sh
  • test/req-002-install-success.sh
  • test/req-006-dpu-detection.sh
  • test/req-007-mixed-workers.sh
  • test/req-049-control-plane-count.sh

Comment on lines +10 to +12
permissions:
contents: read
issues: write

⚠️ Potential issue | 🟠 Major

Missing projects: write permission for GitHub Projects integration.

The sync-issues.sh script uses gh project commands (item-add, field-create, item-edit) which require the projects: write permission. Without it, project operations will silently fail when falling back to GITHUB_TOKEN.

Either add the permission or document that PROJECT_TOKEN secret must be configured with appropriate scopes:

🔧 Option 1: Add permission (if using fine-grained PAT or GitHub App)
 permissions:
   contents: read
   issues: write
+  projects: write
📝 Option 2: Document PROJECT_TOKEN requirement

Add a comment in the workflow or README explaining that PROJECT_TOKEN secret must be a PAT with repo and project scopes for full functionality, otherwise project sync will be skipped with warnings.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
permissions:
contents: read
issues: write
permissions:
contents: read
issues: write
projects: write
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/requirements-sync.yml around lines 10 - 12, The workflow
is missing the projects: write permission required by the gh project commands
used in sync-issues.sh (item-add, field-create, item-edit); update the
permissions block in the workflow to include "projects: write" so GITHUB_TOKEN
can perform project operations, and/or add a brief comment in the
workflow/README clarifying that if using PROJECT_TOKEN it must be a PAT with
repo and project scopes as a fallback for full functionality.

Comment thread requirements/README.md

1. **`requirements.json`** is the source of truth. Each requirement declares its description, test details, labels, and (once synced) the corresponding GitHub issue number.
2. **`sync-issues.sh`** reads the JSON and creates or updates GitHub Issues and a GitHub Project.
3. **`update-results.sh`** takes `go test -json` output and posts pass/fail results as comments on the corresponding issues.

⚠️ Potential issue | 🟡 Minor

Documentation doesn't match script behavior.

Line 9 states update-results.sh "takes go test -json output," but the script actually runs test_implementation commands directly from requirements.json. It does not consume piped input.

📝 Suggested fix
-3. **`update-results.sh`** takes `go test -json` output and posts pass/fail results as comments on the corresponding issues.
+3. **`update-results.sh`** runs each requirement's `test_implementation` command and posts pass/fail results as comments on the corresponding issues.
📝 Committable suggestion


Suggested change
3. **`update-results.sh`** takes `go test -json` output and posts pass/fail results as comments on the corresponding issues.
3. **`update-results.sh`** runs each requirement's `test_implementation` command and posts pass/fail results as comments on the corresponding issues.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/README.md` at line 9, The README line about update-results.sh is
incorrect: the script does not consume piped go test -json input but instead
executes the test_implementation commands defined in requirements.json and posts
pass/fail comments; update the sentence to state that update-results.sh reads
requirements.json, runs each test_implementation command directly, captures
their exit/results, and posts comments (remove the claim about taking go test
-json from stdin).

Comment thread requirements/README.md
Comment on lines +21 to +23
| `test_implementation` | string | Go test reference: `package/path::TestFunctionName`. |
| `labels` | string[] | GitHub labels to apply (created automatically if missing). |
| `issue_number` | int\|null | Populated by `sync-issues.sh`. Do not edit manually. |

⚠️ Potential issue | 🟡 Minor

Schema documentation references a field the scripts don't use.

The issue_number field is documented as "Populated by sync-issues.sh", but the scripts use REQ-XXX labels to find issues, not stored issue numbers. This field appears to be vestigial or planned for future use.

📝 Suggested fix

Either remove the field from the documentation:

 | `test_implementation` | string  | Go test reference: `package/path::TestFunctionName`.         |
 | `labels`           | string[]   | GitHub labels to apply (created automatically if missing).   |
-| `issue_number`     | int\|null  | Populated by `sync-issues.sh`. Do not edit manually.        |

Or clarify it's optional/unused:

-| `issue_number`     | int\|null  | Populated by `sync-issues.sh`. Do not edit manually.        |
+| `issue_number`     | int\|null  | (Optional) Stored issue number. Currently unused; scripts find issues by REQ-XXX label. |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/README.md` around lines 21 - 23, Update the schema docs to stop
claiming `issue_number` is populated by `sync-issues.sh`: either remove the
`issue_number` field entirely from the table, or mark it clearly as
optional/unused and note that `sync-issues.sh` identifies issues via REQ-XXX
labels (see `labels`) rather than storing numbers; ensure the table row
referencing `issue_number` and the sentence about `sync-issues.sh` are edited
accordingly so docs match actual script behavior.

Comment thread requirements/README.md
Comment on lines +47 to +50
```bash
# Run tests and pipe to the results script
go test ./... -json 2>&1 | ./requirements/update-results.sh
```

⚠️ Potential issue | 🟡 Minor

Example usage is incorrect.

The example shows piping go test output to update-results.sh, but the script doesn't read from stdin—it executes test_implementation commands directly.

📝 Suggested fix
 ### Post test results

 ```bash
-# Run tests and pipe to the results script
-go test ./... -json 2>&1 | ./requirements/update-results.sh
+# Run requirement tests and post results to issues
+./requirements/update-results.sh
+
+# Or test specific requirements
+./requirements/update-results.sh REQ-001 REQ-002
+
+# Dry run (no GitHub updates)
+./requirements/update-results.sh --dry-run
 ```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In `@requirements/README.md` around lines 47 - 50, Replace the incorrect example
that pipes go test into the script with usage that invokes the
update-results.sh script directly; update the README example to show calling
./requirements/update-results.sh to run requirement tests and post results,
demonstrate calling it with specific requirement IDs (e.g., REQ-001 REQ-002),
and show the --dry-run option; ensure the reference to the script name
update-results.sh is used so readers run the script itself rather than piping go
test output into it.

Comment thread requirements/README.md
Comment on lines +52 to +57
### Adding a new requirement

1. Add an entry to `requirements.json` with a new `id` and `"issue_number": null`.
2. Run `sync-issues.sh` (or push to main — the GitHub Action will handle it).
3. The script creates the issue and writes the number back to the JSON.


⚠️ Potential issue | 🟡 Minor

Adding a new requirement section is inconsistent with actual workflow.

The documentation says sync-issues.sh "writes the number back to the JSON," but the script uses labels to find issues and doesn't modify requirements.json.

📝 Suggested fix
 ### Adding a new requirement

-1. Add an entry to `requirements.json` with a new `id` and `"issue_number": null`.
+1. Add an entry to `requirements.json` with a new `id`, requirement text, labels, and optionally a `test_implementation`.
 2. Run `sync-issues.sh` (or push to main — the GitHub Action will handle it).
-3. The script creates the issue and writes the number back to the JSON.
+3. The script creates the issue with a `REQ-XXX` label matching the `id`. No JSON modifications are needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/README.md` around lines 52 - 57, The README's "Adding a new
requirement" section is incorrect: sync-issues.sh does not write the issue
number back to requirements.json; instead it uses labels to locate issues.
Update the text in requirements/README.md to reflect the actual workflow:
instruct contributors to add a requirement with "issue_number": null in
requirements.json, explain that sync-issues.sh (or the GitHub Action) will
create or find issues by label but will not modify requirements.json, and add a
note that maintainers must manually update the "issue_number" in
requirements.json (or run a dedicated script) if automatic writing is needed;
reference requirements.json and sync-issues.sh by name so reviewers can locate
the relevant files.

Comment thread requirements/requirements.json
@@ -0,0 +1,678 @@
{
"repo": "killianmuldoon/openshift--dpf",

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the correct repository name
gh repo view killianmuldoon/openshift-dpf --json name 2>/dev/null && echo "openshift-dpf exists" || echo "openshift-dpf not found"
gh repo view killianmuldoon/openshift--dpf --json name 2>/dev/null && echo "openshift--dpf exists" || echo "openshift--dpf not found"

Repository: rh-ecosystem-edge/openshift-dpf

Length of output: 123


🏁 Script executed:

# Check the repository name and owner in the current context
cd / && git -C /tmp 2>/dev/null || echo "Git info unavailable in sandbox"

# Search for repository references in the requirements file and nearby files
rg "killianmuldoon|rh-ecosystem-edge|openshift.*dpf" --type json

# Check if there are any README or documentation files that might clarify the repo
fd -e md -e txt | grep -i readme | head -5

Repository: rh-ecosystem-edge/openshift-dpf

Length of output: 2696


🏁 Script executed:

# Read the requirements.json file to understand its structure and purpose
head -20 /home/jailuser/git/requirements/requirements.json

# Also check if there are any schema files or documentation
fd schema -e json -e txt | head -10
fd -e md | grep -i require | head -5

Repository: rh-ecosystem-edge/openshift-dpf

Length of output: 769


🏁 Script executed:

# Read the requirements README to understand the repo field purpose
cat /home/jailuser/git/requirements/README.md

Repository: rh-ecosystem-edge/openshift-dpf

Length of output: 3247


🏁 Script executed:

# Check if the repo field is used in any of the scripts
rg "repo" requirements/ -A 2 -B 2

# Also check GitHub Actions workflow files
fd -e yml -e yaml | xargs rg "repo" -l 2>/dev/null || echo "No matches found"

Repository: rh-ecosystem-edge/openshift-dpf

Length of output: 7172


🏁 Script executed:

# Confirm the correct owner by checking git remote or repo metadata
git config --get remote.origin.url 2>/dev/null || git remote -v 2>/dev/null || echo "Could not determine remote"

# Also check if there's any documentation about the repo
fd -e md | xargs grep -l "rh-ecosystem-edge\|killianmuldoon" 2>/dev/null | head -5

Repository: rh-ecosystem-edge/openshift-dpf

Length of output: 139


Fix incorrect repository reference in repo field.

The repo field has two errors that will break GitHub issue synchronization and test result posting:

  1. Wrong owner: killianmuldoon should be rh-ecosystem-edge
  2. Double dash typo: openshift--dpf should be openshift-dpf

This field is used by sync-issues.sh, update-results.sh, and status.sh as a fallback for the GITHUB_REPOSITORY environment variable. The incorrect value will cause these scripts to fail if the environment variable is not explicitly set during CI runs.

🔧 Suggested fix
-  "repo": "killianmuldoon/openshift--dpf",
+  "repo": "rh-ecosystem-edge/openshift-dpf",
📝 Committable suggestion


Suggested change
"repo": "killianmuldoon/openshift--dpf",
"repo": "rh-ecosystem-edge/openshift-dpf",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/requirements.json` at line 2, Update the repo field value
currently set as "killianmuldoon/openshift--dpf" to the correct repository
"rh-ecosystem-edge/openshift-dpf" so scripts that fallback to the repo value
(used by sync-issues.sh, update-results.sh, status.sh) will point to the right
owner and correct single-dash repo name.

Comment thread requirements/status.sh
Comment on lines +51 to +52
ISSUE_LIST=$(gh issue list ${REPO_FLAG} --state all --limit 500 --json number,labels \
--jq '.[] | "\(.number)|\([.labels[].name] | join(","))"' 2>/dev/null || true)

⚠️ Potential issue | 🟡 Minor

The 500 issue limit may truncate results in large repositories.

If the repository has more than 500 issues, some requirements may incorrectly show as "no-issue" because their issues weren't fetched. Consider adding pagination or warning when the limit is reached.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/status.sh` around lines 51 - 52, The current gh call assigning
ISSUE_LIST uses a fixed --limit 500 which can truncate results; modify the logic
around ISSUE_LIST (the gh issue list invocation that uses REPO_FLAG) to paginate
through all pages or detect when the 500 cap is hit and loop until no more
results (or emit a clear warning) so every issue is fetched; ensure the final
ISSUE_LIST variable concatenates all pages' outputs (or sets a flag/warning)
before downstream processing.
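
A sketch of the lighter-weight option, warning when the cap is hit; the gh invocation is the one from status.sh, left as a comment:

```shell
#!/usr/bin/env bash
# Warn the operator when the fetched row count reaches the request limit,
# which means the issue list may be truncated.
LIMIT=500
# ISSUE_LIST=$(gh issue list ${REPO_FLAG} --state all --limit "${LIMIT}" --json number,labels \
#     --jq '.[] | "\(.number)|\([.labels[].name] | join(","))"' 2>/dev/null || true)

warn_if_truncated() {
    local fetched="$1" limit="$2"
    if [ "${fetched}" -ge "${limit}" ]; then
        echo "WARNING: fetched ${fetched} issues, hitting the ${limit}-item limit; results may be truncated" >&2
    fi
}
# warn_if_truncated "$(printf '%s\n' "${ISSUE_LIST}" | grep -c .)" "${LIMIT}"
```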

Comment thread requirements/update-results.sh
Comment on lines +33 to +41
for arg in "$@"; do
case "${arg}" in
--output) shift; OUTPUT_FILE="${1:-}" ;;
--json) OUTPUT_JSON=true ;;
--output=*) OUTPUT_FILE="${arg#--output=}" ;;
REQ-*) FILTER_IDS+=("${arg}") ;;
esac
shift 2>/dev/null || true
done

⚠️ Potential issue | 🟠 Major

Argument parsing is broken for --output option.

The loop shifts arguments incorrectly. When --output FILE is passed, the inner shift at line 35 advances past FILE, but then line 40 shifts again unconditionally, skipping the next argument. Also, --output (without =) doesn't consume its value correctly because the shift happens before the value is read.

🐛 Proposed fix
+OUTPUT_FILE=""
+OUTPUT_JSON=false
 FILTER_IDS=()
-for arg in "$@"; do
-    case "${arg}" in
-        --output)  shift; OUTPUT_FILE="${1:-}" ;;
-        --json)    OUTPUT_JSON=true ;;
-        --output=*) OUTPUT_FILE="${arg#--output=}" ;;
-        REQ-*)     FILTER_IDS+=("${arg}") ;;
+while [[ $# -gt 0 ]]; do
+    case "${1}" in
+        --output)
+            OUTPUT_FILE="${2:-}"
+            shift 2
+            ;;
+        --output=*)
+            OUTPUT_FILE="${1#--output=}"
+            shift
+            ;;
+        --json)
+            OUTPUT_JSON=true
+            shift
+            ;;
+        REQ-*)
+            FILTER_IDS+=("${1}")
+            shift
+            ;;
+        *)
+            echo "Unknown argument: ${1}" >&2
+            exit 1
+            ;;
     esac
-    shift 2>/dev/null || true
 done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/update-results.sh` around lines 33 - 41, The argument
parsing loop incorrectly uses a for loop and unconditional shifting, causing
values for --output to be skipped; replace it with a proper while [ $# -gt 0 ]
loop that inspects $1 and, for the --output case, reads the next positional
parameter into OUTPUT_FILE and shifts by 2 (i.e., handle "--output) set
OUTPUT_FILE=\"$2\"; shift 2;;" and "--output=*) set OUTPUT_FILE from $1;;"),
keep the --json and REQ-* branches to set OUTPUT_JSON and append to FILTER_IDS,
and remove the unconditional extra shift at the end so arguments are consumed
only as intended.

Comment on lines +183 to +190
for idx in "${!RESULT_IDS[@]}"; do
case "${RESULT_STATUSES[${idx}]}" in
PASS) BADGE="PASS" ;;
FAIL) BADGE="FAIL" ;;
*) BADGE="SKIP" ;;
esac
echo "| ${RESULT_IDS[${idx}]} | ${BADGE} | ${RESULT_TITLES[${idx}]} | ${RESULT_LABELS[${idx}]} |"
done

⚠️ Potential issue | 🟡 Minor

Pipe characters in requirement text could break markdown table.

If any requirement text contains |, it would break the table formatting. Consider escaping or replacing pipe characters.

🔧 Proposed fix
         for idx in "${!RESULT_IDS[@]}"; do
             case "${RESULT_STATUSES[${idx}]}" in
                 PASS) BADGE="PASS" ;;
                 FAIL) BADGE="FAIL" ;;
                 *)    BADGE="SKIP" ;;
             esac
-            echo "| ${RESULT_IDS[${idx}]} | ${BADGE} | ${RESULT_TITLES[${idx}]} | ${RESULT_LABELS[${idx}]} |"
+            SAFE_TITLE=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/|/\\|/g')
+            SAFE_LABELS=$(echo "${RESULT_LABELS[${idx}]}" | sed 's/|/\\|/g')
+            echo "| ${RESULT_IDS[${idx}]} | ${BADGE} | ${SAFE_TITLE} | ${SAFE_LABELS} |"
         done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/verify-requirements.sh` around lines 183 - 190, The table output
can be broken if any requirement text contains the pipe character; update the
for-loop that iterates over RESULT_IDS and the echo line so that before printing
you sanitize RESULT_TITLES and RESULT_LABELS by replacing or escaping '|'
characters (e.g., replace '|' with '\|' or '&#124;') using shell string
replacement on the variables used in the echo; apply this transformation to the
specific variables referenced in the loop (RESULT_TITLES[${idx}] and
RESULT_LABELS[${idx}]) so the echoed table row always has safe content.

Signed-off-by: Killian Muldoon <kmuldoon@nvidia.com>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (3)
test/req-019-dts-installed.sh (1)

42-42: Consider adding explicit exit 0 for consistency.

The script relies on the implicit exit status of echo. While this works, other scripts in the suite use explicit exit 0 after PASS. Adding it here improves clarity and consistency.

Proposed fix
 echo "PASS: DTS DPUService '${DTS_NAME}' is installed and Ready"
+exit 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-019-dts-installed.sh` at line 42, The PASS path in the test script
currently ends with echo "PASS: DTS DPUService '${DTS_NAME}' is installed and
Ready" but relies on the implicit exit status; update the script so that
immediately after this echo statement you add an explicit exit 0 to mirror other
tests and make the success exit code explicit, keeping the echo text and
placement unchanged.
test/req-046-dpucluster-version.sh (1)

47-48: Use single quotes in trap to avoid early expansion (Shellcheck SC2064).

While the current code works because TMPKUBECONFIG is set before the trap, using single quotes is more robust and avoids potential issues if the code is refactored.

Proposed fix
 TMPKUBECONFIG=$(mktemp)
-trap "rm -f ${TMPKUBECONFIG}" EXIT
+trap 'rm -f "${TMPKUBECONFIG}"' EXIT
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-046-dpucluster-version.sh` around lines 47 - 48, The trap uses
double quotes which performs early expansion of TMPKUBECONFIG; change the trap
invocation around the temporary kubeconfig cleanup (the trap that references
TMPKUBECONFIG) to use single-quoted command text so TMPKUBECONFIG is expanded at
trap execution time rather than at trap registration, ensuring robust behavior
if the variable assignment is moved or refactored.
requirements/verify-requirements.sh (1)

77-81: Consider reducing jq invocations per requirement.

Currently spawning jq 4 times per requirement. For large requirement sets, batch extraction would improve performance.

♻️ Optional optimization: single jq call per requirement
 for i in $(seq 0 $((REQ_COUNT - 1))); do
-    REQ_ID=$(jq -r ".requirements[${i}].id" "${REQUIREMENTS_FILE}")
-    REQ_TEXT=$(jq -r ".requirements[${i}].requirement" "${REQUIREMENTS_FILE}")
-    TEST_IMPL=$(jq -r ".requirements[${i}].test_implementation" "${REQUIREMENTS_FILE}")
-    REQ_LABELS_STR=$(jq -r ".requirements[${i}].labels | join(\", \")" "${REQUIREMENTS_FILE}")
+    IFS=$'\t' read -r REQ_ID REQ_TEXT TEST_IMPL REQ_LABELS_STR < <(
+        jq -r ".requirements[${i}] | [.id, .requirement, .test_implementation, (.labels | join(\", \"))] | @tsv" "${REQUIREMENTS_FILE}"
+    )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/verify-requirements.sh` around lines 77 - 81, The loop in
verify-requirements.sh makes four separate jq calls per requirement (producing
REQ_ID, REQ_TEXT, TEST_IMPL, REQ_LABELS_STR); replace those with a single jq
invocation that extracts all four fields at once for the current index (for
example using ".requirements[${i}] | [.id, .requirement, .test_implementation,
(.labels|join(\", \"))] | @tsv") and then split/read the returned tab-separated
string into REQ_ID, REQ_TEXT, TEST_IMPL, and REQ_LABELS_STR so the for-loop
performs one jq call per requirement instead of four.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@requirements/verify-requirements.sh`:
- Around line 156-159: The current manual escaping of RESULT_DETAILS and
RESULT_TITLES (and other fields appended into JSON_RESULTS) is incomplete and
can produce invalid JSON; update the construction to use a proper JSON encoder
(e.g., jq -Rn --arg or a small python -c/json module) to safely JSON-escape each
field (RESULT_DETAILS[idx], RESULT_TITLES[idx], RESULT_IDS[idx],
RESULT_STATUSES[idx], RESULT_LABELS[idx]) and then append the encoded object to
JSON_RESULTS instead of using sed/tr. Locate the JSON assembly around the
JSON_RESULTS+= block and replace the heredoc + sed-based escaping with a call
that produces a JSON string/object via jq/python so backslashes, tabs and
control characters are correctly encoded.

In `@test/req-023-dts-metrics-prometheus.sh`:
- Around line 73-96: The PASS message prints the original METRIC_COUNT even
after the fallback broader search finds results; update the script so that when
METRIC_COUNT is "0" or "null" and the broader query yields results (BROAD_COUNT
> 0), either assign METRIC_COUNT=BROAD_COUNT (using the BROAD_COUNT computed
from RESPONSE_BROAD) or change the final echo to use BROAD_COUNT if available;
adjust references around the if-block that computes RESPONSE_BROAD, BROAD_COUNT
and the final PASS echo so the reported time series count reflects the actual
found count.
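
A sketch of the count-resolution logic, with the script's variable names and illustrative values; the final echo is a hypothetical stand-in for the script's PASS message:

```shell
#!/usr/bin/env bash
# Prefer the broader count when the primary query found nothing, so the
# PASS message reports the time series that were actually found.
resolve_count() {
    local metric="$1" broad="$2"
    if { [ "${metric}" = "0" ] || [ "${metric}" = "null" ]; } && [ "${broad}" -gt 0 ]; then
        echo "${broad}"
    else
        echo "${metric}"
    fi
}

# METRIC_COUNT=$(resolve_count "${METRIC_COUNT}" "${BROAD_COUNT:-0}")
# echo "PASS: found ${METRIC_COUNT} DTS time series in Prometheus"
```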

In `@test/req-046-dpucluster-version.sh`:
- Line 11: The test hardcodes EXPECTED_OCP_VERSION="4.22" which conflicts with
CI's deployed 4.21.0; update the test to use an environment-driven value or
match CI: replace the hardcoded EXPECTED_OCP_VERSION usage in
test/req-046-dpucluster-version.sh (and similarly in
test/req-002-install-success.sh) to read from an env var like
EXPECTED_OCP_VERSION (falling back to OCP_RELEASE_IMAGE or "4.21.0") so the
check uses the CI-provided version instead of a fixed "4.22".

---

Nitpick comments:
In `@requirements/verify-requirements.sh`:
- Around line 77-81: The loop in verify-requirements.sh makes four separate jq
calls per requirement (producing REQ_ID, REQ_TEXT, TEST_IMPL, REQ_LABELS_STR);
replace those with a single jq invocation that extracts all four fields at once
for the current index (for example using ".requirements[${i}] | [.id,
.requirement, .test_implementation, (.labels|join(\", \"))] | @tsv") and then
split/read the returned tab-separated string into REQ_ID, REQ_TEXT, TEST_IMPL,
and REQ_LABELS_STR so the for-loop performs one jq call per requirement instead
of four.

In `@test/req-019-dts-installed.sh`:
- Line 42: The PASS path in the test script currently ends with echo "PASS: DTS
DPUService '${DTS_NAME}' is installed and Ready" but relies on the implicit exit
status; update the script so that immediately after this echo statement you add
an explicit exit 0 to mirror other tests and make the success exit code
explicit, keeping the echo text and placement unchanged.

In `@test/req-046-dpucluster-version.sh`:
- Around line 47-48: The trap uses double quotes which performs early expansion
of TMPKUBECONFIG; change the trap invocation around the temporary kubeconfig
cleanup (the trap that references TMPKUBECONFIG) to use single-quoted command
text so TMPKUBECONFIG is expanded at trap execution time rather than at trap
registration, ensuring robust behavior if the variable assignment is moved or
refactored.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5e22afe4-9c3c-48ad-bcab-6389a5d23343

📥 Commits

Reviewing files that changed from the base of the PR and between 75f9490 and d04f272.

📒 Files selected for processing (10)
  • requirements/requirements.json
  • requirements/verify-requirements.sh
  • test/req-009-dpu-provisioned.sh
  • test/req-013-dpuservices-deployed.sh
  • test/req-019-dts-installed.sh
  • test/req-022-dts-metrics-endpoint.sh
  • test/req-023-dts-metrics-prometheus.sh
  • test/req-034-dpu-per-worker.sh
  • test/req-046-dpucluster-version.sh
  • test/req-047-dpucluster-managed.sh
🚧 Files skipped from review as they are similar to previous changes (1)
  • requirements/requirements.json

Comment on lines +156 to +159
DETAIL_ESC=$(echo "${RESULT_DETAILS[${idx}]}" | sed 's/"/\\"/g' | tr -d '\n')
TITLE_ESC=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/"/\\"/g')
JSON_RESULTS+=$(cat <<ENTRY
{"id":"${RESULT_IDS[${idx}]}","requirement":"${TITLE_ESC}","status":"${RESULT_STATUSES[${idx}]}","detail":"${DETAIL_ESC}","labels":"${RESULT_LABELS[${idx}]}"}
⚠️ Potential issue | 🟠 Major

JSON escaping is incomplete and may produce invalid output.

Only double quotes are escaped. Backslashes, tabs, and other control characters are not handled. For example, a detail containing path\to\file produces invalid JSON.

🐛 Proposed fix: use jq for proper JSON string encoding
-        DETAIL_ESC=$(echo "${RESULT_DETAILS[${idx}]}" | sed 's/"/\\"/g' | tr -d '\n')
-        TITLE_ESC=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/"/\\"/g')
-        JSON_RESULTS+=$(cat <<ENTRY
-{"id":"${RESULT_IDS[${idx}]}","requirement":"${TITLE_ESC}","status":"${RESULT_STATUSES[${idx}]}","detail":"${DETAIL_ESC}","labels":"${RESULT_LABELS[${idx}]}"}
-ENTRY
-)
+        JSON_RESULTS+=$(jq -n \
+            --arg id "${RESULT_IDS[${idx}]}" \
+            --arg req "${RESULT_TITLES[${idx}]}" \
+            --arg status "${RESULT_STATUSES[${idx}]}" \
+            --arg detail "${RESULT_DETAILS[${idx}]}" \
+            --arg labels "${RESULT_LABELS[${idx}]}" \
+            '{id: $id, requirement: $req, status: $status, detail: $detail, labels: $labels}')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/verify-requirements.sh` around lines 156 - 159, The current
manual escaping of RESULT_DETAILS and RESULT_TITLES (and other fields appended
into JSON_RESULTS) is incomplete and can produce invalid JSON; update the
construction to use a proper JSON encoder (e.g., jq -Rn --arg or a small python
-c/json module) to safely JSON-escape each field (RESULT_DETAILS[idx],
RESULT_TITLES[idx], RESULT_IDS[idx], RESULT_STATUSES[idx], RESULT_LABELS[idx])
and then append the encoded object to JSON_RESULTS instead of using sed/tr.
Locate the JSON assembly around the JSON_RESULTS+= block and replace the heredoc
+ sed-based escaping with a call that produces a JSON string/object via
jq/python so backslashes, tabs and control characters are correctly encoded.
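For reference, `jq --arg` handles all of the problem characters the sed pipeline misses. A small standalone check (the `id` and `detail` strings below are invented samples):

```shell
#!/usr/bin/env bash
# jq JSON-encodes backslashes, embedded quotes, and tabs that
# sed 's/"/\\"/g' would pass through unescaped.
detail=$'path\\to\\file with "quotes" and\ttab'
jq -cn --arg id "REQ-001" --arg detail "${detail}" \
    '{id: $id, detail: $detail}'
# → {"id":"REQ-001","detail":"path\\to\\file with \"quotes\" and\ttab"}
```

Because jq owns the encoding, the output is guaranteed to be valid JSON regardless of what the test scripts print.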

Comment thread test/req-023-dts-metrics-prometheus.sh Outdated

set -eo pipefail

EXPECTED_OCP_VERSION="4.22"
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the OCP version used in CI and manifests
echo "=== Checking OCP version in ci/env.defaults ==="
grep -n "OCP_RELEASE_IMAGE" ci/env.defaults || echo "Not found"

echo ""
echo "=== Checking OCP version in manifests ==="
grep -rn "ocpReleaseImage\|OCP_RELEASE" manifests/ || echo "Not found"

Repository: rh-ecosystem-edge/openshift-dpf

Length of output: 379


🏁 Script executed:

cat test/req-046-dpucluster-version.sh

Repository: rh-ecosystem-edge/openshift-dpf

Length of output: 2185


🏁 Script executed:

# Check if there are other version references or if EXPECTED_OCP_VERSION is meant to be parameterized
echo "=== Searching for EXPECTED_OCP_VERSION usage ==="
rg "EXPECTED_OCP_VERSION" --type sh

echo ""
echo "=== Checking for OCP version references in other test files ==="
rg "4\.21|4\.22" test/ --type sh -i | head -20

Repository: rh-ecosystem-edge/openshift-dpf

Length of output: 1297


Version mismatch: tests expect 4.22 but CI deploys 4.21.0 — will fail in CI.

Hardcoded EXPECTED_OCP_VERSION="4.22" in this file (and also in test/req-002-install-success.sh) conflicts with ci/env.defaults which sets OCP_RELEASE_IMAGE to 4.21.0-multi. Both tests will fail when run against the deployed cluster version.

Either parameterize these version checks via environment variable or update to match the deployed version 4.21.0.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-046-dpucluster-version.sh` at line 11, The test hardcodes
EXPECTED_OCP_VERSION="4.22" which conflicts with CI's deployed 4.21.0; update
the test to use an environment-driven value or match CI: replace the hardcoded
EXPECTED_OCP_VERSION usage in test/req-046-dpucluster-version.sh (and similarly
in test/req-002-install-success.sh) to read from an env var like
EXPECTED_OCP_VERSION (falling back to OCP_RELEASE_IMAGE or "4.21.0") so the
check uses the CI-provided version instead of a fixed "4.22".

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
requirements/verify-requirements.sh (1)

80-84: Consider extracting all fields in a single jq call per requirement.

Currently four separate jq invocations occur per iteration. For large requirement sets, this adds process overhead.

♻️ Suggested optimization
 for i in $(seq 0 $((REQ_COUNT - 1))); do
-    REQ_ID=$(jq -r ".requirements[${i}].id" "${REQUIREMENTS_FILE}")
-    REQ_TEXT=$(jq -r ".requirements[${i}].requirement" "${REQUIREMENTS_FILE}")
-    TEST_IMPL=$(jq -r ".requirements[${i}].test_implementation" "${REQUIREMENTS_FILE}")
-    REQ_LABELS_STR=$(jq -r ".requirements[${i}].labels | join(\", \")" "${REQUIREMENTS_FILE}")
+    REQ_DATA=$(jq -r ".requirements[${i}] | [.id, .requirement, .test_implementation, (.labels | join(\", \"))] | @tsv" "${REQUIREMENTS_FILE}")
+    IFS=$'\t' read -r REQ_ID REQ_TEXT TEST_IMPL REQ_LABELS_STR <<< "${REQ_DATA}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/verify-requirements.sh` around lines 80 - 84, The loop currently
runs jq four times per requirement to populate REQ_ID, REQ_TEXT, TEST_IMPL and
REQ_LABELS_STR; replace those four invocations with a single jq call that
selects all four fields for ".requirements[$i]" (e.g. produce a safe
delimiter-separated or JSON-encoded tuple) and then split/assign the returned
values into REQ_ID, REQ_TEXT, TEST_IMPL and REQ_LABELS_STR in one step (use
REQUIREMENTS_FILE and the existing loop/REQ_COUNT/REQ_ID variable context);
ensure the jq output handles empty/null fields and the chosen delimiter or
encoding avoids clobbering content so the downstream code that uses REQ_ID,
REQ_TEXT, TEST_IMPL and REQ_LABELS_STR remains unchanged.
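The single-call pattern can be checked against a toy file (the JSON below mirrors the shape of requirements.json, but the values are invented). Note that `@tsv` escapes embedded tabs and newlines as backslash sequences, so fields containing literal tabs would need unescaping:

```shell
#!/usr/bin/env bash
# One jq invocation extracts all four fields; IFS splits the @tsv output.
REQUIREMENTS_FILE=$(mktemp)
cat > "${REQUIREMENTS_FILE}" <<'EOF'
{"requirements":[{"id":"REQ-001","requirement":"DPU is provisioned",
 "test_implementation":"test/req-001.sh","labels":["dpu","ci"]}]}
EOF

i=0
REQ_DATA=$(jq -r ".requirements[${i}] | [.id, .requirement, .test_implementation, (.labels | join(\", \"))] | @tsv" "${REQUIREMENTS_FILE}")
IFS=$'\t' read -r REQ_ID REQ_TEXT TEST_IMPL REQ_LABELS_STR <<< "${REQ_DATA}"

echo "${REQ_ID} :: ${REQ_TEXT} :: ${TEST_IMPL} :: ${REQ_LABELS_STR}"
# → REQ-001 :: DPU is provisioned :: test/req-001.sh :: dpu, ci
rm -f "${REQUIREMENTS_FILE}"
```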
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@requirements/verify-requirements.sh`:
- Around line 112-132: The temporary file OUTPUT_TMP created with mktemp can be
left behind if the script exits early; add a cleanup function (e.g., cleanup())
that removes OUTPUT_TMP if set and exists, and register it with trap cleanup
EXIT near the top of the script (after initial variable declarations) so the
temp file is removed on any exit; initialize OUTPUT_TMP="" before the loop and
you can keep or remove the inline rm -f "${OUTPUT_TMP}" at the end of the loop
since the trap guarantees cleanup.

---

Nitpick comments:
In `@requirements/verify-requirements.sh`:
- Around line 80-84: The loop currently runs jq four times per requirement to
populate REQ_ID, REQ_TEXT, TEST_IMPL and REQ_LABELS_STR; replace those four
invocations with a single jq call that selects all four fields for
".requirements[$i]" (e.g. produce a safe delimiter-separated or JSON-encoded
tuple) and then split/assign the returned values into REQ_ID, REQ_TEXT,
TEST_IMPL and REQ_LABELS_STR in one step (use REQUIREMENTS_FILE and the existing
loop/REQ_COUNT/REQ_ID variable context); ensure the jq output handles empty/null
fields and the chosen delimiter or encoding avoids clobbering content so the
downstream code that uses REQ_ID, REQ_TEXT, TEST_IMPL and REQ_LABELS_STR remains
unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3a954dab-a097-4f2f-adb0-db1539d81b9b

📥 Commits

Reviewing files that changed from the base of the PR and between d04f272 and 11a2ae6.

📒 Files selected for processing (2)
  • Makefile
  • requirements/verify-requirements.sh
🚧 Files skipped from review as they are similar to previous changes (1)
  • Makefile

Comment on lines +112 to +132
OUTPUT_TMP=$(mktemp)

if [[ "${VERBOSE}" == "true" ]]; then
    echo ""
    echo "--------------------------------------------------------------------------------"
    printf " [%s] %s\n" "${REQ_ID}" "${REQ_TEXT}"
    printf " Command: %s\n" "${TEST_IMPL}"
    echo "--------------------------------------------------------------------------------"
    set +e
    (cd "${REPO_ROOT}" && eval "${TEST_IMPL}") 2>&1 | tee "${OUTPUT_TMP}"
    EXIT_CODE=${PIPESTATUS[0]}
    set -e
else
    set +e
    (cd "${REPO_ROOT}" && eval "${TEST_IMPL}") > "${OUTPUT_TMP}" 2>&1
    EXIT_CODE=$?
    set -e
fi

LAST_LINES=$(tail -5 "${OUTPUT_TMP}" | tr '\n' ' ')
rm -f "${OUTPUT_TMP}"

⚠️ Potential issue | 🟡 Minor

Temp file may not be cleaned up on unexpected exit.

If the script exits between mktemp (line 112) and rm -f (line 132), the temp file is orphaned. Consider using a trap to ensure cleanup.

🛡️ Suggested fix

Add a trap near the top of the script (after line 29):

cleanup() {
    [[ -n "${OUTPUT_TMP:-}" && -f "${OUTPUT_TMP}" ]] && rm -f "${OUTPUT_TMP}"
}
trap cleanup EXIT

Then declare OUTPUT_TMP="" before the loop and remove the rm -f from inside the loop (or keep it for immediate cleanup but the trap provides safety).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/verify-requirements.sh` around lines 112 - 132, The temporary
file OUTPUT_TMP created with mktemp can be left behind if the script exits
early; add a cleanup function (e.g., cleanup()) that removes OUTPUT_TMP if set
and exists, and register it with trap cleanup EXIT near the top of the script
(after initial variable declarations) so the temp file is removed on any exit;
initialize OUTPUT_TMP="" before the loop and you can keep or remove the inline
rm -f "${OUTPUT_TMP}" at the end of the loop since the trap guarantees cleanup.

@killianmuldoon killianmuldoon force-pushed the pr-requirements-tests branch from 11a2ae6 to 6fc5a4f Compare March 11, 2026 11:19
Signed-off-by: Killian Muldoon <kmuldoon@nvidia.com>
Signed-off-by: Killian Muldoon <kmuldoon@nvidia.com>
@killianmuldoon killianmuldoon force-pushed the pr-requirements-tests branch from 6fc5a4f to fc4cc8a Compare March 11, 2026 14:15
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (4)
requirements/verify-requirements.sh (4)

199-206: ⚠️ Potential issue | 🟡 Minor

Pipe characters in requirement text could break markdown table.

If any requirement text or labels contain |, the table formatting will break. Consider escaping pipe characters.

🔧 Proposed fix
         for idx in "${!RESULT_IDS[@]}"; do
             case "${RESULT_STATUSES[${idx}]}" in
                 PASS) BADGE="PASS" ;;
                 FAIL) BADGE="FAIL" ;;
                 *)    BADGE="SKIP" ;;
             esac
-            echo "| ${RESULT_IDS[${idx}]} | ${BADGE} | ${RESULT_TITLES[${idx}]} | ${RESULT_LABELS[${idx}]} |"
+            SAFE_TITLE="${RESULT_TITLES[${idx}]//|/\\|}"
+            SAFE_LABELS="${RESULT_LABELS[${idx}]//|/\\|}"
+            echo "| ${RESULT_IDS[${idx}]} | ${BADGE} | ${SAFE_TITLE} | ${SAFE_LABELS} |"
         done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/verify-requirements.sh` around lines 199 - 206, The generated
markdown table can break if RESULT_TITLES or RESULT_LABELS contain pipe
characters; update the loop that prints rows (the for ...; do block referencing
RESULT_IDS, RESULT_STATUSES, RESULT_TITLES, RESULT_LABELS and BADGE) to escape
or replace any '|' characters before echoing (e.g., via parameter expansion or a
small helper that replaces '|' with '\|' or '&#124;') so the echoed line uses
the sanitized values for both TITLE and LABEL when constructing the "| id |
badge | title | label |" row.
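Bash parameter expansion is enough here; a quick sketch with an invented requirement title:

```shell
#!/usr/bin/env bash
# Escape '|' so the cell text cannot terminate a markdown table column early.
TITLE='Verify ingress | egress routing'
SAFE_TITLE="${TITLE//|/\\|}"
echo "| REQ-001 | PASS | ${SAFE_TITLE} | dpu |"
# → | REQ-001 | PASS | Verify ingress \| egress routing | dpu |
```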

112-132: ⚠️ Potential issue | 🟡 Minor

Temp file may not be cleaned up on unexpected exit.

If the script exits unexpectedly between mktemp (line 112) and rm -f (line 132), the temp file is orphaned. Consider adding a trap for cleanup.

🛡️ Suggested fix

Add a trap near the top of the script (after line 29):

OUTPUT_TMP=""
cleanup() {
    [[ -n "${OUTPUT_TMP}" && -f "${OUTPUT_TMP}" ]] && rm -f "${OUTPUT_TMP}"
}
trap cleanup EXIT

This ensures the temp file is cleaned up on any exit path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/verify-requirements.sh` around lines 112 - 132, Initialize
OUTPUT_TMP to an empty string near the start of the script, then define a
cleanup function that checks if OUTPUT_TMP is non-empty and the file exists and
removes it, and register that cleanup via trap on EXIT so the temporary file
created by mktemp (OUTPUT_TMP) is removed on any exit path; reference the
OUTPUT_TMP variable, mktemp invocation, and add a cleanup() + trap cleanup EXIT
to guarantee removal.

35-44: ⚠️ Potential issue | 🟠 Major

Argument parsing is broken for --output option.

The loop uses for arg in "$@" which iterates over the original arguments, but the shift commands inside do not affect the loop iteration. When --output FILE is passed:

  1. Line 37 shifts past the current arg
  2. ${1:-} reads the wrong value (already shifted)
  3. Line 43 shifts again unconditionally

This causes arguments to be misinterpreted or skipped.

🐛 Proposed fix
 OUTPUT_FILE=""
 OUTPUT_JSON=false
 VERBOSE=false
 FILTER_IDS=()
-for arg in "$@"; do
-    case "${arg}" in
-        --output)  shift; OUTPUT_FILE="${1:-}" ;;
-        --json)    OUTPUT_JSON=true ;;
-        --verbose) VERBOSE=true ;;
-        --output=*) OUTPUT_FILE="${arg#--output=}" ;;
-        REQ-*)     FILTER_IDS+=("${arg}") ;;
+while [[ $# -gt 0 ]]; do
+    case "$1" in
+        --output)
+            OUTPUT_FILE="${2:-}"
+            shift 2
+            ;;
+        --output=*)
+            OUTPUT_FILE="${1#--output=}"
+            shift
+            ;;
+        --json)
+            OUTPUT_JSON=true
+            shift
+            ;;
+        --verbose)
+            VERBOSE=true
+            shift
+            ;;
+        REQ-*)
+            FILTER_IDS+=("$1")
+            shift
+            ;;
+        *)
+            shift
+            ;;
     esac
-    shift 2>/dev/null || true
 done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/verify-requirements.sh` around lines 35 - 44, The argument
parsing loop using "for arg in \"$@\"" is broken because internal shift calls
don't affect the iteration; update the parsing to use a proper shift-aware loop
(e.g., "while [[ $# -gt 0 ]]; do arg=\"$1\"; case \"$arg\" in ... esac; shift"
pattern) so the --output option correctly consumes its following value and sets
OUTPUT_FILE, and other flags still set OUTPUT_JSON, VERBOSE, and append to
FILTER_IDS as intended; ensure you handle both "--output FILE" and
"--output=FILE" branches and remove the existing for-loop and its unconditional
shift.
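The shift-aware loop handles both option spellings correctly; a self-contained run with sample arguments (the argument values are invented):

```shell
#!/usr/bin/env bash
# Demo of the while/shift parsing pattern against a fixed argument list.
set -- --output report.md --json --verbose REQ-001 REQ-046

OUTPUT_FILE=""; OUTPUT_JSON=false; VERBOSE=false; FILTER_IDS=()
while [[ $# -gt 0 ]]; do
    case "$1" in
        --output)   OUTPUT_FILE="${2:-}"; shift 2 ;;
        --output=*) OUTPUT_FILE="${1#--output=}"; shift ;;
        --json)     OUTPUT_JSON=true; shift ;;
        --verbose)  VERBOSE=true; shift ;;
        REQ-*)      FILTER_IDS+=("$1"); shift ;;
        *)          shift ;;
    esac
done

echo "output=${OUTPUT_FILE} json=${OUTPUT_JSON} verbose=${VERBOSE} ids=${FILTER_IDS[*]}"
# → output=report.md json=true verbose=true ids=REQ-001 REQ-046
```

Unlike the original `for arg in "$@"` loop, each `shift` here actually advances the parser, so `--output report.md` consumes exactly two positions.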

165-172: ⚠️ Potential issue | 🟠 Major

JSON escaping is incomplete and may produce invalid JSON.

Only double quotes are escaped via sed 's/"/\\"/g'. Backslashes, tabs, newlines, and other control characters are not handled. For example, a detail containing path\to\file or literal tabs produces invalid JSON.

🐛 Proposed fix: use jq for proper JSON string encoding
-        DETAIL_ESC=$(echo "${RESULT_DETAILS[${idx}]}" | sed 's/"/\\"/g' | tr -d '\n')
-        TITLE_ESC=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/"/\\"/g')
-        JSON_RESULTS+=$(cat <<ENTRY
-{"id":"${RESULT_IDS[${idx}]}","requirement":"${TITLE_ESC}","status":"${RESULT_STATUSES[${idx}]}","detail":"${DETAIL_ESC}","labels":"${RESULT_LABELS[${idx}]}"}
-ENTRY
-)
+        JSON_RESULTS+=$(jq -n \
+            --arg id "${RESULT_IDS[${idx}]}" \
+            --arg req "${RESULT_TITLES[${idx}]}" \
+            --arg status "${RESULT_STATUSES[${idx}]}" \
+            --arg detail "${RESULT_DETAILS[${idx}]}" \
+            --arg labels "${RESULT_LABELS[${idx}]}" \
+            '{id: $id, requirement: $req, status: $status, detail: $detail, labels: $labels}')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/verify-requirements.sh` around lines 165 - 172, The current JSON
assembly in verify-requirements.sh manually escapes only double quotes
(variables DETAIL_ESC, TITLE_ESC) which misses backslashes, newlines, tabs,
etc., leading to invalid JSON; replace the manual escaping and here-doc JSON
concatenation (JSON_RESULTS building using RESULT_IDS, RESULT_TITLES,
RESULT_STATUSES, RESULT_DETAILS, RESULT_LABELS) with proper JSON encoding via jq
(or bash + jq) so each field is passed through jq's `@json` or built with jq --arg
to produce a safe JSON object per result and then append those objects to
JSON_RESULTS; update the loop that sets DETAIL_ESC/TITLE_ESC and the cat <<ENTRY
block to instead call jq to encode each string and emit the object.
🧹 Nitpick comments (4)
test/req-005-host-trusted-mode.sh (2)

48-49: Add trailing newline at end of file.

The file appears to be missing a trailing newline. POSIX text files should end with a newline character, and some tools may behave unexpectedly without it.

♻️ Proposed fix
 echo ""
 echo "PASS: all ${DPU_COUNT} DPU(s) running in host-trusted mode (dpuInstallInterface=hostAgent)"
+
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-005-host-trusted-mode.sh` around lines 48 - 49, The file ends
without a trailing newline which violates POSIX expectations; open
test/req-005-host-trusted-mode.sh and add a single newline character after the
final line (the echo "PASS: all ${DPU_COUNT} DPU(s) running in host-trusted mode
(dpuInstallInterface=hostAgent)" line) so the file terminates with a newline.

31-46: Consider listing non-compliant DPUs on failure for easier debugging.

When the test fails, only a count of non-compliant DPUs is shown, not which specific ones failed. This makes debugging harder in clusters with many DPUs.

♻️ Optional: Track and report non-compliant DPU names
 NON_HOST_AGENT=0
+NON_HOST_AGENT_NAMES=""
 while IFS=' ' read -r name interface; do
     [[ -z "${name}" ]] && continue
     echo "   ${name}: dpuInstallInterface=${interface}"
     if [[ "${interface}" != "hostAgent" ]]; then
         NON_HOST_AGENT=$((NON_HOST_AGENT + 1))
+        NON_HOST_AGENT_NAMES="${NON_HOST_AGENT_NAMES} ${name}"
     fi
 done <<< "${INTERFACES}"
 
 if [[ ${NON_HOST_AGENT} -gt 0 ]]; then
     echo ""
-    echo "FAIL: ${NON_HOST_AGENT} DPU(s) not using hostAgent (host-trusted mode)"
+    echo "FAIL: ${NON_HOST_AGENT} DPU(s) not using hostAgent (host-trusted mode):${NON_HOST_AGENT_NAMES}"
     exit 1
 fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-005-host-trusted-mode.sh` around lines 31 - 46, The test currently
only counts non-compliant DPUs (NON_HOST_AGENT) using INTERFACES and the read
loop; update the loop to also collect the names of the non-compliant DPUs (e.g.,
push into an array or space-separated string like NON_HOST_NAMES when interface
!= "hostAgent"), and on failure print the list (or each name) along with the
count before exiting so callers can see which DPU names failed the hostAgent
check; keep the existing echo lines and simply append the collected
NON_HOST_NAMES to the failure output.
requirements/sync-issues.sh (1)

265-265: Unused variable PROJECT_ID.

PROJECT_ID is assigned on line 265 but never referenced in the script. The project operations use PROJECT_NUM and OWNER instead. Consider removing this dead code or documenting its intended future use.

🧹 Proposed fix
     echo "  Found project: #${PROJECT_NUM}"
-    PROJECT_ID=$(echo "${PROJECT_LIST}" | jq -r ".projects[] | select(.title == \"${PROJECT_NAME}\") | .id" 2>/dev/null || true)
 
     if [[ "${DRY_RUN}" == "false" ]]; then
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/sync-issues.sh` at line 265, The assignment to PROJECT_ID is
dead code—PROJECT_ID is set from PROJECT_LIST using PROJECT_NAME but never
referenced; remove the unused assignment line (the line that sets PROJECT_ID) to
clean up the script, or if PROJECT_ID was intended to be used instead of
PROJECT_NUM/OWNER, replace the callers to use PROJECT_ID consistently (search
for PROJECT_NUM and OWNER usages) and update downstream logic accordingly;
reference the symbols PROJECT_ID, PROJECT_LIST, PROJECT_NAME, PROJECT_NUM, and
OWNER when making the change.
requirements/update-results.sh (1)

128-128: REQ_TEXT is fetched but never used.

The variable REQ_TEXT is assigned on line 128 but not referenced anywhere in the script. This was flagged in a previous review and marked as addressed, but the unused assignment remains. Either remove it or use it in output/logging.

🧹 Proposed fix: remove unused variable
     REQ_ID=$(jq -r ".requirements[${i}].id" "${REQUIREMENTS_FILE}")
-    REQ_TEXT=$(jq -r ".requirements[${i}].requirement" "${REQUIREMENTS_FILE}")
     TEST_IMPL=$(jq -r ".requirements[${i}].test_implementation" "${REQUIREMENTS_FILE}")

Or use it in the output:

-    echo "  [${REQ_ID}] Running: ${TEST_IMPL}"
+    echo "  [${REQ_ID}] ${REQ_TEXT}"
+    echo "    Running: ${TEST_IMPL}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/update-results.sh` at line 128, The assignment to REQ_TEXT
(REQ_TEXT=$(jq -r ".requirements[${i}].requirement" "${REQUIREMENTS_FILE}")) is
unused; either remove this unused variable assignment or actually use REQ_TEXT
in the script's output/logging. Fix by deleting the REQ_TEXT assignment if not
needed, or replace the no-op with a logging call that references REQ_TEXT (e.g.,
include it in echo or logger where results for the current requirement are
reported) so the extracted requirement text is consumed instead of left unused.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/req-022-dts-metrics-endpoint.sh`:
- Around line 86-94: The failure diagnostic prints ${METRICS_OUTPUT} which is
never assigned because the test writes results to ${METRICS_TMP} and then
removes it; ensure you preserve or read the temp file before deleting it: either
assign METRICS_OUTPUT="$(cat "${METRICS_TMP}")" (or similar) prior to rm -f
"${METRICS_TMP}", or move the rm -f after the failure branch and print the
contents directly from ${METRICS_TMP}; update references around FOUND,
EXPECTED_METRICS, METRICS_TMP and METRICS_OUTPUT accordingly so the "First 10
lines of response" shows actual content.

---

Duplicate comments:
In `@requirements/verify-requirements.sh`:
- Around line 199-206: The generated markdown table can break if RESULT_TITLES
or RESULT_LABELS contain pipe characters; update the loop that prints rows (the
for ...; do block referencing RESULT_IDS, RESULT_STATUSES, RESULT_TITLES,
RESULT_LABELS and BADGE) to escape or replace any '|' characters before echoing
(e.g., via parameter expansion or a small helper that replaces '|' with '\|' or
'&#124;') so the echoed line uses the sanitized values for both TITLE and LABEL
when constructing the "| id | badge | title | label |" row.
- Around line 112-132: Initialize OUTPUT_TMP to an empty string near the start
of the script, then define a cleanup function that checks if OUTPUT_TMP is
non-empty and the file exists and removes it, and register that cleanup via trap
on EXIT so the temporary file created by mktemp (OUTPUT_TMP) is removed on any
exit path; reference the OUTPUT_TMP variable, mktemp invocation, and add a
cleanup() + trap cleanup EXIT to guarantee removal.
- Around line 35-44: The argument parsing loop using "for arg in \"$@\"" is
broken because internal shift calls don't affect the iteration; update the
parsing to use a proper shift-aware loop (e.g., "while [[ $# -gt 0 ]]; do
arg=\"$1\"; case \"$arg\" in ... esac; shift" pattern) so the --output option
correctly consumes its following value and sets OUTPUT_FILE, and other flags
still set OUTPUT_JSON, VERBOSE, and append to FILTER_IDS as intended; ensure you
handle both "--output FILE" and "--output=FILE" branches and remove the existing
for-loop and its unconditional shift.
- Around line 165-172: The current JSON assembly in verify-requirements.sh
manually escapes only double quotes (variables DETAIL_ESC, TITLE_ESC) which
misses backslashes, newlines, tabs, etc., leading to invalid JSON; replace the
manual escaping and here-doc JSON concatenation (JSON_RESULTS building using
RESULT_IDS, RESULT_TITLES, RESULT_STATUSES, RESULT_DETAILS, RESULT_LABELS) with
proper JSON encoding via jq (or bash + jq) so each field is passed through jq's
`@json` or built with jq --arg to produce a safe JSON object per result and then
append those objects to JSON_RESULTS; update the loop that sets
DETAIL_ESC/TITLE_ESC and the cat <<ENTRY block to instead call jq to encode each
string and emit the object.

---

Nitpick comments:
In `@requirements/sync-issues.sh`:
- Line 265: The assignment to PROJECT_ID is dead code—PROJECT_ID is set from
PROJECT_LIST using PROJECT_NAME but never referenced; remove the unused
assignment line (the line that sets PROJECT_ID) to clean up the script, or if
PROJECT_ID was intended to be used instead of PROJECT_NUM/OWNER, replace the
callers to use PROJECT_ID consistently (search for PROJECT_NUM and OWNER usages)
and update downstream logic accordingly; reference the symbols PROJECT_ID,
PROJECT_LIST, PROJECT_NAME, PROJECT_NUM, and OWNER when making the change.

In `@requirements/update-results.sh`:
- Line 128: The assignment to REQ_TEXT (REQ_TEXT=$(jq -r
".requirements[${i}].requirement" "${REQUIREMENTS_FILE}")) is unused; either
remove this unused variable assignment or actually use REQ_TEXT in the script's
output/logging. Fix by deleting the REQ_TEXT assignment if not needed, or
replace the no-op with a logging call that references REQ_TEXT (e.g., include it
in echo or logger where results for the current requirement are reported) so the
extracted requirement text is consumed instead of left unused.

In `@test/req-005-host-trusted-mode.sh`:
- Around line 48-49: The file ends without a trailing newline which violates
POSIX expectations; open test/req-005-host-trusted-mode.sh and add a single
newline character after the final line (the echo "PASS: all ${DPU_COUNT} DPU(s)
running in host-trusted mode (dpuInstallInterface=hostAgent)" line) so the file
terminates with a newline.
- Around line 31-46: The test currently only counts non-compliant DPUs
(NON_HOST_AGENT) using INTERFACES and the read loop; update the loop to also
collect the names of the non-compliant DPUs (e.g., push into an array or
space-separated string like NON_HOST_NAMES when interface != "hostAgent"), and
on failure print the list (or each name) along with the count before exiting so
callers can see which DPU names failed the hostAgent check; keep the existing
echo lines and simply append the collected NON_HOST_NAMES to the failure output.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 737c75e7-5da2-4f4a-a056-68a3da370779

📥 Commits

Reviewing files that changed from the base of the PR and between 11a2ae6 and fc4cc8a.

📒 Files selected for processing (11)
  • Makefile
  • requirements/requirements.json
  • requirements/sync-issues.sh
  • requirements/update-results.sh
  • requirements/verify-requirements.sh
  • test/req-005-host-trusted-mode.sh
  • test/req-019-dts-installed.sh
  • test/req-022-dts-metrics-endpoint.sh
  • test/req-023-dts-metrics-prometheus.sh
  • test/req-034-dpu-per-worker.sh
  • test/req-047-dpucluster-managed.sh
✅ Files skipped from review due to trivial changes (1)
  • requirements/requirements.json
🚧 Files skipped from review as they are similar to previous changes (3)
  • Makefile
  • test/req-034-dpu-per-worker.sh
  • test/req-047-dpucluster-managed.sh

Comment on lines +86 to +94
rm -f "${METRICS_TMP}"

if [[ ${FOUND} -eq 0 ]]; then
    echo "FAIL: none of the expected DTS metrics found (${EXPECTED_METRICS})"
    echo ""
    echo "   First 10 lines of response:"
    echo "${METRICS_OUTPUT}" | head -10 | sed 's/^/     /'
    exit 1
fi

⚠️ Potential issue | 🟡 Minor

Bug: METRICS_OUTPUT is undefined, causing empty diagnostic output on failure.

The temp file content is stored in METRICS_TMP and deleted on line 86, but line 92 references METRICS_OUTPUT which is never defined. This results in empty diagnostic output when the test fails.

🐛 Proposed fix

Either save the content before deleting the temp file:

+METRICS_OUTPUT=$(cat "${METRICS_TMP}")
 rm -f "${METRICS_TMP}"
 
 if [[ ${FOUND} -eq 0 ]]; then
     echo "FAIL: none of the expected DTS metrics found (${EXPECTED_METRICS})"
     echo ""
     echo "   First 10 lines of response:"
     echo "${METRICS_OUTPUT}" | head -10 | sed 's/^/     /'
     exit 1
 fi

Or restructure to output before deletion:

 if [[ ${FOUND} -eq 0 ]]; then
     echo "FAIL: none of the expected DTS metrics found (${EXPECTED_METRICS})"
     echo ""
     echo "   First 10 lines of response:"
-    echo "${METRICS_OUTPUT}" | head -10 | sed 's/^/     /'
+    head -10 "${METRICS_TMP}" | sed 's/^/     /'
+    rm -f "${METRICS_TMP}"
     exit 1
 fi
+
+rm -f "${METRICS_TMP}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/req-022-dts-metrics-endpoint.sh` around lines 86 - 94, The failure
diagnostic prints ${METRICS_OUTPUT} which is never assigned because the test
writes results to ${METRICS_TMP} and then removes it; ensure you preserve or
read the temp file before deleting it: either assign METRICS_OUTPUT="$(cat
"${METRICS_TMP}")" (or similar) prior to rm -f "${METRICS_TMP}", or move the rm
-f after the failure branch and print the contents directly from ${METRICS_TMP};
update references around FOUND, EXPECTED_METRICS, METRICS_TMP and METRICS_OUTPUT
accordingly so the "First 10 lines of response" shows actual content.
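The capture-before-delete fix can be exercised standalone. This sketch reuses the variable names from the test (`METRICS_TMP`, `METRICS_OUTPUT`, `FOUND`), but the metrics content is synthetic and the script is an illustration, not the actual test file:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simulate the temp file the test would have written curl output into.
METRICS_TMP=$(mktemp)
printf 'metric_a 1\nmetric_b 2\n' > "${METRICS_TMP}"

# Capture the contents BEFORE deleting the temp file, so the
# failure branch below still has something to print.
METRICS_OUTPUT=$(cat "${METRICS_TMP}")
rm -f "${METRICS_TMP}"

FOUND=0
if [[ ${FOUND} -eq 0 ]]; then
    echo "FAIL: none of the expected metrics found"
    echo "${METRICS_OUTPUT}" | head -10 | sed 's/^/     /'
fi
```

Either ordering (capture first, or print from the file and delete afterwards) works; the only invariant is that the file's contents must still be reachable when the diagnostic runs.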


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
requirements/requirements.json (1)

29-52: Empty test_description fields create blank sections in GitHub issues.

43 of 58 requirements have empty test_description values. Per sync-issues.sh (lines 125-139), these are included directly in the issue body, resulting in:

## Test Description

(blank)

## Test Implementation

Consider either:

  • Populating with a placeholder like "TBD" or "Not yet defined"
  • Updating sync-issues.sh to conditionally omit the section when empty
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/requirements.json` around lines 29 - 52, Several requirement
entries (e.g., objects with "id": "REQ-003" and "REQ-004") have empty
"test_description" fields which lead to blank "## Test Description" sections in
issues; either populate those fields with a placeholder like "TBD" or "Not yet
defined" for all requirements missing a value, or update the sync script
(sync-issues.sh) to skip rendering the Test Description block when the
"test_description" value is empty. Locate the requirement objects in
requirements.json and either set "test_description": "TBD" (or similar) for each
empty entry, or modify sync-issues.sh where it builds the issue body (the block
referenced around lines 125-139) to conditionally include the "## Test
Description" header and content only when test_description is non-empty.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@requirements/requirements.json`:
- Around line 593-604: REQ-050 is a duplicate of REQ-001 (same requirement text
and same test_implementation "test/req-001-amd64-nodes.sh"); either remove
REQ-050 or make it distinct by updating its "description" and/or
"test_implementation" to reflect the intended difference (e.g., point to a new
test script or add clarifying description and different labels). Locate the
objects named "REQ-050" and "REQ-001" in the JSON and then either delete the
REQ-050 entry or modify its "description", "test_implementation", and "labels"
to clearly distinguish it from REQ-001.
- Around line 413-460: REQ-035 through REQ-038 contain placeholder tokens
("XXX"/"XXXX") in the requirement text; update each requirement string to a
concrete target (e.g., replace "XXX worker nodes" in REQ-035, "XXX pods per
worker node" in REQ-036, "XXXX Gbps" in REQ-037, and "XXX minutes" in REQ-038)
or, if targets are undecided, add a clear draft marker by appending a label like
"status/draft" or "tbd" to the labels array for each requirement and add a short
note in the description field indicating the value is to be determined; also
ensure test_description or test_implementation fields are updated or left as
"TBD" when placeholders remain so the sync to GitHub issues won’t publish
ambiguous "XXX"/"XXXX" values.

---

Nitpick comments:
In `@requirements/requirements.json`:
- Around line 29-52: Several requirement entries (e.g., objects with "id":
"REQ-003" and "REQ-004") have empty "test_description" fields which lead to
blank "## Test Description" sections in issues; either populate those fields
with a placeholder like "TBD" or "Not yet defined" for all requirements missing
a value, or update the sync script (sync-issues.sh) to skip rendering the Test
Description block when the "test_description" value is empty. Locate the
requirement objects in requirements.json and either set "test_description":
"TBD" (or similar) for each empty entry, or modify sync-issues.sh where it
builds the issue body (the block referenced around lines 125-139) to
conditionally include the "## Test Description" header and content only when
test_description is non-empty.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7773c7f2-7068-4770-be1e-d5ce80f1c086

📥 Commits

Reviewing files that changed from the base of the PR and between fc4cc8a and 4b5ec5c.

📒 Files selected for processing (1)
  • requirements/requirements.json

Comment on lines +413 to +460
{
  "id": "REQ-035",
  "requirement": "DPF on OCP can manage XXX worker nodes equipped with DPUs",
  "description": "",
  "test_description": "",
  "test_implementation": "",
  "labels": [
    "area/scale-performance",
    "type/functional"
  ],
  "subtasks": []
},
{
  "id": "REQ-036",
  "requirement": "DPF on OCP can manage XXX pods per worker node equipped with DPUs",
  "description": "",
  "test_description": "",
  "test_implementation": "",
  "labels": [
    "area/scale-performance",
    "type/functional"
  ],
  "subtasks": []
},
{
  "id": "REQ-037",
  "requirement": "Traffic between pods can reach XXXX Gbps",
  "description": "",
  "test_description": "",
  "test_implementation": "",
  "labels": [
    "area/scale-performance",
    "type/functional"
  ],
  "subtasks": []
},
{
  "id": "REQ-038",
  "requirement": "Deploys a new DPU with the OVN-Kubernetes and HBN DPUService in XXX minutes",
  "description": "",
  "test_description": "",
  "test_implementation": "",
  "labels": [
    "area/scale-performance",
    "type/functional"
  ],
  "subtasks": []
},

⚠️ Potential issue | 🟡 Minor

Placeholder values need to be defined.

REQ-035 through REQ-038 contain XXX / XXXX placeholders for scale and performance targets:

ID       Placeholder
REQ-035  "XXX worker nodes"
REQ-036  "XXX pods per worker node"
REQ-037  "XXXX Gbps"
REQ-038  "XXX minutes"

These will be synced to GitHub issues as-is. Consider defining concrete targets or marking these requirements as draft/TBD in the labels.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/requirements.json` around lines 413 - 460, REQ-035 through
REQ-038 contain placeholder tokens ("XXX"/"XXXX") in the requirement text;
update each requirement string to a concrete target (e.g., replace "XXX worker
nodes" in REQ-035, "XXX pods per worker node" in REQ-036, "XXXX Gbps" in
REQ-037, and "XXX minutes" in REQ-038) or, if targets are undecided, add a clear
draft marker by appending a label like "status/draft" or "tbd" to the labels
array for each requirement and add a short note in the description field
indicating the value is to be determined; also ensure test_description or
test_implementation fields are updated or left as "TBD" when placeholders remain
so the sync to GitHub issues won’t publish ambiguous "XXX"/"XXXX" values.
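A cheap guard against publishing placeholder targets is a grep gate before the sync runs. The sample file below is synthetic and the pattern is deliberately simple; a real check would parse the JSON properly:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Synthetic sample standing in for requirements/requirements.json.
TMP_JSON=$(mktemp)
cat > "${TMP_JSON}" <<'EOF'
{"id": "REQ-035", "requirement": "DPF on OCP can manage XXX worker nodes"}
{"id": "REQ-001", "requirement": "DPF on OCP supports amd64 nodes"}
EOF

# Flag any requirement text still carrying XXX/XXXX placeholders.
if grep -nE '"requirement": "[^"]*XXX+' "${TMP_JSON}"; then
    PLACEHOLDERS_FOUND=yes
    echo "FAIL: placeholder targets remain; define them or label the requirement as draft"
else
    PLACEHOLDERS_FOUND=no
fi

rm -f "${TMP_JSON}"
```

Wiring a check like this into the `verify requirements` Makefile target would stop `XXX`/`XXXX` values from reaching GitHub issues.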

Comment on lines +593 to +604
{
  "id": "REQ-050",
  "requirement": "DPF on OCP supports amd64 worker and control plane nodes",
  "description": "",
  "test_description": "Verify all cluster nodes report amd64 architecture via node status",
  "test_implementation": "test/req-001-amd64-nodes.sh",
  "labels": [
    "area/system",
    "type/non-functional"
  ],
  "subtasks": []
},

⚠️ Potential issue | 🟡 Minor

Duplicate requirement detected.

REQ-050 is identical to REQ-001 — same requirement text and same test_implementation file (test/req-001-amd64-nodes.sh). The only difference is the label (type/non-functional vs type/functional).

This will create two separate GitHub issues with nearly identical content. If this duplication is intentional for categorization purposes, consider:

  • Differentiating the test scripts
  • Adding clarifying text to the description field

If unintentional, remove the duplicate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@requirements/requirements.json` around lines 593 - 604, REQ-050 is a
duplicate of REQ-001 (same requirement text and same test_implementation
"test/req-001-amd64-nodes.sh"); either remove REQ-050 or make it distinct by
updating its "description" and/or "test_implementation" to reflect the intended
difference (e.g., point to a new test script or add clarifying description and
different labels). Locate the objects named "REQ-050" and "REQ-001" in the JSON
and then either delete the REQ-050 entry or modify its "description",
"test_implementation", and "labels" to clearly distinguish it from REQ-001.

Signed-off-by: Killian Muldoon <kmuldoon@nvidia.com>
@killianmuldoon killianmuldoon force-pushed the pr-requirements-tests branch from 4b5ec5c to 6ff9897 Compare March 18, 2026 12:36

1 participant