draft: Add requirements tracking #92
killianmuldoon wants to merge 12 commits into rh-ecosystem-edge:main
Conversation
Signed-off-by: Killian Muldoon <kmuldoon@nvidia.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: killianmuldoon. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files; approvers can indicate their approval in a comment.
Hi @killianmuldoon. Thanks for your PR. I'm waiting for a rh-ecosystem-edge member to verify that this patch is reasonable to test. Tip: we noticed you've done this a few times; consider joining the org to skip this step. Once the patch is verified, the new status will be reflected. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review; this behavior can be changed in the configuration, and review commands can be used to resume.
Walkthrough: Adds a requirements tracking system: a new `requirements.json`, scripts to sync GitHub Issues and a GitHub Project, and per-requirement test scripts.
Sequence Diagram

```mermaid
sequenceDiagram
    participant GH as GitHub (Repo)
    participant WF as Workflow Runner
    participant Sync as sync-issues.sh
    participant Issues as GitHub Issues
    participant Project as GitHub Project
    GH->>WF: push to main / manual dispatch (requirements.json changed)
    WF->>Sync: run sync-issues.sh with GH_TOKEN & GITHUB_REPOSITORY
    Sync->>Issues: ensure status & per-REQ labels exist
    Sync->>Issues: list existing issues by REQ label
    Sync->>Issues: create or update issue per requirement (body, labels)
    Sync->>Issues: create sub-task issues as needed
    Sync->>Project: create or fetch repository project and "Requirement Status" field
    Sync->>Project: add/link issues as project items and set item status based on labels
    WF->>GH: workflow completes
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
```diff
@@ -0,0 +1,678 @@
+{
+  "repo": "killianmuldoon/openshift--dpf",
```
Need to update this if we get agreement.
Actionable comments posted: 10
🧹 Nitpick comments (10)
test/req-007-mixed-workers.sh (2)
30-40: Negative count scenario masks data inconsistency. If `DPU_HOST_COUNT > WORKER_COUNT` (e.g., DPUs reference nodes not labeled as workers, or control-plane nodes have DPUs), `NON_DPU_COUNT` becomes negative. The `<= 0` check handles this silently but masks a potential configuration issue. Consider adding explicit validation:
💡 Suggested improvement
```diff
 NON_DPU_COUNT=$((WORKER_COUNT - DPU_HOST_COUNT))
+if [[ "${NON_DPU_COUNT}" -lt 0 ]]; then
+  echo "WARNING: DPU host count (${DPU_HOST_COUNT}) exceeds worker count (${WORKER_COUNT})"
+  echo "         Some DPUs may be on non-worker nodes or node labeling is inconsistent"
+  NON_DPU_COUNT=0
+fi
+
 if [[ "${DPU_HOST_COUNT}" -eq 0 ]]; then
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/req-007-mixed-workers.sh` around lines 30 - 40, Check for an invalid negative NON_DPU_COUNT by explicitly validating DPU_HOST_COUNT against WORKER_COUNT before computing NON_DPU_COUNT: if DPU_HOST_COUNT is greater than WORKER_COUNT (using the DPU_HOST_COUNT and WORKER_COUNT variables), emit a clear FAIL message indicating a configuration mismatch (DPUs referenced on non-worker nodes or control-plane nodes) and exit non-zero; otherwise compute NON_DPU_COUNT and keep the existing checks for zero/non-zero to preserve current behavior.
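As a standalone sketch of the suggested validation (function name hypothetical; the counts are passed as arguments so it runs without a cluster):

```shell
# Hypothetical helper: validate the worker/DPU counts before subtracting,
# so a negative result is surfaced instead of silently clamped.
compute_non_dpu_count() {
  local worker_count="$1" dpu_host_count="$2"
  if [ "${dpu_host_count}" -gt "${worker_count}" ]; then
    echo "WARNING: DPU host count (${dpu_host_count}) exceeds worker count (${worker_count})" >&2
    echo 0
    return 0
  fi
  echo $((worker_count - dpu_host_count))
}
```

The warning goes to stderr so callers capturing stdout still get a clean number.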
43-43: Custom-columns output may fail with master-labeled clusters. The final `oc get nodes` command filters on `!node-role.kubernetes.io/control-plane` but doesn't fall back to the `master` label like the earlier node retrieval (lines 11-14). On older clusters using `master` labels, this could show no output or incorrect results.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/req-007-mixed-workers.sh` at line 43, The oc get nodes invocation that currently uses the selector '!node-role.kubernetes.io/control-plane' should be updated to also exclude the legacy master label so clusters labeled with master are handled; locate the oc get nodes command containing the selector '!node-role.kubernetes.io/control-plane' and change the label selector to exclude both node-role.kubernetes.io/control-plane and node-role.kubernetes.io/master (i.e., add the master label to the negation/fallback) while preserving the custom-columns output format.

test/req-002-install-success.sh (2)
12-13: Hardcoded version expectations require manual updates. The expected versions are hardcoded, meaning this test must be updated for each new DPF/OCP release. Consider parameterizing via environment variables with these as defaults:
♻️ Suggested improvement
```diff
-EXPECTED_DPF_VERSION="26.4"
-EXPECTED_OCP_VERSION="4.22"
+EXPECTED_DPF_VERSION="${EXPECTED_DPF_VERSION:-26.4}"
+EXPECTED_OCP_VERSION="${EXPECTED_OCP_VERSION:-4.22}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/req-002-install-success.sh` around lines 12 - 13, The test currently hardcodes EXPECTED_DPF_VERSION and EXPECTED_OCP_VERSION which forces updates on each release; change req-002-install-success.sh to read these values from environment variables (e.g., DPF_EXPECTED_VERSION and OCP_EXPECTED_VERSION) with the current hardcoded strings "26.4" and "4.22" used as defaults when the env vars are not set, and update any references to EXPECTED_DPF_VERSION / EXPECTED_OCP_VERSION to use the new variables so CI or callers can override versions without editing the script.
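The `${VAR:-default}` expansion behind this suggestion can be demonstrated in isolation; the values below are the ones from the diff:

```shell
# An exported value wins; otherwise the hardcoded release numbers apply.
EXPECTED_DPF_VERSION="${EXPECTED_DPF_VERSION:-26.4}"
EXPECTED_OCP_VERSION="${EXPECTED_OCP_VERSION:-4.22}"
echo "DPF=${EXPECTED_DPF_VERSION} OCP=${EXPECTED_OCP_VERSION}"
```

Callers (or CI) can then override with, e.g., `EXPECTED_OCP_VERSION=4.99 ./test/req-002-install-success.sh` without editing the script.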
63-63: `jq` dependency is not validated. The script uses `jq` for formatting output on failure (line 63), but doesn't verify `jq` is available. While the `|| true` prevents script failure, users may see unhelpful output if `jq` is missing.
💡 Suggested improvement
```diff
-  oc get dpfoperatorconfigs.operator.dpu.nvidia.com -A -o jsonpath='{.items[0].status.conditions}' 2>/dev/null | jq . 2>/dev/null || true
+  CONDITIONS=$(oc get dpfoperatorconfigs.operator.dpu.nvidia.com -A -o jsonpath='{.items[0].status.conditions}' 2>/dev/null || true)
+  if command -v jq &>/dev/null; then
+    echo "${CONDITIONS}" | jq .
+  else
+    echo "${CONDITIONS}"
+  fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/req-002-install-success.sh` at line 63, The script calls jq in the oc command pipeline (oc get ... | jq .) but doesn't verify jq exists; add a check using command -v or which for "jq" at the top of test/req-002-install-success.sh (or just before the oc get line) and if missing either print a clear warning instructing the user to install jq or fall back to outputting raw JSON (e.g., skip piping to jq), ensuring the oc get pipeline handles both cases so users see useful output when jq is not installed.

requirements/sync-issues.sh (2)
233-235: Sub-issue linking uses beta GraphQL feature. The `sub_issues` GraphQL feature requires the `GraphQL-Features: sub_issues` header and may not be available on all GitHub instances (e.g., GitHub Enterprise Server). The warning message on failure is appropriate, but consider documenting this limitation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/sync-issues.sh` around lines 233 - 235, The sub-issue linking line uses the experimental GraphQL feature header "GraphQL-Features: sub_issues" and the mutation addSubIssue (using PARENT_ID and CHILD_ID), which may fail on GHES; update the script to document this limitation and make the failure message explicit: add a brief comment above the gh api graphql call stating that sub_issues is a beta GraphQL feature and may not be supported on GitHub Enterprise Server, and enhance the existing echo "WARNING: failed to link sub-issue" to include that the operation uses the sub_issues GraphQL feature and may not be available on all GitHub instances (so operators know the cause).
85-86: Issue list limit of 500 may be insufficient. The `--limit 500` cap could miss existing requirement issues in repositories with extensive issue history, potentially causing duplicate issue creation.
💡 Consider paginating or increasing the limit
```diff
-ISSUE_LIST=$(gh issue list ${REPO_FLAG} --state all --limit 500 --json number,labels,title \
+ISSUE_LIST=$(gh issue list ${REPO_FLAG} --state all --limit 1000 --json number,labels,title \
   --jq '.[] | "\(.number)|\([.labels[].name] | join(","))|\(.title)"' 2>/dev/null || true)
```
Alternatively, filter by `REQ-` label prefix server-side to reduce result count.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/sync-issues.sh` around lines 85 - 86, The hardcoded `--limit 500` in the `gh issue list` call assigned to ISSUE_LIST can miss issues; remove the fixed limit and either (A) implement pagination by repeatedly invoking `gh issue list ${REPO_FLAG} --state all --json number,labels,title --jq ...` until no more results, or (B) increase the limit (e.g., `--limit 1000`) and add server-side filtering via `--label` if your requirements use a consistent label (use `REPO_FLAG` and the same jq expression), so update the invocation of `gh issue list` in the assignment to ISSUE_LIST accordingly.

.github/workflows/requirements-sync.yml (1)
25-29: Consider adding error visibility for project sync failures. The fallback `${{ secrets.PROJECT_TOKEN || secrets.GITHUB_TOKEN }}` will silently use `GITHUB_TOKEN` if `PROJECT_TOKEN` isn't configured, causing project operations to fail with only warnings in logs. Consider adding explicit feedback:
💡 Suggested improvement
```diff
 - name: Sync issues and project
   env:
     GH_TOKEN: ${{ secrets.PROJECT_TOKEN || secrets.GITHUB_TOKEN }}
     GITHUB_REPOSITORY: ${{ github.repository }}
+    HAS_PROJECT_TOKEN: ${{ secrets.PROJECT_TOKEN != '' }}
   run: ./requirements/sync-issues.sh
```
Then in `sync-issues.sh`, check `HAS_PROJECT_TOKEN` and warn early if project sync won't work.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/requirements-sync.yml around lines 25 - 29, Set an explicit HAS_PROJECT_TOKEN indicator and surface a clear warning when only GITHUB_TOKEN is present: update the "Sync issues and project" step to export an env var (e.g., GH_TOKEN set from the secrets expression) and a boolean HAS_PROJECT_TOKEN (using the GitHub Actions expression to detect if secrets.PROJECT_TOKEN is non-empty), then update sync-issues.sh to read HAS_PROJECT_TOKEN and, if false, emit a clear warning/error that project sync will not work and skip or fallback the project-related operations; reference the workflow step name "Sync issues and project", the env var GH_TOKEN, the new HAS_PROJECT_TOKEN flag, and the sync-issues.sh script when making the change.

test/req-049-control-plane-count.sh (1)
28-33: Ready status detection relies on fragile text matching. The pattern `" Ready "` depends on specific column spacing in `oc get nodes` output. If the output format changes or nodes have additional status conditions, this could produce false negatives. Consider using `jsonpath` or a more robust approach:
♻️ Suggested improvement using jsonpath
```diff
-READY_COUNT=$(echo "${CP_NODES}" | grep -c " Ready " || true)
-if [[ "${READY_COUNT}" -ne 3 ]]; then
-  echo "FAIL: expected 3 Ready control plane nodes, found ${READY_COUNT}"
-  echo "${CP_NODES}"
-  exit 1
-fi
+READY_COUNT=$(oc get nodes -l node-role.kubernetes.io/control-plane= \
+  -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' 2>/dev/null | grep -c "True" || true)
+if [[ -z "${READY_COUNT}" || "${READY_COUNT}" -eq 0 ]]; then
+  # Fallback to master label
+  READY_COUNT=$(oc get nodes -l node-role.kubernetes.io/master= \
+    -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' 2>/dev/null | grep -c "True" || true)
+fi
+if [[ "${READY_COUNT}" -ne 3 ]]; then
+  echo "FAIL: expected 3 Ready control plane nodes, found ${READY_COUNT}"
+  oc get nodes -l node-role.kubernetes.io/control-plane= --no-headers 2>/dev/null || \
+    oc get nodes -l node-role.kubernetes.io/master= --no-headers
+  exit 1
+fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/req-049-control-plane-count.sh` around lines 28 - 33, The Ready detection using CP_NODES and grep counting the literal " Ready " is fragile; update the READY_COUNT calculation to query the API output (e.g., use oc get nodes -o jsonpath=... or pipe to jq) to count nodes whose status.conditions has type=="Ready" and status=="True" instead of grepping CP_NODES; change the READY_COUNT assignment (and any use of CP_NODES) to parse JSON from oc get nodes to produce an integer count of Ready control-plane nodes and keep the rest of the if-check intact.

requirements/status.sh (1)
23-30: Consider validating filter values. The `--filter` option accepts any value without validation. If an invalid status is passed, the script silently shows no results. Adding a quick validation could improve UX.
♻️ Proposed validation
```diff
 while [[ $# -gt 0 ]]; do
   case "${1}" in
-    --filter) FILTER_STATUS="${2}"; shift 2 ;;
+    --filter)
+      FILTER_STATUS="${2}"
+      if [[ ! "${FILTER_STATUS}" =~ ^(passing|failing|untested|no-issue)$ ]]; then
+        echo "ERROR: Invalid filter '${FILTER_STATUS}'. Use: passing, failing, untested, no-issue" >&2
+        exit 1
+      fi
+      shift 2 ;;
     --area) FILTER_AREA="${2}"; shift 2 ;;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/status.sh` around lines 23 - 30, The --filter option currently assigns any value to FILTER_STATUS without validation, causing silent no-results for invalid statuses; add validation after parsing (e.g., where FILTER_STATUS is set in the while/case block) to check the value against the allowed set of statuses and, if invalid, print a clear error and exit non-zero. Specifically, update the argument parsing that sets FILTER_STATUS to validate the provided value (FILTER_STATUS) against your canonical list (e.g., "open", "closed", "in-progress" or whatever your app uses) and handle invalid inputs by echoing "Invalid --filter value: <value>" to stderr and exiting with status 1 so users get immediate feedback.

requirements/verify-requirements.sh (1)
146-157: JSON construction via string concatenation is fragile. The `sed` escaping on lines 150-151 handles double quotes but misses backslashes, newlines, and other special characters that would produce invalid JSON. Consider using `jq` for safe JSON construction.
♻️ Safer alternative using jq
```diff
 if [[ "${OUTPUT_JSON}" == "true" ]]; then
-  JSON_RESULTS="["
-  for idx in "${!RESULT_IDS[@]}"; do
-    [[ ${idx} -gt 0 ]] && JSON_RESULTS+=","
-    # Escape strings for JSON
-    DETAIL_ESC=$(echo "${RESULT_DETAILS[${idx}]}" | sed 's/"/\\"/g' | tr -d '\n')
-    TITLE_ESC=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/"/\\"/g')
-    JSON_RESULTS+=$(cat <<ENTRY
-{"id":"${RESULT_IDS[${idx}]}","requirement":"${TITLE_ESC}","status":"${RESULT_STATUSES[${idx}]}","detail":"${DETAIL_ESC}","labels":"${RESULT_LABELS[${idx}]}"}
-ENTRY
-)
-  done
-  JSON_RESULTS+="]"
+  JSON_RESULTS="[]"
+  for idx in "${!RESULT_IDS[@]}"; do
+    JSON_RESULTS=$(echo "${JSON_RESULTS}" | jq \
+      --arg id "${RESULT_IDS[${idx}]}" \
+      --arg req "${RESULT_TITLES[${idx}]}" \
+      --arg status "${RESULT_STATUSES[${idx}]}" \
+      --arg detail "${RESULT_DETAILS[${idx}]}" \
+      --arg labels "${RESULT_LABELS[${idx}]}" \
+      '. + [{"id": $id, "requirement": $req, "status": $status, "detail": $detail, "labels": $labels}]')
+  done
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/verify-requirements.sh` around lines 146 - 157, The current JSON building via string concatenation (variables JSON_RESULTS, DETAIL_ESC, TITLE_ESC and the loop over RESULT_IDS) fails to safely escape backslashes, newlines and other characters; replace this manual escaping with safe construction using jq: for each iteration use jq -n --arg id "${RESULT_IDS[idx]}" --arg requirement "${RESULT_TITLES[idx]}" --arg status "${RESULT_STATUSES[idx]}" --arg detail "${RESULT_DETAILS[idx]}" --arg labels "${RESULT_LABELS[idx]}" '{id:$id,requirement:$requirement,status:$status,detail:$detail,labels:$labels}' and either stream objects into a temporary file and finally produce an array with jq -s '.' or pipe them into jq -s to produce JSON_RESULTS, removing the fragile DETAIL_ESC/TITLE_ESC string manipulation and direct concatenation.
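A self-contained illustration of the jq-based construction (sample IDs and details, not the script's real arrays):

```shell
# Build one object per result with jq --arg (jq handles all escaping),
# then slurp the stream into a single array with jq -s.
JSON_RESULTS="$(
  for id in REQ-001 REQ-002; do
    jq -n --arg id "$id" --arg detail 'has "quotes" and \backslashes' \
      '{id: $id, detail: $detail}'
  done | jq -s '.'
)"
echo "${JSON_RESULTS}" | jq -r '.[0].id'   # prints REQ-001
```

Because jq serializes the `--arg` values itself, quotes, backslashes, and newlines in details can never break the output.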
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/requirements-sync.yml:
- Around line 10-12: The workflow is missing the projects: write permission
required by the gh project commands used in sync-issues.sh (item-add,
field-create, item-edit); update the permissions block in the workflow to
include "projects: write" so GITHUB_TOKEN can perform project operations, and/or
add a brief comment in the workflow/README clarifying that if using
PROJECT_TOKEN it must be a PAT with repo and project scopes as a fallback for
full functionality.
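One possible shape for the token-presence guard this note describes, as a sketch; the `HAS_PROJECT_TOKEN` variable and function name are assumptions, not code from the PR:

```shell
# Illustrative guard: warn when no dedicated project token is configured,
# since the default GITHUB_TOKEN cannot drive gh project operations.
check_project_token() {
  if [ "${HAS_PROJECT_TOKEN:-false}" != "true" ]; then
    echo "WARNING: PROJECT_TOKEN not configured; project sync will be skipped" >&2
    return 1
  fi
  return 0
}
```

The script could call this once at startup and branch around all project-related steps on its exit code.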
In `@requirements/README.md`:
- Line 9: The README line about update-results.sh is incorrect: the script does
not consume piped go test -json input but instead executes the
test_implementation commands defined in requirements.json and posts pass/fail
comments; update the sentence to state that update-results.sh reads
requirements.json, runs each test_implementation command directly, captures
their exit/results, and posts comments (remove the claim about taking go test
-json from stdin).
- Around line 21-23: Update the schema docs to stop claiming `issue_number` is
populated by `sync-issues.sh`: either remove the `issue_number` field entirely
from the table, or mark it clearly as optional/unused and note that
`sync-issues.sh` identifies issues via REQ-XXX labels (see `labels`) rather than
storing numbers; ensure the table row referencing `issue_number` and the
sentence about `sync-issues.sh` are edited accordingly so docs match actual
script behavior.
- Around line 52-57: The README's "Adding a new requirement" section is
incorrect: sync-issues.sh does not write the issue number back to
requirements.json; instead it uses labels to locate issues. Update the text in
requirements/README.md to reflect the actual workflow: instruct contributors to
add a requirement with "issue_number": null in requirements.json, explain that
sync-issues.sh (or the GitHub Action) will create or find issues by label but
will not modify requirements.json, and add a note that maintainers must manually
update the "issue_number" in requirements.json (or run a dedicated script) if
automatic writing is needed; reference requirements.json and sync-issues.sh by
name so reviewers can locate the relevant files.
- Around line 47-50: Replace the incorrect example that pipes `go test` into the
script with usage that invokes the update-results.sh script directly; update the
README example to show calling ./requirements/update-results.sh to run
requirement tests and post results, demonstrate calling it with specific
requirement IDs (e.g., REQ-001 REQ-002), and show the --dry-run option; ensure
the reference to the script name update-results.sh is used so readers run the
script itself rather than piping go test output into it.
In `@requirements/requirements.json`:
- Line 2: Update the repo field value currently set as
"killianmuldoon/openshift--dpf" to the correct repository
"rh-ecosystem-edge/openshift-dpf" so scripts that fallback to the repo value
(used by sync-issues.sh, update-results.sh, status.sh) will point to the right
owner and correct single-dash repo name.
In `@requirements/status.sh`:
- Around line 51-52: The current gh call assigning ISSUE_LIST uses a fixed
--limit 500 which can truncate results; modify the logic around ISSUE_LIST (the
gh issue list invocation that uses REPO_FLAG) to paginate through all pages or
detect when the 500 cap is hit and loop until no more results (or emit a clear
warning) so every issue is fetched; ensure the final ISSUE_LIST variable
concatenates all pages' outputs (or sets a flag/warning) before downstream
processing.
In `@requirements/update-results.sh`:
- Around line 87-88: The variable REQ_TEXT is read from REQUIREMENTS_FILE but
never used; either remove the assignment to REQ_TEXT to avoid dead code, or
incorporate REQ_TEXT into the comment body where TEST_IMPL is used (e.g.,
include the requirement description alongside the test implementation) so the
fetched requirement text is actually displayed; update the block that currently
sets REQ_TEXT and TEST_IMPL to either drop REQ_TEXT or reference REQ_TEXT when
constructing the comment message.
In `@requirements/verify-requirements.sh`:
- Around line 183-190: The table output can be broken if any requirement text
contains the pipe character; update the for-loop that iterates over RESULT_IDS
and the echo line so that before printing you sanitize RESULT_TITLES and
RESULT_LABELS by replacing or escaping '|' characters (e.g., replace '|' with
'\|' or '|') using shell string replacement on the variables used in the
echo; apply this transformation to the specific variables referenced in the loop
(RESULT_TITLES[${idx}] and RESULT_LABELS[${idx}]) so the echoed table row always
has safe content.
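A minimal helper for the pipe-escaping step (name hypothetical), using bash parameter substitution:

```shell
# Replace '|' with '\|' so a value can sit safely in a Markdown table row.
escape_pipes() {
  printf '%s\n' "${1//|/\\|}"
}
```

Applying it to `RESULT_TITLES[${idx}]` and `RESULT_LABELS[${idx}]` before echoing the row keeps the table columns aligned.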
- Around line 33-41: The argument parsing loop incorrectly uses a for loop and
unconditional shifting, causing values for --output to be skipped; replace it
with a proper while [ $# -gt 0 ] loop that inspects $1 and, for the --output
case, reads the next positional parameter into OUTPUT_FILE and shifts by 2
(i.e., handle "--output) set OUTPUT_FILE=\"$2\"; shift 2;;" and "--output=*) set
OUTPUT_FILE from $1;;"), keep the --json and REQ-* branches to set OUTPUT_JSON
and append to FILTER_IDS, and remove the unconditional extra shift at the end so
arguments are consumed only as intended.
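A sketch of the corrected while-loop parsing (option names taken from the comment; the surrounding script is assumed):

```shell
# while/case argument loop: --output consumes a value (shift 2),
# flags shift once, and REQ-* positionals accumulate into an array.
parse_args() {
  OUTPUT_JSON=false; OUTPUT_FILE=""; FILTER_IDS=()
  while [ $# -gt 0 ]; do
    case "$1" in
      --json)      OUTPUT_JSON=true; shift ;;
      --output)    OUTPUT_FILE="$2"; shift 2 ;;
      --output=*)  OUTPUT_FILE="${1#--output=}"; shift ;;
      REQ-*)       FILTER_IDS+=("$1"); shift ;;
      *)           echo "Unknown argument: $1" >&2; return 1 ;;
    esac
  done
}
```

Each branch shifts exactly what it consumed, so `--output`'s value is never mistaken for the next option.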
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0bd3d834-afdb-4634-a098-62887092aaec
📒 Files selected for processing (13)
.github/workflows/requirements-sync.yml, Makefile, requirements/README.md, requirements/requirements.json, requirements/status.sh, requirements/sync-issues.sh, requirements/update-results.sh, requirements/verify-requirements.sh, test/req-001-amd64-nodes.sh, test/req-002-install-success.sh, test/req-006-dpu-detection.sh, test/req-007-mixed-workers.sh, test/req-049-control-plane-count.sh
```yaml
permissions:
  contents: read
  issues: write
```
Missing projects: write permission for GitHub Projects integration.
The sync-issues.sh script uses gh project commands (item-add, field-create, item-edit) which require the projects: write permission. Without it, project operations will silently fail when falling back to GITHUB_TOKEN.
Either add the permission or document that PROJECT_TOKEN secret must be configured with appropriate scopes:
🔧 Option 1: Add permission (if using fine-grained PAT or GitHub App)
```diff
 permissions:
   contents: read
   issues: write
+  projects: write
```

📝 Option 2: Document PROJECT_TOKEN requirement
Add a comment in the workflow or README explaining that PROJECT_TOKEN secret must be a PAT with repo and project scopes for full functionality, otherwise project sync will be skipped with warnings.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
|
|
||
| 1. **`requirements.json`** is the source of truth. Each requirement declares its description, test details, labels, and (once synced) the corresponding GitHub issue number. | ||
| 2. **`sync-issues.sh`** reads the JSON and creates or updates GitHub Issues and a GitHub Project. | ||
| 3. **`update-results.sh`** takes `go test -json` output and posts pass/fail results as comments on the corresponding issues. |
There was a problem hiding this comment.
Documentation doesn't match script behavior.
Line 9 states update-results.sh "takes go test -json output," but the script actually runs test_implementation commands directly from requirements.json. It does not consume piped input.
📝 Suggested fix
-3. **`update-results.sh`** takes `go test -json` output and posts pass/fail results as comments on the corresponding issues.
+3. **`update-results.sh`** runs each requirement's `test_implementation` command and posts pass/fail results as comments on the corresponding issues.

📝 Committable suggestion
| 3. **`update-results.sh`** takes `go test -json` output and posts pass/fail results as comments on the corresponding issues. | |
| 3. **`update-results.sh`** runs each requirement's `test_implementation` command and posts pass/fail results as comments on the corresponding issues. |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/README.md` at line 9, The README line about update-results.sh is
incorrect: the script does not consume piped go test -json input but instead
executes the test_implementation commands defined in requirements.json and posts
pass/fail comments; update the sentence to state that update-results.sh reads
requirements.json, runs each test_implementation command directly, captures
their exit/results, and posts comments (remove the claim about taking go test
-json from stdin).
| | `test_implementation` | string | Go test reference: `package/path::TestFunctionName`. | | ||
| | `labels` | string[] | GitHub labels to apply (created automatically if missing). | | ||
| | `issue_number` | int\|null | Populated by `sync-issues.sh`. Do not edit manually. | |
There was a problem hiding this comment.
Schema documentation references a field the scripts don't use.
The issue_number field is documented as "Populated by sync-issues.sh", but the scripts use REQ-XXX labels to find issues, not stored issue numbers. This field appears to be vestigial or planned for future use.
📝 Suggested fix
Either remove the field from the documentation:
| `test_implementation` | string | Go test reference: `package/path::TestFunctionName`. |
| `labels` | string[] | GitHub labels to apply (created automatically if missing). |
-| `issue_number` | int\|null | Populated by `sync-issues.sh`. Do not edit manually. |

Or clarify it's optional/unused:
-| `issue_number` | int\|null | Populated by `sync-issues.sh`. Do not edit manually. |
+| `issue_number` | int\|null | (Optional) Stored issue number. Currently unused; scripts find issues by REQ-XXX label. |

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/README.md` around lines 21 - 23, Update the schema docs to stop
claiming `issue_number` is populated by `sync-issues.sh`: either remove the
`issue_number` field entirely from the table, or mark it clearly as
optional/unused and note that `sync-issues.sh` identifies issues via REQ-XXX
labels (see `labels`) rather than storing numbers; ensure the table row
referencing `issue_number` and the sentence about `sync-issues.sh` are edited
accordingly so docs match actual script behavior.
| ```bash | ||
| # Run tests and pipe to the results script | ||
| go test ./... -json 2>&1 | ./requirements/update-results.sh | ||
| ``` |
There was a problem hiding this comment.
Example usage is incorrect.
The example shows piping go test output to update-results.sh, but the script doesn't read from stdin—it executes test_implementation commands directly.
📝 Suggested fix
### Post test results
```bash
-# Run tests and pipe to the results script
-go test ./... -json 2>&1 | ./requirements/update-results.sh
+# Run requirement tests and post results to issues
+./requirements/update-results.sh
+
+# Or test specific requirements
+./requirements/update-results.sh REQ-001 REQ-002
+
+# Dry run (no GitHub updates)
+./requirements/update-results.sh --dry-run

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @requirements/README.md around lines 47 - 50, Replace the incorrect example
that pipes go test into the script with usage that invokes the
update-results.sh script directly; update the README example to show calling
./requirements/update-results.sh to run requirement tests and post results,
demonstrate calling it with specific requirement IDs (e.g., REQ-001 REQ-002),
and show the --dry-run option; ensure the reference to the script name
update-results.sh is used so readers run the script itself rather than piping go
test output into it.
| ### Adding a new requirement | ||
|
|
||
| 1. Add an entry to `requirements.json` with a new `id` and `"issue_number": null`. | ||
| 2. Run `sync-issues.sh` (or push to main — the GitHub Action will handle it). | ||
| 3. The script creates the issue and writes the number back to the JSON. | ||
|
|
There was a problem hiding this comment.
Adding a new requirement section is inconsistent with actual workflow.
The documentation says sync-issues.sh "writes the number back to the JSON," but the script uses labels to find issues and doesn't modify requirements.json.
📝 Suggested fix
### Adding a new requirement
-1. Add an entry to `requirements.json` with a new `id` and `"issue_number": null`.
+1. Add an entry to `requirements.json` with a new `id`, requirement text, labels, and optionally a `test_implementation`.
2. Run `sync-issues.sh` (or push to main — the GitHub Action will handle it).
-3. The script creates the issue and writes the number back to the JSON.
+3. The script creates the issue with a `REQ-XXX` label matching the `id`. No JSON modifications are needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/README.md` around lines 52 - 57, The README's "Adding a new
requirement" section is incorrect: sync-issues.sh does not write the issue
number back to requirements.json; instead it uses labels to locate issues.
Update the text in requirements/README.md to reflect the actual workflow:
instruct contributors to add a requirement with "issue_number": null in
requirements.json, explain that sync-issues.sh (or the GitHub Action) will
create or find issues by label but will not modify requirements.json, and add a
note that maintainers must manually update the "issue_number" in
requirements.json (or run a dedicated script) if automatic writing is needed;
reference requirements.json and sync-issues.sh by name so reviewers can locate
the relevant files.
| @@ -0,0 +1,678 @@ | |||
| { | |||
| "repo": "killianmuldoon/openshift--dpf", | |||
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify the correct repository name
gh repo view killianmuldoon/openshift-dpf --json name 2>/dev/null && echo "openshift-dpf exists" || echo "openshift-dpf not found"
gh repo view killianmuldoon/openshift--dpf --json name 2>/dev/null && echo "openshift--dpf exists" || echo "openshift--dpf not found"

Repository: rh-ecosystem-edge/openshift-dpf
Length of output: 123
🏁 Script executed:
# Check the repository name and owner in the current context
cd / && git -C /tmp 2>/dev/null || echo "Git info unavailable in sandbox"
# Search for repository references in the requirements file and nearby files
rg "killianmuldoon|rh-ecosystem-edge|openshift.*dpf" --type json
# Check if there are any README or documentation files that might clarify the repo
fd -e md -e txt | grep -i readme | head -5

Repository: rh-ecosystem-edge/openshift-dpf
Length of output: 2696
🏁 Script executed:
# Read the requirements.json file to understand its structure and purpose
head -20 /home/jailuser/git/requirements/requirements.json
# Also check if there are any schema files or documentation
fd schema -e json -e txt | head -10
fd -e md | grep -i require | head -5

Repository: rh-ecosystem-edge/openshift-dpf
Length of output: 769
🏁 Script executed:
# Read the requirements README to understand the repo field purpose
cat /home/jailuser/git/requirements/README.md

Repository: rh-ecosystem-edge/openshift-dpf
Length of output: 3247
🏁 Script executed:
# Check if the repo field is used in any of the scripts
rg "repo" requirements/ -A 2 -B 2
# Also check GitHub Actions workflow files
fd -e yml -e yaml | xargs rg "repo" -l 2>/dev/null || echo "No matches found"

Repository: rh-ecosystem-edge/openshift-dpf
Length of output: 7172
🏁 Script executed:
# Confirm the correct owner by checking git remote or repo metadata
git config --get remote.origin.url 2>/dev/null || git remote -v 2>/dev/null || echo "Could not determine remote"
# Also check if there's any documentation about the repo
fd -e md | xargs grep -l "rh-ecosystem-edge\|killianmuldoon" 2>/dev/null | head -5

Repository: rh-ecosystem-edge/openshift-dpf
Length of output: 139
Fix incorrect repository reference in repo field.
The repo field has two errors that will break GitHub issue synchronization and test result posting:
- Wrong owner: `killianmuldoon` should be `rh-ecosystem-edge`
- Double dash typo: `openshift--dpf` should be `openshift-dpf`
This field is used by sync-issues.sh, update-results.sh, and status.sh as a fallback for the GITHUB_REPOSITORY environment variable. The incorrect value will cause these scripts to fail if the environment variable is not explicitly set during CI runs.
🔧 Suggested fix
- "repo": "killianmuldoon/openshift--dpf",
+ "repo": "rh-ecosystem-edge/openshift-dpf",

📝 Committable suggestion
| "repo": "killianmuldoon/openshift--dpf", | |
| "repo": "rh-ecosystem-edge/openshift-dpf", |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/requirements.json` at line 2, Update the repo field value
currently set as "killianmuldoon/openshift--dpf" to the correct repository
"rh-ecosystem-edge/openshift-dpf" so scripts that fallback to the repo value
(used by sync-issues.sh, update-results.sh, status.sh) will point to the right
owner and correct single-dash repo name.
| ISSUE_LIST=$(gh issue list ${REPO_FLAG} --state all --limit 500 --json number,labels \ | ||
| --jq '.[] | "\(.number)|\([.labels[].name] | join(","))"' 2>/dev/null || true) |
There was a problem hiding this comment.
The 500 issue limit may truncate results in large repositories.
If the repository has more than 500 issues, some requirements may incorrectly show as "no-issue" because their issues weren't fetched. Consider adding pagination or warning when the limit is reached.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/status.sh` around lines 51 - 52, The current gh call assigning
ISSUE_LIST uses a fixed --limit 500 which can truncate results; modify the logic
around ISSUE_LIST (the gh issue list invocation that uses REPO_FLAG) to paginate
through all pages or detect when the 500 cap is hit and loop until no more
results (or emit a clear warning) so every issue is fetched; ensure the final
ISSUE_LIST variable concatenates all pages' outputs (or sets a flag/warning)
before downstream processing.
| for arg in "$@"; do | ||
| case "${arg}" in | ||
| --output) shift; OUTPUT_FILE="${1:-}" ;; | ||
| --json) OUTPUT_JSON=true ;; | ||
| --output=*) OUTPUT_FILE="${arg#--output=}" ;; | ||
| REQ-*) FILTER_IDS+=("${arg}") ;; | ||
| esac | ||
| shift 2>/dev/null || true | ||
| done |
There was a problem hiding this comment.
Argument parsing is broken for --output option.
The loop shifts arguments incorrectly. When --output FILE is passed, the inner shift at line 35 advances past FILE, but then line 40 shifts again unconditionally, skipping the next argument. Also, --output (without =) doesn't consume its value correctly because the shift happens before the value is read.
🐛 Proposed fix
+OUTPUT_FILE=""
+OUTPUT_JSON=false
FILTER_IDS=()
-for arg in "$@"; do
- case "${arg}" in
- --output) shift; OUTPUT_FILE="${1:-}" ;;
- --json) OUTPUT_JSON=true ;;
- --output=*) OUTPUT_FILE="${arg#--output=}" ;;
- REQ-*) FILTER_IDS+=("${arg}") ;;
+while [[ $# -gt 0 ]]; do
+ case "${1}" in
+ --output)
+ OUTPUT_FILE="${2:-}"
+ shift 2
+ ;;
+ --output=*)
+ OUTPUT_FILE="${1#--output=}"
+ shift
+ ;;
+ --json)
+ OUTPUT_JSON=true
+ shift
+ ;;
+ REQ-*)
+ FILTER_IDS+=("${1}")
+ shift
+ ;;
+ *)
+ echo "Unknown argument: ${1}" >&2
+ exit 1
+ ;;
esac
- shift 2>/dev/null || true
done

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/verify-requirements.sh` around lines 33 - 41, The argument
parsing loop incorrectly uses a for loop and unconditional shifting, causing
values for --output to be skipped; replace it with a proper while [ $# -gt 0 ]
loop that inspects $1 and, for the --output case, reads the next positional
parameter into OUTPUT_FILE and shifts by 2 (i.e., handle "--output) set
OUTPUT_FILE=\"$2\"; shift 2;;" and "--output=*) set OUTPUT_FILE from $1;;"),
keep the --json and REQ-* branches to set OUTPUT_JSON and append to FILTER_IDS,
and remove the unconditional extra shift at the end so arguments are consumed
only as intended.
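The corrected parser from the diff above can be exercised standalone. This sketch reuses the same flag set and variable names; the `parse_args` wrapper and the driver call at the end are illustrative additions for the demo:

```shell
#!/usr/bin/env bash
# Standalone demo of the corrected parser: each branch consumes exactly
# the arguments it reads, instead of shifting once per loop iteration.
set -euo pipefail

OUTPUT_FILE=""
OUTPUT_JSON=false
FILTER_IDS=()

parse_args() {
  while [[ $# -gt 0 ]]; do
    case "${1}" in
      --output)   OUTPUT_FILE="${2:-}"; shift 2 ;;  # flag + value: shift 2
      --output=*) OUTPUT_FILE="${1#--output=}"; shift ;;
      --json)     OUTPUT_JSON=true; shift ;;
      REQ-*)      FILTER_IDS+=("${1}"); shift ;;
      *)          echo "Unknown argument: ${1}" >&2; return 1 ;;
    esac
  done
}

parse_args --output results.txt --json REQ-001 REQ-002
echo "file=${OUTPUT_FILE} json=${OUTPUT_JSON} ids=${FILTER_IDS[*]}"
```

The key point is that `shift` happens inside each branch, so `--output FILE` consumes exactly two positions and no later argument gets skipped.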
| for idx in "${!RESULT_IDS[@]}"; do | ||
| case "${RESULT_STATUSES[${idx}]}" in | ||
| PASS) BADGE="PASS" ;; | ||
| FAIL) BADGE="FAIL" ;; | ||
| *) BADGE="SKIP" ;; | ||
| esac | ||
| echo "| ${RESULT_IDS[${idx}]} | ${BADGE} | ${RESULT_TITLES[${idx}]} | ${RESULT_LABELS[${idx}]} |" | ||
| done |
There was a problem hiding this comment.
Pipe characters in requirement text could break markdown table.
If any requirement text contains |, it would break the table formatting. Consider escaping or replacing pipe characters.
🔧 Proposed fix
for idx in "${!RESULT_IDS[@]}"; do
case "${RESULT_STATUSES[${idx}]}" in
PASS) BADGE="PASS" ;;
FAIL) BADGE="FAIL" ;;
*) BADGE="SKIP" ;;
esac
- echo "| ${RESULT_IDS[${idx}]} | ${BADGE} | ${RESULT_TITLES[${idx}]} | ${RESULT_LABELS[${idx}]} |"
+ SAFE_TITLE=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/|/\\|/g')
+ SAFE_LABELS=$(echo "${RESULT_LABELS[${idx}]}" | sed 's/|/\\|/g')
+ echo "| ${RESULT_IDS[${idx}]} | ${BADGE} | ${SAFE_TITLE} | ${SAFE_LABELS} |"
done

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/verify-requirements.sh` around lines 183 - 190, The table output
can be broken if any requirement text contains the pipe character; update the
for-loop that iterates over RESULT_IDS and the echo line so that before printing
you sanitize RESULT_TITLES and RESULT_LABELS by replacing or escaping '|'
characters (e.g., replace '|' with '\|' or the HTML entity '&#124;') using shell string
replacement on the variables used in the echo; apply this transformation to the
specific variables referenced in the loop (RESULT_TITLES[${idx}] and
RESULT_LABELS[${idx}]) so the echoed table row always has safe content.
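As an alternative to per-row `sed`, bash parameter expansion can do the same escaping without forking extra processes; a minimal sketch with a made-up row:

```shell
#!/usr/bin/env bash
# Escape literal '|' in cell text so it cannot terminate a Markdown table
# cell early. ${var//|/\\|} replaces every '|' with '\|' in-process.
# The sample title is invented for the demo.
set -euo pipefail

escape_pipes() {
  printf '%s' "${1//|/\\|}"
}

TITLE='Cluster must report Ready | Installed'
ROW="| REQ-001 | PASS | $(escape_pipes "${TITLE}") | core |"
echo "${ROW}"
```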
Signed-off-by: Killian Muldoon <kmuldoon@nvidia.com>
There was a problem hiding this comment.
Actionable comments posted: 3
🧹 Nitpick comments (3)
test/req-019-dts-installed.sh (1)
42-42: Consider adding an explicit `exit 0` for consistency.

The script relies on the implicit exit status of `echo`. While this works, other scripts in the suite use an explicit `exit 0` after PASS. Adding it here improves clarity and consistency.

Proposed fix

 echo "PASS: DTS DPUService '${DTS_NAME}' is installed and Ready"
+exit 0

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/req-019-dts-installed.sh` at line 42, The PASS path in the test script currently ends with echo "PASS: DTS DPUService '${DTS_NAME}' is installed and Ready" but relies on the implicit exit status; update the script so that immediately after this echo statement you add an explicit exit 0 to mirror other tests and make the success exit code explicit, keeping the echo text and placement unchanged.

test/req-046-dpucluster-version.sh (1)

47-48: Use single quotes in trap to avoid early expansion (Shellcheck SC2064).

While the current code works because `TMPKUBECONFIG` is set before the trap, using single quotes is more robust and avoids potential issues if the code is refactored.

Proposed fix

 TMPKUBECONFIG=$(mktemp)
-trap "rm -f ${TMPKUBECONFIG}" EXIT
+trap 'rm -f "${TMPKUBECONFIG}"' EXIT

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/req-046-dpucluster-version.sh` around lines 47 - 48, The trap uses double quotes which performs early expansion of TMPKUBECONFIG; change the trap invocation around the temporary kubeconfig cleanup (the trap that references TMPKUBECONFIG) to use single-quoted command text so TMPKUBECONFIG is expanded at trap execution time rather than at trap registration, ensuring robust behavior if the variable assignment is moved or refactored.

requirements/verify-requirements.sh (1)

77-81: Consider reducing jq invocations per requirement.

Currently spawning jq 4 times per requirement. For large requirement sets, batch extraction would improve performance.

♻️ Optional optimization: single jq call per requirement

 for i in $(seq 0 $((REQ_COUNT - 1))); do
-  REQ_ID=$(jq -r ".requirements[${i}].id" "${REQUIREMENTS_FILE}")
-  REQ_TEXT=$(jq -r ".requirements[${i}].requirement" "${REQUIREMENTS_FILE}")
-  TEST_IMPL=$(jq -r ".requirements[${i}].test_implementation" "${REQUIREMENTS_FILE}")
-  REQ_LABELS_STR=$(jq -r ".requirements[${i}].labels | join(\", \")" "${REQUIREMENTS_FILE}")
+  read -r REQ_ID REQ_TEXT TEST_IMPL REQ_LABELS_STR < <(
+    jq -r ".requirements[${i}] | [.id, .requirement, .test_implementation, (.labels | join(\", \"))] | @tsv" "${REQUIREMENTS_FILE}"
+  )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/verify-requirements.sh` around lines 77 - 81, The loop makes four separate jq calls per requirement (producing REQ_ID, REQ_TEXT, TEST_IMPL, REQ_LABELS_STR); replace those with a single jq invocation that extracts all four fields at once for the current index (for example using ".requirements[${i}] | [.id, .requirement, .test_implementation, (.labels|join(\", \"))] | @tsv") and then split/read the returned tab-separated string into REQ_ID, REQ_TEXT, TEST_IMPL, and REQ_LABELS_STR so the for-loop performs one jq call per requirement instead of four.
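The SC2064 distinction from the trap nitpick above can be shown in isolation; a small sketch comparing the two quoting styles:

```shell
#!/usr/bin/env bash
# SC2064 in one screen: double quotes expand the variable when the trap
# is registered; single quotes defer expansion until the trap fires.
set -euo pipefail

EARLY=$(bash -c 'V=before; trap "echo ${V}" EXIT; V=after')
LATE=$(bash -c 'V=before; trap '\''echo ${V}'\'' EXIT; V=after')

echo "double-quoted trap prints: ${EARLY}"   # frozen at registration
echo "single-quoted trap prints: ${LATE}"    # read at execution
```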
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@requirements/verify-requirements.sh`:
- Around line 156-159: The current manual escaping of RESULT_DETAILS and
RESULT_TITLES (and other fields appended into JSON_RESULTS) is incomplete and
can produce invalid JSON; update the construction to use a proper JSON encoder
(e.g., jq -Rn --arg or a small python -c/json module) to safely JSON-escape each
field (RESULT_DETAILS[idx], RESULT_TITLES[idx], RESULT_IDS[idx],
RESULT_STATUSES[idx], RESULT_LABELS[idx]) and then append the encoded object to
JSON_RESULTS instead of using sed/tr. Locate the JSON assembly around the
JSON_RESULTS+= block and replace the heredoc + sed-based escaping with a call
that produces a JSON string/object via jq/python so backslashes, tabs and
control characters are correctly encoded.
In `@test/req-023-dts-metrics-prometheus.sh`:
- Around line 73-96: The PASS message prints the original METRIC_COUNT even
after the fallback broader search finds results; update the script so that when
METRIC_COUNT is "0" or "null" and the broader query yields results (BROAD_COUNT
> 0), either assign METRIC_COUNT=BROAD_COUNT (using the BROAD_COUNT computed
from RESPONSE_BROAD) or change the final echo to use BROAD_COUNT if available;
adjust references around the if-block that computes RESPONSE_BROAD, BROAD_COUNT
and the final PASS echo so the reported time series count reflects the actual
found count.
In `@test/req-046-dpucluster-version.sh`:
- Line 11: The test hardcodes EXPECTED_OCP_VERSION="4.22" which conflicts with
CI's deployed 4.21.0; update the test to use an environment-driven value or
match CI: replace the hardcoded EXPECTED_OCP_VERSION usage in
test/req-046-dpucluster-version.sh (and similarly in
test/req-002-install-success.sh) to read from an env var like
EXPECTED_OCP_VERSION (falling back to OCP_RELEASE_IMAGE or "4.21.0") so the
check uses the CI-provided version instead of a fixed "4.22".
---
Nitpick comments:
In `@requirements/verify-requirements.sh`:
- Around line 77-81: The loop in verify-requirements.sh makes four separate jq
calls per requirement (producing REQ_ID, REQ_TEXT, TEST_IMPL, REQ_LABELS_STR);
replace those with a single jq invocation that extracts all four fields at once
for the current index (for example using ".requirements[${i}] | [.id,
.requirement, .test_implementation, (.labels|join(\", \"))] | `@tsv`") and then
split/read the returned tab-separated string into REQ_ID, REQ_TEXT, TEST_IMPL,
and REQ_LABELS_STR so the for-loop performs one jq call per requirement instead
of four.
In `@test/req-019-dts-installed.sh`:
- Line 42: The PASS path in the test script currently ends with echo "PASS: DTS
DPUService '${DTS_NAME}' is installed and Ready" but relies on the implicit exit
status; update the script so that immediately after this echo statement you add
an explicit exit 0 to mirror other tests and make the success exit code
explicit, keeping the echo text and placement unchanged.
In `@test/req-046-dpucluster-version.sh`:
- Around line 47-48: The trap uses double quotes which performs early expansion
of TMPKUBECONFIG; change the trap invocation around the temporary kubeconfig
cleanup (the trap that references TMPKUBECONFIG) to use single-quoted command
text so TMPKUBECONFIG is expanded at trap execution time rather than at trap
registration, ensuring robust behavior if the variable assignment is moved or
refactored.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5e22afe4-9c3c-48ad-bcab-6389a5d23343
📒 Files selected for processing (10)
requirements/requirements.jsonrequirements/verify-requirements.shtest/req-009-dpu-provisioned.shtest/req-013-dpuservices-deployed.shtest/req-019-dts-installed.shtest/req-022-dts-metrics-endpoint.shtest/req-023-dts-metrics-prometheus.shtest/req-034-dpu-per-worker.shtest/req-046-dpucluster-version.shtest/req-047-dpucluster-managed.sh
🚧 Files skipped from review as they are similar to previous changes (1)
- requirements/requirements.json
| DETAIL_ESC=$(echo "${RESULT_DETAILS[${idx}]}" | sed 's/"/\\"/g' | tr -d '\n') | ||
| TITLE_ESC=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/"/\\"/g') | ||
| JSON_RESULTS+=$(cat <<ENTRY | ||
| {"id":"${RESULT_IDS[${idx}]}","requirement":"${TITLE_ESC}","status":"${RESULT_STATUSES[${idx}]}","detail":"${DETAIL_ESC}","labels":"${RESULT_LABELS[${idx}]}"} |
There was a problem hiding this comment.
JSON escaping is incomplete and may produce invalid output.
Only double quotes are escaped. Backslashes, tabs, and other control characters are not handled. For example, a detail containing path\to\file produces invalid JSON.
🐛 Proposed fix: use jq for proper JSON string encoding
- DETAIL_ESC=$(echo "${RESULT_DETAILS[${idx}]}" | sed 's/"/\\"/g' | tr -d '\n')
- TITLE_ESC=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/"/\\"/g')
- JSON_RESULTS+=$(cat <<ENTRY
-{"id":"${RESULT_IDS[${idx}]}","requirement":"${TITLE_ESC}","status":"${RESULT_STATUSES[${idx}]}","detail":"${DETAIL_ESC}","labels":"${RESULT_LABELS[${idx}]}"}
-ENTRY
-)
+ JSON_RESULTS+=$(jq -n \
+ --arg id "${RESULT_IDS[${idx}]}" \
+ --arg req "${RESULT_TITLES[${idx}]}" \
+ --arg status "${RESULT_STATUSES[${idx}]}" \
+ --arg detail "${RESULT_DETAILS[${idx}]}" \
+ --arg labels "${RESULT_LABELS[${idx}]}" \
+ '{id: $id, requirement: $req, status: $status, detail: $detail, labels: $labels}')

📝 Committable suggestion
| DETAIL_ESC=$(echo "${RESULT_DETAILS[${idx}]}" | sed 's/"/\\"/g' | tr -d '\n') | |
| TITLE_ESC=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/"/\\"/g') | |
| JSON_RESULTS+=$(cat <<ENTRY | |
| {"id":"${RESULT_IDS[${idx}]}","requirement":"${TITLE_ESC}","status":"${RESULT_STATUSES[${idx}]}","detail":"${DETAIL_ESC}","labels":"${RESULT_LABELS[${idx}]}"} | |
| JSON_RESULTS+=$(jq -n \ | |
| --arg id "${RESULT_IDS[${idx}]}" \ | |
| --arg req "${RESULT_TITLES[${idx}]}" \ | |
| --arg status "${RESULT_STATUSES[${idx}]}" \ | |
| --arg detail "${RESULT_DETAILS[${idx}]}" \ | |
| --arg labels "${RESULT_LABELS[${idx}]}" \ | |
| '{id: $id, requirement: $req, status: $status, detail: $detail, labels: $labels}') |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/verify-requirements.sh` around lines 156 - 159, The current
manual escaping of RESULT_DETAILS and RESULT_TITLES (and other fields appended
into JSON_RESULTS) is incomplete and can produce invalid JSON; update the
construction to use a proper JSON encoder (e.g., jq -Rn --arg or a small python
-c/json module) to safely JSON-escape each field (RESULT_DETAILS[idx],
RESULT_TITLES[idx], RESULT_IDS[idx], RESULT_STATUSES[idx], RESULT_LABELS[idx])
and then append the encoded object to JSON_RESULTS instead of using sed/tr.
Locate the JSON assembly around the JSON_RESULTS+= block and replace the heredoc
+ sed-based escaping with a call that produces a JSON string/object via
jq/python so backslashes, tabs and control characters are correctly encoded.
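If `jq` availability is ever a concern, the same guarantee can come from any real JSON encoder; a sketch using Python's `json` module (the suggested fix uses `jq -n --arg`, which is equivalent for this purpose):

```shell
#!/usr/bin/env bash
# Build a result object with every field passed through a real JSON
# encoder so backslashes and quotes survive intact. python3's json
# module stands in here for `jq -n --arg`.
set -euo pipefail

json_string() {
  # Encode $1 as a JSON string literal, surrounding quotes included.
  python3 -c 'import json, sys; print(json.dumps(sys.argv[1]))' "$1"
}

DETAIL='path\to\file "quoted"'   # would break the sed-based escaping
ENTRY="{\"id\":\"REQ-001\",\"detail\":$(json_string "${DETAIL}")}"
echo "${ENTRY}"

# Round-trip: a strict JSON parser must accept the assembled object.
python3 -c 'import json, sys; json.loads(sys.argv[1])' "${ENTRY}"
echo "valid JSON"
```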
|
|
||
| set -eo pipefail | ||
|
|
||
| EXPECTED_OCP_VERSION="4.22" |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify the OCP version used in CI and manifests
echo "=== Checking OCP version in ci/env.defaults ==="
grep -n "OCP_RELEASE_IMAGE" ci/env.defaults || echo "Not found"
echo ""
echo "=== Checking OCP version in manifests ==="
grep -rn "ocpReleaseImage\|OCP_RELEASE" manifests/ || echo "Not found"

Repository: rh-ecosystem-edge/openshift-dpf
Length of output: 379
🏁 Script executed:
cat test/req-046-dpucluster-version.sh

Repository: rh-ecosystem-edge/openshift-dpf
Length of output: 2185
🏁 Script executed:
# Check if there are other version references or if EXPECTED_OCP_VERSION is meant to be parameterized
echo "=== Searching for EXPECTED_OCP_VERSION usage ==="
rg "EXPECTED_OCP_VERSION" --type sh
echo ""
echo "=== Checking for OCP version references in other test files ==="
rg "4\.21|4\.22" test/ --type sh -i | head -20

Repository: rh-ecosystem-edge/openshift-dpf
Length of output: 1297
Version mismatch: tests expect 4.22 but CI deploys 4.21.0 — will fail in CI.
Hardcoded EXPECTED_OCP_VERSION="4.22" in this file (and also in test/req-002-install-success.sh) conflicts with ci/env.defaults which sets OCP_RELEASE_IMAGE to 4.21.0-multi. Both tests will fail when run against the deployed cluster version.
Either parameterize these version checks via environment variable or update to match the deployed version 4.21.0.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/req-046-dpucluster-version.sh` at line 11, The test hardcodes
EXPECTED_OCP_VERSION="4.22" which conflicts with CI's deployed 4.21.0; update
the test to use an environment-driven value or match CI: replace the hardcoded
EXPECTED_OCP_VERSION usage in test/req-046-dpucluster-version.sh (and similarly
in test/req-002-install-success.sh) to read from an env var like
EXPECTED_OCP_VERSION (falling back to OCP_RELEASE_IMAGE or "4.21.0") so the
check uses the CI-provided version instead of a fixed "4.22".
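The env-driven fallback the prompt asks for is a one-line default expansion; a sketch, where the version strings come from the review thread and `EXPECTED_OCP_VERSION` is unset first only to make the demo deterministic:

```shell
#!/usr/bin/env bash
# Env-driven expected version with a derived fallback. The release image
# value mirrors ci/env.defaults as quoted in the review.
set -euo pipefail

unset EXPECTED_OCP_VERSION
OCP_RELEASE_IMAGE="4.21.0-multi"

# ${VAR:-default} applies only when VAR is unset or empty;
# ${OCP_RELEASE_IMAGE%%-*} strips the "-multi" suffix, leaving "4.21.0".
EXPECTED_OCP_VERSION="${EXPECTED_OCP_VERSION:-${OCP_RELEASE_IMAGE%%-*}}"
echo "expecting OCP ${EXPECTED_OCP_VERSION}"
```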
There was a problem hiding this comment.
Actionable comments posted: 1
🧹 Nitpick comments (1)
requirements/verify-requirements.sh (1)
80-84: Consider extracting all fields in a single `jq` call per requirement.

Currently four separate `jq` invocations occur per iteration. For large requirement sets, this adds process overhead.

♻️ Suggested optimization

 for i in $(seq 0 $((REQ_COUNT - 1))); do
-  REQ_ID=$(jq -r ".requirements[${i}].id" "${REQUIREMENTS_FILE}")
-  REQ_TEXT=$(jq -r ".requirements[${i}].requirement" "${REQUIREMENTS_FILE}")
-  TEST_IMPL=$(jq -r ".requirements[${i}].test_implementation" "${REQUIREMENTS_FILE}")
-  REQ_LABELS_STR=$(jq -r ".requirements[${i}].labels | join(\", \")" "${REQUIREMENTS_FILE}")
+  REQ_DATA=$(jq -r ".requirements[${i}] | [.id, .requirement, .test_implementation, (.labels | join(\", \"))] | @tsv" "${REQUIREMENTS_FILE}")
+  IFS=$'\t' read -r REQ_ID REQ_TEXT TEST_IMPL REQ_LABELS_STR <<< "${REQ_DATA}"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/verify-requirements.sh` around lines 80 - 84, The loop currently runs jq four times per requirement to populate REQ_ID, REQ_TEXT, TEST_IMPL and REQ_LABELS_STR; replace those four invocations with a single jq call that selects all four fields for ".requirements[$i]" (e.g. produce a safe delimiter-separated or JSON-encoded tuple) and then split/assign the returned values into REQ_ID, REQ_TEXT, TEST_IMPL and REQ_LABELS_STR in one step; ensure the jq output handles empty/null fields and the chosen delimiter or encoding avoids clobbering content so the downstream code that uses these variables remains unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@requirements/verify-requirements.sh`:
- Around line 112-132: The temporary file OUTPUT_TMP created with mktemp can be
left behind if the script exits early; add a cleanup function (e.g., cleanup())
that removes OUTPUT_TMP if set and exists, and register it with trap cleanup
EXIT near the top of the script (after initial variable declarations) so the
temp file is removed on any exit; initialize OUTPUT_TMP="" before the loop and
you can keep or remove the inline rm -f "${OUTPUT_TMP}" at the end of the loop
since the trap guarantees cleanup.
---
Nitpick comments:
In `@requirements/verify-requirements.sh`:
- Around line 80-84: The loop currently runs jq four times per requirement to
populate REQ_ID, REQ_TEXT, TEST_IMPL and REQ_LABELS_STR; replace those four
invocations with a single jq call that selects all four fields for
".requirements[$i]" (e.g. produce a safe delimiter-separated or JSON-encoded
tuple) and then split/assign the returned values into REQ_ID, REQ_TEXT,
TEST_IMPL and REQ_LABELS_STR in one step (use REQUIREMENTS_FILE and the existing
loop/REQ_COUNT/REQ_ID variable context); ensure the jq output handles empty/null
fields and the chosen delimiter or encoding avoids clobbering content so the
downstream code that uses REQ_ID, REQ_TEXT, TEST_IMPL and REQ_LABELS_STR remains
unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3a954dab-a097-4f2f-adb0-db1539d81b9b
📒 Files selected for processing (2)
- Makefile
- requirements/verify-requirements.sh
🚧 Files skipped from review as they are similar to previous changes (1)
- Makefile
```bash
OUTPUT_TMP=$(mktemp)

if [[ "${VERBOSE}" == "true" ]]; then
  echo ""
  echo "--------------------------------------------------------------------------------"
  printf " [%s] %s\n" "${REQ_ID}" "${REQ_TEXT}"
  printf " Command: %s\n" "${TEST_IMPL}"
  echo "--------------------------------------------------------------------------------"
  set +e
  (cd "${REPO_ROOT}" && eval "${TEST_IMPL}") 2>&1 | tee "${OUTPUT_TMP}"
  EXIT_CODE=${PIPESTATUS[0]}
  set -e
else
  set +e
  (cd "${REPO_ROOT}" && eval "${TEST_IMPL}") > "${OUTPUT_TMP}" 2>&1
  EXIT_CODE=$?
  set -e
fi

LAST_LINES=$(tail -5 "${OUTPUT_TMP}" | tr '\n' ' ')
rm -f "${OUTPUT_TMP}"
```
Temp file may not be cleaned up on unexpected exit.
If the script exits between mktemp (line 112) and rm -f (line 132), the temp file is orphaned. Consider using a trap to ensure cleanup.
🛡️ Suggested fix
Add a trap near the top of the script (after line 29):
```bash
cleanup() {
  [[ -n "${OUTPUT_TMP:-}" && -f "${OUTPUT_TMP}" ]] && rm -f "${OUTPUT_TMP}"
}
trap cleanup EXIT
```

Then declare `OUTPUT_TMP=""` before the loop and remove the `rm -f` from inside the loop (or keep it for immediate cleanup; the trap provides the safety net).
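The trap pattern above can be exercised in isolation. In this sketch, a child bash process simulates the script dying before its own cleanup line ever runs; the file path and the child invocation are illustrative, not taken from the actual script.

```shell
#!/usr/bin/env bash
# Standalone sketch of trap-based temp-file cleanup. The child process
# registers an EXIT trap and then exits with an error; the trap still
# removes the file it was handed via the environment.
set -euo pipefail

TMP_FILE="$(mktemp)"

OUTPUT_TMP="${TMP_FILE}" bash -c '
  cleanup() { [[ -n "${OUTPUT_TMP:-}" && -f "${OUTPUT_TMP}" ]] && rm -f "${OUTPUT_TMP}"; }
  trap cleanup EXIT
  exit 1
' || true

# The file is gone even though the child never reached an explicit rm.
if [[ -f "${TMP_FILE}" ]]; then
  echo "temp file survived"
else
  echo "temp file cleaned up"
fi
```

This is why the inline `rm -f` inside the loop becomes optional once the trap exists: the trap fires on every exit path, including errors under `set -e`.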
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/verify-requirements.sh` around lines 112 - 132, The temporary
file OUTPUT_TMP created with mktemp can be left behind if the script exits
early; add a cleanup function (e.g., cleanup()) that removes OUTPUT_TMP if set
and exists, and register it with trap cleanup EXIT near the top of the script
(after initial variable declarations) so the temp file is removed on any exit;
initialize OUTPUT_TMP="" before the loop and you can keep or remove the inline
rm -f "${OUTPUT_TMP}" at the end of the loop since the trap guarantees cleanup.
Force-pushed 11a2ae6 to 6fc5a4f
Signed-off-by: Killian Muldoon <kmuldoon@nvidia.com>
Signed-off-by: Killian Muldoon <kmuldoon@nvidia.com>
Force-pushed 6fc5a4f to fc4cc8a
Actionable comments posted: 1
♻️ Duplicate comments (4)
requirements/verify-requirements.sh (4)
199-206: ⚠️ Potential issue | 🟡 Minor
Pipe characters in requirement text could break the markdown table.
If any requirement text or labels contain `|`, the table formatting will break. Consider escaping pipe characters.
🔧 Proposed fix

```diff
 for idx in "${!RESULT_IDS[@]}"; do
   case "${RESULT_STATUSES[${idx}]}" in
     PASS) BADGE="PASS" ;;
     FAIL) BADGE="FAIL" ;;
     *) BADGE="SKIP" ;;
   esac
-  echo "| ${RESULT_IDS[${idx}]} | ${BADGE} | ${RESULT_TITLES[${idx}]} | ${RESULT_LABELS[${idx}]} |"
+  SAFE_TITLE="${RESULT_TITLES[${idx}]//|/\\|}"
+  SAFE_LABELS="${RESULT_LABELS[${idx}]//|/\\|}"
+  echo "| ${RESULT_IDS[${idx}]} | ${BADGE} | ${SAFE_TITLE} | ${SAFE_LABELS} |"
 done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/verify-requirements.sh` around lines 199 - 206, The generated markdown table can break if RESULT_TITLES or RESULT_LABELS contain pipe characters; update the loop that prints rows (the for ...; do block referencing RESULT_IDS, RESULT_STATUSES, RESULT_TITLES, RESULT_LABELS and BADGE) to escape or replace any '|' characters before echoing (e.g., via parameter expansion or a small helper that replaces '|' with '\|' or '|') so the echoed line uses the sanitized values for both TITLE and LABEL when constructing the "| id | badge | title | label |" row.
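The pipe-escaping described above comes down to one bash parameter expansion. A minimal sketch, in which the helper name and sample values are invented for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical escape_pipes helper: replaces every literal | with \| so
# values are safe inside a markdown table row. Sample values are made up.
set -euo pipefail

escape_pipes() {
  local s="$1"
  # ${s//|/\\|} replaces all occurrences of | with \|
  printf '%s' "${s//|/\\|}"
}

TITLE='Supports a|b syntax'
LABELS='area/system|type/functional'

SAFE_TITLE="$(escape_pipes "${TITLE}")"
SAFE_LABELS="$(escape_pipes "${LABELS}")"

echo "| REQ-001 | PASS | ${SAFE_TITLE} | ${SAFE_LABELS} |"
```

Markdown renderers treat an unescaped `|` as a column separator, so without this the row would gain extra cells.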
112-132: ⚠️ Potential issue | 🟡 Minor
Temp file may not be cleaned up on unexpected exit.
If the script exits unexpectedly between `mktemp` (line 112) and `rm -f` (line 132), the temp file is orphaned. Consider adding a trap for cleanup.
🛡️ Suggested fix
Add a trap near the top of the script (after line 29):

```bash
OUTPUT_TMP=""
cleanup() {
  [[ -n "${OUTPUT_TMP}" && -f "${OUTPUT_TMP}" ]] && rm -f "${OUTPUT_TMP}"
}
trap cleanup EXIT
```

This ensures the temp file is cleaned up on any exit path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/verify-requirements.sh` around lines 112 - 132, Initialize OUTPUT_TMP to an empty string near the start of the script, then define a cleanup function that checks if OUTPUT_TMP is non-empty and the file exists and removes it, and register that cleanup via trap on EXIT so the temporary file created by mktemp (OUTPUT_TMP) is removed on any exit path; reference the OUTPUT_TMP variable, mktemp invocation, and add a cleanup() + trap cleanup EXIT to guarantee removal.
35-44: ⚠️ Potential issue | 🟠 Major
Argument parsing is broken for the `--output` option.
The loop uses `for arg in "$@"`, which iterates over the original arguments, but the `shift` commands inside do not affect the loop iteration. When `--output FILE` is passed:
- Line 37 shifts past the current arg
- `${1:-}` reads the wrong value (already shifted)
- Line 43 shifts again unconditionally

This causes arguments to be misinterpreted or skipped.
🐛 Proposed fix

```diff
 OUTPUT_FILE=""
 OUTPUT_JSON=false
 VERBOSE=false
 FILTER_IDS=()
-for arg in "$@"; do
-  case "${arg}" in
-    --output) shift; OUTPUT_FILE="${1:-}" ;;
-    --json) OUTPUT_JSON=true ;;
-    --verbose) VERBOSE=true ;;
-    --output=*) OUTPUT_FILE="${arg#--output=}" ;;
-    REQ-*) FILTER_IDS+=("${arg}") ;;
+while [[ $# -gt 0 ]]; do
+  case "$1" in
+    --output)
+      OUTPUT_FILE="${2:-}"
+      shift 2
+      ;;
+    --output=*)
+      OUTPUT_FILE="${1#--output=}"
+      shift
+      ;;
+    --json)
+      OUTPUT_JSON=true
+      shift
+      ;;
+    --verbose)
+      VERBOSE=true
+      shift
+      ;;
+    REQ-*)
+      FILTER_IDS+=("$1")
+      shift
+      ;;
+    *)
+      shift
+      ;;
   esac
-  shift 2>/dev/null || true
 done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/verify-requirements.sh` around lines 35 - 44, The argument parsing loop using "for arg in \"$@\"" is broken because internal shift calls don't affect the iteration; update the parsing to use a proper shift-aware loop (e.g., "while [[ $# -gt 0 ]]; do arg=\"$1\"; case \"$arg\" in ... esac; shift" pattern) so the --output option correctly consumes its following value and sets OUTPUT_FILE, and other flags still set OUTPUT_JSON, VERBOSE, and append to FILTER_IDS as intended; ensure you handle both "--output FILE" and "--output=FILE" branches and remove the existing for-loop and its unconditional shift.
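The shift-aware loop the prompt describes can be sketched as a standalone script. The option names mirror the review discussion; the demo invocation at the bottom is invented:

```shell
#!/usr/bin/env bash
# Sketch of shift-aware argument parsing: a while loop that consumes
# arguments with shift, so --output correctly takes its value.
set -euo pipefail

OUTPUT_FILE=""
OUTPUT_JSON=false
VERBOSE=false
FILTER_IDS=()

parse_args() {
  while [[ $# -gt 0 ]]; do
    case "$1" in
      --output)   OUTPUT_FILE="${2:-}"; shift 2 ;;
      --output=*) OUTPUT_FILE="${1#--output=}"; shift ;;
      --json)     OUTPUT_JSON=true; shift ;;
      --verbose)  VERBOSE=true; shift ;;
      REQ-*)      FILTER_IDS+=("$1"); shift ;;
      *)          shift ;;  # ignore unknown arguments
    esac
  done
}

parse_args --output results.md --json REQ-001 REQ-005
echo "output=${OUTPUT_FILE} json=${OUTPUT_JSON} filters=${FILTER_IDS[*]}"
```

Unlike `for arg in "$@"`, which snapshots the argument list before the loop starts, `while [[ $# -gt 0 ]]` re-reads the positional parameters each iteration, so `shift 2` really does consume the option's value.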
165-172: ⚠️ Potential issue | 🟠 Major
JSON escaping is incomplete and may produce invalid JSON.
Only double quotes are escaped via `sed 's/"/\\"/g'`. Backslashes, tabs, newlines, and other control characters are not handled. For example, a detail containing `path\to\file` or literal tabs produces invalid JSON.
🐛 Proposed fix: use jq for proper JSON string encoding

```diff
-  DETAIL_ESC=$(echo "${RESULT_DETAILS[${idx}]}" | sed 's/"/\\"/g' | tr -d '\n')
-  TITLE_ESC=$(echo "${RESULT_TITLES[${idx}]}" | sed 's/"/\\"/g')
-  JSON_RESULTS+=$(cat <<ENTRY
-{"id":"${RESULT_IDS[${idx}]}","requirement":"${TITLE_ESC}","status":"${RESULT_STATUSES[${idx}]}","detail":"${DETAIL_ESC}","labels":"${RESULT_LABELS[${idx}]}"}
-ENTRY
-)
+  JSON_RESULTS+=$(jq -n \
+    --arg id "${RESULT_IDS[${idx}]}" \
+    --arg req "${RESULT_TITLES[${idx}]}" \
+    --arg status "${RESULT_STATUSES[${idx}]}" \
+    --arg detail "${RESULT_DETAILS[${idx}]}" \
+    --arg labels "${RESULT_LABELS[${idx}]}" \
+    '{id: $id, requirement: $req, status: $status, detail: $detail, labels: $labels}')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/verify-requirements.sh` around lines 165 - 172, The current JSON assembly in verify-requirements.sh manually escapes only double quotes (variables DETAIL_ESC, TITLE_ESC) which misses backslashes, newlines, tabs, etc., leading to invalid JSON; replace the manual escaping and here-doc JSON concatenation (JSON_RESULTS building using RESULT_IDS, RESULT_TITLES, RESULT_STATUSES, RESULT_DETAILS, RESULT_LABELS) with proper JSON encoding via jq (or bash + jq) so each field is passed through jq's `@json` or built with jq --arg to produce a safe JSON object per result and then append those objects to JSON_RESULTS; update the loop that sets DETAIL_ESC/TITLE_ESC and the cat <<ENTRY block to instead call jq to encode each string and emit the object.
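The jq-based encoding recommended above can be sketched on its own. The field names mirror the result object from the review; the values are invented, including a detail with backslashes, quotes, and a newline that would break the sed-based escaping:

```shell
#!/usr/bin/env bash
# Sketch of building one JSON result object with jq --arg, which escapes
# quotes, backslashes, and control characters correctly. Values are made up.
set -euo pipefail

id='REQ-007'
detail=$'path\\to\\file with "quotes"\nand a second line'

entry="$(jq -cn \
  --arg id "${id}" \
  --arg status "FAIL" \
  --arg detail "${detail}" \
  '{id: $id, status: $status, detail: $detail}')"

echo "${entry}"
# Round-trip through jq to confirm the JSON is valid and lossless.
jq -r '.detail' <<<"${entry}"
```

Because `--arg` always passes the value as a string, there is no category of input that needs special-casing, which is what makes it preferable to hand-rolled `sed` escaping.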
🧹 Nitpick comments (4)
test/req-005-host-trusted-mode.sh (2)
48-49: Add trailing newline at end of file.
The file appears to be missing a trailing newline. POSIX text files should end with a newline character, and some tools may behave unexpectedly without it.
♻️ Proposed fix
```diff
 echo ""
 echo "PASS: all ${DPU_COUNT} DPU(s) running in host-trusted mode (dpuInstallInterface=hostAgent)"
+
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/req-005-host-trusted-mode.sh` around lines 48 - 49, The file ends without a trailing newline which violates POSIX expectations; open test/req-005-host-trusted-mode.sh and add a single newline character after the final line (the echo "PASS: all ${DPU_COUNT} DPU(s) running in host-trusted mode (dpuInstallInterface=hostAgent)" line) so the file terminates with a newline.
31-46: Consider listing non-compliant DPUs on failure for easier debugging.
When the test fails, only a count of non-compliant DPUs is shown, not which specific ones failed. This makes debugging harder in clusters with many DPUs.
♻️ Optional: Track and report non-compliant DPU names
```diff
 NON_HOST_AGENT=0
+NON_HOST_AGENT_NAMES=""
 while IFS=' ' read -r name interface; do
   [[ -z "${name}" ]] && continue
   echo " ${name}: dpuInstallInterface=${interface}"
   if [[ "${interface}" != "hostAgent" ]]; then
     NON_HOST_AGENT=$((NON_HOST_AGENT + 1))
+    NON_HOST_AGENT_NAMES="${NON_HOST_AGENT_NAMES} ${name}"
   fi
 done <<< "${INTERFACES}"

 if [[ ${NON_HOST_AGENT} -gt 0 ]]; then
   echo ""
-  echo "FAIL: ${NON_HOST_AGENT} DPU(s) not using hostAgent (host-trusted mode)"
+  echo "FAIL: ${NON_HOST_AGENT} DPU(s) not using hostAgent (host-trusted mode):${NON_HOST_AGENT_NAMES}"
   exit 1
 fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/req-005-host-trusted-mode.sh` around lines 31 - 46, The test currently only counts non-compliant DPUs (NON_HOST_AGENT) using INTERFACES and the read loop; update the loop to also collect the names of the non-compliant DPUs (e.g., push into an array or space-separated string like NON_HOST_NAMES when interface != "hostAgent"), and on failure print the list (or each name) along with the count before exiting so callers can see which DPU names failed the hostAgent check; keep the existing echo lines and simply append the collected NON_HOST_NAMES to the failure output.

requirements/sync-issues.sh (1)
265-265: Unused variable `PROJECT_ID`.
`PROJECT_ID` is assigned on line 265 but never referenced in the script. The project operations use `PROJECT_NUM` and `OWNER` instead. Consider removing this dead code or documenting its intended future use.
🧹 Proposed fix

```diff
 echo " Found project: #${PROJECT_NUM}"
-PROJECT_ID=$(echo "${PROJECT_LIST}" | jq -r ".projects[] | select(.title == \"${PROJECT_NAME}\") | .id" 2>/dev/null || true)
 if [[ "${DRY_RUN}" == "false" ]]; then
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements/sync-issues.sh` at line 265, The assignment to PROJECT_ID is dead code—PROJECT_ID is set from PROJECT_LIST using PROJECT_NAME but never referenced; remove the unused assignment line (the line that sets PROJECT_ID) to clean up the script, or if PROJECT_ID was intended to be used instead of PROJECT_NUM/OWNER, replace the callers to use PROJECT_ID consistently (search for PROJECT_NUM and OWNER usages) and update downstream logic accordingly; reference the symbols PROJECT_ID, PROJECT_LIST, PROJECT_NAME, PROJECT_NUM, and OWNER when making the change.

requirements/update-results.sh (1)
128-128: `REQ_TEXT` is fetched but never used.
The variable `REQ_TEXT` is assigned on line 128 but not referenced anywhere in the script. This was flagged in a previous review and marked as addressed, but the unused assignment remains. Either remove it or use it in output/logging.
🧹 Proposed fix: remove unused variable

```diff
 REQ_ID=$(jq -r ".requirements[${i}].id" "${REQUIREMENTS_FILE}")
-REQ_TEXT=$(jq -r ".requirements[${i}].requirement" "${REQUIREMENTS_FILE}")
 TEST_IMPL=$(jq -r ".requirements[${i}].test_implementation" "${REQUIREMENTS_FILE}")
```

Or use it in the output:

```diff
-  echo " [${REQ_ID}] Running: ${TEST_IMPL}"
+  echo " [${REQ_ID}] ${REQ_TEXT}"
+  echo "   Running: ${TEST_IMPL}"
```
Verify each finding against the current code and only fix it if needed. In `@requirements/update-results.sh` at line 128, The assignment to REQ_TEXT (REQ_TEXT=$(jq -r ".requirements[${i}].requirement" "${REQUIREMENTS_FILE}")) is unused; either remove this unused variable assignment or actually use REQ_TEXT in the script's output/logging. Fix by deleting the REQ_TEXT assignment if not needed, or replace the no-op with a logging call that references REQ_TEXT (e.g., include it in echo or logger where results for the current requirement are reported) so the extracted requirement text is consumed instead of left unused.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/req-022-dts-metrics-endpoint.sh`:
- Around line 86-94: The failure diagnostic prints ${METRICS_OUTPUT} which is
never assigned because the test writes results to ${METRICS_TMP} and then
removes it; ensure you preserve or read the temp file before deleting it: either
assign METRICS_OUTPUT="$(cat "${METRICS_TMP}")" (or similar) prior to rm -f
"${METRICS_TMP}", or move the rm -f after the failure branch and print the
contents directly from ${METRICS_TMP}; update references around FOUND,
EXPECTED_METRICS, METRICS_TMP and METRICS_OUTPUT accordingly so the "First 10
lines of response" shows actual content.
---
Duplicate comments:
In `@requirements/verify-requirements.sh`:
- Around line 199-206: The generated markdown table can break if RESULT_TITLES
or RESULT_LABELS contain pipe characters; update the loop that prints rows (the
for ...; do block referencing RESULT_IDS, RESULT_STATUSES, RESULT_TITLES,
RESULT_LABELS and BADGE) to escape or replace any '|' characters before echoing
(e.g., via parameter expansion or a small helper that replaces '|' with '\|' or
'|') so the echoed line uses the sanitized values for both TITLE and LABEL
when constructing the "| id | badge | title | label |" row.
- Around line 112-132: Initialize OUTPUT_TMP to an empty string near the start
of the script, then define a cleanup function that checks if OUTPUT_TMP is
non-empty and the file exists and removes it, and register that cleanup via trap
on EXIT so the temporary file created by mktemp (OUTPUT_TMP) is removed on any
exit path; reference the OUTPUT_TMP variable, mktemp invocation, and add a
cleanup() + trap cleanup EXIT to guarantee removal.
- Around line 35-44: The argument parsing loop using "for arg in \"$@\"" is
broken because internal shift calls don't affect the iteration; update the
parsing to use a proper shift-aware loop (e.g., "while [[ $# -gt 0 ]]; do
arg=\"$1\"; case \"$arg\" in ... esac; shift" pattern) so the --output option
correctly consumes its following value and sets OUTPUT_FILE, and other flags
still set OUTPUT_JSON, VERBOSE, and append to FILTER_IDS as intended; ensure you
handle both "--output FILE" and "--output=FILE" branches and remove the existing
for-loop and its unconditional shift.
- Around line 165-172: The current JSON assembly in verify-requirements.sh
manually escapes only double quotes (variables DETAIL_ESC, TITLE_ESC) which
misses backslashes, newlines, tabs, etc., leading to invalid JSON; replace the
manual escaping and here-doc JSON concatenation (JSON_RESULTS building using
RESULT_IDS, RESULT_TITLES, RESULT_STATUSES, RESULT_DETAILS, RESULT_LABELS) with
proper JSON encoding via jq (or bash + jq) so each field is passed through jq's
`@json` or built with jq --arg to produce a safe JSON object per result and then
append those objects to JSON_RESULTS; update the loop that sets
DETAIL_ESC/TITLE_ESC and the cat <<ENTRY block to instead call jq to encode each
string and emit the object.
---
Nitpick comments:
In `@requirements/sync-issues.sh`:
- Line 265: The assignment to PROJECT_ID is dead code—PROJECT_ID is set from
PROJECT_LIST using PROJECT_NAME but never referenced; remove the unused
assignment line (the line that sets PROJECT_ID) to clean up the script, or if
PROJECT_ID was intended to be used instead of PROJECT_NUM/OWNER, replace the
callers to use PROJECT_ID consistently (search for PROJECT_NUM and OWNER usages)
and update downstream logic accordingly; reference the symbols PROJECT_ID,
PROJECT_LIST, PROJECT_NAME, PROJECT_NUM, and OWNER when making the change.
In `@requirements/update-results.sh`:
- Line 128: The assignment to REQ_TEXT (REQ_TEXT=$(jq -r
".requirements[${i}].requirement" "${REQUIREMENTS_FILE}")) is unused; either
remove this unused variable assignment or actually use REQ_TEXT in the script's
output/logging. Fix by deleting the REQ_TEXT assignment if not needed, or
replace the no-op with a logging call that references REQ_TEXT (e.g., include it
in echo or logger where results for the current requirement are reported) so the
extracted requirement text is consumed instead of left unused.
In `@test/req-005-host-trusted-mode.sh`:
- Around line 48-49: The file ends without a trailing newline which violates
POSIX expectations; open test/req-005-host-trusted-mode.sh and add a single
newline character after the final line (the echo "PASS: all ${DPU_COUNT} DPU(s)
running in host-trusted mode (dpuInstallInterface=hostAgent)" line) so the file
terminates with a newline.
- Around line 31-46: The test currently only counts non-compliant DPUs
(NON_HOST_AGENT) using INTERFACES and the read loop; update the loop to also
collect the names of the non-compliant DPUs (e.g., push into an array or
space-separated string like NON_HOST_NAMES when interface != "hostAgent"), and
on failure print the list (or each name) along with the count before exiting so
callers can see which DPU names failed the hostAgent check; keep the existing
echo lines and simply append the collected NON_HOST_NAMES to the failure output.
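The name-collection pattern described in the last prompt can be sketched in isolation. The `INTERFACES` data below is invented, and the sketch only reports instead of calling `exit 1`:

```shell
#!/usr/bin/env bash
# Standalone sketch of counting non-compliant DPUs while also collecting
# their names for the failure message. The DPU list is a made-up example.
set -euo pipefail

INTERFACES='dpu-a hostAgent
dpu-b redfish
dpu-c hostAgent
dpu-d gnoi'

NON_HOST_AGENT=0
NON_HOST_AGENT_NAMES=""
while IFS=' ' read -r name interface; do
  [[ -z "${name}" ]] && continue
  if [[ "${interface}" != "hostAgent" ]]; then
    NON_HOST_AGENT=$((NON_HOST_AGENT + 1))
    NON_HOST_AGENT_NAMES="${NON_HOST_AGENT_NAMES} ${name}"
  fi
done <<< "${INTERFACES}"

echo "FAIL: ${NON_HOST_AGENT} DPU(s) not using hostAgent:${NON_HOST_AGENT_NAMES}"
```

Accumulating into a space-separated string keeps the diff minimal; a bash array would also work if names could contain spaces.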
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 737c75e7-5da2-4f4a-a056-68a3da370779
📒 Files selected for processing (11)
- Makefile
- requirements/requirements.json
- requirements/sync-issues.sh
- requirements/update-results.sh
- requirements/verify-requirements.sh
- test/req-005-host-trusted-mode.sh
- test/req-019-dts-installed.sh
- test/req-022-dts-metrics-endpoint.sh
- test/req-023-dts-metrics-prometheus.sh
- test/req-034-dpu-per-worker.sh
- test/req-047-dpucluster-managed.sh
✅ Files skipped from review due to trivial changes (1)
- requirements/requirements.json
🚧 Files skipped from review as they are similar to previous changes (3)
- Makefile
- test/req-034-dpu-per-worker.sh
- test/req-047-dpucluster-managed.sh
```bash
rm -f "${METRICS_TMP}"

if [[ ${FOUND} -eq 0 ]]; then
  echo "FAIL: none of the expected DTS metrics found (${EXPECTED_METRICS})"
  echo ""
  echo " First 10 lines of response:"
  echo "${METRICS_OUTPUT}" | head -10 | sed 's/^/ /'
  exit 1
fi
```
Bug: METRICS_OUTPUT is undefined, causing empty diagnostic output on failure.
The temp file content is stored in METRICS_TMP and deleted on line 86, but line 92 references METRICS_OUTPUT which is never defined. This results in empty diagnostic output when the test fails.
🐛 Proposed fix
Either save the content before deleting the temp file:

```diff
+METRICS_OUTPUT=$(cat "${METRICS_TMP}")
 rm -f "${METRICS_TMP}"

 if [[ ${FOUND} -eq 0 ]]; then
   echo "FAIL: none of the expected DTS metrics found (${EXPECTED_METRICS})"
   echo ""
   echo " First 10 lines of response:"
   echo "${METRICS_OUTPUT}" | head -10 | sed 's/^/ /'
   exit 1
 fi
```

Or restructure to output before deletion:

```diff
 if [[ ${FOUND} -eq 0 ]]; then
   echo "FAIL: none of the expected DTS metrics found (${EXPECTED_METRICS})"
   echo ""
   echo " First 10 lines of response:"
-  echo "${METRICS_OUTPUT}" | head -10 | sed 's/^/ /'
+  head -10 "${METRICS_TMP}" | sed 's/^/ /'
+  rm -f "${METRICS_TMP}"
   exit 1
 fi
+
+rm -f "${METRICS_TMP}"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
+METRICS_OUTPUT=$(cat "${METRICS_TMP}")
 rm -f "${METRICS_TMP}"
 if [[ ${FOUND} -eq 0 ]]; then
   echo "FAIL: none of the expected DTS metrics found (${EXPECTED_METRICS})"
   echo ""
   echo " First 10 lines of response:"
   echo "${METRICS_OUTPUT}" | head -10 | sed 's/^/ /'
   exit 1
 fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/req-022-dts-metrics-endpoint.sh` around lines 86 - 94, The failure
diagnostic prints ${METRICS_OUTPUT} which is never assigned because the test
writes results to ${METRICS_TMP} and then removes it; ensure you preserve or
read the temp file before deleting it: either assign METRICS_OUTPUT="$(cat
"${METRICS_TMP}")" (or similar) prior to rm -f "${METRICS_TMP}", or move the rm
-f after the failure branch and print the contents directly from ${METRICS_TMP};
update references around FOUND, EXPECTED_METRICS, METRICS_TMP and METRICS_OUTPUT
accordingly so the "First 10 lines of response" shows actual content.
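The capture-before-delete fix described above is a small, general pattern. A minimal sketch with invented file contents (the metric names are placeholders, not real DTS metrics):

```shell
#!/usr/bin/env bash
# Sketch of reading a temp file into a variable before deleting it, so
# failure diagnostics still have content to print. Data is made up.
set -euo pipefail

METRICS_TMP="$(mktemp)"
printf 'example_metric_a 1\nexample_metric_b 2\n' > "${METRICS_TMP}"

# Capture the content first; command substitution strips the trailing newline.
METRICS_OUTPUT="$(cat "${METRICS_TMP}")"
rm -f "${METRICS_TMP}"

# The diagnostic still works even though the file is already gone.
echo "${METRICS_OUTPUT}" | head -10 | sed 's/^/  /'
```

The alternative ordering (print from the file, then delete) avoids holding the content in memory, which matters only for very large responses.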
Actionable comments posted: 2
🧹 Nitpick comments (1)
requirements/requirements.json (1)
29-52: Empty `test_description` fields create blank sections in GitHub issues.
43 of 58 requirements have empty `test_description` values. Per `sync-issues.sh` (lines 125-139), these are included directly in the issue body, resulting in:

```
## Test Description
(blank)

## Test Implementation
```

Consider either:
- Populating with a placeholder like `"TBD"` or `"Not yet defined"`
- Updating `sync-issues.sh` to conditionally omit the section when empty
Verify each finding against the current code and only fix it if needed. In `@requirements/requirements.json` around lines 29 - 52, Several requirement entries (e.g., objects with "id": "REQ-003" and "REQ-004") have empty "test_description" fields which lead to blank "## Test Description" sections in issues; either populate those fields with a placeholder like "TBD" or "Not yet defined" for all requirements missing a value, or update the sync script (sync-issues.sh) to skip rendering the Test Description block when the "test_description" value is empty. Locate the requirement objects in requirements.json and either set "test_description": "TBD" (or similar) for each empty entry, or modify sync-issues.sh where it builds the issue body (the block referenced around lines 125-139) to conditionally include the "## Test Description" header and content only when test_description is non-empty.
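The conditional-omission option can be sketched as a small helper. The `build_body` function name and the inline JSON samples are hypothetical, standing in for the real issue-body construction in `sync-issues.sh`:

```shell
#!/usr/bin/env bash
# Hypothetical build_body helper: emits the "## Test Description" section
# only when the field is non-empty. The inline JSON is sample data, not
# the real requirements.json.
set -euo pipefail

build_body() {
  local desc
  desc="$(jq -r '.test_description // ""' <<<"$1")"
  if [[ -n "${desc}" ]]; then
    printf '## Test Description\n%s\n\n' "${desc}"
  fi
  printf '## Test Implementation\n'
}

build_body '{"id":"REQ-003","test_description":""}'
echo "---"
build_body '{"id":"REQ-019","test_description":"Verify DTS is installed"}'
```

Guarding the section in the script keeps the JSON honest (no filler text) while still producing clean issue bodies.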
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@requirements/requirements.json`:
- Around line 593-604: REQ-050 is a duplicate of REQ-001 (same requirement text
and same test_implementation "test/req-001-amd64-nodes.sh"); either remove
REQ-050 or make it distinct by updating its "description" and/or
"test_implementation" to reflect the intended difference (e.g., point to a new
test script or add clarifying description and different labels). Locate the
objects named "REQ-050" and "REQ-001" in the JSON and then either delete the
REQ-050 entry or modify its "description", "test_implementation", and "labels"
to clearly distinguish it from REQ-001.
- Around line 413-460: REQ-035 through REQ-038 contain placeholder tokens
("XXX"/"XXXX") in the requirement text; update each requirement string to a
concrete target (e.g., replace "XXX worker nodes" in REQ-035, "XXX pods per
worker node" in REQ-036, "XXXX Gbps" in REQ-037, and "XXX minutes" in REQ-038)
or, if targets are undecided, add a clear draft marker by appending a label like
"status/draft" or "tbd" to the labels array for each requirement and add a short
note in the description field indicating the value is to be determined; also
ensure test_description or test_implementation fields are updated or left as
"TBD" when placeholders remain so the sync to GitHub issues won’t publish
ambiguous "XXX"/"XXXX" values.
---
Nitpick comments:
In `@requirements/requirements.json`:
- Around line 29-52: Several requirement entries (e.g., objects with "id":
"REQ-003" and "REQ-004") have empty "test_description" fields which lead to
blank "## Test Description" sections in issues; either populate those fields
with a placeholder like "TBD" or "Not yet defined" for all requirements missing
a value, or update the sync script (sync-issues.sh) to skip rendering the Test
Description block when the "test_description" value is empty. Locate the
requirement objects in requirements.json and either set "test_description":
"TBD" (or similar) for each empty entry, or modify sync-issues.sh where it
builds the issue body (the block referenced around lines 125-139) to
conditionally include the "## Test Description" header and content only when
test_description is non-empty.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7773c7f2-7068-4770-be1e-d5ce80f1c086
📒 Files selected for processing (1)
requirements/requirements.json
```json
{
  "id": "REQ-035",
  "requirement": "DPF on OCP can manage XXX worker nodes equipped with DPUs",
  "description": "",
  "test_description": "",
  "test_implementation": "",
  "labels": [
    "area/scale-performance",
    "type/functional"
  ],
  "subtasks": []
},
{
  "id": "REQ-036",
  "requirement": "DPF on OCP can manage XXX pods per worker node equipped with DPUs",
  "description": "",
  "test_description": "",
  "test_implementation": "",
  "labels": [
    "area/scale-performance",
    "type/functional"
  ],
  "subtasks": []
},
{
  "id": "REQ-037",
  "requirement": "Traffic between pods can reach XXXX Gbps",
  "description": "",
  "test_description": "",
  "test_implementation": "",
  "labels": [
    "area/scale-performance",
    "type/functional"
  ],
  "subtasks": []
},
{
  "id": "REQ-038",
  "requirement": "Deploys a new DPU with the OVN-Kubernetes and HBN DPUService in XXX minutes",
  "description": "",
  "test_description": "",
  "test_implementation": "",
  "labels": [
    "area/scale-performance",
    "type/functional"
  ],
  "subtasks": []
},
```
Placeholder values need to be defined.
REQ-035 through REQ-038 contain XXX / XXXX placeholders for scale and performance targets:
| ID | Placeholder |
|---|---|
| REQ-035 | "XXX worker nodes" |
| REQ-036 | "XXX pods per worker node" |
| REQ-037 | "XXXX Gbps" |
| REQ-038 | "XXX minutes" |
These will be synced to GitHub issues as-is. Consider defining concrete targets or marking these requirements as draft/TBD in the labels.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/requirements.json` around lines 413 - 460, REQ-035 through
REQ-038 contain placeholder tokens ("XXX"/"XXXX") in the requirement text;
update each requirement string to a concrete target (e.g., replace "XXX worker
nodes" in REQ-035, "XXX pods per worker node" in REQ-036, "XXXX Gbps" in
REQ-037, and "XXX minutes" in REQ-038) or, if targets are undecided, add a clear
draft marker by appending a label like "status/draft" or "tbd" to the labels
array for each requirement and add a short note in the description field
indicating the value is to be determined; also ensure test_description or
test_implementation fields are updated or left as "TBD" when placeholders remain
so the sync to GitHub issues won’t publish ambiguous "XXX"/"XXXX" values.
```json
{
  "id": "REQ-050",
  "requirement": "DPF on OCP supports amd64 worker and control plane nodes",
  "description": "",
  "test_description": "Verify all cluster nodes report amd64 architecture via node status",
  "test_implementation": "test/req-001-amd64-nodes.sh",
  "labels": [
    "area/system",
    "type/non-functional"
  ],
  "subtasks": []
},
```
Duplicate requirement detected.
REQ-050 is identical to REQ-001 — same requirement text and same test_implementation file (test/req-001-amd64-nodes.sh). The only difference is the label (type/non-functional vs type/functional).
This will create two separate GitHub issues with nearly identical content. If this duplication is intentional for categorization purposes, consider:
- Differentiating the test scripts
- Adding clarifying text to the `description` field
If unintentional, remove the duplicate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@requirements/requirements.json` around lines 593 - 604, REQ-050 is a
duplicate of REQ-001 (same requirement text and same test_implementation
"test/req-001-amd64-nodes.sh"); either remove REQ-050 or make it distinct by
updating its "description" and/or "test_implementation" to reflect the intended
difference (e.g., point to a new test script or add clarifying description and
different labels). Locate the objects named "REQ-050" and "REQ-001" in the JSON
and then either delete the REQ-050 entry or modify its "description",
"test_implementation", and "labels" to clearly distinguish it from REQ-001.
Signed-off-by: Killian Muldoon <kmuldoon@nvidia.com>
Force-pushed 4b5ec5c to 6ff9897
Summary by CodeRabbit
New Features
Documentation
Tests
Chores