
Report unit test files with no result#3105

Merged
yongwww merged 5 commits into flashinfer-ai:main from dierksen:jdierksen/report-missing-junit
Apr 23, 2026

Conversation

@dierksen (Collaborator) commented Apr 17, 2026

📌 Description

Currently, we have a handful of tests during parallel runs that are being OOM-killed due to high memory usage. I previously attempted to resolve this in #2961, but some of the tests need a little more pruning before they can be re-added as solo runs, since they take too long in addition to consuming large amounts of memory.

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Bug Fixes

    • Separate passed/failed tests from tests that produced no-result artifacts; exclude "no result" from failed counts, ensure exit status reflects failures and no-results, and ignore skipped jobs in parallel runs.
    • Remove stale result files before runs to prevent misleading outcomes; treat missing result artifacts as "no result" instead of pass/fail.
  • Chores

    • Deterministic per-test result naming to avoid collisions, improved parallel runner metadata, sampled test-case counting in sanity mode, and enhanced execution summary with explicit counts and a list of no-result tests.

@dierksen (Collaborator, Author):

/bot run

@coderabbitai Bot (Contributor) commented Apr 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds deterministic per-test junit XML path computation, centralizes recording of failed and "no result" tests, updates sequential and parallel runners to treat missing JUnit artifacts as "NO RESULT", and updates final summary to report and list no-result tests.

Changes

All changes are in scripts/test_utils.sh:

  • Test orchestration & result handling: added junit_file_for_test for deterministic per-test junit paths; delete pre-existing junit files before runs; propagate junit path metadata for parallel jobs; classify tests as pass/fail only when a junit file exists; a missing junit file is recorded as NO RESULT.
  • Helpers & globals: introduced record_failed_test, record_no_result_test, describe_missing_artifacts, and the globals NO_RESULT_TESTS and NO_RESULT_COUNT; centralized EXIT_CODE updates; the collator now ignores SKIPPED* jobs and counts sampled PASSED/FAILED payloads in sanity mode.
  • Summary & reporting: print_execution_summary recalculates the failed count as TOTAL - PASSED - NO_RESULT, prints the NO RESULT count with a list of affected tests, and adjusts the overall exit semantics accordingly.
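The path computation described above can be sketched as follows. The function name and the `/` → `_` substitution come from the review discussion below; `JUNIT_DIR` and the sample test path are illustrative, and the actual definition lives in scripts/test_utils.sh:

```shell
# Deterministic per-test JUnit path: flatten the test file path into a
# single filename under JUNIT_DIR by replacing every "/" with "_".
JUNIT_DIR=./junit

junit_file_for_test() {
    local test_file=$1
    echo "${JUNIT_DIR}/${test_file//\//_}.xml"
}

junit_file_for_test tests/attention/test_decode.py
# prints ./junit/tests_attention_test_decode.py.xml
```

Because the mapping is deterministic, the runner can `rm -f` the expected file before the run and treat its absence afterwards as "NO RESULT" rather than pass or fail.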

Sequence Diagram(s)

sequenceDiagram
  participant Runner
  participant PyTest
  participant FS as Filesystem
  participant Collator

  Runner->>FS: compute junit_file_for_test(test.py)
  Runner->>FS: rm -f junit_file
  Runner->>PyTest: run pytest --junitxml=junit_file
  PyTest-->>FS: write junit_file (or not)
  PyTest-->>Runner: return exit status
  Runner->>Collator: submit job metadata (status, junit_file)
  Collator->>FS: check junit_file and marker files
  alt junit exists
    Collator->>Collator: increment PASSED_TESTS / FAILED_TESTS based on junit contents
  else missing junit
    Collator->>Collator: record_no_result_test(test.py)
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • sricketts
  • yzh119
  • aleozlx
  • nv-yunzheq
  • samuellees

Poem

🐰 I hopped through logs and junit trails,

I sniffed the files where output fails.
When artifacts vanish from the night,
I mark "no result" and set things right.
Hooray for tests — I count them bright!

🚥 Pre-merge checks (5 passed)

  • Title check ✅ Passed: The title clearly summarizes the main change: adding reporting for test files that produce no result, which directly reflects the PR's objective to identify and track OOM-killed or incomplete tests.
  • Description check ✅ Passed: The description includes context about the problem (tests being OOM-killed during parallel runs) and mentions a related PR, but lacks specific technical details about the implemented solution and does not fill the Related Issues section.
  • Docstring Coverage ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.
  • Linked Issues check ✅ Passed: Skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check ✅ Passed: Skipped because no linked issues were found for this pull request.


@flashinfer-bot (Collaborator):

GitLab MR !563 has been created, and the CI pipeline #48809567 is currently running. I'll report back once the pipeline job completes.

@gemini-code-assist Bot (Contributor) left a comment

Code Review

This pull request enhances the test utility script by introducing tracking for tests that fail to produce result artifacts, categorized as "No result." It adds helper functions for recording test outcomes and describing missing artifacts, while updating the execution summary to include these new metrics. Review feedback suggests using printf for joining array elements to avoid Bash IFS limitations and recommends deduplicating the failed_count calculation for better maintainability.

Comment threads (2): scripts/test_utils.sh
@coderabbitai Bot (Contributor) left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
scripts/test_utils.sh (1)

655-669: ⚠️ Potential issue | 🔴 Critical

Critical: read in the sort loop drops junit_file into file_index, corrupting the pid extraction.

test_result_files[$pid] now stores 5 colon-separated fields (result_file:test_file:log_file:file_index:junit_file, set at line 632), but this loop still reads only 4 variables:

IFS=':' read -r result_file test_file log_file file_index <<< "${test_result_files[$pid]}"

With 4 target vars, Bash packs the remaining input (including the separator) into the last variable, so file_index becomes "<idx>:<abs/junit/path>.xml". Then:

sorted_pids+=("$file_index:$pid")   # e.g. "1:/abs/junit/tests_foo.xml:12345"
...
local pid="${entry#*:}"              # strips only "1:" → "/abs/junit/tests_foo.xml:12345"

So pid is no longer a real pid, the subsequent test_result_files[$pid] lookup returns empty, the [ -f "$result_file" ] check at line 678 takes the else branch, and every parallel test result is recorded as NO RESULT, regardless of actual pass/fail. EXIT_CODE is also forced to 1 unconditionally.

This silently inverts the intent of the PR on the parallel path (which task_run_unit_tests.sh enables by default via PARALLEL_TESTS=true).

🐛 Proposed fix
     for pid in "${!test_result_files[@]}"; do
-        local result_file test_file log_file file_index
-        IFS=':' read -r result_file test_file log_file file_index <<< "${test_result_files[$pid]}"
+        local result_file test_file log_file file_index junit_file
+        IFS=':' read -r result_file test_file log_file file_index junit_file <<< "${test_result_files[$pid]}"
         sorted_pids+=("$file_index:$pid")
     done

Alternatively, keep ${entry#*:} robust by using longest-match when extracting the pid: local pid="${entry##*:}". The read fix above is still required so downstream consumers see a correct junit_file.
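The field-packing behavior behind this bug can be reproduced in isolation. The sample record below is hypothetical, but it has the same five colon-separated fields as test_result_files:

```shell
# With four target variables and five colon-separated fields, bash
# stuffs the remainder (separators included) into the LAST variable.
record="res.txt:tests/foo.py:run.log:1:/abs/junit/tests_foo.xml"

IFS=':' read -r result_file test_file log_file file_index <<< "$record"
echo "$file_index"    # prints 1:/abs/junit/tests_foo.xml  (corrupted index)

# Reading all five fields keeps file_index clean:
IFS=':' read -r result_file test_file log_file file_index junit_file <<< "$record"
echo "$file_index"    # prints 1
echo "$junit_file"    # prints /abs/junit/tests_foo.xml
```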

🧹 Nitpick comments (2)
scripts/test_utils.sh (2)

197-212: Minor: local IFS=', ' joins with only the first character.

${missing[*]} expansion uses the first character of IFS as the separator, so the resulting string is comma-separated (the space is ignored for joining, it just happens to appear in the literal JUnit path). The current output is correct, but the ' ' in IFS=', ' is misleading — consider local IFS=',' and include an explicit space in a join if desired, e.g. a small loop or printf.
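A quick demonstration of the join behavior, with a hypothetical missing array standing in for the one built by describe_missing_artifacts:

```shell
# "${arr[*]}" joins with only the FIRST character of IFS, so IFS=', '
# yields comma-only separators. An explicit loop joins with ", ".
missing=("junit XML" "result marker")

join_first_char() { local IFS=', '; echo "${missing[*]}"; }
join_first_char    # prints: junit XML,result marker

join_explicit() {
    local out="" item
    for item in "${missing[@]}"; do
        out+="${out:+, }$item"    # prepend ", " for every item after the first
    done
    echo "$out"
}
join_explicit      # prints: junit XML, result marker
```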


515-590: Verify: job-info field parsing assumes colon-free paths.

job_info is packed as pid:test_file:result_file:log_file:file_index:junit_file and parsed with IFS=':' read -r into 6 variables at line 631. Since junit_file is last, trailing colons in its path would be tolerated, but a colon anywhere in JUNIT_DIR (from realpath ./junit) or in the working directory path would shift earlier fields and misparse. In typical CI this is fine, but if JUNIT_DIR is ever overridden to a path containing :, downstream logic (including the sort loop at line 658) will silently break.

Consider using a different delimiter (e.g. tab or $'\x1f') or emitting structured lines (one field per line) to eliminate this fragility.
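A sketch of the suggested delimiter swap, using illustrative metadata values (the field names mirror the job_info layout described above):

```shell
# Pack job metadata with the ASCII unit separator (0x1f) instead of ":",
# so a colon anywhere in a path no longer shifts the fields.
SEP=$'\x1f'
pid=12345
test_file="tests/foo.py"
junit_file="/mnt/ci:cache/junit/tests_foo.xml"   # note the ":" in the path

job_info="${pid}${SEP}${test_file}${SEP}${junit_file}"

IFS="$SEP" read -r got_pid got_test got_junit <<< "$job_info"
echo "$got_junit"   # the colon-containing path round-trips intact
```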


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ecd7cf45-3781-43a1-bbfa-6f2862c7510e

📥 Commits

Reviewing files that changed from the base of the PR and between 0e18a1c and 5685596.

📒 Files selected for processing (1)
  • scripts/test_utils.sh

@coderabbitai Bot (Contributor) left a comment


⚠️ Outside diff range comments (1)
scripts/test_utils.sh (1)

687-710: ⚠️ Potential issue | 🟡 Minor

Preserve sanity case counts before the no-result early exit.

Line 687 can continue before consuming the PASSED:total:sampled / FAILED:total:sampled metadata. In parallel sanity mode, a no-result file with a result marker will be counted as executed but omitted from coverage totals.

Proposed fix
             TOTAL_TESTS=$((TOTAL_TESTS + 1))
 
+            if [ "$mode" = "sanity" ] && [[ "$result" == PASSED* || "$result" == FAILED* ]]; then
+                local total_in_file sampled_in_file
+                # shellcheck disable=SC2034  # status is part of the read but unused
+                IFS=':' read -r _ total_in_file sampled_in_file <<< "$result"
+                TOTAL_TEST_CASES=$((TOTAL_TEST_CASES + total_in_file))
+                SAMPLED_TEST_CASES=$((SAMPLED_TEST_CASES + sampled_in_file))
+            fi
+
             if [ ! -f "$junit_file" ]; then
                 echo "⚠️  NO RESULT: $test_file (missing JUnit XML: $junit_file)"
                 record_no_result_test "$test_file"
                 continue
             fi
 
             if [[ "$result" == PASSED* ]]; then
                 PASSED_TESTS=$((PASSED_TESTS + 1))
-                if [ "$mode" = "sanity" ]; then
-                    local total_in_file sampled_in_file
-                    # shellcheck disable=SC2034  # status is part of the read but unused
-                    IFS=':' read -r _ total_in_file sampled_in_file <<< "$result"
-                    TOTAL_TEST_CASES=$((TOTAL_TEST_CASES + total_in_file))
-                    SAMPLED_TEST_CASES=$((SAMPLED_TEST_CASES + sampled_in_file))
-                fi
             elif [[ "$result" == FAILED* ]]; then
                 record_failed_test "$test_file"
-                if [ "$mode" = "sanity" ]; then
-                    local total_in_file sampled_in_file
-                    # shellcheck disable=SC2034  # status is part of the read but unused
-                    IFS=':' read -r _ total_in_file sampled_in_file <<< "$result"
-                    TOTAL_TEST_CASES=$((TOTAL_TEST_CASES + total_in_file))
-                    SAMPLED_TEST_CASES=$((SAMPLED_TEST_CASES + sampled_in_file))
-                fi
             fi
🧹 Nitpick comments (1)
scripts/test_utils.sh (1)

174-178: Consider making JUnit paths collision-resistant for robustness.

Line 177's '/' → '_' mapping is theoretically lossy. While the current test file set has no collisions, distinct test files could map to the same XML name if naming patterns overlap (e.g., tests/foo_bar/test_x.py and tests/foo/bar_test_x.py). With the new rm -f in parallel jobs, a collision could cause one job to delete another's JUnit artifact, corrupting no-result reporting. Adding a checksum suffix would make the mapping collision-resistant for future-proofing.

Suggested direction
 junit_file_for_test() {
     local test_file=$1
-    echo "${JUNIT_DIR}/${test_file//\//_}.xml"
+    local safe_name
+    local suffix
+    safe_name=${test_file//\//_}
+    suffix=$(printf '%s' "$test_file" | cksum | awk '{print $1}')
+    echo "${JUNIT_DIR}/${safe_name}.${suffix}.xml"
 }
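To illustrate the collision and the effect of the suffix, here is a runnable sketch; plain_name and suffixed_name are hypothetical stand-ins for the before/after versions of junit_file_for_test:

```shell
JUNIT_DIR=./junit

# Before: flatten "/" to "_" only.
plain_name() { echo "${JUNIT_DIR}/${1//\//_}.xml"; }

# After: append a CRC (via coreutils cksum) of the ORIGINAL path so that
# distinct paths cannot collapse to the same filename.
suffixed_name() {
    local safe=${1//\//_} sum
    sum=$(printf '%s' "$1" | cksum | awk '{print $1}')
    echo "${JUNIT_DIR}/${safe}.${sum}.xml"
}

plain_name tests/foo_bar/test_x.py      # ./junit/tests_foo_bar_test_x.py.xml
plain_name tests/foo/bar_test_x.py      # ./junit/tests_foo_bar_test_x.py.xml  (collision!)
suffixed_name tests/foo_bar/test_x.py   # distinct, checksum-suffixed path
suffixed_name tests/foo/bar_test_x.py   # distinct, checksum-suffixed path
```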

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5d4d4a35-957b-4c70-b3b2-065c0908636f

📥 Commits

Reviewing files that changed from the base of the PR and between 5685596 and 49dd90d.

📒 Files selected for processing (1)
  • scripts/test_utils.sh

@dierksen (Collaborator, Author):

/bot run

@flashinfer-bot (Collaborator):

GitLab MR !563 has been updated with latest changes, and the CI pipeline #49008058 is currently running. I'll report back once the pipeline job completes.

@coderabbitai Bot (Contributor) left a comment


⚠️ Outside diff range comments (1)
scripts/test_utils.sh (1)

681-709: ⚠️ Potential issue | 🟠 Major

Handle malformed result markers so CI cannot exit green.

If $result_file exists but is empty/corrupt and $junit_file exists, this block increments TOTAL_TESTS but records neither failed nor no-result, leaving EXIT_CODE unchanged even though the summary derives a failure. Since scripts/task_run_unit_tests.sh exits with EXIT_CODE, classify unknown markers explicitly.

🐛 Proposed fix
-            if [[ "$result" == PASSED* ]]; then
-                PASSED_TESTS=$((PASSED_TESTS + 1))
-            elif [[ "$result" == FAILED* ]]; then
-                record_failed_test "$test_file"
-            fi
+            case "$result" in
+                PASSED|PASSED:*)
+                    PASSED_TESTS=$((PASSED_TESTS + 1))
+                    ;;
+                FAILED|FAILED:*)
+                    record_failed_test "$test_file"
+                    ;;
+                *)
+                    echo "⚠️  NO RESULT: $test_file (unrecognized result marker: $result_file)"
+                    record_no_result_test "$test_file"
+                    ;;
+            esac

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d522b626-cbd9-4f59-8d5c-31408a6a9e11

📥 Commits

Reviewing files that changed from the base of the PR and between 49dd90d and 30789df.

📒 Files selected for processing (1)
  • scripts/test_utils.sh

Comment thread scripts/test_utils.sh
@dierksen (Collaborator, Author) commented Apr 20, 2026 via email

@yongwww added the run-ci label Apr 22, 2026
@yongwww enabled auto-merge (squash) April 22, 2026 17:35
@coderabbitai Bot (Contributor) left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/test_utils.sh`:
- Around line 210-214: The record_no_result_test function updates
NO_RESULT_TESTS and NO_RESULT_COUNT but doesn't mark the overall run as failed;
modify record_no_result_test to set a non-zero EXIT_CODE (e.g., EXIT_CODE=1)
when invoked so suites that only have missing JUnit/result artifacts are treated
as failures; update the function (record_no_result_test) to assign EXIT_CODE=1
after incrementing NO_RESULT_COUNT and before returning.
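A minimal sketch of the suggested change, assuming the globals described in the walkthrough; the real function body in scripts/test_utils.sh may differ:

```shell
NO_RESULT_TESTS=()
NO_RESULT_COUNT=0
EXIT_CODE=0

record_no_result_test() {
    NO_RESULT_TESTS+=("$1")
    NO_RESULT_COUNT=$((NO_RESULT_COUNT + 1))
    EXIT_CODE=1   # a missing artifact must fail the overall run
}

record_no_result_test tests/foo.py
echo "$NO_RESULT_COUNT $EXIT_CODE"   # prints 1 1
```

With EXIT_CODE set here, a suite whose only anomalies are missing JUnit artifacts can no longer exit green.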

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c96b1516-6db9-4feb-b7f6-0df17f1f22a5

📥 Commits

Reviewing files that changed from the base of the PR and between 651dd97 and 5f3b154.

📒 Files selected for processing (1)
  • scripts/test_utils.sh

Comment thread scripts/test_utils.sh
@yongwww merged commit 53bc819 into flashinfer-ai:main Apr 23, 2026
36 checks passed