
[TRTLLM-11257][fix] release GPU memory and FDs in MnnvlMemory on pidfd failure to prevent leak #11979

Open
zhaoyangwang-nvidia wants to merge 2 commits into NVIDIA:main from zhaoyangwang-nvidia:fix-nvl-bug

Conversation

@zhaoyangwang-nvidia
Collaborator

@zhaoyangwang-nvidia zhaoyangwang-nvidia commented Mar 6, 2026

Summary by CodeRabbit

  • Bug Fixes
    • Enhanced error handling during memory operations to ensure proper resource cleanup when failures occur, preventing potential resource leaks.

Description

When NVLink one-sided communication is used for MoE, workspace allocation in nvlink_one_sided.py calls MnnvlMemory(mapping, workspace_size_per_rank), which allocates via cuMemCreate + cuMemExportToShareableHandle and shares across processes using pidfd_open / pidfd_getfd. If pidfd_open or pidfd_getfd fails (e.g., EPERM in containers without SYS_PTRACE), the code previously raised without releasing resources created in the current attempt, including the CUDA allocation handle, exported shareable handle (FD for POSIX handle type), and any already-opened pidfds / duplicated remote FDs. Because self.WORKSPACE remains None, later retries could repeat this path, causing cumulative GPU memory and FD leaks.

This PR fixes the failure path in the POSIX (non-FABRIC) branch of open_mnnvl_memory in tensorrt_llm/_mnnvl_utils.py by ensuring proper cleanup before re-raising: it closes the exported shareable FD when applicable, calls cuMemRelease on the cuMemCreate allocation handle, and closes any pidfds and duplicated remote FDs opened during the attempt. Cleanup errors are logged as warnings and do not mask the original exception.
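The cleanup-before-raise pattern described above can be sketched in miniature. This is an illustrative Python sketch, not the actual TensorRT-LLM code: `open_shared_memory`, `release_handle`, and `fail_at` are hypothetical stand-ins, with plain `/dev/null` FDs in place of pidfd-derived FDs and a callback in place of cuMemRelease, to show how every resource created in the failing attempt is released before the exception propagates.

```python
import os


def open_shared_memory(peer_pids, release_handle, fail_at=None):
    """Illustrative sketch (not the actual TensorRT-LLM code) of the
    cleanup-before-raise pattern: the allocation handle, the exported
    shareable FD, and any per-peer FDs created in the current attempt
    are all released before the exception propagates, so a retry
    cannot accumulate leaked GPU memory or FDs.

    ``release_handle`` stands in for cuMemRelease on the cuMemCreate
    handle; ``fail_at`` injects a failure after that many peers,
    standing in for pidfd_getfd returning EPERM in a container that
    lacks SYS_PTRACE.
    """
    exported_fd = os.open(os.devnull, os.O_RDONLY)  # stand-in exported FD
    remote_fds = []
    try:
        for i, _pid in enumerate(peer_pids):
            if fail_at is not None and i == fail_at:
                raise RuntimeError("pidfd_getfd failed: EPERM")
            # stand-in for a duplicated remote FD from pidfd_getfd
            remote_fds.append(os.open(os.devnull, os.O_RDONLY))
        return exported_fd, remote_fds
    except RuntimeError:
        # Best-effort cleanup: never mask the original exception.
        for fd in remote_fds:
            try:
                os.close(fd)
            except OSError:
                pass
        try:
            os.close(exported_fd)
        except OSError:
            pass
        try:
            release_handle()
        except RuntimeError:
            pass  # would be logger.warning(...) in the real code
        raise
```

On the failure path the function re-raises the original error after cleanup, so callers see the same exception as before the fix; only the leak is gone.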

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@zhaoyangwang-nvidia
Collaborator Author

/bot help

@github-actions

github-actions bot commented Mar 6, 2026

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@zhaoyangwang-nvidia
Collaborator Author

/bot run --stage-list "GB200-8_GPUs-2_Nodes-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-2"

1 similar comment
@sunnyqgg
Collaborator

sunnyqgg commented Mar 6, 2026

/bot run --stage-list "GB200-8_GPUs-2_Nodes-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-2"

@sunnyqgg
Collaborator

sunnyqgg commented Mar 6, 2026

/bot run

1 similar comment
@zhaoyangwang-nvidia
Collaborator Author

/bot run


@zhaoyangwang-nvidia
Collaborator Author

/bot run

@zhaoyangwang-nvidia
Collaborator Author

/bot kill

@zhaoyangwang-nvidia
Collaborator Author

/bot run

1 similar comment
@zhaoyangwang-nvidia
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #38166 [ run ] triggered by Bot. Commit: a44a6bb Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38166 [ run ] completed with state SUCCESS. Commit: a44a6bb
/LLM/main/L0_MergeRequest_PR pipeline #29569 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@zhaoyangwang-nvidia zhaoyangwang-nvidia changed the title [TRTLLM-11257][fix] release MNNVL workspace on NVLinkOneSided failure and skip retry to fix MoE OOM [TRTLLM-11257][fix] release GPU memory and FDs in MnnvlMemory on pidfd failure to prevent leak Mar 9, 2026
@zhaoyangwang-nvidia
Collaborator Author

/bot run

@github-actions

github-actions bot commented Mar 9, 2026

👎 Promotion blocked, new vulnerability found

Vulnerability report

Component: xgrammar
Vulnerability: CVE-2026-25048
Severity: HIGH
Description: xgrammar is an open-source library for efficient, flexible, and portable structured generation. Prior to version 0.1.32, multi-level nested syntax caused a segmentation fault (core dumped). This issue has been patched in version 0.1.32.

@zhaoyangwang-nvidia
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #38251 [ run ] triggered by Bot. Commit: f0280cb Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38251 [ run ] completed with state SUCCESS. Commit: f0280cb
/LLM/main/L0_MergeRequest_PR pipeline #29635 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@zhaoyangwang-nvidia
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #38351 [ run ] triggered by Bot. Commit: f0280cb Link to invocation

@zhaoyangwang-nvidia
Collaborator Author

/bot run

@zhaoyangwang-nvidia
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #38357 [ run ] triggered by Bot. Commit: 29636ae Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38359 [ run ] triggered by Bot. Commit: 29636ae Link to invocation

@zhaoyangwang-nvidia
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #38419 [ run ] triggered by Bot. Commit: 29636ae Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38419 [ run ] completed with state SUCCESS. Commit: 29636ae
/LLM/main/L0_MergeRequest_PR pipeline #29779 completed with status: 'SUCCESS'

Link to invocation

@EmmaQiaoCh
Collaborator

/bot run --stage-list "GB200-8_GPUs-2_Nodes-PyTorch-2"

@tensorrt-cicd
Collaborator

PR_Github #38517 [ run ] triggered by Bot. Commit: 29636ae Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38517 [ run ] completed with state SUCCESS. Commit: 29636ae
/LLM/main/L0_MergeRequest_PR pipeline #29865 (Partly Tested) completed with status: 'SUCCESS'

Link to invocation

@zhaoyangwang-nvidia zhaoyangwang-nvidia marked this pull request as ready for review March 11, 2026 08:42
@zhaoyangwang-nvidia
Collaborator Author

/bot run

@zhaoyangwang-nvidia
Collaborator Author

Hi @hlu1, I'd like to check whether you're familiar with this part of the code. If so, could you please help review this change when you have time? Thanks a lot!

@coderabbitai
Contributor

coderabbitai bot commented Mar 11, 2026

📝 Walkthrough


Added error path cleanup in open_mnnvl_memory function to release resources (file descriptors and GPU memory) before raising exceptions on pidfd_getfd failure, preventing resource leaks.

Changes

Cohort / File(s) Summary
Error Path Resource Cleanup
tensorrt_llm/_mnnvl_utils.py
Added cleanup logic on pidfd_getfd failure: closes exported_fabric_handle if int, releases GPU memory via cuMemRelease, closes all opened pidfds and remote_fds, then raises RuntimeError with augmented message. All cleanup operations include error logging/warning but continue on individual failures.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): The pull request title clearly identifies the main change: fixing resource leaks in MnnvlMemory by releasing GPU memory and file descriptors on pidfd failure.
  • Description check (✅ Passed): The PR description is comprehensive and explains the issue, solution, and testing approach clearly.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_mnnvl_utils.py (1)

231-279: ⚠️ Potential issue | 🔴 Critical

Handle pidfd_open() failures with the same cleanup path.

This only cleans up after pidfd_getfd() fails. If pidfd_open() fails after one or more earlier pidfds were opened, the exported shareable FD, allocated_mem_handle, and any collected pidfds still leak, so retries can still accumulate GPU memory and FDs. Please move this cleanup into a shared helper (or a try/except covering both loops) and call it from both failure sites.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tensorrt_llm/_mnnvl_utils.py` around lines 231 - 279, The pidfd_open failure
path currently doesn't run the same cleanup as the pidfd_getfd failure path,
leaking exported_fabric_handle, allocated_mem_handle and any opened pidfds; wrap
the pidfd_open/pidfd_getfd loops in a single try/except or extract the cleanup
into a helper (e.g., _cleanup_on_fd_failure) and call it from both failure
sites. Ensure the helper or except block closes exported_fabric_handle if it's
an int, calls _check_cu_result(cuda.cuMemRelease(allocated_mem_handle)) inside a
try/except with a warning on failure, closes any entries in pidfds and
remote_fds, and then re-raises a RuntimeError with the original error message
constructed for pidfd_open or pidfd_getfd so behaviour stays the same.
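The shared-helper structure this comment suggests can be sketched as follows. The names `_cleanup_on_fd_failure`, `share_fds`, `open_pidfd`, and `get_fd` are hypothetical, mirroring the reviewer's suggestion with plain-FD stand-ins rather than the actual code in tensorrt_llm/_mnnvl_utils.py: both loops sit under one try/except so either failure site funnels into the same cleanup.

```python
import os


def _cleanup_on_fd_failure(exported_fd, release_handle, pidfds, remote_fds):
    """Single cleanup path shared by both failure sites (pidfd_open and
    pidfd_getfd).  Every step is best-effort, so a cleanup error never
    masks the caller's original exception."""
    for fd in pidfds + remote_fds:
        try:
            os.close(fd)
        except OSError:
            pass
    if isinstance(exported_fd, int):  # POSIX handle type exports an FD
        try:
            os.close(exported_fd)
        except OSError:
            pass
    try:
        release_handle()  # stands in for _check_cu_result(cuda.cuMemRelease(...))
    except RuntimeError:
        pass  # would be logger.warning(...) in the real code


def share_fds(peers, open_pidfd, get_fd, exported_fd, release_handle):
    """Open a pidfd per peer, then fetch each remote FD; a failure in
    either loop triggers the same cleanup helper before re-raising."""
    pidfds, remote_fds = [], []
    try:
        for pid in peers:
            pidfds.append(open_pidfd(pid))
        for pfd in pidfds:
            remote_fds.append(get_fd(pfd))
        return remote_fds
    except OSError as e:
        _cleanup_on_fd_failure(exported_fd, release_handle, pidfds, remote_fds)
        raise RuntimeError(f"pidfd sharing failed: {e}") from e
```

In the real code the pidfds would also be closed on the success path once the remote FDs are imported; the sketch keeps only the failure-path behavior under discussion.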

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: efdb344b-6a0a-4010-8ee7-913ec8e08553

📥 Commits

Reviewing files that changed from the base of the PR and between 298b6c8 and 6fb5788.

📒 Files selected for processing (1)
  • tensorrt_llm/_mnnvl_utils.py

Comment on lines +262 to +268
try:
    _check_cu_result(cuda.cuMemRelease(allocated_mem_handle))
except Exception as e:
    logger.warning(
        "cuMemRelease failed during error cleanup (original error will be raised): %s",
        e,
    )

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find the file and locate _check_cu_result function
fd "_mnnvl_utils.py" -x cat -n {} | head -20

Repository: NVIDIA/TensorRT-LLM

Length of output: 1012


🏁 Script executed:

#!/bin/bash
# Search for _check_cu_result function definition
rg "_check_cu_result" --type py -B 2 -A 10 | head -50

Repository: NVIDIA/TensorRT-LLM

Length of output: 3415


🏁 Script executed:

#!/bin/bash
# View lines 250-280 to see the context around the cleanup block
sed -n '250,280p' tensorrt_llm/_mnnvl_utils.py | cat -n

Repository: NVIDIA/TensorRT-LLM

Length of output: 1755


Narrow the cleanup catch to RuntimeError.

_check_cu_result() raises RuntimeError when a CUDA operation fails, so except Exception is unnecessarily broad and risks swallowing unrelated bugs in the cleanup block.

🔧 Minimal fix
-                    except Exception as e:
+                    except RuntimeError as e:

As per coding guidelines, catch specific exceptions, not Exception.
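As a generic illustration of that guideline (hypothetical helpers, not the TRT-LLM code): a broad `except Exception` silently swallows a genuine bug such as a NameError in the cleanup block, while `except RuntimeError` lets it surface.

```python
def cleanup_broad(release):
    try:
        release()
    except Exception:  # BLE001: also swallows NameError, TypeError, ...
        pass


def cleanup_narrow(release):
    try:
        release()
    except RuntimeError:  # only what _check_cu_result-style checks raise
        pass


def buggy_release():
    return undefined_name  # a genuine bug: NameError, not a CUDA failure


cleanup_broad(buggy_release)  # the bug is silently swallowed
try:
    cleanup_narrow(buggy_release)
    surfaced = False
except NameError:
    surfaced = True  # the bug surfaces instead of being hidden
```

With the narrow catch, real CUDA release failures are still logged and suppressed, but programming errors in the cleanup path fail loudly during development.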

🧰 Tools
🪛 Ruff (0.15.5)

[warning] 264-264: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tensorrt_llm/_mnnvl_utils.py` around lines 262 - 268, The cleanup block
currently catches all Exceptions which is too broad; narrow it to catch only
RuntimeError since _check_cu_result() raises RuntimeError on CUDA failures:
replace the "except Exception as e" in the try/except around
cuda.cuMemRelease(allocated_mem_handle) with "except RuntimeError as e" and keep
the existing logger.warning call (referencing _check_cu_result,
cuda.cuMemRelease, allocated_mem_handle, and logger.warning) so unrelated
exceptions are not swallowed.

@tensorrt-cicd
Collaborator

PR_Github #38569 [ run ] triggered by Bot. Commit: 6fb5788 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38569 [ run ] completed with state SUCCESS. Commit: 6fb5788
/LLM/main/L0_MergeRequest_PR pipeline #29909 completed with status: 'SUCCESS'

CI Report

Link to invocation

Release allocated_mem_handle, exported shareable handle, and open
pidfds/remote_fds before re-raise to avoid leaks.

Signed-off-by: ZhaoyangWang <zhaoyangw@nvidia.com>
Signed-off-by: ZhaoyangWang <zhaoyangw@nvidia.com>
@zhaoyangwang-nvidia
Collaborator Author

/bot run

@zhaoyangwang-nvidia
Collaborator Author

Hi @brb-nv @dongxuy04 @WeiHaocheng, could you please help review this change? It's just a small modification. Thanks a lot!

@tensorrt-cicd
Collaborator

PR_Github #38655 [ run ] triggered by Bot. Commit: a0d159a Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38655 [ run ] completed with state SUCCESS. Commit: a0d159a
/LLM/main/L0_MergeRequest_PR pipeline #29981 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation
