[TRTLLM-11257][fix] release GPU memory and FDs in MnnvlMemory on pidfd failure to prevent leak #11979
zhaoyangwang-nvidia wants to merge 2 commits into NVIDIA:main
Conversation
/bot help
GitHub Bot Help
Provide a user-friendly way for developers to interact with a Jenkins server. See details below for each supported subcommand.

run
Launch build/test pipelines. All previously running jobs will be killed.

kill
Kill all running builds associated with the pull request.

skip
Skip testing for the latest commit on the pull request.

reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
/bot run --stage-list "GB200-8_GPUs-2_Nodes-PyTorch-1, GB200-8_GPUs-2_Nodes-PyTorch-2"
/bot run
/bot run

/bot kill

/bot run
PR_Github #38166 [ run ] triggered by Bot. Commit:
PR_Github #38166 [ run ] completed with state
Force-pushed from a44a6bb to a7aaaeb
/bot run

👎 Promotion blocked, new vulnerability found. (Vulnerability report)
Force-pushed from a7aaaeb to f0280cb
/bot run

PR_Github #38251 [ run ] triggered by Bot. Commit:
PR_Github #38251 [ run ] completed with state
/bot run

PR_Github #38351 [ run ] triggered by Bot. Commit:

Force-pushed from f0280cb to 4aacbb4
/bot run

Force-pushed from 4aacbb4 to 29636ae
/bot run

PR_Github #38357 [ run ] triggered by Bot. Commit:
PR_Github #38359 [ run ] triggered by Bot. Commit:

/bot run

PR_Github #38419 [ run ] triggered by Bot. Commit:
PR_Github #38419 [ run ] completed with state

/bot run --stage-list "GB200-8_GPUs-2_Nodes-PyTorch-2"

PR_Github #38517 [ run ] triggered by Bot. Commit:
PR_Github #38517 [ run ] completed with state
Force-pushed from 29636ae to 6fb5788

/bot run
Hi @hlu1, I'd like to check whether you're familiar with this part of the code. If so, could you please help review this change when you have time? Thanks a lot!
📝 Walkthrough
Added error path cleanup in
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tensorrt_llm/_mnnvl_utils.py (1)
Lines 231-279: ⚠️ Potential issue | 🔴 Critical
Handle pidfd_open() failures with the same cleanup path.
This only cleans up after pidfd_getfd() fails. If pidfd_open() fails after one or more earlier pidfds were opened, the exported shareable FD, allocated_mem_handle, and any collected pidfds still leak, so retries can still accumulate GPU memory and FDs. Please move this cleanup into a shared helper (or a try/except covering both loops) and call it from both failure sites.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/_mnnvl_utils.py` around lines 231 - 279, The pidfd_open failure path currently doesn't run the same cleanup as the pidfd_getfd failure path, leaking exported_fabric_handle, allocated_mem_handle and any opened pidfds; wrap the pidfd_open/pidfd_getfd loops in a single try/except or extract the cleanup into a helper (e.g., _cleanup_on_fd_failure) and call it from both failure sites. Ensure the helper or except block closes exported_fabric_handle if it's an int, calls _check_cu_result(cuda.cuMemRelease(allocated_mem_handle)) inside a try/except with a warning on failure, closes any entries in pidfds and remote_fds, and then re-raises a RuntimeError with the original error message constructed for pidfd_open or pidfd_getfd so behaviour stays the same.
ℹ️ Review info
Configuration used: .coderabbit.yaml | Review profile: CHILL | Plan: Pro | Run ID: efdb344b-6a0a-4010-8ee7-913ec8e08553
📒 Files selected for processing (1): tensorrt_llm/_mnnvl_utils.py
    try:
        _check_cu_result(cuda.cuMemRelease(allocated_mem_handle))
    except Exception as e:
        logger.warning(
            "cuMemRelease failed during error cleanup (original error will be raised): %s",
            e,
        )
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the file and locate _check_cu_result function
fd "_mnnvl_utils.py" -x cat -n {} | head -20
Repository: NVIDIA/TensorRT-LLM
Length of output: 1012

🏁 Script executed:
#!/bin/bash
# Search for _check_cu_result function definition
rg "_check_cu_result" --type py -B 2 -A 10 | head -50
Repository: NVIDIA/TensorRT-LLM
Length of output: 3415

🏁 Script executed:
#!/bin/bash
# View lines 250-280 to see the context around the cleanup block
sed -n '250,280p' tensorrt_llm/_mnnvl_utils.py | cat -n
Repository: NVIDIA/TensorRT-LLM
Length of output: 1755
Narrow the cleanup catch to RuntimeError.
_check_cu_result() raises RuntimeError when a CUDA operation fails, so except Exception is unnecessarily broad and risks swallowing unrelated bugs in the cleanup block.
🔧 Minimal fix
- except Exception as e:
+ except RuntimeError as e:
As per coding guidelines, catch specific exceptions, not Exception.
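The guideline matters because a broad `except Exception` would also swallow programming errors in the cleanup block itself. A minimal illustration, with a hypothetical `release` stand-in rather than the PR's actual code:

```python
import logging

logger = logging.getLogger("cleanup")


def release(handle):
    # Stand-in for _check_cu_result(cuda.cuMemRelease(handle)): it raises
    # RuntimeError on a CUDA failure. A None handle here simulates a genuine
    # programming bug, which should NOT be silenced by the cleanup path.
    if handle is None:
        raise TypeError("bug: allocation handle lost before cleanup")
    raise RuntimeError("CUDA error during cuMemRelease")


def cleanup(handle):
    try:
        release(handle)
    except RuntimeError as e:  # narrow: real bugs (TypeError) still surface
        logger.warning("release failed during cleanup: %s", e)
```

With the narrow catch, `cleanup(object())` logs the expected CUDA failure and returns, while `cleanup(None)` lets the TypeError propagate instead of hiding it behind a warning.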
🧰 Tools
🪛 Ruff (0.15.5)
[warning] 264-264: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tensorrt_llm/_mnnvl_utils.py` around lines 262 - 268, The cleanup block
currently catches all Exceptions which is too broad; narrow it to catch only
RuntimeError since _check_cu_result() raises RuntimeError on CUDA failures:
replace the "except Exception as e" in the try/except around
cuda.cuMemRelease(allocated_mem_handle) with "except RuntimeError as e" and keep
the existing logger.warning call (referencing _check_cu_result,
cuda.cuMemRelease, allocated_mem_handle, and logger.warning) so unrelated
exceptions are not swallowed.
PR_Github #38569 [ run ] triggered by Bot. Commit:
PR_Github #38569 [ run ] completed with state
Release allocated_mem_handle, exported shareable handle, and open pidfds/remote_fds before re-raise to avoid leaks. Signed-off-by: ZhaoyangWang <zhaoyangw@nvidia.com>
Signed-off-by: ZhaoyangWang <zhaoyangw@nvidia.com>
Force-pushed from 2f1ddc5 to a0d159a
/bot run

Hi @brb-nv @dongxuy04 @WeiHaocheng, could you please help review this change? It is just a small modification, thanks a lot!

PR_Github #38655 [ run ] triggered by Bot. Commit:
PR_Github #38655 [ run ] completed with state
Summary by CodeRabbit
Description
When NVLink one-sided communication is used for MoE, workspace allocation in nvlink_one_sided.py calls MnnvlMemory(mapping, workspace_size_per_rank), which allocates via cuMemCreate + cuMemExportToShareableHandle and shares across processes using pidfd_open / pidfd_getfd. If pidfd_open or pidfd_getfd fails (e.g., EPERM in containers without SYS_PTRACE), the code previously raised without releasing resources created in the current attempt, including the CUDA allocation handle, exported shareable handle (FD for POSIX handle type), and any already-opened pidfds / duplicated remote FDs. Because self.WORKSPACE remains None, later retries could repeat this path, causing cumulative GPU memory and FD leaks.
This PR fixes the failure path in the POSIX (non-FABRIC) branch of open_mnnvl_memory in tensorrt_llm/_mnnvl_utils.py by ensuring proper cleanup before re-raising: it closes the exported shareable FD when applicable, calls cuMemRelease on the cuMemCreate allocation handle, and closes any pidfds and duplicated remote FDs opened during the attempt. Cleanup errors are logged as warnings and do not mask the original exception.
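The pidfd_open stage of that failure path can be sketched as follows. `os.pidfd_open` is the real Python wrapper (Linux 5.3+, Python 3.9+); `open_peer_pidfds` and its error message are hypothetical, and the real code additionally closes the exported shareable FD and calls cuMemRelease on the allocation handle before re-raising.

```python
import os


def open_peer_pidfds(peer_pids):
    # Open one pidfd per peer rank. If any pidfd_open call fails (e.g. EPERM
    # in a container without SYS_PTRACE), close everything opened so far
    # before re-raising, so a later retry does not accumulate leaked FDs.
    # Hypothetical helper mirroring the PR's cleanup-before-reraise pattern.
    pidfds = []
    try:
        for pid in peer_pids:
            pidfds.append(os.pidfd_open(pid))
    except OSError as e:
        for fd in pidfds:
            os.close(fd)
        raise RuntimeError(f"pidfd_open failed for pid {pid}: {e}") from e
    return pidfds
```

The same wrap-and-clean structure applies to the pidfd_getfd loop, which is why the review suggests sharing one cleanup helper between both failure sites.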
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.