[None][test] fix perf test cases issue of incorrect match #12096
ruodil wants to merge 1 commit into NVIDIA:main from
Conversation
Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com>
/bot skip --comment "skip test as just modifying cases"
📝 Walkthrough

Modifies the test configuration to add multi-GPU test blocks for 8-GPU systems (L40S, H100, H20, H200), adds a new test entry for the qwen model under the RTX-6000 Server section, and removes a duplicate test entry while reintroducing an aligned 8-GPU variant block with compute capability conditions.
PR_Github #38539 [ skip ] triggered by Bot. Commit:
system_gpu_count:
  gte: 8
compute_capability:
  gt: 8.0
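For context, condition blocks like this gate a list of tests in the perf test YAML. A minimal sketch of how such a block might wrap its tests — the `condition`/`ranges`/`tests` nesting here is assumed from the fragment above, not verified against the actual file:

```yaml
# Sketch only: the exact schema is an assumption, not taken from the PR diff.
- condition:
    ranges:
      system_gpu_count:
        gte: 8
      compute_capability:
        gt: 8.0
  tests:
    # ... test entries that should only run on matching 8-GPU systems
```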
Suggested change:
- gt: 8.0
+ gte: 9.0
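The rationale behind this suggestion: L40S is an Ada GPU with compute capability 8.9, so `gt: 8.0` still matches it, while `gte: 9.0` restricts the block to Hopper-class GPUs (H100/H20/H200). A minimal Python sketch of the gt/gte comparison semantics — the helper names here are hypothetical, not the actual test-db matcher:

```python
# Hypothetical sketch of gt/gte condition matching on compute capability;
# NOT the actual TRT-LLM test-db matcher, only the comparison semantics.
# Compute capabilities: L40S (Ada) is 8.9, H100/H20/H200 (Hopper) are 9.0.
GPU_COMPUTE_CAPABILITY = {"L40S": 8.9, "H100": 9.0, "H20": 9.0, "H200": 9.0}

OPS = {"gt": lambda a, b: a > b, "gte": lambda a, b: a >= b}

def matches(capability, condition):
    """Return True if the capability satisfies every operator in the condition."""
    return all(OPS[op](capability, value) for op, value in condition.items())

def select(condition):
    """Return the sorted list of GPU names whose capability matches."""
    return sorted(g for g, c in GPU_COMPUTE_CAPABILITY.items()
                  if matches(c, condition))

print(select({"gt": 8.0}))   # ['H100', 'H20', 'H200', 'L40S'] — L40S still matches
print(select({"gte": 9.0}))  # ['H100', 'H20', 'H200'] — Hopper only
```

This makes the review point concrete: tightening `gt: 8.0` to `gte: 9.0` drops 8xL40S from the block without touching the Hopper entries.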
maybe remove 8XL40S because it's low-priority, and the 8-GPU L40S system uses a SYS topology, which leads to unstable results
if 8XL40S is removed, maybe you can merge 3b into 4
  # 2: L20, L40S, H100, H20, H200
- # 3: L40S, H100, H20, H200
+ # 3: L40S, H100, H20, H200 (4 GPUs)
+ # 3b: L40S, H100, H20, H200 (8 GPUs)
Suggested change:
- # 3b: L40S, H100, H20, H200 (8 GPUs)
+ # 3b: H100, H20, H200 (8 GPUs)
- perf/test_perf.py::test_perf[llama_v3.3_70b_instruct_fp8-bench-pytorch-float8-input_output_len:512,32-gpus:4] #llama_v3.3_70b_instruct_fp8
# 3b: L40S, H100, H20, H200 (8 GPUs)

Suggested change:
- # 3b: L40S, H100, H20, H200 (8 GPUs)
+ # 3b: H100, H20, H200 (8 GPUs)
PR_Github #38539 [ skip ] completed with state
Summary by CodeRabbit
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update the tava architecture diagram if there is a significant design change in the PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment `/bot help`.