
[None][chore] Add explicit error for intermediate size misalignment with fp8 block size #12101

Open

leslie-fang25 wants to merge 1 commit into NVIDIA:main from leslie-fang25:leslie/assert_fp8_block_wise_scale_alignment

Conversation

leslie-fang25 (Collaborator) commented Mar 11, 2026

Summary by CodeRabbit

Release Notes

  • Improvements
    • Replaced the hard-coded FP8 quantization block size with a configurable class attribute, improving maintainability.
    • Added alignment validation during expert weight loading to enforce memory layout constraints and surface misconfiguration early.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

[None][chore] Add explicit error for intermediate size misalignment with fp8 block size

Signed-off-by: leslie-fang25 <leslief@nvidia.com>
leslie-fang25 requested a review from a team as a code owner on March 11, 2026 06:52
leslie-fang25 requested a review from QiJune on March 11, 2026 06:52
leslie-fang25 (Collaborator, Author) commented:

/bot run --disable-fail-fast

leslie-fang25 requested a review from xxi-nv on March 11, 2026 06:52
coderabbitai bot (Contributor) commented Mar 11, 2026

📝 Walkthrough

A new class attribute fp8_block_size = 128 is introduced in DeepSeekFP8BlockScalesFusedMoEMethod to replace hard-coded FP8 block size literals in tensor shape calculations. An alignment assertion is added to enforce divisibility requirements for intermediate tensor dimensions.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| FP8 Block Size Refactoring<br>`tensorrt_llm/_torch/modules/fused_moe/quantization.py` | Extracted the hard-coded FP8 block size (128) into a configurable class attribute; replaced the literals in shape calculations for weight scaling factors; added a divisibility assertion for alignment validation in the VANILLA loading path. |
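
To make the summary concrete, here is a minimal sketch of the described refactor. Only the class name, the `fp8_block_size = 128` attribute, and the assertion message are taken from this PR; the method names, tensor names, and shape math are illustrative assumptions, not the actual code in `quantization.py`.

```python
# Illustrative sketch only -- method and tensor names are hypothetical;
# the class name, fp8_block_size attribute, and assertion come from the PR.
import torch


def ceil_div(a: int, b: int) -> int:
    """Ceiling division, used to count scale blocks per dimension."""
    return (a + b - 1) // b


class DeepSeekFP8BlockScalesFusedMoEMethod:
    # Before this PR the literal 128 appeared directly in the shape
    # calculations below; the class attribute centralizes it.
    fp8_block_size = 128

    def make_weight_scale(self, num_experts: int, intermediate_size: int,
                          hidden_size: int) -> torch.Tensor:
        # One float32 scale per (fp8_block_size x fp8_block_size) weight block.
        return torch.empty(
            num_experts,
            ceil_div(2 * intermediate_size, self.fp8_block_size),
            ceil_div(hidden_size, self.fp8_block_size),
            dtype=torch.float32,
        )

    def check_alignment(self, module) -> None:
        # The alignment assertion added in the VANILLA loading path.
        assert module.intermediate_size_per_partition % self.fp8_block_size == 0, (
            "For DeepSeekFP8BlockScalesFusedMoEMethod, "
            "intermediate_size_per_partition should be divisible by fp8_block_size.")
```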

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Description check | ⚠️ Warning | The PR description is incomplete. While the title clearly indicates the purpose ('Add explicit error for intermediate size misalignment with fp8 block size'), the required template sections for Description, Test Coverage, and PR Checklist are unfilled or contain only template comments. | Complete the Description section explaining the issue and solution, fill in Test Coverage with the relevant tests, and verify/update the PR Checklist items with their actual status. |

✅ Passed checks (1 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title clearly and specifically describes the main change: adding an explicit error for intermediate size misalignment with fp8 block size, which aligns with the assertion added to enforce alignment requirements. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot (Contributor) left a comment


🧹 Nitpick comments (1)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (1)

992-992: Consider using explicit raise instead of assert for production validation.

Using assert for validation can be problematic because assertions are disabled when Python runs with optimization flags (-O or -OO). For a check that must always execute to prevent data corruption or incorrect behavior, an explicit conditional with raise ValueError is safer.

Also, this check is only in the VANILLA path. If FUSED_GATE_UP_PROJ mode can also encounter misaligned partitions, a similar check may be warranted there (or moved earlier in the flow).

♻️ Suggested refactor to use explicit raise

```diff
             dst_w3_weight_scale, dst_w1_weight_scale = dst_w3_w1_weight_scale[
                 local_slot_id].chunk(2, dim=0)
-            assert module.intermediate_size_per_partition % self.fp8_block_size == 0, "For DeepSeekFP8BlockScalesFusedMoEMethod, intermediate_size_per_partition should be divisible by fp8_block_size."
+            if module.intermediate_size_per_partition % self.fp8_block_size != 0:
+                raise ValueError(
+                    f"For DeepSeekFP8BlockScalesFusedMoEMethod, intermediate_size_per_partition "
+                    f"({module.intermediate_size_per_partition}) must be divisible by "
+                    f"fp8_block_size ({self.fp8_block_size})."
+                )
```
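
A quick standalone illustration of the reviewer's point (this script is not from the PR): under `python -O`, the `assert` is compiled away, so only the explicit check still catches the misalignment.

```python
# check_alignment.py -- standalone illustration, not code from the PR.
intermediate_size_per_partition = 100  # deliberately not a multiple of 128
fp8_block_size = 128

# Raises AssertionError under `python check_alignment.py`,
# but is skipped entirely under `python -O check_alignment.py`.
assert intermediate_size_per_partition % fp8_block_size == 0, "misaligned"

# Always executes, regardless of interpreter flags.
if intermediate_size_per_partition % fp8_block_size != 0:
    raise ValueError(
        f"intermediate_size_per_partition ({intermediate_size_per_partition}) "
        f"must be divisible by fp8_block_size ({fp8_block_size})")
```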
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In `tensorrt_llm/_torch/modules/fused_moe/quantization.py` at line 992: replace the runtime-only assertion with explicit validation. Where the code currently does `assert module.intermediate_size_per_partition % self.fp8_block_size == 0` (inside DeepSeekFP8BlockScalesFusedMoEMethod or its surrounding method), change it to an if-check that raises ValueError with a clear message when the condition fails; also add the same explicit validation for the FUSED_GATE_UP_PROJ path (or move the check to a shared validation routine run before branching) so a misaligned intermediate_size_per_partition is always caught in production.

ℹ️ Review info
⚙️ Run configuration

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: a3da34a9-95b1-4ab3-bf09-540f8c752b45

📥 Commits

Reviewing files that changed from the base of the PR and between f7255e0 and 4defc14.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py

tensorrt-cicd (Collaborator) commented:

PR_Github #38552 [ run ] triggered by Bot. Commit: 4defc14 Link to invocation

tensorrt-cicd (Collaborator) commented:

PR_Github #38552 [ run ] completed with state SUCCESS. Commit: 4defc14
/LLM/main/L0_MergeRequest_PR pipeline #29896 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

leslie-fang25 (Collaborator, Author) commented:

/bot run --disable-fail-fast

tensorrt-cicd (Collaborator) commented:

PR_Github #38642 [ run ] triggered by Bot. Commit: 4defc14 Link to invocation

tensorrt-cicd (Collaborator) commented:

PR_Github #38642 [ run ] completed with state SUCCESS. Commit: 4defc14
/LLM/main/L0_MergeRequest_PR pipeline #29972 completed with status: 'SUCCESS'

CI Report

Link to invocation

