Add torch grouped gemm bf16 and mxfp8 support w/ cuda graphed + inference_optimized MoEs#3858

Merged

sidsingh-nvidia merged 56 commits into NVIDIA:main from sidsingh-nvidia:siddharth/torch-ggemm-mxfp8 on Mar 17, 2026

Conversation

@sidsingh-nvidia sidsingh-nvidia commented Mar 13, 2026

In main, we support only non-CUDA-graphed bf16 inference with torch grouped GEMMs. This PR introduces CUDA graph and mxfp8 support for the same path.

Limitation: mxfp8 will only work with non-colocated RL, since the TE and torch storage formats differ slightly.
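For context, here is a minimal sketch of what a torch grouped GEMM over MoE experts looks like in bf16. The function name, looped formulation, and pre-permuted token layout are illustrative assumptions, not this PR's actual kernel:

import torch

def grouped_gemm_bf16(tokens, expert_weights, tokens_per_expert):
    # Hypothetical sketch, not the PR's kernel. tokens: [num_tokens, hidden]
    # in bf16, already permuted so each expert's rows are contiguous;
    # tokens_per_expert gives each group's row count.
    outs, start = [], 0
    for w, n in zip(expert_weights, tokens_per_expert):
        outs.append(tokens[start:start + n] @ w)  # one GEMM per expert group
        start += n
    return torch.cat(outs, dim=0)

Capturing a path like this in a CUDA graph requires static shapes, which is typically achieved by padding each expert's token count to a fixed size.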

⚠️ For major changes (either in lines of code or in impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message @mcore-oncall or mention them in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge conflicts are resolved and CI is passing.
Final Review may be declined if these requirements are not met.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@sidsingh-nvidia sidsingh-nvidia requested review from a team as code owners March 13, 2026 13:25
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft March 13, 2026 13:26
@github-actions

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

copy-pr-bot bot commented Mar 13, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Mar 13, 2026
@sidsingh-nvidia

/ok to test a7d4d52

if args.cuda_graph_impl == "local" and CudaGraphScope.full_iteration_inference in args.cuda_graph_scope:
    assert args.fp8 is None, \
        "fp8 is not supported with inference dynamic batching and full_iteration_inference CUDA graph"
if args.fp8 is not None:

nit: can this entire check be moved to the TransformerConfig post-init? I'm planning to eventually move the contents of validate_args in bulk to the appropriate config post-inits, but it doesn't hurt to do it now as well.
Will not block the merge on this, though.
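A rough sketch of what the suggested move might look like. The dataclass below is an illustrative stand-in for megatron.core's TransformerConfig, with only the fields needed for this check:

from dataclasses import dataclass, field

@dataclass
class TransformerConfig:  # stand-in; the real config has many more fields
    fp8: str | None = None
    cuda_graph_impl: str | None = None
    cuda_graph_scope: list = field(default_factory=list)

    def __post_init__(self):
        # Same validation as in validate_args, run when the config is built.
        if self.cuda_graph_impl == "local" and "full_iteration_inference" in self.cuda_graph_scope:
            assert self.fp8 is None, (
                "fp8 is not supported with inference dynamic batching and "
                "full_iteration_inference CUDA graph"
            )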

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Approved All necessary approvals have been made label Mar 17, 2026
@jiemingz jiemingz left a comment

LGTM! just left 1 minor comment

return x, None


class InferenceColumnParallelLinear(TEColumnParallelLinear):

nit: IMO the name and description are a little confusing. My first impression was that this was an inference-only CPL, since that's the convention in other parts of the repo.

Maybe something along the lines of TEColumnParallelLinearWithInference is clearer, and it also reflects that this class extends TEColumnParallelLinear rather than replacing it.
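A sketch of the suggested shape; TEColumnParallelLinear is stubbed here so the snippet stands alone, and both bodies are elided:

class TEColumnParallelLinear:  # stand-in for the real TE wrapper
    ...

class TEColumnParallelLinearWithInference(TEColumnParallelLinear):
    # Extends, rather than replaces, TEColumnParallelLinear with an
    # inference-optimized forward path.
    ...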

@sidsingh-nvidia sidsingh-nvidia added this pull request to the merge queue Mar 17, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23207473460

Merged via the queue into NVIDIA:main with commit 589cd9e Mar 17, 2026
122 of 126 checks passed
@sidsingh-nvidia sidsingh-nvidia deleted the siddharth/torch-ggemm-mxfp8 branch March 17, 2026 18:53
ilml added a commit to ilml/Megatron-LM that referenced this pull request on Mar 20, 2026:
Add torch grouped gemm bf16 and mxfp8 support w/ cuda graphed + inference_optimized MoEs (NVIDIA#3858)

New files:
  - megatron/core/inference/moe/__init__.py
  - megatron/core/inference/moe/activations.py
  - megatron/core/inference/moe/fused_moe.py
  - megatron/core/inference/moe/pad.py
  - megatron/core/inference/moe/permute.py
  - megatron/core/inference/quantization/mxfp8_quantize.py
  - tests/unit_tests/inference/test_moe_permute.py
  - tests/unit_tests/inference/test_mxfp8_utils.py
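As background for mxfp8_quantize.py, here is a hypothetical sketch of MXFP8-style block quantization: blocks of 32 values share one power-of-two scale, and the scaled values are stored as fp8 E4M3. This is not the PR's code, and the scale storage layout is exactly where the TE and torch formats can diverge, per the limitation noted in the description:

import torch

def mxfp8_quantize(x: torch.Tensor, block: int = 32):
    # Illustrative only. Assumes x.numel() is divisible by the block size.
    xb = x.reshape(-1, block).float()
    amax = xb.abs().amax(dim=1, keepdim=True).clamp(min=2.0 ** -127)
    # Power-of-two scale mapping each block's max near fp8 E4M3's max (448).
    scale = torch.exp2(torch.floor(torch.log2(448.0 / amax)))
    q = (xb * scale).to(torch.float8_e4m3fn)
    return q, scale  # dequantize as q.float() / scale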