Add torch grouped gemm bf16 and mxfp8 support w/ cuda graphed + inference_optimized MoEs #3858
Conversation
…ation_controller.py Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
This PR has been automatically converted to draft because all PRs must start as drafts. When you are ready for review, click Ready for Review to begin the review process. See the contribution guide for more details.
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually. Contributors can view more details about this message here.
/ok to test a7d4d52
```python
if args.cuda_graph_impl == "local" and CudaGraphScope.full_iteration_inference in args.cuda_graph_scope:
    assert args.fp8 is None, \
        "fp8 is not supported with inference dynamic batching and full_iteration_inference CUDA graph"
if args.fp8 is not None:
```
nit: can this entire check be moved to TransformerConfig post init? I'm planning to eventually move the contents of validate_args in bulk to the appropriate config post-inits, but it doesn't hurt to do it now as well.
Will not block merge on this, though.
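For reference, a minimal sketch of what that relocation could look like, assuming the guard lands in a dataclass `__post_init__`. The field names here mirror the CLI flags and are assumptions, not the actual TransformerConfig:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TransformerConfig:
    # Assumed fields mirroring the CLI flags; not the real config class.
    fp8: Optional[str] = None
    cuda_graph_impl: Optional[str] = None
    cuda_graph_scope: Tuple[str, ...] = ()

    def __post_init__(self):
        # The guard currently in validate_args, relocated so it runs
        # wherever a config is constructed, not only via argparse.
        if self.cuda_graph_impl == "local" and "full_iteration_inference" in self.cuda_graph_scope:
            assert self.fp8 is None, (
                "fp8 is not supported with inference dynamic batching and "
                "full_iteration_inference CUDA graph"
            )
```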
jiemingz left a comment
LGTM! just left 1 minor comment
```python
        return x, None


class InferenceColumnParallelLinear(TEColumnParallelLinear):
```
nit: IMO the name and description are a little confusing: my first impression was that this is an inference-only CPL, since that's the convention in other parts of the repo.
Maybe something along the lines of TEColumnParallelLinearWithInference is clearer, and it also reflects that this class is extending TEColumnParallelLinear rather than replacing it.
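To make the rename concrete, a minimal sketch of the extend-not-replace pattern the suggested name would signal; the stub base class and trivial forward are illustrative stand-ins, not the real TE wrapper:

```python
class TEColumnParallelLinear:  # stub standing in for the real TE wrapper
    def forward(self, x):
        return x, None

class TEColumnParallelLinearWithInference(TEColumnParallelLinear):
    """Extends TEColumnParallelLinear with an inference-optimized path."""
    def forward(self, x):
        # Delegates to the base (training) path here; a real subclass
        # would branch to its inference fast path instead.
        return super().forward(x)
```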
🔄 Merge queue validation started! You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23207473460
…ort w/ cuda graphed + inference_optimized MoEs (NVIDIA#3858)

New files:
- megatron/core/inference/moe/__init__.py
- megatron/core/inference/moe/activations.py
- megatron/core/inference/moe/fused_moe.py
- megatron/core/inference/moe/pad.py
- megatron/core/inference/moe/permute.py
- megatron/core/inference/quantization/mxfp8_quantize.py
- tests/unit_tests/inference/test_moe_permute.py
- tests/unit_tests/inference/test_mxfp8_utils.py
In main, torch grouped GEMMs support only non-CUDA-graphed bf16 inference. This PR adds CUDA graphing and mxfp8 support to that path.
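As a rough sketch of the CUDA-graphing half (not this PR's implementation): torch.cuda.CUDAGraph replays fixed memory addresses, so inputs live in static buffers that are refreshed in place before each replay. Here torch.bmm stands in for a real grouped-GEMM kernel, and all shapes are made up:

```python
import torch

# Illustrative shapes only.
num_experts, tokens_per_expert, hidden, ffn = 4, 32, 256, 512

# Static buffers: graphs replay fixed addresses, so inputs are copied
# into these pre-allocated tensors before every replay.
x = torch.randn(num_experts, tokens_per_expert, hidden, device="cuda", dtype=torch.bfloat16)
w = torch.randn(num_experts, hidden, ffn, device="cuda", dtype=torch.bfloat16)
out = torch.empty(num_experts, tokens_per_expert, ffn, device="cuda", dtype=torch.bfloat16)

def grouped_gemm():
    torch.bmm(x, w, out=out)  # stand-in for a true grouped GEMM

# Warm up on a side stream before capture, per the CUDA graphs docs.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    grouped_gemm()
torch.cuda.current_stream().wait_stream(s)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    grouped_gemm()

x.copy_(torch.randn_like(x))  # refresh inputs in place
g.replay()                    # replay the captured kernel(s)
print(out.shape)              # torch.Size([4, 32, 512])
```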
Limitation: mxfp8 will only work with non-colocated RL, since the TE and torch storage formats are slightly different.
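For context, MXFP8 is the OCP microscaling format: fp8 e4m3 values sharing one power-of-two (E8M0) scale per 32-element block. Below is a hedged sketch of that quantize/dequantize math, not the PR's mxfp8_quantize.py; the ceil-based scale rule is a simplification of the spec's exponent rule:

```python
import torch

BLOCK = 32          # elements sharing one scale, per the MX spec
E4M3_MAX = 448.0    # max magnitude representable in float8_e4m3fn

def mxfp8_quantize(x: torch.Tensor):
    """Quantize the last dim of x in 32-element blocks (requires K % 32 == 0)."""
    blocks = x.float().reshape(*x.shape[:-1], -1, BLOCK)
    amax = blocks.abs().amax(dim=-1, keepdim=True)
    # E8M0 scales are pure powers of two: round the *exponent*. Ceil keeps
    # |value / scale| <= E4M3_MAX (a simplified variant of the spec's rule).
    exp = torch.ceil(torch.log2(amax.clamp(min=2.0**-127) / E4M3_MAX))
    scale = torch.exp2(exp)
    return (blocks / scale).to(torch.float8_e4m3fn), scale

def mxfp8_dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return (q.float() * scale).reshape(*q.shape[:-2], -1)

x = torch.randn(8, 128, dtype=torch.bfloat16)
q, s = mxfp8_quantize(x)
err = (mxfp8_dequantize(q, s) - x.float()).abs().max()
print(q.dtype, s.shape, f"max abs error {err.item():.4f}")
```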
Contribution process
Pre-checks
Code review
Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as drafts. If you open a non-draft PR, it will be automatically converted to a draft.
Step 1: Mark PR as "Ready for Review"
Expert reviewers are assigned automatically based on .github/CODEOWNERS. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change megatron/core, once all expert reviewers have approved, the `Final Review` label is applied automatically and final reviewers are assigned. For PRs outside megatron/core, this step is skipped.

Step 3: Approved
Once all required reviewers have approved, the `Approved` label is applied automatically.

Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.