Support 'auto' argument which defaults to pre-MixedPrecisionPolicy behavior for supporting per-parameter grad dtypes #3810

Open

cspades wants to merge 1 commit into NVIDIA:main
Conversation
cspades commented on Mar 11, 2026
Comment on lines 861 to 870:
```diff
-    'fp32': torch.float32, 'bf16': torch.bfloat16, 'fp16': torch.float16, 'fp8': torch.uint8,
+    'fp32': torch.float32, 'bf16': torch.bfloat16, 'fp16': torch.float16, 'fp8': torch.uint8, 'auto': None,
 }
 map_dtype = lambda d: d if isinstance(d, torch.dtype) else dtype_map[d]

 args.main_grads_dtype = map_dtype(args.main_grads_dtype)
 args.main_params_dtype = map_dtype(args.main_params_dtype)
 args.exp_avg_dtype = map_dtype(args.exp_avg_dtype)
 args.exp_avg_sq_dtype = map_dtype(args.exp_avg_sq_dtype)
 args.mamba_inference_conv_states_dtype = map_dtype(args.mamba_inference_conv_states_dtype)
 args.mamba_inference_ssm_states_dtype = map_dtype(args.mamba_inference_ssm_states_dtype)
```
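For reference, how the mapping behaves with the new key (assuming the `dtype_map` and `map_dtype` definitions above):

```python
map_dtype('bf16')         # -> torch.bfloat16
map_dtype('auto')         # -> None (new: defer to per-parameter dtypes)
map_dtype(torch.float32)  # -> torch.float32 (dtype objects pass through)
```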
cspades (Member, Author) commented:

Should I guard the args that don't support None? The argparser should already take care of it...
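For illustration, one possible guard is sketched below; it hypothetically assumes only the grad dtype argument is meant to accept `'auto'`, with the argparser's `choices` normally rejecting it elsewhere:

```python
# Hypothetical guard: reject None ('auto') for dtype args that require a
# concrete torch.dtype. The argument names here are illustrative.
for name in ('main_params_dtype', 'exp_avg_dtype', 'exp_avg_sq_dtype'):
    if getattr(args, name) is None:
        raise ValueError(f"--{name.replace('_', '-')} does not support 'auto'.")
```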
Support 'auto' argument which defaults to pre-MixedPrecisionPolicy behavior for supporting per-parameter grad dtypes. Signed-off-by: Cory Ye <cye@nvidia.com>

Force-pushed from f82a451 to 85de76e.
What does this PR do?
Adds an `auto` (i.e. `None`) argument for the Megatron-FSDP `MixedPrecisionPolicy`. This addresses the case where model gradients are mixed precision and static gradient buffers are used (as with `--gradient-accumulation-fusion`): we fall back to the original logic of using the parameter dtype as the gradient buffer dtype, and always use BF16 for quantized parameters.
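A minimal sketch of that fallback (illustrative only, not the actual Megatron-FSDP implementation):

```python
import torch

def grad_buffer_dtype(param: torch.Tensor, grad_comm_dtype) -> torch.dtype:
    """Pick the gradient buffer dtype for a single parameter."""
    if grad_comm_dtype is not None:
        return grad_comm_dtype       # explicit policy, e.g. torch.bfloat16
    if param.dtype == torch.uint8:   # FP8 parameters are stored as uint8
        return torch.bfloat16        # always BF16 for quantized parameters
    return param.dtype               # pre-MixedPrecisionPolicy behavior
```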
Details / Backstory

`--gradient-accumulation-fusion` is not compatible with the default `torch.float32` gradient communication dtype when using FP8 parameters, because `get_main_grad()` is called during Megatron-LM's backward implementation and produces an FP32 gradient buffer for BF16 gradients if `--megatron-fsdp-grad-comm-dtype bf16` is not set. Megatron-Bridge currently does not support setting this argument, so we have to hard-code it to unblock MLPerf. This PR generalizes this compatibility: in case models actually have mixed-precision gradients, we need the `None` option in Megatron-LM.
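Concretely, the difference looks roughly like this (a sketch; the attribute name is assumed to follow the usual argparse dash-to-underscore convention):

```python
# Workaround used so far: hard-code BF16 gradient communication so the static
# main_grad buffer matches BF16 gradients under --gradient-accumulation-fusion.
args.megatron_fsdp_grad_comm_dtype = 'bf16'

# With this PR: 'auto' maps to None and defers to each parameter's own dtype,
# which also covers genuinely mixed-precision gradients.
args.megatron_fsdp_grad_comm_dtype = 'auto'
```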
Contribution process

Pre-checks
Code review
Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
Reviewers are assigned according to `.github/CODEOWNERS`. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change `megatron/core`, once all expert reviewers have approved, the `Final Review` label is applied automatically and final reviewers are assigned. For PRs outside `megatron/core`, this step is skipped.

Step 3: Approved
Once all required reviewers have approved, the `Approved` label is applied automatically.

Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.