Exposing interleave argument for fused_apply_rotary_pos_emb_thd #3794
Conversation
This PR has been automatically converted to draft because all PRs must start as drafts. When you are ready for review, click Ready for Review to begin the review process. See the contribution guide for more details.
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually. Contributors can view more details about this message here.
/claude review
```python
cu_seqlens=cu_seqlens,
cp_size=cp_size,
cp_rank=cp_rank,
interleaved=interleaved,
```
Bug: the interleaved parameter is correctly passed in this if branch (TE >= 1.12.0), but the else branch at lines 2596-2598 does not pass interleaved=interleaved to apply_rotary_pos_emb, so on older TE versions the interleaved setting is silently ignored. Since this PR specifically adds interleaved support, the fallback path should be fixed too:
```python
# lines 2596-2598, else branch:
return apply_rotary_pos_emb(
    t, freqs, tensor_format="thd", fused=True, cu_seqlens=cu_seqlens,
    interleaved=interleaved,
)
```
If older TE versions don't have the interleaved argument, we should assert False in the else branch.
@jaredcasper I looked into the TE code and found that interleaved is only supported for TE > 2.3.0.
Hence, I added a minimum-TE-version check when using interleaved=True; if the TE version is < 2.3.0, it raises an assertion error. (https://github.com/huvunvidia/Megatron-LM/blob/ce883edd14063fa0f298b8b8bbaaea5f0ba893c9/megatron/core/extensions/transformer_engine.py#L2583)
With that, we don't need to modify the subsequent code.
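The version gate described above can be sketched as follows. This is a minimal illustration: the function names (`parse_te_version`, `check_interleaved_support`) are assumptions for the sketch, not the actual Megatron-LM code — see the linked `transformer_engine.py` for the real check.

```python
# Sketch of a min-TE-version gate for interleaved=True.
# Names here are illustrative, not the actual Megatron-LM helpers.

def parse_te_version(version: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split(".")[:3])

def check_interleaved_support(te_version: str, interleaved: bool) -> None:
    # interleaved=True requires TE > 2.3.0; fail loudly instead of
    # silently ignoring the flag on older versions.
    if interleaved:
        assert parse_te_version(te_version) > (2, 3, 0), (
            f"interleaved rotary embeddings require TE > 2.3.0, got {te_version}"
        )

check_interleaved_support("2.4.0", interleaved=True)   # OK
check_interleaved_support("2.2.0", interleaved=False)  # OK: flag unused
```

Failing loudly here matches the review feedback: a user who sets interleaved=True on an old TE should get an immediate error rather than numerically different embeddings.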
🔄 Merge queue validation started! You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23071702779
What does this PR do?
Exposing interleave argument for fused_apply_rotary_pos_emb_thd.
This PR is the mirror of #3759.
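For context on what the flag changes: interleaved controls how channels are paired before the rotary rotation is applied. The following is a minimal unfused NumPy sketch of the two pairing conventions — an illustration only, with assumed shapes and function name, not the TE fused kernel exposed by this PR.

```python
import numpy as np

def rope_reference(t: np.ndarray, freqs: np.ndarray, interleaved: bool) -> np.ndarray:
    """Unfused rotary-embedding reference on the last dim of t.

    t:     (..., d) activations, d even
    freqs: rotation angles, broadcastable to (..., d // 2)
    """
    half = t.shape[-1] // 2
    cos, sin = np.cos(freqs), np.sin(freqs)
    if interleaved:
        # Pair adjacent channels: (x0, x1), (x2, x3), ...
        x1, x2 = t[..., 0::2], t[..., 1::2]
    else:
        # Pair first half with second half: (x_i, x_{i + d/2})
        x1, x2 = t[..., :half], t[..., half:]
    # 2D rotation of each channel pair by its angle.
    r1 = x1 * cos - x2 * sin
    r2 = x1 * sin + x2 * cos
    out = np.empty_like(t)
    if interleaved:
        out[..., 0::2], out[..., 1::2] = r1, r2
    else:
        out[..., :half], out[..., half:] = r1, r2
    return out
```

Both conventions are pure rotations (zero angles give the identity and per-position norms are preserved), but they are not interchangeable: a checkpoint trained with one pairing produces wrong results under the other, which is why the flag needs to be plumbed through.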
Contribution process
Pre-checks
Code review
Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
Expert reviewers are assigned based on .github/CODEOWNERS. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned. For PRs outside megatron/core, this step is skipped.
Step 3: Approved
Once all required reviewers have approved, the Approved label is applied automatically.
Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.