
Exposing interleave argument for fused_apply_rotary_pos_emb_thd #3794

Merged

huvunvidia merged 6 commits into NVIDIA:main from huvunvidia:huvu/rope_fusion_thd_interleave_main on Mar 13, 2026

Conversation

@huvunvidia (Contributor)

What does this PR do?

Exposing interleave argument for fused_apply_rotary_pos_emb_thd.
This PR is the mirror of #3759.
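
For background (this explanation is mine, not part of the diff): "interleaved" RoPE rotates adjacent even/odd element pairs of the head dimension, whereas the default convention rotates its two halves. A minimal unfused PyTorch sketch of the distinction, with helper names of my own choosing:

import torch

def rotate_half(x):
    # Default (non-interleaved) convention: split the head dim into two
    # halves and recombine with a sign flip, (x1, x2) -> (-x2, x1).
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def rotate_interleaved(x):
    # Interleaved convention: rotate adjacent (even, odd) pairs,
    # (x_2i, x_2i+1) -> (-x_2i+1, x_2i).
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.stack((-x2, x1), dim=-1).flatten(-2)

def apply_rope(t, cos, sin, interleaved=False):
    # NOTE: cos/sin must be laid out to match the chosen convention
    # (repeated per half vs. per adjacent pair); that setup is elided here.
    rot = rotate_interleaved if interleaved else rotate_half
    return t * cos + rot(t) * sin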

⚠️ For major changes (either in lines of code or in impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as drafts. If you open a non-draft PR, it will be automatically converted to a draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge conflicts are resolved and CI is passing.
Final Review may be declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@huvunvidia huvunvidia requested review from a team as code owners March 11, 2026 03:14
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft March 11, 2026 03:14
@github-actions (Contributor)

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@copy-pr-bot (bot) commented Mar 11, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Mar 11, 2026
@Phlip79 Phlip79 marked this pull request as ready for review March 11, 2026 05:39
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team March 11, 2026 05:39
@svcnvidia-nemo-ci svcnvidia-nemo-ci added Final Review PR is in the "final review" stage complexity: low labels Mar 11, 2026
@ericharper (Contributor)

/claude review

cu_seqlens=cu_seqlens,
cp_size=cp_size,
cp_rank=cp_rank,
interleaved=interleaved,
Contributor
Bug: The interleaved parameter is correctly passed in this if branch (TE >= 1.12.0), but the else branch at lines 2596-2598 does not pass interleaved=interleaved to apply_rotary_pos_emb. This means that on older TE versions the interleaved setting will be silently ignored. Since this PR is specifically adding interleaved support, the fallback path should be fixed too:

# line 2596-2598, else branch:
return apply_rotary_pos_emb(
    t, freqs, tensor_format="thd", fused=True, cu_seqlens=cu_seqlens,
    interleaved=interleaved,
)

Contributor

If older TE versions don't have the interleaved argument, we should assert False in the else branch.
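
A minimal sketch of that suggestion (the surrounding structure of fused_apply_rotary_pos_emb_thd is assumed rather than copied from the diff, and asserting on the flag itself keeps supported configurations working; is_te_min_version mirrors a Megatron-LM utility):

# Hypothetical sketch: fail loudly instead of silently dropping the flag.
if is_te_min_version("1.12.0"):
    return apply_rotary_pos_emb(
        t, freqs, tensor_format="thd", fused=True, cu_seqlens=cu_seqlens,
        cp_size=cp_size, cp_rank=cp_rank, interleaved=interleaved,
    )
else:
    # Older TE has no `interleaved` argument, so reject rather than ignore it.
    assert not interleaved, "interleaved RoPE requires a newer Transformer Engine"
    return apply_rotary_pos_emb(
        t, freqs, tensor_format="thd", fused=True, cu_seqlens=cu_seqlens
    )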

Contributor Author

@jaredcasper I looked into the TE code and found that interleaved is only supported for TE > 2.3.0.
Hence, I added a version check when using interleaved=True; if the TE version is too old, an assertion error is raised (https://github.com/huvunvidia/Megatron-LM/blob/ce883edd14063fa0f298b8b8bbaaea5f0ba893c9/megatron/core/extensions/transformer_engine.py#L2583).
With that, we don't need to modify the subsequent code.
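
For illustration, a sketch of the version gate described above (is_te_min_version mirrors a helper in megatron.core.utils; the exact signature, boundary handling, and message are assumptions, not the committed code):

from megatron.core.utils import is_te_min_version

def fused_apply_rotary_pos_emb_thd(
    t, cu_seqlens, freqs, cp_size=1, cp_rank=0, interleaved=False
):
    # Guard at the entry point so the TE-version branches below never
    # see interleaved=True on a TE build that cannot honor it.
    if interleaved:
        assert is_te_min_version("2.3.0"), (
            "interleaved rotary embeddings require Transformer Engine >= 2.3.0"
        )
    ...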

@huvunvidia huvunvidia requested a review from jaredcasper March 13, 2026 18:50
@svcnvidia-nemo-ci svcnvidia-nemo-ci added Approved All necessary approvals have been made and removed Final Review PR is in the "final review" stage labels Mar 13, 2026
@huvunvidia huvunvidia enabled auto-merge March 13, 2026 21:32
@huvunvidia huvunvidia added this pull request to the merge queue Mar 13, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23071702779

Merged via the queue into NVIDIA:main with commit 87eb3c2 Mar 13, 2026
56 of 59 checks passed
@huvunvidia huvunvidia deleted the huvu/rope_fusion_thd_interleave_main branch March 13, 2026 22:32
HollowMan6 pushed a commit to HollowMan6/Megatron-LM that referenced this pull request Mar 16, 2026

Labels

Approved (All necessary approvals have been made), complexity: low

4 participants