
[MoE][ddp] Enable distributed MoE calibration replacement #2449

Draft
dichn wants to merge 3 commits into vllm-project:main from dichn:ddp_moe_replace

Conversation

Contributor

@dichn dichn commented Mar 6, 2026

SUMMARY:
Extends moe_calibration_context to support PyTorch DDP for parallel MoE module replacement across multiple ranks during calibration.

Key changes:

  • Pre-replacement verification: all_gather to ensure consistent module counts across ranks, preventing structure mismatch errors
  • Post-replacement barrier: synchronizes all ranks before calibration begins
  • Post-restoration barrier: ensures clean context manager exit
  • Rank-aware logging: only rank 0 shows progress bars and info logs

TEST PLAN:
Tests include single-rank and DDP integration coverage.
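The verification/barrier flow described in the key changes can be sketched as follows. This is a minimal, hypothetical illustration of the pattern, not the PR's actual code; `replace_one` and `restore_one` are placeholder callables standing in for the real module-swap logic.

```python
from contextlib import contextmanager

import torch
import torch.distributed as dist


def _verify_counts(num_modules: int) -> None:
    # Pre-replacement check: all_gather each rank's module count and fail
    # fast on mismatch, rather than deadlocking later in a collective.
    count = torch.tensor([num_modules])
    all_counts = [torch.zeros_like(count) for _ in range(dist.get_world_size())]
    dist.all_gather(all_counts, count)
    if any(c.item() != num_modules for c in all_counts):
        raise RuntimeError(
            f"MoE module count mismatch across ranks: {[c.item() for c in all_counts]}"
        )


@contextmanager
def moe_calibration_sketch(modules, replace_one, restore_one):
    distributed = dist.is_available() and dist.is_initialized()
    if distributed:
        _verify_counts(len(modules))  # pre-replacement verification
    replaced = [replace_one(m) for m in modules]
    if distributed:
        dist.barrier()  # post-replacement: all ranks finished swapping
    try:
        yield replaced
    finally:
        for m in replaced:
            restore_one(m)
        if distributed:
            dist.barrier()  # post-restoration: clean context-manager exit
```

The same code path works single-rank (the `distributed` flag is simply false), which mirrors the test plan's single-rank plus DDP coverage.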

@dichn dichn requested review from dsikka and kylesayrs as code owners March 6, 2026 09:03
@github-actions

github-actions bot commented Mar 6, 2026

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the moe_calibration_context to enable distributed MoE module replacement and calibration across multiple PyTorch DDP ranks. This ensures consistent model structures and synchronized operations in a distributed environment, improving the robustness and scalability of MoE calibration.

Highlights

  • Distributed Module Verification: Implemented pre-replacement verification using all_gather to ensure all DDP ranks have identical MoE module counts, preventing structural mismatches.
  • Distributed Synchronization: Added dist.barrier() calls to synchronize all ranks after MoE module replacement and again after restoration, ensuring coordinated execution during calibration.
  • Rank-Aware Logging: Modified logging and progress bar display to be rank-aware, showing output only on rank 0 to avoid redundant console spam in distributed settings.
  • Import Refactoring: Refactored the import path for is_distributed to use compressed_tensors.offload.dist_utils for better organization.
  • New Test Coverage: Introduced new unit and integration tests to cover both single-rank and DDP scenarios for the MoE calibration context.


Changelog
  • src/llmcompressor/modeling/moe_context.py
    • Updated the import statement for is_distributed to point to compressed_tensors.offload.dist_utils.
    • Added logic to verify that all DDP ranks have the same number of MoE modules before replacement, raising a RuntimeError if inconsistencies are found.
    • Modified the progress bar and logger info messages to only display on rank 0 when running in a distributed environment.
    • Inserted dist.barrier() calls after module replacement and after module restoration to ensure all ranks are synchronized.
    • Added debug logging for rank-specific completion of replacement and restoration.
  • tests/llmcompressor/modeling/test_moe_context.py
    • Added a new test file for single-rank MoE calibration context.
    • Included test_moe_context_replacement to verify correct replacement and restoration of MoE modules.
    • Added test_moe_context_calibrate_flag to ensure the calibrate_all_experts flag is passed correctly.
  • tests/llmcompressor/modeling/test_moe_context_ddp.py
    • Added a new test file dedicated to DDP integration tests for the MoE calibration context.
    • Implemented a ddp_environment fixture to initialize the DDP environment for tests.
    • Included test_moe_context_ddp to verify MoE module replacement and consistency across multiple DDP ranks.
Activity
  • No human activity (comments, reviews, progress updates) has been recorded for this pull request yet.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request effectively extends moe_calibration_context to support distributed data parallel (DDP) environments. The changes, including pre-replacement verification of module counts, proper synchronization with barriers, and rank-aware logging, are well-implemented. The addition of both single-rank and DDP tests is also a great improvement. I've identified a potential issue in both the implementation and the new DDP test related to device placement for tensors used in distributed communication, which could lead to runtime errors with certain backends like nccl. My review includes suggestions to make the code more robust in this regard.
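The device-placement issue the review flags can be illustrated as: collectives on the nccl backend only communicate CUDA tensors on the rank's own device, while gloo accepts CPU tensors. A hedged sketch of a backend-aware helper (not the PR's code; the function name is an assumption):

```python
import torch
import torch.distributed as dist


def tensor_for_collective(value: int) -> torch.Tensor:
    """Create a tensor on the device the active backend can communicate."""
    if dist.get_backend() == dist.Backend.NCCL:
        # nccl only moves CUDA tensors; use this rank's current device
        return torch.tensor([value], device=f"cuda:{torch.cuda.current_device()}")
    # gloo is happy with CPU tensors
    return torch.tensor([value])
```

Creating the count tensor this way before the all_gather would make the verification step robust across both backends.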

@dichn dichn marked this pull request as draft March 6, 2026 09:07
dichn and others added 2 commits March 6, 2026 17:17
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Di Chen <dichen@redhat.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Di Chen <dichen@redhat.com>
@dsikka dsikka requested a review from HDCharles March 6, 2026 17:34
@dichn dichn marked this pull request as ready for review March 9, 2026 14:06
Collaborator

@kylesayrs kylesayrs left a comment

Seems like you still need to actually implement the distributed workload logic, i.e. assigning modules to ranks, processing them, then broadcasting the results.

I recommend looking at the following algorithm, as I think this may be the best way to support distributed MoE Calibration Replacement: vllm-project/compressed-tensors#624
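A hypothetical sketch of the workload split the reviewer suggests: round-robin ownership of modules across ranks, local processing of each rank's share, then a broadcast from each owner so every rank ends up with all results. The names and the round-robin scheme are assumptions here; see the linked compressed-tensors PR for the actual algorithm.

```python
import torch
import torch.distributed as dist


def owner_rank(index: int, world_size: int) -> int:
    # Round-robin assignment; naturally handles module counts that are
    # not evenly divisible by the number of ranks.
    return index % world_size


def my_share(modules, rank: int, world_size: int):
    """The (index, module) pairs this rank is responsible for processing."""
    return [(i, m) for i, m in enumerate(modules) if owner_rank(i, world_size) == rank]


def broadcast_results(tensors, world_size: int) -> None:
    # After local processing, each result tensor is broadcast from the
    # rank that owns (produced) it, so all ranks converge on the full set.
    for i, t in enumerate(tensors):
        dist.broadcast(t, src=owner_rank(i, world_size))
```

With this split, ranks no longer duplicate work: each processes roughly 1/world_size of the modules and receives the rest via broadcast.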

all_counts = [torch.zeros_like(num_modules) for _ in range(world_size)]
dist.all_gather(all_counts, num_modules)

if not all(count.item() == num_modules.item() for count in all_counts):

Sometimes, the number of modules will not be evenly divisible by the number of ranks, so this check can be harmful.


This is just checking that each rank has the same total number of modules, not how they are assigned, I think.

if _is_registered(class_name, MoECalibrationModule):
modules_to_replace.append((name, module, class_name))

# Step 1.5: Verify all ranks have same number of modules (distributed mode)

It doesn't look like you actually assign modules to ranks. It seems like right now, all ranks are still doing duplicate work.

@dichn dichn marked this pull request as draft March 9, 2026 22:42
if modules_to_replace:
logger.info(f"Found {len(modules_to_replace)} MoE modules to replace")
# Only rank 0 shows progress bar and logs
show_progress = not is_distributed() or dist.get_rank() == 0

Can use is_rank0; also probably simpler to just inline this into the if statement.
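The suggested simplification could look like this, assuming an `is_rank0` helper along these lines (compressed-tensors may provide its own; this definition is an assumption):

```python
import torch.distributed as dist


def is_rank0() -> bool:
    """True when not running distributed, or when this process is rank 0."""
    return not (dist.is_available() and dist.is_initialized()) or dist.get_rank() == 0


# Inlined at the call site, per the review suggestion:
# if is_rank0():
#     logger.info(f"Found {len(modules_to_replace)} MoE modules to replace")
```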

if _is_registered(class_name, MoECalibrationModule):
modules_to_replace.append((name, module, class_name))

# Step 1.5: Verify all ranks have same number of modules (distributed mode)

I also don't think this check is necessary
