Add MoE calibration wrapper for GLM-4.7-Flash (Glm4MoeLiteMoE) #2547

Nottlespike wants to merge 1 commit into vllm-project:main from
Conversation
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Code Review
This pull request introduces a calibration wrapper for GLM-4.7-Flash models to ensure all experts are properly calibrated during quantization, preventing suboptimal expert weight quantization. The feedback identifies a potential shape mismatch in the routing logic that could occur if input logits are not flattened and suggests a refactor to the forward method to eliminate redundant code execution paths.
```python
if self.calibrate_all_experts:
    # Send ALL tokens to ALL experts for calibration
    num_tokens = hidden_states.shape[0]
    all_expert_indices = torch.arange(
        self.n_routed_experts, device=hidden_states.device
    ).unsqueeze(0).expand(num_tokens, -1)
    all_expert_weights = torch.ones(
        num_tokens, self.n_routed_experts,
        dtype=hidden_states.dtype,
        device=hidden_states.device
    ) / self.n_routed_experts

    # Run calibration pass through all experts
    _ = self.experts(hidden_states, all_expert_indices, all_expert_weights)

    # Use actual routing for output
    hidden_states = self.experts(hidden_states, topk_indices, topk_weights)
else:
    # Standard routing
    hidden_states = self.experts(hidden_states, topk_indices, topk_weights)
```
The call to self.experts(hidden_states, topk_indices, topk_weights) is duplicated in both branches of the if self.calibrate_all_experts block. This can be refactored to improve maintainability by moving the common call outside the conditional block.
```python
if self.calibrate_all_experts:
    # Send ALL tokens to ALL experts for calibration
    num_tokens = hidden_states.shape[0]
    all_expert_indices = torch.arange(
        self.n_routed_experts, device=hidden_states.device
    ).unsqueeze(0).expand(num_tokens, -1)
    all_expert_weights = torch.ones(
        num_tokens, self.n_routed_experts,
        dtype=hidden_states.dtype,
        device=hidden_states.device
    ) / self.n_routed_experts
    # Run calibration pass through all experts
    _ = self.experts(hidden_states, all_expert_indices, all_expert_weights)
# Standard routing for output
hidden_states = self.experts(hidden_states, topk_indices, topk_weights)
```
Pull request overview
Adds MoE calibration support for GLM-4.7-Flash models by introducing a dedicated calibration wrapper for the Glm4MoeLiteMoE architecture, ensuring expert activation statistics are collected instead of silently skipping MoE calibration.
Changes:
- Added CalibrationGlm4MoeLiteMoE wrapper with GLM-4.7-Flash group-based routing and “all experts see tokens” calibration behavior.
- Registered the new wrapper via llmcompressor.modeling package import side effects.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/llmcompressor/modeling/glm4_moe_lite.py | New MoE calibration wrapper for Glm4MoeLiteMoE, including routing + calibration-only all-expert pass. |
| src/llmcompressor/modeling/__init__.py | Imports the new wrapper to trigger registry registration. |
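Registration-on-import, as noted in the table, typically follows a decorator-registry pattern. A minimal self-contained analogue is sketched below; the real `MoECalibrationModule.register` API in llmcompressor may differ in signature and lookup details:

```python
# Minimal analogue of a decorator-based registry; illustrative only.
MOE_REGISTRY = {}

def register(original_class_name):
    """Record a calibration wrapper under the original module's class name."""
    def decorator(cls):
        MOE_REGISTRY[original_class_name] = cls
        return cls
    return decorator

@register("Glm4MoeLiteMoE")
class CalibrationGlm4MoeLiteMoE:
    """Stand-in for the calibration wrapper; registered when the module is imported."""
    pass

# Importing the defining module runs the decorator, so a calibration
# context can later look up the wrapper by the original class name:
wrapper_cls = MOE_REGISTRY["Glm4MoeLiteMoE"]
```

This is why the only change needed in `__init__.py` is an import: the side effect of importing the module populates the registry.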
```python
num_tokens = hidden_states.shape[0]
all_expert_indices = torch.arange(
    self.n_routed_experts, device=hidden_states.device
).unsqueeze(0).expand(num_tokens, -1)
all_expert_weights = torch.ones(
    num_tokens, self.n_routed_experts,
    dtype=hidden_states.dtype,
    device=hidden_states.device
) / self.n_routed_experts

# Run calibration pass through all experts
_ = self.experts(hidden_states, all_expert_indices, all_expert_weights)
```
The extra calibration-only expert pass (_ = self.experts(...)) doesn’t use its output, but it will still build an autograd graph when calibration runs with grads enabled, increasing memory/compute significantly. Wrap this call (and the temporary tensor construction if desired) in a torch.no_grad() block (or otherwise explicitly disable grads) since it’s only for collecting activation stats.
Suggested change:

```python
with torch.no_grad():
    num_tokens = hidden_states.shape[0]
    all_expert_indices = torch.arange(
        self.n_routed_experts, device=hidden_states.device
    ).unsqueeze(0).expand(num_tokens, -1)
    all_expert_weights = torch.ones(
        num_tokens, self.n_routed_experts,
        dtype=hidden_states.dtype,
        device=hidden_states.device
    ) / self.n_routed_experts
    # Run calibration pass through all experts
    _ = self.experts(hidden_states, all_expert_indices, all_expert_weights)
```
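The effect is easy to verify in isolation: tensors produced under `torch.no_grad()` carry no autograd graph, so the calibration-only pass stops retaining intermediate activations for backward. A minimal check, using a plain `nn.Linear` as a stand-in for an expert layer:

```python
import torch
from torch import nn

lin = nn.Linear(8, 8)   # stand-in for an expert layer with trainable weights
x = torch.randn(4, 8)

with torch.no_grad():
    y_calib = lin(x)    # calibration-style pass: no graph is recorded

y_normal = lin(x)       # ordinary pass: graph is built for backprop

assert y_calib.requires_grad is False
assert y_normal.requires_grad is True
```

Forward hooks used for collecting activation statistics still fire inside a `no_grad` block, so calibration observers are unaffected.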
```python
@MoECalibrationModule.register("Glm4MoeLiteMoE")
class CalibrationGlm4MoeLiteMoE(MoECalibrationModule):
    """
    Calibration version of Glm4MoeLiteMoE that sends all tokens to all experts.

    GLM-4.7-Flash uses Glm4MoeLiteNaiveMoe which has a batched expert interface:
    experts(hidden_states, top_k_index, top_k_weights)

    During calibration with calibrate_all_experts=True, we override routing to
    send all tokens to all experts, ensuring proper quantization statistics.
    """
```
This introduces a new MoE calibration wrapper but there’s no corresponding unit test under tests/llmcompressor/modeling/ (other calibration wrappers like GLM4 MoE have targeted tests). Add a test that (1) verifies moe_calibration_context replaces Glm4MoeLiteMoE modules with CalibrationGlm4MoeLiteMoE, and (2) when calibrate_all_experts=True, all routed experts receive a forward call (e.g., via forward hooks), similar to tests/llmcompressor/modeling/test_calib_glm4_moe.py.
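The hook-based part of such a test can be sketched as below. This uses a toy `ModuleList`-based MoE as a stand-in (names `ToyCalibMoE` and `experts_seen` are illustrative); the real test would build a small Glm4MoeLite config, enter `moe_calibration_context`, and hook the actual per-expert modules:

```python
import torch
from torch import nn

class ToyCalibMoE(nn.Module):
    """Toy stand-in: with calibrate_all_experts=True, every expert runs a pass."""
    def __init__(self, n_experts=4, dim=8, calibrate_all_experts=True):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.calibrate_all_experts = calibrate_all_experts

    def forward(self, x):
        if self.calibrate_all_experts:
            for expert in self.experts:
                _ = expert(x)  # calibration-only pass through every expert
        return x

def experts_seen(moe, x):
    """Run one forward and return the indices of experts whose forward fired."""
    seen = set()
    hooks = [
        expert.register_forward_hook(lambda m, i, o, idx=idx: seen.add(idx))
        for idx, expert in enumerate(moe.experts)
    ]
    moe(x)
    for h in hooks:
        h.remove()
    return seen

moe = ToyCalibMoE()
assert experts_seen(moe, torch.randn(2, 8)) == {0, 1, 2, 3}
```

The same `experts_seen` pattern transfers directly to the real wrapper: assert the returned set equals `range(n_routed_experts)` when `calibrate_all_experts=True`.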
GLM-4.7-Flash Lite stores routed experts as packed 3D tensors in Glm4MoeLiteNaiveMoe, so the existing calibration path not only skipped MoE-aware calibration but also kept routed experts invisible to Linear-targeted quantization. Unpack the routed experts into per-expert Glm4MoeLiteMLP modules, preserve the unpacked structure for quantization and checkpoint save, and add focused modeling tests for expert activation, output parity, and Linear visibility.

Signed-off-by: Jason Lu <[email protected]>
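The unpacking described in the commit message can be sketched as follows. The packed-tensor shapes and the SiLU gated-MLP structure here are assumptions based on common MoE checkpoint layouts, not the exact Glm4MoeLiteNaiveMoe parameter format:

```python
import torch
from torch import nn
import torch.nn.functional as F

class ExpertMLP(nn.Module):
    """One routed expert rebuilt as plain nn.Linear layers, visible to quantization."""
    def __init__(self, gate_w, up_w, down_w):
        super().__init__()
        self.gate_proj = nn.Linear(gate_w.shape[1], gate_w.shape[0], bias=False)
        self.up_proj = nn.Linear(up_w.shape[1], up_w.shape[0], bias=False)
        self.down_proj = nn.Linear(down_w.shape[1], down_w.shape[0], bias=False)
        with torch.no_grad():
            self.gate_proj.weight.copy_(gate_w)
            self.up_proj.weight.copy_(up_w)
            self.down_proj.weight.copy_(down_w)

    def forward(self, x):
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

def unpack_experts(gate_up, down):
    # Assumed shapes: gate_up (n_experts, 2 * intermediate, hidden),
    #                 down    (n_experts, hidden, intermediate)
    inter = gate_up.shape[1] // 2
    return nn.ModuleList(
        ExpertMLP(gate_up[e, :inter], gate_up[e, inter:], down[e])
        for e in range(gate_up.shape[0])
    )
```

The point of the rewrite is the module type: a quantization scheme that targets `nn.Linear` skips a packed 3D `nn.Parameter` entirely, but sees every `gate_proj`/`up_proj`/`down_proj` after unpacking.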
Force-pushed from d2c0afa to e460938.
brian-dellabetta left a comment:
Hi @Nottlespike, thanks for preparing this. It looks like a lot of the code is shared with what is in src/llmcompressor/modeling/glm_moe_dsa.py. Have you explored what it would look like to import and subclass the classes from that file directly? I know transformers sticks to the approach of no shared code across model definitions, but given that we want to apply the same operation to the 3D expert tensors in both, maybe we won't have to repeat our code.
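One shape the suggested reuse could take is sketched below. All class, attribute, and method names here are placeholders; the actual interfaces in glm_moe_dsa.py may differ:

```python
# Placeholder sketch of subclass-based reuse; not the real glm_moe_dsa.py API.
class CalibrationGlmMoeDSA:
    """Hypothetical shared base handling 3D-expert unpacking and calibration."""
    shared_experts_attr = "shared_expert"  # attribute name on the wrapped module

    def __init__(self, original):
        self.original = original

    def unpack_experts(self):
        # Shared unpacking logic would live here, once, for both model families.
        return f"unpacked via {type(self).__name__}"

class CalibrationGlm4MoeLiteMoE(CalibrationGlmMoeDSA):
    """Lite variant: override only what differs (naming and routing params)."""
    shared_experts_attr = "shared_experts"
```

The subclass inherits the unpacking and calibration machinery and only redeclares the points of divergence the PR description lists (shared_experts naming, routing parameters), which is the deduplication the comment is after.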
Summary
GLM-4.7-Flash uses a separate MoE class (Glm4MoeLiteMoE) that is not covered by the existing Glm4MoeMoE wrapper. Without this fix, MoE calibration is silently skipped for GLM-4.7-Flash models, resulting in quantization that doesn't properly calibrate expert weights.

Problem
The model zai-org/GLM-4.7-Flash (31B MoE) uses Glm4MoeLiteMoE, which has a different architecture than Glm4MoeMoE:
- shared_experts attribute (not shared_expert)
- Glm4MoeLiteNaiveMoe experts interface: (hidden_states, topk_indices, topk_weights)
- n_group, topk_group routing parameters

When quantizing with NVFP4, the MoE calibration context manager checks for registered wrappers, but Glm4MoeLiteMoE doesn't match Glm4MoeMoE, so calibration silently falls back to standard forward passes without collecting expert activation statistics.

Solution
Add a CalibrationGlm4MoeLiteMoE wrapper class that:
- Targets the Glm4MoeLiteMoE class specifically
- Implements route_tokens_to_experts() for proper group-based routing
- Provides a forward() that routes through all experts

Changes
- src/llmcompressor/modeling/glm4_moe_lite.py - New 119-line wrapper
- src/llmcompressor/modeling/__init__.py - Import the new wrapper

Testing
Verified that quantization now shows:
Instead of the previous behavior, where no MoE modules were detected for Glm4MoeLiteMoE.

Hardware Tested