fix(dummy_loader): always create moe_mesh for expert-sharded weights#1028
Open
JamesBrianD wants to merge 1 commit into sgl-project:main from
Conversation
The `_load_dummy_weights` method only created a `moe_mesh` with `("expert", "tensor")` axes when `ep_size > 1`, falling back to `self.mesh` with `("data", "tensor")` axes otherwise. This caused a crash, because `NamedSharding` with `P("expert", ...)` requires a mesh that has an `"expert"` axis. The real weight loading paths always create `moe_mesh` regardless of `ep_size`; this change aligns the dummy loader to match.
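The failure mode can be sketched in a few lines of JAX. This is a minimal illustration of the fix described above, not the actual `dummy_loader` code; `build_moe_mesh` and its parameters are hypothetical names. The point is that a mesh must declare an `"expert"` axis for `P("expert", ...)` to be valid, even when that axis has size 1 (i.e. `ep_size == 1`):

```python
import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P


def build_moe_mesh(ep_size: int, tp_size: int) -> Mesh:
    """Always build a mesh with ("expert", "tensor") axes.

    Hypothetical helper mirroring the fix: the "expert" axis is created
    even when ep_size == 1, so P("expert", ...) shardings resolve.
    """
    devices = np.array(jax.devices()[: ep_size * tp_size]).reshape(ep_size, tp_size)
    return Mesh(devices, ("expert", "tensor"))


# With the fix, expert-sharded weights always target a mesh that has
# an "expert" axis, even in the ep_size == 1 case.
moe_mesh = build_moe_mesh(ep_size=1, tp_size=1)
sharding = NamedSharding(moe_mesh, P("expert", None))
```

Before the fix, the equivalent of `NamedSharding(self.mesh, P("expert", None))` was attempted against a `("data", "tensor")` mesh, which raises because the spec names an axis the mesh does not define.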
Fixes sgl-project#1022
Force-pushed from c29f74f to 611feb6
Summary

`_load_dummy_weights` only created a `moe_mesh` with `("expert", "tensor")` axes when `ep_size > 1`, falling back to `self.mesh` with `("data", "tensor")` axes otherwise. This caused `NamedSharding(self.mesh, P("expert", ...))` to crash because that mesh has no `"expert"` axis. This change always creates `moe_mesh` when the sharding contains `"expert"`, matching the real weight loading paths (lines 2032 and 2145 in `weight_utils.py`). It also adds `axis_types=Explicit`, consistent with the other mesh constructions.

Test plan

Qwen3-30B-A3B with `--load-format dummy --tp-size=16 --nnodes=2`: dummy weights load successfully and the server enters the precompile stage.

Fixes #1022