Error:
File "/usr/local/lib/python3.12/dist-packages/mbridge/core/bridge.py", line 196, in load_weights
hf_weights_map = self.safetensor_io.load_some_hf_weight(
File "/usr/local/lib/python3.12/dist-packages/mbridge/core/safetensor_io.py", line 71, in load_some_hf_weight
filename = index[name]
KeyError: 'model.language_model.layers.0.mlp.experts.gate_up_proj'
The cause is that Transformers 5.x no longer stores the MoE expert weights fused; it splits them and saves each expert's projections separately. For example, what used to be a single model.language_model.layers.26.mlp.experts.gate_up_proj tensor is now stored as model.language_model.layers.26.mlp.experts.xxx.gate_proj.weight and model.language_model.layers.26.mlp.experts.xxx.up_proj.weight, one pair per expert.
As a result, the _QWEN3p5TEXT_MOE_MLP_MAPPING in qwen3_5_vl_bridge.py no longer matches the checkpoint layout. Please update it to be compatible with Transformers 5.x.
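One possible workaround is to reassemble the fused tensor from the per-expert shards before handing it to the existing mapping. The sketch below is only an illustration of that idea, not mbridge's actual fix: the function name, the weight-dict interface, and the fused layout (gate rows stacked above up rows, one slab per expert) are all assumptions, and NumPy arrays stand in for real tensors.

```python
import numpy as np

def fuse_expert_weights(hf_weights, layer, num_experts):
    """Rebuild the pre-5.x fused gate_up_proj tensor for one layer from the
    per-expert gate_proj / up_proj weights that Transformers 5.x checkpoints
    store. Key names follow the pattern reported in the traceback; the fused
    layout (gate then up along the output dim) is an assumption."""
    prefix = f"model.language_model.layers.{layer}.mlp.experts"
    per_expert = []
    for e in range(num_experts):
        gate = hf_weights[f"{prefix}.{e}.gate_proj.weight"]
        up = hf_weights[f"{prefix}.{e}.up_proj.weight"]
        # Concatenate gate and up along the output dimension for this expert.
        per_expert.append(np.concatenate([gate, up], axis=0))
    # Stack the experts into a single [num_experts, 2*intermediate, hidden] tensor.
    return np.stack(per_expert, axis=0)
```

A loader patch could call something like this whenever the fused key is missing from the safetensors index but the per-expert keys are present, keeping the existing mapping untouched.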