enable flashinfer moe kernel for DP + EP#36838

Open
czhu-cohere wants to merge 2 commits intovllm-project:mainfrom
czhu-cohere:czhu/flashinfer-dp-ep

Conversation

@czhu-cohere
Contributor

@czhu-cohere czhu-cohere commented Mar 12, 2026

Purpose

Previously, the BF16 flashinfer moe kernel was disabled when dp > 1. The kernel itself should be able to support this configuration; we just need to enable it on the vLLM side.

Test Plan

  • pytest tests/kernels/moe/test_unquantized_backend_selection.py
  • Run gsm8k with BF16 Qwen3-30B-A3B on 2xB200 (DP=2, EP=2) and compare the results across the different moe backends.

server command

# triton/default backend
vllm serve Qwen/Qwen3-30B-A3B \
  --data-parallel-size 2 \
  --enable-expert-parallel \
  --trust-remote-code \
  --port 8000

# flashinfer cutlass
VLLM_USE_FLASHINFER_MOE_FP16=1 VLLM_FLASHINFER_MOE_BACKEND=throughput vllm serve Qwen/Qwen3-30B-A3B \
  --data-parallel-size 2 \
  --enable-expert-parallel \
  --trust-remote-code \
  --port 8000

# flashinfer trtllm
VLLM_USE_FLASHINFER_MOE_FP16=1 VLLM_FLASHINFER_MOE_BACKEND=latency vllm serve Qwen/Qwen3-30B-A3B \
  --data-parallel-size 2 \
  --enable-expert-parallel \
  --trust-remote-code \
  --port 8000

test command

python -m lm_eval \
  --model local-completions \
  --model_args "model=Qwen/Qwen3-30B-A3B,base_url=http://localhost:8000/v1/completions,num_concurrent=128,max_retries=5,tokenized_requests=False,tokenizer=Qwen/Qwen3-30B-A3B" \
  --tasks gsm8k_cot \
  --batch_size auto \
  --log_samples \
  --output_path /tmp/lm_eval_qwen_dp2_ep

Test Result

| Backend | flexible-extract | stderr |
|---|---|---|
| Triton (default) | 0.8893 | ±0.0086 |
| FlashInfer CUTLASS (throughput) | 0.8961 | ±0.0084 |
| FlashInfer TRTLLM (latency) | 0.8976 | ±0.0083 |
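As a quick sanity check, the three scores in the table agree within their combined standard errors. A small sketch of that comparison (the dictionary keys are just labels for the table rows above):

```python
# gsm8k flexible-extract scores and stderr from the results table above.
results = {
    "triton": (0.8893, 0.0086),
    "flashinfer_cutlass": (0.8961, 0.0084),
    "flashinfer_trtllm": (0.8976, 0.0083),
}

def within_error(a, b):
    """True if two (mean, stderr) pairs overlap within their combined stderr."""
    (m1, s1), (m2, s2) = a, b
    return abs(m1 - m2) <= (s1 + s2)

baseline = results["triton"]
for name, score in results.items():
    # All three backends overlap with the Triton baseline.
    print(name, "matches baseline:", within_error(baseline, score))
```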

pytest tests/kernels/moe/test_unquantized_backend_selection.py passes.


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: root <conway.zhu@cohere.com>
@czhu-cohere czhu-cohere force-pushed the czhu/flashinfer-dp-ep branch from 48b7786 to ff3ff57 Compare March 12, 2026 02:14
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request enables the FlashInfer CUTLASS MoE kernel for configurations using both Data Parallelism (DP) and Expert Parallelism (EP). The changes involve removing the restriction that prevented this kernel from being selected when DP is active. While the logic change appears correct, there is a lack of corresponding test updates to validate this new capability, which is a significant concern for ensuring correctness.

Comment on lines 97 to 101
```python
flashinfer_cutlass_available = (
    has_flashinfer_cutlass_fused_moe()
    and use_ep
    and (not use_dp)
    and current_platform.has_device_capability(90)
)
```
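Per the review summary, the PR removes the `(not use_dp)` term from this predicate. A minimal self-contained sketch of the updated condition, with `has_flashinfer_cutlass_fused_moe` and the device-capability check stubbed out as assumptions (the real helpers live in vLLM):

```python
# Hypothetical stand-ins for vLLM's real helpers, for illustration only.
def has_flashinfer_cutlass_fused_moe() -> bool:
    return True  # assume the FlashInfer CUTLASS kernel is importable

def has_device_capability(major: int) -> bool:
    return True  # assume an SM90+ GPU (e.g. B200)

def flashinfer_cutlass_available(use_ep: bool, use_dp: bool) -> bool:
    # After this PR the DP restriction is dropped: use_dp is no longer
    # consulted; only EP and an SM90+ device are required.
    return (
        has_flashinfer_cutlass_fused_moe()
        and use_ep
        and has_device_capability(90)
    )

print(flashinfer_cutlass_available(use_ep=True, use_dp=True))    # now True
print(flashinfer_cutlass_available(use_ep=False, use_dp=False))  # still False
```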

Severity: high

This change enables the FlashInfer CUTLASS MoE kernel for configurations with Data Parallelism (use_dp=True). However, the corresponding tests in tests/kernels/moe/test_unquantized_backend_selection.py have not been updated to reflect this. The existing test test_select_cuda_flashinfer_cutlass_backend explicitly sets use_dp=False and includes a comment stating that CUTLASS does not support DP. To ensure the correctness of this feature and prevent future regressions, please add a new test case that validates the behavior when use_dp=True.

Contributor Author

added test
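A sketch of what the added DP test case might look like (the selection helper and names here are hypothetical stand-ins; the real test lives in tests/kernels/moe/test_unquantized_backend_selection.py):

```python
import pytest

# Hypothetical selection predicate standing in for vLLM's backend logic.
def select_backend(use_ep: bool, use_dp: bool, sm: int = 90) -> str:
    if use_ep and sm >= 90:
        return "flashinfer_cutlass"  # DP no longer disqualifies CUTLASS
    return "triton"

@pytest.mark.parametrize("use_dp", [False, True])
def test_flashinfer_cutlass_selected_with_ep(use_dp):
    # With EP enabled, CUTLASS should be chosen regardless of DP.
    assert select_backend(use_ep=True, use_dp=use_dp) == "flashinfer_cutlass"
```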

Signed-off-by: root <conway.zhu@cohere.com>
@czhu-cohere czhu-cohere marked this pull request as ready for review March 12, 2026 02:22
