
Conversation

@samuellees (Contributor) commented Nov 3, 2025

Dependency

Requires flashinfer-python >= 0.5.0.
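
For reference, a minimal install command matching that requirement (standard pip usage, not taken from this PR):

```bash
pip install "flashinfer-python>=0.5.0"
```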

Usage

```bash
python3 -m sglang.launch_server --model-path Qwen3-Next/Qwen3-Next-80B-A3B-Instruct-FP8 \
  --tp-size 4 --ep-size 4 --cuda-graph-bs 1 2 4 8 16 32 64 128 256 512 1024 \
  --mem-fraction-static 0.7 --moe-runner-backend flashinfer_trtllm \
  --attention-backend triton --quantization fp8 --mamba-ssm-dtype bfloat16
```

Accuracy Tests

Triton

| Tasks | Version | Filter           | n-shot | Metric      | Value  | Stderr   |
|-------|---------|------------------|--------|-------------|--------|----------|
| gsm8k | 3       | flexible-extract | 8      | exact_match | 0.9621 | ± 0.0053 |
|       |         | strict-match     | 8      | exact_match | 0.8279 | ± 0.0104 |

Flashinfer TRTLLM-GEN-MoE

| Tasks | Version | Filter           | n-shot | Metric      | Value  | Stderr   |
|-------|---------|------------------|--------|-------------|--------|----------|
| gsm8k | 3       | flexible-extract | 8      | exact_match | 0.9629 | ± 0.0052 |
|       |         | strict-match     | 8      | exact_match | 0.8294 | ± 0.0104 |
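
The tables above are in lm-evaluation-harness output format. As a hedged sketch, numbers like these could be reproduced against the running server roughly as follows (the base_url, endpoint path, and server port are assumptions, not taken from this PR):

```bash
# Assumes the sglang server launched above exposes an OpenAI-compatible
# completions endpoint on localhost:30000; adjust base_url/model as needed.
lm_eval --model local-completions \
  --model_args model=Qwen3-Next/Qwen3-Next-80B-A3B-Instruct-FP8,base_url=http://127.0.0.1:30000/v1/completions \
  --tasks gsm8k \
  --num_fewshot 8
```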

Benchmarking and Profiling

Triton

| Concurrency | Mean TPOT (ms) | Output token throughput (tok/s) |
|-------------|----------------|---------------------------------|
| 1           | 7.63           | 130.8                           |
| 4           | 8.36           | 476.76                          |
| 16          | 9.52           | 1668.3                          |
| 64          | 11.89          | 5327.62                         |
| 256         | 21.72          | 11633.8                         |
| 512         | 32.82          | 15341.72                        |

Flashinfer TRTLLM-GEN-MoE

| Concurrency | Mean TPOT (ms) | Output token throughput (tok/s) |
|-------------|----------------|---------------------------------|
| 1           | 6.04           | 165.07                          |
| 4           | 6.96           | 572.5                           |
| 16          | 8.08           | 1960.68                         |
| 64          | 10.48          | 6031.62                         |
| 256         | 20.08          | 12555.37                        |
| 512         | 31.84          | 15804.58                        |
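
For context, TPOT and throughput figures like those above are typically gathered with sglang's serving benchmark. A minimal sketch, assuming the server above is running (the prompt count and sequence lengths are illustrative, not the settings used for these tables):

```bash
# Repeat with --max-concurrency 1 / 4 / 16 / 64 / 256 / 512 to sweep the table.
python3 -m sglang.bench_serving --backend sglang \
  --dataset-name random --random-input-len 1024 --random-output-len 1024 \
  --num-prompts 512 --max-concurrency 64
```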

Motivation

Modifications

Checklist

@gemini-code-assist (bot) commented

Summary of Changes

Hello @samuellees, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates the Flashinfer TRTLLM-GEN-MoE FP8 blockwise kernel to optimize the performance of Qwen3-Next models, particularly on Blackwell GPUs. The changes introduce dynamic routing method selection for Mixture-of-Experts (MoE) layers and demonstrate notable gains in output token throughput through comprehensive benchmarking. This enhancement aims to leverage specialized hardware capabilities for more efficient inference.

Highlights

  • Flashinfer TRTLLM-GEN-MoE Integration: Enabled the Flashinfer TRTLLM-GEN-MoE FP8 blockwise kernel for Qwen3-Next models, specifically targeting Blackwell architecture, to leverage specialized hardware capabilities.
  • Performance Improvement: Benchmarks demonstrate significant improvements in output token throughput when using Flashinfer TRTLLM-GEN-MoE compared to the Triton backend, especially at higher concurrencies, indicating more efficient inference.
  • Dynamic MoE Routing: Implemented dynamic handling of MoE routing methods by introducing routing_method_type parameters, allowing for flexible selection of routing strategies (e.g., DeepSeekV3, RenormalizeNaive) within the Flashinfer kernel.
  • Code Refactoring for Flexibility: Modified MoE layer constructors to accept and pass through arbitrary keyword arguments, enhancing modularity and future extensibility for different MoE configurations (see the sketch after this list).
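
To make the kwargs pass-through concrete, here is a minimal sketch of the pattern the highlights describe. FusedMoE, pick_routing, and the enum members shown are simplified placeholders, not the PR's actual code; in the real change, flashinfer's RoutingMethodType is imported rather than redefined:

```python
from enum import IntEnum, auto

class RoutingMethodType(IntEnum):
    # Illustrative stand-in for flashinfer's enum; member values here
    # are arbitrary and do not match the real library.
    DeepSeekV3 = auto()
    RenormalizeNaive = auto()

class FusedMoE:
    def __init__(self, num_experts: int, top_k: int, **kwargs):
        # Accepting **kwargs lets model code attach optional extras such as
        # routing_method_type without changing every constructor signature.
        self.num_experts = num_experts
        self.top_k = top_k
        self.routing_method_type = kwargs.get("routing_method_type")

def pick_routing(layer: FusedMoE) -> RoutingMethodType:
    # Fall back to DeepSeekV3-style routing when a model did not set one,
    # mirroring the default discussed in the review comments below.
    rmt = getattr(layer, "routing_method_type", None)
    return RoutingMethodType.DeepSeekV3 if rmt is None else rmt

# Example: a Qwen-style layer opting into a different routing method.
moe = FusedMoE(num_experts=512, top_k=10,
               routing_method_type=RoutingMethodType.RenormalizeNaive)
print(pick_routing(moe).name)  # -> RenormalizeNaive
```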

@gemini-code-assist (bot) commented

Summary of Changes

Hello @samuellees, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates the Flashinfer TRTLLM-GEN-MoE FP8 blockwise kernel into the system, specifically for the Qwen3-Next model running on Blackwell architecture. The primary goal is to enhance the performance of Mixture-of-Experts (MoE) computations through optimized FP8 quantization and kernel execution. The changes facilitate the use of Flashinfer's specialized MoE operations, leading to improved throughput as validated by comprehensive benchmarking results included in the PR description.

Highlights

  • Flashinfer TRTLLM-GEN-MoE Integration: Enabled the Flashinfer TRTLLM-GEN-MoE FP8 blockwise kernel for Qwen3-Next models, specifically targeting Blackwell GPUs, to leverage specialized hardware optimizations.
  • Performance Improvement: Benchmarks demonstrate significant improvements in output token throughput when using Flashinfer TRTLLM-GEN-MoE compared to the Triton backend, especially at higher concurrencies, indicating enhanced efficiency.
  • Dynamic MoE Routing: Introduced dynamic handling of MoE routing methods, allowing the trtllm_fp8_block_scale_moe kernel to adapt its behavior (e.g., routing_logits type conversion) based on the specified RoutingMethodType.
  • Codebase Adaptations: Modified MoE layer constructors and FP8 quantization logic across several files to properly support the new Flashinfer kernel and its associated routing configurations.

@gemini-code-assist (bot) left a comment

Code Review

This pull request enables the Flashinfer TRTLLM-GEN-MoE FP8 blockwise kernel, specifically for Qwen3-Next on Blackwell hardware, which results in significant performance improvements as shown in the benchmarks. The changes are well-structured, introducing a routing_method_type parameter that is plumbed through the MoE layers to the kernel call. This design is flexible and maintains backward compatibility. The implementation looks solid. I have one suggestion regarding import statements to improve code organization and maintainability.

Comment on lines +1189 to 1190:

```python
from flashinfer import RoutingMethodType
from flashinfer.fused_moe import trtllm_fp8_block_scale_moe
```

Severity: high

For better code organization and to avoid a potential ImportError, it's recommended to move these flashinfer imports to the top of the file. The current local import can cause a runtime crash if flashinfer is not installed, as the logic in should_use_flashinfer_trtllm_moe might incorrectly evaluate to True in that scenario.

Guarding the imports at the top level with is_flashinfer_available would make the code more robust. For example:

```python
# In python/sglang/srt/layers/quantization/fp8.py
# First, add is_flashinfer_available to the import from sglang.srt.utils
from sglang.srt.utils import (
    # ... existing imports
    is_flashinfer_available,
)

if is_flashinfer_available():
    from flashinfer import RoutingMethodType
    from flashinfer.fused_moe import trtllm_fp8_block_scale_moe
else:
    RoutingMethodType = None
    trtllm_fp8_block_scale_moe = None
```

Then, you can remove the local imports from this method. This change centralizes dependency management and improves code readability and robustness.

@gemini-code-assist (bot) left a comment

Code Review

This pull request enables the Flashinfer TRTLLM-GEN-MoE FP8 blockwise kernel for Qwen3-Next models on Blackwell hardware, which shows significant performance improvements in the provided benchmarks. The changes involve adding a routing_method_type to configure the MoE routing method, allowing different models like Qwen to use their specific routing logic. The implementation is clean and correctly applies the new routing type for Qwen models.

I have one suggestion to make the code more robust by ensuring the routing_method_type does not default to None, which could cause runtime errors. Other than that, the changes look good and are a valuable performance enhancement.

Comment on lines +1210 to +1212:

```python
routing_method_type = getattr(
    layer, "routing_method_type", RoutingMethodType.DeepSeekV3
)
```

Severity: high

The current logic for getting routing_method_type can result in None being passed to the trtllm_fp8_block_scale_moe kernel. If a model using FlashInferFusedMoE does not specify routing_method_type, it defaults to None in FusedMoE.__init__. In this case, getattr(layer, "routing_method_type", ...) will return None.

The kernel previously used a hardcoded value and likely does not handle None, which could lead to a runtime error. To make this more robust and ensure backward compatibility, it's better to explicitly check for None and fall back to RoutingMethodType.DeepSeekV3.

Suggested change:

```diff
-routing_method_type = getattr(
-    layer, "routing_method_type", RoutingMethodType.DeepSeekV3
-)
+routing_method_type = getattr(layer, "routing_method_type", None)
+if routing_method_type is None:
+    routing_method_type = RoutingMethodType.DeepSeekV3
```

@b8zhong added the run-ci label Nov 4, 2025