Conversation

@BBuf (Collaborator) commented Oct 31, 2025

Summary

  • Refactored tuning and benchmark scripts under benchmark/kernels/fused_moe_triton/ to remove duplication and improve maintainability.
  • Added full Expert Parallelism (EP) support and MLLM model handling to tuning scripts.
  • Updated README with EP usage guidance and detailed instructions for the separate-kernel tuning workflow.
  • Fixed tuning-script CPU overhead (benchmark comparison below).

Checklist

  • Extracted shared utilities to common_utils.py
  • EP mode support in both tuning scripts (see the sketch after this list)
  • MLLM support verified via unified model parsing
  • README updated with examples and guidance
  • Import and typing fixes for direct execution
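
For intuition, a minimal sketch of the expert-partitioning idea behind EP mode; the function name and the even-split assumption below are illustrative only, not the scripts' actual API:

def local_expert_range(num_experts: int, ep_size: int, ep_rank: int) -> range:
    # Expert Parallelism (EP) shards the expert weights across ranks, so each
    # GPU tunes and runs the fused MoE kernel only for its local experts.
    assert num_experts % ep_size == 0, "assumes experts divide evenly across EP ranks"
    per_rank = num_experts // ep_size
    return range(ep_rank * per_rank, (ep_rank + 1) * per_rank)

# Example: Mixtral-8x7B has 8 experts; with ep_size=2, rank 1 owns experts 4..7.
print(list(local_expert_range(num_experts=8, ep_size=2, ep_rank=1)))  # [4, 5, 6, 7]

In EP mode each rank therefore only sees its local expert count, which is presumably the shape the EP-aware tuning path has to account for when generating kernel configs.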

Fixed tuning-script CPU overhead; reproduction steps and benchmark comparison:

python benchmark/kernels/fused_moe_triton/tuning_fused_moe_triton.py \
    --model mistralai/Mixtral-8x7B-Instruct-v0.1 \
    --tune

python3 -m sglang.launch_server --model mistralai/Mixtral-8x7B-Instruct-v0.1 --tp-size 2

python3 -m sglang.bench_serving --backend sglang-oai --dataset-name random --random-input-len 4096 --random-output-len 1024 --random-range-ratio 1 --num-prompts 30 --max-concurrency 5 --warmup-requests 5

main (baseline):

Total input text tokens:                 122880    
Total input vision tokens:               0         
Total generated tokens:                  30720     
Total generated tokens (retokenized):    30329     
Request throughput (req/s):              0.40      
Input token throughput (tok/s):          1631.05   
Output token throughput (tok/s):         407.76    
Total token throughput (tok/s):          2038.81   
Concurrency:                             5.00      
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   12552.80  
Median E2E Latency (ms):                 12550.68  
---------------Time to First Token----------------
Mean TTFT (ms):                          499.50    
Median TTFT (ms):                        410.12    
P99 TTFT (ms):                           1049.02   
---------------Inter-Token Latency----------------
Mean ITL (ms):                           11.79     
Median ITL (ms):                         11.60     
P95 ITL (ms):                            12.27     
P99 ITL (ms):                            12.59     
Max ITL (ms):                            881.40    
==================================================

pr (this branch):

============ Serving Benchmark Result ============
Backend:                                 sglang-oai
Traffic request rate:                    inf       
Max request concurrency:                 5         
Successful requests:                     30        
Benchmark duration (s):                  74.68     
Total input tokens:                      122880    
Total input text tokens:                 122880    
Total input vision tokens:               0         
Total generated tokens:                  30720     
Total generated tokens (retokenized):    30329     
Request throughput (req/s):              0.40      
Input token throughput (tok/s):          1645.50   
Output token throughput (tok/s):         411.38    
Total token throughput (tok/s):          2056.88   
Concurrency:                             5.00      
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   12442.32  
Median E2E Latency (ms):                 12464.80  
---------------Time to First Token----------------
Mean TTFT (ms):                          433.16    
Median TTFT (ms):                        407.16    
P99 TTFT (ms):                           608.47    
---------------Inter-Token Latency----------------
Mean ITL (ms):                           11.74     
Median ITL (ms):                         11.58     
P95 ITL (ms):                            12.23     
P99 ITL (ms):                            12.57     
Max ITL (ms):                            445.18    
==================================================

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @BBuf, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the fused_moe_triton kernel tuning and benchmarking tools. The primary goal is to improve the modularity and extensibility of the existing scripts by centralizing common functionalities. This refactoring also introduces crucial support for Expert Parallelism (EP) and Multi-modal Large Language Models (MLLM), broadening the applicability of the tuning tools. The updated documentation provides clear instructions for leveraging these new features, making the tools more user-friendly and robust for various MoE model architectures and parallelism strategies.

Highlights

  • Refactored Tuning Tools: The MoE kernel tuning and benchmark scripts have been refactored to reduce code duplication and improve maintainability by extracting shared utilities.
  • Expert Parallelism (EP) Support: Comprehensive support for Expert Parallelism has been added to the tuning scripts, allowing experts to be distributed across GPUs.
  • MLLM Model Handling: The tuning scripts now support Multi-modal Large Language Models (MLLM) with text encoders, broadening their applicability.
  • Updated Documentation: The README.md has been significantly updated with detailed usage guidance for EP mode, MLLM tuning, and separate-kernel tuning workflows.
  • Centralized Utilities: A new common_utils.py file has been introduced to house shared functions such as get_model_config and validate_ep_tp_mode, plus configuration-saving helpers, streamlining the codebase (an illustrative usage sketch follows below).
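
A rough, hypothetical sketch of what such shared helpers could look like; only the names get_model_config and validate_ep_tp_mode come from this PR, while the signatures, bodies, and config-attribute choices below are assumptions for illustration:

from transformers import AutoConfig


def validate_ep_tp_mode(ep_size: int, tp_size: int) -> None:
    # Reject parallelism settings the tuner cannot handle (illustrative rule only).
    if ep_size < 1 or tp_size < 1:
        raise ValueError("ep_size and tp_size must be >= 1")


def get_model_config(model: str, tp_size: int = 1, ep_size: int = 1) -> dict:
    # Derive the MoE shapes a tuning run needs from the Hugging Face config.
    cfg = AutoConfig.from_pretrained(model)
    # MLLM checkpoints typically nest the language model under text_config, so
    # falling back to it lets multimodal and text-only models share one code path.
    text_cfg = getattr(cfg, "text_config", cfg)
    num_experts = getattr(text_cfg, "num_local_experts", None) or getattr(
        text_cfg, "n_routed_experts", None
    )
    return {
        # In EP mode each rank only tunes its local shard of experts.
        "num_experts": num_experts // ep_size if num_experts else None,
        "hidden_size": text_cfg.hidden_size,
    }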

@gemini-code-assist bot left a comment

Code Review

This pull request does a great job of refactoring the fused_moe_triton tuning and benchmarking tools by extracting shared logic into a common_utils.py file. This significantly improves maintainability and reduces code duplication. The addition of Expert Parallelism (EP) support and the comprehensive README updates are also valuable contributions. My review identifies a couple of critical issues in benchmark_vllm_vs_sglang_fused_moe_triton.py that would prevent it from running correctly: an incorrect relative import and a bug in the parallel environment initialization for EP mode. I have also included a few medium-severity suggestions to improve code formatting and documentation clarity. Overall, this is a solid refactoring effort.
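
For context, a purely illustrative example (not taken from the PR diff) of why a relative import breaks direct execution, which also relates to the "Import and typing fixes for direct execution" checklist item: a statement such as

    from .common_utils import get_model_config

raises "ImportError: attempted relative import with no known parent package" when the script is run directly as a file (python tuning_fused_moe_triton.py ...), whereas the absolute form

    from common_utils import get_model_config

works as long as common_utils.py sits next to the script on the import path.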

@ch-wan self-assigned this on Oct 31, 2025
@BBuf force-pushed the restruct_fused_moe_tuning_tools branch from 01a6bd0 to 3cf80a6 on November 4, 2025 at 03:13
@BBuf force-pushed the restruct_fused_moe_tuning_tools branch from 3cf80a6 to d76f8a3 on November 4, 2025 at 03:20