[Refactor] Refactor fused_moe_triton tuning tools: extract shared utils, add EP/MLLM support, update README #12440
base: main
Conversation
Summary of Changes

Hello @BBuf, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the […]
Code Review
This pull request does a great job of refactoring the fused_moe_triton tuning and benchmarking tools by extracting shared logic into a common_utils.py file. This significantly improves maintainability and reduces code duplication. The addition of Expert Parallelism (EP) support and the comprehensive README updates are also valuable contributions.

My review identifies a couple of critical issues in benchmark_vllm_vs_sglang_fused_moe_triton.py that would prevent it from running correctly: an incorrect relative import and a bug in the parallel environment initialization for EP mode. I have also included a few medium-severity suggestions to improve code formatting and documentation clarity. Overall, this is a solid refactoring effort.
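For context on the first issue, here is a minimal sketch of the kind of relative-import failure the review describes; the helper name make_moe_test_inputs is hypothetical, and the PR's actual lines may differ:

```python
# Hypothetical illustration of the import issue, not the PR's actual code.
# A relative import like this fails when the file is executed directly
# (e.g. `python benchmark_vllm_vs_sglang_fused_moe_triton.py`), because the
# script then runs as a top-level module with no parent package:
#
#   from .common_utils import make_moe_test_inputs
#
# One common fix is to import the sibling module by name after making sure
# its directory is on sys.path:
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent))
from common_utils import make_moe_test_inputs  # hypothetical helper name
```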
Two review threads were opened on benchmark/kernels/fused_moe_triton/benchmark_vllm_vs_sglang_fused_moe_triton.py; both have since been resolved.
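Both threads touch the EP code path. For background, here is a minimal sketch, using plain torch.distributed and assuming a torchrun-style launch, of what parallel-environment initialization for expert parallelism generally involves; it is illustrative only and not the PR's actual code:

```python
import os

import torch
import torch.distributed as dist


def init_ep_environment(ep_size: int) -> int:
    """Initialize the process group and return this process's EP rank.

    Assumes torchrun has set RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT.
    """
    rank = int(os.environ.get("RANK", "0"))
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    if world_size % ep_size != 0:
        raise ValueError(
            f"world_size={world_size} must be divisible by ep_size={ep_size}"
        )
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    # One GPU per process; a common bug is forgetting to pin the device
    # before allocating tensors, which silently lands everything on cuda:0.
    torch.cuda.set_device(rank % torch.cuda.device_count())
    # Experts are sharded across the ranks of each EP group of size ep_size.
    return rank % ep_size
```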
Force-pushed from 01a6bd0 to 3cf80a6

Co-authored-by: xu-yfei <[email protected]>

Force-pushed from 3cf80a6 to d76f8a3
Merge commit: …ect/sglang into restruct_fused_moe_tuning_tools
Summary

Checklist

Fix tuning script CPU overhead:

```bash
python benchmark/kernels/fused_moe_triton/tuning_fused_moe_triton.py \
    --model mistralai/Mixtral-8x7B-Instruct-v0.1 \
    --tune
```

Serving comparison (main vs. this PR):

```bash
python3 -m sglang.launch_server --model mistralai/Mixtral-8x7B-Instruct-v0.1 --tp-size 2
python3 -m sglang.bench_serving --backend sglang-oai --dataset-name random \
    --random-input-len 4096 --random-output-len 1024 --random-range-ratio 1 \
    --num-prompts 30 --max-concurrency 5 --warmup-requests 5
```

main:

```
Total input text tokens: 122880
Total input vision tokens: 0
Total generated tokens: 30720
Total generated tokens (retokenized): 30329
Request throughput (req/s): 0.40
Input token throughput (tok/s): 1631.05
Output token throughput (tok/s): 407.76
Total token throughput (tok/s): 2038.81
Concurrency: 5.00
----------------End-to-End Latency----------------
Mean E2E Latency (ms): 12552.80
Median E2E Latency (ms): 12550.68
---------------Time to First Token----------------
Mean TTFT (ms): 499.50
Median TTFT (ms): 410.12
P99 TTFT (ms): 1049.02
---------------Inter-Token Latency----------------
Mean ITL (ms): 11.79
Median ITL (ms): 11.60
P95 ITL (ms): 12.27
P99 ITL (ms): 12.59
Max ITL (ms): 881.40
==================================================
```

pr:

```
============ Serving Benchmark Result ============
Backend: sglang-oai
Traffic request rate: inf
Max request concurrency: 5
Successful requests: 30
Benchmark duration (s): 74.68
Total input tokens: 122880
Total input text tokens: 122880
Total input vision tokens: 0
Total generated tokens: 30720
Total generated tokens (retokenized): 30329
Request throughput (req/s): 0.40
Input token throughput (tok/s): 1645.50
Output token throughput (tok/s): 411.38
Total token throughput (tok/s): 2056.88
Concurrency: 5.00
----------------End-to-End Latency----------------
Mean E2E Latency (ms): 12442.32
Median E2E Latency (ms): 12464.80
---------------Time to First Token----------------
Mean TTFT (ms): 433.16
Median TTFT (ms): 407.16
P99 TTFT (ms): 608.47
---------------Inter-Token Latency----------------
Mean ITL (ms): 11.74
Median ITL (ms): 11.58
P95 ITL (ms): 12.23
P99 ITL (ms): 12.57
Max ITL (ms): 445.18
==================================================
```
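As a quick sanity check on the numbers above, total token throughput is simply (input tokens + generated tokens) / benchmark duration; a minimal sketch using the pr run's reported figures:

```python
# Recompute the "pr" run's throughput from the raw counts reported above.
total_input_tokens = 122880
total_generated_tokens = 30720
benchmark_duration_s = 74.68  # rounded to two decimals in the report

total_tps = (total_input_tokens + total_generated_tokens) / benchmark_duration_s
output_tps = total_generated_tokens / benchmark_duration_s

print(f"total:  {total_tps:.2f} tok/s")   # ~2056.8 vs. reported 2056.88
print(f"output: {output_tps:.2f} tok/s")  # ~411.4 vs. reported 411.38
```

The small residuals come from the duration being rounded in the report; the recomputed values agree with the table to within rounding error.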