[WIP] feat: Enable workload tracing and kernel optimization via FlashInfer-Bench #12542
Conversation
- Add FlashInfer-Bench integration module with tracing and kernel substitution
- Add environment variables: FIB_ENABLE_TRACING, FIB_ENABLE_APPLY, FIB_DATASET_PATH
- Add CLI arguments for FlashInfer-Bench configuration
- Integrate with FlashInferAttnBackend for automatic optimization
- Support workload collection and automatic kernel substitution

This enables SGLang to:
1. Collect production workloads for analysis
2. Automatically substitute optimized kernels at runtime
3. Support the "AI improves itself" vision with day-zero kernel optimization

Work in Progress:
- Testing with real models
- Documentation updates
- Performance benchmarking
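As context for the CLI arguments mentioned above, here is a minimal argparse sketch. The flag names simply mirror the FIB_* environment variables listed in the commit message; they are assumptions, not the exact arguments added to server_args.py.

```python
# Hypothetical CLI surface mirroring the FIB_* environment variables;
# the actual flags added to server_args.py may differ.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--enable-fib-tracing", action="store_true",
    help="Collect production workload traces (FIB_ENABLE_TRACING)",
)
parser.add_argument(
    "--enable-fib-apply", action="store_true",
    help="Substitute optimized kernels at runtime (FIB_ENABLE_APPLY)",
)
parser.add_argument(
    "--fib-dataset-path", type=str, default=None,
    help="Where collected workloads are stored (FIB_DATASET_PATH)",
)
```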
Summary of Changes

Hello @dtunai, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request integrates FlashInfer-Bench into SGLang, providing a mechanism for automatic kernel optimization. By tracing production workloads, SGLang can identify common patterns and dynamically substitute highly optimized kernels, aiming to enhance inference performance without requiring manual code adjustments.
Code Review
This pull request introduces an integration with FlashInfer-Bench to enable automatic kernel optimization based on production workloads. The changes include adding new environment variables and CLI arguments for configuration, a new integration module, and hooking this into the FlashInferAttnBackend. The implementation is well-structured, using decorators to conditionally wrap attention kernels, which is a clean approach.
My review includes a couple of suggestions for the new flashinfer_bench_integration.py module to improve consistency with the existing codebase and enhance maintainability. Specifically, I recommend using the project's envs object for environment variable access and simplifying some configuration logic.
As this is a work-in-progress, I'd also like to remind you to add corresponding tests and update the documentation for the new CLI arguments and environment variables before finalizing the PR. Overall, this is a great feature addition.
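To illustrate the suggested envs pattern, here is a small self-contained sketch using a stand-in wrapper class. SGLang's real envs object lives in sglang.srt.environ; its exact API is an assumption here, inferred from a later commit in this PR that shows envs.FIB_*.get().

```python
# Stand-in illustration of the "envs object" pattern the review
# recommends over raw os.environ lookups. SGLang's real implementation
# in sglang.srt.environ may differ.
import os

class EnvBool:
    """Typed wrapper around a boolean environment variable."""

    def __init__(self, name: str, default: bool = False):
        self.name = name
        self.default = default

    def get(self) -> bool:
        raw = os.environ.get(self.name)
        if raw is None:
            return self.default
        return raw.lower() in ("1", "true")

class Envs:
    FIB_ENABLE_TRACING = EnvBool("FIB_ENABLE_TRACING")
    FIB_ENABLE_APPLY = EnvBool("FIB_ENABLE_APPLY")

envs = Envs()

# Call sites read envs.FIB_ENABLE_TRACING.get() instead of
# os.environ.get("FIB_ENABLE_TRACING"), keeping defaults and parsing
# in one place.
print(envs.FIB_ENABLE_TRACING.get())
```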
Code Review
This pull request introduces an integration with FlashInfer-Bench to enable workload tracing and automatic kernel optimization. The changes are well-structured, adding new configuration options through both CLI arguments and environment variables, and encapsulating the core logic in a new flashinfer_bench_integration.py module. The use of try-except for the optional flashinfer-bench dependency is a good practice. I've identified a couple of critical bugs that prevent the tracing feature from working correctly when kernel substitution is disabled, along with some suggestions to improve code consistency and readability.
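A minimal sketch of the optional-dependency guard the review praises; the module-level flag and helper name are hypothetical, not the PR's actual code.

```python
# Guard the optional flashinfer-bench dependency so SGLang still runs
# without it installed.
try:
    import flashinfer_bench  # optional extra, installed separately
    FIB_AVAILABLE = True
except ImportError:
    flashinfer_bench = None
    FIB_AVAILABLE = False

def maybe_init_flashinfer_bench() -> None:
    """Silently no-op when flashinfer-bench is not installed."""
    if not FIB_AVAILABLE:
        return
    # ... enable tracing / kernel substitution here ...
```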
- Fix critical bug: Enable kernel wrapping for tracing-only mode
  * Wrap kernels when either FIB_ENABLE_TRACING or FIB_ENABLE_APPLY is set
  * Previously only wrapped when FIB_ENABLE_APPLY was enabled
- Fix critical bug: Call apply() for both tracing and substitution
  * flashinfer_bench.apply() handles both tracing and kernel substitution
  * Previously only called when apply_enabled was true
- Use envs object instead of os.environ for consistency
  * Replace os.environ.get() with envs.FIB_*.get()
  * Aligns with SGLang codebase conventions
- Simplify TracingConfig initialization logic
  * Use (tracing_config or {}) pattern for cleaner code
  * Remove redundant conditional expressions
These fixes ensure tracing works correctly when only FIB_ENABLE_TRACING=1
is set without FIB_ENABLE_APPLY.
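A minimal sketch of the corrected gating this commit describes, assuming apply() takes no arguments; only its name and role come from the commit message.

```python
# Corrected gating: activate when *either* toggle is set, and call
# flashinfer_bench.apply() in both modes, since (per the commit
# message) apply() handles both tracing and kernel substitution.
def maybe_activate(tracing_enabled: bool, apply_enabled: bool) -> None:
    if not (tracing_enabled or apply_enabled):
        return  # fixed: the old code bailed out whenever apply was off
    import flashinfer_bench

    flashinfer_bench.apply()
```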
Linting fixes:
- Remove unused imports (flashinfer_bench, enable_apply)
- Fix import order (alphabetical)
- Add trailing commas for black formatter
- Add blank lines between functions
- Add newline at end of file
- Fix line length with black formatting

CRITICAL BUG FIX:
- Actually call enable_tracing() and enable_apply() functions!
- Previous implementation only set internal flags but never called the FlashInfer-Bench API functions
- enable_tracing() installs FlashInfer integrations automatically
- enable_apply() activates kernel substitution
- Without these calls, the integration did nothing

This fix ensures the integration will actually work when enabled.
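To make the fix concrete, a sketch of calling the entry points rather than only setting flags; the enable_tracing / enable_apply names come from the commit message, and their signatures are assumptions.

```python
# The bug: internal flags were set, but the FlashInfer-Bench entry
# points were never invoked. The fix is to actually call them.
from flashinfer_bench import enable_apply, enable_tracing

def init_integration(tracing: bool, apply: bool) -> None:
    if tracing:
        enable_tracing()  # installs FlashInfer integrations automatically
    if apply:
        enable_apply()  # activates kernel substitution
```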
Motivation
This PR addresses #12193 by integrating FlashInfer-Bench into SGLang, enabling automatic kernel optimization based on production workloads. FlashInfer-Bench allows SGLang to collect real-world workload patterns and automatically substitute optimized kernels at runtime, improving inference performance without code changes.
Work in Progress
This is a WIP pull request. Current tasks:
Related Links
Modifications
New Files
- python/sglang/srt/layers/flashinfer_bench_integration.py: Core integration module providing workload tracing and kernel substitution capabilities

Modified Files
- python/sglang/srt/environ.py: Added environment variables for FlashInfer-Bench configuration
- python/sglang/srt/server_args.py: Added CLI arguments
- python/sglang/srt/layers/attention/flashinfer_backend.py: Added _init_flashinfer_bench() method to FlashInferAttnBackend for automatic initialization when enabled (see the sketch below)
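For illustration, a heavily simplified sketch of the _init_flashinfer_bench() hook listed above; the class body and the imported helper name are assumptions, not the PR's actual code.

```python
# Simplified sketch of the backend hook: the integration is
# initialized once when the attention backend is constructed.
class FlashInferAttnBackend:
    def __init__(self):
        # ... existing backend setup ...
        self._init_flashinfer_bench()

    def _init_flashinfer_bench(self) -> None:
        """Enable tracing/substitution when the FIB_* toggles are set."""
        try:
            from sglang.srt.layers.flashinfer_bench_integration import (
                maybe_init_flashinfer_bench,  # hypothetical helper name
            )
        except ImportError:
            return  # integration module or flashinfer-bench unavailable
        maybe_init_flashinfer_bench()
```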
Key Features

Accuracy Tests
Benchmarking and Profiling
Checklist