
Conversation

@dtunai commented Nov 3, 2025

Motivation

This PR addresses #12193 by integrating FlashInfer-Bench into SGLang, enabling automatic kernel optimization based on production workloads. FlashInfer-Bench lets SGLang collect real-world workload patterns and automatically substitute optimized kernels at runtime, improving inference performance without code changes.

Work in Progress

This is a WIP pull request. Current tasks:

  • Add tests
  • Complete testing with production models
  • Update documentation with usage examples
  • Provide benchmark results showing performance improvements

Modifications

New Files

  • python/sglang/srt/layers/flashinfer_bench_integration.py: Core integration module providing workload tracing and kernel substitution capabilities (a sketch of its optional-dependency guard follows below)
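
For context, here is a minimal sketch of how the optional dependency might be guarded in this module, based on the review note further down about the try-except import. The helper name maybe_wrap_kernel and the module layout are assumptions, not the PR's actual code:

```python
from functools import wraps

# FlashInfer-Bench is an optional dependency: the integration must
# degrade to a no-op when the package is not installed.
try:
    from flashinfer_bench import enable_apply, enable_tracing  # noqa: F401

    FIB_AVAILABLE = True
except ImportError:
    FIB_AVAILABLE = False


def maybe_wrap_kernel(fn):
    """Conditionally wrap an attention kernel (hypothetical helper).

    When FlashInfer-Bench is unavailable, the original callable is
    returned untouched, so the disabled path adds no overhead.
    """
    if not FIB_AVAILABLE:
        return fn

    @wraps(fn)
    def wrapper(*args, **kwargs):
        # Tracing / substitution hooks would run around this call.
        return fn(*args, **kwargs)

    return wrapper
```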

Modified Files

  • python/sglang/srt/environ.py: Added environment variables for FlashInfer-Bench configuration
    • FIB_ENABLE_TRACING: Enable workload collection
    • FIB_ENABLE_APPLY: Enable kernel substitution
    • FIB_DATASET_PATH: Path for trace storage
  • python/sglang/srt/server_args.py: Added CLI arguments
    • --enable-flashinfer-bench-tracing
    • --enable-flashinfer-bench-apply
    • --flashinfer-bench-dataset-path
  • python/sglang/srt/layers/attention/flashinfer_backend.py: Added _init_flashinfer_bench() method to FlashInferAttnBackend for automatic initialization when enabled (see the sketch after this list)
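
As a rough illustration of how initialization might be gated on the new settings, here is a hedged sketch following the envs.FIB_*.get() convention adopted later in this PR. The envs import path and the setup_flashinfer_bench helper are assumptions, not the actual method body:

```python
def _init_flashinfer_bench(self):
    # Read the new settings through SGLang's envs object rather than
    # os.environ, per the codebase convention.
    from sglang.srt.environ import envs  # assumed import path

    tracing_enabled = envs.FIB_ENABLE_TRACING.get()
    apply_enabled = envs.FIB_ENABLE_APPLY.get()
    if not (tracing_enabled or apply_enabled):
        return  # fully disabled: leave the backend untouched

    # Delegate to the integration module (hypothetical helper name).
    from sglang.srt.layers.flashinfer_bench_integration import (
        setup_flashinfer_bench,
    )

    setup_flashinfer_bench(
        tracing=tracing_enabled,
        apply=apply_enabled,
        dataset_path=envs.FIB_DATASET_PATH.get(),
    )
```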

Key Features

  • Automatic workload tracing during inference
  • Dynamic kernel substitution with a fallback mechanism (see the sketch after this list)
  • Non-invasive design that doesn't affect performance when disabled
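
A minimal, self-contained sketch of what such a fallback could look like, assuming substitution failures should revert to the original kernel (the PR does not spell out the exact mechanism):

```python
import logging
from functools import wraps

logger = logging.getLogger(__name__)


def with_fallback(original_kernel, optimized_kernel):
    """Try the optimized kernel first; fall back to the original if it raises."""

    @wraps(original_kernel)
    def wrapper(*args, **kwargs):
        try:
            return optimized_kernel(*args, **kwargs)
        except Exception:
            logger.warning("Optimized kernel failed; using fallback", exc_info=True)
            return original_kernel(*args, **kwargs)

    return wrapper


# Usage: wrapped = with_fallback(default_attention, tuned_attention)
```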

Accuracy Tests

  • Accuracy tests are WIP across various models (Llama, Qwen, etc.) to verify that kernel substitution preserves output accuracy

Benchmarking and Profiling

  • Performance benchmarks are WIP across various models (Llama, Qwen, etc.)

dtunai and others added 2 commits November 3, 2025 11:24
- Add FlashInfer-Bench integration module with tracing and kernel substitution
- Add environment variables: FIB_ENABLE_TRACING, FIB_ENABLE_APPLY, FIB_DATASET_PATH
- Add CLI arguments for FlashInfer-Bench configuration
- Integrate with FlashInferAttnBackend for automatic optimization
- Support workload collection and automatic kernel substitution

This enables SGLang to:
1. Collect production workloads for analysis
2. Automatically substitute optimized kernels at runtime
3. Support the 'AI improves itself' vision with day-zero kernel optimization

Work in Progress:
- Testing with real models
- Documentation updates
- Performance benchmarking

@gemini-code-assist bot commented

Summary of Changes

Hello @dtunai, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates FlashInfer-Bench into SGLang, providing a mechanism for automatic kernel optimization. By tracing production workloads, SGLang can identify common patterns and dynamically substitute highly optimized kernels, aiming to enhance inference performance without requiring manual code adjustments.

Highlights

  • FlashInfer-Bench Integration: Introduces FlashInfer-Bench into SGLang to enable automatic kernel optimization based on production workloads.
  • Workload Tracing: Enables the collection of real-world workload patterns, allowing SGLang to adapt and optimize kernels dynamically.
  • Dynamic Kernel Substitution: Facilitates the automatic replacement of existing kernels with optimized versions at runtime, aiming to improve inference performance.
  • Non-Invasive Design: Ensures that the integration does not affect performance when FlashInfer-Bench is disabled, maintaining system efficiency.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces an integration with FlashInfer-Bench to enable automatic kernel optimization based on production workloads. The changes include adding new environment variables and CLI arguments for configuration, a new integration module, and hooking this into the FlashInferAttnBackend. The implementation is well-structured, using decorators to conditionally wrap attention kernels, which is a clean approach.

My review includes a couple of suggestions for the new flashinfer_bench_integration.py module to improve consistency with the existing codebase and enhance maintainability. Specifically, I recommend using the project's envs object for environment variable access and simplifying some configuration logic.

As this is a work-in-progress, I'd also like to remind you to add corresponding tests and update the documentation for the new CLI arguments and environment variables before finalizing the PR. Overall, this is a great feature addition.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces an integration with FlashInfer-Bench to enable workload tracing and automatic kernel optimization. The changes are well-structured, adding new configuration options through both CLI arguments and environment variables, and encapsulating the core logic in a new flashinfer_bench_integration.py module. The use of try-except for the optional flashinfer-bench dependency is a good practice. I've identified a couple of critical bugs that prevent the tracing feature from working correctly when kernel substitution is disabled, along with some suggestions to improve code consistency and readability.

dtunai and others added 5 commits November 3, 2025 20:10
- Fix critical bug: Enable kernel wrapping for tracing-only mode
  * Wrap kernels when either FIB_ENABLE_TRACING or FIB_ENABLE_APPLY is set
  * Previously only wrapped when FIB_ENABLE_APPLY was enabled

- Fix critical bug: Call apply() for both tracing and substitution
  * flashinfer_bench.apply() handles both tracing and kernel substitution
  * Previously only called when apply_enabled was true

- Use envs object instead of os.environ for consistency
  * Replace os.environ.get() with envs.FIB_*.get()
  * Aligns with SGLang codebase conventions

- Simplify TracingConfig initialization logic
  * Use (tracing_config or {}) pattern for cleaner code
  * Remove redundant conditional expressions

These fixes ensure tracing works correctly when only FIB_ENABLE_TRACING=1
is set without FIB_ENABLE_APPLY.
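
To make the corrected control flow from the commit above concrete, here is a condensed sketch; the surrounding function, the TracingConfig import path, and how the config is consumed downstream are assumptions:

```python
import flashinfer_bench
from flashinfer_bench import TracingConfig  # exact import path assumed


def maybe_enable_fib(tracing_enabled, apply_enabled, tracing_config=None):
    # Wrap when *either* flag is set; the earlier code checked only
    # apply_enabled, which silently broke tracing-only mode.
    if not (tracing_enabled or apply_enabled):
        return None

    # Simplified init: (tracing_config or {}) tolerates a None config.
    config = TracingConfig(**(tracing_config or {}))

    # apply() handles both tracing and kernel substitution, so it runs
    # for either mode (previously only when apply_enabled was true).
    flashinfer_bench.apply()
    return config  # consumption of the config is elided in this sketch
```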
Linting fixes:
- Remove unused imports (flashinfer_bench, enable_apply)
- Fix import order (alphabetical)
- Add trailing commas for black formatter
- Add blank lines between functions
- Add newline at end of file
- Fix line length with black formatting

CRITICAL BUG FIX:
- Actually call enable_tracing() and enable_apply() functions!
- Previous implementation only set internal flags but never called
  the FlashInfer-Bench API functions
- enable_tracing() installs FlashInfer integrations automatically
- enable_apply() activates kernel substitution
- Without these calls, the integration did nothing

This fix ensures the integration will actually work when enabled.
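
In code terms, the fix described above amounts to something like the following; the wrapper function is illustrative, while enable_tracing() and enable_apply() are the FlashInfer-Bench API functions named in the commit:

```python
from flashinfer_bench import enable_apply, enable_tracing


def activate_fib(tracing: bool, apply_kernels: bool) -> None:
    # Previously these flags were only stored; the API functions below
    # were never invoked, so the integration had no effect.
    if tracing:
        enable_tracing()  # installs FlashInfer integrations automatically
    if apply_kernels:
        enable_apply()  # activates kernel substitution
```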