
Fix flashinfer autotune to only wrap run_once() #19004

Open
ch-wan wants to merge 1 commit into main from cheng/fix/flashinfer-autotune

Conversation


@ch-wan (Collaborator) commented on Feb 19, 2026

Motivation

The _flashinfer_autotune method previously wrapped the entire _dummy_run call inside the autotune() context. This meant that buffer creation, ForwardBatch construction, and attn_backend.init_forward_metadata() (which calls flashinfer's .plan() / .begin_forward() metadata operations) were all unnecessarily wrapped by the autotuner.

Only the actual attention kernel execution inside run_once(), i.e. self.model.forward(...), should be autotuned.
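
For orientation, below is a minimal, self-contained sketch of the previous (overly broad) scoping. Every name in it (autotune, allocate_dummy_buffers, plan_metadata, run_once) is a hypothetical stand-in for the corresponding flashinfer or SGLang step, not the actual code:

    from contextlib import contextmanager

    @contextmanager
    def autotune():
        # Hypothetical stand-in for flashinfer's autotune() context manager.
        print("autotune: enter")
        try:
            yield
        finally:
            print("autotune: exit")

    def allocate_dummy_buffers():
        # Stand-in for dummy input/buffer creation inside _dummy_run.
        return {"input_ids": [0] * 8}

    def plan_metadata(buffers):
        # Stand-in for attn_backend.init_forward_metadata(), i.e. the
        # .plan()/.begin_forward() metadata operations.
        return {"num_tokens": len(buffers["input_ids"])}

    def run_once(buffers, metadata):
        # Stand-in for the actual kernel execution (self.model.forward(...)).
        return sum(buffers["input_ids"]) + metadata["num_tokens"]

    # Previous behavior: buffer creation and metadata planning also ran
    # under the autotuner, even though only run_once() needs tuning.
    with autotune():
        buffers = allocate_dummy_buffers()
        metadata = plan_metadata(buffers)
        run_once(buffers, metadata)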

Modifications

  • Added a run_ctx parameter to _dummy_run() that accepts an optional context manager (see the sketch after this list).
  • The autotune() context is now applied only around run_once(), not the entire _dummy_run.
  • Uses the existing empty_context() utility as the no-op fallback when run_ctx is not provided.
  • No changes to other callers of _dummy_run (warmup/profiling paths are unaffected).
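
A minimal sketch of the new plumbing, reusing the stand-ins defined in the Motivation sketch above. nullcontext() plays the role of sglang's empty_context() no-op fallback; the real _dummy_run in model_runner.py takes more parameters than shown here:

    from contextlib import nullcontext

    def dummy_run(batch_size, run_ctx=None):
        # Setup and metadata planning stay outside the tuning context.
        buffers = allocate_dummy_buffers()
        metadata = plan_metadata(buffers)
        # Only the kernel execution is wrapped by run_ctx; when no context
        # is supplied, a no-op context is used instead.
        with run_ctx if run_ctx is not None else nullcontext():
            return run_once(buffers, metadata)

    # Autotuning path: only run_once() executes under autotune().
    dummy_run(batch_size=8, run_ctx=autotune())

    # Warmup/profiling callers pass no run_ctx and are unaffected.
    dummy_run(batch_size=8)

The batch_size argument is unused in this sketch; it is kept only to mirror the real call site.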

Accuracy Tests

N/A — no change to model outputs; this only narrows the scope of the autotune context.

Benchmarking and Profiling

N/A — no expected performance change; the same kernels are autotuned, just with a more precise context boundary.

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist bot commented:

Summary of Changes

Hello @ch-wan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the FlashInfer autotuning mechanism by ensuring that the autotuning context is applied with greater precision. Previously, the autotuner wrapped setup operations like buffer creation and metadata initialization, which are not relevant to kernel performance. The changes now isolate the autotuning to only the actual kernel execution, improving the logical correctness of the autotuning process without impacting performance or model outputs.

Highlights

  • Autotune Scope Narrowed: The _flashinfer_autotune method now precisely wraps only the run_once() kernel execution within _dummy_run, rather than the entire dummy run process, avoiding unnecessary wrapping of setup operations.
  • Context Manager Integration: An optional run_ctx parameter was introduced to _dummy_run to allow external context managers, such as autotune(), to be applied to specific parts of its execution.
  • No-op Context Fallback: The empty_context() utility is now utilized as a default no-op context when run_ctx is not explicitly provided, ensuring existing callers of _dummy_run remain unaffected.


Changelog
  • python/sglang/srt/model_executor/model_runner.py
    • Imported empty_context from sglang.srt.utils.
    • Modified _flashinfer_autotune to pass the autotune() context as a run_ctx argument to _dummy_run.
    • Updated the _dummy_run method signature to accept an optional run_ctx parameter.
    • Wrapped the run_once() call within _dummy_run with the provided run_ctx or empty_context() if run_ctx is None.
Activity
  • The author has formatted the code according to pre-commit hooks.
  • The author has ensured the code adheres to SGLang code style guidance.

@gemini-code-assist bot left a comment:

Code Review

This pull request refactors the FlashInfer autotuning logic to more precisely wrap only the kernel execution (run_once) instead of the entire _dummy_run method. This is a good improvement for correctness and to avoid tuning unnecessary setup operations. However, the refactoring has inadvertently removed the torch.inference_mode() context, which could lead to performance issues and increased memory usage during autotuning. I've added a comment to restore it.

Comment on lines +1862 to +1865:

    self._dummy_run(
        batch_size=self.req_to_token_pool.size,
        run_ctx=autotune(),
    )
Severity: high

The torch.inference_mode() context manager was removed in this refactoring. This means that autotuning will now run with gradient tracking enabled, which can be slower and consume more memory. It should be added back to ensure autotuning is performed in inference mode.

Suggested change:

Before:

    self._dummy_run(
        batch_size=self.req_to_token_pool.size,
        run_ctx=autotune(),
    )

After:

    with torch.inference_mode():
        self._dummy_run(
            batch_size=self.req_to_token_pool.size,
            run_ctx=autotune(),
        )
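
If torch.inference_mode() was indeed dropped, an alternative to wrapping the whole call (as suggested above) is to fold inference mode into the run_ctx itself, so that only run_once() executes under both contexts while setup stays outside the autotuner. A hedged sketch reusing the stand-ins from the earlier sketches; inference_mode_autotune is a hypothetical helper, not part of this PR:

    import contextlib

    import torch

    @contextlib.contextmanager
    def inference_mode_autotune(tune_ctx):
        # Compose torch.inference_mode() with a tuning context so the
        # wrapped region runs under both.
        with torch.inference_mode(), tune_ctx:
            yield

    # Mirrors the call site in the diff above:
    dummy_run(
        batch_size=8,
        run_ctx=inference_mode_autotune(autotune()),
    )

Note the tradeoff: the reviewer's suggestion also covers buffer creation and metadata planning with inference mode (matching the previous behavior), whereas this variant only covers the kernel execution.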

@ch-wan ch-wan force-pushed the cheng/fix/flashinfer-autotune branch from fd291a0 to 75228b5 Compare February 19, 2026 09:05
@ch-wan ch-wan force-pushed the cheng/fix/flashinfer-autotune branch from 75228b5 to 26d2380 Compare February 19, 2026 09:06
