
[jit_kernel] Add JIT ngram_utils kernel #19085

Open
Johnsonms wants to merge 4 commits into sgl-project:main from Johnsonms:ngram-utils-jit

Conversation

@Johnsonms (Contributor) commented Feb 20, 2026

Motivation

#17865
The reconstruct_indices_from_tree_mask kernel in sgl-kernel reconstructs
tree traversal indices (positions, retrive_index, retrive_next_token,
retrive_next_sibling) from a flat boolean tree attention mask. Porting it
to the JIT kernel framework makes it available without requiring a
pre-compiled sgl-kernel package, improving portability and development
iteration speed.
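To make the kernel's contract concrete, here is a minimal pure-Python reference of the same reconstruction for a single batch element. The ancestor-inclusive mask convention and the deepest-ancestor parent rule are assumptions for illustration only; the authoritative semantics are in the `reconstructIndicesFromTreeMask` CUDA kernel.

```python
def reconstruct_indices_reference(tree_mask):
    """Pure-Python sketch of index reconstruction for one batch element.

    Assumes tree_mask[i][j] is True iff node j is an ancestor of node i
    (self-inclusive) -- a common tree-attention convention, assumed here.
    Returns (positions, next_token, next_sibling) with -1 meaning "none",
    mirroring retrive_next_token / retrive_next_sibling.
    """
    n = len(tree_mask)
    positions = [sum(row) - 1 for row in tree_mask]  # depth = #ancestors
    next_token = [-1] * n     # first-child link per node
    next_sibling = [-1] * n   # next-sibling link per node
    for i in range(1, n):
        strict = [j for j in range(n) if tree_mask[i][j] and j != i]
        parent = max(strict, key=lambda j: positions[j])  # deepest ancestor
        if next_token[parent] == -1:
            next_token[parent] = i            # i is the first child
        else:
            s = next_token[parent]            # append i to the sibling chain
            while next_sibling[s] != -1:
                s = next_sibling[s]
            next_sibling[s] = i
    return positions, next_token, next_sibling
```

For a linear chain (lower-triangular all-True mask) this yields positions 0..n-1, next_token[i] = i+1 with -1 at the tail, and no sibling links, matching the known-answer linear-chain case exercised in test_ngram_utils.py.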

Modifications

  • python/sglang/jit_kernel/csrc/speculative/ngram_utils.cuh: JIT port of
    sgl-kernel/csrc/speculative/ngram_utils.cu. Replaces PyTorch ATen tensor
    types with tvm::ffi::TensorView, at::cuda::getCurrentCUDAStream() with
    LaunchKernel::resolve_device(), and adds comprehensive input validation via
    RuntimeCheck. The CUDA device kernel reconstructIndicesFromTreeMask is
    unchanged.
  • python/sglang/jit_kernel/ngram_utils.py: Python wrapper for
    reconstruct_indices_from_tree_mask using @register_custom_op, following the
    same pattern as other JIT kernels.
  • python/sglang/jit_kernel/tests/test_ngram_utils.py: Tests covering smoke
    runs across batch sizes and tree sizes, known-answer tests for a
    linear-chain tree (positions, next_token links, sibling links), and JIT vs
    AOT cross-validation (15 tests, all passing).
  • python/sglang/jit_kernel/benchmark/bench_ngram_utils.py:
    triton.testing.perf_report benchmark comparing JIT vs AOT across typical
    batch sizes and draft token counts, with a correctness diff section.
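The kind of shape invariants that RuntimeCheck enforces can be pictured with a small Python stand-in. Everything below (the argument names and the per-batch flat-mask layout) is an assumption for illustration, not lifted from the C++ source:

```python
def check_reconstruct_inputs(bs, num_draft_tokens, tree_mask_numel,
                             positions_numel, retrive_index_shape):
    """Illustrative shape checks; the real validation is RuntimeCheck in C++.

    Assumed layout: the flat boolean mask concatenates one
    (num_draft_tokens x num_draft_tokens) block per batch element.
    """
    if tree_mask_numel != bs * num_draft_tokens ** 2:
        raise ValueError("tree_mask must have bs * num_draft_tokens^2 elements")
    if positions_numel != bs * num_draft_tokens:
        raise ValueError("positions must have bs * num_draft_tokens elements")
    if tuple(retrive_index_shape) != (bs, num_draft_tokens):
        raise ValueError("retrive_index must be (bs, num_draft_tokens)")
```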

Accuracy Tests

python -m pytest python/sglang/jit_kernel/tests/test_ngram_utils.py

[screenshot: pytest results, 15 tests passing]

Benchmarking and Profiling

python python/sglang/jit_kernel/benchmark/bench_ngram_utils.py
[screenshot: benchmark results]
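As a rough stand-in for the triton.testing.perf_report harness the script uses, a self-contained timing helper might look like this (illustrative only; the triton.testing utilities additionally handle device synchronization and tabular/plot output):

```python
import time

def bench_ms(fn, *args, warmup=3, iters=20):
    """Average wall-clock milliseconds per call of fn(*args).

    A minimal CPU-side sketch, not the benchmark script's actual
    implementation.
    """
    for _ in range(warmup):
        fn(*args)             # warm caches / JIT compilation paths
    t0 = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - t0) / iters * 1e3
```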

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist (bot) commented:
Summary of Changes

Hello @Johnsonms, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a Just-In-Time (JIT) compiled version of the reconstruct_indices_from_tree_mask kernel. This change aims to improve the system's portability by removing the dependency on pre-compiled sgl-kernel packages and to speed up development cycles. The JIT kernel includes robust input validation and is integrated with a Python wrapper, supported by extensive testing and performance benchmarks comparing it against the existing Ahead-Of-Time (AOT) compiled version.

Highlights

  • JIT Kernel Porting: The reconstruct_indices_from_tree_mask kernel has been ported to the JIT kernel framework, enhancing portability and accelerating development iteration by removing dependency on pre-compiled sgl-kernel packages.
  • C++ Kernel Adaptation: The C++ CUDA kernel (ngram_utils.cuh) was adapted for JIT, replacing PyTorch ATen types with tvm::ffi::TensorView and integrating LaunchKernel::resolve_device() for stream resolution, while the core CUDA logic remains unchanged.
  • Robust Input Validation: Comprehensive input validation using RuntimeCheck has been added to the JIT C++ kernel to ensure data integrity and correct usage.
  • Python Integration: A new Python wrapper (ngram_utils.py) was created using @register_custom_op to expose the JIT kernel, making it accessible from Python.
  • Comprehensive Testing & Benchmarking: New unit tests (test_ngram_utils.py) cover smoke tests, known-answer scenarios, and JIT vs AOT cross-validation. A benchmark script (bench_ngram_utils.py) was added to compare JIT and AOT performance across typical batch and draft token sizes.


Changelog
  • python/sglang/jit_kernel/benchmark/bench_ngram_utils.py
    • Added a new benchmark script to compare the performance of the JIT and AOT versions of the reconstruct_indices_from_tree_mask kernel.
  • python/sglang/jit_kernel/csrc/speculative/ngram_utils.cuh
    • Added the C++ CUDA kernel for reconstruct_indices_from_tree_mask, adapted for JIT compilation with tvm::ffi::TensorView and LaunchKernel::resolve_device().
    • Integrated comprehensive input validation using RuntimeCheck for all tensor arguments.
  • python/sglang/jit_kernel/ngram_utils.py
    • Added a Python wrapper for the JIT reconstruct_indices_from_tree_mask kernel, registering it as a custom operation.
  • python/sglang/jit_kernel/tests/test_ngram_utils.py
    • Added a comprehensive suite of unit tests for the JIT reconstruct_indices_from_tree_mask kernel.
    • Included smoke tests across various batch and tree sizes.
    • Implemented known-answer tests for linear chain and branching tree structures.
    • Added cross-validation tests comparing JIT output against the AOT version when available.
Activity
  • The author provided detailed motivation for porting the reconstruct_indices_from_tree_mask kernel to JIT.
  • Detailed modifications were outlined, specifying changes in C++ kernel adaptation, Python wrapper creation, and testing/benchmarking.
  • Accuracy tests were performed using pytest, with a screenshot indicating successful execution across various test cases.
  • Benchmarking results were included, comparing JIT and AOT performance, along with a screenshot of the performance report.
  • A checklist for code formatting, unit tests, documentation, and benchmarks was provided, indicating adherence to contribution guidelines.
  • A clear review process was outlined for maintainers and reviewers.

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a JIT kernel for ngram_utils, improving portability and development iteration speed. The changes include a new CUDA kernel, a Python wrapper, and comprehensive tests and benchmarks. The code appears to be well-structured and follows existing patterns within the project. The addition of correctness checks and performance benchmarks is a good practice.
