
[Diffusion] Refactor diffusion triton kernels #18966

Merged
mickqian merged 3 commits into main from refactor_diffusion_triton_kernels on Feb 19, 2026

Conversation

@BBuf (Collaborator) commented Feb 18, 2026

Motivation

Refactor the diffusion Triton kernels to make them clearer and easier to use.

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@BBuf (Collaborator, Author) commented Feb 18, 2026

/tag-and-rerun-ci

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @BBuf, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the organization and maintainability of diffusion Triton kernels by breaking down a monolithic file into several focused modules. This restructuring makes it easier to locate, understand, and manage specific kernel implementations, including their NPU-specific fallbacks. The change primarily impacts the internal structure of the kernel definitions and their import locations, without altering the external API or behavior of the diffusion models.

Highlights

  • Triton Kernel Refactoring: The diffusion-related Triton kernels have been refactored and moved from a single triton_ops.py file into several specialized files for better organization and clarity.
  • Modularization of Kernels: Specific kernels for normalization, rotary embedding, and fused scale/shift operations are now housed in their own dedicated modules within sglang/jit_kernel/diffusion/triton.
  • NPU Fallback Centralization: Native NPU fallback implementations for certain operations have been consolidated into a new npu_fallback.py file and are conditionally imported by the respective Triton kernel modules.
  • Import Path Updates: Existing files that utilized these Triton kernels have been updated to reflect the new modularized import paths, ensuring continuity of functionality.
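
As a concrete illustration of the import-path updates, callers that used to import from the monolithic triton_ops.py now pull each kernel family from its own module. The statements below are inferred from the file paths listed in the changelog (assuming the usual mapping of python/sglang/... file paths to sglang.... module paths), not copied from the diff:

```python
# Before: everything was imported from one monolithic module.
# from sglang.multimodal_gen.runtime.layers.triton_ops import (
#     fuse_scale_shift_kernel,
#     apply_rotary_embedding,
# )

# After: each kernel family lives in a dedicated module.
from sglang.jit_kernel.diffusion.triton.norm import rms_norm_fn
from sglang.jit_kernel.diffusion.triton.rmsnorm_onepass import triton_one_pass_rms_norm
from sglang.jit_kernel.diffusion.triton.rotary import apply_rotary_embedding
from sglang.jit_kernel.diffusion.triton.scale_shift import fuse_scale_shift_kernel
```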

Changelog
  • python/sglang/jit_kernel/diffusion/triton/norm.py
    • Created as a new file, retaining maybe_contiguous_lastdim and rms_norm_fn related code from the original triton_ops.py.
  • python/sglang/jit_kernel/diffusion/triton/npu_fallback.py
    • Added new file containing native fallback implementations for fuse_scale_shift_native and apply_rotary_embedding_native for NPU platforms (see the sketch after this changelog).
  • python/sglang/jit_kernel/diffusion/triton/rmsnorm_onepass.py
    • Added new file containing the _rms_norm_tiled_onepass kernel and triton_one_pass_rms_norm function.
  • python/sglang/jit_kernel/diffusion/triton/rotary.py
    • Added new file containing the _rotary_embedding_kernel and apply_rotary_embedding function, with conditional NPU fallback.
  • python/sglang/jit_kernel/diffusion/triton/scale_shift.py
    • Added new file containing _fused_scale_shift_4d_kernel, fuse_scale_shift_kernel_blc_opt, fuse_scale_shift_gate_select01_kernel_blc_opt, fuse_scale_shift_kernel, and fuse_scale_shift_gate_select01_kernel, with conditional NPU fallback.
  • python/sglang/multimodal_gen/runtime/layers/elementwise.py
    • Updated import path for fuse_scale_shift_kernel to the new modular location.
  • python/sglang/multimodal_gen/runtime/layers/layernorm.py
    • Updated import paths for norm_infer, rms_norm_fn, triton_one_pass_rms_norm, and fuse_scale_shift_kernel to their new modular locations.
  • python/sglang/multimodal_gen/runtime/layers/rotary_embedding.py
    • Updated import path for apply_rotary_embedding to the new modular location.
  • python/sglang/multimodal_gen/runtime/layers/triton_ops.py
    • Renamed to python/sglang/jit_kernel/diffusion/triton/norm.py.
    • Removed _fused_scale_shift_4d_kernel, fuse_scale_shift_kernel_blc_opt, fuse_scale_shift_gate_select01_kernel_blc_opt, fuse_scale_shift_kernel, fuse_scale_shift_gate_select01_kernel, _rotary_embedding_kernel, apply_rotary_embedding, _rms_norm_tiled_onepass, triton_one_pass_rms_norm, and NPU fallback logic.
  • python/sglang/multimodal_gen/runtime/models/dits/qwen_image.py
    • Updated import path for fuse_scale_shift_gate_select01_kernel to the new modular location.
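
For intuition, here is a minimal sketch of what the two native fallbacks in npu_fallback.py might look like. The function names come from the changelog above, but the signatures, the rotate-half rotary convention, and the x * (1 + scale) + shift modulation formula are assumptions for illustration, not code from the PR:

```python
import torch


def fuse_scale_shift_native(
    x: torch.Tensor, scale: torch.Tensor, shift: torch.Tensor
) -> torch.Tensor:
    # Eager stand-in for the fused Triton scale/shift kernel; assumes the
    # common AdaLN-style modulation used in DiT blocks (formula is a guess).
    return x * (1.0 + scale) + shift


def apply_rotary_embedding_native(
    x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor
) -> torch.Tensor:
    # Eager rotary embedding using the rotate-half convention (also a guess).
    # x: [..., head_dim] with even head_dim; cos/sin: [..., head_dim // 2].
    x1, x2 = x.chunk(2, dim=-1)
    rotated = torch.cat((-x2, x1), dim=-1)
    cos2 = torch.cat((cos, cos), dim=-1)
    sin2 = torch.cat((sin, sin), dim=-1)
    return x * cos2 + rotated * sin2
```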

@gemini-code-assist (Contributor) bot left a comment

Code Review

This pull request refactors the Triton kernels for diffusion models by splitting the large triton_ops.py file into several smaller, more focused modules under the new sglang/jit_kernel/diffusion/triton/ directory. This is a great improvement for code organization and maintainability. The changes in other files correctly update the import paths to use these new modules.

My main feedback is about the new dependency created from the jit_kernel package to multimodal_gen for platform detection. I've left a couple of comments with suggestions on how to improve this. Overall, this is a solid refactoring.
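
One way to resolve that dependency direction (a hedged sketch of the general idea, not necessarily the reviewer's exact suggestion) is to keep the platform probe local to the kernel package, so sglang.jit_kernel never has to import from multimodal_gen:

```python
import importlib.util
from functools import lru_cache


@lru_cache(maxsize=1)
def _triton_available() -> bool:
    # Self-contained probe: if Triton cannot be imported (e.g. on an
    # NPU-only build), callers should dispatch to the native fallbacks.
    return importlib.util.find_spec("triton") is not None
```

With a check like this, each kernel module can choose between the Triton kernel and its npu_fallback counterpart without reaching into higher-level packages.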

@BBuf (Collaborator, Author) commented Feb 18, 2026

/rerun-failed-ci

@ping1jing2 (Collaborator) commented

/rerun-failed-ci

@mickqian merged commit 19aa19b into main on Feb 19, 2026
151 of 166 checks passed
@mickqian deleted the refactor_diffusion_triton_kernels branch on February 19, 2026 at 09:03