
Conversation

@ispobock (Collaborator) commented Jan 1, 2026

Motivation

  • Do SWA page table translation before the forward pass to save kernel launches.
  • Fix page table translation for page size > 1 (translate memory locations instead of page IDs).
  • This PR only handles normal extend and decode; the speculative decoding case will be handled in follow-up PRs.
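The first bullet can be illustrated with a minimal sketch: translate the page table for SWA layers once, before the forward pass, instead of launching a translation kernel per attention layer. All names here (`build_swa_page_table`, `full_to_swa`) are illustrative, not SGLang's actual API.

```python
# Hypothetical sketch: pre-translate full-attention KV cache locations into
# SWA pool locations once, so per-layer forward calls can index directly.
def build_swa_page_table(page_table, full_to_swa):
    """Map full-attention cache locations to SWA pool locations."""
    return [[full_to_swa[loc] for loc in row] for row in page_table]

# Toy example: the SWA pool keeps only a sliding window, so full-pool
# locations are remapped (here, locations 4..7 wrap back onto 0..3).
full_to_swa = {0: 0, 1: 1, 2: 2, 3: 3, 4: 0, 5: 1, 6: 2, 7: 3}
page_table = [[0, 1, 2, 3], [4, 5, 6, 7]]
swa_page_table = build_swa_page_table(page_table, full_to_swa)
print(swa_page_table)  # [[0, 1, 2, 3], [0, 1, 2, 3]]
```

Doing this once at metadata-initialization time amortizes the translation cost across all SWA layers in the forward pass.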

@gemini-code-assist (Contributor)

Summary of Changes

Hello @ispobock, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces significant optimizations to the FlashAttention (FA3) backend, specifically targeting the handling of Sliding Window Attention (SWA) page tables. By adding a dedicated swa_page_table and pre-translating memory locations, the changes aim to reduce kernel launch overhead during forward passes. The update also refines the page table translation logic for larger page sizes and integrates these improvements with CUDA graph operations, leading to more efficient and accurate attention computations within the system.

Highlights

  • Sliding Window Attention (SWA) Page Table Introduction: A new swa_page_table attribute has been added to FlashAttentionMetadata to store pre-translated page table information specifically for Sliding Window Attention, enabling more efficient lookups.
  • Optimized SWA Page Table Translation: The swa_page_table is now pre-translated during the init_forward_metadata and normal_decode_set_metadata phases. This optimization avoids redundant kernel launches during subsequent forward_extend and forward_decode operations, improving performance.
  • Correct Page Table Handling for Large Page Sizes: The logic for page table translation when page_size > 1 has been refined. It now correctly translates memory locations instead of page IDs, ensuring accuracy for larger memory block configurations.
  • CUDA Graph Integration for SWA Page Table: The swa_page_table has been integrated into the CUDA graph capture and replay mechanisms. This ensures that the optimized SWA page table is properly managed and utilized when CUDA graphs are employed for performance acceleration.
  • Refined SWA Pool Usage Condition: The condition for use_sliding_window_kv_pool has been updated to explicitly check if model_runner.token_to_kv_pool.swa_layer_nums > 0. This ensures that SWA-specific logic is only activated when there are actual SWA layers present in the model.
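The page_size > 1 fix in the highlights above can be sketched as follows. The page table stores page IDs, but the full-to-SWA mapping is defined on per-token cache locations, so translating a page ID as if it were a location gives wrong results; the fix is to convert the page ID to its base location, translate that, and recover the SWA page ID. All names and the mapping below are illustrative, not the PR's actual code.

```python
# Hedged sketch of translating memory locations instead of page IDs
# when page_size > 1. PAGE_SIZE and full_to_swa_loc are assumptions
# for the example, not values from the PR.
PAGE_SIZE = 4

def translate_page_ids(page_ids, full_to_swa_loc):
    out = []
    for pid in page_ids:
        base_loc = pid * PAGE_SIZE            # first slot of the page
        swa_loc = full_to_swa_loc[base_loc]   # location-level translation
        out.append(swa_loc // PAGE_SIZE)      # back to a page ID
    return out

# Toy mapping: full locations 8..15 map to SWA locations 0..7.
full_to_swa_loc = {loc: loc - 8 for loc in range(8, 16)}
print(translate_page_ids([2, 3], full_to_swa_loc))  # [0, 1]
```

Translating the raw page IDs through this mapping would instead look up locations 2 and 3, which belong to a different page entirely; going through the base location keeps page ID and in-page offset consistent.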



@gemini-code-assist bot left a comment


Code Review

This pull request introduces optimizations for Sliding Window Attention (SWA) by pre-calculating the SWA page table to reduce kernel launches. It also fixes a bug in page table translation for page sizes greater than one. The changes for normal extend and decode operations look solid. However, I've identified a potential issue in the fallback logic within forward_extend and forward_decode that could lead to incorrect behavior for SWA layers when page_size > 1. My review includes suggestions to address this by removing the fallback and asserting the presence of the pre-calculated swa_page_table.
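The reviewer's suggestion can be sketched as a simple guard (all names here are hypothetical, not the PR's actual code): rather than falling back to on-the-fly translation, which can be incorrect for SWA layers when page_size > 1, assert that the pre-translated table is present.

```python
# Illustrative sketch of "remove the fallback and assert the presence of
# the pre-calculated swa_page_table" from the review above.
def select_page_table(metadata, is_swa_layer):
    if is_swa_layer:
        assert metadata.swa_page_table is not None, (
            "swa_page_table must be pre-translated in init_forward_metadata"
        )
        return metadata.swa_page_table
    return metadata.page_table
```

Failing loudly here surfaces a missed pre-translation at development time instead of silently producing wrong attention results for SWA layers.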

@ispobock (Collaborator, Author) commented Jan 1, 2026

/tag-and-rerun-ci

@github-actions bot added the run-ci label Jan 1, 2026
@ispobock (Collaborator, Author) commented Jan 1, 2026

/rerun-stage unit-test-backend-8-gpu-h200

@github-actions bot (Contributor) commented Jan 1, 2026

✅ Triggered unit-test-backend-8-gpu-h200 to run independently (skipping dependencies).

It will not be shown in this page. Check the Actions tab for progress.

@ispobock (Collaborator, Author) commented Jan 1, 2026
