
[WIP][FIX] NSA backend page_table overflow in speculative decoding target_verify#19016

Open
JustinTong0323 wants to merge 1 commit into sgl-project:main from JustinTong0323:fix-v32-spec-crash-nsa

Conversation

@JustinTong0323 (Collaborator)

Motivation

This PR may address #18980.

When speculative decoding is enabled with the NSA attention backend, the decode server crashes with:

ERROR 2026-02-18T14:43:05.821583340Z [severity: ERROR] metadata.page_table_1[:, :max_seqlen_k].copy_(page_indices)
ERROR 2026-02-18T14:43:05.821584871Z [severity: ERROR] RuntimeError: The size of tensor a (202752) must match the size of tensor b (202754) at non-singleton dimension 1
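
The mismatch is easy to reproduce in isolation. The toy sketch below (standalone code with small stand-in sizes, not the sglang source) mimics the failing `copy_`: the preallocated table has `max_context_len` columns, but the verify step asks to fill `max_context_len + num_draft_tokens` of them.

```python
import torch

# Standalone reproduction of the shape mismatch (toy sizes; not sglang code).
max_context_len = 8        # stands in for 202752
num_draft_tokens = 2       # stands in for speculative_num_draft_tokens
page_table = torch.zeros(1, max_context_len, dtype=torch.int32)
max_seqlen_k = max_context_len + num_draft_tokens          # 10
page_indices = torch.arange(max_seqlen_k, dtype=torch.int32).unsqueeze(0)
try:
    # The slice silently clamps to 8 columns, so dst is (1, 8) vs src (1, 10).
    page_table[:, :max_seqlen_k].copy_(page_indices)
except RuntimeError as err:
    print(f"RuntimeError: {err}")
```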

Root cause:

  • init_cuda_graph_state() allocates the CUDA graph page_table with shape (max_num_tokens, self.max_context_len), where max_context_len equals model_config.context_len (e.g., 202752).
  • During target_verify in init_forward_metadata_replay_cuda_graph(), max_seqlen_k is computed as seq_lens.max() + speculative_num_draft_tokens. When a request is near the max context length, this exceeds the page_table column dimension (e.g., 202752 + 2 = 202754).
  • The req_to_token pool does NOT have this issue because model_runner_kv_cache_mixin.py already adds extra_max_context_len (4 + speculative_num_draft_tokens) to its allocation. But the NSA backend's page_table was not updated to match.

Fix: Add speculative_num_draft_tokens to the page_table column dimension, matching how req_to_token_pool handles the same scenario.
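
The fix can be sketched as follows. This is a hedged illustration with toy sizes; the variable names (`max_num_tokens`, `max_context_len`, `speculative_num_draft_tokens`) follow the PR description rather than the exact `nsa_backend.py` source:

```python
import torch

# Sketch of the fix: pad the column dimension at allocation time so that
# seq_lens.max() + speculative_num_draft_tokens can never exceed it.
max_num_tokens = 4
max_context_len = 8                  # stands in for model_config.context_len
speculative_num_draft_tokens = 2

# Before: torch.zeros(max_num_tokens, max_context_len) -> overflow possible.
# After: reserve room for the draft tokens appended during target_verify.
page_table = torch.zeros(
    max_num_tokens,
    max_context_len + speculative_num_draft_tokens,
    dtype=torch.int32,
)

# Worst case during target_verify: a request at the max context length.
max_seqlen_k = max_context_len + speculative_num_draft_tokens
page_indices = torch.arange(max_seqlen_k, dtype=torch.int32).expand(max_num_tokens, -1)
page_table[:, :max_seqlen_k].copy_(page_indices)   # fits with the padded allocation
```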

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @JustinTong0323, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug in the NSA attention backend's speculative decoding mechanism. It addresses a RuntimeError caused by an undersized page_table tensor, ensuring that the system can correctly handle speculative draft tokens without crashing, particularly when processing requests near the maximum context length. The change aligns the page_table allocation with how other components handle similar scenarios.

Highlights

  • Fix NSA backend page_table overflow: The page_table in the NSA attention backend was not correctly sized to accommodate speculative draft tokens, leading to a RuntimeError during target_verify when requests approached the maximum context length. This fix adds the necessary extra columns to prevent the overflow.


Changelog
  • python/sglang/srt/layers/attention/nsa_backend.py
    • Increased the column dimension of the page_table tensor allocation to include speculative_num_draft_tokens, preventing overflow during target_verify.
Activity
  • No human activity has occurred on this pull request yet.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@JustinTong0323 JustinTong0323 changed the title [Bug] Fix NSA backend page_table overflow in speculative decoding target_verify [WIP][FIX NSA backend page_table overflow in speculative decoding target_verify Feb 19, 2026
@JustinTong0323 JustinTong0323 changed the title [WIP][FIX NSA backend page_table overflow in speculative decoding target_verify [WIP][FIX] NSA backend page_table overflow in speculative decoding target_verify Feb 19, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a RuntimeError that occurs during speculative decoding with the NSA attention backend. The error is caused by an overflow in the page_table dimension when a request's sequence length is close to the maximum context length. The fix correctly increases the allocated size of the page_table in the CUDA graph state by adding the number of speculative draft tokens. This aligns the allocation with how other similar buffers are handled and prevents the out-of-bounds access. The change is clear, concise, and effectively resolves the bug.
