[WIP][FIX] NSA backend page_table overflow in speculative decoding target_verify #19016
JustinTong0323 wants to merge 1 commit into sgl-project:main from
Conversation
When speculative decoding is enabled with the NSA attention backend, the decode server crashes with:

`RuntimeError: The size of tensor a (202752) must match the size of tensor b (202754) at non-singleton dimension 1`

at: `metadata.page_table_1[:, :max_seqlen_k].copy_(page_indices)`

Root cause:
- `init_cuda_graph_state()` allocates the CUDA graph `page_table` with shape `(max_num_tokens, self.max_context_len)`, where `max_context_len` equals `model_config.context_len` (e.g., 202752).
- During `target_verify` in `init_forward_metadata_replay_cuda_graph()`, `max_seqlen_k` is computed as `seq_lens.max() + speculative_num_draft_tokens`. When a request is near the max context length, this exceeds the `page_table` column dimension (e.g., 202752 + 2 = 202754).
- The `req_to_token` pool does NOT have this issue because `model_runner_kv_cache_mixin.py` already adds `extra_max_context_len` (`4 + speculative_num_draft_tokens`) to its allocation, but the NSA backend's `page_table` was not updated to match.

Fix: Add `speculative_num_draft_tokens` to the `page_table` column dimension, matching how `req_to_token_pool` handles the same scenario.
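The shape mismatch described above can be reproduced standalone with the sizes from the report (`max_context_len = 202752`, `speculative_num_draft_tokens = 2`). This is an illustrative sketch, not the actual sglang code: slicing a tensor past its column count silently clamps, so the subsequent `copy_` sees mismatched shapes and raises.

```python
import torch

# Sizes taken from the reported crash (illustrative, not the sglang code).
max_context_len = 202752
speculative_num_draft_tokens = 2

# CUDA graph page_table allocated with only max_context_len columns.
page_table = torch.zeros(1, max_context_len, dtype=torch.int32)

# target_verify computes max_seqlen_k = seq_lens.max() + draft tokens,
# which can exceed the allocated column dimension for near-max requests.
max_seqlen_k = max_context_len + speculative_num_draft_tokens
page_indices = torch.zeros(1, max_seqlen_k, dtype=torch.int32)

# Slicing clamps to 202752 columns, so copy_ sees (1, 202752) vs
# (1, 202754) and raises the reported RuntimeError.
try:
    page_table[:, :max_seqlen_k].copy_(page_indices)
    overflowed = False
except RuntimeError as e:
    overflowed = True
    print(e)
```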
Code Review
This pull request addresses a RuntimeError that occurs during speculative decoding with the NSA attention backend. The error is caused by an overflow in the page_table dimension when a request's sequence length is close to the maximum context length. The fix correctly increases the allocated size of the page_table in the CUDA graph state by adding the number of speculative draft tokens. This aligns the allocation with how other similar buffers are handled and prevents the out-of-bounds access. The change is clear, concise, and effectively resolves the bug.
Motivation
This PR may address #18980.
When speculative decoding is enabled with the NSA attention backend, the decode server crashes with:

`RuntimeError: The size of tensor a (202752) must match the size of tensor b (202754) at non-singleton dimension 1`

Root cause: the CUDA graph `page_table` is allocated with `max_context_len` columns, but during `target_verify`, `max_seqlen_k` can reach `seq_lens.max() + speculative_num_draft_tokens`, which overflows that dimension for requests near the max context length.

Fix: Add `speculative_num_draft_tokens` to the `page_table` column dimension, matching how `req_to_token_pool` handles the same scenario.
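A minimal sketch of the allocation change, with hypothetical function and parameter names (the actual sglang identifiers may differ): the column dimension is padded by the number of draft tokens so that the worst-case `max_seqlen_k` always fits, mirroring how `req_to_token_pool` already over-allocates.

```python
import torch

# Hypothetical sketch of the fix; names are illustrative, not the
# exact sglang code.
def alloc_cuda_graph_page_table(max_num_tokens: int,
                                max_context_len: int,
                                speculative_num_draft_tokens: int) -> torch.Tensor:
    # Pad the column dimension so that
    # max_seqlen_k = seq_lens.max() + speculative_num_draft_tokens
    # never exceeds the allocation during target_verify.
    num_cols = max_context_len + speculative_num_draft_tokens
    return torch.zeros(max_num_tokens, num_cols, dtype=torch.int32)

page_table = alloc_cuda_graph_page_table(4, 202752, 2)

# The previously failing copy now fits.
max_seqlen_k = 202752 + 2
page_table[:, :max_seqlen_k].copy_(
    torch.zeros(4, max_seqlen_k, dtype=torch.int32)
)
```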
Modifications
Accuracy Tests
Benchmarking and Profiling
Checklist
Review Process
`/tag-run-ci-label`, `/rerun-failed-ci`, `/tag-and-rerun-ci`