Conversation

@BBuf (Collaborator) commented Jan 1, 2026

Motivation

When running the following command:

CUDA_VISIBLE_DEVICES=4,5,6,7 sglang generate \
  --model-path Wan-AI/Wan2.2-I2V-A14B-Diffusers \
  --text-encoder-cpu-offload --pin-cpu-memory \
  --num-gpus 4 --ulysses-degree 2 --ring-degree 2 \
  --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." \
  --save-output --output-path outputs --output-file-name "output_video.mp4" \
  --image-path="https://github.com/Wan-Video/Wan2.2/blob/990af50de458c19590c245151197326e208d7191/examples/i2v_input.JPG?raw=true" \
  --720p --num-frames 81 --fps 16 \
  --guidance-scale 3.5 --guidance-scale-2 4 --num-inference-steps 27

On main, the run fails with:

File "/home/lmsys/bbuf/sglang/python/sglang/multimodal_gen/runtime/models/dits/wanvideo.py", line 405, in forward
    attn_output = self.attn1(query, key, value)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lmsys/bbuf/sglang/python/sglang/multimodal_gen/runtime/layers/attention/layer.py", line 376, in forward
    out = ring_attn(
          ^^^^^^^^^^
  File "/home/lmsys/bbuf/sglang/python/sglang/multimodal_gen/runtime/layers/usp.py", line 244, in ring_attn
    out, *_ = _templated_ring_attention(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/experimental/_attention.py", line 466, in _templated_ring_attention
    out, logsumexp, *rest = op(
                            ^^^
  File "/home/lmsys/bbuf/sglang/python/sglang/multimodal_gen/runtime/layers/usp.py", line 217, in attn_callable_adapter
    output, softmax_lse, *rest = attn_impl.forward(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected at least 2, got 1)
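
The unpack fails because attn_callable_adapter in usp.py expects at least two values, (output, softmax_lse), from the attention backend, while the flash_attn path returned only the output tensor. Unpacking a bare tensor iterates it along dim 0, which matches the error reporting "got 1" for a batch of one. A minimal reproduction of the mismatch, using hypothetical shapes and a stubbed backend rather than the real layer:

```python
import torch

q = torch.randn(1, 16, 8, 64)  # hypothetical (batch, seq, heads, head_dim); batch = 1

def backend_forward(q, k, v):
    # Pre-fix behavior: the backend returns only the attention output,
    # even when the caller also needs the softmax log-sum-exp.
    return torch.zeros_like(q)

# The adapter's unpack iterates the returned tensor along dim 0, so a
# batch of one yields a single element and raises:
#   ValueError: not enough values to unpack (expected at least 2, got 1)
output, softmax_lse, *rest = backend_forward(q, q, q)
```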

With this PR, the same command completes successfully:

100%|███████████████████████████████████████████████████████████████| 27/27 [01:50<00:00,  4.10s/it]
[01-01 13:57:35] [DenoisingStage] average time per step: 4.1008 seconds
[01-01 13:57:36] [DenoisingStage] finished in 111.1133 seconds
[01-01 13:57:36] [DecodingStage] started...
[01-01 13:57:39] [DecodingStage] finished in 3.6695 seconds
[01-01 13:57:39] Peak GPU memory: 61.25 GB, Remaining GPU memory at peak: 18.40 GB. Components that can stay resident: ['vae']
[01-01 13:57:42] Output saved to outputs/output_video.mp4
[01-01 13:57:42] Pixel data generated successfully in 123.40 seconds
[01-01 13:57:42] Completed batch processing. Generated 1 outputs in 123.41 seconds.
[01-01 13:57:42] Memory usage - Max peak: 62718.84 MB, Avg peak: 62718.84 MB
output_video.mp4

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments (/tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci) or contact authorized users to do so.
  4. After green CI and required approvals, ask Merge Oncalls to merge.

github-actions bot added the diffusion (SGLang Diffusion) label Jan 1, 2026
@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @BBuf, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug in the ring_attention functionality of the diffusion models, triggered when it is backed by the flash_attn library. The issue stemmed from incorrect handling of return values, leading to a ValueError during the attention computation. The fix ensures that the flash_attn backend properly unpacks the attention output and log-sum-exp values, restoring correct operation of ring_attention for multimodal generation tasks.

Highlights

  • Bug Fix: Ring Attention: Resolved a ValueError that occurred in the ring_attention mechanism when utilizing the upstream flash_attn backend, specifically when return_softmax_lse was enabled.
  • Flash Attention Backend Update: Modified the flash_attn backend to correctly pass the return_attn_probs argument and handle the unpacking of multiple return values (out and softmax_lse) when return_softmax_lse is true, preventing the 'not enough values to unpack' error; see the sketch after this list.
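
A minimal sketch of the fixed backend path, assuming the flash-attn 2 style flash_attn_func signature; attention_forward is a hypothetical stand-in for the real method in layers/attention/layer.py, which handles more arguments (softmax scale, causal masking, etc.) than shown here:

```python
from flash_attn import flash_attn_func

def attention_forward(q, k, v, return_softmax_lse: bool = False):
    if return_softmax_lse:
        # With return_attn_probs=True, flash_attn_func returns
        # (out, softmax_lse, ...); ring attention needs both so that
        # partial results from each rank can be rescaled and merged.
        out, softmax_lse, *_ = flash_attn_func(q, k, v, return_attn_probs=True)
        return out, softmax_lse
    # Default path: only the attention output is returned.
    return flash_attn_func(q, k, v)
```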





gemini-code-assist bot left a comment


Code Review

This pull request effectively resolves a ValueError that occurs during ring_attention operations when using the upstream FlashAttention 3 implementation. The issue stemmed from incorrect return value handling when return_softmax_lse was specified. The fix correctly passes the return_attn_probs parameter and properly unpacks the results from the attention function. The change is accurate and addresses the bug. I have one minor suggestion to improve code consistency.


@mickqian (Collaborator) commented Jan 1, 2026

/tag-and-rerun-ci

github-actions bot added the run-ci label Jan 1, 2026
