
[Feature] Support CUDA Graph under mixed mode DeepEP communication#7345

Open
lizexu123 wants to merge 2 commits into PaddlePaddle:develop from lizexu123:cuda_graph_stream-1

Conversation

Collaborator

@lizexu123 lizexu123 commented Apr 12, 2026

Motivation

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

Error log:
DeepEP/csrc/kernels/internode_ll.cu:553 operation would make the legacy stream depend on a capturing blocking stream
Root cause:

Python: low_latency_dispatch(return_recv_hook=True, async_finish=False)
    ↓
C++: deep_ep.cpp:1679 → launch_stream.stream() is passed to internode_ll::dispatch()
    ↓
C++: internode_ll.cu:520 → LAUNCH_KERNEL(&cfg, dispatch_func, ...)
    ↓
Macro expansion: launch.cuh:29 → CUDA_CHECK(cudaLaunchKernelEx(config, kernel, ...))
    ↓
CUDA driver: detects an implicit dependency between the legacy stream and the capturing stream → error!
// deep_ep.cpp:1614-1615
auto compute_stream = at::cuda::getCurrentCUDAStream();  // ← c10's TLS
auto launch_stream = return_recv_hook ? compute_stream : comm_stream.value();
at::cuda::getCurrentCUDAStream() returns the default stream stored in c10's TLS (stream 0, i.e. the legacy stream), not Paddle's current capture stream.
As a result, cudaLaunchKernelEx is invoked on the legacy stream, while CUDA graph capture is running on the stream created by Paddle's CudaStreamDefault.
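The mismatch between the two "current stream" registries can be modeled in plain Python. This is a toy illustration with made-up names (`paddle_ctx`, `c10_tls`), not Paddle or PyTorch API: a guard that updates only the framework's own registry leaves the other one pointing at the legacy stream.

```python
from contextlib import contextmanager

# Toy registries standing in for the two independent notions of
# "current stream": paddle_ctx models Paddle's GPUContext and
# c10_tls models c10's thread-local state. Names are illustrative only.
LEGACY_STREAM = "stream 0 (legacy)"
paddle_ctx = {"current": LEGACY_STREAM}
c10_tls = {"current": LEGACY_STREAM}

@contextmanager
def paddle_stream_guard(stream):
    """Models paddle.device.stream_guard: it updates only Paddle's side."""
    prev = paddle_ctx["current"]
    paddle_ctx["current"] = stream
    try:
        yield
    finally:
        paddle_ctx["current"] = prev

capture_stream = "capture stream (non-blocking)"
with paddle_stream_guard(capture_stream):
    # Paddle's view switches to the capture stream...
    assert paddle_ctx["current"] == capture_stream
    # ...but a DeepEP-style lookup through c10's TLS still yields the
    # legacy stream, so the kernel would launch outside the capture.
    assert c10_tls["current"] == LEGACY_STREAM
```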

What this fix does:

# 1. The capture stream is created through the Python API → cudaStreamNonBlocking by default
self._capture_stream = paddle.device.Stream()
# paddle/fluid/pybind/cuda_streams_py.cc:370
# → auto stream_flag = phi::CUDAStream::StreamFlag::kStreamNonBlocking;

# 2. _DeepEPStreamGuard keeps c10's TLS in sync
with _DeepEPStreamGuard(self._capture_stream):
    # at::cuda::getCurrentCUDAStream() → returns capture_stream (not the legacy stream)
    # cudaLaunchKernelEx is called on capture_stream → captured into the graph
    # capture_stream is non-blocking → no implicit synchronization with the legacy stream

This could have been implemented much more simply, for example the way it is done in sglang/python/sglang/srt/distributed/parallel_state.py:483-510:

@contextmanager
def graph_capture(self, stream=None):
    if stream is None:
        stream = torch.cuda.Stream()              # 1. create a new stream

    curr_stream = torch.cuda.current_stream()
    stream.wait_stream(curr_stream)               # 2. the new stream waits for the current stream

    with torch.cuda.stream(stream):               # 3. switch the current stream
        yield graph_capture_context

with torch.cuda.stream(stream):
# PyTorch's current stream → stream ✓
# c10's TLS → stream ✓
# DeepEP's at::cuda::getCurrentCUDAStream() → stream ✓
But Paddle's paddle.device.stream_guard() only updates Paddle's own GPUContext and leaves c10's TLS untouched:

with paddle.device.stream_guard(stream):
    # Paddle GPUContext → stream     ✓
    # c10 TLS → not updated!         ✗
    # DeepEP's at::cuda::getCurrentCUDAStream() → legacy stream (stream 0)  ✗

That is why we have to call c10::cuda::setCurrentCUDAStream() manually through ctypes to close this gap.
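The _DeepEPStreamGuard idea boils down to a save/set/restore context manager. Below is a self-contained sketch of that pattern: the getter/setter are injected stand-ins (a plain dict here), whereas in the actual fix the setter would be the ctypes call into c10::cuda::setCurrentCUDAStream(); all names in this sketch are hypothetical.

```python
class StreamGuard:
    """Save/set/restore pattern behind _DeepEPStreamGuard (sketch only).

    get_fn/set_fn are injected stand-ins for reading and writing c10's
    thread-local current stream; in the real fix the setter is a ctypes
    call into c10::cuda::setCurrentCUDAStream().
    """

    def __init__(self, stream, get_fn, set_fn):
        self.stream = stream
        self._get = get_fn
        self._set = set_fn

    def __enter__(self):
        self._prev = self._get()   # remember whichever stream was current
        self._set(self.stream)     # point the TLS at the capture stream
        return self.stream

    def __exit__(self, *exc_info):
        self._set(self._prev)      # restore on normal exit and on error
        return False

# Usage with a toy dict standing in for c10's TLS:
tls = {"current": "legacy"}
with StreamGuard("capture",
                 lambda: tls["current"],
                 lambda s: tls.update(current=s)):
    assert tls["current"] == "capture"   # inside: TLS sees the capture stream
assert tls["current"] == "legacy"        # outside: previous stream restored
```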

Thanks to the mentorship-program intern, whose PR PaddlePaddle/Paddle#78652 resolved this issue.

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code, run pre-commit before commit.
  • Add unit tests. Please write the reason in this PR if no unit tests.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@CLAassistant

CLAassistant commented Apr 12, 2026

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ lizexu123
❌ root


root does not seem to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@paddle-bot

paddle-bot bot commented Apr 12, 2026

Thanks for your contribution!

fastdeploy-bot

This comment was marked as outdated.

@fastdeploy-bot

🤖 AI Code Review | {{TIMESTAMP}}

📋 Review Summary

PR overview: fixes CUDA Graph compatibility under mixed-mode DeepEP communication by using a dedicated capture stream to avoid the dependency conflict with the legacy stream.

Scope of changes: graph_optimization/layers/moe/

Impact tags: [Graph Optimization] [OP]

📝 PR Convention Check

The PR description fills in Motivation and Modifications, and the title carries the [Feature] tag; it follows the conventions.


Issues

Severity | File | Summary
🔴 Bug | cudagraph_piecewise_backend.py:248 | call site passes the removed is_decode argument, causing a TypeError
🔴 Bug | cudagraph_piecewise_backend.py:345 | references the deleted self._decode_capture_index, causing an AttributeError
🟡 Suggestion | forward_meta.py:161 | the newly added audio_token_num field is unused
🟡 Suggestion | benchmark.sh:1 | contains hard-coded local paths; should not be committed
🟡 Suggestion | test_ci.sh:1 | contains hard-coded local paths; should not be committed
❓ Question | fused_moe_backend_base.py:236 | the prefill/decode logic is commented out — has the impact been evaluated?

Overall Assessment

The core fix (a dedicated capture stream plus DeepEP buffer cleanup) is well designed and consistent with SGLang's approach. However, the refactor missed a method-signature update and an attribute cleanup, leaving two runtime errors. In addition, local test scripts should not be committed to the repository.

Detailed Issues

🔴 Bug 1: cudagraph_piecewise_backend.py:248

The call site passes is_decode=static_cudagraph_for_decode, but the method signature (line 162) no longer takes that parameter, which will raise a TypeError at runtime.

Suggested fix — drop the argument at the call site:

return self.run_static_model(entry, **kwargs)

🔴 Bug 2: cudagraph_piecewise_backend.py:345

References the deleted attribute self._decode_capture_index, which will raise an AttributeError at runtime.

Suggested fix — delete the line:

# self._decode_capture_index = 0  # this attribute has been removed

🟡 Suggestion 1: forward_meta.py:161

The newly added field audio_token_num is never used; please confirm whether it should be kept or removed.

🟡 Suggestion 2: benchmark.sh:1

Contains hard-coded local paths (e.g. /root/paddlejob/workspace/env_run/output/lizexu/FastDeploy-1); this is a local test script and should not be committed to the repository.

Suggestion: delete the file, or add it to .gitignore.

🟡 Suggestion 3: test_ci.sh:1

Contains hard-coded local paths; this is a local test script and should not be committed to the repository.

Suggestion: delete the file, or add it to .gitignore.

❓ Question 1: fused_moe_backend_base.py:236

The original logic distinguishing prefill from decode has been commented out, so apply_ep_decode is now always called. In mixed mode this may lead to incorrect behavior during the prefill phase.

Please confirm whether this is intentional; if it is meant to simplify the logic, please explain why.
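For context, the branching being asked about typically has the shape sketched below. The function names are hypothetical stand-ins, not the actual FastDeploy entry points; routing every batch through the decode path amounts to deleting the first branch.

```python
# Hypothetical stand-ins for the backend's prefill/decode entry points;
# these are not FastDeploy functions, just toy implementations so the
# sketch is runnable.
def apply_ep_prefill(layer, x, meta):
    return ("prefill", x)

def apply_ep_decode(layer, x, meta):
    return ("decode", x)

def apply_ep(layer, x, meta, is_prefill):
    # The commented-out branching: without it, every batch takes the
    # decode path, which in mixed mode also covers prefill batches.
    if is_prefill:
        return apply_ep_prefill(layer, x, meta)
    return apply_ep_decode(layer, x, meta)

assert apply_ep(None, 1, None, is_prefill=True)[0] == "prefill"
assert apply_ep(None, 1, None, is_prefill=False)[0] == "decode"
```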


AI Code Review by fastdeploy-llm-integration skill

