
[Feature] support v1 update/clear api for RL #6761

Open
liyonghua0910 wants to merge 3 commits into PaddlePaddle:develop from liyonghua0910:develop+rl_update_weight_v1

Conversation

@liyonghua0910 (Collaborator) commented Mar 10, 2026

Motivation

This PR upgrades the weight clearing and updating flow for RL scenarios.

The legacy control path relied mainly on shared memory to synchronize state across the engine, worker, and cache-related components. While functional, the signal path was implicit, which made it difficult to trace how failed requests were handled across components. In addition, the old workflow typically cleared weights via clear_load_weight first, even while residual requests could still exist, and then relied on a manual reset_scheduler call to clean up the scheduler queue. This made the lifecycle hard to follow and risked inconsistent state during asynchronous resource recycling.

The goal of this PR is to move the control flow to an explicit control-request path and replace the legacy weight clear/reload flow with the new sleep/wakeup workflow, so state transitions and troubleshooting become more straightforward.

Modifications

  • Switch weight-clear/update control signals to a dedicated ControlRequest/ControlResponse path, so each control request has its own request ID and can be traced through logs end to end.
  • Add /v1/sleep and /v1/wakeup, with a tags parameter to specify which part of GPU memory should be offloaded or reloaded. Enable these APIs by setting export FD_ENABLE_V1_UPDATE_WEIGHTS=1.
  • Keep /clear_load_weight and /update_model_weight for compatibility:
    • When FD_ENABLE_V1_UPDATE_WEIGHTS=0, /clear_load_weight and /update_model_weight still rely on shared memory for control and multi-process synchronization.
    • When FD_ENABLE_V1_UPDATE_WEIGHTS=1, /clear_load_weight and /update_model_weight switch to the new control path, using the engine worker queue, engine cache queue, and FMQ for request dispatch and response collection.
  • Extend /v1/pause and /v1/resume with cache-transfer-manager coordination to support multi-level cache and KV-cache-backend scenarios.
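To make the dedicated control path concrete, here is a minimal sketch of what a traceable control request/response pair could look like. All names here (ControlRequest, ControlResponse, the task values, and the handle function) are illustrative assumptions, not the actual FastDeploy classes introduced by this PR:

```python
import uuid
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of the explicit control-request path described above.
# The key idea: every control request carries its own request ID, so it can
# be traced through logs end to end and matched with its response.

@dataclass
class ControlRequest:
    task: str                                       # e.g. "sleep", "wakeup", "clear", "update"
    tags: List[str] = field(default_factory=list)   # which GPU memory to offload/reload
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class ControlResponse:
    request_id: str                 # echoes the originating request for log correlation
    success: bool
    message: Optional[str] = None

def handle(req: ControlRequest) -> ControlResponse:
    # A real handler would dispatch through the engine worker queue / cache
    # queue and collect results; this stub only validates the task name.
    if req.task not in {"sleep", "wakeup", "clear", "update"}:
        return ControlResponse(req.request_id, False, f"unknown task: {req.task}")
    return ControlResponse(req.request_id, True)

resp = handle(ControlRequest(task="sleep", tags=["weights"]))
print(resp.request_id, resp.success)
```

Compared with a shared-memory flag, an explicit request/response pair like this gives each state transition a unique, loggable identity, which is the traceability property the PR description emphasizes.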

Usage or Command

Export the following environment variable when starting the server:

export FD_ENABLE_V1_UPDATE_WEIGHTS=1

Send control requests:

# Legacy-compatible endpoints
curl -i http://<IP>:<PORT>/clear_load_weight
curl -i http://<IP>:<PORT>/update_model_weight

# New control endpoints
curl -X POST http://<IP>:<PORT>/v1/sleep
curl -X POST http://<IP>:<PORT>/v1/wakeup
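The tags parameter on /v1/sleep and /v1/wakeup selects which part of GPU memory is offloaded or reloaded. A hedged sketch follows, assuming the request body is JSON with a "tags" list; the field name and the example values ("kv_cache") are assumptions, not confirmed by this PR description:

```shell
# Hypothetical: offload only the KV cache, keeping weights resident.
# The "tags" field name and its accepted values are assumptions.
curl -X POST http://<IP>:<PORT>/v1/sleep \
  -H "Content-Type: application/json" \
  -d '{"tags": ["kv_cache"]}'

# Later, reload the same memory region.
curl -X POST http://<IP>:<PORT>/v1/wakeup \
  -H "Content-Type: application/json" \
  -d '{"tags": ["kv_cache"]}'
```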

Accuracy Tests

  • N/A. This PR focuses on the RL weight-control path, request/control coordination, and interface changes. No model-output accuracy result is included in the current description.

Checklist

  • Add at least a tag in the PR title.
  • Format your code, run pre-commit before commit.
  • Add unit tests. Please write the reason in this PR if no unit tests.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot (bot) commented Mar 10, 2026

Thanks for your contribution!

@codecov-commenter commented Mar 10, 2026

Codecov Report

❌ Patch coverage is 22.19680% with 340 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@9f0778f). Learn more about missing BASE report.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| fastdeploy/cache_manager/cache_transfer_manager.py | 9.93% | 136 Missing ⚠️ |
| fastdeploy/engine/common_engine.py | 12.24% | 85 Missing and 1 partial ⚠️ |
| fastdeploy/worker/gpu_model_runner.py | 9.52% | 38 Missing ⚠️ |
| fastdeploy/rl/dynamic_weight_manager.py | 13.95% | 37 Missing ⚠️ |
| fastdeploy/worker/worker_process.py | 15.38% | 7 Missing and 4 partials ⚠️ |
| ...astdeploy/inter_communicator/engine_cache_queue.py | 58.33% | 10 Missing ⚠️ |
| fastdeploy/entrypoints/openai/api_server.py | 77.14% | 7 Missing and 1 partial ⚠️ |
| fastdeploy/entrypoints/engine_client.py | 36.36% | 7 Missing ⚠️ |
| fastdeploy/entrypoints/openai/utils.py | 25.00% | 3 Missing ⚠️ |
| fastdeploy/cache_manager/prefix_cache_manager.py | 0.00% | 0 Missing and 2 partials ⚠️ |

... and 1 more
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #6761   +/-   ##
==========================================
  Coverage           ?   71.17%           
==========================================
  Files              ?      395           
  Lines              ?    54984           
  Branches           ?     8678           
==========================================
  Hits               ?    39137           
  Misses             ?    13060           
  Partials           ?     2787           
| Flag | Coverage Δ |
| --- | --- |
| GPU | 71.17% <22.19%> (?) |
