[ckpt] feat: add kimi ckpt engine backend (#4954)
Conversation
Code Review
This pull request introduces a new checkpoint engine backend, kimi_ckpt_engine, designed to support both GPU and Huawei Ascend NPU environments. The implementation is comprehensive, including the core engine logic, integration with the existing checkpointing framework, and a new test suite. My review focuses on ensuring correctness and maintainability. I've identified a critical thread-safety issue in the weight sending logic that could lead to data corruption. Additionally, I've suggested a minor but important rename in the test suite to improve clarity.
```python
with concurrent.futures.ThreadPoolExecutor(max_workers=32) as executor:
    futures = [
        executor.submit(
            offload_cpu,
```
Why do we need to offload the model to CPU first? Does the kimi ckpt engine support D2D?
The kimi_checkpoint_engine only supports reading weights from the CPU. It first registers all weights, then all-gathers the metadata across all ranks.
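The parallel offload pattern in the snippet above can be sketched as follows. This is a minimal illustration using a stand-in tensor class, since the full implementation is not shown; `offload_cpu`, `offload_all`, and `FakeTensor` are hypothetical names, and the real engine would copy device tensors into (likely pinned) host memory.

```python
import concurrent.futures

class FakeTensor:
    """Stand-in for a device tensor, for illustration only."""
    def __init__(self, data, device="cuda"):
        self.data, self.device = data, device

    def cpu(self):
        # Return a host-memory copy of this tensor.
        return FakeTensor(self.data, device="cpu")

def offload_cpu(name, tensor):
    # In the real engine this would copy a device tensor into host memory
    # so the checkpoint engine can read and register it from CPU.
    return name, tensor.cpu()

def offload_all(state_dict, max_workers=32):
    # Offload every weight in parallel threads, mirroring the snippet above.
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as ex:
        futures = [ex.submit(offload_cpu, n, t) for n, t in state_dict.items()]
        return dict(f.result() for f in futures)
```

Using many threads helps overlap the individual device-to-host copies, which are independent per weight.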
Please format code first.
verl/checkpoint_engine/README.md
|nixl|NIXL|all_gather+ring p2p|Various transport backends (D2D, H2H, H2D, etc.)<br>- UCX<br>- UCCL<br>- Mooncake|Medium/High|High: dynamically adjusts ring topology|Off-policy training<br>- Trainer/rollout disaggregated<br>- Elastic rollout<br>- Rollout fault tolerance<br>- Heterogeneous hardware rollout|
|kimi_ckpt_engine|MOONCAKE+NCCL/HCCL|p2p+broadcast|NVIDIA/Ascend|High|Low: rebuild communication group|Off-policy training<br>- Trainer/rollout disaggregated<br>- Save checkpoint each time|
PS: kimi_ckpt_engine first offloads all weights to the CPU. Then, using the Mooncake transfer engine, these weights are transmitted via P2P to a specific worker in the rollout, followed by a broadcast to all other rollout workers. This mode requires the P2P feature of checkpoint_engine. Please ensure you have installed it via `pip install 'checkpoint-engine[p2p]'` and that your version is 0.4.0 or higher.
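The install requirement above can be checked programmatically before launching training. This is a hedged sketch, not part of the PR: it assumes only that the distribution name is `checkpoint-engine` and the minimum version is 0.4.0, as stated in the README note; the helper names are hypothetical.

```python
from importlib.metadata import PackageNotFoundError, version

MIN_VERSION = (0, 4, 0)

def parse_version(v):
    # "0.4.1" -> (0, 4, 1); keeps only the first three numeric segments.
    return tuple(int(part) for part in v.split(".")[:3])

def check_ckpt_engine():
    """Fail fast if checkpoint-engine is missing or older than 0.4.0."""
    try:
        installed = version("checkpoint-engine")
    except PackageNotFoundError:
        raise RuntimeError(
            "checkpoint-engine not found; run: pip install 'checkpoint-engine[p2p]'"
        )
    if parse_version(installed) < MIN_VERSION:
        raise RuntimeError(f"checkpoint-engine {installed} is older than 0.4.0")
    return installed
```

Tuple comparison gives the correct lexicographic ordering of version segments without pulling in an extra dependency.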
Could you provide more details about how kimi_ckpt_engine works? For example, add a diagram showing the communication topology between trainer and rollout workers.
I have updated the description and added a schematic. Could you please review it again to see if the level of detail is sufficient? Also, I have been allocated an H100 environment and will supplement the performance data over the next two days.
The H100 environment was preempted by a higher-priority task, so the performance data will be pending for now. Will update later.
@wuxibin89 I have updated the description. Could you please take another look to see if it’s detailed enough to be merged?
### What does this PR do?

Based on the ckpt engine abstraction ([add checkpoint-engine abstraction](verl-project#4775)), this PR adds the kimi_ckpt_engine backend to support both GPU and Huawei Ascend NPU. Since establishing communication domains across trainer and rollout workers is required, this PR also depends on the [newly added communication domain support](MoonshotAI/checkpoint-engine#66) in kimi_ckpt_engine.

TODO:

- [x] Add detailed performance testing results in the checkpoint engine README.

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: [add Hccl ckpt engine backend](verl-project#4885)
- [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `veomni`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

We have verified the functionality on both GPU and NPU. Performance benchmarks in a 32-NPU environment show promising results; however, due to a lack of available GPU resources, performance data for GPU is still pending.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [x] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also update the reference to the submodule commit via `git submodule update --remote` or `cd recipe && git pull origin main`.

---------

Co-authored-by: kip-cxj <cuixiaojin@huawei.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
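The p2p+broadcast pattern this backend uses (transfer the full weights to one designated rollout worker, which then broadcasts them to its peers) can be illustrated with a minimal single-process simulation. The function and rank names here are illustrative assumptions, not the engine's actual API.

```python
def push_weights(trainer_weights, rollout_ranks):
    """Simulate the two-phase weight update in one process:
    phase 1, the trainer P2P-sends the full weights to one designated
    rollout worker; phase 2, that worker broadcasts them to the rest."""
    head = rollout_ranks[0]
    received = {head: dict(trainer_weights)}  # phase 1: P2P to one worker
    for rank in rollout_ranks[1:]:            # phase 2: broadcast to peers
        received[rank] = dict(received[head])
    return received
```

Compared with having the trainer send to every rollout worker directly, this shape moves the fan-out onto the rollout side's collective (NCCL/HCCL broadcast), so the trainer pays the cross-cluster transfer cost only once.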