
Conversation

@yyDing1 (Collaborator) commented on Dec 15, 2025

What does this PR do?

  • Migrate all Reward-Model-related CI to Reward Loop (verified)
  • Set the naive router as the default for the reward loop

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with commas, e.g. [megatron, fsdp, doc]
    • {type} is one of feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate them with experiments and show results such as training curve plots or evaluation metrics.

API and Usage Example

Demonstrate how the API changes, if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this
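Since the snippet above was left as the template placeholder, here is a minimal hedged sketch of how a script might opt into the new path. It assumes the standard verl.trainer.main_ppo entrypoint and common reward_model overrides from verl's example scripts; only reward_model.use_reward_loop is taken directly from this PR's diff, and the model path is a placeholder.

# Hedged sketch, not the author's exact command: enable the reward loop
# on top of an otherwise standard reward-model configuration.
python3 -m verl.trainer.main_ppo \
    reward_model.enable=True \
    reward_model.model.path=path/to/reward_model \
    reward_model.use_reward_loop=True \
    trainer.total_epochs=1

Because this PR makes the naive router the default for the reward loop, no explicit router override should be needed once the flag is set.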

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all of the following items before requesting a review; otherwise, the reviewer may deprioritize this PR.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request migrates all Reward-Model-related CI to use the new reward_loop feature. The changes span example scripts, test scripts, and configuration files, consistently replacing the old reward-model setup with the reward_loop configuration. The core logic in verl/trainer/ppo/ray_trainer.py is updated to handle both the legacy path and the new reward_loop path. I've found one minor issue in a test script where a parameter is duplicated. Overall, the changes look good and align with the PR's objective.

reward_model.profiler.enable=$PROFILE_ENABLE \
reward_model.profiler.ranks=$PROFILE_RANKS \
reward_model.profiler.all_ranks=$PROFILE_RANKS_ALL \
reward_model.use_reward_loop=True \

Severity: high

The parameter reward_model.use_reward_loop=True is duplicated. Please remove this redundant line to improve clarity and avoid potential issues.
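For illustration, the duplication flagged here amounts to the same Hydra-style override appearing twice in one launch command, and the fix is to keep a single occurrence. The sketch below is hedged: only the four uncommented lines come from the quoted diff, and the commented "before" lines are a hypothetical reconstruction, not the actual script.

# Hypothetical "before": the same override appears twice in one command.
#   reward_model.use_reward_loop=True \
#   reward_model.profiler.enable=$PROFILE_ENABLE \
#   ...
#   reward_model.use_reward_loop=True \
# "After": the flag is passed exactly once.
reward_model.profiler.enable=$PROFILE_ENABLE \
reward_model.profiler.ranks=$PROFILE_RANKS \
reward_model.profiler.all_ranks=$PROFILE_RANKS_ALL \
reward_model.use_reward_loop=True \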

@wuxibin89 merged commit a07556d into volcengine:main on Dec 17, 2025
84 of 94 checks passed
@yyDing1 deleted the update_ci branch on Dec 17, 2025 at 03:53