[perf] feat: support profiler in model engine and sft trainer #4749
base: main
Conversation
Code Review
This pull request adds profiling support to the SFT trainer and model engine, which is a great performance analysis feature. The implementation is mostly solid, but I've identified a critical bug in the default profiler configuration that will cause a crash when profiling is enabled. Additionally, there's a significant amount of duplicated code between the standard and Ray-based SFT trainers that should be refactored to improve maintainability. Addressing these points will make the new feature robust and easier to maintain.
```yaml
step_start: 0
# stop profile mini-batch in training
step_end: null
```
The `step_end` value is set to `null`, which will be parsed as `None` in Python. This will cause a `TypeError` in `verl/utils/profiler/profile.py` during the validation check (`self.tool_config.step_end >= 0`) and when calculating the active duration for the profiler schedule (`self.tool_config.step_end - self.tool_config.step_start`).
To fix this, `step_end` must be an integer. A default value of `1` would be reasonable, allowing the profiler to capture a single step by default.
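To see why the `null` default crashes, here is a minimal reproduction of the failing comparison (the variable name mirrors the config key; the surrounding dataclass is omitted):

```python
# `step_end: null` in YAML is loaded as Python None, so the
# validation check `step_end >= 0` raises a TypeError.
step_end = None

try:
    assert step_end >= 0
except TypeError as exc:
    print(exc)  # '>=' not supported between instances of 'NoneType' and 'int'
```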
```yaml
step_end: 1
```

```python
self.profiler_config = omega_conf_to_dataclass(self.config.profiler)

# check profile interval
self.profiler_interval = self.config.trainer.profile_interval
self._validate_profiler_interval()
```

```python
def _validate_profiler_interval(self):
    assert len(self.profiler_interval) == 2
    self.start_profile_step = self.profiler_interval[0]
    self.end_profile_step = self.profiler_interval[1]
    assert self.end_profile_step >= self.start_profile_step
    if self.start_profile_step < 0:
        assert self.end_profile_step < 0
```
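For context, the `[start, end]` interval is ultimately what sizes the profiler schedule (the earlier review comment notes that `step_end - step_start` is used as the active duration). Below is a hypothetical helper illustrating that mapping; the function name, the returned keys, and the disabled-by-negative convention are assumptions for illustration, not the PR's actual API:

```python
# Hypothetical helper translating a [start, end] profile interval into
# torch.profiler.schedule-style arguments; names are illustrative only.
def schedule_args(profile_interval):
    start, end = profile_interval
    if start < 0:
        # a negative start means profiling is disabled
        return None
    return {
        "wait": start,          # steps to skip before recording
        "warmup": 0,
        "active": end - start,  # number of recorded steps
        "repeat": 1,
    }

print(schedule_args([2, 4]))    # {'wait': 2, 'warmup': 0, 'active': 2, 'repeat': 1}
print(schedule_args([-1, -1]))  # None
```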
The logic for handling the profiler configuration, including the `_validate_profiler_interval` method, is duplicated from `verl/trainer/sft_trainer.py`. This code duplication makes the codebase harder to maintain and increases the risk of introducing inconsistencies if one file is updated but the other is not.
Consider refactoring the common logic from `SFTTrainer` in both `sft_trainer.py` and `sft_trainer_ray.py` into a shared base class. This would centralize the configuration handling and other shared functionalities, leading to cleaner and more maintainable code.
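A hedged sketch of what such a refactor could look like. `BaseSFTTrainer`, `RaySFTTrainer`, and the `_TrainerCfg` config shape are hypothetical names for illustration, not identifiers from the PR:

```python
from dataclasses import dataclass, field


@dataclass
class _TrainerCfg:
    # illustrative stand-in for config.trainer
    profile_interval: list = field(default_factory=lambda: [-1, -1])


class BaseSFTTrainer:
    """Hypothetical shared base for the trainers in sft_trainer.py and sft_trainer_ray.py."""

    def __init__(self, trainer_cfg: _TrainerCfg):
        # shared profiler-interval handling lives here once, not in both trainers
        self.profiler_interval = trainer_cfg.profile_interval
        self._validate_profiler_interval()

    def _validate_profiler_interval(self):
        assert len(self.profiler_interval) == 2
        self.start_profile_step, self.end_profile_step = self.profiler_interval
        assert self.end_profile_step >= self.start_profile_step
        if self.start_profile_step < 0:
            # a negative start disables profiling; end must be negative too
            assert self.end_profile_step < 0


class RaySFTTrainer(BaseSFTTrainer):
    # Ray-specific logic only; profiler validation is inherited
    pass


t = RaySFTTrainer(_TrainerCfg(profile_interval=[1, 3]))
print(t.start_profile_step, t.end_profile_step)  # 1 3
```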
What does this PR do?
Checklist Before Starting
- Title follows `[{modules}] {type}: {description}` (this will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`; multiple modules are written like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - For breaking changes, add `[BREAKING]` to the beginning of the title, e.g. `[BREAKING][fsdp, megatron] feat: dynamic batching`
Test
API and Usage Example
```python
# Add code snippet or script demonstrating how to use this
```
Design & Code Changes
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
- `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- `ci-request` channel in the `verl` Slack workspace. (If not accessible, please try the Feishu group (飞书群).)