
Commit bf9b7e4

[fsdp] fix: replicate ref compute_log_prob (disable calculate_entropy ...) in LoRA (#4675)
### What does this PR do?

The calculated entropy is discarded, as only `old_log_probs` is used to create `ref_log_prob`. This is inefficient. The original `compute_ref_log_prob` logic for full-parameter fine-tuning does not calculate entropy. This PR therefore disables entropy calculation when `compute_log_prob` serves as the LoRA reference pass, takes the log-prob batching settings from the `ref` config in that case, and returns the result directly under the `ref_log_prob` key.

### Checklist Before Starting

- [X] Search for similar PRs. Paste at least one query link here: ...
- [X] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [X] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [X] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [X] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [X] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [X] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

Signed-off-by: Hollow Man <[email protected]>
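For illustration only, a minimal sketch of the idea behind the change (a hypothetical simplification with a plain-dict return and stand-in `actor`/`data`/`config` objects, not the actual verl worker method):

```python
from contextlib import nullcontext


def compute_log_prob_sketch(actor, data, config, is_lora: bool) -> dict:
    """Hypothetical, simplified sketch of the patched behavior."""
    # When the base model (adapters disabled) acts as the reference policy,
    # take the log-prob batching settings from the ref config instead of rollout.
    cfg = config.ref if is_lora else config.rollout
    data.meta_info["micro_batch_size"] = cfg.log_prob_micro_batch_size_per_gpu
    data.meta_info["max_token_len"] = cfg.log_prob_max_token_len_per_gpu
    data.meta_info["use_dynamic_bsz"] = cfg.log_prob_use_dynamic_bsz

    # Disable LoRA adapters so the base weights produce the reference log-probs.
    adapter_ctx = actor.actor_module.disable_adapter() if is_lora else nullcontext()
    with adapter_ctx:
        # Entropy is only consumed on the actor path; the reference path used to
        # compute it and throw it away, so it is skipped there.
        log_probs, entropys = actor.compute_log_prob(data=data, calculate_entropy=not is_lora)

    if is_lora:
        return {"ref_log_prob": log_probs}          # entropy intentionally not produced
    return {"old_log_probs": log_probs, "entropys": entropys}
```

Drawing the batching settings from `config.ref` in the LoRA case is intended to mirror how a standalone reference worker would be configured for `compute_ref_log_prob`.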
1 parent 8bd2e08 commit bf9b7e4

File tree

1 file changed, +10 -9 lines changed


verl/workers/fsdp_workers.py

Lines changed: 10 additions & 9 deletions
```diff
@@ -979,16 +979,20 @@ def compute_log_prob(self, data: DataProto):
         is_lora = data.meta_info.pop("is_lora", False)
         adapter_ctx = self.actor.actor_module.disable_adapter() if is_lora else nullcontext()
         # we should always recompute old_log_probs when it is HybridEngine
-        data.meta_info["micro_batch_size"] = self.config.rollout.log_prob_micro_batch_size_per_gpu
-        data.meta_info["max_token_len"] = self.config.rollout.log_prob_max_token_len_per_gpu
-        data.meta_info["use_dynamic_bsz"] = self.config.rollout.log_prob_use_dynamic_bsz
+        config_source = self.config.ref if is_lora else self.config.rollout
+        data.meta_info["micro_batch_size"] = config_source.log_prob_micro_batch_size_per_gpu
+        data.meta_info["max_token_len"] = config_source.log_prob_max_token_len_per_gpu
+        data.meta_info["use_dynamic_bsz"] = config_source.log_prob_use_dynamic_bsz
         data.meta_info["temperature"] = self.config.rollout.temperature
         # perform recompute log_prob
         with self.ulysses_sharding_manager:
             with adapter_ctx:
-                output, entropys = self.actor.compute_log_prob(data=data, calculate_entropy=True)
+                output, entropys = self.actor.compute_log_prob(data=data, calculate_entropy=not is_lora)
+                tensors = {"ref_log_prob": output} if is_lora else {"old_log_probs": output}
+                if not is_lora:
+                    tensors["entropys"] = entropys
             output = DataProto.from_dict(
-                tensors={"old_log_probs": output, "entropys": entropys},
+                tensors=tensors,
                 meta_info={"temperature": self.config.rollout.temperature},
             )

@@ -1011,10 +1015,7 @@ def compute_ref_log_prob(self, data: DataProto):
         if self._is_lora:
             # if _is_lora, actor without lora applied is the ref
             data.meta_info["is_lora"] = True
-            data = self.compute_log_prob(data)
-            # this old_log_probs is in fact ref_log_prob
-            data = DataProto.from_dict(tensors={"ref_log_prob": data.batch["old_log_probs"]})
-            return data
+            return self.compute_log_prob(data)
         assert self._is_ref
         # else:
         # otherwise, the class have a standalone ref model
```

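After this change, the LoRA branch of `compute_ref_log_prob` can return the result of `compute_log_prob` directly, since the output is already keyed as `ref_log_prob` and no entropy tensor is attached. A tiny self-contained sketch of the resulting key contract (`pack_output` is a hypothetical stand-in; plain lists replace tensors):

```python
# Minimal demonstration of the output-key selection introduced by this patch.
def pack_output(log_probs, entropys, is_lora):
    tensors = {"ref_log_prob": log_probs} if is_lora else {"old_log_probs": log_probs}
    if not is_lora:
        tensors["entropys"] = entropys
    return tensors

# Actor path: both old_log_probs and entropys are produced.
assert set(pack_output([0.1], [1.2], is_lora=False)) == {"old_log_probs", "entropys"}
# LoRA reference path: only ref_log_prob, no entropy tensor.
assert set(pack_output([0.1], None, is_lora=True)) == {"ref_log_prob"}
```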