
Conversation

Collaborator

@HollowMan6 HollowMan6 commented Dec 25, 2025

Waiting for:

For `compute_ref_log_prob`, we can do that by temporarily disabling the LoRA layers for the forward pass, since the base weights are frozen and only the LoRA layers are trained.

This is already supported in FSDP LoRA.
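
A minimal sketch of the idea, assuming a PEFT-style `disable_adapter()` context manager on the wrapped module and an HF-style causal-LM forward; the function and argument names below are illustrative, not the PR's actual code:

```python
import torch
import torch.nn.functional as F

def compute_ref_log_prob(actor_module, input_ids, attention_mask):
    """Reuse the actor weights as the reference model by switching LoRA off.

    Since the base weights are frozen and only the LoRA layers are trained,
    a forward pass with the adapters disabled reproduces the original
    reference model exactly, so no separate copy of the weights is needed.
    """
    with torch.no_grad(), actor_module.disable_adapter():
        logits = actor_module(input_ids=input_ids,
                              attention_mask=attention_mask).logits
    # Log-probability of each observed next token under the frozen base model.
    log_probs = F.log_softmax(logits[:, :-1], dim=-1)
    return torch.gather(log_probs, dim=-1,
                        index=input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
```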

What does this PR do?

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

✨ Presented to you with Mind Lab - A Lab for Experiential Intelligence.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a valuable feature by enabling LoRA for Megatron models, allowing the actor and reference models to share weights. The implementation correctly uses a context manager to disable LoRA adapters for computing reference log probabilities. My review includes two high-severity suggestions: one to address duplicated code for determining lora_rank across multiple files, and another to fix a performance issue where entropy is unnecessarily calculated and then discarded.
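
An illustrative fix for the second point: compute entropy only when the caller asks for it, instead of computing and discarding it on the log-prob-only path. The `calculate_entropy` flag and function name are assumptions for the sketch, not the PR's actual signature:

```python
import torch

def forward_log_probs(logits, labels, calculate_entropy: bool = False):
    """Return per-token log-probs, and entropy only when it is actually needed."""
    log_probs = torch.log_softmax(logits, dim=-1)
    token_log_probs = torch.gather(
        log_probs, dim=-1, index=labels.unsqueeze(-1)
    ).squeeze(-1)

    entropy = None
    if calculate_entropy:
        # Skipped entirely on the reference log-prob path, saving a full
        # softmax-sized reduction per micro-batch.
        entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
    return token_log_probs, entropy
```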

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the LoRA configuration handling to support a nested structure and introduces a mechanism to share the actor and reference models in megatron_workers.py when using LoRA. This is achieved by temporarily disabling the LoRA adapter to compute reference log probabilities, which is a clean and efficient approach.

My main feedback is regarding the duplicated logic for determining the lora_rank across four different trainer files. I've left comments with suggestions to refactor this into a shared utility function for better maintainability. The rest of the changes look good.
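
A sketch of the suggested refactor: one shared helper that resolves the rank from either the new nested layout or a legacy flat field, which the four trainer files could all call. The config attribute names (`lora.rank`, `lora_rank`) are assumptions for illustration:

```python
def get_lora_rank(model_config) -> int:
    """Resolve the effective LoRA rank from a model config.

    Checks the nested layout (model_config.lora.rank) first, then falls back
    to a flat lora_rank field; returns 0 when LoRA is not enabled.
    """
    lora_cfg = getattr(model_config, "lora", None)
    if lora_cfg is not None and getattr(lora_cfg, "rank", 0):
        return lora_cfg.rank
    return getattr(model_config, "lora_rank", 0) or 0
```

Each trainer would then call this helper instead of re-implementing the lookup locally.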

HollowMan6 force-pushed the megatron_lora_ref branch 2 times, most recently from a59fe4a to 6a59c97 on December 25, 2025 18:41
HollowMan6 force-pushed the megatron_lora_ref branch 2 times, most recently from f231448 to e8ad2a0 on December 30, 2025 01:17
For `compute_ref_log_prob`, we can do that by temporarily
disabling the LoRA layers for the forward pass, since the
base weights are frozen and only the LoRA layers are trained.

This is already supported in FSDP LoRA.

Signed-off-by: Hollow Man <[email protected]>
