fix lora weight merge for vllm (for fixing issue #659) #668

Merged
hiyouga merged 4 commits into hiyouga:main from KellyGong:main on Mar 26, 2026
Conversation

@KellyGong (Contributor)

Fix issue #659.

  • Update the weight merge logic to handle the vLLM backend properly
  • Ensure merged weights are compatible with the format vLLM expects

@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves issue #659, where LoRA weights were not correctly merged for models intended for the vLLM inference engine. It introduces a refined weight merging process that loads the base model, applies the LoRA adapters, and merges them into a dense Hugging Face checkpoint, the format vLLM expects, so LoRA-trained models load and run correctly in a vLLM environment.
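As a rough illustration of that flow, here is a minimal sketch of merging a LoRA adapter into a dense checkpoint with transformers and peft. It is not the code from this PR: the paths are placeholders and the bfloat16 dtype is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "path/to/base_model"      # placeholder paths, not from the PR
adapter_path = "path/to/lora_adapter"
output_path = "path/to/merged_model"

# Load the base model, then attach the trained LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_path)

# Fold the low-rank deltas into the dense weights and drop the peft
# wrappers, leaving a plain Hugging Face model that vLLM can load.
model = model.merge_and_unload()
model.save_pretrained(output_path)

# Save the tokenizer alongside so the merged checkpoint is self-contained.
AutoTokenizer.from_pretrained(base_path).save_pretrained(output_path)
```

vLLM can then load output_path as an ordinary model directory, with no peft dependency at inference time.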

Highlights

  • LoRA Weight Merge Fix: Implemented a fix for LoRA weight merging specifically for vLLM backend compatibility, addressing issue #659.
  • vLLM Compatibility: Ensured that merged weights are properly formatted and compatible with vLLM requirements by loading into PeftModel and performing merge_and_unload().
  • New Utility Functions: Introduced functions for stripping FSDP/DDP prefixes from state dict keys, merging LoRA adapters into base models, and robustly loading pretrained configurations; see the sketch after this list.
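The prefix-stripping utility could look roughly like the sketch below. The helper name is hypothetical, but the prefixes are the standard ones: DDP prepends module. to every key, while FSDP injects _fsdp_wrapped_module. at each wrapped submodule.

```python
# Hypothetical helper; the PR's actual implementation may differ.
def strip_wrapper_prefixes(state_dict: dict) -> dict:
    """Normalize state dict keys saved from DDP/FSDP-wrapped models."""
    cleaned = {}
    for key, value in state_dict.items():
        # DDP adds a single leading "module." prefix.
        if key.startswith("module."):
            key = key[len("module."):]
        # FSDP inserts "_fsdp_wrapped_module." at every wrapped level,
        # so remove all occurrences, not just a leading one.
        key = key.replace("_fsdp_wrapped_module.", "")
        cleaned[key] = value
    return cleaned
```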

@gemini-code-assist (Bot) left a comment

Code Review

This pull request introduces a fix for merging LoRA weights, specifically targeting compatibility with the vLLM backend. The changes correctly identify LoRA checkpoints, merge adapter weights into the base model, and save the resulting dense model. The implementation is robust, handling different versions of the peft library and ensuring the model configuration and tokenizer are correctly processed. I've identified a minor opportunity for code simplification to improve maintainability by removing duplicated logic. Overall, this is a good improvement that addresses the issue described.
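On the peft version handling the review calls out, one defensive pattern is to feature-detect keyword arguments rather than pin a version, since older peft releases expose merge_and_unload() without the newer keywords. This is a sketch of that pattern, not necessarily the guard used in the diff:

```python
import inspect
from peft import PeftModel

def merge_adapter(peft_model: PeftModel):
    # Newer peft versions accept a `progressbar` keyword on
    # merge_and_unload(); older releases take no arguments.
    params = inspect.signature(peft_model.merge_and_unload).parameters
    if "progressbar" in params:
        return peft_model.merge_and_unload(progressbar=True)
    return peft_model.merge_and_unload()
```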

Review thread on scripts/model_merger.py (outdated)
hiyouga merged commit 3f527bd into hiyouga:main on Mar 26, 2026
1 check passed
