
qwen2.5 modeling support + conversion back to hf ckpt format #1107


Open: uralik wants to merge 8 commits into main
Conversation

@uralik (Contributor) commented on Apr 12, 2025

What does this PR do? Please describe:

  • Adds support for Qwen models that do not require tensor parallelism. All loading is done from HF safetensors, with the state dicts remapped to the fairseq2 (fs2) format (see the remapping sketch after this list).
  • Adds Hugging Face tokenizer support; the Qwen model uses an HF-based tokenizer.
  • Adds a Qwen checkpoint conversion command to save a checkpoint back into the HF model format.
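As a rough illustration of the remapping step, the sketch below renames Hugging Face Qwen2 state-dict keys into a fairseq2-style layout. The target key names (e.g. `decoder.layers.*`) and the shard filename are assumptions chosen for illustration and do not necessarily match the names used in this PR.

```python
# Minimal sketch: remap HF Qwen2 state-dict keys to a fairseq2-style layout.
# The target key names below are hypothetical; the actual mapping lives in
# the PR's converter code.
import re

from safetensors.torch import load_file


def remap_hf_to_fs2(hf_state_dict: dict) -> dict:
    # (HF key pattern, assumed fs2 key template) pairs.
    rules = [
        (r"^model\.embed_tokens\.weight$", "decoder_frontend.embed.weight"),
        (r"^model\.norm\.weight$", "decoder.layer_norm.weight"),
        (r"^lm_head\.weight$", "final_proj.weight"),
        (r"^model\.layers\.(\d+)\.self_attn\.(q|k|v)_proj\.(weight|bias)$",
         r"decoder.layers.\1.self_attn.\2_proj.\3"),
        (r"^model\.layers\.(\d+)\.self_attn\.o_proj\.weight$",
         r"decoder.layers.\1.self_attn.output_proj.weight"),
        (r"^model\.layers\.(\d+)\.mlp\.(gate|up|down)_proj\.weight$",
         r"decoder.layers.\1.ffn.\2_proj.weight"),
        (r"^model\.layers\.(\d+)\.(input|post_attention)_layernorm\.weight$",
         r"decoder.layers.\1.\2_layer_norm.weight"),
    ]
    out = {}
    for key, tensor in hf_state_dict.items():
        for pattern, template in rules:
            new_key, n = re.subn(pattern, template, key)
            if n:
                out[new_key] = tensor
                break
        else:
            raise KeyError(f"No remapping rule for key: {key}")
    return out


# Example: load one HF safetensors shard and remap its keys.
hf_sd = load_file("model-00001-of-00004.safetensors")  # placeholder shard path
fs2_sd = remap_hf_to_fs2(hf_sd)
```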

All transformers imports are guarded with try/except, since transformers is not a mandatory dependency (yet).
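A minimal sketch of that guarded-import pattern; the exact module paths, flags, and error messages in the PR may differ.

```python
# Guarded import: transformers is optional, so the failure is deferred until
# the HF-backed tokenizer code path is actually used.
try:
    from transformers import AutoTokenizer

    _has_transformers = True
except ImportError:
    AutoTokenizer = None

    _has_transformers = False


def load_hf_tokenizer(model_name_or_path: str):
    if not _has_transformers:
        raise RuntimeError(
            "transformers is required for the Qwen tokenizer; "
            "install it with `pip install transformers`."
        )
    return AutoTokenizer.from_pretrained(model_name_or_path)
```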

Confirmed that this works by SFT-training the 7B model, converting it back to HF format, and running it with vLLM.
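For the reverse direction and the vLLM check, a hedged sketch along these lines: it assumes a state dict whose keys have already been mapped back to HF names, and every path and model id below is a placeholder rather than the actual conversion command added by this PR.

```python
# Sketch: write a state dict that already uses HF key names back out as an
# HF checkpoint directory, then smoke-test the converted model with vLLM.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
from vllm import LLM, SamplingParams

base_id = "Qwen/Qwen2.5-7B"            # placeholder HF model id
out_dir = "converted-qwen2.5-7b-hf"    # placeholder output directory

# Build an HF Qwen2 model from the base config and load the converted weights.
hf_model = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(base_id))
hf_sd = torch.load("qwen_hf_state_dict.pt", map_location="cpu")  # placeholder path
hf_model.load_state_dict(hf_sd)

# Save weights + config + tokenizer so the directory is self-contained.
hf_model.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)

# Quick generation check with vLLM on the converted directory.
llm = LLM(model=out_dir)
print(llm.generate(["Hello"], SamplingParams(max_tokens=16)))
```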

@uralik requested a review from cbalioglu as a code owner on April 12, 2025
@facebook-github-bot added the CLA Signed label on April 12, 2025