Conversation

@kylesayrs
Collaborator

Purpose

Changes

  • Pass model.thinker to oneshot, since the top-level model does not implement a forward method (the thinker submodule is a PreTrainedModel that contains all of the parameters worth quantizing)
  • Patch fast_pos_embed_interpolate to support accelerate offloading
  • Patch the dataloader to squeeze image_grid_thw while leaving pixel_values and other inputs unsqueezed
  • Save an example audio generation
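A minimal sketch of the dataloader patch described above, assuming a calibration batch size of 1 and the Qwen processor's usual output keys (the actual collator in the example script may differ in details):

```python
import torch

def data_collator(batch):
    """Sketch of the custom collator: squeeze the extra batch dimension
    off image_grid_thw, but leave pixel_values and the other processor
    outputs as-is. Assumes calibration batch_size == 1."""
    assert len(batch) == 1
    sample = {key: torch.as_tensor(value) for key, value in batch[0].items()}
    # image_grid_thw arrives as (1, num_images, 3); the model expects (num_images, 3)
    sample["image_grid_thw"] = sample["image_grid_thw"].squeeze(0)
    return sample

# Per the first bullet above, the collator would then be passed to oneshot
# together with model.thinker (the top-level model implements no forward):
#   oneshot(model=model.thinker, data_collator=data_collator, ...)
```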

Testing

  • Ran example and produced sane outputs
    output.wav

Signed-off-by: Kyle Sayers <[email protected]>
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist
Contributor

Summary of Changes

Hello @kylesayrs, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates the Qwen3OmniMoeForConditionalGeneration model by providing a comprehensive example for its quantization using llmcompressor. It addresses specific technical requirements such as patching the position embedding interpolation for offloading compatibility and adjusting data collation for multimodal inputs, ultimately enabling efficient and functional use of this advanced model.

Highlights

  • Qwen3OmniMoeForConditionalGeneration Support: This pull request introduces support for the Qwen3OmniMoeForConditionalGeneration model, enabling its use within the system.
  • New Example Script: A new example script, qwen3_omni_example.py, has been added to demonstrate the quantization of the Qwen3OmniMoeForConditionalGeneration model using llmcompressor's oneshot functionality.
  • Position Embedding Patch: The fast_pos_embed_interpolate function has been patched to correctly handle scenarios where pos_embed.weight might be offloaded, ensuring compatibility with accelerate offloading strategies.
  • Custom Data Collator: A custom data_collator is implemented to properly squeeze the image_grid_thw tensor while leaving other inputs unsqueezed, which is crucial for correct data processing during calibration.
  • Audio Generation Example: The example now includes saving generated audio output to sample_output.wav, showcasing the model's multimodal capabilities.
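The position-embedding patch can be illustrated with a small, generic pattern (this shows the idea only, not the PR's actual fast_pos_embed_interpolate code, and the function name below is illustrative):

```python
import torch

def offload_safe_lookup(pos_embed: torch.nn.Embedding, idx: torch.Tensor) -> torch.Tensor:
    """Illustrative pattern behind the patch: when accelerate offloads
    pos_embed.weight, the weight may sit on a different device than the
    indices computed during the forward pass. Moving the indices to the
    weight's device before the lookup, and the result back afterwards,
    keeps the gather valid under offloading."""
    out = pos_embed(idx.to(pos_embed.weight.device))
    return out.to(idx.device)
```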
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a new example for Qwen3OmniMoeForConditionalGeneration, including a patch to support accelerate offloading. The example script demonstrates how to perform one-shot quantization with GPTQ and generate sample outputs. The changes are well-structured and the example is clear. My review includes a suggestion to improve the performance of the patch by using more efficient tensor operations, and a comment on improving the clarity of the example script's save directory naming.

Signed-off-by: Kyle Sayers <[email protected]>

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug]: get_input_embeddings not auto-handled for Qwen3OmniMoeForConditionalGeneration

2 participants