
Conversation

@mutichung (Contributor) commented Dec 11, 2025

Summary

This PR introduces a new script under modifiers/awq that converts AutoAWQ checkpoints into a compressed-tensors-compatible format. Resolves #2087.

Usage

  • Via CLI:

    python -m llmcompressor.modifiers.awq.convert_autoawq \
      --model-name-or-path /path/to/model \
      --output-dir /path/to/compressed/model \
      --quantization-format naive-quantized
  • Via Python:

    from llmcompressor.modifiers.awq.convert_autoawq import load_and_convert_from_autoawq
    
    awq_model_path = "/path/to/model"  # can also be model_id on huggingface hub
    model = load_and_convert_from_autoawq(awq_model_path)
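
As a quick sanity check after conversion, the returned model can be exercised like any other transformers causal LM. The snippet below is a minimal sketch, assuming a matching tokenizer is available at awq_model_path and that the model fits on a single device; it is not part of the PR.

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(awq_model_path)
    inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
    # Greedy-decode a few tokens to confirm the converted weights produce sensible text.
    outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))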

Known Issue

Asymmetric Support in llm-compressor & compressed-tensors

  • The GEMM version of AutoAWQ only supports asymmetric quantization [1] (see the sketch after this list).
    • An AssertionError is raised even when zero_point=False is set.
  • Support for zero-point decompression in PackedQuantizationCompressor is still a work in progress [2].
  • Update (2025/12/15): zero-point decompression was merged in [3] but reverted shortly afterwards [4].
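
For context, the sketch below illustrates the asymmetric scheme in question; the shapes and values are illustrative only and none of this code comes from the PR. AutoAWQ's GEMM path stores unsigned integers alongside a per-group scale and zero point, so dequantization takes the form w ≈ (q - zero_point) * scale, which is what the conversion script has to reproduce.

    import torch

    # Illustrative asymmetric INT4 quantization of a single weight group.
    bits = 4
    w = torch.randn(128)
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / (2**bits - 1)
    zero_point = torch.round(-w_min / scale).clamp(0, 2**bits - 1)
    q = torch.round(w / scale + zero_point).clamp(0, 2**bits - 1)
    w_dq = (q - zero_point) * scale  # reconstruction the converter must reproduce
    # A symmetric scheme (zero_point=False) would instead need q in
    # [-2**(bits - 1), 2**(bits - 1) - 1], which the GEMM kernels reject.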

Test Plan

  • Created tests comparing output logits between the AutoAWQ-dequantized floating-point model and the llmcompressor-compressed model using CompressedLinear.
    • The logits do not satisfy torch.testing.assert_close, potentially due to the GEMM kernel's internal precision (a rough sketch follows this list).
  • Ran and compared benchmark results between AutoAWQForCausalLM and vLLM.
    • Using a compressed-tensors build based on [3].
  • Created tests comparing benchmark results between AutoAWQ and llmcompressor checkpoints; results are tabulated below.
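
The logits comparison mentioned in the first bullet might look roughly like the sketch below; the model paths, prompt, and tolerances are placeholders rather than the PR's actual test code.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical comparison of the dequantized reference against the converted checkpoint.
    ref_model = AutoModelForCausalLM.from_pretrained("/path/to/awq-dequantized-fp-model")
    cmp_model = AutoModelForCausalLM.from_pretrained("/path/to/compressed/model")
    tokenizer = AutoTokenizer.from_pretrained("/path/to/compressed/model")

    inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
    with torch.no_grad():
        ref_logits = ref_model(**inputs).logits
        cmp_logits = cmp_model(**inputs).logits

    # Strict defaults fail (see the note above); looser tolerances may be needed in practice.
    torch.testing.assert_close(cmp_logits, ref_logits, atol=1e-1, rtol=1e-2)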

ruikangliu/DeepSeek-R1-Distill-Qwen-1.5B-quantized.awq-autoawq-w4g128

Format            Inference Backend  ARC-Easy  ARC-Challenge
AutoAWQ           hf                 0.6435    0.3584
naive-quantized   hf                 0.6431    0.3584
packed-quantized  hf                 0.6431    0.3584
packed-quantized  vllm               0.6427    0.3592

AMead10/Llama-3.2-3B-Instruct-AWQ

Format            Inference Backend  ARC-Easy  ARC-Challenge
AutoAWQ           hf                 0.7976    0.5017
naive-quantized   hf                 0.7971    0.5026
packed-quantized  hf                 0.7971    0.5026
packed-quantized  vllm               0.7976    0.5043

fbaldassarri/mistralai_Mistral-7B-Instruct-v0.3-autoawq-int4-gs128-asym

Format            Inference Backend  ARC-Easy  ARC-Challenge
AutoAWQ           hf                 0.8641    0.6280
naive-quantized   hf                 0.8645    0.6280
packed-quantized  hf                 0.8645    0.6280
packed-quantized  vllm               0.8649    0.6280

Future Work

  • Support other AutoAWQ versions, e.g., GEMV.
  • Set default quantization format to packed-quantized once asymmetric decompression is finalized.
  • Replace AutoModelForCausalLM with a more generalized autoclass.

Footnotes

  1. awq/modules/linear/gemm.py#L187

  2. [Feature] Support Zero-point Decompression #1704

  3. compressed-tensors@f9e7426

  4. compressed-tensors@cf5980d

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @mutichung, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a dedicated utility to bridge the gap between AutoAWQ-quantized models and the llmcompressor framework's compressed-tensors format. It provides a robust conversion pipeline, allowing users to take existing AutoAWQ checkpoints, dequantize them, and then re-compress them into a format that llmcompressor can natively understand and utilize, thereby expanding the interoperability of quantized models.

Highlights

  • AutoAWQ to compressed-tensors conversion: A new script is added to convert AutoAWQ checkpoints into the compressed-tensors-compatible format, enabling broader interoperability.
  • CLI and Python API: The conversion process can be initiated either through a command-line interface or programmatically using a dedicated Python function.
  • GEMM version support: The script specifically supports the GEMM version of AutoAWQ quantization, handling its unique dequantization and re-packing requirements.
  • Zero-point adjustment: The conversion correctly adjusts the zero-point representation to align AutoAWQ's [0, 2^bits - 1] range with compressed-tensors' [-2^(bits - 1), 2^(bits - 1) - 1] range for accurate quantization.
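
As a rough illustration of the zero-point adjustment in the last highlight (illustrative shapes and values, not code from the PR): shifting both the quantized values and the zero point by 2**(bits - 1) maps the unsigned range onto the signed one without changing the dequantized weights.

    import torch

    bits = 4
    offset = 2 ** (bits - 1)  # 8 for INT4
    q_unsigned = torch.randint(0, 2**bits, (4, 8))   # AutoAWQ convention: [0, 15]
    zp_unsigned = torch.randint(0, 2**bits, (4, 1))
    scale = torch.rand(4, 1)

    q_signed = q_unsigned - offset   # compressed-tensors convention: [-8, 7]
    zp_signed = zp_unsigned - offset

    # Dequantized values are identical under either convention.
    assert torch.equal((q_unsigned - zp_unsigned) * scale, (q_signed - zp_signed) * scale)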

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a new script to convert AutoAWQ checkpoints into a compressed-tensors-compatible format. The implementation covers loading model weights, dequantizing them according to the AutoAWQ GEMM version, and then re-packing them using ModelCompressor. The script also includes CLI and Python interfaces for conversion. Overall, the changes are well-structured and address the stated objective. However, there are a few areas related to security, correctness, and consistency that could be improved.

@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@dsikka dsikka added the awq (For any issue / PR related to AWQ support) label on Dec 11, 2025
mutichung and others added 7 commits December 12, 2025 09:24
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Muti Chung <[email protected]>
- Add usage example in module docstring.
- Modified what to show on document page.

Signed-off-by: Muti Chung <[email protected]>
Signed-off-by: Muti Chung <[email protected]>
@mutichung mutichung force-pushed the feature/convert-autoawq branch from 2f45719 to f997a80 on December 12, 2025 at 09:25
@mutichung (Contributor, Author) commented

/gemini review

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable script for converting AutoAWQ checkpoints to the compressed-tensors format. The implementation is well-structured, with clear separation of concerns and good use of existing libraries. However, I've identified a couple of potential issues in the dequantization logic that could lead to incorrect behavior, particularly concerning tensor shapes and the handling of quantization parameters. My review includes suggestions to address these points to ensure the conversion is robust and correct for a wider range of models. The accompanying tests are a great start for validation.

@mutichung mutichung marked this pull request as ready for review December 13, 2025 01:03
Signed-off-by: Muti Chung <[email protected]>