
[AWQ] Restructure AWQModifier as smoothing-only, decouple from Quanti…#2511

Open
colldata79 wants to merge 2 commits into vllm-project:main from colldata79:awq-smoothing-only

Conversation

@colldata79
Contributor

…zationMixin

Remove QuantizationMixin inheritance from AWQModifier so it becomes a pre-quantization transform (like SmoothQuant) that only applies smoothing scales. Final quantization is now handled by a downstream quantizer (QuantizationModifier, GPTQModifier) stacked after AWQ in the recipe.

Key changes:

  • Drop QuantizationMixin inheritance, keep quant config locally for grid search pseudo-quantization only
  • Add _temporary_quant_schemes context manager that snapshots/restores all quant-related module state (schemes, observers, scales, zero-points, forward overrides) with full exception safety
  • Decompose _compute_best_scale into pure helpers: _collect_activation_stats, _generate_scale_candidates, _evaluate_candidate, _select_best_scale, _apply_best_scales
  • Add recipe validation: errors on mismatched scheme config, warns on missing downstream quantizer or reversed ordering
  • Remove all quantization lifecycle calls from on_end (no more update_weight_zp_scale, update_weight_global_scale, end_calibration)
  • Update all examples and e2e recipes to stacked pattern [AWQModifier, QuantizationModifier]
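The decomposed grid-search helpers could fit together roughly as follows. This is a minimal NumPy sketch under assumed signatures and shapes, not the PR's actual implementation:

```python
import numpy as np

def collect_activation_stats(x):
    # Mean absolute activation per input channel (AWQ's channel-importance proxy).
    return np.abs(x).mean(axis=0)

def generate_scale_candidates(stats, n_grid=20):
    # Candidate per-channel scales s = stats**ratio for ratios in [0, 1);
    # ratio 0 yields the identity (all-ones) candidate.
    return [np.clip(stats, 1e-4, None) ** (i / n_grid) for i in range(n_grid)]

def pseudo_quantize(w, n_bits=4):
    # Symmetric round-to-nearest fake quantization, per output channel.
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.maximum(np.abs(w).max(axis=1, keepdims=True), 1e-8) / q_max
    return np.clip(np.round(w / scale), -q_max - 1, q_max) * scale

def evaluate_candidate(weight, x, s):
    # MSE between the fp output and the smoothed, pseudo-quantized output.
    w_q = pseudo_quantize(weight * s) / s
    return float(np.mean((x @ weight.T - x @ w_q.T) ** 2))

def select_best_scale(weight, x, candidates):
    errors = [evaluate_candidate(weight, x, s) for s in candidates]
    return candidates[int(np.argmin(errors))], errors

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 16))   # calibration activations
w = rng.normal(size=(32, 16))   # Linear weight, (out_features, in_features)
stats = collect_activation_stats(x)
best, errors = select_best_scale(w, x, generate_scale_candidates(stats))
```

Because the candidate at ratio 0 is the identity scale, the grid search can never do worse than plain pseudo-quantization.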

Closes #2327

SUMMARY:
"please provide a brief summary"

TEST PLAN:
"please outline how the changes were tested"

…zationMixin

Signed-off-by: colldata79 <colltrix@colltrix.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the AWQModifier to separate its smoothing functionality from the final quantization process. By decoupling AWQ from direct quantization, it now acts as a dedicated pre-quantization transform, enhancing modularity and composability within the compression pipeline. This change ensures that AWQ focuses solely on optimizing weights through smoothing, while a subsequent modifier handles the actual quantization, leading to a clearer and more robust architecture for applying quantization techniques.

Highlights

  • Decoupled AWQModifier: The AWQModifier has been refactored to act solely as a pre-quantization smoothing transform, removing its inheritance from QuantizationMixin. Final quantization is now handled by a separate, downstream quantizer like QuantizationModifier or GPTQModifier.
  • Temporary Quantization Schemes: A new _temporary_quant_schemes context manager was introduced to safely snapshot and restore all quantization-related module states (schemes, observers, scales, zero-points, forward overrides) during the AWQ grid search, ensuring no state leakage.
  • Refactored Grid Search Logic: The _compute_best_scale method has been decomposed into several pure helper functions: _collect_activation_stats, _generate_scale_candidates, _evaluate_candidate, _select_best_scale, and _apply_best_scales, improving modularity and readability.
  • Recipe Validation: Added validation logic to the AWQModifier to check for compatible downstream quantizers, warn about reversed modifier ordering, and error on mismatched quantization scheme configurations between AWQ and the subsequent quantizer.
  • Streamlined Lifecycle Hooks: Quantization lifecycle calls (e.g., update_weight_zp_scale, end_calibration) have been removed from on_end and on_finalize methods of AWQModifier, aligning with its new role as a smoothing-only modifier.
  • Updated Examples and Recipes: All AWQ examples and end-to-end recipes have been updated to reflect the new stacked pattern, where AWQModifier is followed by a QuantizationModifier.
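A snapshot/restore context manager in the spirit of _temporary_quant_schemes could be sketched as below. The attribute names here are hypothetical; the PR's version also covers observers and forward overrides:

```python
from contextlib import contextmanager

# Hypothetical attribute set; the real implementation snapshots schemes,
# observers, scales, zero-points, and forward overrides.
QUANT_ATTRS = ("quantization_scheme", "weight_scale", "weight_zero_point")

@contextmanager
def temporary_quant_state(modules):
    # Snapshot every quant-related attribute before the body mutates them.
    snapshot = {
        m: {a: getattr(m, a, None) for a in QUANT_ATTRS} for m in modules
    }
    try:
        yield
    finally:
        # Restore unconditionally, even if the body raised (exception safety).
        for m, attrs in snapshot.items():
            for a, v in attrs.items():
                if v is None:
                    if hasattr(m, a):
                        delattr(m, a)
                else:
                    setattr(m, a, v)

class FakeModule:
    pass

m = FakeModule()
m.quantization_scheme = "W4A16"
with temporary_quant_state([m]):
    m.quantization_scheme = "FP8_DYNAMIC"  # grid-search-time mutation
    m.weight_scale = 0.5                   # attribute added inside the body
# both mutations are rolled back on exit
```

The `finally` block is what gives the "full exception safety" the PR description claims: state is restored whether the grid search completes or raises.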


@mergify mergify bot added the documentation Improvements or additions to documentation label Mar 24, 2026
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request refactors the AWQModifier to decouple its smoothing functionality from the final quantization process, establishing it as a pure pre-quantization transform. Key changes include removing QuantizationMixin inheritance, introducing a context manager for temporary quantization scheme application during internal operations, and adding extensive validation for recipe compatibility with downstream quantizers. The on_end method is simplified to only handle smoothing, with actual quantization now delegated to a separate QuantizationModifier. Example recipes and new integration/unit tests have been updated to reflect this two-stage process. The review comments suggest improving the maintainability of the example recipes by using shared variables for common AWQModifier and QuantizationModifier parameters.

Comment on lines 54 to 59
    recipe = [
        AWQModifier(
            ignore=["lm_head"], scheme="FP8_BLOCK", targets=["Linear"], duo_scaling="both"
        ),
        QuantizationModifier(ignore=["lm_head"], scheme="FP8_BLOCK", targets=["Linear"]),
    ]
medium

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync.

_ignore = ["lm_head"]
_scheme = "FP8_BLOCK"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        ignore=_ignore, scheme=_scheme, targets=_targets, duo_scaling="both"
    ),
    QuantizationModifier(ignore=_ignore, scheme=_scheme, targets=_targets),
]

Comment on lines 54 to 59
    recipe = [
        AWQModifier(
            ignore=["lm_head"], scheme="FP8_DYNAMIC", targets=["Linear"], duo_scaling="both"
        ),
        QuantizationModifier(ignore=["lm_head"], scheme="FP8_DYNAMIC", targets=["Linear"]),
    ]
medium

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync.

_ignore = ["lm_head"]
_scheme = "FP8_DYNAMIC"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        ignore=_ignore, scheme=_scheme, targets=_targets, duo_scaling="both"
    ),
    QuantizationModifier(ignore=_ignore, scheme=_scheme, targets=_targets),
]

Comment on lines 54 to 59
    recipe = [
        AWQModifier(
            ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"], duo_scaling="both"
        ),
        QuantizationModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
    ]
medium

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync.

_ignore = ["lm_head"]
_scheme = "W4A16_ASYM"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        ignore=_ignore, scheme=_scheme, targets=_targets, duo_scaling="both"
    ),
    QuantizationModifier(ignore=_ignore, scheme=_scheme, targets=_targets),
]

Comment on lines 14 to 26
    recipe = [
        AWQModifier(
            duo_scaling=False,
            ignore=["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"],
            scheme="W4A16",
            targets=["Linear"],
        ),
        QuantizationModifier(
            ignore=["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"],
            scheme="W4A16",
            targets=["Linear"],
        ),
    ]
medium

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync. A similar approach is used in examples/awq/qwen3-vl-30b-a3b-Instruct-example.py.

Suggested change

    _ignore = ["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"]
    _scheme = "W4A16"
    _targets = ["Linear"]
    recipe = [
        AWQModifier(
            duo_scaling=False,
            ignore=_ignore,
            scheme=_scheme,
            targets=_targets,
        ),
        QuantizationModifier(
            ignore=_ignore,
            scheme=_scheme,
            targets=_targets,
        ),
    ]

Comment on lines 55 to 66
    recipe = [
        AWQModifier(
            ignore=["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"],
            scheme="W4A16",
            targets=["Linear"],
        ),
        QuantizationModifier(
            ignore=["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"],
            scheme="W4A16",
            targets=["Linear"],
        ),
    ]
medium

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync.

_ignore = ["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"]
_scheme = "W4A16"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        ignore=_ignore,
        scheme=_scheme,
        targets=_targets,
    ),
    QuantizationModifier(
        ignore=_ignore,
        scheme=_scheme,
        targets=_targets,
    ),
]

Comment on lines 44 to 56
    recipe = [
        AWQModifier(
            ignore=["lm_head"],
            scheme="W4AFP8",
            targets=["Linear"],
            duo_scaling=True,
        ),
        QuantizationModifier(
            ignore=["lm_head"],
            scheme="W4AFP8",
            targets=["Linear"],
        ),
    ]
medium

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync.

_ignore = ["lm_head"]
_scheme = "W4AFP8"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        ignore=_ignore,
        scheme=_scheme,
        targets=_targets,
        duo_scaling=True,
    ),
    QuantizationModifier(
        ignore=_ignore,
        scheme=_scheme,
        targets=_targets,
    ),
]

Address review feedback: extract _ignore, _scheme, _targets variables
so AWQModifier and QuantizationModifier share a single source of truth
for recipe configuration in all example files.

Signed-off-by: colldata79 <colltrix@colltrix.com>
@brian-dellabetta brian-dellabetta (Collaborator) left a comment

Hi @colldata79, thanks for taking a stab at this. Please see comments below. This is proving to be a trickier thing to implement; I will message you over Slack to discuss how to proceed.

-        ignore=["lm_head"], scheme="FP8_BLOCK", targets=["Linear"], duo_scaling="both"
-    ),
+    AWQModifier(ignore=_ignore, scheme=_scheme, targets=_targets, duo_scaling="both"),
+    QuantizationModifier(ignore=_ignore, scheme=_scheme, targets=_targets),
Collaborator

Rather than requiring the user to set this explicitly, and to retain backward compatibility, I propose we append this when the recipe is parsed: if a user provides AWQ without a follow-on modifier that quantizes, we should append one with the same ignore, scheme, targets, and config_groups.
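The proposed recipe post-processing might look something like the sketch below, using illustrative stand-in classes rather than the real llm-compressor modifiers:

```python
from dataclasses import dataclass, field

# Illustrative stand-ins; the real classes live in llm-compressor.
@dataclass
class AWQModifier:
    ignore: list = field(default_factory=list)
    scheme: str = "W4A16"
    targets: list = field(default_factory=lambda: ["Linear"])

@dataclass
class QuantizationModifier:
    ignore: list = field(default_factory=list)
    scheme: str = "W4A16"
    targets: list = field(default_factory=lambda: ["Linear"])

def ensure_downstream_quantizer(recipe):
    # If AWQ appears with no follow-on quantizing modifier, append one
    # that mirrors AWQ's ignore/scheme/targets settings.
    awq = next((m for m in recipe if isinstance(m, AWQModifier)), None)
    has_quantizer = any(isinstance(m, QuantizationModifier) for m in recipe)
    if awq is not None and not has_quantizer:
        recipe.append(
            QuantizationModifier(
                ignore=awq.ignore, scheme=awq.scheme, targets=awq.targets
            )
        )
    return recipe
```

Because the append only fires when no quantizer is present, existing stacked recipes and AWQ-only recipes both keep working, and the call is idempotent.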

class AWQModifier(Modifier, QuantizationMixin):

@dataclass
class AWQSearchResult:
Collaborator

There's quite a bit in here that is beyond the scope of this PR. Please prune; there are already a lot of changes in this file.


)
# List to store error metrics for each layer
_error_metrics: list[dict] = PrivateAttr(default_factory=list)
_fp16_baseline_cache: dict[Module, IntermediatesCache] = PrivateAttr(
Collaborator

What is this? I don't see it used anywhere.

# ------------------------------------------------------------------ #

@contextmanager
def _temporary_quant_schemes(self, model: Module, with_observers: bool = False):
Collaborator

Rather than maintaining this, we are better off applying the quantization config on start and removing it on end, as I show in my PR here.

# Recipe validation #
# ------------------------------------------------------------------ #

def _validate_recipe(self, state: State):
Collaborator

Recipe validation would need to occur higher up in scope, e.g. during oneshot.
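A oneshot-scope validation pass along those lines could be sketched as follows; the function name and messages are hypothetical, and the stand-in classes exist only for demonstration:

```python
import warnings
from types import SimpleNamespace

def validate_awq_recipe(modifiers):
    # Hypothetical check: error on mismatched scheme configs, warn on a
    # missing downstream quantizer or reversed modifier ordering.
    names = [type(m).__name__ for m in modifiers]
    if "AWQModifier" not in names:
        return
    awq_idx = names.index("AWQModifier")
    quant_idx = next(
        (i for i, n in enumerate(names)
         if n in ("QuantizationModifier", "GPTQModifier")),
        None,
    )
    if quant_idx is None:
        warnings.warn("AWQModifier has no downstream quantizer in the recipe")
        return
    if quant_idx < awq_idx:
        warnings.warn("quantizer precedes AWQModifier; smoothing is wasted")
    if getattr(modifiers[awq_idx], "scheme", None) != getattr(
        modifiers[quant_idx], "scheme", None
    ):
        raise ValueError("AWQ and downstream quantizer schemes do not match")

# Stand-in modifier classes for demonstration only.
AWQModifier = type("AWQModifier", (SimpleNamespace,), {})
QuantizationModifier = type("QuantizationModifier", (SimpleNamespace,), {})

validate_awq_recipe(
    [AWQModifier(scheme="W4A16"), QuantizationModifier(scheme="W4A16")]
)  # a well-formed stacked recipe passes silently
```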

Comment on lines -479 to -480
# parent kwargs needed for future forward passes
# same parent may appear multiple times in resolved mappings
Collaborator

Why remove these comments?

Comment on lines -918 to -919
To minimize memory requirements, layers are reduced to a running total
of sums and counts when calculating mean
Collaborator

Why are the lines here and below removed?

and self.scheme is None
)

def _resolve_quantization_config(self) -> QuantizationConfig:
Collaborator

@brian-dellabetta brian-dellabetta Mar 24, 2026

Having to copy this over suggests we need another abstraction that can provide this functionality to both the AWQModifier and QuantizationMixin classes, which can subclass it.
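One possible shape for that shared abstraction is a small base class owning config resolution, which both modifiers could inherit. All names and the resolution logic here are illustrative stand-ins, not the library's API:

```python
class QuantizationConfigResolver:
    """Hypothetical shared base: resolves a quantization config from the
    scheme/targets/ignore fields that both modifier flavors carry."""

    def _resolve_quantization_config(self):
        # Minimal stand-in for the real resolution logic.
        return {
            "targets": getattr(self, "targets", ["Linear"]),
            "ignore": getattr(self, "ignore", []),
            "scheme": getattr(self, "scheme", None),
        }

class AWQModifier(QuantizationConfigResolver):
    # Demo subclass; the real AWQModifier is a pydantic model.
    def __init__(self, scheme, targets, ignore):
        self.scheme, self.targets, self.ignore = scheme, targets, ignore

cfg = AWQModifier("W4A16", ["Linear"], ["lm_head"])._resolve_quantization_config()
```

With this shape, QuantizationMixin would delegate to the same base instead of each class carrying its own copy of the resolution code.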

@brian-dellabetta brian-dellabetta self-assigned this Mar 24, 2026
@brian-dellabetta brian-dellabetta mentioned this pull request Mar 26, 2026
4 tasks
@mergify
Contributor

mergify bot commented Mar 27, 2026

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @colldata79.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Mar 27, 2026

Labels

documentation — Improvements or additions to documentation
needs-rebase

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Restructure and simplify the AWQModifier to be similar to SmoothQuant

2 participants