[AWQ] Restructure AWQModifier as smoothing-only, decouple from QuantizationMixin #2511
colldata79 wants to merge 2 commits into vllm-project:main
Conversation
Remove QuantizationMixin inheritance from AWQModifier so it becomes a pre-quantization transform (like SmoothQuant) that only applies smoothing scales. Final quantization is now handled by a downstream quantizer (QuantizationModifier, GPTQModifier) stacked after AWQ in the recipe.

Key changes:
- Drop QuantizationMixin inheritance; keep the quant config locally for grid-search pseudo-quantization only
- Add a _temporary_quant_schemes context manager that snapshots/restores all quant-related module state (schemes, observers, scales, zero-points, forward overrides) with full exception safety
- Decompose _compute_best_scale into pure helpers: _collect_activation_stats, _generate_scale_candidates, _evaluate_candidate, _select_best_scale, _apply_best_scales
- Add recipe validation: errors on a mismatched scheme config, warns on a missing downstream quantizer or reversed ordering
- Remove all quantization lifecycle calls from on_end (no more update_weight_zp_scale, update_weight_global_scale, end_calibration)
- Update all examples and e2e recipes to the stacked pattern [AWQModifier, QuantizationModifier]

Closes vllm-project#2327

Signed-off-by: colldata79 <colltrix@colltrix.com>
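The snapshot/restore semantics described for `_temporary_quant_schemes` can be sketched with a generic context manager. This is a minimal illustration only: `temporary_attrs` is a hypothetical name, and the real modifier tracks observers, scales, zero-points, and forward overrides rather than arbitrary attributes.

```python
from contextlib import contextmanager

@contextmanager
def temporary_attrs(module, **overrides):
    """Snapshot the named attributes, apply the overrides, and restore
    the originals even if the body raises (illustrative helper only)."""
    snapshot = {name: getattr(module, name) for name in overrides}
    try:
        for name, value in overrides.items():
            setattr(module, name, value)
        yield module
    finally:
        # runs on both normal exit and exception, giving the
        # "full exception safety" described above
        for name, value in snapshot.items():
            setattr(module, name, value)
```

The `finally` clause is what makes the grid search safe: an error mid-evaluation cannot leave a module with a stale temporary scheme attached.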
Summary of Changes

This pull request refactors the AWQModifier to separate its smoothing functionality from the final quantization process. By decoupling AWQ from direct quantization, it now acts as a dedicated pre-quantization transform, improving modularity and composability within the compression pipeline. AWQ now focuses solely on optimizing weights through smoothing, while a subsequent modifier handles the actual quantization, leading to a clearer and more robust architecture for applying quantization techniques.
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Code Review
This pull request refactors the AWQModifier to decouple its smoothing functionality from the final quantization process, establishing it as a pure pre-quantization transform. Key changes include removing QuantizationMixin inheritance, introducing a context manager for temporary quantization scheme application during internal operations, and adding extensive validation for recipe compatibility with downstream quantizers. The on_end method is simplified to only handle smoothing, with actual quantization now delegated to a separate QuantizationModifier. Example recipes and new integration/unit tests have been updated to reflect this two-stage process. The review comments suggest improving the maintainability of the example recipes by using shared variables for common AWQModifier and QuantizationModifier parameters.
```python
recipe = [
    AWQModifier(
        ignore=["lm_head"], scheme="FP8_BLOCK", targets=["Linear"], duo_scaling="both"
    ),
    QuantizationModifier(ignore=["lm_head"], scheme="FP8_BLOCK", targets=["Linear"]),
]
```

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync.

```python
_ignore = ["lm_head"]
_scheme = "FP8_BLOCK"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        ignore=_ignore, scheme=_scheme, targets=_targets, duo_scaling="both"
    ),
    QuantizationModifier(ignore=_ignore, scheme=_scheme, targets=_targets),
]
```

```python
recipe = [
    AWQModifier(
        ignore=["lm_head"], scheme="FP8_DYNAMIC", targets=["Linear"], duo_scaling="both"
    ),
    QuantizationModifier(ignore=["lm_head"], scheme="FP8_DYNAMIC", targets=["Linear"]),
]
```

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync.

```python
_ignore = ["lm_head"]
_scheme = "FP8_DYNAMIC"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        ignore=_ignore, scheme=_scheme, targets=_targets, duo_scaling="both"
    ),
    QuantizationModifier(ignore=_ignore, scheme=_scheme, targets=_targets),
]
```

```python
recipe = [
    AWQModifier(
        ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"], duo_scaling="both"
    ),
    QuantizationModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
]
```

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync.

```python
_ignore = ["lm_head"]
_scheme = "W4A16_ASYM"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        ignore=_ignore, scheme=_scheme, targets=_targets, duo_scaling="both"
    ),
    QuantizationModifier(ignore=_ignore, scheme=_scheme, targets=_targets),
]
```

```python
recipe = [
    AWQModifier(
        duo_scaling=False,
        ignore=["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"],
        scheme="W4A16",
        targets=["Linear"],
    ),
    QuantizationModifier(
        ignore=["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"],
        scheme="W4A16",
        targets=["Linear"],
    ),
]
```

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync. A similar approach is used in examples/awq/qwen3-vl-30b-a3b-Instruct-example.py.

Suggested change:

```python
_ignore = ["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"]
_scheme = "W4A16"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        duo_scaling=False,
        ignore=_ignore,
        scheme=_scheme,
        targets=_targets,
    ),
    QuantizationModifier(
        ignore=_ignore,
        scheme=_scheme,
        targets=_targets,
    ),
]
```

```python
recipe = [
    AWQModifier(
        ignore=["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"],
        scheme="W4A16",
        targets=["Linear"],
    ),
    QuantizationModifier(
        ignore=["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"],
        scheme="W4A16",
        targets=["Linear"],
    ),
]
```

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync.

```python
_ignore = ["lm_head", "re:.*mlp.gate$", "re:.*mlp.shared_expert_gate$"]
_scheme = "W4A16"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        ignore=_ignore,
        scheme=_scheme,
        targets=_targets,
    ),
    QuantizationModifier(
        ignore=_ignore,
        scheme=_scheme,
        targets=_targets,
    ),
]
```

```python
recipe = [
    AWQModifier(
        ignore=["lm_head"],
        scheme="W4AFP8",
        targets=["Linear"],
        duo_scaling=True,
    ),
    QuantizationModifier(
        ignore=["lm_head"],
        scheme="W4AFP8",
        targets=["Linear"],
    ),
]
```

To improve maintainability and avoid duplicating parameters between AWQModifier and QuantizationModifier, consider defining the shared arguments in variables. This makes it easier to keep them in sync.

```python
_ignore = ["lm_head"]
_scheme = "W4AFP8"
_targets = ["Linear"]
recipe = [
    AWQModifier(
        ignore=_ignore,
        scheme=_scheme,
        targets=_targets,
        duo_scaling=True,
    ),
    QuantizationModifier(
        ignore=_ignore,
        scheme=_scheme,
        targets=_targets,
    ),
]
```

Address review feedback: extract _ignore, _scheme, _targets variables so AWQModifier and QuantizationModifier share a single source of truth for recipe configuration in all example files.

Signed-off-by: colldata79 <colltrix@colltrix.com>
brian-dellabetta left a comment

Hi @colldata79, thanks for taking a stab at this. Please see comments below. This is proving to be a trickier thing to implement; I will message you over Slack to discuss how to proceed.
```python
AWQModifier(ignore=_ignore, scheme=_scheme, targets=_targets, duo_scaling="both"),
QuantizationModifier(ignore=_ignore, scheme=_scheme, targets=_targets),
```

Rather than requiring the user to set this explicitly, and to retain backward compatibility, I propose we append this when the recipe is parsed: if a user provides AWQ without a follow-on modifier that quantizes, we should append one with the same ignore, scheme, targets, and config_groups.
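The appending behavior proposed here could look roughly like the following. This is a sketch under stated assumptions: the dataclasses are stand-ins for the real modifiers, and `ensure_downstream_quantizer` is a hypothetical hook in recipe parsing, not existing API.

```python
from dataclasses import dataclass

@dataclass
class AWQModifier:           # stand-in for the real modifier
    ignore: list
    scheme: str
    targets: list

@dataclass
class QuantizationModifier:  # stand-in for the real modifier
    ignore: list
    scheme: str
    targets: list

def ensure_downstream_quantizer(recipe):
    """If the recipe contains an AWQModifier but no follow-on quantizer,
    append a QuantizationModifier mirroring the AWQ config."""
    out = list(recipe)
    has_quantizer = any(isinstance(m, QuantizationModifier) for m in recipe)
    for mod in recipe:
        if isinstance(mod, AWQModifier) and not has_quantizer:
            out.append(QuantizationModifier(
                ignore=mod.ignore, scheme=mod.scheme, targets=mod.targets))
            has_quantizer = True
    return out
```

With this, an old single-modifier recipe `[AWQModifier(...)]` would transparently become `[AWQModifier(...), QuantizationModifier(...)]`, preserving backward compatibility.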
```python
class AWQModifier(Modifier, QuantizationMixin):

@dataclass
class AWQSearchResult:
```
```python
)
# List to store error metrics for each layer
_error_metrics: list[dict] = PrivateAttr(default_factory=list)
_fp16_baseline_cache: dict[Module, IntermediatesCache] = PrivateAttr(
```

What is this? I don't see it used anywhere.
```python
# ------------------------------------------------------------------ #

@contextmanager
def _temporary_quant_schemes(self, model: Module, with_observers: bool = False):
```

Rather than maintaining this, we are better off applying the quantization config on start and removing it on end, as I show in my PR here.
```python
# Recipe validation #
# ------------------------------------------------------------------ #

def _validate_recipe(self, state: State):
```

Recipe validation would need to occur higher up in scope, like during oneshot.
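Lifting validation to oneshot scope might look like this. Illustrative only: the stub classes stand in for the real modifiers, and `validate_recipe` is a hypothetical driver-level check rather than existing llm-compressor API.

```python
class AWQModifier: ...            # stand-in
class QuantizationModifier: ...   # stand-in
class GPTQModifier: ...           # stand-in

QUANTIZERS = (QuantizationModifier, GPTQModifier)

def validate_recipe(recipe):
    """Return warnings about AWQ/quantizer ordering, computed by the
    oneshot driver over the whole modifier list before anything runs."""
    awq = [i for i, m in enumerate(recipe) if isinstance(m, AWQModifier)]
    quant = [i for i, m in enumerate(recipe) if isinstance(m, QUANTIZERS)]
    warnings = []
    if awq and not quant:
        warnings.append("AWQModifier has no downstream quantizer")
    elif awq and quant and min(quant) < min(awq):
        warnings.append("a quantizer runs before AWQModifier")
    return warnings
```

Checking at this level catches ordering mistakes that a single modifier cannot see from inside its own lifecycle callbacks.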
```python
# parent kwargs needed for future forward passes
# same parent may appear multiple times in resolved mappings
```

Why remove these comments?
```python
To minimize memory requirements, layers are reduced to a running total
of sums and counts when calculating mean
```

Why are the lines here and below removed?
```python
and self.scheme is None
)

def _resolve_quantization_config(self) -> QuantizationConfig:
```

Having to copy this over means to me that we need another abstraction that can provide this functionality to both the AWQModifier and QuantizationMixin classes, which can subclass it.
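The shared abstraction suggested here could be a small base class that both AWQModifier and QuantizationMixin subclass. This is a sketch with hypothetical names; the real `_resolve_quantization_config` builds a compressed-tensors QuantizationConfig rather than the plain dict used below.

```python
class QuantizationConfigResolver:
    """Single owner of recipe-args-to-config resolution."""

    def __init__(self, scheme, targets, ignore):
        self.scheme = scheme
        self.targets = targets
        self.ignore = ignore

    def resolve_quantization_config(self):
        # one place to translate recipe arguments into a config,
        # so no subclass has to copy this logic
        return {
            "config_groups": {"group_0": {"scheme": self.scheme,
                                          "targets": self.targets}},
            "ignore": self.ignore,
        }

class QuantizationMixin(QuantizationConfigResolver):   # stand-in
    pass

class AWQModifier(QuantizationConfigResolver):         # stand-in
    pass
```

Both subclasses then resolve identical configs from identical arguments, which is also what the recipe validation relies on when comparing the AWQ scheme against the downstream quantizer's.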
This pull request has merge conflicts that must be resolved before it can be merged.