
perf: make MSE observer compatible with torch.compile#2384

Open
Bias92 wants to merge 1 commit intovllm-project:mainfrom
Bias92:torch-compile-observers

Conversation


@Bias92 Bias92 commented Feb 18, 2026

Make the MSE observer inner loop compatible with torch.compile by extracting _compute_candidate_error as a standalone function compiled with torch.compile(dynamic=True). Early stopping is preserved in the outer loop.
The compile flag is exposed as a oneshot argument (enable_observer_compile).
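The split described here, an eager outer loop with a compiled inner error function, can be sketched roughly as follows. This is a simplified illustration, not the PR's code: the fake-quantize body, the candidate grid, and the grid_search wrapper are all placeholder assumptions; only the overall structure (a standalone _compute_candidate_error wrapped once with torch.compile(dynamic=True), early stopping kept in eager Python) reflects the description above.

```python
import torch

def _compute_candidate_error(x, min_val, max_val):
    # Hypothetical stand-in for the extracted inner step: fake-quantize x
    # with the candidate [min_val, max_val] range and return the MSE.
    scale = (max_val - min_val).clamp(min=1e-8) / 255.0
    zero_point = torch.round(-min_val / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, 0, 255)
    x_dq = (q - zero_point) * scale
    return ((x - x_dq) ** 2).mean()

# Wrapped once at module level; dynamic=True traces with symbolic shapes so
# calibration tensors of different sizes reuse one compiled artifact.
_compiled_error = torch.compile(_compute_candidate_error, dynamic=True)

def grid_search(x, steps=100, patience=5, use_compile=False):
    # Outer loop stays in eager Python, so data-dependent early stopping
    # never has to be traced by the compiler.
    fn = _compiled_error if use_compile else _compute_candidate_error
    abs_max = x.abs().max()
    best_err, best_lo, best_hi = float("inf"), None, None
    no_improve = 0
    for i in range(steps):
        shrink = 1.0 - i / steps
        lo, hi = -abs_max * shrink, abs_max * shrink
        err = fn(x, lo, hi).item()
        if err < best_err:
            best_err, best_lo, best_hi, no_improve = err, lo, hi, 0
        else:
            no_improve += 1
            if no_improve >= patience:  # early stopping in the eager outer loop
                break
    return best_lo, best_hi
```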

e2e benchmark (TinyLlama-1.1B, INT8 W8A8, MSE observer, 64 cal samples, RTX 4060 Ti):

  • Eager: 4.9s, 4265 MB
  • Compiled warm: 3.9s, 4199 MB
  • Speedup: 1.26x

Requires: vllm-project/compressed-tensors#627
Related: pytorch/pytorch#177131
Partial fix for #1485

@gemini-code-assist
Contributor

Summary of Changes

Hello @Bias92, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the performance of the Mean Squared Error (MSE) observer by making it fully compatible with torch.compile(fullgraph=True). The changes involve refactoring the _grid_search_mse function to remove operations that cause graph breaks, specifically replacing dynamic attribute patching with pre-configured arguments and converting conditional logic to torch.where statements. This optimization results in substantial speed improvements for MSE quantization parameter calculation.

Highlights

  • Torch.compile Compatibility: Enabled torch.compile(fullgraph=True) compatibility for the MSE observer by removing graph-breaking operations.
  • Performance Improvement: Achieved a significant 39.5x speedup for the MSE observer on CPU when compiled.
  • Graph Break Removal: Eliminated the patch_attr context manager by pre-creating token_args with strategy=TOKEN.
  • Control Flow Optimization: Replaced data-dependent control flow (boolean indexing with torch.any and early stopping) with torch.where for compile-friendly execution.
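The control-flow conversion named in the last bullet follows a standard pattern for torch.compile(fullgraph=True): a data-dependent Python branch is replaced by an elementwise select. A minimal before/after sketch with hypothetical names (not the actual mse.py code):

```python
import torch

def update_best_eager(err, best_err, cand_min, cand_max, best_min, best_max):
    # Data-dependent branch: torch.any forces a graph break under
    # torch.compile(fullgraph=True) because tracing must inspect tensor values.
    improved = err < best_err
    if torch.any(improved):
        best_err[improved] = err[improved]
        best_min[improved] = cand_min[improved]
        best_max[improved] = cand_max[improved]
    return best_err, best_min, best_max

def update_best_compiled(err, best_err, cand_min, cand_max, best_min, best_max):
    # Branch-free equivalent: torch.where selects per element, so the whole
    # update is a single traceable tensor expression.
    improved = err < best_err
    return (
        torch.where(improved, err, best_err),
        torch.where(improved, cand_min, best_min),
        torch.where(improved, cand_max, best_max),
    )
```

Both versions compute the same result; the second contains no Python branch on tensor data, so tracing never has to specialize on the values in improved.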


Changelog
  • src/llmcompressor/observers/mse.py
    • Removed the import of patch_attr from compressed_tensors.utils.
    • Initialized _token_args in the __init__ methods of MemorylessMSEObserver and MovingAverageMSEObserver to pre-configure QuantizationStrategy.TOKEN.
    • Modified the _grid_search_mse function signature to accept token_args and removed the patience parameter.
    • Updated calls to _grid_search_mse in get_min_max, get_global_min_max, get_current_min_max, and get_current_global_min_max methods to pass the new token_args and omit patience.
    • Replaced the with patch_attr(...) block in _grid_search_mse with a direct call to fake_quantize using token_args.
    • Converted the conditional logic for updating best_error, best_min_val, and best_max_val from if torch.any(tmp): ... else: ... to torch.where statements.
    • Removed the no_improve_count variable and the early stopping mechanism based on patience.
    • Updated the docstring for _grid_search_mse to reflect the torch.compile compatibility and changes in parameters.
Activity
  • Identified and removed graph-breaking operations in the MSE observer.
  • Implemented changes to ensure torch.compile(fullgraph=True) compatibility.
  • Conducted benchmarks demonstrating a 39.5x speedup for the MSE observer on CPU.
  • Verified that all existing observer tests pass.
  • Noted that the patience parameter is now unused but retained for backward compatibility.
  • Requested reviewers to perform additional CUDA benchmarks.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command              Description
Code Review           /gemini review       Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary      Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist  Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help         Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request significantly improves the performance of the MSE observer by making it compatible with torch.compile. The changes, which include removing the patch_attr context manager and replacing data-dependent control flow with torch.where, are well-executed and result in a remarkable 39x speedup. My review focuses on ensuring the documentation is updated to reflect these changes. Specifically, the docstrings for MemorylessMSEObserver and MovingAverageMSEObserver should be updated to indicate that the patience parameter is no longer used, as early stopping has been removed.

@Bias92 Bias92 force-pushed the torch-compile-observers branch from 7099621 to 3c2ac54 Compare February 18, 2026 12:41
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

@HDCharles
Collaborator

HDCharles commented Feb 18, 2026

Thanks for your contribution!

comments

At a high level, I'm not sure we can land with the strategy of removing early stopping. In my tests I remember early stopping letting us skip ~90% of the search space, so in practice I think this will result in a ~10x slowdown for non-compiled performance. Given that, your numbers here are a bit off because you're comparing PR to PR, whereas the number that would actually be useful is main vs. PR: if my experience is correct, you've effectively slowed your baseline by 10x, so the 35x speedup is really more like 3.5x over what it was before this PR. That's still great, but it ignores compile overhead, so we'll need some e2e testing to make sure this is actually a positive change.

I'm also a bit unsure about the end game for this improvement, since it just makes the observer compilable but doesn't actually compile it. We'd also want this to be something a user could disable if they run into trouble.

A good comparison might be the compile optimizations for GPTQ PR, as far as how they implement the compilation and the flag.

Also, what benchmark are you running? Is this an actual example with real data, or a toy example? I'm not sure what Benchmark (CPU, inductor, shape=(1,1,4096)) means in practice or how I could run it myself.


next steps

I think we most likely want both the compiled and non-compiled paths to be reasonable, and by removing early stopping we're basically destroying the non-compiled path. Given that, I think there are 4 strategies we should look at for how compilation could work here:

A - bring back early stopping, compile the inner loop

Walk back the removal of early stopping. Instead, extract a compilable inner loop from the code and exclude the data-dependent part from compilation.

B - bring back early stopping, hide it behind a flag so the whole thing can be compiled

Do early stopping if it's not compiled, but skip early stopping if it is. I think you can leave the data-dependent control flow but hide it behind a flag; if I remember correctly, compile will not see the data-dependent flow unless the flag changes. May require some finagling to keep compile from tracing the data-dependent control flow.

C - chunked early stopping

We can partially keep early stopping: chunk the grid search into a few chunks, run those, and only do a data-dependent exit after calculating the MSE for the whole chunk. This has the potential to actually be the fastest in compiled performance, since it gets you early stopping while still compiling a larger chunk in one go.

D - no early stopping, extensive testing

Leave as is with no early stopping, but do a ton of testing to make sure it's bulletproof, so we won't have to disable compilation by default because we get a bunch of issues where GPTQ isn't working and then get really terrible performance.

I would probably do A by default but could see doing B or C depending on benchmark comparisons between them.
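For reference, strategy C could be sketched along these lines. Everything here is hypothetical (the names, the caller-supplied error_fn, the shrink-factor grid); the point is only that the data-dependent exit runs in eager Python at chunk boundaries, while each chunk's error computations could sit inside one compiled region.

```python
import torch

def chunked_grid_search(x, error_fn, num_steps=100, chunk_size=20, patience=1):
    # Strategy C sketch: evaluate candidates in chunks (error_fn could itself
    # be torch.compile'd), then decide whether to exit only between chunks.
    abs_max = x.abs().max()
    best_err = torch.tensor(float("inf"))
    best_shrink = 1.0
    stale_chunks = 0
    for start in range(0, num_steps, chunk_size):
        stop = min(start + chunk_size, num_steps)
        shrinks = [1.0 - i / num_steps for i in range(start, stop)]
        # Errors for the whole chunk; no data-dependent control flow here.
        errs = torch.stack([error_fn(x, -abs_max * s, abs_max * s) for s in shrinks])
        chunk_best, idx = errs.min(dim=0)
        if chunk_best < best_err:
            best_err, best_shrink, stale_chunks = chunk_best, shrinks[int(idx)], 0
        else:
            stale_chunks += 1
            if stale_chunks >= patience:
                break  # early exit happens only at chunk boundaries
    return best_shrink, best_err
```

With chunk_size equal to num_steps this degenerates to strategy D (no early exit); with chunk_size = 1 it is ordinary per-step early stopping.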

These steps will be required regardless

  1. Add functionality to actually compile the optimized function.
  2. Add an enable_observer_compilation flag, either as a oneshot arg or on the modifiers (ideally enabled by default).
  3. Get some benchmarks for the change in e2e speed for one of our examples using the MSE observer, so we can see what happens when compiler overhead is taken into account.
  4. Edit the existing observer tests to run with and without compile.

@mergify
Contributor

mergify bot commented Feb 18, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev optional install to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

Collaborator

@HDCharles HDCharles left a comment


see comment

@Bias92
Author

Bias92 commented Feb 18, 2026

Thanks for the detailed feedback! I agree that preserving the non-compiled path performance is important. I'll look into the GPTQ compile PR you referenced and implement strategy A (bring back early stopping, compile the inner loop) as a starting point. I'll also add the enable_observer_compilation flag and run e2e benchmarks with real data. Will update the PR soon.

@kylesayrs
Collaborator

Hi @Bias92, could you please post your original baseline performance metrics, and post the script you're using to benchmark? I don't fully understand where your speedups are coming from, as you do not actually add a torch.compile decorator anywhere?

@Bias92
Author

Bias92 commented Feb 19, 2026

Hi @kylesayrs, good point — I'll share the benchmark script and original baseline metrics. The speedups come from externally wrapping the function with torch.compile(fullgraph=True), which wasn't included in the PR itself. As @HDCharles suggested, I'll add proper compilation functionality with an enable_observer_compilation flag in the next update.

@mergify mergify bot removed the quality-failed label Feb 19, 2026
@Bias92
Author

Bias92 commented Feb 19, 2026

Updated the implementation following the GPTQ PR #2320 pattern:

  • Restored original _grid_search_mse with early stopping + patch_attr (non-compiled path preserved)
  • Added _grid_search_mse_compiled as a separate torch.compile-compatible path
  • Added enable_torch_compile flag via observer_kwargs (default False)
  • Added _call_grid_search helper to reduce code duplication across observer classes

Non-compiled path performance is now identical to the original baseline. Will work on e2e benchmarks and lint fixes next.


@Bias92
Author

Bias92 commented Feb 19, 2026

E2E Benchmark Results (RTX 4060 Ti, CUDA)

TinyLlama-1.1B, FP8 W8A8, MSE observer, 64 calibration samples:

Metric              Baseline   Compiled   Delta
Time (s)            26.5       24.4       -7.8%
Peak memory (MB)    2200       2200       +0.0%
Quantized layers    154        154        -
Scales all_close    -          -          True
Max abs diff        -          -          0.00e+00
Matching layers     -          -          154/154

Calibration-only speedup: ~1.15x (5.0s → 4.35s). E2E speedup is 1.09x because model loading + tokenization (~21s) dominates total time. Larger models where calibration is a bigger fraction of total time should see greater benefit.

Key findings:

  • Zero memory overhead
  • Numerically identical weight scales (154/154 layers match exactly)
  • Non-compiled path preserves original baseline performance with early stopping

Benchmark script: tests/llmcompressor/observers/benchmark_mse_compile.py

@Bias92 Bias92 changed the title perf: make MSE observer compatible with torch.compile (39x speedup) perf: make MSE observer compatible with torch.compile (dual-path implementation) Feb 19, 2026
@Bias92 Bias92 requested a review from HDCharles February 19, 2026 08:41
@HDCharles HDCharles added the ready When a PR is ready for review label Feb 19, 2026
@Bias92 Bias92 force-pushed the torch-compile-observers branch from f7448d2 to 6a415e6 Compare February 19, 2026 16:21
@Bias92
Author

Bias92 commented Feb 19, 2026

Addressed all review feedback:

  1. Extracted shared helper: _compute_candidate_error() used by both compiled and non-compiled paths — no more code duplication
  2. Removed patch_attr from both paths (pre-created token_args instead)
  3. Preserved early stopping in non-compiled path (original baseline performance unchanged)
  4. Moved compile test into test_mse.py, removed test_observer_compile.py

All 53 tests passing locally (0 failures, 4 skipped).

@Bias92 Bias92 requested a review from HDCharles February 19, 2026 16:24
@mergify
Contributor

mergify bot commented Feb 19, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev optional install to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

@HDCharles
Collaborator

Can you fix the quality issues?

@mergify mergify bot removed the quality-failed label Feb 20, 2026
@Bias92
Author

Bias92 commented Feb 20, 2026

Done — ran make style and make quality, both passing now.

@Bias92
Author

Bias92 commented Feb 22, 2026

Hi @HDCharles, friendly ping — I've addressed all the feedback from your review:

Restored original _grid_search_mse with early stopping (non-compiled baseline unchanged)
Added _grid_search_mse_compiled as a separate path with enable_torch_compile flag (default False)
Extracted _compute_candidate_error() shared helper to eliminate duplication
Moved compile test into test_mse.py, removed separate test file
Quality checks passing

Would you be able to re-review when you have a moment? Also, CI is pending workflow approval if you could approve that as well. Thanks!

@Bias92
Author

Bias92 commented Mar 4, 2026

Hi @HDCharles, @dsikka, @kylesayrs — friendly ping!
I've addressed all the feedback from the previous review:

Restored original _grid_search_mse with early stopping (non-compiled path unchanged)
Added _grid_search_mse_compiled as a separate torch.compile-compatible path
Added enable_torch_compile flag via observer_kwargs (default False)
Extracted _compute_candidate_error() shared helper to reduce duplication
Fixed all lint/formatting issues (make style + make quality both passing)

The only remaining blocker is workflow approval for CI — once that's approved the checks can actually run. Would appreciate a re-review when you get a chance!

@HDCharles
Collaborator

Hey, sorry about the wait. Reach out to me on vLLM Slack if I don't respond in ~24 hours.


@HDCharles HDCharles left a comment


why is findstr in the PR?

}


def compare_scales(


Don't think this is needed; it's unit tested in test_mse.

@HDCharles
Collaborator

this isn't torch.compiling the function?

@HDCharles
Collaborator

Can you reach out to me on vLLM Slack? There are some additional things to discuss.

@Bias92 Bias92 force-pushed the torch-compile-observers branch from 0d1c168 to 243c720 Compare March 5, 2026 22:37
@mergify mergify bot added the documentation Improvements or additions to documentation label Mar 5, 2026
@Bias92 Bias92 closed this Mar 11, 2026
@Bias92 Bias92 force-pushed the torch-compile-observers branch from a6bc3fa to 36c30ee Compare March 11, 2026 16:53
@Bias92 Bias92 reopened this Mar 11, 2026
@mergify
Contributor

mergify bot commented Mar 11, 2026

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @Bias92.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Mar 11, 2026
@Bias92 Bias92 changed the title perf: make MSE observer compatible with torch.compile (dual-path implementation) perf: make MSE observer compatible with torch.compile Mar 11, 2026
compile inner _compute_candidate_error via torch.compile(dynamic=True).
early stopping preserved in outer loop. compile flag added as oneshot arg.

requires: vllm-project/compressed-tensors#627
related: pytorch/pytorch#177131
Signed-off-by: Jaewoo Kim <pewpewplay315@gmail.com>
@Bias92 Bias92 force-pushed the torch-compile-observers branch from 9d33973 to bf63a4c Compare March 11, 2026 17:19
@mergify mergify bot removed the needs-rebase label Mar 11, 2026

Labels

documentation - Improvements or additions to documentation
ready - When a PR is ready for review


3 participants