
Add GSM8K evaluation script and AWQ+FP8 results #2330

Open

rtj1 wants to merge 12 commits into vllm-project:main from rtj1:add-gsm8k-eval-fp8

Conversation

@rtj1

@rtj1 rtj1 commented Feb 4, 2026

This PR adds GSM8K evaluation results for AWQ+FP8 quantization as requested in #2305.

What's included

RESULTS.md - Evaluation results comparing FP8_DYNAMIC vs FP8_BLOCK quantization schemes on Meta-Llama-3-8B-Instruct

Results

| Scheme | Strict Match | Flexible Extract |
|---|---|---|
| FP8_DYNAMIC | 76.42% | 76.19% |
| FP8_BLOCK | 75.21% | 74.98% |
  • Model: Meta-Llama-3-8B-Instruct
  • Hardware: 8x NVIDIA A100-SXM4-80GB
  • FP8_DYNAMIC outperforms FP8_BLOCK by ~1.2 percentage points on strict match

Evaluation command

```bash
lm_eval \
  --model hf \
  --model_args pretrained=<model_path>,dtype=auto \
  --tasks gsm8k \
  --batch_size 16 \
  --output_path <output_dir>
```

Note: batch_size=16 is important — the default auto picks 1, significantly increasing evaluation time.
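
For convenience, the same command can be scripted over both schemes. The checkpoint and results paths below are hypothetical placeholders, not paths from this PR:

```shell
# Sketch: build the lm_eval invocation for both quantization schemes.
# "checkpoints/..." and "results/..." are assumed directory layouts.
cmds=()
for scheme in FP8_DYNAMIC FP8_BLOCK; do
  model="checkpoints/Meta-Llama-3-8B-Instruct-${scheme}"
  cmds+=("lm_eval --model hf --model_args pretrained=${model},dtype=auto --tasks gsm8k --batch_size 16 --output_path results/${scheme}")
done
printf '%s\n' "${cmds[@]}"   # review the commands, then run each one
```

Printing first makes it easy to double-check the fixed batch_size before launching a long evaluation.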

Model Checkpoints (from @HDCharles)

@github-actions

github-actions bot commented Feb 4, 2026

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist
Contributor

Summary of Changes

Hello @rtj1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new evaluation script and comprehensive results for AWQ+FP8 quantization on the GSM8K benchmark, specifically targeting the Qwen2.5-0.5B-Instruct model. It demonstrates the superior performance of the FP8_DYNAMIC scheme over FP8_BLOCK in terms of accuracy preservation, while maintaining efficient model compression. The changes aim to provide clear, reproducible benchmarks for quantized models.

Highlights

  • New GSM8K Evaluation Script: A new Python script, gsm8k_eval.py, has been added to facilitate benchmarking of quantized models on the GSM8K dataset using the lm-eval harness. This script includes a critical batch_size=16 setting to ensure efficient evaluation, preventing the default auto setting from causing excessively long runtimes.
  • Comprehensive AWQ+FP8 Results Document: A new RESULTS.md file is introduced, providing detailed quantization and evaluation results. It covers the Qwen2.5-0.5B-Instruct model, tested with FP8_DYNAMIC and FP8_BLOCK schemes on a Google Colab L4 GPU, including model sizes, evaluation metrics, and a full reproducibility workflow.
  • Key Performance Findings for FP8 Quantization: The evaluation reveals that FP8_DYNAMIC quantization significantly outperforms FP8_BLOCK in strict match accuracy on GSM8K (22.67% vs 17.97%). Both schemes achieve a similar compression ratio of approximately 1.2x, reducing the model size from 1.1GB to 0.92GB.
  • Enhanced Reproducibility: To ensure reproducibility, the quantized models (FP8_DYNAMIC and FP8_BLOCK) have been uploaded to HuggingFace. The RESULTS.md file also provides a step-by-step bash workflow, including commands to set up and run the quantization and evaluation processes.


Changelog
  • examples/awq/RESULTS.md
    • Added a new markdown file documenting the AWQ + FP8 quantization results for Qwen2.5-0.5B-Instruct.
    • Includes a summary, quantization metrics (size, files), GSM8K evaluation results (Strict Match, Flexible Extract), HuggingFace model links, setup instructions, evaluation details, reproducibility workflow, and a recommendation to use FP8_DYNAMIC.
  • examples/awq/gsm8k_eval.py
    • Added a new Python script for evaluating quantized models on the GSM8K dataset.
    • Utilizes lm-eval and sets a fixed batch_size=16 to optimize evaluation runtime, avoiding the slow default auto setting.
    • Takes the model path as a command-line argument and saves results to a dynamically named output directory.
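
The script itself is not reproduced in this thread, but a minimal sketch matching the description above (model path from the command line, fixed batch_size=16, dynamically named output directory) might look like the following. All names here are hypothetical reconstructions, not the actual file contents:

```python
# Hypothetical reconstruction of gsm8k_eval.py; the helper name and the
# output-directory convention are assumptions, not the PR's actual code.
import subprocess
import sys
from pathlib import Path


def build_eval_command(model_path: str) -> list[str]:
    # Derive the output directory from the model name, e.g.
    # "ckpts/Model-FP8_DYNAMIC" -> "results_Model-FP8_DYNAMIC".
    output_dir = f"results_{Path(model_path).name}"
    return [
        "lm_eval",
        "--model", "hf",
        "--model_args", f"pretrained={model_path},dtype=auto",
        "--tasks", "gsm8k",
        "--batch_size", "16",  # fixed: the default "auto" picks 1 and is slow
        "--output_path", output_dir,
    ]


if __name__ == "__main__" and len(sys.argv) > 1:
    subprocess.run(build_eval_command(sys.argv[1]), check=True)
```

Invoked as `python gsm8k_eval.py <model_path>`, this would write results under a `results_<model_name>` directory.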
Activity
  • The pull request addresses and closes issue Evaluate AWQ + FP8 Example #2305, which requested GSM8K evaluation results for AWQ+FP8 quantization.
  • The author rtj1 has added two new files: examples/awq/RESULTS.md and examples/awq/gsm8k_eval.py.
  • The author has requested a review or attention from HDCharles via a cc mention in the PR description.

@mergify mergify bot added the documentation Improvements or additions to documentation label Feb 4, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a valuable evaluation script for GSM8K and provides detailed results for AWQ+FP8 quantization in RESULTS.md. The documentation is clear and the script is a useful addition for benchmarking. I have a couple of suggestions to enhance the robustness of the reproduction steps and the new evaluation script, mainly by making a shell command more specific and adding input validation to the Python script for better error handling.

Used the existing example scripts from the repo:
```bash
cd examples/awq
sed -i 's/meta-llama\/Meta-Llama-3-8B-Instruct/Qwen\/Qwen2.5-0.5B-Instruct/g' *.py
```


Severity: medium

Using a broad wildcard like *.py with sed -i can be risky, as it might unintentionally modify other Python files in the directory. It would be safer and more robust to use a more specific pattern that targets only the intended example scripts.

Suggested change:
```diff
-sed -i 's/meta-llama\/Meta-Llama-3-8B-Instruct/Qwen\/Qwen2.5-0.5B-Instruct/g' *.py
+sed -i 's/meta-llama\/Meta-Llama-3-8B-Instruct/Qwen\/Qwen2.5-0.5B-Instruct/g' fp8_*_llama_example.py
```
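
The reviewer's point can be sanity-checked on throwaway files; the file names below are assumptions following the suggested fp8_*_llama_example.py pattern:

```shell
# Demo: the narrower glob edits only the intended example scripts,
# leaving other Python files in the directory untouched.
tmpdir=$(mktemp -d)
echo 'MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"' > "$tmpdir/fp8_dynamic_llama_example.py"
echo 'MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"' > "$tmpdir/other_script.py"
(cd "$tmpdir" && sed -i 's/meta-llama\/Meta-Llama-3-8B-Instruct/Qwen\/Qwen2.5-0.5B-Instruct/g' fp8_*_llama_example.py)
grep -h MODEL_ID "$tmpdir"/*.py
```

Only the fp8_*_llama_example.py file ends up pointing at the Qwen model; the unrelated script keeps the original Llama ID.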

rtj1 and others added 3 commits February 5, 2026 13:53
Closes vllm-project#2305

This PR adds:
- gsm8k_eval.py: Evaluation script for running GSM8K benchmarks on quantized models
- RESULTS.md: Quantization and evaluation results for Qwen2.5-0.5B-Instruct with FP8_DYNAMIC and FP8_BLOCK schemes

Key findings:
- FP8_DYNAMIC achieves 22.67% strict match vs 17.97% for FP8_BLOCK on GSM8K
- Both schemes achieve ~1.2x compression (1.1GB -> 0.92GB)
- Quantized models uploaded to HuggingFace Hub for reproducibility

Evaluated on Google Colab L4 GPU (22.5GB) using the existing example scripts.

Signed-off-by: rtj1 <tharunjagarlamudi@gmail.com>
Signed-off-by: rtj1 <tharunjagarlamudi@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Jagarlamudi <76727507+rtj1@users.noreply.github.com>
Signed-off-by: rtj1 <tharunjagarlamudi@gmail.com>
@rtj1 rtj1 force-pushed the add-gsm8k-eval-fp8 branch from 3ab5622 to 14bc4cc on February 5, 2026 18:54
@HDCharles
Collaborator

So we're looking to evaluate the actual models the examples are generating. I'll run evals using your PR and we can go from there

@mergify
Contributor

mergify bot commented Feb 10, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev optional install to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

@rtj1 rtj1 requested a review from kylesayrs as a code owner February 10, 2026 18:47
@mergify mergify bot removed the quality-failed label Feb 10, 2026
- Run make style to format code with ruff
- Run make quality to ensure all checks pass
- Address mergify bot feedback on quality checks

Signed-off-by: rtj1 <tharunjagarlamudi@gmail.com>
@rtj1 rtj1 force-pushed the add-gsm8k-eval-fp8 branch from dabab9a to 151a5b9 on February 10, 2026 18:48
@rtj1
Author

rtj1 commented Feb 10, 2026

Thanks for taking a look, @HDCharles!

The quantized models are uploaded to HuggingFace:

I've also fixed the quality checks - all tests passing now. Let me know if you see any issues with the models or need different configs for the evaluation.

@mergify
Contributor

mergify bot commented Feb 10, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev optional install to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

@HDCharles
Collaborator

Looks like you edited a ton of files. I would undo the last commit and reinstall llm-compressor to get the correct version of ruff (pip install -e .[dev]), then try make style and make quality again.

@HDCharles
Collaborator

HDCharles commented Feb 11, 2026

- Update RESULTS.md with HDCharles's Llama-3-8B-Instruct evaluation results
- FP8_DYNAMIC: 76.42% strict match vs FP8_BLOCK: 75.21%
- Run make style with proper dev dependencies (pip install -e .[dev])
- Fix code formatting per maintainer feedback

Results from: vllm-project#2347

Signed-off-by: rtj1 <tharunjagarlamudi@gmail.com>
@rtj1 rtj1 force-pushed the add-gsm8k-eval-fp8 branch from 3302cd9 to 9d425dd on February 11, 2026 17:12
@rtj1
Author

rtj1 commented Feb 11, 2026

Updated with Llama-3-8B evaluation results from PR #2347.

Reinstalled llm-compressor with dev dependencies (pip install -e .[dev]) and reran code formatting as requested. All quality checks now passing.

The PR now includes your official Llama-3-8B-Instruct evaluation results showing FP8_DYNAMIC achieves 76.42% vs FP8_BLOCK's 75.21% on GSM8K strict matching.

```diff
@@ -0,0 +1,67 @@
+# AWQ + FP8 Quantization Results
+
+Closes #2305
```
Collaborator


no need to call this out in the changes, just in the PR summary

Collaborator


I think we should remove this file in favor of directly calling lm_eval as you show in the markdown file.

- Remove 'Closes vllm-project#2305' from RESULTS.md
- Remove gsm8k_eval.py file (use lm_eval directly as documented)
- Update RESULTS.md to reference only lm_eval command

Signed-off-by: rtj1 <tharunjagarlamudi@gmail.com>
@rtj1
Author

rtj1 commented Feb 16, 2026

Thanks for the feedback @brian-dellabetta! I've addressed both points:

  • Removed Closes #2305 from RESULTS.md
  • Removed gsm8k_eval.py and updated docs to use lm_eval directly

Latest commit: 002b77a

@HDCharles HDCharles added the ready When a PR is ready for review label Feb 17, 2026
| **FP8_DYNAMIC** | **76.42%** | **76.19%** |
| **FP8_BLOCK** | 75.21% | 74.98% |

FP8_DYNAMIC wins by ~1.2% on strict matching. Both achieve similar performance on flexible extraction.
Collaborator


Suggested change:
```diff
-FP8_DYNAMIC wins by ~1.2% on strict matching. Both achieve similar performance on flexible extraction.
```

this seems outdated?

Author


Done, removed. Thanks!

@HDCharles
Collaborator

HDCharles commented Feb 17, 2026

update the PR description for the targeted model


Labels

documentation Improvements or additions to documentation ready When a PR is ready for review
