Initial ParetoQ commit #1876

Open · andrewor14 wants to merge 1 commit into main
Conversation

andrewor14 (Contributor)

This PR adds the training code for ParetoQ, introduced in "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization" (https://arxiv.org/abs/2502.02631). All code was written by @liuzechun and @zxdmike and migrated from
https://github.com/facebookresearch/ParetoQ.

ParetoQ is the first unified framework that enables rigorous comparisons across 1-bit, 1.58-bit, 2-bit, 3-bit, and 4-bit quantization settings. By optimizing training schemes and refining quantization functions, ParetoQ surpasses all previous methods tailored to specific bit widths. In particular, the 1.58-bit ParetoQ LLaMA-3 8B model reduces the performance gap to full precision by a relative 37.8% compared to the 1-bit Era 1.58-bit LLaMA-3 8B model, while using only 30% of the training tokens.
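
For context on what training at these bit widths looks like in code, below is a minimal sketch of quantization-aware training with a straight-through estimator (STE), the standard mechanism that lets full-precision latent weights receive gradients through a rounding step. This is a generic symmetric min-max quantizer for illustration only, not ParetoQ's actual quantization function; the class names `FakeQuantSTE` and `QuantLinear` are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeQuantSTE(torch.autograd.Function):
    """Fake-quantize weights on the forward pass; pass gradients straight through."""

    @staticmethod
    def forward(ctx, w, n_bits):
        # Symmetric per-tensor scheme: scale by the max magnitude, round to
        # the nearest representable integer level, then dequantize.
        qmax = 2 ** (n_bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: treat round() as the identity so the
        # full-precision latent weights still receive gradients.
        return grad_output, None


class QuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized to n_bits during training."""

    def __init__(self, in_features, out_features, n_bits=2, bias=False):
        super().__init__(in_features, out_features, bias=bias)
        self.n_bits = n_bits

    def forward(self, x):
        w_q = FakeQuantSTE.apply(self.weight, self.n_bits)
        return F.linear(x, w_q, self.bias)


# Usage: a 2-bit layer trains like any other module.
layer = QuantLinear(64, 64, n_bits=2)
loss = layer(torch.randn(8, 64)).pow(2).mean()
loss.backward()  # gradients reach layer.weight via the STE
```

In ParetoQ itself the quantization function and training schedule are tuned per bit width (1.58-bit, for example, implies ternary weight levels rather than a power-of-two grid); the sketch above only shows the STE mechanics that make sub-4-bit quantization-aware training possible.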

pytorch-bot bot commented Mar 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1876

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 77b1bcc with merge base 8c81863:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Mar 12, 2025
@andrewor14 marked this pull request as draft on Mar 12, 2025 20:14
@andrewor14 added the topic: new feature label on Mar 12, 2025
@andrewor14 marked this pull request as ready for review on Mar 13, 2025 20:31