Match QAT prepare and convert numerics exactly #1964
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1964
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures as of commit 890438a with merge base dfbd681.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@andrewor14 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thanks for the updates, will be good to set up a deprecation plan for the quantized_decomposed.choose_qparams op then.
**Summary:** Previously, `Int8DynActInt4QATQuantizer` had slightly diverging numerics between the prepare and convert steps. This is because the prepare step uses quantization primitives shared with AQT (specifically `quantize_affine` and `dequantize_affine`), while the convert step relies on old ops from the `torch.ops.quantized_decomposed` namespace. The divergence is negligible for small models, but the quantization errors begin to compound for larger models with many linear layers.

More specifically, there are three places where the divergence occurs during activation quantization:

1. **Choose qparams.** The prepare step casts the qparams to `torch.float32`, whereas the convert step casts the scales to `torch.float64` and the zero points to `torch.int64`.

2. **Quantize.** The prepare step rounds before adding the zero points and uses torch functions, while the convert step adds the zero points before rounding and uses torch tensor methods (a standalone sketch of this divergence appears after the Test Plan below):

   ```
   # prepare
   x = torch.clamp(
       torch.round(x * (1.0 / scale)) + zero_point,
       qmin,
       qmax,
   )

   # convert
   x = (
       x.mul(1.0 / scale)
       .add(zero_point)
       .round()
       .clamp(qmin, qmax)
       .to(quantize_dtype)
   )
   ```

3. **Dequantize.** The prepare step casts to `torch.int32` before subtracting the zero points, and casts back to the original dtype before multiplying by the scale. The convert step only casts at the very end:

   ```
   # prepare
   x = x.to(torch.int32) - zero_point.to(torch.int32)
   x = x.to(orig_dtype)
   x = x * scale

   # convert
   x = x - zero_point
   x = x * scale
   x = x.to(orig_dtype)
   ```

This commit makes the convert path use the same torchao quantization primitives as the prepare path, thereby resolving the three differences above. Now the prepare and convert steps match exactly in terms of numerics over many trials.

**Test Plan:**

```
python test/quantization/test_qat.py -k test_fake_quantize_per_token_vs_convert
python test/quantization/test_qat.py -k test_qat_8da4w_prepare_vs_convert
```
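For illustration, here is a minimal standalone sketch (not part of this PR) of the quantize-step divergence described in item 2 above. The input tensor, scale, and zero point are hypothetical; the two expressions mirror the prepare-style and convert-style formulas quoted in the summary, including the float32 vs. float64/int64 qparam dtypes from item 1.

```python
import torch

torch.manual_seed(0)
x = torch.randn(1 << 20, dtype=torch.float32)         # hypothetical activations
scale = torch.tensor([0.0137], dtype=torch.float32)   # hypothetical scale
zero_point = torch.tensor([3], dtype=torch.int32)     # hypothetical zero point
qmin, qmax = -128, 127

# Prepare-style: float32 qparams, round before adding the zero point
q_prepare = torch.clamp(
    torch.round(x * (1.0 / scale)) + zero_point,
    qmin,
    qmax,
).to(torch.int8)

# Convert-style: float64 scale and int64 zero point, add before rounding
q_convert = (
    x.mul(1.0 / scale.to(torch.float64))
    .add(zero_point.to(torch.int64))
    .round()
    .clamp(qmin, qmax)
    .to(torch.int8)
)

# A small number of values near rounding boundaries typically land in
# different integer buckets; in a model with many linear layers this
# error compounds from layer to layer.
num_diff = (q_prepare != q_convert).sum().item()
print(f"{num_diff} / {x.numel()} quantized values differ")
```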
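And a minimal sketch of the end-to-end property the test plan exercises: a prepared (fake-quantized) model and its converted counterpart producing matching outputs. This is illustrative only; the toy model, group size, and the `torchao.quantization.qat` import path are assumptions that may differ across torchao versions, and the authoritative checks are the tests listed in the Test Plan.

```python
import copy
import torch

# Assumed import path; older torchao releases exposed this quantizer
# under torchao.quantization.prototype.qat instead.
from torchao.quantization.qat import Int8DynActInt4QATQuantizer

# Toy two-layer model (hypothetical sizes; in_features should be
# divisible by the chosen group size).
model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.Linear(256, 256),
)

quantizer = Int8DynActInt4QATQuantizer(groupsize=32)
prepared = quantizer.prepare(model)

x = torch.randn(8, 256)
out_prepared = prepared(x)

# convert() swaps the fake-quantized linears for actually quantized ones;
# with prepare and convert sharing the same quantization primitives, the
# two outputs are expected to match exactly.
converted = quantizer.convert(copy.deepcopy(prepared))
out_converted = converted(x)

torch.testing.assert_close(out_converted, out_prepared, atol=0, rtol=0)
```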