Match QAT prepare and convert numerics exactly #1964
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1964
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures as of commit 4c45344 with merge base dfbd681.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 1fe118a to df409ed.
Force-pushed from cb9942c to 709feab.
@andrewor14 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thanks for the updates, will be good to set up a deprecation plan for the quantized_decomposed.choose_qparams op then.
Force-pushed from 709feab to 890438a.
@andrewor14 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Force-pushed from 890438a to d9af870.
**Summary:** Previously, `Int8DynActInt4QATQuantizer` had slightly diverging numerics between the prepare and convert steps. This is because the prepare step uses quantization primitives shared with AQT (specifically `quantize_affine` and `dequantize_affine`), while the convert step relies on old ops from the `torch.ops.quantized_decomposed` namespace. The divergence is negligible for small models, but the quantization errors begin to compound for larger models with many linear layers.

More specifically, there are three places where the divergence occurs during activation quantization:

1. **Choose qparams.** The prepare step casts the qparams to `torch.float32`, whereas the convert step casts the scales to `torch.float64` and zero points to `torch.int64`.

2. **Quantize.** The prepare step rounds before adding the zero points and uses torch functions, while the convert step adds before rounding and uses torch tensor methods (prepare shown first, then convert):

   ```
   x = torch.clamp(
       torch.round(x * (1.0 / scale)) + zero_point,
       qmin,
       qmax,
   )

   x = (
       x.mul(1.0 / scale)
       .add(zero_point)
       .round()
       .clamp(qmin, qmax)
       .to(quantize_dtype)
   )
   ```

3. **Dequantize.** The prepare step casts to `torch.int32` before subtracting the zero points, and casts back to the original dtype before multiplying by the scale. The convert step only casts at the very end (prepare shown first, then convert):

   ```
   x = x.to(torch.int32) - zero_point.to(torch.int32)
   x = x.to(orig_dtype)
   x = x * scale

   x = x - zero_point
   x = x * scale
   x = x.to(orig_dtype)
   ```

This commit makes the convert path use the same torchao quantization primitives as the prepare path, thereby resolving the three differences above. Now, the prepare and convert steps match exactly in terms of numerics over many trials.

**Test Plan:**
python test/quantization/test_qat.py -k test_fake_quantize_per_token_vs_convert
python test/quantization/test_qat.py -k test_qat_8da4w_prepare_vs_convert
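To make points 2 and 3 concrete, here is a minimal, self-contained sketch (not code from this PR) that runs the two quantize and dequantize orderings described above side by side. The qparam values and tensor shape are made up for illustration, and the helpers only mirror the orderings and dtype casts from the summary, not the full torchao or quantized_decomposed implementations.

```python
import torch

torch.manual_seed(0)

qmin, qmax = -128, 127
x = torch.randn(1024, dtype=torch.float32)          # made-up activations
scale = torch.tensor(0.0173, dtype=torch.float32)   # made-up qparams
zero_point = torch.tensor(5, dtype=torch.int32)

# Mirrors the prepare-step ordering: round first, then add the zero point,
# keeping the qparams in float32.
def quantize_prepare_style(x):
    return torch.clamp(
        torch.round(x * (1.0 / scale)) + zero_point, qmin, qmax
    ).to(torch.int8)

# Mirrors the convert-step ordering: add the zero point first, then round,
# with the scale cast to float64 and the zero point to int64.
def quantize_convert_style(x):
    return (
        x.mul(1.0 / scale.to(torch.float64))
        .add(zero_point.to(torch.int64))
        .round()
        .clamp(qmin, qmax)
        .to(torch.int8)
    )

# Mirrors the two dequantize orderings: cast to int32 and back to the original
# dtype early (prepare) vs. casting only at the very end (convert).
def dequantize_prepare_style(q, orig_dtype=torch.float32):
    out = q.to(torch.int32) - zero_point.to(torch.int32)
    return out.to(orig_dtype) * scale

def dequantize_convert_style(q, orig_dtype=torch.float32):
    out = (q - zero_point) * scale
    return out.to(orig_dtype)

qp, qc = quantize_prepare_style(x), quantize_convert_style(x)
dp, dc = dequantize_prepare_style(qp), dequantize_convert_style(qc)
print("quantize mismatches:  ", (qp != qc).sum().item())
print("dequantize mismatches:", (dp != dc).sum().item())
```

Whether and how often the two paths disagree depends on the inputs and qparams, but per the summary above any per-element differences compound across the many linear layers of larger models.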
Force-pushed from d9af870 to 4c45344.
@andrewor14 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
**Summary:** The previous PR #1964 got this to match for fp32, but there were three additional sources of numerical discrepancy with bf16:

1. QAT asymmetric per token choose qparams diverged from `choose_qparams_affine`, which had simpler logic
2. QAT per token fake quantize cast the input to fp32 before fake quantizing it
3. QAT symmetric per group choose qparams used a hardcoded eps value that did not match `choose_qparams_affine`

These are all resolved in this commit: (1) QAT now uses `choose_qparams_affine` instead of the custom function for asymmetric per token, which is now deleted, (2) QAT no longer casts the input to fp32, and (3) QAT now uses an eps value that corresponds to the input dtype. The result is an exact match in numerics between the prepare and convert steps for fp32, bf16, and fp16.

**Test Plan:**
python test/quantization/test_qat.py -k test_fake_quantize_per_token_vs_convert
python test/quantization/test_qat.py -k test_qat_8da4w_prepare_vs_convert
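As a rough illustration of point (3), the sketch below is hypothetical and not the torchao implementation; it assumes "an eps value that corresponds to the input dtype" means an eps obtained via `torch.finfo`, and shows how that value differs across fp32, bf16, and fp16 and how it might be used to floor a computed scale.

```python
import torch

# eps varies by dtype, so a single hardcoded value cannot match a
# dtype-dependent choice for every input dtype.
for dtype in (torch.float32, torch.bfloat16, torch.float16):
    print(dtype, torch.finfo(dtype).eps)
# torch.float32  1.1920928955078125e-07
# torch.bfloat16 0.0078125
# torch.float16  0.0009765625

# Hypothetical helper: clamp the computed scale with an eps taken from the
# input's dtype (the idea described above, not the actual QAT code).
def clamp_scale(scale: torch.Tensor, input_dtype: torch.dtype) -> torch.Tensor:
    eps = torch.finfo(input_dtype).eps
    return scale.clamp(min=eps)

scales = torch.tensor([0.0, 1e-9, 0.02], dtype=torch.bfloat16)
print(clamp_scale(scales, torch.bfloat16))  # tiny scales are floored at bf16 eps
```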