Fix wrong scale eps applied #1770
base: main
Conversation
🔗 Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1770.

As of commit 1ff1c36 with merge base ab3792e: ❌ 1 new failure (one job failed; see the link above for details). This comment was automatically generated by Dr. CI and updates every 15 minutes.
(force-pushed from ebc22f4 to 1acb897)
Please don't merge yet, this isn't good enough...
(force-pushed from 1acb897 to a66f34d)
Ok, I think it could be reviewed now. Basically, the fix is in `_choose_qparams_affine`: the clamping of the scale is adjusted so that it cannot end up subnormal (or zero) in the dtype it is actually computed and stored in. Now, this is all probably overkill: it's pretty much relevant only for float16 inputs, it could be that it fixes only one of several quantization code paths, etc. So maybe I should just put this in my #1671 for now, specifically for the quantization type where I encountered the issue while testing it?
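For illustration, a minimal sketch of the float16 failure mode being described (just the numeric effect, not torchao's actual code path):

```python
import torch

# float16's smallest normal number is ~6.1e-5; values below it are subnormal
scale = torch.tensor(1e-5, dtype=torch.float16)

print(torch.finfo(torch.float16).tiny)  # 6.103515625e-05
print(1.0 / scale)  # inf: ~1e5 exceeds the float16 maximum of 65504
```

Once 1/scale is Inf, expressions like 0 * (1/scale) turn into NaN, which is how the quantized values get corrupted.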
Closing, as the issue is unlikely to be encountered in practice.
(force-pushed from a66f34d to ee7884c)
(force-pushed from ee7884c to 535ac19)
```diff
@@ -944,10 +944,16 @@ def _choose_qparams_affine(
     else:
         zero_point = torch.full_like(scale, int((quant_max + quant_min + 1) / 2))
     scale = torch.clamp(scale, min=eps)
```
should we modify `eps` to the right value instead of trying to clamp twice? Right now `eps` is set to `torch.finfo(input.dtype).eps`; it seems like that just isn't the right way to set it here?
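To make the mismatch concrete: machine epsilon (the spacing at 1.0) and the smallest normal number are very different quantities, and they come apart as soon as the scale is computed in a dtype other than `input.dtype` (a small illustration, not the PR's code):

```python
import torch

# machine epsilon (spacing at 1.0) vs. smallest normal number, per dtype
for dt in (torch.float16, torch.bfloat16, torch.float32):
    fi = torch.finfo(dt)
    print(dt, fi.eps, fi.tiny)

# finfo(float32).eps ~ 1.19e-7 is far below finfo(float16).tiny ~ 6.1e-5,
# so a float32-derived eps cannot keep a float16 scale out of the subnormal range
```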
IMO we should fix `eps` instead of clamping twice.

By the way, thanks for fixing this! I think this PR should include a test case which fails before and passes after these changes.
(force-pushed from 535ac19 to 51d3ca0)
Added a test case - it fails without the changes in `_choose_qparams_affine` and passes with them. I see your point about clamping twice; I need to see if some further changes would make it possible to clamp only once.
(force-pushed from 9a43a80 to 42a2347)
Pushed an update, I think this is it. Namely, `eps` is now adjusted inside `_choose_qparams_affine`, so that the scale is kept out of the subnormal range of the dtype it is computed in.
(force-pushed from 42a2347 to beab4c1)
(force-pushed from beab4c1 to a4f5ada)
@vkuzo Are you ok with merging this now?
```python
if torch.is_floating_point(max_val):
    # In this case, scale will be calculated below in max_val.dtype.
    eps = max(eps, torch.finfo(max_val.dtype).tiny)
```
where is `eps` used? I see that on L984 we are calculating `eps` again, just wondering if we need both calculations?
It is used in lines 957 and 961 (line numbers are after the changes by this PR), to clamp the scale in order to prevent 1/scale from becoming Inf. In the second case, this is immediately necessary, as dividing by the scale is already performed in line 966. Furthermore, towards the end of the function, in line 980, the scale may be converted to a data type different from the one used up to that point, which could push the scale into the subnormal range of this final data type, so it's necessary to clamp again from below (and, while we're at it, I'm clamping it from above too).
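A small sketch of the second hazard mentioned here, the dtype conversion near the end of the function (the values are illustrative):

```python
import torch

scale_fp32 = torch.tensor(1.5e-5, dtype=torch.float32)  # comfortably normal in float32
scale_fp16 = scale_fp32.to(torch.float16)  # below float16 tiny (~6.1e-5): subnormal

print(scale_fp16)        # ~1.5e-5, nonzero but subnormal
print(1.0 / scale_fp16)  # inf: the reciprocal overflows float16's maximum of 65504
```

This is why a single clamp before the conversion is not sufficient.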
I see, so `eps` is read as an argument. In that case, it's a bit confusing to silently override `eps` here. Is there a way to set it correctly at the callsite (using the logic you have here) so the setting is honored in this function?

The overall logic of how to choose `eps` looks great, now I'm just trying to help fit this in cleanly :)
I understand - it is not nice that the argument may get changed silently, but there are a number of call sites, so it seems to me that for maintenance etc. the best place to fix it is here, in a single place. If a silent change is considered too intrusive, maybe I can print a warning when the change is needed? Overall: I think I mentioned elsewhere that the best way to fix this may be to drop `eps` from the argument list, but it's already part of a public interface...
In a codepath such as `quant_primitives.py`, which is supposed to be the canonical place where these calculations are done, IMO silently modifying a passed-in argument is not something that should be landed.

It's understandable if you don't want to sign up for changing every callsite - in that case a good way to wrap this up could be:

- create a test case which fails
- skip the test case
- create an issue asking to fix it properly and unskip the test case

Then someone else could pick up the fix.
I fixed the issue in `_choose_qparams_affine`, which `choose_qparams_affine` and `choose_qparams_affine_with_min_max` are thin wrappers around. These two methods are then called from more than 30 places in the code, tests included. In particular, `choose_qparams_affine` is called from `to_affine_quantized_intx`, which is in turn called from `to_affine_quantized_floatx`, and these two are called from about 20 and 10 call sites, respectively. For pretty much all of these call sites, a test case could be constructed that triggers a scale factor becoming Inf, and then some scaled values becoming NaNs after applying the scale. Etc. - the call chain could probably be continued for some of these cases.

An example of such a test case, as you suggested, is part of this PR. So I could remove my fix, and whoever that "someone else" is could pick it up, to start with. But I really think there is no point in having a copy of the same fix in a number of places. Plus, many of these call sites are also public interfaces, so the user could still see the `eps` value they supplied not actually being used. So I really think the least painful way to fix this is as in this PR.
I personally don't think fixing this issue by silently changing the argument value deep inside our stack should land - it should be done the right way, by changing the user-visible callsites. A good process would be:

1. have the default logic for setting `eps` use this new logic in whichever functions are visible to the user
2. ensure all the callsites use the default from (1)

> These two methods are then called from more than 30 places in code

I think the number will be much smaller if you only include the callsites which directly set `eps`.
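A rough sketch of what (1) and (2) could look like - the helper name `_default_eps` and the simplified signature are assumptions for illustration, not the PR's actual code:

```python
import torch

def _default_eps(input_dtype: torch.dtype, scale_dtype: torch.dtype) -> float:
    # never let the clamp bound fall below the smallest normal number
    # of the dtype the scale will be computed/stored in
    return max(torch.finfo(input_dtype).eps, torch.finfo(scale_dtype).tiny)

def choose_qparams_affine(input, eps=None, scale_dtype=None):
    # user-visible wrapper: resolve the default here, so the private
    # _choose_qparams_affine can trust eps and clamp exactly once
    scale_dtype = scale_dtype if scale_dtype is not None else input.dtype
    if eps is None:
        eps = _default_eps(input.dtype, scale_dtype)
    ...  # delegate to the private implementation with the resolved eps
```

Callsites inside torchao that currently pass `eps` explicitly would then be updated to pass nothing, unless they genuinely need a custom value.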
What's the definition of "visible to user"? Every function that I mentioned above is visible to the user in the sense that the user can import the module and use the function - and all of them expose `eps` as an argument, so it's plain impossible to find all call sites, let alone fix them. On the other hand, if we say `quantize_` is the only API visible to users, then I believe `eps` is not visible to users at all (not 100% sure - maybe there is a config exposing it), which means the `eps` argument is used by torchao internally only, and since it apparently doesn't get used the right way, the best fix is just to remove it as an argument everywhere and keep the check added by this PR.
We can treat "visible to the user" as "in the torchao repository".

> it's plain impossible to find all call sites, let alone fix them.

It's definitely possible to fix this for callsites inside of torchao.
Please understand that I'm not into nit-picking. However, what we have here is a plain and simple case of handling an invalid argument value, and we're going to great lengths over what is, in a nutshell, an aesthetics argument.

The invalid `eps` value could be silently changed (and I think that's the best option, as the change does "the right thing", i.e. it makes the quantization possible later while keeping the quantized range as big as possible), with or without printing a warning to the user. Alternatively, we could throw an exception. Or we could decide that this is all just a contrived corner case that most likely won't happen in practice, and change nothing. Any of these resolves the issue once and for all. On the other hand, if I, for example, make the fix here (the reproducer is below), I really don't see how that is going to prevent another torchao developer down the road from making the same omission when writing a similar handler for a new config. So it's not that I'm too lazy to do what you're suggesting; it's simply that I don't want to do it.

Script to reproduce the issue for integer quantization - the problem manifests itself here as overly coarse quantization; with the fix in this PR the quantization is much better:
```python
import torch
from torchao.quantization import Int8WeightOnlyConfig, quantize_

dtype = torch.float16
tiny = torch.finfo(dtype).tiny  # smallest normal float16 number, ~6.1e-5

# Linear(4, 1) has weight shape (1, 4): a single row of tiny values,
# which forces a very small quantization scale
model = torch.nn.Linear(4, 1, dtype=dtype)
model.weight = torch.nn.Parameter(
    torch.tensor([[0, 10 * tiny, 20 * tiny, 30 * tiny]], dtype=dtype)
)
print(model.weight)

quantize_(model, Int8WeightOnlyConfig())
print(model.weight)
```
(force-pushed from a4f5ada to 6a341c5)
(force-pushed from 6a341c5 to 1ff1c36)
Fixes #1766.