
Fix wrong scale eps applied #1770

Open
wants to merge 1 commit into base: main

Conversation

alexsamardzic (Collaborator):

Fixes #1766.

pytorch-bot (bot) commented Feb 24, 2025:

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1770


❌ 1 New Failure

As of commit 1ff1c36 with merge base ab3792e:

NEW FAILURE - The following job has failed:


facebook-github-bot added the CLA Signed label Feb 24, 2025
alexsamardzic added the float8 and topic: bug fix labels Feb 24, 2025
alexsamardzic marked this pull request as draft February 24, 2025 18:50
alexsamardzic (Collaborator, Author):

Please don't merge yet, this isn't good enough...

alexsamardzic marked this pull request as ready for review February 24, 2025 21:48
alexsamardzic (Collaborator, Author):

Ok, I think it can be reviewed now. Basically, calculate_scale_eps_for_dtype() emulates the scale calculations, and returns the minimum value that won't produce an Inf when reciprocated. This should produce an eps value for choose_qparams_affine() such that the calculated scale maximizes the range of quantized values, while the scale reciprocal, used when the given tensor is actually quantized, doesn't become Inf.

Now, this is all probably overkill: it's really only relevant for float16 inputs, it may fix only one of several quantization code paths, etc. So maybe I should just put this into my #1671 for now, specifically for the quantization type where I encountered the issue while testing?
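
calculate_scale_eps_for_dtype() itself isn't shown in this thread; a minimal sketch of the emulation described above, assuming a doubling search from the smallest subnormal (the search strategy is an assumption, not the PR's actual code), might look like:

import torch

def calculate_scale_eps_for_dtype(dtype: torch.dtype) -> float:
    finfo = torch.finfo(dtype)
    # Smallest positive (subnormal) value of `dtype`; for IEEE formats
    # this equals tiny * eps.
    candidate = torch.tensor(finfo.tiny * finfo.eps, dtype=dtype)
    # Keep doubling until the reciprocal, taken in the same dtype, is
    # no longer Inf.
    while torch.isinf(candidate.reciprocal()):
        candidate = candidate * 2
    return candidate.item()

print(calculate_scale_eps_for_dtype(torch.float16))  # ~3.05e-05, still below float16 tiny (~6.10e-05)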

alexsamardzic (Collaborator, Author):

Closing, as the issue is unlikely to be encountered in practice.

@@ -944,10 +944,16 @@ def _choose_qparams_affine(
else:
zero_point = torch.full_like(scale, int((quant_max + quant_min + 1) / 2))
scale = torch.clamp(scale, min=eps)
Contributor:

Should we modify eps to the right value instead of trying to clamp twice? Right now eps is set to torch.finfo(input.dtype).eps; it seems like that just isn't the right way to set it here.
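
For context, torch.finfo(dtype).eps is the machine epsilon (the spacing at 1.0), while torch.finfo(dtype).tiny is the smallest positive normal value; notably, float32's eps is smaller than float16's tiny, so clamping a float16 scale to a float32-derived eps does not keep its reciprocal finite:

import torch

for dtype in (torch.float16, torch.bfloat16, torch.float32):
    finfo = torch.finfo(dtype)
    # eps: relative spacing at 1.0; tiny: smallest positive normal value,
    # below which reciprocals may overflow to Inf.
    print(dtype, finfo.eps, finfo.tiny)

# float16:  eps ~9.77e-04, tiny ~6.10e-05
# bfloat16: eps ~7.81e-03, tiny ~1.18e-38
# float32:  eps ~1.19e-07, tiny ~1.18e-38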

vkuzo (Contributor) left a comment:

IMO we should fix eps instead of clamping twice

vkuzo (Contributor) commented Feb 28, 2025:

By the way, thanks for fixing this!

I think this PR should include a test case which fails before and passes after these changes.

alexsamardzic (Collaborator, Author):

> I think this PR should include a test case which fails before and passes after these changes.

Added a test case - without the changes in torchao/quantization/quant_primitives.py, it produces an Inf scale for all of the "high-precision" floating point data types tested. (I've put some comments in the test code that I hope explain the issue.)

I see your point about clamping twice. I need to check whether some further changes in torchao/quantization/quant_primitives.py are needed anyway. The problem is that, it seems to me, the scale can end up calculated in a data type different from scale_dtype, which it is eventually cast to, and also that for asymmetric mapping its reciprocal actually gets used immediately - and it has to be properly clamped from below in all of these cases. Thus, I don't think it is possible to just fix eps before branching on the mapping type.

(The eps argument probably should not have been there to start with - if we're not completely sure how to choose it, users will be even less so.)

alexsamardzic force-pushed the fix-wrong-scale-eps branch 4 times, most recently from 9a43a80 to 42a2347, February 28, 2025 20:58
alexsamardzic (Collaborator, Author):

Pushed an update; I think this is it. Namely, in _choose_qparams_affine():

  1. For floating point inputs: scale is calculated in the min_val/max_val dtype, so eps is clamped from below to the smallest normal value of that dtype, and scale is then clamped against this eps value; this prevents the scale reciprocal, used here, from becoming Inf.
  2. For integer inputs: scale ends up calculated as a torch.float32 tensor (because min_val/max_val, which is an integer tensor, takes part in arithmetic with a Python float value, and the result in that case is promoted to torch.float32), so eps is clamped from below to the smallest normal value of torch.float32 - the clamping is for the same reason as in the previous case.
  3. At the end of the function, scale is converted to the scale_dtype dtype, so if that dtype is floating point, the value is clamped against the smallest normal value of that dtype before being returned, again to prevent the scale reciprocal (now if used from the call site) from becoming Inf.
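
A condensed sketch of the three steps above (a hypothetical helper for illustration; _choose_qparams_affine applies this logic inline, and per the later discussion the actual PR clamps from above as well):

import torch

def _clamp_scale(scale: torch.Tensor, eps: float, scale_dtype: torch.dtype) -> torch.Tensor:
    # Steps 1/2: clamp in the dtype scale was computed in (the min_val/
    # max_val dtype for floating point inputs, torch.float32 for integer
    # inputs), so that 1/scale stays finite while used inside the function.
    eps = max(eps, torch.finfo(scale.dtype).tiny)
    scale = torch.clamp(scale, min=eps)
    # Step 3: the final cast can push the value into scale_dtype's
    # subnormal range, so clamp once more after converting.
    scale = scale.to(scale_dtype)
    if scale_dtype.is_floating_point:
        scale = torch.clamp(scale, min=torch.finfo(scale_dtype).tiny)
    return scale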

alexsamardzic requested a review from vkuzo March 13, 2025 11:48
alexsamardzic (Collaborator, Author):

@vkuzo Are you ok with merging this now?

if torch.is_floating_point(max_val):
# In this case, scale will be calculated below in
# max_val.dtype.
eps = max(eps, torch.finfo(max_val.dtype).tiny)
Contributor:

Where is eps used? I see that on L984 we are calculating eps again; just wondering if we need both calculations?

alexsamardzic (Collaborator, Author):

It is used in lines 957 and 961 (line numbers after the changes in this PR) to clamp scale, in order to prevent 1/scale from becoming Inf. In the second case this is immediately necessary, as dividing by scale is already performed in line 966. Furthermore, towards the end of the function, in line 980, scale may be converted to a data type different from the one used up to that point, which could push scale into the subnormal range of this final data type, so it's necessary to clamp from below again (and, while we're at it, I'm clamping it from above too).
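
To illustrate the conversion hazard described here: a value that is normal in float32 can land in float16's subnormal range after the cast, where its reciprocal overflows:

import torch

scale = torch.tensor(1e-7, dtype=torch.float32)
print(scale.reciprocal())       # finite: 1e7

scale_fp16 = scale.to(torch.float16)  # 1e-7 < float16 tiny (~6.1e-05)
print(scale_fp16)               # a float16 subnormal, ~1.19e-07
print(scale_fp16.reciprocal())  # inf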

Contributor:

I see, so eps is read as an argument. In that case, it's a bit confusing to silently override eps here. Is there a way to set it correctly at the callsite (using the logic you have here) so the setting is honored in this function?

The overall logic of how to choose eps looks great; now I'm just trying to help fit this in cleanly :)

alexsamardzic (Collaborator, Author):

I understand - it is not nice that the argument may get changed silently, but there are a number of call sites, so it seems to me that for maintenance etc. the best place to fix it is here, in a single place. If a silent change is considered too intrusive, maybe I could print a warning when the change is needed? Overall: I think I mentioned elsewhere that the best way to fix this may be to drop eps from the argument list, but it's already part of a public interface...

vkuzo (Contributor) commented Mar 13, 2025:

In a codepath such as quant_primitives.py, which is supposed to be the canonical place where these calculations are done, IMO silently modifying a passed-in argument is not something that should be landed.

It's understandable if you don't want to sign up for changing every callsite - in that case, a good way to wrap this up could be:

  1. create a test case which fails
  2. skip the test case
  3. create an issue asking to fix it properly and unskip the test case

then someone else could pick up the fix

alexsamardzic (Collaborator, Author) commented Mar 13, 2025:

I fixed the issue in _choose_qparams_affine, which choose_qparams_affine and choose_qparams_affine_with_min_max are thin wrappers around. These two methods are in turn called from more than 30 places in the code, tests included. In particular, choose_qparams_affine is called from to_affine_quantized_intx, which is in turn called from to_affine_quantized_floatx, and these two are called from about 20 and 10 call sites, respectively. For pretty much all of these call sites, a test case could be constructed that triggers the scale factor becoming Inf, and then some scaled values becoming NaNs after the scale is applied. And the call chain could probably be continued for some of these cases.

An example of such a test case, as you suggested, is part of this PR. So I could remove my fix, and whoever that "someone else" is could pick it up, to start with. But I really think there is no point in having a copy of the same fix in a number of places. Also, many of these call sites are themselves public interfaces, so the user could still see the eps value they supplied not actually being used. So I really think the least painful way to fix this is as in this PR.

Contributor:

I personally don't think fixing this issue by silently changing the argument value deep inside our stack should land - it should be done the right way, by changing the user-visible callsites. A good process would be:

  1. have the default logic that sets eps use this new logic in whichever functions are visible to the user
  2. ensure all the callsites use the default from (1)

> These two methods are then called from more than 30 places in code

I think the number will be much smaller if you only include the callsites which directly set eps.

alexsamardzic (Collaborator, Author):

What's the definition of "visible to the user"? Every function I mentioned above is visible to the user in the sense that it is possible to import the module and use the function - and all of them expose eps as an argument, so it's plainly impossible to find all call sites, let alone fix them. On the other hand, if we say quantize_ is the only API visible to users, then I believe eps is not visible to users at all (not 100% sure - maybe there is a config exposing it), which means the eps argument is used by torchao internally only; and since it apparently doesn't get used the right way, the best fix is simply to remove it as an argument everywhere and keep the check added by this PR.

Contributor:

We can treat "visible to the user" as "in the torchao repository".

> it's plainly impossible to find all call sites, let alone fix them.

It's definitely possible to fix this for callsites inside of torchao.

alexsamardzic (Collaborator, Author) commented Mar 17, 2025:

Please understand that I'm not into nit-picking. However, what we have here is a plain and simple case of handling an invalid argument value, and we're going to great lengths about what is, in a nutshell, an aesthetics argument.

The invalid eps value could be silently changed (and I think that's the best idea, as this change does "the right thing", i.e. it makes it possible to do the quantization later while keeping the quantized range as large as possible), with or without printing a warning to the user. Alternatively, we could throw an exception. Or we could decide that this is all just a contrived corner case that most likely won't happen in practice, and change nothing. With any of these, we resolve the issue once and for all. On the other hand, if I make the fix, for example, here (the reproducer is below), I really don't see how that is going to prevent another torchao developer down the road from making the same omission when writing a similar handler for a new config. So it's not that I'm lazy or anything about doing what you're suggesting; it's simply that I don't want to do it that way.

Script to reproduce the issue for integer quantization - here the problem manifests as overly coarse quantization; with the fix in this PR, the quantization is much better:
import torch

from torchao.quantization import (
    Int8WeightOnlyConfig,
    quantize_,
)

dtype = torch.float16
tiny = torch.finfo(dtype).tiny

# The weights are small multiples of the float16 smallest normal value,
# so the computed int8 scale lands in float16's subnormal range; without
# the eps fix this leads to very coarse quantization.
model = torch.nn.Linear(4, 1, dtype=dtype)  # weight shape (1, 4)
model.weight = torch.nn.Parameter(
    torch.tensor([[0, 10 * tiny, 20 * tiny, 30 * tiny]], dtype=dtype)
)
print(model.weight)

quantize_(model, Int8WeightOnlyConfig())
print(model.weight)

Labels: CLA Signed, float8, topic: bug fix

Development

Successfully merging this pull request may close these issues:

[QST] About NaNs generated during FP16->FP8 quantization