Updated RQ signature #3441
Conversation
There's no test to catch the bug. Please add one. E.g., you can mock the call of Quantize_backward and check the argument types via call_args. This test would fail before this PR and pass with it.
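A minimal, self-contained sketch of such a test is below. The module layout and the names quantize_backward / run_backward_pass are placeholders rather than the project's actual API; a real test would patch the reference Quantize_backward at the location it is imported from.

```python
# Illustrative sketch only: the patched function and the caller are stand-ins
# defined in this file, not the project's real Quantize_backward.
from unittest import mock

import torch


def quantize_backward(grad_output, input_, input_low, input_range):
    """Placeholder for the backward kernel under test."""
    return grad_output, input_low, input_range


def run_backward_pass():
    """Placeholder for the code that is expected to call the kernel."""
    grad_output = torch.ones(1, 2)
    input_ = torch.zeros(1, 2)
    input_low = torch.tensor([[-0.5]])
    input_range = torch.tensor([[1.0]])
    return quantize_backward(grad_output, input_, input_low, input_range)


def test_backward_receives_tensor_arguments():
    # Patch the kernel, run the caller, then inspect call_args to verify
    # that every positional argument passed to it is a torch.Tensor.
    with mock.patch(f"{__name__}.quantize_backward", wraps=quantize_backward) as mocked:
        run_backward_pass()
        args, _kwargs = mocked.call_args
        assert all(isinstance(a, torch.Tensor) for a in args)
```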
The test was added.
self.input_low = torch.tensor([-0.5])
self.input_range = torch.tensor([1.0])
Suggested change:
- self.input_low = torch.tensor([-0.5])
- self.input_range = torch.tensor([1.0])
+ self.input_low = torch.tensor([[-0.5]])
+ self.input_range = torch.tensor([[1.0]])
I guess there's a bug in sum_like when the number of dimensions doesn't match. With these shapes the test passes for CUDA.
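For readers following the discussion, the effect can be reproduced with a simplified reduction helper. naive_sum_like below is only an illustration of the usual broadcast-gradient reduction pattern, not the project's sum_like; it shows how a reference tensor with a different number of dimensions ends up with an unexpected result shape.

```python
# Hypothetical sketch, NOT the project's sum_like: reduce `tensor` so that it
# matches the shape of `ref_tensor` (the usual broadcast-gradient reduction).
import torch


def naive_sum_like(tensor: torch.Tensor, ref_tensor: torch.Tensor) -> torch.Tensor:
    # Sums over every axis where the reference has size 1 -- this implicitly
    # assumes both tensors have the same number of dimensions.
    dims = tuple(i for i, size in enumerate(ref_tensor.shape) if size == 1)
    return tensor.sum(dim=dims, keepdim=True)


grad = torch.ones(1, 2)
print(naive_sum_like(grad, torch.zeros(1, 1)).shape)  # torch.Size([1, 1]) -- matches the reference
print(naive_sum_like(grad, torch.zeros(1)).shape)     # torch.Size([1, 2]) -- does NOT match shape [1]
```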
Tests passed with your fix. Thanks.
Do you understand why it didn't work without it? Was it valid behavior? Is some fix required in sum_like?
Not yet. I'll do my best to provide more details soon.
I've checked the test outputs regarding the issue. I think that sum_like operates correctly, and the reason for the failures was the incorrect input_low and input_range shapes. For example, if the input_ shape is [1, 2], then the input_low and input_range shapes should be [1, 1] each, not [1].
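A small stand-alone check of that shape argument, using plain PyTorch broadcasting rather than the project's code: autograd reduces a broadcast gradient back to the parameter's own shape, so a [1, 1] parameter keeps both dimensions of a [1, 2] input while a [1] parameter does not.

```python
# Plain-PyTorch illustration of the shape point above (not the project's code).
import torch

x = torch.randn(1, 2)

low_2d = torch.tensor([[-0.5]], requires_grad=True)  # shape [1, 1]
low_1d = torch.tensor([-0.5], requires_grad=True)    # shape [1]

(x - low_2d).sum().backward()
(x - low_1d).sum().backward()

print(low_2d.grad.shape)  # torch.Size([1, 1]) -- same number of dims as the input
print(low_1d.grad.shape)  # torch.Size([1])
```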
Please address your comment in the ticket.
LGTM
Changes
Updated the ReferenceQuantize.backward signature to align with the CUDA/CPU extensions signature.

Reason for changes

Related tickets

Tests
tests/torch/quantization/test_functions.py
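To make the motivation concrete, here is a purely illustrative sketch (the names and argument list are assumptions, not the project's actual API) of why a shared backward signature between the reference and the compiled CPU/CUDA paths is convenient: the caller can select either implementation without special-casing the reference one.

```python
# Hypothetical sketch: a reference backward that shares the argument order of a
# compiled kernel, so either can be selected by the same caller.
import torch


def reference_quantize_backward(grad_output, input_, input_low, input_range):
    """Pure-PyTorch stand-in sharing the (assumed) extension argument order."""
    grad_input = grad_output.clone()
    grad_low = torch.zeros_like(input_low)
    grad_range = torch.zeros_like(input_range)
    return grad_input, grad_low, grad_range


def select_backward(extension_kernel=None):
    # Return the compiled kernel when available, otherwise the reference one.
    return extension_kernel if extension_kernel is not None else reference_quantize_backward


backward = select_backward()
grads = backward(torch.ones(1, 2), torch.zeros(1, 2), torch.tensor([[-0.5]]), torch.tensor([[1.0]]))
print([g.shape for g in grads])
```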