Open
Labels: bug (Something isn't working)
Description
Summary
When using two consecutive reshape operations in a QAT model, such as two torch.unsqueeze calls, compilation raises the ValueError "Could not determine a unique scale for the quantization! Please check the ONNX graph of this model.", even though the two torch.unsqueeze operations sit between two QuantIdentity layers.
This seems to happen in particular when the n_bits parameter passed to compile_brevitas_qat_model is smaller than the bit width used for the QAT model: for instance, a bit width of 8 for the QuantIdentity layers but n_bits=6.
Description
- Versions affected: 1.5.0
- Python version: 3.9.16
Minimal code to reproduce the bug:
```python
import brevitas.nn as qnn
import torch
import torch.nn as nn

from concrete.ml.torch.compile import compile_brevitas_qat_model


class Unsqueeze(nn.Module):
    def __init__(self, bit_width):
        super().__init__()
        self.id1 = qnn.QuantIdentity(bit_width=bit_width)
        self.conv1 = qnn.QuantConv2d(1, 1, 1, bit_width=bit_width, bias=False)

    def forward(self, x):
        """Forward pass of the model."""
        x = self.id1(x)
        x = x.unsqueeze(1)
        x = x.unsqueeze(1)
        x = self.id1(x)
        x = self.conv1(x)
        return x


model = Unsqueeze(bit_width=8)
tensor_ = torch.randn(1, 200)

compile_brevitas_qat_model(model, tensor_, verbose=False, n_bits=8)
print("Compilation with 8 bits successful")

compile_brevitas_qat_model(model, tensor_, verbose=False, n_bits=7)
print("Compilation with 7 bits successful")

try:
    compile_brevitas_qat_model(model, tensor_, verbose=False, n_bits=6)
except Exception as e:
    print(e)
    print("Compilation with 6 bits failed")
```
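A possible workaround to experiment with (an untested sketch, not a confirmed fix): collapsing the two consecutive unsqueeze calls into a single reshape yields the same tensor while tracing to one reshape node in the exported ONNX graph instead of two, which may avoid the ambiguous-scale situation:

```python
import torch

x = torch.randn(1, 200)

# Two consecutive unsqueeze calls, as in the repro above:
# (1, 200) -> (1, 1, 200) -> (1, 1, 1, 200)
a = x.unsqueeze(1).unsqueeze(1)

# Equivalent single reshape producing the same result in one op
# (whether this avoids the compile error is an assumption, untested
# against compile_brevitas_qat_model)
b = x.reshape(x.shape[0], 1, 1, x.shape[1])

assert a.shape == b.shape == (1, 1, 1, 200)
assert torch.equal(a, b)
```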