TensorRT QuantDequantLinearHelper Error: Scale Coefficients Must All Be Positive #454

@jhcho90

Description

Thank you for your help.

I used the onnx_ptq example to quantize my own model and successfully obtained quantized results.

However, when I tried to evaluate latency on Orin-X, it failed with the following error:

[E] [TRT] ModelImporter.cpp:776: ERROR: builtin_op_importers.cpp:1197 In function QuantDequantLinearHelper:
[6] Assertion failed: scaleAllPositive && "Scale coefficients must all be positive"

My TensorRT version is 8.6.1, and Orin-X uses the same version.
The ONNX opset is 21.

Thank you.
