Description
The example script fx/quantized_resnet_test.py in the Torch-TensorRT repository fails to execute because it relies on the EXPLICIT_PRECISION attribute of the TensorRT Python API. This attribute was deprecated and has since been removed, so it is no longer available in recent versions of TensorRT (e.g., TensorRT 10.1).
The error traceback is as follows:
Traceback (most recent call last):
  File "/home/yz9qvs/projects/Torch-TensorRT/examples/fx/quantized_resnet_test.py", line 142, in <module>
    int8_trt = build_int8_trt(rn18)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/yz9qvs/projects/Torch-TensorRT/examples/fx/quantized_resnet_test.py", line 60, in build_int8_trt
    interp = TRTInterpreter(
  File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/fx/fx2trt.py", line 59, in __init__
    trt.NetworkDefinitionCreationFlag.EXPLICIT_PRECISION
AttributeError: type object 'tensorrt.tensorrt.NetworkDefinitionCreationFlag' has no attribute 'EXPLICIT_PRECISION'
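For what it's worth, the removal is easy to confirm in isolation. A quick check against the TensorRT 10.1 install in this environment shows the enum member is simply gone:

import tensorrt as trt

print(trt.__version__)  # 10.1.0 in this environment
# EXPLICIT_PRECISION is no longer defined on the enum in TensorRT 10.x:
print(hasattr(trt.NetworkDefinitionCreationFlag, "EXPLICIT_PRECISION"))  # False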
To Reproduce
Steps to reproduce the behavior:
- Clone the Torch-TensorRT repository.
- Navigate to the examples/fx directory.
- Run the script quantized_resnet_test.py:
python quantized_resnet_test.py
Expected behavior
The script should run successfully, converting the quantized ResNet model to TensorRT without encountering an error.
Environment
- Torch-TensorRT Version: 2.4.0
- PyTorch Version: 2.4.0
- CPU Architecture: amd64
- OS: Ubuntu 22.04
- How you installed PyTorch: pip
- Build command you used (if compiling from source): N/A
- Are you using local sources or building from archives: Building from local sources
- Python version: 3.10
- CUDA version: 11.8
- GPU models and configuration: NVIDIA A40
- Any other relevant information: Running TensorRT 10.1.0
Additional context
The issue seems to stem from the use of the EXPLICIT_PRECISION flag (deprecated and since removed) in the TRTInterpreter class within torch_tensorrt/fx/fx2trt.py. TensorRT 10.1 no longer defines this attribute, so its usage needs to be updated to match the current TensorRT API.
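As a starting point, here is a minimal sketch of a version-tolerant workaround. The create_network helper, its builder parameter, and the explicit_precision argument are illustrative names rather than the exact contents of fx2trt.py; the idea is simply to guard the flag with hasattr so the same code runs on TensorRT versions with and without the enum member:

import tensorrt as trt

def create_network(builder: trt.Builder, explicit_precision: bool = False) -> trt.INetworkDefinition:
    # Explicit batch mode, as used by the fx path; this flag still exists on
    # TensorRT 10.x, where it is deprecated and has no effect.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    # EXPLICIT_PRECISION has been removed from the enum in recent TensorRT
    # releases, so only set it where it is actually defined (older versions).
    if explicit_precision and hasattr(trt.NetworkDefinitionCreationFlag, "EXPLICIT_PRECISION"):
        flags |= 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_PRECISION)
    return builder.create_network(flags)

If I read the release notes correctly, the flag has been a no-op since TensorRT 8.0 (networks containing Q/DQ nodes are treated as explicitly quantized automatically), so skipping it on newer versions should not change behavior.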
This script is one of the very few examples that demonstrate how to quantize a model using FX and lower it to TensorRT, which makes it a valuable resource for users looking to implement this workflow.
If addressing this issue immediately is not feasible, it would be extremely helpful to provide an alternative example showing how to quantize a model and convert it to TensorRT using FX, so that users can continue with their workflows while awaiting a permanent fix.
THANKS!