Bug Description
Run: python examples/dynamo/torch_export_gpt2.py
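For context, the failing flow corresponds roughly to the sketch below (illustrative only, assuming the example exports GPT-2 with torch.export and compiles it through the dynamo frontend; the authoritative code is examples/dynamo/torch_export_gpt2.py and the generate() helper in examples/dynamo/utils.py):

```python
# Rough sketch of the example's flow (assumed structure; see
# examples/dynamo/torch_export_gpt2.py for the actual script).
import torch
import torch_tensorrt
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval().cuda()
input_ids = tokenizer("I enjoy walking with my cute dog", return_tensors="pt").input_ids.cuda()

# Export the model and compile it with the dynamo frontend.
ep = torch.export.export(model, (input_ids,), strict=False)
trt_model = torch_tensorrt.dynamo.compile(ep, inputs=[input_ids])

# The error below is raised the first time the compiled module is invoked
# from the generate() loop in examples/dynamo/utils.py.
out = trt_model(input_ids)
```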
The script fails with the following error:
Traceback (most recent call last):
File "/home/zewenl/Documents/pytorch/TensorRT/examples/dynamo/torch_export_gpt2.py", line 77, in <module>
trt_gen_tokens = generate(trt_model, input_ids, MAX_TOKENS, tokenizer.eos_token_id)
File "/home/zewenl/Documents/pytorch/TensorRT/examples/dynamo/utils.py", line 54, in generate
outputs = model(input_seq)
File "/home/zewenl/anaconda3/envs/trt-10.1-py310/lib/python3.10/site-packages/torch/fx/graph_module.py", line 824, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/zewenl/anaconda3/envs/trt-10.1-py310/lib/python3.10/site-packages/torch/fx/graph_module.py", line 400, in __call__
raise e
File "/home/zewenl/anaconda3/envs/trt-10.1-py310/lib/python3.10/site-packages/torch/fx/graph_module.py", line 387, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/zewenl/anaconda3/envs/trt-10.1-py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/zewenl/anaconda3/envs/trt-10.1-py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.40", line 6, in forward
File "/home/zewenl/anaconda3/envs/trt-10.1-py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/zewenl/anaconda3/envs/trt-10.1-py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/zewenl/Documents/pytorch/TensorRT/py/torch_tensorrt/_features.py", line 54, in wrapper
return f(*args, **kwargs)
File "/home/zewenl/Documents/pytorch/TensorRT/py/torch_tensorrt/dynamo/runtime/_TorchTensorRTModule.py", line 301, in forward
outputs: List[torch.Tensor] = torch.ops.tensorrt.execute_engine(
File "/home/zewenl/anaconda3/envs/trt-10.1-py310/lib/python3.10/site-packages/torch/_ops.py", line 1156, in __call__
return self._op(*args, **(kwargs or {}))
NotImplementedError: Could not run 'tensorrt::execute_engine' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'tensorrt::execute_engine' is only available for these backends: [Meta, NestedTensorCPU, NestedTensorCUDA, NestedTensorHIP, NestedTensorXLA, NestedTensorMPS, NestedTensorIPU, NestedTensorXPU, NestedTensorHPU, NestedTensorVE, NestedTensorLazy, NestedTensorMTIA, NestedTensorPrivateUse1, NestedTensorPrivateUse2, NestedTensorPrivateUse3, NestedTensorMeta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastMTIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
Meta: registered at /dev/null:214 [kernel]
NestedTensorCPU: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorCUDA: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorHIP: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorXLA: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorMPS: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorIPU: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorXPU: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorHPU: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorVE: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorLazy: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorMTIA: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorPrivateUse1: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorPrivateUse2: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorPrivateUse3: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
NestedTensorMeta: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:194 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:349 [backend fallback]
Named: registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:100 [backend fallback]
AutogradOther: registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradCPU: registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradCUDA: registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:75 [backend fallback]
AutogradXLA: registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:83 [backend fallback]
AutogradMPS: registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:91 [backend fallback]
AutogradXPU: registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradHPU: registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradLazy: registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:87 [backend fallback]
AutogradMTIA: registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:79 [backend fallback]
AutogradMeta: registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:95 [backend fallback]
AutogradNestedTensor: registered at core/runtime/register_jit_hooks.cpp:156 [nested kernel]
Tracer: registered at /pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:322 [backend fallback]
AutocastMTIA: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastXPU: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastMPS: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:165 [backend fallback]
FuncTorchBatched: registered at /pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:208 [backend fallback]
PythonTLSSnapshot: registered at /pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
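The dispatcher output above shows that tensorrt::execute_engine has no kernel for the CUDA backend, which suggests the Torch-TensorRT C++ runtime did not register its kernels in this environment. A quick diagnostic (a sketch, not part of the original report) to check whether the op is visible after importing torch_tensorrt:

```python
# Diagnostic sketch: inspect the dispatcher registrations for the
# tensorrt::execute_engine op after importing torch_tensorrt.
import torch
import torch_tensorrt  # importing this is what normally loads the runtime extension

# If the lookup fails, or the dump shows no kernel usable for CUDA tensors,
# the runtime library was likely not loaded. _dispatch_dump is a private
# PyTorch API, used here only for debugging.
print(torch.ops.tensorrt.execute_engine)
print(torch._C._dispatch_dump("tensorrt::execute_engine"))
```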
Environment
tensorrt 10.7.0
tensorrt_cu12 10.7.0
tensorrt-cu12-bindings 10.7.0
tensorrt-cu12-libs 10.7.0
torch 2.7.0.dev20250205+cu124
torch_tensorrt 2.7.0.dev0+fb6d4d3db
torchaudio 2.6.0.dev20250205+cu124
torchvision 0.22.0.dev20250205+cu124
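For reference, one way to reproduce this version listing (a sketch; the versions above may have been collected differently, e.g. via pip list):

```python
# One possible way to print the package versions listed above.
from importlib.metadata import version

for pkg in ("tensorrt", "torch", "torch_tensorrt", "torchaudio", "torchvision"):
    print(pkg, version(pkg))
```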