Commit 2be7202

[TensorRT EP] Update doc for ORT 1.22 (#24727)
### Description

Preview:
* https://yf711.github.io/onnxruntime/docs/build/eps.html#tensorrt
* https://yf711.github.io/onnxruntime/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements
1 parent 5feef70 commit 2be7202

File tree

2 files changed: +15 −9 lines


docs/build/eps.md

Lines changed: 8 additions & 7 deletions

```diff
@@ -46,7 +46,7 @@ The onnxruntime code will look for the provider shared libraries in the same loc
 {: .no_toc }
 
 * Install [CUDA](https://developer.nvidia.com/cuda-toolkit) and [cuDNN](https://developer.nvidia.com/cudnn)
-  * The CUDA execution provider for ONNX Runtime is built and tested with CUDA 11.8, 12.2 and cuDNN 8.9. Check [here](../execution-providers/CUDA-ExecutionProvider.md#requirements) for more version information.
+  * The CUDA execution provider for ONNX Runtime is built and tested with CUDA 12.x and cuDNN 9. Check [here](../execution-providers/CUDA-ExecutionProvider.md#requirements) for more version information.
 * The path to the CUDA installation must be provided via the CUDA_HOME environment variable, or the `--cuda_home` parameter. The installation directory should contain `bin`, `include` and `lib` sub-directories.
 * The path to the CUDA `bin` directory must be added to the PATH environment variable so that `nvcc` is found.
 * The path to the cuDNN installation must be provided via the CUDNN_HOME environment variable, or `--cudnn_home` parameter. In Windows, the installation directory should contain `bin`, `include` and `lib` sub-directories.
@@ -110,7 +110,7 @@ See more information on the TensorRT Execution Provider [here](../execution-prov
 
 * Follow [instructions for CUDA execution provider](#cuda) to install CUDA and cuDNN, and setup environment variables.
 * Follow [instructions for installing TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/latest/installing-tensorrt/installing.html)
-  * The TensorRT execution provider for ONNX Runtime is built and tested with TensorRT 10.8.
+  * The TensorRT execution provider for ONNX Runtime is built and tested with TensorRT 10.9.
 * The path to TensorRT installation must be provided via the `--tensorrt_home` parameter.
 * ONNX Runtime uses [TensorRT built-in parser](https://developer.nvidia.com/tensorrt/download) from `tensorrt_home` by default.
 * To use open-sourced [onnx-tensorrt](https://github.com/onnx/onnx-tensorrt/tree/main) parser instead, add `--use_tensorrt_oss_parser` parameter in build commands below.
@@ -123,14 +123,15 @@ See more information on the TensorRT Execution Provider [here](../execution-prov
   * i.e It's version-matched if assigning `tensorrt_home` with path to TensorRT-10.9 built-in binaries and onnx-tensorrt [10.9-GA branch](https://github.com/onnx/onnx-tensorrt/tree/release/10.9-GA) specified in [cmake/deps.txt](https://github.com/microsoft/onnxruntime/blob/main/cmake/deps.txt).
 
 
-### **[Note to ORT 1.21.0 open-sourced parser users]**
+### **[Note to ORT 1.21/1.22 open-sourced parser users]**
 
-* ORT 1.21.0 links against onnx-tensorrt 10.8-GA, which requires upcoming onnx 1.18.
-* Here's a temporarily fix to preview on onnx-tensorrt 10.8-GA (or newer) when building ORT 1.21.0:
+* ORT 1.21/1.22 link against onnx-tensorrt 10.8-GA/10.9-GA, which requires the newly released onnx 1.18.
+* Here's a temporary fix to preview onnx-tensorrt 10.8-GA/10.9-GA when building ORT 1.21/1.22:
   * Replace the [onnx line in cmake/deps.txt](https://github.com/microsoft/onnxruntime/blob/rel-1.21.0/cmake/deps.txt#L38)
-    with `onnx;https://github.com/onnx/onnx/archive/f22a2ad78c9b8f3bd2bb402bfce2b0079570ecb6.zip;324a781c31e30306e30baff0ed7fe347b10f8e3c`
+    with `onnx;https://github.com/onnx/onnx/archive/e709452ef2bbc1d113faf678c24e6d3467696e83.zip;c0b9f6c29029e13dea46b7419f3813f4c2ca7db8`
   * Download [this](https://github.com/microsoft/onnxruntime/blob/7b2733a526c12b5ef4475edd47fd9997ebc2b2c6/cmake/patches/onnx/onnx.patch) as raw file and save file to [cmake/patches/onnx/onnx.patch](https://github.com/microsoft/onnxruntime/blob/rel-1.21.0/cmake/patches/onnx/onnx.patch) (do not copy/paste from browser, as it might alter line break type)
-  * Build ORT 1.21.0 with trt-related flags above (including `--use_tensorrt_oss_parser`)
+  * Build ORT with the TRT-related flags above (including `--use_tensorrt_oss_parser`)
+* [onnx 1.18](https://github.com/onnx/onnx/releases/tag/v1.18.0) is supported by the latest ORT main branch. Please check out the main branch and build ORT-TRT with `--use_tensorrt_oss_parser` to enable the OSS parser with full onnx 1.18 support.
 
 ### Build Instructions
 {: .no_toc }
```
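The deps.txt swap in the note above can be sketched as a small shell script. This is an illustrative sketch only: it runs against a scratch copy of the file (with a dummy first dependency line), not a real checkout, and the build command at the end is a placeholder assumption, not part of this commit.

```shell
# Illustrative sketch of the temporary onnx-pin fix described above.
# Operates on a scratch copy so nothing in a real checkout is touched;
# the dummy first line and all paths are assumptions for the demo.
mkdir -p deps_fix_demo
cat > deps_fix_demo/deps.txt <<'EOF'
dummy_dep;https://example.invalid/dummy.zip;0000000000000000000000000000000000000000
onnx;https://github.com/onnx/onnx/archive/f22a2ad78c9b8f3bd2bb402bfce2b0079570ecb6.zip;324a781c31e30306e30baff0ed7fe347b10f8e3c
EOF

# Point the onnx entry at the pinned commit archive from the note above.
new_onnx='onnx;https://github.com/onnx/onnx/archive/e709452ef2bbc1d113faf678c24e6d3467696e83.zip;c0b9f6c29029e13dea46b7419f3813f4c2ca7db8'
sed -i "s|^onnx;.*|$new_onnx|" deps_fix_demo/deps.txt
grep '^onnx;' deps_fix_demo/deps.txt

# In a real onnxruntime checkout the file is cmake/deps.txt, and the build
# step would then look something like (not executed here; paths are placeholders):
#   ./build.sh --config Release --parallel --use_tensorrt \
#     --cuda_home /usr/local/cuda --cudnn_home /usr/local/cuda \
#     --tensorrt_home /opt/TensorRT-10.9 --use_tensorrt_oss_parser
```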

docs/execution-providers/TensorRT-ExecutionProvider.md

Lines changed: 7 additions & 2 deletions

```diff
@@ -27,11 +27,16 @@ See [Build instructions](../build/eps.md#tensorrt).
 
 ## Requirements
 
-Note: Starting with version 1.19, **CUDA 12** becomes the default version when distributing ONNX Runtime GPU packages.
+Note:
+
+Starting with version 1.19, **CUDA 12** became the default version for distributed ONNX Runtime GPU packages.
+
+Starting with ORT 1.22, only CUDA 12 GPU packages are released.
 
 | ONNX Runtime | TensorRT | CUDA |
 | :----------- | :------- | :------------------ |
-| main         | 10.9     | **12.0-12.8**, 11.8 |
+| main         | 10.9     | **12.0-12.8**       |
+| 1.22         | 10.9     | **12.0-12.8**       |
 | 1.21         | 10.8     | **12.0-12.8**, 11.8 |
 | 1.20         | 10.4     | **12.0-12.6**, 11.8 |
 | 1.19         | 10.2     | **12.0-12.6**, 11.8 |
```
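Not part of the diff above, but as context for the requirements table: a minimal sketch of selecting the TensorRT EP at session creation with CUDA and CPU fallbacks, as on an ORT 1.22 / CUDA 12 setup. The model path, cache directory, and option values are assumptions, and session creation is guarded so the sketch degrades gracefully where no TensorRT-enabled onnxruntime build is installed.

```python
# Sketch: preferred-provider order for a CUDA 12 GPU package (ORT >= 1.22).
# Option names are TensorRT EP session options; the values are assumptions.
providers = [
    ("TensorrtExecutionProvider", {
        "trt_engine_cache_enable": True,         # reuse built TRT engines
        "trt_engine_cache_path": "./trt_cache",  # hypothetical cache dir
    }),
    "CUDAExecutionProvider",  # fallback for nodes TensorRT cannot take
    "CPUExecutionProvider",
]

try:
    import onnxruntime as ort
    # "model.onnx" is a placeholder path, not a file shipped with ORT.
    session = ort.InferenceSession("model.onnx", providers=providers)
except Exception:
    session = None  # no TensorRT-enabled onnxruntime build available here
```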
