Commit e6c7ee1

Revert "docs: Update OpenVINO EP"

This reverts commit d50ebdf.

1 parent d50ebdf commit e6c7ee1

File tree

2 files changed: +297 -419 lines changed

docs/build/eps.md

Lines changed: 93 additions & 32 deletions
@@ -161,6 +161,65 @@ Dockerfile instructions are available [here](https://github.com/microsoft/onnxru
 
---
 
+## NVIDIA TensorRT RTX
+
+See more information on the TensorRT RTX Execution Provider [here](../execution-providers/TensorRTRTX-ExecutionProvider.md).
+
+### Minimum requirements
+
+| ONNX Runtime | TensorRT-RTX | CUDA Toolkit |
+| :----------- | :----------- | :----------- |
+| main branch  | 1.1          | 12.9         |
+| 1.23         | 1.1          | 12.9         |
+| 1.22         | 1.0          | 12.8         |
+
+The TensorRT RTX EP supports RTX GPUs based on the Ampere architecture (GeForce RTX 30xx) and later, with a minimum recommended NVIDIA driver version of 555.85.
+
+### Prerequisites
+* Install git, CMake, and Python 3.12
+* Install the latest [NVIDIA driver](https://www.nvidia.com/en-us/drivers/)
+* Install [CUDA Toolkit 12.9](https://developer.nvidia.com/cuda-12-9-1-download-archive)
+* Install [TensorRT RTX](https://docs.nvidia.com/deeplearning/tensorrt-rtx/latest/installing-tensorrt-rtx/installing.html)
+* For Windows only, install [Visual Studio](https://visualstudio.microsoft.com/downloads/)
+* Put the TensorRT-RTX DLLs on `PATH`, or place them in the same folder as the application executable
+
+```sh
+git clone https://github.com/microsoft/onnxruntime.git
+cd onnxruntime
+```
+
+### Windows
+
+```powershell
+.\build.bat --config Release --build_dir build --parallel --use_nv_tensorrt_rtx --tensorrt_rtx_home "path\to\tensorrt-rtx" --cuda_home "path\to\cuda\home" --cmake_generator "Visual Studio 17 2022" --build_shared_lib --skip_tests --build --update --use_vcpkg
+```
+
+### Linux
+
+```sh
+./build.sh --config Release --build_dir build --parallel --use_nv_tensorrt_rtx --tensorrt_rtx_home "path/to/tensorrt-rtx" --cuda_home "path/to/cuda/home" --build_shared_lib --skip_tests --build --update
+```
+
+### Run unit tests
+```powershell
+.\build\Release\Release\onnxruntime_test_all.exe --gtest_filter=*NvExecutionProviderTest.*
+```
+
+### Python wheel
+
+```powershell
+# build the python wheel
+.\build.bat --config Release --build_dir build --parallel --use_nv_tensorrt_rtx --tensorrt_rtx_home "path\to\tensorrt-rtx" --cuda_home "path\to\cuda\home" --cmake_generator "Visual Studio 17 2022" --build_shared_lib --skip_tests --build_wheel
+
+# install
+pip install "build\Release\Release\dist\onnxruntime-1.23.0-cp312-cp312-win_amd64.whl"
+```
+
+> NOTE: The TensorRT-RTX .dll or .so files must be on `PATH` or in the same folder as the application.
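+
+After installing the wheel, a quick sanity check from Python confirms that the EP was compiled in. This is a minimal sketch, assuming the provider registers as `NvTensorRTRTXExecutionProvider` (check the output of `get_available_providers()` for the exact string in your build) and a hypothetical `model.onnx`:
+
+```python
+import onnxruntime as ort
+
+# On Windows, if the TensorRT-RTX DLLs are not on PATH, expose them to this
+# process before creating a session (hypothetical install path):
+# import os; os.add_dll_directory(r"C:\path\to\tensorrt-rtx\lib")
+
+# List every EP compiled into this build.
+print(ort.get_available_providers())
+
+# Create a session that prefers TensorRT RTX and falls back to CPU.
+session = ort.InferenceSession(
+    "model.onnx",
+    providers=["NvTensorRTRTXExecutionProvider", "CPUExecutionProvider"],
+)
+print(session.get_providers())
+```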
+
+---
+
## NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin

### Build Instructions
@@ -235,20 +294,7 @@ These instructions are for the latest [JetPack SDK](https://developer.nvidia.com
 
* For a portion of Jetson devices, like the Xavier series, a higher power mode enables more cores (up to 6) for the build, but it also consumes more resources when building ONNX Runtime. Set `--parallel 1` in the build command if OOM occurs and the system hangs.
 
-## TensorRT-RTX
-
-See more information on the NV TensorRT RTX Execution Provider [here](../execution-providers/TensorRTRTX-ExecutionProvider.md).
-
-### Prerequisites
-{: .no_toc }
-
-* Follow [instructions for CUDA execution provider](#cuda) to install CUDA and set up environment variables.
-* Install TensorRT for RTX from nvidia.com (TODO: add link when available)
-
-### Build Instructions
-{: .no_toc }
-`build.bat --config Release --parallel 32 --build_dir _build --build_shared_lib --use_nv_tensorrt_rtx --tensorrt_home "C:\dev\TensorRT-RTX-1.1.0.3" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.9" --cmake_generator "Visual Studio 17 2022" --use_vcpkg`
-Replace --tensorrt_home and --cuda_home with the correct paths to the CUDA and TensorRT-RTX installations.
+---

## oneDNN

@@ -291,20 +337,19 @@ See more information on the OpenVINO™ Execution Provider [here](../execution-p
### Prerequisites
{: .no_toc }
 
-1. Install the OpenVINO™ offline/online installer from the Intel<sup>®</sup> Distribution of OpenVINO™ Toolkit **Release 2025.3** for the appropriate OS and target hardware:
-   * [Windows - CPU, GPU, NPU](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html?PACKAGE=OPENVINO_BASE&VERSION=v_2025_3_0&OP_SYSTEM=WINDOWS&DISTRIBUTION=ARCHIVE)
-   * [Linux - CPU, GPU, NPU](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html?PACKAGE=OPENVINO_BASE&VERSION=v_2025_3_0&OP_SYSTEM=LINUX&DISTRIBUTION=ARCHIVE)
-
-   Follow the [documentation](https://docs.openvino.ai/2025/index.html) for detailed instructions.
-
-   *2025.3 is the currently recommended OpenVINO™ version. [OpenVINO™ 2025.0](https://docs.openvino.ai/2025/index.html) is the minimum OpenVINO™ version requirement.*
+1. Install the OpenVINO™ offline/online installer from the Intel<sup>®</sup> Distribution of OpenVINO™ Toolkit **Release 2024.3** for the appropriate OS and target hardware:
+   * [Windows - CPU, GPU, NPU](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html?PACKAGE=OPENVINO_BASE&VERSION=v_2024_3_0&OP_SYSTEM=WINDOWS&DISTRIBUTION=ARCHIVE)
+   * [Linux - CPU, GPU, NPU](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html?PACKAGE=OPENVINO_BASE&VERSION=v_2024_3_0&OP_SYSTEM=LINUX&DISTRIBUTION=ARCHIVE)
+
+   Follow the [documentation](https://docs.openvino.ai/2024/home.html) for detailed instructions.
 
-2. Install CMake 3.28 or higher. Download it from the [official CMake website](https://cmake.org/download/).
+   *2024.5 is the currently recommended OpenVINO™ version. [OpenVINO™ 2024.5](https://docs.openvino.ai/2024/index.html) is the minimum OpenVINO™ version requirement.*
 
-3. Configure the target hardware with specific follow-on instructions:
-   * To configure Intel<sup>®</sup> Processor Graphics (GPU), please follow these instructions: [Windows](https://docs.openvino.ai/2025/get-started/install-openvino/configurations/configurations-intel-gpu.html#windows), [Linux](https://docs.openvino.ai/2025/get-started/install-openvino/configurations/configurations-intel-gpu.html#linux)
+2. Configure the target hardware with specific follow-on instructions:
+   * To configure Intel<sup>®</sup> Processor Graphics (GPU), please follow these instructions: [Windows](https://docs.openvino.ai/2024/get-started/configurations/configurations-intel-gpu.html#windows), [Linux](https://docs.openvino.ai/2024/get-started/configurations/configurations-intel-gpu.html#linux)
 
-4. Initialize the OpenVINO™ environment by running the setupvars script as shown below. This is a required step:
+
+3. Initialize the OpenVINO™ environment by running the setupvars script as shown below. This is a required step:
* For Windows:
```
C:\<openvino_install_directory>\setupvars.bat
@@ -313,30 +358,30 @@ See more information on the OpenVINO™ Execution Provider [here](../execution-p
```
$ source <openvino_install_directory>/setupvars.sh
```
-
+**Note:** If you are using a dockerfile to use the OpenVINO™ Execution Provider, sourcing OpenVINO™ won't be possible within the dockerfile. You would have to explicitly set LD_LIBRARY_PATH to point to the OpenVINO™ libraries location. Refer to our [dockerfile](https://github.com/microsoft/onnxruntime/blob/main/dockerfiles/Dockerfile.openvino).
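+
+A quick way to confirm that the environment is initialized correctly (and, once built with `--use_openvino` as described below, that the EP is present) is to check the registered providers from Python. A minimal sketch:
+
+```python
+import onnxruntime as ort
+
+# If setupvars was sourced (or LD_LIBRARY_PATH points at the OpenVINO
+# libraries), a wheel built with --use_openvino lists the OpenVINO EP here.
+print(ort.get_available_providers())
+assert "OpenVINOExecutionProvider" in ort.get_available_providers()
+```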
 
### Build Instructions
{: .no_toc }
 
#### Windows
 
```
-.\build.bat --config Release --use_openvino <hardware_option> --build_shared_lib --build_wheel
+.\build.bat --config RelWithDebInfo --use_openvino <hardware_option> --build_shared_lib --build_wheel
```
 
*Note: The default Windows CMake generator is Visual Studio 2019, but you can also use the newer Visual Studio 2022 by passing `--cmake_generator "Visual Studio 17 2022"` to `.\build.bat`.*
 
#### Linux
 
```bash
-./build.sh --config Release --use_openvino <hardware_option> --build_shared_lib --build_wheel
+./build.sh --config RelWithDebInfo --use_openvino <hardware_option> --build_shared_lib --build_wheel
```
 
* `--build_wheel` creates a Python wheel file in the dist/ folder. Enable it when building from source.
* `--use_openvino` builds the OpenVINO™ Execution Provider in ONNX Runtime.
* `<hardware_option>`: Specifies the default hardware target for building the OpenVINO™ Execution Provider. This can be overridden dynamically at runtime with another option (refer to [OpenVINO™-ExecutionProvider](../execution-providers/OpenVINO-ExecutionProvider.md#summary-of-options) for more details on dynamic device selection). Below are the options for different Intel target devices.
 
-Refer to the [Intel GPU device naming convention](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#device-naming-convention) for specifying the correct hardware target in cases where both integrated and discrete GPUs co-exist.
+Refer to the [Intel GPU device naming convention](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#device-naming-convention) for specifying the correct hardware target in cases where both integrated and discrete GPUs co-exist.
 
| Hardware Option | Target Device |
| --------------- | ------------- |
@@ -345,18 +390,34 @@ Refer to [Intel GPU device naming convention](https://docs.openvino.ai/2025/open
| <code>GPU.0</code> | Intel<sup>®</sup> Integrated Graphics |
| <code>GPU.1</code> | Intel<sup>®</sup> Discrete Graphics |
| <code>NPU</code> | Intel<sup>®</sup> Neural Processor Unit |
+| <code>HETERO:DEVICE_TYPE_1,DEVICE_TYPE_2,DEVICE_TYPE_3...</code> | All Intel<sup>®</sup> silicon mentioned above |
+| <code>MULTI:DEVICE_TYPE_1,DEVICE_TYPE_2,DEVICE_TYPE_3...</code> | All Intel<sup>®</sup> silicon mentioned above |
+| <code>AUTO:DEVICE_TYPE_1,DEVICE_TYPE_2,DEVICE_TYPE_3...</code> | All Intel<sup>®</sup> silicon mentioned above |
+
+Specifying the hardware target for a HETERO, MULTI, or AUTO device build:
 
+HETERO:DEVICE_TYPE_1,DEVICE_TYPE_2,DEVICE_TYPE_3...
+The DEVICE_TYPE can be any of the devices in this list: ['CPU', 'GPU', 'NPU']
+
+A minimum of two devices should be specified for a valid HETERO, MULTI, or AUTO device build.
+
+```
+Examples: HETERO:GPU,CPU or AUTO:GPU,CPU or MULTI:GPU,CPU
+```
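+
+The same device strings, including the HETERO/MULTI/AUTO forms, can also be supplied at runtime through the `device_type` provider option, overriding the build-time default. A minimal sketch, assuming a wheel built with `--use_openvino` and a hypothetical `model.onnx`:
+
+```python
+import onnxruntime as ort
+
+# Run supported subgraphs on the GPU and let the CPU handle the rest.
+session = ort.InferenceSession(
+    "model.onnx",
+    providers=["OpenVINOExecutionProvider"],
+    provider_options=[{"device_type": "HETERO:GPU,CPU"}],
+)
+print(session.get_providers())
+```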
 
#### Disable subgraph partition Feature
-* Builds the OpenVINO™ Execution Provider in ONNX Runtime with graph partitioning disabled, which will run fully supported models on the OpenVINO Execution Provider; otherwise they fall back entirely to the default CPU EP.
+* Builds the OpenVINO™ Execution Provider in ONNX Runtime with subgraph partitioning disabled.
+
+* With this option enabled, fully supported models run on the OpenVINO Execution Provider; otherwise they fall back entirely to the default CPU EP.
 
* To enable this feature at build time, use `--use_openvino <hardware_option>_NO_PARTITION`.
 
```
-Usage: --use_openvino CPU_NO_PARTITION or --use_openvino GPU_NO_PARTITION or --use_openvino NPU_NO_PARTITION
+Usage: --use_openvino CPU_FP32_NO_PARTITION or --use_openvino GPU_FP32_NO_PARTITION or
+--use_openvino GPU_FP16_NO_PARTITION
```
 
-For more information on the OpenVINO™ Execution Provider's ONNX layer support, topology support, and enabled Intel hardware, please refer to the document [OpenVINO™-ExecutionProvider](../execution-providers/OpenVINO-ExecutionProvider.md#support-coverage)
+For more information on the OpenVINO™ Execution Provider's ONNX layer support, topology support, and enabled Intel hardware, please refer to the document [OpenVINO™-ExecutionProvider](../execution-providers/OpenVINO-ExecutionProvider.md)
 
---
