---

## NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin
### Build Instructions
These instructions are for the latest [JetPack SDK](https://developer.nvidia.com/embedded/jetpack).

* For some Jetson devices, such as the Xavier series, higher power modes enable more CPU cores (up to 6) for compilation but consume more memory when building ONNX Runtime. Set `--parallel 1` in the build command if an out-of-memory error occurs and the system hangs; see the sketch below.

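As a hedged illustration of the workaround above, the following build invocation limits the build to a single job. The CUDA and cuDNN paths are assumptions based on a typical JetPack installation and should be adjusted to your system:

```
# Minimal sketch: single-job ONNX Runtime build on Jetson to avoid OOM
# in high power modes. The CUDA/cuDNN paths below are assumptions.
./build.sh --config Release --update --build --build_wheel \
    --use_cuda \
    --cuda_home /usr/local/cuda \
    --cudnn_home /usr/lib/aarch64-linux-gnu \
    --parallel 1
```
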
---

## TensorRT-RTX

See more information on the NV TensorRT RTX Execution Provider [here](../execution-providers/TensorRTRTX-ExecutionProvider.md).

### Prerequisites
{: .no_toc }

* Follow the [instructions for the CUDA execution provider](#cuda) to install CUDA and set up the environment variables.
* Install TensorRT for RTX from nvidia.com (TODO: add link when available)

> NOTE: Ensure the TensorRT-RTX `.dll` or `.so` libraries are in `PATH` or in the same folder as the application.

Replace `--tensorrt_home` and `--cuda_home` with the correct paths to the CUDA and TensorRT-RTX installations, as in the sketch below.

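Below is a minimal, hedged sketch of a Linux build invocation. The `--use_nv_tensorrt_rtx` flag name and both installation paths are assumptions, not a confirmed command; consult the execution provider documentation linked above:

```
# Sketch only: the EP build flag and both paths are assumptions.
./build.sh --config Release --parallel \
    --use_nv_tensorrt_rtx \
    --tensorrt_home /opt/tensorrt-rtx \
    --cuda_home /usr/local/cuda
```
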
---

## oneDNN

## OpenVINO

See more information on the OpenVINO™ Execution Provider [here](../execution-providers/OpenVINO-ExecutionProvider.md).

### Prerequisites
{: .no_toc }
1. Install the OpenVINO™ offline/online installer from the Intel<sup>®</sup> Distribution of OpenVINO™ Toolkit **Release 2025.3** for the appropriate OS and target hardware.
   Follow the [documentation](https://docs.openvino.ai/2025/index.html) for detailed instructions.

   *2025.3 is the currently recommended OpenVINO™ version. [OpenVINO™ 2025.0](https://docs.openvino.ai/2025/index.html) is the minimum required OpenVINO™ version.*

2. Install CMake 3.28 or higher. Download it from the [official CMake website](https://cmake.org/download/).

3. Configure the target hardware with the following instructions:

   * To configure Intel<sup>®</sup> Processor Graphics (GPU), follow these instructions: [Windows](https://docs.openvino.ai/2025/get-started/install-openvino/configurations/configurations-intel-gpu.html#windows), [Linux](https://docs.openvino.ai/2025/get-started/install-openvino/configurations/configurations-intel-gpu.html#linux)

4. Initialize the OpenVINO™ environment by running the setupvars script as shown below. This is a required step:

* For Windows:
```
C:\<openvino_install_directory>\setupvars.bat
```
* For Linux:
```
$ source <openvino_install_directory>/setupvars.sh
```
**Note:** If you are using a Dockerfile to build the OpenVINO™ Execution Provider, sourcing OpenVINO™ isn't possible within the Dockerfile. You have to explicitly set `LD_LIBRARY_PATH` to point to the OpenVINO™ libraries location, as sketched below. Refer to our [Dockerfile](https://github.com/microsoft/onnxruntime/blob/main/dockerfiles/Dockerfile.openvino).

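A hedged, shell-form sketch of what the Dockerfile needs to set (in a Dockerfile, use `ENV` rather than `export`); the install location is an assumption and should match where OpenVINO™ lives in your image:

```
# Sketch only: the OpenVINO install location is an assumption.
export INTEL_OPENVINO_DIR=/opt/intel/openvino
export LD_LIBRARY_PATH=${INTEL_OPENVINO_DIR}/runtime/lib/intel64:${LD_LIBRARY_PATH}
```
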
*Note: The default Windows CMake Generator is Visual Studio 2019, but you can also use the newer Visual Studio 2022 by passing `--cmake_generator "Visual Studio 17 2022"` to `.\build.bat`*
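For example, a hedged sketch of a Windows invocation combining this generator with the OpenVINO™ flags described below (the device and configuration choices are illustrative only):

```
.\build.bat --config RelWithDebInfo --use_openvino CPU --build_wheel --cmake_generator "Visual Studio 17 2022"
```
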
* `--build_wheel`: Creates a Python wheel file in the `dist/` folder. Enable it when building from source.
* `--use_openvino` builds the OpenVINO™ Execution Provider in ONNX Runtime.
* `<hardware_option>`: Specifies the default hardware target for building the OpenVINO™ Execution Provider. This can be overridden dynamically at runtime with another option (refer to [OpenVINO™-ExecutionProvider](../execution-providers/OpenVINO-ExecutionProvider.md#summary-of-options) for more details on dynamic device selection). Below are the options for different Intel target devices; a full build invocation sketch follows the examples below.

Refer to the [Intel GPU device naming convention](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#device-naming-convention) for specifying the correct hardware target in cases where both integrated and discrete GPUs co-exist.

The `DEVICE_TYPE` can be any device from this list: `CPU`, `GPU`, `NPU`.

A minimum of two devices must be specified for a valid HETERO, MULTI, or AUTO device build.

```
Examples: HETERO:GPU,CPU or AUTO:GPU,CPU or MULTI:GPU,CPU
```
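
As referenced above, here is a hedged sketch of a complete Linux build invocation; the device choices and build configuration are illustrative only:

```
# Sketch: build ONNX Runtime with the OpenVINO EP for a single device.
./build.sh --config RelWithDebInfo --use_openvino GPU --build_wheel

# Or target multiple devices with a HETERO/MULTI/AUTO option:
./build.sh --config RelWithDebInfo --use_openvino HETERO:GPU,CPU --build_wheel
```
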
#### Disable subgraph partition feature

* Builds the OpenVINO™ Execution Provider in ONNX Runtime with subgraph partitioning disabled.

* With this option enabled, fully supported models run on the OpenVINO™ Execution Provider; otherwise, they fall back entirely to the default CPU EP.

* To enable this feature at build time, use `--use_openvino <hardware_option>_NO_PARTITION`.

```
Usage: --use_openvino CPU_NO_PARTITION or --use_openvino GPU_NO_PARTITION or --use_openvino NPU_NO_PARTITION
```

For more information on the OpenVINO™ Execution Provider's ONNX layer support, topology support, and enabled Intel hardware, please refer to the [OpenVINO™-ExecutionProvider](../execution-providers/OpenVINO-ExecutionProvider.md#support-coverage) document.