> NOTE: The TensorRT-RTX `.dll` (Windows) or `.so` (Linux) must be in `PATH` or in the same folder as the application.
## NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin
### Build Instructions
These instructions are for the latest [JetPack SDK](https://developer.nvidia.com/embedded/jetpack).
* For some Jetson devices, such as the Xavier series, a higher power mode uses more cores (up to 6) for compilation but consumes more resources when building ONNX Runtime. Set `--parallel 1` in the build command if OOM happens and the system hangs.
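As an illustration, a Jetson build with parallelism capped at 1 might look like the sketch below; the execution-provider flags and the CUDA/cuDNN/TensorRT paths are typical JetPack locations assumed for this example, not prescribed values:

```bash
# Sketch: cap build parallelism at 1 on a Jetson device to avoid OOM.
# The CUDA/cuDNN/TensorRT paths are assumed JetPack defaults; adjust as needed.
./build.sh --config Release --update --build --parallel 1 \
  --build_wheel \
  --use_tensorrt \
  --cuda_home /usr/local/cuda \
  --cudnn_home /usr/lib/aarch64-linux-gnu \
  --tensorrt_home /usr/lib/aarch64-linux-gnu
```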
## TensorRT-RTX
See more information on the NV TensorRT RTX Execution Provider [here](../execution-providers/TensorRTRTX-ExecutionProvider.md).
### Prerequisites
{: .no_toc }
* Follow the [instructions for the CUDA execution provider](#cuda) to install CUDA and set up the environment variables.
* Install TensorRT for RTX from nvidia.com (TODO: add link when available)
Replace `--tensorrt_home` and `--cuda_home` with the correct paths to the CUDA and TensorRT-RTX installations.
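For example, a build that points at both toolkits could look like the following sketch; both paths are placeholders, and the flag that enables the TensorRT-RTX EP should be taken from the linked EP documentation rather than from this example:

```bash
# Sketch: pass the CUDA and TensorRT-RTX locations to the build script.
# Both paths are placeholders; replace them with your actual install directories.
./build.sh --config Release --build_shared_lib --parallel \
  --cuda_home /usr/local/cuda \
  --tensorrt_home /opt/tensorrt-rtx
```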
## oneDNN
## OpenVINO

See more information on the OpenVINO™ Execution Provider [here](../execution-providers/OpenVINO-ExecutionProvider.md).
### Prerequisites
{: .no_toc }
1. Install the OpenVINO™ offline/online installer from the Intel<sup>®</sup> Distribution of OpenVINO™ Toolkit **Release 2025.3** for the appropriate OS and target hardware.
Follow the [documentation](https://docs.openvino.ai/2025/index.html) for detailed instructions.
*2025.3 is the currently recommended OpenVINO™ version; [OpenVINO™ 2025.0](https://docs.openvino.ai/2025/index.html) is the minimum OpenVINO™ version requirement.*
2. Install CMake 3.28 or higher. Download from the [official CMake website](https://cmake.org/download/).
3. Configure the target hardware with the following device-specific instructions:
* To configure Intel<sup>®</sup> Processor Graphics (GPU), please follow these instructions: [Windows](https://docs.openvino.ai/2025/get-started/install-openvino/configurations/configurations-intel-gpu.html#windows), [Linux](https://docs.openvino.ai/2025/get-started/install-openvino/configurations/configurations-intel-gpu.html#linux)
4. Initialize the OpenVINO™ environment by running the setupvars script as shown below. This is a required step:
* For Windows:
```
C:\<openvino_install_directory>\setupvars.bat
```
* For Linux:
```
$ source <openvino_install_directory>/setupvars.sh
```
**Note:** If you are using a dockerfile to use the OpenVINO™ Execution Provider, sourcing OpenVINO™ won't be possible within the dockerfile. You would have to explicitly set `LD_LIBRARY_PATH` to point to the OpenVINO™ libraries location. Refer to our [dockerfile](https://github.com/microsoft/onnxruntime/blob/main/dockerfiles/Dockerfile.openvino).
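A minimal sketch of that approach, assuming OpenVINO™ is unpacked at `/opt/intel/openvino` (the path and directory layout are assumptions; adjust them to your image):

```dockerfile
# Assumed unpack location; adjust to where OpenVINO lives in your image.
ENV INTEL_OPENVINO_DIR=/opt/intel/openvino
# Point the dynamic linker at the OpenVINO runtime libraries instead of
# sourcing setupvars.sh, which is not possible during a docker build.
ENV LD_LIBRARY_PATH=${INTEL_OPENVINO_DIR}/runtime/lib/intel64:${LD_LIBRARY_PATH}
```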
*Note: The default Windows CMake Generator is Visual Studio 2019, but you can also use the newer Visual Studio 2022 by passing `--cmake_generator "Visual Studio 17 2022"` to `.\build.bat`*
* `--build_wheel`: Creates a Python wheel file in the dist/ folder. Enable it when building from source.
* `--use_openvino`: Builds the OpenVINO™ Execution Provider in ONNX Runtime.
* `<hardware_option>`: Specifies the default hardware target for building the OpenVINO™ Execution Provider. This can be overridden dynamically at runtime with another option (refer to [OpenVINO™-ExecutionProvider](../execution-providers/OpenVINO-ExecutionProvider.md#summary-of-options) for more details on dynamic device selection). Below are the options for different Intel target devices.
Refer to the [Intel GPU device naming convention](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#device-naming-convention) for specifying the correct hardware target in cases where both integrated and discrete GPUs co-exist.
The DEVICE_TYPE can be any of the devices from this list: ['CPU', 'GPU', 'NPU']
A minimum of two devices should be specified for a valid HETERO, MULTI, or AUTO device build.
```
Examples: HETERO:GPU,CPU or AUTO:GPU,CPU or MULTI:GPU,CPU
```
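The format of these multi-device strings can be summarized with a small hypothetical helper; `compose_device_string` is illustrative code for this document, not part of ONNX Runtime or OpenVINO™:

```python
# Hypothetical helper illustrating the HETERO/MULTI/AUTO device-string format.
VALID_DEVICES = {"CPU", "GPU", "NPU"}
VALID_MODES = {"HETERO", "MULTI", "AUTO"}

def compose_device_string(mode: str, devices: list[str]) -> str:
    """Build a device string such as 'HETERO:GPU,CPU' from a mode and device list."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    if len(devices) < 2:
        raise ValueError("a minimum of two devices is required")
    unknown = set(devices) - VALID_DEVICES
    if unknown:
        raise ValueError(f"unknown devices: {sorted(unknown)}")
    return f"{mode}:{','.join(devices)}"

print(compose_device_string("HETERO", ["GPU", "CPU"]))  # prints HETERO:GPU,CPU
```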
#### Disable subgraph partition feature
* Builds the OpenVINO™ Execution Provider in ONNX Runtime with graph partitioning disabled. With this option, fully supported models run on the OpenVINO™ Execution Provider; otherwise they fall back completely to the default CPU EP.
* To enable this feature at build time, use `--use_openvino <hardware_option>_NO_PARTITION`.
```
Usage: --use_openvino CPU_NO_PARTITION or --use_openvino GPU_NO_PARTITION or --use_openvino NPU_NO_PARTITION
```
For more information on the OpenVINO™ Execution Provider's ONNX layer support, topology support, and enabled Intel hardware, please refer to [OpenVINO™-ExecutionProvider](../execution-providers/OpenVINO-ExecutionProvider.md#support-coverage).