10 changes: 7 additions & 3 deletions docs/build/eps.md
@@ -243,12 +243,16 @@ See more information on the NV TensorRT RTX Execution Provider [here](../executi
{: .no_toc }

* Follow [instructions for CUDA execution provider](#cuda) to install CUDA and setup environment variables.
-* Intall TensorRT for RTX from nvidia.com (TODO: add link when available)
+* Install TensorRT for RTX from [here](https://developer.nvidia.com/tensorrt-rtx)

### Build Instructions
{: .no_toc }
-`build.bat --config Release --parallel 32 --build_dir _build --build_shared_lib --use_nv_tensorrt_rtx --tensorrt_home "C:\dev\TensorRT-RTX-1.1.0.3" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.9" --cmake_generator "Visual Studio 17 2022" --use_vcpkg`
-Replace the --tensorrt_home and --cuda_home with correct paths to CUDA and TensorRT-RTX installations.

+```bash
+`build.bat --config Release --parallel 32 --build_dir _build --build_shared_lib --use_nv_tensorrt_rtx --tensorrt_rtx_home <path to TensorRT for RTX home> --cuda_home <path to CUDA home> --cmake_generator "Visual Studio 17 2022" --use_vcpkg`
+```
+Update the --tensorrt_rtx_home and --cuda_home with correct paths to CUDA and TensorRT-RTX installations.

Copilot AI · Jul 14, 2025: Remove the inline backticks around the build command inside the code block to avoid rendering issues.

Suggested change:
-`build.bat --config Release --parallel 32 --build_dir _build --build_shared_lib --use_nv_tensorrt_rtx --tensorrt_rtx_home <path to TensorRT for RTX home> --cuda_home <path to CUDA home> --cmake_generator "Visual Studio 17 2022" --use_vcpkg`
+build.bat --config Release --parallel 32 --build_dir _build --build_shared_lib --use_nv_tensorrt_rtx --tensorrt_rtx_home <path to TensorRT for RTX home> --cuda_home <path to CUDA home> --cmake_generator "Visual Studio 17 2022" --use_vcpkg
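For reference, a concrete invocation of the new command might look like the sketch below. The TensorRT-RTX and CUDA paths are illustrative (borrowed from the removed example line); point them at your own installations.

```bash
:: Illustrative paths only -- substitute your local TensorRT-RTX and CUDA installs.
build.bat --config Release --parallel 32 --build_dir _build --build_shared_lib ^
  --use_nv_tensorrt_rtx ^
  --tensorrt_rtx_home "C:\dev\TensorRT-RTX-1.1.0.3" ^
  --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.9" ^
  --cmake_generator "Visual Studio 17 2022" --use_vcpkg
```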

## oneDNN

2 changes: 1 addition & 1 deletion docs/execution-providers/Azure-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: Cloud - Azure
description: Instructions to infer an ONNX model remotely with an Azure endpoint
parent: Execution Providers
-nav_order: 13
+nav_order: 801
redirect_from: /docs/reference/execution-providers/Azure-ExecutionProvider
---

2 changes: 1 addition & 1 deletion docs/execution-providers/CUDA-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: NVIDIA - CUDA
description: Instructions to execute ONNX Runtime applications with CUDA
parent: Execution Providers
-nav_order: 1
+nav_order: 3
redirect_from: /docs/reference/execution-providers/CUDA-ExecutionProvider
---

Copilot AI · Jul 14, 2025: [nitpick] This nav_order uses a single-digit value while others follow a three-digit scheme; consider aligning for consistency.

Suggested change:
-nav_order: 3
+nav_order: 003
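These pages use Just the Docs-style front matter, where sibling pages sort by ascending nav_order. The new values appear to reserve a block of numbers per vendor or platform family (101/102 Intel, 201 DirectML, 301 QNN, 401 NNAPI, 501 CoreML, 601 XNNPACK, 701–703 AMD, 801 Azure, 901 community-maintained), leaving room to slot a new provider into its family without renumbering the rest. A minimal sketch for a hypothetical new page (title and value are illustrative):

```yaml
---
# Hypothetical new Intel EP page: 103 sorts it directly after
# oneDNN (102) without touching any other provider's nav_order.
title: Intel - ExampleEP
parent: Execution Providers
nav_order: 103
---
```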

2 changes: 1 addition & 1 deletion docs/execution-providers/CoreML-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: Apple - CoreML
description: Instructions to execute ONNX Runtime with CoreML
parent: Execution Providers
-nav_order: 8
+nav_order: 501
redirect_from: /docs/reference/execution-providers/CoreML-ExecutionProvider
---
{::options toc_levels="2" /}
2 changes: 1 addition & 1 deletion docs/execution-providers/DirectML-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: Windows - DirectML
description: Instructions to execute ONNX Runtime with the DirectML execution provider
parent: Execution Providers
-nav_order: 5
+nav_order: 201
redirect_from: /docs/reference/execution-providers/DirectML-ExecutionProvider
---

2 changes: 1 addition & 1 deletion docs/execution-providers/EP-Context-Design.md
@@ -2,7 +2,7 @@
title: EP Context Design
description: ONNX Runtime EP Context Cache Feature Design
parent: Execution Providers
-nav_order: 16
+nav_order: 99902
redirect_from: /docs/reference/execution-providers/EP-Context-Design
---

2 changes: 1 addition & 1 deletion docs/execution-providers/MIGraphX-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: AMD - MIGraphX
description: Instructions to execute ONNX Runtime with the AMD MIGraphX execution provider
parent: Execution Providers
-nav_order: 11
+nav_order: 702
redirect_from: /docs/reference/execution-providers/MIGraphX-ExecutionProvider
---

2 changes: 1 addition & 1 deletion docs/execution-providers/NNAPI-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: Android - NNAPI
description: Instructions to execute ONNX Runtime with the NNAPI execution provider
parent: Execution Providers
-nav_order: 7
+nav_order: 401
redirect_from: /docs/reference/execution-providers/NNAPI-ExecutionProvider
---
{::options toc_levels="2" /}
2 changes: 1 addition & 1 deletion docs/execution-providers/OpenVINO-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: Intel - OpenVINO™
description: Instructions to execute OpenVINO™ Execution Provider for ONNX Runtime.
parent: Execution Providers
-nav_order: 3
+nav_order: 101
redirect_from: /docs/reference/execution-providers/OpenVINO-ExecutionProvider
---

2 changes: 1 addition & 1 deletion docs/execution-providers/QNN-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: Qualcomm - QNN
description: Execute ONNX models with QNN Execution Provider
parent: Execution Providers
-nav_order: 6
+nav_order: 301
redirect_from: /docs/reference/execution-providers/QNN-ExecutionProvider
---

2 changes: 1 addition & 1 deletion docs/execution-providers/ROCm-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: AMD - ROCm
description: Instructions to execute ONNX Runtime with the AMD ROCm execution provider
parent: Execution Providers
-nav_order: 10
+nav_order: 701
redirect_from: /docs/reference/execution-providers/ROCm-ExecutionProvider
---

4 changes: 2 additions & 2 deletions docs/execution-providers/TensorRTRTX-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: NVIDIA - TensorRT RTX
description: Instructions to execute ONNX Runtime on NVIDIA RTX GPUs with the Nvidia TensorRT RTX execution provider
parent: Execution Providers
-nav_order: 17
+nav_order: 1
redirect_from: /docs/reference/execution-providers/TensorRTRTX-ExecutionProvider
---

@@ -15,7 +15,7 @@ Just some of the things that make it a better fit on RTX PCs than our legacy Ten
* Much faster model compile/load times.
* Better usability in terms of use of cached models across multiple RTX GPUs.

-The Nvidia TensorRT RTX execution provider in the ONNX Runtime makes use of NVIDIA's [TensorRT](https://developer.nvidia.com/tensorrt) RTX Deep Learning inferencing engine (TODO: correct link to TRT RTX documentation once available) to accelerate ONNX models on RTX GPUs. Microsoft and NVIDIA worked closely to integrate the TensorRT RTX execution provider with ONNX Runtime.
+The Nvidia TensorRT RTX execution provider in the ONNX Runtime makes use of NVIDIA's [TensorRT for RTX](https://developer.nvidia.com/tensorrt-rtx) Deep Learning inferencing engine to accelerate ONNX models on RTX GPUs. Microsoft and NVIDIA worked closely to integrate the TensorRT RTX execution provider with ONNX Runtime.

Currently TensorRT RTX supports RTX GPUs from Ampere or later architectures. Support for Turing GPUs is coming soon.
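Once ONNX Runtime is built with `--use_nv_tensorrt_rtx`, selecting the EP follows the usual providers-list pattern. A minimal Python sketch, assuming the provider registers under the name `NvTensorRTRTXExecutionProvider`; the model path and input shape are placeholders:

```python
import numpy as np
import onnxruntime as ort

# Assumed provider name; fall back to CPU so the session still
# loads on machines where the TensorRT RTX EP is unavailable.
providers = ["NvTensorRTRTXExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("model.onnx", providers=providers)

input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
outputs = session.run(None, {input_name: x})
print(session.get_providers())  # shows which providers were actually enabled
```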

2 changes: 1 addition & 1 deletion docs/execution-providers/Vitis-AI-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: AMD - Vitis AI
description: Instructions to execute ONNX Runtime on AMD devices with the Vitis AI execution provider
parent: Execution Providers
-nav_order: 12
+nav_order: 703
redirect_from: /docs/execution-providers/community-maintained/Vitis-AI-ExecutionProvider
---

2 changes: 1 addition & 1 deletion docs/execution-providers/Xnnpack-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: XNNPACK
description: Instructions to execute ONNX Runtime with the XNNPACK execution provider
parent: Execution Providers
-nav_order: 9
+nav_order: 601
---
{::options toc_levels="2" /}

2 changes: 1 addition & 1 deletion docs/execution-providers/community-maintained/index.md
@@ -2,7 +2,7 @@
title: Community-maintained
parent: Execution Providers
has_children: true
-nav_order: 14
+nav_order: 901
---
# Community-maintained Providers
This list of providers for specialized hardware is contributed and maintained by ONNX Runtime community partners.
2 changes: 1 addition & 1 deletion docs/execution-providers/oneDNN-ExecutionProvider.md
@@ -2,7 +2,7 @@
title: Intel - oneDNN
description: Instructions to execute ONNX Runtime with the Intel oneDNN execution provider
parent: Execution Providers
-nav_order: 4
+nav_order: 102
redirect_from: /docs/reference/execution-providers/oneDNN-ExecutionProvider
---
