# docs/execution-providers/CUDA-ExecutionProvider.md

ONNX Runtime built with cuDNN 8.x is not compatible with cuDNN 9.x, and vice versa.
Note: Starting with version 1.19, **CUDA 12.x** is the default CUDA version for [ONNX Runtime GPU packages](https://pypi.org/project/onnxruntime-gpu/) published on PyPI.
To reduce the need for manual installations of CUDA and cuDNN, and to ensure seamless integration between ONNX Runtime and PyTorch, the `onnxruntime-gpu` Python package offers an API to load CUDA and cuDNN dynamic link libraries (DLLs) appropriately. For more details, refer to the [Work with PyTorch](#work-with-pytorch) and [Preload DLLs](#preload-dlls) sections.
### CUDA 12.x
| ONNX Runtime | CUDA | cuDNN | Notes |
|--------------|------|-------|-------|
For older versions, please reference the readme and build pages on the release branches.
For build instructions, please see the [BUILD page](../build/eps.md#cuda).
## Compatibility with PyTorch
The `onnxruntime-gpu` package is designed to work seamlessly with [PyTorch](https://pytorch.org/), provided both are built against the same major version of CUDA and cuDNN. When installing PyTorch with CUDA support (e.g., CUDA 12.x), the necessary CUDA and cuDNN DLLs are included, eliminating the need for separate installations of the CUDA toolkit or cuDNN.
To ensure ONNX Runtime utilizes the DLLs installed by PyTorch, you can preload these libraries before creating an inference session. This can be achieved by either importing PyTorch or by using the `onnxruntime.preload_dlls()` function.
**Example 1: Importing PyTorch**
```python
import torch  # importing torch first loads its bundled CUDA and cuDNN DLLs
import onnxruntime

# Create an inference session with the CUDA execution provider
session = onnxruntime.InferenceSession(
    "model.onnx",  # path to your ONNX model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```

## Preload DLLs

The `onnxruntime-gpu` package provides the `preload_dlls` function to preload CUDA, cuDNN, and Microsoft Visual C++ (MSVC) runtime DLLs. This function offers flexibility in specifying which libraries to load and from which directories.
# docs/install/index.md
For the ONNX Runtime GPU package, it is required to install CUDA and cuDNN:
* On Windows, the CUDA `bin` and cuDNN `bin` directories must be added to the `PATH` environment variable.
* On Linux, the CUDA `lib64` and cuDNN `lib` directories must be added to the `LD_LIBRARY_PATH` environment variable.
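A quick way to check whether these directories are on the relevant search path is a small script like the one below; the install locations shown are hypothetical and should be replaced with wherever CUDA and cuDNN actually live on your machine:

```python
import os
import platform

# Hypothetical install locations -- adjust to your CUDA/cuDNN versions.
if platform.system() == "Windows":
    needed = [r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\bin"]
    search_path = os.environ.get("PATH", "")
else:
    needed = ["/usr/local/cuda/lib64"]
    search_path = os.environ.get("LD_LIBRARY_PATH", "")

# Report which required directories are present on the search path.
for directory in needed:
    status = "found" if directory in search_path.split(os.pathsep) else "missing"
    print(f"{status}: {directory}")
```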
For the `onnxruntime-gpu` package, it is possible to work with PyTorch without manually installing CUDA or cuDNN. Refer to [Work with PyTorch](../execution-providers/CUDA-ExecutionProvider.md#work-with-pytorch) for more information.