Merged
3 changes: 3 additions & 0 deletions docs/execution-providers/DirectML-ExecutionProvider.md
@@ -9,6 +9,9 @@ redirect_from: /docs/reference/execution-providers/DirectML-ExecutionProvider
# DirectML Execution Provider
{: .no_toc }

{: .note }
**Note: DirectML is deprecated.** Please use [WinML](../get-started/with-windows.md) for Windows-based ONNX Runtime deployments. WinML provides the same ONNX Runtime APIs while dynamically selecting the best execution provider based on your hardware. See the [WinML install section](../install/#cccwinml-installs) for installation instructions.

The DirectML Execution Provider is a component of ONNX Runtime that uses [DirectML](https://docs.microsoft.com/en-us/windows/ai/directml/dml-intro) to accelerate inference of ONNX models. The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.
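In Python, targeting this execution provider comes down to listing it first when creating a session. A minimal hedged sketch, assuming the (deprecated) `onnxruntime-directml` package is installed; `model_path` is a placeholder, not a file from this repository:

```python
# Preference-ordered provider list: DirectML first, CPU as fallback.
# "DmlExecutionProvider" is the name registered by onnxruntime-directml.
providers = ["DmlExecutionProvider", "CPUExecutionProvider"]

def make_session(model_path):
    """Create an InferenceSession that prefers DirectML, falling back to CPU."""
    import onnxruntime as ort  # provided by the onnxruntime-directml package
    return ort.InferenceSession(model_path, providers=providers)
```

ONNX Runtime tries the providers in the order given, so a machine without DirectML support still runs the model on the CPU provider.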


16 changes: 13 additions & 3 deletions docs/get-started/with-windows.md
@@ -10,12 +10,17 @@ nav_order: 9
# Get started with ONNX Runtime for Windows
{: .no_toc }

The ONNX Runtime Nuget package provides the ability to use the full [WinML API](https://docs.microsoft.com/en-us/windows/ai/windows-ml/api-reference).
**WinML is the recommended Windows development path for ONNX Runtime.** The ONNX Runtime NuGet package provides the ability to use the full [WinML API](https://docs.microsoft.com/en-us/windows/ai/windows-ml/api-reference).
This allows scenarios such as passing a [Windows.Media.VideoFrame](https://docs.microsoft.com/en-us/uwp/api/Windows.Media.VideoFrame) from your connected camera directly into the runtime for real-time inference.

WinML offers several advantages for Windows developers:
- **Same ONNX Runtime APIs**: WinML uses the same ONNX Runtime APIs you're already familiar with
- **Dynamic execution provider selection**: Automatically selects the best execution provider (EP) based on your hardware
- **Simplified deployment**: Reduces complexity for Windows developers by handling hardware optimization automatically
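Conceptually, the dynamic selection described above amounts to walking a preference-ordered list of execution providers and taking the first one the hardware supports. A minimal illustrative sketch; the provider names and ordering here are assumptions for illustration, not WinML's actual internals:

```python
# Illustrative sketch only: WinML's real selection logic is internal.
# Provider names and preference order below are assumed for the example.
PREFERENCE = ["NPU", "GPU", "CPU"]  # most preferred first

def select_execution_provider(available):
    """Return the most preferred execution provider present on this machine."""
    for ep in PREFERENCE:
        if ep in available:
            return ep
    raise RuntimeError("no supported execution provider found")

print(select_execution_provider({"GPU", "CPU"}))  # prints GPU
```

The benefit for application code is that the same call works on any machine: the developer ships one binary, and the best available hardware is picked at run time.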

The WinML API is a WinRT API that has shipped inside the Windows OS since build 1809 (RS5), in the Windows.AI.MachineLearning namespace. It embeds a version of ONNX Runtime.

In addition to using the in-box version of WinML, WinML can also be installed as an application re-distributable package (see [Direct ML Windows](../execution-providers/DirectML-ExecutionProvider) for technical details).
In addition to using the in-box version of WinML, WinML can also be installed as an application re-distributable package. For legacy scenarios or specific DirectML requirements, see the [DirectML Execution Provider](../execution-providers/DirectML-ExecutionProvider) documentation (note: DirectML is deprecated).

## Contents
{: .no_toc }
@@ -26,7 +31,7 @@ In addition to using the in-box version of WinML, WinML can also be installed as

## Windows OS integration

ONNX Runtime is available in Windows 10 versions >= 1809 and all versions of Windows 11. It is embedded inside Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes the CPU execution provider and the [DirectML execution provider](../execution-providers/DirectML-ExecutionProvider) for GPU support.
ONNX Runtime is available in Windows 10 versions >= 1809 and all versions of Windows 11. It is embedded inside Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes the CPU execution provider and the [DirectML execution provider](../execution-providers/DirectML-ExecutionProvider) for GPU support (note: DirectML is deprecated; WinML is the preferred approach).

The high level design looks like this:

@@ -92,4 +97,9 @@ If the OS does not have the runtime you need you can switch to use the redist bi
|ORT release 1.4| 3|

See [here](https://docs.microsoft.com/en-us/windows/ai/windows-ml/onnx-versions) for more about opsets and ONNX version details in Windows OS distributions.

## Additional Resources

For more information about Windows Machine Learning (WinML), see the [Windows ML Overview](https://learn.microsoft.com/en-us/windows/ai/new-windows-ml/overview).

<p><a href="#">Back to top</a></p>
12 changes: 7 additions & 5 deletions docs/install/index.md
@@ -169,13 +169,15 @@ dotnet add package Microsoft.ML.OnnxRuntime.Gpu
Note: You don't need `--interactive` every time; dotnet will prompt you to add `--interactive` if it needs updated credentials.

#### DirectML
#### DirectML (deprecated - use WinML instead)

```bash
dotnet add package Microsoft.ML.OnnxRuntime.DirectML
```

#### WinML
**Note**: DirectML is deprecated. For new Windows projects, use WinML instead:

#### WinML (recommended for Windows)

```bash
dotnet add package Microsoft.AI.MachineLearning
@@ -442,14 +444,14 @@ below:
| Python | If using pip, run `pip install --upgrade pip` prior to downloading. | | |
| | CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime/overview) | |
| | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [onnxruntime-gpu (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) |
| | GPU (DirectML): [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [onnxruntime-directml (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) |
| | GPU (DirectML) **deprecated**: [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [onnxruntime-directml (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) |
| | OpenVINO: [**intel/onnxruntime**](https://github.com/intel/onnxruntime/releases/latest) - *Intel managed* | | [View](../build/eps.md#openvino) |
| | TensorRT (Jetson): [**Jetson Zoo**](https://elinux.org/Jetson_Zoo#ONNX_Runtime) - *NVIDIA managed* | | |
| | Azure (Cloud): [**onnxruntime-azure**](https://pypi.org/project/onnxruntime-azure/) | | |
| C#/C/C++ | CPU: [**Microsoft.ML.OnnxRuntime**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | |
| | GPU (CUDA/TensorRT): [**Microsoft.ML.OnnxRuntime.Gpu**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | [View](../execution-providers/CUDA-ExecutionProvider) |
| | GPU (DirectML): [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](../execution-providers/DirectML-ExecutionProvider) |
| WinML | [**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/NuGet/Microsoft.AI.MachineLearning/overview) | [View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites) |
| | GPU (DirectML) **deprecated**: [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](../execution-providers/DirectML-ExecutionProvider) |
| WinML (**recommended for Windows**) | [**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/NuGet/Microsoft.AI.MachineLearning/overview) | [View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites) |
| Java | CPU: [**com.microsoft.onnxruntime:onnxruntime**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime) | | [View](../api/java) |
| | GPU (CUDA/TensorRT): [**com.microsoft.onnxruntime:onnxruntime_gpu**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime_gpu) | | [View](../api/java) |
| Android | [**com.microsoft.onnxruntime:onnxruntime-android**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime-android) | | [View](../install/index.md#install-on-android) |