
Commit 4e3da9e

Copilot and MaanavD committed
Update docs to highlight WinML and deprecate DirectML
- Add deprecation notice to DirectML Execution Provider page with clear migration guidance to WinML
- Update Windows Getting Started page to prioritize WinML as recommended path
- Add developer guidance about WinML benefits (same APIs, dynamic EP selection, simplified deployment)
- Update install instructions to clearly mark DirectML as deprecated
- Add Windows ML Overview reference link
- Update install table to highlight WinML as recommended for Windows and mark DirectML as deprecated

Co-authored-by: MaanavD <24942306+MaanavD@users.noreply.github.com>
1 parent 1ef4fe5 commit 4e3da9e

3 files changed: +23 -8 lines changed

docs/execution-providers/DirectML-ExecutionProvider.md

Lines changed: 3 additions & 0 deletions
@@ -9,6 +9,9 @@ redirect_from: /docs/reference/execution-providers/DirectML-ExecutionProvider
 # DirectML Execution Provider
 {: .no_toc }
 
+{: .note }
+**Note: DirectML is deprecated.** Please use [WinML](../get-started/with-windows.md) for Windows-based ONNX Runtime deployments. WinML provides the same ONNX Runtime APIs while dynamically selecting the best execution provider based on your hardware. See the [WinML install section](../install/#cccwinml-installs) for installation instructions.
+
 The DirectML Execution Provider is a component of ONNX Runtime that uses [DirectML](https://docs.microsoft.com/en-us/windows/ai/directml/dml-intro) to accelerate inference of ONNX models. The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.
 
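For readers of the page edited above, the sketch below illustrates what the now-deprecated path looks like in application code: explicitly appending the DirectML execution provider through the ONNX Runtime C++ API before creating a session. It is not part of this commit; the `model.onnx` path, adapter index 0, and the DirectML-enabled ONNX Runtime build it assumes are placeholders.

```cpp
// Sketch only: explicit DirectML EP selection via the ONNX Runtime C++ API.
// Assumes a DirectML-enabled ORT build; "model.onnx" and adapter 0 are placeholders.
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "dml-sketch");

  Ort::SessionOptions options;
  // The deprecated pattern: the app pins itself to the DirectML EP on a
  // specific adapter instead of letting the runtime pick the provider.
  Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_DML(options, /*device_id=*/0));

  // DirectML runs ahead of the default CPU EP for this session.
  Ort::Session session(env, ORT_TSTR("model.onnx"), options);
  return 0;
}
```

Under the WinML path this commit recommends, that per-adapter wiring is handled by the runtime's own execution provider selection rather than by application code.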

docs/get-started/with-windows.md

Lines changed: 13 additions & 3 deletions
@@ -10,12 +10,17 @@ nav_order: 9
 # Get started with ONNX Runtime for Windows
 {: .no_toc }
 
-The ONNX Runtime Nuget package provides the ability to use the full [WinML API](https://docs.microsoft.com/en-us/windows/ai/windows-ml/api-reference).
+**WinML is the recommended Windows development path for ONNX Runtime.** The ONNX Runtime NuGet package provides the ability to use the full [WinML API](https://docs.microsoft.com/en-us/windows/ai/windows-ml/api-reference).
 This allows scenarios such as passing a [Windows.Media.VideoFrame](https://docs.microsoft.com/en-us/uwp/api/Windows.Media.VideoFrame) from your connected camera directly into the runtime for realtime inference.
 
+WinML offers several advantages for Windows developers:
+- **Same ONNX Runtime APIs**: WinML uses the same ONNX Runtime APIs you're already familiar with
+- **Dynamic execution provider selection**: Automatically selects the best execution provider (EP) based on your hardware
+- **Simplified deployment**: Reduces complexity for Windows developers by handling hardware optimization automatically
+
 The WinML API is a WinRT API that shipped inside the Windows OS starting with build 1809 (RS5) in the Windows.AI.MachineLearning namespace. It embedded a version of the ONNX Runtime.
 
-In addition to using the in-box version of WinML, WinML can also be installed as an application re-distributable package (see [Direct ML Windows](../execution-providers/DirectML-ExecutionProvider) for technical details).
+In addition to using the in-box version of WinML, WinML can also be installed as an application re-distributable package. For legacy scenarios or specific DirectML requirements, see the [DirectML Execution Provider](../execution-providers/DirectML-ExecutionProvider) documentation (note: DirectML is deprecated).
 
 ## Contents
 {: .no_toc }
@@ -26,7 +31,7 @@ In addition to using the in-box version of WinML, WinML can also be installed as
 
 ## Windows OS integration
 
-ONNX Runtime is available in Windows 10 versions >= 1809 and all versions of Windows 11. It is embedded inside Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes the CPU execution provider and the [DirectML execution provider](../execution-providers/DirectML-ExecutionProvider) for GPU support.
+ONNX Runtime is available in Windows 10 versions >= 1809 and all versions of Windows 11. It is embedded inside Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes the CPU execution provider and the [DirectML execution provider](../execution-providers/DirectML-ExecutionProvider) for GPU support (note: DirectML is deprecated - WinML is the preferred approach).
 
 The high level design looks like this:
 
@@ -92,4 +97,9 @@ If the OS does not have the runtime you need you can switch to use the redist bi
 |ORT release 1.4| 3|
 
 See [here](https://docs.microsoft.com/en-us/windows/ai/windows-ml/onnx-versions) for more about opsets and ONNX version details in Windows OS distributions.
+
+## Additional Resources
+
+For more information about Windows Machine Learning (WinML), see the [Windows ML Overview](https://learn.microsoft.com/en-us/windows/ai/new-windows-ml/overview).
+
 <p><a href="#">Back to top</a></p>
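The "dynamic execution provider selection" bullet added above corresponds to WinML's device abstraction: the app requests a device kind and the runtime maps it to hardware. Below is a minimal, hypothetical C++/WinRT sketch against the Windows.AI.MachineLearning namespace named in this diff; the model path, the `input` binding name, and the 1x3x224x224 shape are assumptions rather than values from the docs.

```cpp
// Sketch only: letting WinML choose the device (C++/WinRT, Windows.AI.MachineLearning).
// The model path, tensor shape, and "input" binding name are placeholder assumptions.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.AI.MachineLearning.h>
#include <vector>

using namespace winrt::Windows::AI::MachineLearning;

int main() {
  winrt::init_apartment();

  // Load the ONNX model shipped with the app.
  LearningModel model = LearningModel::LoadFromFilePath(L"model.onnx");

  // LearningModelDeviceKind::Default lets WinML pick the execution hardware,
  // instead of the app hard-coding a DirectML adapter.
  LearningModelSession session(model, LearningModelDevice(LearningModelDeviceKind::Default));

  // Bind a dummy float tensor to the model's input and run one evaluation.
  std::vector<float> data(1 * 3 * 224 * 224, 0.0f);
  LearningModelBinding binding(session);
  binding.Bind(L"input", TensorFloat::CreateFromArray({ 1, 3, 224, 224 }, data));
  auto results = session.Evaluate(binding, L"run-0");
  return 0;
}
```

Swapping `LearningModelDeviceKind::Default` for `Cpu` or `DirectX` pins the device explicitly when an app needs that control.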

docs/install/index.md

Lines changed: 7 additions & 5 deletions
@@ -169,13 +169,15 @@ dotnet add package Microsoft.ML.OnnxRuntime.Gpu
 Note: You don't need --interactive every time. dotnet will prompt you to add --interactive if it needs updated
 credentials.
 
-#### DirectML
+#### DirectML (deprecated - use WinML instead)
 
 ```bash
 dotnet add package Microsoft.ML.OnnxRuntime.DirectML
 ```
 
-#### WinML
+**Note**: DirectML is deprecated. For new Windows projects, use WinML instead:
+
+#### WinML (recommended for Windows)
 
 ```bash
 dotnet add package Microsoft.AI.MachineLearning
@@ -442,14 +444,14 @@ below:
 | Python | If using pip, run `pip install --upgrade pip` prior to downloading. | | |
 | | CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime/overview) | |
 | | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [onnxruntime-gpu (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) |
-| | GPU (DirectML): [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [onnxruntime-directml (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) |
+| | GPU (DirectML) **deprecated**: [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [onnxruntime-directml (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) |
 | | OpenVINO: [**intel/onnxruntime**](https://github.com/intel/onnxruntime/releases/latest) - *Intel managed* | | [View](../build/eps.md#openvino) |
 | | TensorRT (Jetson): [**Jetson Zoo**](https://elinux.org/Jetson_Zoo#ONNX_Runtime) - *NVIDIA managed* | | |
 | | Azure (Cloud): [**onnxruntime-azure**](https://pypi.org/project/onnxruntime-azure/) | | |
 | C#/C/C++ | CPU: [**Microsoft.ML.OnnxRuntime**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | |
 | | GPU (CUDA/TensorRT): [**Microsoft.ML.OnnxRuntime.Gpu**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | [View](../execution-providers/CUDA-ExecutionProvider) |
-| | GPU (DirectML): [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](../execution-providers/DirectML-ExecutionProvider) |
-| WinML | [**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/NuGet/Microsoft.AI.MachineLearning/overview) | [View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites) |
+| | GPU (DirectML) **deprecated**: [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](../execution-providers/DirectML-ExecutionProvider) |
+| WinML **recommended for Windows** | [**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/NuGet/Microsoft.AI.MachineLearning/overview) | [View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites) |
 | Java | CPU: [**com.microsoft.onnxruntime:onnxruntime**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime) | | [View](../api/java) |
 | | GPU (CUDA/TensorRT): [**com.microsoft.onnxruntime:onnxruntime_gpu**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime_gpu) | | [View](../api/java) |
 | Android | [**com.microsoft.onnxruntime:onnxruntime-android**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime-android) | | [View](../install/index.md#install-on-android) |
