Update docs to highlight WinML and deprecate DirectML
- Add deprecation notice to DirectML Execution Provider page with clear migration guidance to WinML
- Update Windows Getting Started page to prioritize WinML as recommended path
- Add developer guidance about WinML benefits (same APIs, dynamic EP selection, simplified deployment)
- Update install instructions to clearly mark DirectML as deprecated
- Add Windows ML Overview reference link
- Update install table to highlight WinML as recommended for Windows and mark DirectML as deprecated
Co-authored-by: MaanavD <24942306+MaanavD@users.noreply.github.com>
**Note: DirectML is deprecated.** Please use [WinML](../get-started/with-windows.md) for Windows-based ONNX Runtime deployments. WinML provides the same ONNX Runtime APIs while dynamically selecting the best execution provider based on your hardware. See the [WinML install section](../install/#cccwinml-installs) for installation instructions.
The DirectML Execution Provider is a component of ONNX Runtime that uses [DirectML](https://docs.microsoft.com/en-us/windows/ai/directml/dml-intro) to accelerate inference of ONNX models. The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.
`docs/get-started/with-windows.md` (13 additions, 3 deletions):

@@ -10,12 +10,17 @@ nav_order: 9
 # Get started with ONNX Runtime for Windows
 {: .no_toc }

-The ONNX Runtime Nuget package provides the ability to use the full [WinML API](https://docs.microsoft.com/en-us/windows/ai/windows-ml/api-reference).
+**WinML is the recommended Windows development path for ONNX Runtime.** The ONNX Runtime NuGet package provides the ability to use the full [WinML API](https://docs.microsoft.com/en-us/windows/ai/windows-ml/api-reference).

 This allows scenarios such as passing a [Windows.Media.VideoFrame](https://docs.microsoft.com/en-us/uwp/api/Windows.Media.VideoFrame) from your connected camera directly into the runtime for realtime inference.

+WinML offers several advantages for Windows developers:
+- **Same ONNX Runtime APIs**: WinML uses the same ONNX Runtime APIs you're already familiar with
+- **Dynamic execution provider selection**: Automatically selects the best execution provider (EP) based on your hardware
+- **Simplified deployment**: Reduces complexity for Windows developers by handling hardware optimization automatically
+
 The WinML API is a WinRT API that shipped inside the Windows OS starting with build 1809 (RS5) in the Windows.AI.MachineLearning namespace. It embedded a version of the ONNX Runtime.

-In addition to using the in-box version of WinML, WinML can also be installed as an application re-distributable package (see [Direct ML Windows](../execution-providers/DirectML-ExecutionProvider) for technical details).
+In addition to using the in-box version of WinML, WinML can also be installed as an application re-distributable package. For legacy scenarios or specific DirectML requirements, see the [DirectML Execution Provider](../execution-providers/DirectML-ExecutionProvider) documentation (note: DirectML is deprecated).

 ## Contents
 {: .no_toc }

@@ -26,7 +31,7 @@ In addition to using the in-box version of WinML, WinML can also be installed as
 ## Windows OS integration

-ONNX Runtime is available in Windows 10 versions >= 1809 and all versions of Windows 11. It is embedded inside Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes the CPU execution provider and the [DirectML execution provider](../execution-providers/DirectML-ExecutionProvider) for GPU support.
+ONNX Runtime is available in Windows 10 versions >= 1809 and all versions of Windows 11. It is embedded inside Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes the CPU execution provider and the [DirectML execution provider](../execution-providers/DirectML-ExecutionProvider) for GPU support (note: DirectML is deprecated; WinML is the preferred approach).

 The high level design looks like this:

@@ -92,4 +97,9 @@ If the OS does not have the runtime you need you can switch to use the redist bi
 |ORT release 1.4| 3|

 See [here](https://docs.microsoft.com/en-us/windows/ai/windows-ml/onnx-versions) for more about opsets and ONNX version details in Windows OS distributions.
+
+## Additional Resources
+
+For more information about Windows Machine Learning (WinML), see the [Windows ML Overview](https://learn.microsoft.com/en-us/windows/ai/new-windows-ml/overview).
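The "dynamic execution provider selection" called out above can be sketched in a few lines: given the EPs available on a machine, prefer the most capable one and always fall back to CPU. This is an illustrative Python sketch, not WinML's actual selection policy; the priority order and the `select_providers` helper are assumptions, and only the EP names follow ONNX Runtime's naming conventions.

```python
# Illustrative priority order (an assumption, not WinML's real policy):
# prefer the DirectML EP, then CUDA, and always keep CPU as the fallback.
EP_PRIORITY = ["DmlExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]

def select_providers(available):
    """Return the available EPs ordered by priority, always ending with CPU."""
    chosen = [ep for ep in EP_PRIORITY if ep in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# Example: a machine with a DirectX 12 GPU but no CUDA-capable device.
print(select_providers({"DmlExecutionProvider", "CPUExecutionProvider"}))
# → ['DmlExecutionProvider', 'CPUExecutionProvider']
```

In the real APIs this ordered list is what you would hand to the runtime as the provider preference; the point of WinML, per the text above, is that this selection happens for you.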
From the updated install table:

|| GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu)|[onnxruntime-gpu (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-gpu/overview/)|[View](../execution-providers/CUDA-ExecutionProvider.md#requirements)|