-**Note: DirectML is deprecated.** Please use [WinML](../get-started/with-windows.md) for Windows-based ONNX Runtime deployments. WinML provides the same ONNX Runtime APIs while dynamically selecting the best execution provider based on your hardware. See the [WinML install section](../install/#cccwinml-installs) for installation instructions.
+**Note: DirectML is in sustained engineering mode.** DirectML continues to be supported, but new feature development has moved to [WinML](../get-started/with-windows.md) for Windows-based ONNX Runtime deployments. WinML provides the same ONNX Runtime APIs while dynamically selecting the best execution provider based on your hardware. See the [WinML install section](../install/#cccwinml-installs) for installation instructions.
The DirectML Execution Provider is a component of ONNX Runtime that uses [DirectML](https://docs.microsoft.com/en-us/windows/ai/directml/dml-intro) to accelerate inference of ONNX models. The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.
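The provider-selection idea above can be sketched in Python against the ONNX Runtime API. The helper below is illustrative, not part of ONNX Runtime, and `model.onnx` is a placeholder path:

```python
def preferred_providers(available):
    """Illustrative helper: put DmlExecutionProvider first when the
    runtime reports it, and always keep CPUExecutionProvider as the
    universal fallback."""
    providers = ["DmlExecutionProvider"] if "DmlExecutionProvider" in available else []
    providers.append("CPUExecutionProvider")
    return providers

# Usage with onnxruntime (on Windows this requires the onnxruntime-directml package):
#   import onnxruntime as ort
#   sess = ort.InferenceSession(
#       "model.onnx",  # placeholder path
#       providers=preferred_providers(ort.get_available_providers()))
```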
docs/get-started/with-windows.md: 10 additions & 4 deletions
@@ -15,12 +15,12 @@ This allows scenarios such as passing a [Windows.Media.VideoFrame](https://docs.
WinML offers several advantages for Windows developers:
- **Same ONNX Runtime APIs**: WinML uses the same ONNX Runtime APIs you're already familiar with
-- **Dynamic execution provider selection**: Automatically selects the best execution provider (EP) based on your hardware
-- **Simplified deployment**: Reduces complexity for Windows developers by handling hardware optimization automatically
+- **Dynamic execution provider selection**: WinML automatically selects the best execution provider (EP) based on your customer's hardware, with mechanisms that you can override for manual fine-grained control
+- **Simplified deployment**: Reduces complexity for Windows developers by deploying all needed dependencies for AI inference on the client
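The override mechanism mentioned in the bullets can be pictured as an explicit provider priority list handed to ONNX Runtime. The helper and default priority below are illustrative assumptions, not a WinML API:

```python
# Illustrative default priority; CPUExecutionProvider is the universal fallback.
DEFAULT_PRIORITY = ["DmlExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]

def order_providers(available, override=None):
    """Return providers in priority order, honoring a manual override
    list when given; CPUExecutionProvider is always appended last so
    inference never fails for lack of a provider."""
    priority = override if override is not None else DEFAULT_PRIORITY
    ordered = [p for p in priority if p in available]
    if "CPUExecutionProvider" not in ordered:
        ordered.append("CPUExecutionProvider")
    return ordered
```

The returned list is the kind of value you would pass as the `providers` argument of `onnxruntime.InferenceSession`.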
The WinML API is a WinRT API that shipped inside the Windows OS starting with build 1809 (RS5) in the Windows.AI.MachineLearning namespace. It embedded a version of the ONNX Runtime.
-In addition to using the in-box version of WinML, WinML can also be installed as an application re-distributable package. For legacy scenarios or specific DirectML requirements, see the [DirectML Execution Provider](../execution-providers/DirectML-ExecutionProvider) documentation (note: DirectML is deprecated).
+In addition to using the in-box version of WinML, WinML can also be installed as an application re-distributable package. For legacy scenarios or specific DirectML requirements, see the [DirectML Execution Provider](../execution-providers/DirectML-ExecutionProvider) documentation (note: DirectML is in sustained engineering mode).
## Contents
{: .no_toc }
@@ -31,7 +31,13 @@ In addition to using the in-box version of WinML, WinML can also be installed as
## Windows OS integration
-ONNX Runtime is available in Windows 10 versions >= 1809 and all versions of Windows 11. It is embedded inside Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes the CPU execution provider and the [DirectML execution provider](../execution-providers/DirectML-ExecutionProvider) for GPU support (note: DirectML is deprecated - WinML is the preferred approach).
+ONNX Runtime is available in Windows 10 versions >= 1809 and all versions of Windows 11. It is embedded inside Microsoft.Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes the CPU execution provider and the [DirectML execution provider](../execution-providers/DirectML-ExecutionProvider) for GPU support (note: DirectML is in sustained engineering mode - WinML is the preferred approach).
+
+**Version Support:**
+- **Windows 10 (1809+) & Windows 11 (before 24H2)**: ONNX Runtime works, but you need to manually select and manage models and execution providers yourself
+- **Windows 11 (24H2+)**: WinML provides additional automation to help with execution provider selection and hardware optimization across Windows' broad and open ecosystem
+
+For full support across all silicon vendors, WinML on Windows 11 24H2+ is recommended as it handles much of the complexity automatically.
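The 24H2 cutoff above can be expressed as a small version gate. Windows 11 24H2 ships as OS build 26100; the function itself is an illustration, not an official API:

```python
WIN11_24H2_BUILD = 26100  # Windows 11 24H2 corresponds to OS build 26100

def winml_auto_ep_available(build_number):
    """True when the OS build is new enough for WinML's additional
    automation around execution-provider selection; on older builds the
    application manages models and providers itself."""
    return build_number >= WIN11_24H2_BUILD

# On Windows you could feed in sys.getwindowsversion().build.
```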
| | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [onnxruntime-gpu (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) |