
Commit ec95d65

Copilot and MaanavD committed
Update DirectML messaging to use sustained engineering terminology and improve WinML descriptions
Co-authored-by: MaanavD <24942306+MaanavD@users.noreply.github.com>
1 parent 4e3da9e commit ec95d65

3 files changed: +18 −10 lines


docs/execution-providers/DirectML-ExecutionProvider.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ redirect_from: /docs/reference/execution-providers/DirectML-ExecutionProvider
  {: .no_toc }

  {: .note }
- **Note: DirectML is deprecated.** Please use [WinML](../get-started/with-windows.md) for Windows-based ONNX Runtime deployments. WinML provides the same ONNX Runtime APIs while dynamically selecting the best execution provider based on your hardware. See the [WinML install section](../install/#cccwinml-installs) for installation instructions.
+ **Note: DirectML is in sustained engineering mode.** DirectML continues to be supported, but new feature development has moved to [WinML](../get-started/with-windows.md) for Windows-based ONNX Runtime deployments. WinML provides the same ONNX Runtime APIs while dynamically selecting the best execution provider based on your hardware. See the [WinML install section](../install/#cccwinml-installs) for installation instructions.

  The DirectML Execution Provider is a component of ONNX Runtime that uses [DirectML](https://docs.microsoft.com/en-us/windows/ai/directml/dml-intro) to accelerate inference of ONNX models. The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.
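The note above steers new work to WinML but keeps DirectML supported, so existing code that prefers the DirectML EP with a CPU fallback remains valid. A minimal sketch of that pattern follows; `pick_providers` is a hypothetical helper (not part of ONNX Runtime), while the provider names match ONNX Runtime's conventions:

```python
# Illustrative sketch: prefer the DirectML EP when available, fall back
# to CPU. pick_providers is a hypothetical helper, not an ORT API.
def pick_providers(available):
    preferred = ["DmlExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# On Windows with the onnxruntime-directml package installed, usage
# would look roughly like:
#   import onnxruntime as ort
#   session = ort.InferenceSession(
#       "model.onnx",
#       providers=pick_providers(ort.get_available_providers()))
```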

docs/get-started/with-windows.md

Lines changed: 10 additions & 4 deletions
@@ -15,12 +15,12 @@ This allows scenarios such as passing a [Windows.Media.VideoFrame](https://docs.

  WinML offers several advantages for Windows developers:
  - **Same ONNX Runtime APIs**: WinML uses the same ONNX Runtime APIs you're already familiar with
- - **Dynamic execution provider selection**: Automatically selects the best execution provider (EP) based on your hardware
- - **Simplified deployment**: Reduces complexity for Windows developers by handling hardware optimization automatically
+ - **Dynamic execution provider selection**: WinML automatically selects the best execution provider (EP) based on your customer's hardware, with mechanisms that you can override for manual fine-grained control
+ - **Simplified deployment**: Reduces complexity for Windows developers by deploying all needed dependencies for AI inference on the client

  The WinML API is a WinRT API that shipped inside the Windows OS starting with build 1809 (RS5) in the Windows.AI.MachineLearning namespace. It embedded a version of the ONNX Runtime.

- In addition to using the in-box version of WinML, WinML can also be installed as an application re-distributable package. For legacy scenarios or specific DirectML requirements, see the [DirectML Execution Provider](../execution-providers/DirectML-ExecutionProvider) documentation (note: DirectML is deprecated).
+ In addition to using the in-box version of WinML, WinML can also be installed as an application re-distributable package. For legacy scenarios or specific DirectML requirements, see the [DirectML Execution Provider](../execution-providers/DirectML-ExecutionProvider) documentation (note: DirectML is in sustained engineering mode).

  ## Contents
  {: .no_toc }
@@ -31,7 +31,13 @@ In addition to using the in-box version of WinML, WinML can also be installed as

  ## Windows OS integration

- ONNX Runtime is available in Windows 10 versions >= 1809 and all versions of Windows 11. It is embedded inside Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes the CPU execution provider and the [DirectML execution provider](../execution-providers/DirectML-ExecutionProvider) for GPU support (note: DirectML is deprecated - WinML is the preferred approach).
+ ONNX Runtime is available in Windows 10 versions >= 1809 and all versions of Windows 11. It is embedded inside Microsoft.Windows.AI.MachineLearning.dll and exposed via the WinRT API (WinML for short). It includes the CPU execution provider and the [DirectML execution provider](../execution-providers/DirectML-ExecutionProvider) for GPU support (note: DirectML is in sustained engineering mode - WinML is the preferred approach).
+
+ **Version Support:**
+ - **Windows 10 (1809+) & Windows 11 (before 24H2)**: ONNX Runtime works, but you need to manually select and manage models and execution providers yourself
+ - **Windows 11 (24H2+)**: WinML provides additional automation to help with execution provider selection and hardware optimization across Windows' broad and open ecosystem
+
+ For full support across all silicon vendors, WinML on Windows 11 24H2+ is recommended as it handles much of the complexity automatically.

  The high level design looks like this:
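The added lines describe automatic EP selection with a manual override. A hedged sketch of that kind of policy is below; the preference order and the `auto_select` function are illustrative assumptions, not WinML's actual implementation:

```python
# Hypothetical sketch of WinML-style EP selection with manual override.
# The ranking below is illustrative only, not WinML's real policy.
PREFERENCE = [
    "QNNExecutionProvider",  # example NPU provider
    "DmlExecutionProvider",  # GPU via DirectX 12
    "CPUExecutionProvider",  # always-available fallback
]

def auto_select(available, override=None):
    """Return one EP name; `override` models manual fine-grained control."""
    if override is not None:
        if override not in available:
            raise ValueError(f"override {override!r} not available")
        return override
    for ep in PREFERENCE:
        if ep in available:
            return ep
    raise RuntimeError("no execution provider available")
```

The override path mirrors the "mechanisms that you can override" wording in the diff: automation picks a default, but an explicit choice always wins when the requested provider exists.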

docs/install/index.md

Lines changed: 7 additions & 5 deletions
@@ -55,7 +55,9 @@ pip install flatbuffers numpy packaging protobuf sympy
  pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime
  ```

- ### Install ONNX Runtime GPU (DirectML)
+ ### Install ONNX Runtime GPU (DirectML) - Sustained Engineering Mode
+
+ **Note**: DirectML is in sustained engineering mode. For new Windows projects, consider [WinML](#winml-recommended-for-windows) instead.

  ```bash
  pip install onnxruntime-directml
@@ -169,13 +171,13 @@ dotnet add package Microsoft.ML.OnnxRuntime.Gpu
  Note: You don't need --interactive every time. dotnet will prompt you to add --interactive if it needs updated
  credentials.

- #### DirectML (deprecated - use WinML instead)
+ #### DirectML (sustained engineering mode - use WinML for new projects)

  ```bash
  dotnet add package Microsoft.ML.OnnxRuntime.DirectML
  ```

- **Note**: DirectML is deprecated. For new Windows projects, use WinML instead:
+ **Note**: DirectML is in sustained engineering mode. For new Windows projects, use WinML instead:

  #### WinML (recommended for Windows)
@@ -444,13 +446,13 @@ below:
  | Python | If using pip, run `pip install --upgrade pip` prior to downloading. | | |
  | | CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime/overview) | |
  | | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [onnxruntime-gpu (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) |
- | | GPU (DirectML) **deprecated**: [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [onnxruntime-directml (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) |
+ | | GPU (DirectML) **sustained engineering mode**: [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [onnxruntime-directml (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) |
  | | OpenVINO: [**intel/onnxruntime**](https://github.com/intel/onnxruntime/releases/latest) - *Intel managed* | | [View](../build/eps.md#openvino) |
  | | TensorRT (Jetson): [**Jetson Zoo**](https://elinux.org/Jetson_Zoo#ONNX_Runtime) - *NVIDIA managed* | | |
  | | Azure (Cloud): [**onnxruntime-azure**](https://pypi.org/project/onnxruntime-azure/) | | |
  | C#/C/C++ | CPU: [**Microsoft.ML.OnnxRuntime**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | |
  | | GPU (CUDA/TensorRT): [**Microsoft.ML.OnnxRuntime.Gpu**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | [View](../execution-providers/CUDA-ExecutionProvider) |
- | | GPU (DirectML) **deprecated**: [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](../execution-providers/DirectML-ExecutionProvider) |
+ | | GPU (DirectML) **sustained engineering mode**: [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](../execution-providers/DirectML-ExecutionProvider) |
  | WinML **recommended for Windows** | [**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/NuGet/Microsoft.AI.MachineLearning/overview) | [View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites) |
  | Java | CPU: [**com.microsoft.onnxruntime:onnxruntime**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime) | | [View](../api/java) |
  | | GPU (CUDA/TensorRT): [**com.microsoft.onnxruntime:onnxruntime_gpu**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime_gpu) | | [View](../api/java) |

0 comments