Describe the issue
After building the project with the --use_dml option, I ran onnxruntime_perf_test.exe to validate whether it can use the Intel NPU. To enable NPU enumeration, I added "#define ENABLE_NPU_ADAPTER_ENUMERATION" to dml_provider_factory.cc.
On an Intel Lunar Lake platform, .onnx models using both CNN and RNN ran successfully, but on an Intel Meteor Lake (MTL) platform only the RNN model could run. When I try to run the CNN model on the MTL platform, it fails with "2025-01-09 18:42:29.1493690 [E:onnxruntime:, inference_session.cc:2154 onnxruntime::InferenceSession::Initialize::<lambda_ddd6d80b203c0fd79bf36f74745e4e94>::operator ()] Exception during initialization:". Please give me some advice. Thank you.
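For reference, here is a minimal C++ sketch of the same session setup (untested as written; the model path is the CNN model from the repro steps, and device id 0 is a placeholder). It turns on verbose logging and prints the full exception text, which the perf-test output truncates after "Exception during initialization:":

```cpp
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>
#include <iostream>

int main() {
  // Verbose logging makes the DML EP print details while the session
  // is being initialized, which helps narrow down which node fails.
  Ort::Env env(ORT_LOGGING_LEVEL_VERBOSE, "npu_repro");
  Ort::SessionOptions opts;

  // Plain DML EP append; device id 0 is a placeholder and, without the
  // NPU device filter, normally selects the first GPU adapter.
  Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_DML(opts, 0));

  try {
    Ort::Session session(env, L"85_95.onnx", opts);  // CNN model from the repro
    std::cout << "Session initialized OK\n";
  } catch (const Ort::Exception& e) {
    // e.what() carries the text that is cut off in the perf-test log.
    std::cerr << "Init failed: " << e.what() << '\n';
  }
  return 0;
}
```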
To reproduce
1. Add #define ENABLE_NPU_ADAPTER_ENUMERATION to dml_provider_factory.cc.
2. Build: build.bat --use_dml --build_shared_lib --cmake_generator "Visual Studio 17 2022" --skip_submodule_sync --config RelWithDebInfo
3. Run the RNN model: onnxruntime_perf_test.exe -I -m "times" -r 999999999 -e "dml" -i "device_filter|npu" .\60_80.onnx (input tensor: fp16[1,15])
4. Run the CNN model: onnxruntime_perf_test.exe -I -m "times" -r 999999999 -e "dml" -i "device_filter|npu" .\85_95.onnx (input tensor: fp16[1,3,256,256]); a C++ sketch of the same NPU selection follows this list.
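The C++ equivalent of the -i "device_filter|npu" flag should be the device-options entry point in dml_provider_factory.h; this is a sketch, assuming the OrtDmlDeviceOptions struct and SessionOptionsAppendExecutionProvider_DML2 function as declared there (OrtDmlDeviceFilter::Npu is only compiled in when ENABLE_NPU_ADAPTER_ENUMERATION is defined, which step 1 turns on):

```cpp
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>
#include <iostream>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "npu_filter");
  Ort::SessionOptions opts;

  // Fetch the DML-specific API table from the core ORT API.
  const OrtDmlApi* dml_api = nullptr;
  Ort::ThrowOnError(Ort::GetApi().GetExecutionProviderApi(
      "DML", ORT_API_VERSION, reinterpret_cast<const void**>(&dml_api)));

  // Restrict adapter enumeration to NPUs, mirroring "device_filter|npu".
  // Npu is only present in OrtDmlDeviceFilter when the headers were built
  // with ENABLE_NPU_ADAPTER_ENUMERATION.
  OrtDmlDeviceOptions device_opts{};
  device_opts.Preference = OrtDmlPerformancePreference::Default;
  device_opts.Filter = OrtDmlDeviceFilter::Npu;
  Ort::ThrowOnError(
      dml_api->SessionOptionsAppendExecutionProvider_DML2(opts, &device_opts));

  try {
    Ort::Session session(env, L"85_95.onnx", opts);  // CNN model
    std::cout << "CNN model initialized on the NPU\n";
  } catch (const Ort::Exception& e) {
    std::cerr << "Init failed: " << e.what() << '\n';
  }
  return 0;
}
```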
Urgency
No response
Platform
Windows
OS Version
Windows 11 24H2 (OS build 26100.2605)
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
v1.20.1
ONNX Runtime API
C++
Architecture
X64
Execution Provider
DirectML
Execution Provider Library Version
No response