Exception during initialization using Intel NPU (Intel AI Boost) #23305

@bonihaniboni

Description

Describe the issue

After building the project with the --use_dml option, I ran onnxruntime_perf_test.exe to validate that it can use the Intel NPU, so I added #define ENABLE_NPU_ADAPTER_ENUMERATION to dml_provider_factory.cc.

On the Intel Lunar Lake platform, both the CNN and the RNN .onnx models ran successfully, but on the Intel Meteor Lake platform only the RNN model could run. If I try to run the CNN model on the MTL platform, it fails with:

2025-01-09 18:42:29.1493690 [E:onnxruntime:, inference_session.cc:2154 onnxruntime::InferenceSession::Initialize::<lambda_ddd6d80b203c0fd79bf36f74745e4e94>::operator ()] Exception during initialization:

Please give me some advice. Thank you
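For context, the Npu device-filter value that -i "device_filter|npu" selects only exists in the public header when that define is set. A paraphrased sketch of the guard in dml_provider_factory.h (based on the v1.20.x headers; exact contents may differ by version):

```cpp
// Paraphrased from include/onnxruntime/core/providers/dml/dml_provider_factory.h.
// The Npu enumerator is only compiled in when ENABLE_NPU_ADAPTER_ENUMERATION is
// defined, which is why the define has to be added before building.
typedef enum OrtDmlDeviceFilter : UINT32 {
#ifdef ENABLE_NPU_ADAPTER_ENUMERATION
  Any = 0xffffffff,
  Gpu = 1 << 0,
  Npu = 1 << 1,
#else
  Gpu = 1 << 0,
#endif
} OrtDmlDeviceFilter;
```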

To reproduce

1. Add #define ENABLE_NPU_ADAPTER_ENUMERATION to dml_provider_factory.cc
2. build.bat --use_dml --build_shared_lib --cmake_generator "Visual Studio 17 2022" --skip_submodule_sync --config RelWithDebInfo
3. onnxruntime_perf_test.exe -I -m "times" -r 999999999 -e "dml" -i "device_filter|npu" .\60_80.onnx (input tensor: fp16[1,15])
4. onnxruntime_perf_test.exe -I -m "times" -r 999999999 -e "dml" -i "device_filter|npu" .\85_95.onnx (input tensor: fp16[1,3,256,256])
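For completeness, the perf-test flags above correspond roughly to the following C++ setup. This is a minimal sketch assuming the OrtDmlApi names from the v1.20.x dml_provider_factory.h and the model path from the repro, not a verified reproduction:

```cpp
// Minimal sketch: create a session on the NPU via the DirectML EP.
// Assumes ONNX Runtime was built with ENABLE_NPU_ADAPTER_ENUMERATION defined.
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

int main() {
  // Verbose logging helps narrow down which node throws during Initialize().
  Ort::Env env(ORT_LOGGING_LEVEL_VERBOSE, "npu_repro");
  Ort::SessionOptions so;

  // Fetch the DML-specific API surface from the C API.
  const OrtDmlApi* dml_api = nullptr;
  Ort::ThrowOnError(Ort::GetApi().GetExecutionProviderApi(
      "DML", ORT_API_VERSION, reinterpret_cast<const void**>(&dml_api)));

  // Restrict adapter enumeration to NPUs, mirroring -i "device_filter|npu".
  OrtDmlDeviceOptions device_opts{};
  device_opts.Preference = Default;  // OrtDmlPerformancePreference
  device_opts.Filter = Npu;          // only exists with the define above
  Ort::ThrowOnError(
      dml_api->SessionOptionsAppendExecutionProvider_DML2(so, &device_opts));

  // The CNN model from the repro; on Meteor Lake this session creation is
  // where "Exception during initialization" is raised.
  Ort::Session session(env, L"85_95.onnx", so);
  return 0;
}
```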

Urgency

No response

Platform

Windows

OS Version

Windows 11 24H2 (Build 26100.2605)

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

v1.20.1

ONNX Runtime API

C++

Architecture

X64

Execution Provider

DirectML

Execution Provider Library Version

No response


Labels

ep:DML (issues related to the DirectML execution provider), stale (issues that have not been addressed in a while; categorized by a bot)
