
onnxruntime-node 1.16.0 dml and cuda backend bug. #17678

Open
@MountainAndMorning

Description


Describe the issue

It seems that onnxruntime-node 1.16.0 adds support for the DirectML (dml) and CUDA backends. However, when I try this library in the Electron backend, a 'no available backend found' error is thrown.

electron: 24.4.0
onnxruntime-node: 1.16.0
CUDA_PATH: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3
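A quick check like the following (a diagnostic sketch, assuming the code runs in the Electron main process where onnxruntime-node is loaded) confirms whether CUDA_PATH is actually visible to that process:

// Diagnostic sketch: log the environment the Electron main process sees.
// If CUDA_PATH is undefined here, the CUDA binaries may not be resolvable
// even though the variable is set at the system level.
console.log('CUDA_PATH:', process.env.CUDA_PATH)
console.log('PATH:', process.env.PATH)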

To reproduce

import * as ort from 'onnxruntime-node'

let session = undefined
export async function openSession (modelBytes) {
    // Create an inference session using the DirectML and CUDA execution providers.
    session = await ort.InferenceSession.create(modelBytes, { executionProviders: ['dml', 'cuda'] })
}
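As a sanity check (a sketch over the same modelBytes, not part of the original report; openSessionWithFallback is just an illustrative name), falling back to the CPU execution provider shows whether session creation itself works and only the GPU backends fail to resolve:

import * as ort from 'onnxruntime-node'

// Sanity-check sketch: try the GPU providers first, then fall back to CPU.
// If only the CPU session succeeds, the 'dml'/'cuda' backends are the ones
// that cannot be found in the Electron process.
export async function openSessionWithFallback (modelBytes) {
    try {
        return await ort.InferenceSession.create(modelBytes, { executionProviders: ['dml', 'cuda'] })
    } catch (e) {
        console.warn('GPU execution providers failed, falling back to CPU:', e)
        return await ort.InferenceSession.create(modelBytes, { executionProviders: ['cpu'] })
    }
}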

Urgency

Yes.

Platform

Windows

OS Version

Windows 10

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

"onnxruntime-node": "^1.16.0",

ONNX Runtime API

JavaScript

Architecture

X64

Execution Provider

CUDA, DirectML

Execution Provider Library Version

CUDA v11.3

Labels

api:Javascript (issues related to the Javascript API), ep:CUDA (issues related to the CUDA execution provider), ep:DML (issues related to the DirectML execution provider), platform:web (issues related to ONNX Runtime web), platform:windows (issues related to the Windows platform)
