onnxruntime and onnxruntime-gpu cannot coexist and support CUDAExecutionProvider #26996

@fzehracetin

Description

Describe the issue

Hi,
I use onnxruntime to run ONNX model inference in my project. The project is distributed as a library, and I aim to support both CPU-only and GPU environments. When I install both the onnxruntime and onnxruntime-gpu packages, I cannot use the CUDAExecutionProvider:

Specified provider 'CUDAExecutionProvider' is not in available provider names. Available providers: 'AzureExecutionProvider, CPUExecutionProvider'

It would be great if these packages could coexist without falling back to CPUExecutionProvider.
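As a stopgap in a library that must run in both kinds of environments, one can select providers based on what the installed wheel actually exposes, rather than hard-coding CUDAExecutionProvider. A minimal sketch (the try/except fallback is my own addition so the snippet also runs where onnxruntime is absent; `get_available_providers` is the real onnxruntime API):

```python
# Pick CUDA when the installed onnxruntime build exposes it, else fall back to CPU.
try:
    import onnxruntime
    available = onnxruntime.get_available_providers()
except ImportError:
    # Sketch-only fallback for environments without onnxruntime installed.
    available = ["CPUExecutionProvider"]

providers = (
    ["CUDAExecutionProvider", "CPUExecutionProvider"]
    if "CUDAExecutionProvider" in available
    else ["CPUExecutionProvider"]
)
```

This avoids the hard error, but it does not fix the underlying conflict: with both wheels installed, the CPU build's files can shadow the GPU build's, so CUDA never appears in `available`.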

To reproduce

Install both the onnxruntime and onnxruntime-gpu packages on a machine with a GPU (I am using a Tesla T4), then run:

import onnxruntime

model_path = "your_onnx_model_name.onnx"
providers = ["CUDAExecutionProvider"]
session = onnxruntime.InferenceSession(
    model_path,
    providers=providers,
)

This should raise the error I described above.
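To see why this happens, note that both wheels install into the same `onnxruntime` package namespace, so whichever is installed last overwrites shared files, yet pip still records both distributions. A small diagnostic sketch (the `installed_ort_wheels` helper is hypothetical, introduced here for illustration; `importlib.metadata` is standard-library):

```python
from importlib import metadata

def installed_ort_wheels():
    """Report which ONNX Runtime wheels pip has recorded as installed.

    Both wheels unpack into the same `onnxruntime` package directory, so
    pip can list both distributions even though only one set of native
    binaries survives on disk.
    """
    found = {}
    for dist in ("onnxruntime", "onnxruntime-gpu"):
        try:
            found[dist] = metadata.version(dist)
        except metadata.PackageNotFoundError:
            found[dist] = None
    return found

print(installed_ort_wheels())
```

If both report a version, the environment is in the conflicting state described in this issue; the usual workaround is to uninstall both and reinstall only onnxruntime-gpu.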

Urgency

Not urgent.

Platform

Linux

OS Version

Linux-6.6.105+-x86_64-with-glibc2.39

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.23.2

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA 13.0
