
How to implement a custom operator that supports multiple compute devices (CPU, CUDA)? #23317

@wangxianliang

Description


Ask a Question

I tried the following implementation, but it had no effect.

CUDA implementation:

struct CustomOPGpu : Ort::CustomOpBase<CustomOPGpu, CustomKernel> {
  const char* GetName() const { return "CustomOP"; }
  const char* GetExecutionProviderType() const { return "CUDAExecutionProvider"; }
  ...
};

CPU implementation:

struct CustomOPCpu : Ort::CustomOpBase<CustomOPCpu, CustomKernel> {
  const char* GetName() const { return "CustomOP"; }
  const char* GetExecutionProviderType() const { return "CPUExecutionProvider"; }
  ...
};
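For context, here is a minimal sketch of how both variants could be registered under a single custom op domain so that ONNX Runtime can pick the kernel matching the execution provider assigned to the node. The domain name `"my.custom.domain"` and the assumption that the two structs above compile are placeholders; this is not a confirmed fix, just the registration pattern from the C++ API:

```cpp
#include <onnxruntime_cxx_api.h>

// Assumes CustomOPCpu / CustomOPGpu are the structs defined above.
static CustomOPCpu custom_op_cpu;  // GetExecutionProviderType() == "CPUExecutionProvider"
static CustomOPGpu custom_op_gpu;  // GetExecutionProviderType() == "CUDAExecutionProvider"

void RegisterCustomOps(Ort::SessionOptions& session_options) {
  // Both ops share the name "CustomOP" but declare different execution
  // providers; adding both to the same domain lets the runtime dispatch
  // per provider. The domain must outlive the session, hence static.
  static Ort::CustomOpDomain domain{"my.custom.domain"};  // placeholder domain name
  domain.Add(&custom_op_cpu);
  domain.Add(&custom_op_gpu);
  session_options.Add(domain);
}
```

The session options must also enable the CUDA execution provider (e.g. via `AppendExecutionProvider_CUDA`) before session creation, otherwise only the CPU kernel can ever be selected.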

The doc (https://onnxruntime.ai/docs/reference/operators/add-custom-op.html) doesn't include any sample code for this.


Further information

  • Relevant Area:

  • Is this issue related to a specific model?
    Model name:
    Model opset:


Labels: ep:CUDA (issues related to the CUDA execution provider), stale (issues that have not been addressed in a while; categorized by a bot)