
Problem loading YOLOv11 on Android Studio using the Interpreter API and GPU delegate #997

Open
@AndrewQianNorthernVue

Description

I converted a YOLOv11 model to .tflite format using Ultralytics' export function. I then attempted to create the interpreter with Interpreter(model, options), where the options had a GPU delegate added:

        import org.tensorflow.lite.Interpreter
        import org.tensorflow.lite.gpu.GpuDelegate

        // Create a GPU delegate with default options and attach it to the interpreter.
        val delegate = GpuDelegate()
        val options = Interpreter.Options().addDelegate(delegate)
        interpreter = Interpreter(model, options)

This DID NOT work for YOLOv11 but DID work for YOLOv8, which I converted the exact same way. I also tried converting to ONNX and then to TFLite, but Interpreter(model, options) still stalls (it just never finishes) for YOLOv11. I suspect this is due to unsupported operations; I have looked at the graph in Netron, but inspecting it op by op is quite tedious.

I would like to know if this is a known issue and if there is a fix. Here is some additional context:

  1. I tried using just the CPU for YOLOv11, with options.setNumThreads(4) instead of the delegate code above. It worked just fine.

  2. I tried using the GPU delegate with YOLOv8, converted the exact same way, and it worked without issue.

  3. compatList.isDelegateSupportedOnThisDevice somehow returns false regardless of which model I am loading, but this doesn't stop my V8 model from loading and running on the GPU (see the sketch after this list).

  4. My device uses a MediaTek Dimensity chip.
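
For reference, here is a minimal sketch of how I gate the delegate on the compatibility check. The helper name buildInterpreter is just for illustration, and it assumes model is already loaded as a MappedByteBuffer:

        import org.tensorflow.lite.Interpreter
        import org.tensorflow.lite.gpu.CompatibilityList
        import org.tensorflow.lite.gpu.GpuDelegate
        import java.nio.MappedByteBuffer

        // Illustrative helper: add the GPU delegate only when the compatibility
        // check passes, otherwise fall back to 4 CPU threads.
        fun buildInterpreter(model: MappedByteBuffer): Interpreter {
            val options = Interpreter.Options()
            val compatList = CompatibilityList()
            if (compatList.isDelegateSupportedOnThisDevice) {
                // Use the delegate options the library recommends for this device.
                options.addDelegate(GpuDelegate(compatList.bestOptionsForThisDevice))
            } else {
                options.setNumThreads(4)
            }
            return Interpreter(model, options)
        }

On my device this check always returns false, so in practice I add the delegate unconditionally (as in the first snippet); that is the configuration where YOLOv8 runs on the GPU but YOLOv11 stalls.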
