Description
I converted a YoloV11 model to .tflite format using Ultralytics' export function. I then attempted to load it with Interpreter(model, options), where the options had a GPU delegate set:
val delegate = GpuDelegate()
val options = Interpreter.Options().addDelegate(delegate)
interpreter = Interpreter(model, options)
This DID NOT work for YoloV11 but DID work for YoloV8, which I converted the exact same way. I also attempted to convert to ONNX and then to tflite, but Interpreter(model, options) would still stall (never finish) for YoloV11. I suspect this is due to unsupported operations; I have inspected the model in Netron, but that is quite tedious.
I would like to know if this is a known issue and if there is a fix. Here is some additional context:
- I attempted to run YoloV11 on the CPU only, using `options.setNumThreads(4)` instead of the delegate code above. It worked just fine.
- I attempted to use the GPU delegate for YoloV8, which was converted the exact same way, and it worked without issue.
- `compatList.isDelegateSupportedOnThisDevice` somehow returns false no matter which model I am loading, but this doesn't stop my V8 model from loading and running on the GPU.
- My device runs a MediaTek Dimensity chip.
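For reference, the pattern I was following is the one from the TensorFlow Lite Android docs: gate the delegate on `CompatibilityList` and fall back to CPU threads otherwise. A minimal sketch, assuming the `org.tensorflow.lite` and `org.tensorflow.lite.gpu` artifacts are on the classpath and `model` is an already-loaded model buffer:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate

val compatList = CompatibilityList()
val options = Interpreter.Options()
if (compatList.isDelegateSupportedOnThisDevice) {
    // Use the delegate options TFLite recommends for this device.
    options.addDelegate(GpuDelegate(compatList.bestOptionsForThisDevice))
} else {
    // Fall back to multi-threaded CPU inference.
    options.setNumThreads(4)
}
val interpreter = Interpreter(model, options)
```

With this guard in place, the Dimensity device described above would take the CPU branch (since `isDelegateSupportedOnThisDevice` returns false here), which matches the observation that the CPU path works for both models.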