Description
I'm trying to run the face-detection-0206 detector on batches of more than one image.
My approach is based on (readapted from) this guide.
Here is my code:
import numpy as np
import openvino as ov
from openvino.runtime import PartialShape
import cv2
core = ov.Core()
detection_model_xml = "face-detection-0206.xml"
detection_model = core.read_model(model=detection_model_xml)
detection_input_layer = detection_model.input(0)
# the model looks like:
# <Model: 'torch-jit-export'
# inputs[
# <ConstOutput: names[image] shape[1,3,640,640] type: f32>
# ]
# outputs[
# <ConstOutput: names[boxes] shape[..750,5] type: f32>,
# <ConstOutput: names[labels] shape[..750] type: i64>
# ]>
new_shape = PartialShape([2,3,640,640]) # trying batch = 2, but ideally I'd like to use batch = -1 to support any batch size
detection_model.reshape({detection_input_layer.any_name: new_shape})
detection_compiled_model = core.compile_model(model=detection_model, device_name="CPU")
# the compiled model looks like
#<CompiledModel:
# inputs[
# <ConstOutput: names[image] shape[2,3,640,640] type: f32>
# ]
# outputs[
# <ConstOutput: names[boxes] shape[..750,5] type: f32>,
# <ConstOutput: names[labels] shape[..750] type: i64>
# ]>
# to run the model:
output = detection_compiled_model(input_data)
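For reference, input_data is an NCHW float32 batch. A minimal sketch of building one with NumPy (to_batch is a hypothetical helper, and it assumes the frames are already resized to 640x640; the actual resizing, e.g. with cv2.resize, is left out):

```python
import numpy as np

def to_batch(frames, size=(640, 640)):
    """Stack HWC uint8 frames into an NCHW float32 batch.

    Assumes every frame is already resized to `size`; resizing
    (e.g. cv2.resize) is omitted to keep the sketch minimal.
    """
    batch = np.stack([f.transpose(2, 0, 1) for f in frames]).astype(np.float32)
    assert batch.shape[1:] == (3, *size)
    return batch

# two placeholder frames standing in for decoded images
frames = [np.zeros((640, 640, 3), dtype=np.uint8) for _ in range(2)]
input_data = to_batch(frames)
print(input_data.shape)  # (2, 3, 640, 640)
```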
If I change the batch size of input_data (batch = 2 images with PartialShape([2,3,640,640]), or any batch size with PartialShape([-1,3,640,640])), the model takes longer on larger batches, but the output is always the same and corresponds to the predictions for the first image only.
I suspect this is because the output layers are dynamic (boxes: shape[..750,5] and labels: shape[..750]) and are not reshaped according to the input batch size.
However, I only started using OpenVINO a couple of days ago, so I'm not sure how to fix this and allow larger batches.
Any suggestions?
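In the meantime I'm falling back to running the images one at a time. A sketch of that workaround (infer_per_image is a hypothetical helper; `infer` stands in for the compiled model call, here replaced by a fake so the snippet is self-contained):

```python
import numpy as np

def infer_per_image(infer, batch):
    """Split an NCHW batch along axis 0 and run one batch-1 request
    per image, collecting one result dict per image."""
    return [infer(img[np.newaxis]) for img in batch]

# stand-in for detection_compiled_model: returns fixed-shape fake detections
fake_model = lambda x: {"boxes": np.zeros((3, 5), np.float32),
                        "labels": np.zeros(3, np.int64)}

results = infer_per_image(fake_model, np.zeros((4, 3, 640, 640), np.float32))
print(len(results))  # 4
```

This gives correct per-image results but obviously forfeits any batching speed-up.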