
[Bug]: Port for tensor name x was not found #21615

Open
@jikechao

Description

OpenVINO Version

openvino-nightly 2023.2.0.dev20231101

Operating System

Ubuntu 18.04 (LTS)

Device used for inference

CPU

Framework

ONNX

Model used

https://github.com/jikechao/onnx_models/blob/main/dropout.onnx

Issue description

For the given model, which contains a Dropout operator, OpenVINO converts it to OV IR and compiles it without issues. However, running inference raises an error:
Port for tensor name x was not found.
I consider this a bug because OpenVINO's behavior does not align with ONNX Runtime, which runs the same model successfully instead of crashing.
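Inspecting the input ports right after conversion makes it easy to see whether the ONNX tensor name "x" survives conversion. The following is a minimal diagnostic sketch (it assumes the dropout.onnx model linked above is in the working directory):

import openvino as ov

# Print the tensor names and shapes attached to each input port of the converted model.
# If "x" does not appear here, the name-based lookup during inference cannot succeed.
ov_model = ov.convert_model('dropout.onnx')
for port in ov_model.inputs:
    print(port.get_names(), port.get_partial_shape())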

Step-by-step reproduction

import openvino as ov
import numpy as np

# Convert the ONNX model to an OpenVINO model and save it as IR.
ov_model = ov.convert_model('dropout.onnx')

ir_path = "temp_OVIR.xml"
ov.save_model(ov_model, ir_path, compress_to_fp16=False)

# Read the IR back and compile it for CPU.
core = ov.Core()
model = core.read_model(ir_path)
compiled_model = core.compile_model(model=model, device_name="CPU")

# Feed the input by its ONNX tensor name "x"; this is where the error is raised.
input_x = np.random.random([3, 4, 5]).astype(np.float32)
input_data = {"x": input_x}

result = []
for output in ov_model.outputs:
    result.append(compiled_model(input_data)[output])

Relevant log output

Traceback (most recent call last):
  File "test.py", line 20, in <module>
    result.append(compiled_model(input_data)[output])
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\ie_api.py", line 384, in __call__
    return self._infer_request.infer(
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\ie_api.py", line 143, in infer
    return OVDict(super().infer(_data_dispatch(
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 354, in _data_dispatch
    return create_shared(inputs, request) if is_shared else create_copied(inputs, request)
  File "C:\software\conda\envs\torch\lib\functools.py", line 877, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 182, in _
    return {k: value_to_tensor(v, request=request, is_shared=True, key=k) for k, v in request._inputs_data.items()}
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 182, in <dictcomp>
    return {k: value_to_tensor(v, request=request, is_shared=True, key=k) for k, v in request._inputs_data.items()}
  File "C:\software\conda\envs\torch\lib\functools.py", line 877, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 59, in _
    tensor = get_request_tensor(request, key)
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 27, in get_request_tensor
    return request.get_tensor(key)
RuntimeError: Exception from src\inference\src\infer_request.cpp:194:
Check '::getPort(port, name, {_impl->get_inputs(), _impl->get_outputs()})' failed at src\inference\src\infer_request.cpp:194:
Port for tensor name x was not found.
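
A possible workaround, shown below as a minimal sketch (untested against this model), is to address the input by port rather than by the tensor name "x", which bypasses the failing name lookup:

import openvino as ov
import numpy as np

core = ov.Core()
compiled_model = core.compile_model(core.read_model("temp_OVIR.xml"), device_name="CPU")
input_x = np.random.random([3, 4, 5]).astype(np.float32)

# Feed the input via the port object instead of the string name.
result_by_port = compiled_model({compiled_model.input(0): input_x})

# Or pass the inputs positionally.
result_positional = compiled_model([input_x])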

Issue submission checklist

  • I'm reporting an issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
