[Bug]: Qwen2.5 VL inference in GPU gives output as !!!!!!!!!! #30440

Open
@nvsreerag

Description

OpenVINO Version

2025.2.0.dev20250505

Operating System

Windows System

Device used for inference

GPU

Framework

PyTorch

Model used

Qwen2.5 VL

Issue description

I am trying to run inference of the Qwen2.5 VL model with OpenVINO and OpenVINO GenAI nightly. It works on CPU, but when running on GPU the output appears as !!!!!!!!!! instead of readable text.

Python package details are:

openvino                  2025.2.0.dev20250505
openvino-genai            2025.2.0.0.dev20250505

Do you have any suggestions or workarounds to resolve this?
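No reproducer was attached, so here is a minimal sketch of what such an inference call typically looks like with OpenVINO GenAI's `VLMPipeline`. The model directory `./qwen2.5-vl-ov` (an OpenVINO IR export of the model) and the image path `./demo.jpg` are placeholder assumptions, not paths from the report:

```python
# Hypothetical minimal reproducer for Qwen2.5 VL inference with
# OpenVINO GenAI. Paths below are placeholders: ./qwen2.5-vl-ov is
# assumed to hold the exported OpenVINO IR model, ./demo.jpg a test image.
import numpy as np
import openvino as ov
import openvino_genai
from PIL import Image

# Load the input image and wrap it in an OpenVINO tensor.
image = Image.open("./demo.jpg").convert("RGB")
image_tensor = ov.Tensor(np.array(image))

# With device "CPU" the reporter gets readable output; with "GPU"
# the output degrades to "!!!!!!!!!!".
pipe = openvino_genai.VLMPipeline("./qwen2.5-vl-ov", "GPU")
result = pipe.generate("Describe this image.",
                       images=[image_tensor],
                       max_new_tokens=100)
print(result)
```

Swapping the device string to "CPU" in the `VLMPipeline` constructor is the only change needed to compare the two backends.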

Step-by-step reproduction

No response

Relevant log output

!!!!!!!!!!

Issue submission checklist

  • I'm reporting an issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
