43 changes: 43 additions & 0 deletions samples/cpp/module_genai/config_yaml/Qwen3-Omni/config_prompt.yaml
@@ -0,0 +1,43 @@
global_context:
  model_type: "qwen3_omni"

pipeline_modules:
  pipeline_params:
    type: "ParameterModule"
    outputs:
      - name: "prompt"
        type: "String"
Copilot AI Mar 7, 2026
A blank line is missing between the pipeline_params and prompt_encoder module definitions. All other pipeline YAML configs in this directory consistently separate top-level modules with blank lines (e.g., Qwen3.5-0.8B/config_text.yaml:10, Qwen3-Omni/config.yaml:12). Add a blank line after line 9 for consistency.

Suggested change:
-        type: "String"
+        type: "String"
+
  prompt_encoder:
    type: "TextEncoderModule"
    device: "GPU"
    inputs:
      - name: "prompt"
        type: "String"
        source: "pipeline_params.prompt"
    outputs:
      - name: "input_ids"
        type: "OVTensor"
Copilot AI Mar 7, 2026
The prompt_encoder is missing the mask output declaration. The TextEncoderModule::run() method unconditionally writes to this->outputs["mask"].data for the QWEN3_OMNI model type (see src/cpp/src/module_genai/modules/md_text_encoder.cpp:159). While this won't crash (the map auto-inserts), it's inconsistent with every other TextEncoderModule config in the codebase — for example, the analogous text-only config at samples/cpp/module_genai/config_yaml/Qwen3.5-0.8B/config_text.yaml:21-22 and the sibling config.yaml at samples/cpp/module_genai/config_yaml/Qwen3-Omni/config.yaml:48-49 both declare the mask output. Add the mask output to maintain consistency and properly document the module's outputs.

Suggested change:
-        type: "OVTensor"
+        type: "OVTensor"
+      - name: "mask"
+        type: "OVTensor"
    params:
      model_path: "./tests/module_genai/cpp/test_models/Qwen3-Omni-4B-Instruct-multilingual/"

  llm:
    type: "LLMInferenceSDPAModule"
    device: "GPU"
    inputs:
      - name: "input_ids"
        type: "OVTensor"
        source: "prompt_encoder.input_ids"
    outputs:
      - name: "generated_text"
        type: "String"
    params:
      model_path: "./tests/module_genai/cpp/test_models/Qwen3-Omni-4B-Instruct-multilingual/qwen3_omni_text_model.xml"
      max_new_tokens: 512

  pipeline_result:
    type: "ResultModule"
    description: "Collects final results and formats the output structure."
    inputs:
      - name: "generated_text"
        type: "String"
        source: "llm.generated_text"