[Bugfix] Robust Audio Data Handling in _create_audio_choice#1222

Open
LJH-LBJ wants to merge 1 commit into vllm-project:main from LJH-LBJ:fix-Qwen2.5-omni-stream-audio-tensor

Conversation

LJH-LBJ (Contributor) commented Feb 5, 2026


Purpose

  • Add type and shape checks for audio_data in _create_audio_choice
  • Fix: #1083
  • Fix: JiusiServe#93
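The kind of type/shape guard this PR describes can be sketched as follows. This is an illustrative assumption, not the actual patch: the helper name `normalize_audio_data` is hypothetical, and NumPy stands in for torch tensors so the sketch is self-contained.

```python
import numpy as np

def normalize_audio_data(audio_data):
    """Illustrative sketch: coerce audio_data (a single array or a list of
    array chunks) into one 1-D waveform, rejecting unexpected input.
    NumPy stands in for torch; the real code concatenates with
    torch.cat(..., dim=-1)."""
    if isinstance(audio_data, list):
        if not audio_data:
            raise ValueError("audio_data list is empty")
        # Concatenate chunks along the last (time) axis,
        # analogous to torch.cat(audio_data, dim=-1).
        audio_data = np.concatenate([np.asarray(c) for c in audio_data], axis=-1)
    audio_data = np.asarray(audio_data)
    if audio_data.ndim > 1:
        # Flatten e.g. a (1, N) batch tensor down to the raw sample stream.
        audio_data = audio_data.reshape(-1)
    return audio_data

chunks = [np.zeros(160), np.ones(320)]
wave = normalize_audio_data(chunks)
# wave.shape == (480,)
```

The point of the guard is that downstream audio encoding sees one consistent 1-D waveform regardless of whether the upstream processor handed over a single tensor or a list of chunks.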

Test Plan

vllm serve /workspace/models/Qwen2.5-Omni-7B --omni --port 8014 --stage-configs-path ./tests/e2e/offline_inference/stage_configs/qwen2_5_omni_ci.yaml

python openai_chat_completion_client_for_multimodal_generation.py --query-type mixed_modalities --video-path sample_demo_1.mp4 --image-path cherry_blossom.jpg --audio-path mary_had_lamb.ogg --prompt "What are the main activities shown in this video?" --stream

Test Result

(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361] [Summary] {'e2e_requests': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]  'e2e_total_time_ms': 42622.36046791077,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]  'e2e_sum_time_ms': 42621.888875961304,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]  'e2e_total_tokens': 9210,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]  'e2e_avg_time_per_request_ms': 42621.888875961304,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]  'e2e_avg_tokens_per_s': 216.08615298123095,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]  'wall_time_ms': 42622.36046791077,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]  'final_stage_id': 2,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]  'stages': [{'stage_id': 0,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'requests': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'tokens': 7989,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'total_time_ms': 7724.236726760864,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'avg_time_per_request_ms': 7724.236726760864,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'avg_tokens_per_s': 1034.2769496333347},
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]             {'stage_id': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'requests': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'tokens': 1221,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'total_time_ms': 24869.222402572632,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'avg_time_per_request_ms': 24869.222402572632,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'avg_tokens_per_s': 49.096830622001754},
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]             {'stage_id': 2,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'requests': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'tokens': 0,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'total_time_ms': 9769.651412963867,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'avg_time_per_request_ms': 9769.651412963867,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]              'avg_tokens_per_s': 0.0}],
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]  'transfers': [{'from_stage': 0,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'to_stage': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'samples': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'total_bytes': 144337127,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'total_time_ms': 174.50428009033203,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'tx_mbps': 6617.012576438078,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'rx_samples': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'rx_total_bytes': 144337127,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'rx_total_time_ms': 165.90595245361328,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'rx_mbps': 6959.94929008258,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'total_samples': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'total_transfer_time_ms': 343.6088562011719,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'total_mbps': 3360.498413125773},
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                {'from_stage': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'to_stage': 2,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'samples': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'total_bytes': 29761143,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'total_time_ms': 31.46052360534668,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'tx_mbps': 7567.869721009253,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'rx_samples': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'rx_total_bytes': 29761143,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'rx_total_time_ms': 32.97233581542969,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'rx_mbps': 7220.875867962746,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'total_samples': 1,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'total_transfer_time_ms': 67.53969192504883,
(APIServer pid=626669) INFO 02-05 17:27:14 [async_omni.py:361]                 'total_mbps': 3525.1736751215253}]}



Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>

chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 73ee36b903


Comment on lines +1682 to +1683
if isinstance(audio_data, list):
    audio_data = torch.cat(audio_data, dim=-1)


P1: Preserve delta-only audio payloads in streaming mode

This change removes the stream-specific path and now always concatenates the full audio_data list. In streaming responses, however, _create_audio_choice(..., stream=True) is called for every emitted omni_res (see the loop around final_output_type == "audio"), and the output processor accumulates audio chunks in a growing list across emissions. As a result, each streamed chunk now contains all prior audio again instead of only the newest delta: clients that append deltas hear duplicated audio, and payload size grows rapidly over long generations.
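A delta-only emission path of the kind the review is asking to preserve could look like the sketch below. The class name, the `_emitted_samples` attribute, and the use of NumPy in place of torch are all illustrative assumptions; the actual vllm-omni code paths differ.

```python
import numpy as np

class AudioDeltaEmitter:
    """Illustrative sketch of delta-only streaming: concatenate the
    accumulated chunk list each time, but emit only the samples that
    appeared since the previous call, so clients can safely append
    every delta they receive."""

    def __init__(self):
        self._emitted_samples = 0  # samples already sent to the client

    def next_delta(self, audio_chunks):
        # audio_chunks is the processor's growing list of chunks.
        full = np.concatenate(audio_chunks, axis=-1)
        delta = full[..., self._emitted_samples:]
        self._emitted_samples = full.shape[-1]
        return delta

emitter = AudioDeltaEmitter()
first = emitter.next_delta([np.zeros(100)])                 # 100 new samples
second = emitter.next_delta([np.zeros(100), np.ones(50)])   # only the 50 new ones
```

Tracking the emitted-sample count (rather than re-sending the full concatenation) keeps each streamed payload proportional to the new audio only, which is what delta-appending clients expect.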


