
Add UCF101 dataset support for multi-modal benchmark#1210

Open
futurenitian wants to merge 3 commits into vllm-project:main from futurenitian:benchmark-ucf101

Conversation

@futurenitian

This PR follows RFC #752.

Purpose

Add support for the UCF101 dataset in the multi-modal benchmark.

File Structure

Core Benchmark Module (vllm_omni/benchmarks/)

vllm_omni/benchmarks/
├── data_modules/
│   ├── __init__.py
│   ├── random_multi_modal_dataset.py
│   └── ucf101_multi_modal_dataset.py
├── metrics/
│   ├── __init__.py
│   └── metrics.py
├── patch/
│   ├── __init__.py
│   └── patch.py
└── serve.py

Test Plan

Test command

vllm bench serve \
  --omni \
  --port 43845 \
  --endpoint /v1/chat/completions \
  --backend openai-chat-omni \
  --model /home/models/Qwen/Qwen3-Omni-30B-A3B-Instruct \
  --dataset-name hf \
  --dataset-path dataset_path/Class_Name \
  --num-prompts 2 \
  --random-prefix-len 5 \
  --random-input-len 10 \
  --random-output-len 100 \
  --percentile-metrics ttft,tpot,itl,e2el,audio_ttfp,audio_rtf \
  --ignore-eos

Test Result

============ Serving Benchmark Result ============
Successful requests:                     2         
Failed requests:                         0         
Benchmark duration (s):                  13.94     
Request throughput (req/s):              0.14      
Peak concurrent requests:                2.00      
----------------End-to-end Latency----------------
Mean E2EL (ms):                          13726.47  
Median E2EL (ms):                        13726.47  
P99 E2EL (ms):                           13931.60  
================== Text Result ===================
Total input tokens:                      30        
Total generated tokens:                  10101     
Output token throughput (tok/s):         724.67    
Peak output token throughput (tok/s):    107.00    
Peak concurrent requests:                2.00      
Total Token throughput (tok/s):          726.82    
---------------Time to First Token----------------
Mean TTFT (ms):                          9069.45   
Median TTFT (ms):                        9069.45   
P99 TTFT (ms):                           9153.89   
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          0.92      
Median TPOT (ms):                        0.92      
P99 TPOT (ms):                           0.95      
---------------Inter-token Latency----------------
Mean ITL (ms):                           25.49     
Median ITL (ms):                         16.20     
P99 ITL (ms):                            543.62    
================== Audio Result ==================
Total audio duration generated(s):       15.79     
Total audio frames generated:            379050    
Audio throughput(audio duration/s):      1.13      
---------------Time to First Packet---------------
Mean AUDIO_TTFP (ms):                    13596.52  
Median AUDIO_TTFP (ms):                  13596.52  
P99 AUDIO_TTFP (ms):                     13794.41  
-----------------Real Time Factor-----------------
Mean AUDIO_RTF:                          0.00      
Median AUDIO_RTF:                        0.00      
P99 AUDIO_RTF:                           0.00      
==================================================


Add support for 'hf' dataset in input request sampling.

Signed-off-by: future <3172516720@qq.com>

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3007854fd3


elif args.dataset_name == "hf":
    if not args.dataset_path:
        raise ValueError("dataset_path must be specified for ucf101-subset dataset.")
    dataset = UCF101MultiModalDataset(


P1 Badge Add missing import for UCF101MultiModalDataset

When args.dataset_name == "hf", this branch constructs UCF101MultiModalDataset, but patch.py never imports that symbol, so the benchmark raises NameError at runtime as soon as the branch is hit. This makes the new dataset path unusable in the openai-chat-omni flow until the class is imported from vllm_omni.benchmarks.data_modules.ucf101_multi_modal_dataset.


Comment on lines +78 to +80
    return {
        "type": "video_url",
        "video_url": {"url": f"data:video/mp4;base64,{video_base64}"},


P2 Badge Fix MIME type for .avi videos in data URLs

UCF101 includes many .avi files, and load_ucf101_subset explicitly allows .avi, but process_ucf101_video always emits data:video/mp4;base64,.... When the selected file is AVI, downstream clients that rely on the MIME header can fail to decode or mis-handle the payload. The MIME type should be derived from the file extension (e.g., video/x-msvideo for .avi) to avoid decoding errors for the default UCF101 files.
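
One way to implement this suggestion (a hedged sketch, not the PR's code: the helper names are hypothetical, and the stdlib `mimetypes` module supplies the extension-to-MIME mapping):

```python
import mimetypes

# Fallback table in case mimetypes lacks an entry on some platform.
# UCF101 ships .avi, which maps to video/x-msvideo in the stdlib table.
_FALLBACK = {".avi": "video/x-msvideo", ".mp4": "video/mp4"}

def video_mime_type(path: str) -> str:
    """Derive the data-URL MIME type from the video file's extension."""
    guessed, _ = mimetypes.guess_type(path)
    if guessed:
        return guessed
    ext = ("." + path.rsplit(".", 1)[-1].lower()) if "." in path else ""
    return _FALLBACK.get(ext, "application/octet-stream")

def video_data_url(path: str, video_base64: str) -> str:
    """Build a data URL whose MIME header matches the actual container."""
    return f"data:{video_mime_type(path)};base64,{video_base64}"
```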


Signed-off-by: future <3172516720@qq.com>