
fix: benchmark cannot run as expected#8

Merged
zhaochenyang20 merged 1 commit into zhaochenyang20:main from alphabetc1:fix/benchmark
Feb 21, 2026

Conversation

@alphabetc1
Collaborator

@alphabetc1 alphabetc1 commented Feb 21, 2026

Motivation

Fixes #7 and #6.

Modification

  1. Switches the benchmark module from sglang.bench_serving to sglang.multimodal_gen.benchmarks.bench_serving, which is the correct entry point for diffusion model benchmarks.
  2. Updates example model references from Wan-AI/Wan2.2-T2V-A14B-Diffusers to Qwen/Qwen-Image in both bench_router.py and bench_routing_algorithms.py.
  3. Adds outputs/ to .gitignore and removes accidentally committed generated files (images and stale benchmark results).
  4. Fixes a missing trailing newline in development.md.
  5. Uses Qwen-Image instead of stable-diffusion-3 and returns the image data as Base64, so no storage configuration is required.
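Item 5 above means the server returns the generated image inline rather than writing it to a storage backend. A minimal client-side sketch of decoding such a response; the `data[0]["b64_json"]` field name is an assumption borrowed from OpenAI-style image APIs, not confirmed by this PR:

```python
import base64

def save_b64_image(response: dict, path: str) -> int:
    """Decode a Base64-encoded image from a response payload and write it to disk.

    The data[0]["b64_json"] field name is an assumption (OpenAI-style image
    API); adjust it to whatever the server actually returns.
    """
    raw = base64.b64decode(response["data"][0]["b64_json"])
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)

# Simulated response: the image travels inline, no storage backend needed.
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
resp = {"data": [{"b64_json": base64.b64encode(fake_png).decode()}]}
print(save_b64_image(resp, "out.png"))  # prints 24 (bytes written)
```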

Benchmark Results:

SGLANG_USE_MODELSCOPE=TRUE python tests/benchmarks/diffusion_router/bench_routing_algorithms.py --model Qwen/Qwen-Image --num-workers 2 --num-prompts 10 --max-concurrency 2

(values rounded for readability; full-precision numbers are in the JSON below)

algorithm      throughput_qps  latency_mean  latency_median  latency_p99  duration  completed  failed  qps_delta%  mean_delta%  median_delta%  p99_delta%
least-request  0.0346          57.15         55.07           68.10        288.83    10         0       +32.69      -9.65        -0.83          -26.86
round-robin    0.0339          58.63         55.09           75.74        294.78    10         0       +30.01      -7.33        -0.79          -18.66
random         0.0261          63.26         55.53           93.11        344.92    9          1       0.00        0.00         0.00           0.00
{
  "results": {
    "least-request": {
      "duration": 288.82950428593904,
      "completed_requests": 10,
      "failed_requests": 0,
      "throughput_qps": 0.03462250168909363,
      "latency_mean": 57.153994519263506,
      "latency_median": 55.072025599190965,
      "latency_p99": 68.1018473076215,
      "latency_p50": 55.072025599190965,
      "peak_memory_mb_max": 50108.0,
      "peak_memory_mb_mean": 50107.2,
      "peak_memory_mb_median": 50108.0
    },
    "round-robin": {
      "duration": 294.77725109877065,
      "completed_requests": 10,
      "failed_requests": 0,
      "throughput_qps": 0.03392392039319653,
      "latency_mean": 58.62613519844599,
      "latency_median": 55.09087577019818,
      "latency_p99": 75.73814111900516,
      "latency_p50": 55.09087577019818,
      "peak_memory_mb_max": 50108.0,
      "peak_memory_mb_mean": 50107.2,
      "peak_memory_mb_median": 50108.0
    },
    "random": {
      "duration": 344.92078665085137,
      "completed_requests": 9,
      "failed_requests": 1,
      "throughput_qps": 0.026092947564538397,
      "latency_mean": 63.26173405043988,
      "latency_median": 55.53103453991935,
      "latency_p99": 93.10949928807095,
      "latency_p50": 55.53103453991935,
      "peak_memory_mb_max": 50108.0,
      "peak_memory_mb_mean": 50107.11111111111,
      "peak_memory_mb_median": 50108.0
    }
  },
  "rows": [
    {
      "algorithm": "least-request",
      "throughput_qps": 0.03462250168909363,
      "latency_mean": 57.153994519263506,
      "latency_median": 55.072025599190965,
      "latency_p99": 68.1018473076215,
      "duration": 288.82950428593904,
      "completed_requests": 10,
      "failed_requests": 0,
      "error": "",
      "throughput_qps_delta_pct": 32.68911687136227,
      "latency_mean_delta_pct": -9.654714058749235,
      "latency_median_delta_pct": -0.8265809281806474,
      "latency_p99_delta_pct": -26.858325059915124
    },
    {
      "algorithm": "round-robin",
      "throughput_qps": 0.03392392039319653,
      "latency_mean": 58.62613519844599,
      "latency_median": 55.09087577019818,
      "latency_p99": 75.73814111900516,
      "duration": 294.77725109877065,
      "completed_requests": 10,
      "failed_requests": 0,
      "error": "",
      "throughput_qps_delta_pct": 30.01183675891339,
      "latency_mean_delta_pct": -7.327650627309435,
      "latency_median_delta_pct": -0.7926356376536625,
      "latency_p99_delta_pct": -18.656912884173764
    },
    {
      "algorithm": "random",
      "throughput_qps": 0.026092947564538397,
      "latency_mean": 63.26173405043988,
      "latency_median": 55.53103453991935,
      "latency_p99": 93.10949928807095,
      "duration": 344.92078665085137,
      "completed_requests": 9,
      "failed_requests": 1,
      "error": "",
      "throughput_qps_delta_pct": 0.0,
      "latency_mean_delta_pct": 0.0,
      "latency_median_delta_pct": 0.0,
      "latency_p99_delta_pct": 0.0
    }
  ],
  "baseline": "random"
}
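The delta_pct columns are percentage changes relative to the random baseline (the row with all-zero deltas). A minimal sketch of how such a value is derived from the numbers above; the `delta_pct` helper is illustrative, not the benchmark's actual code:

```python
def delta_pct(value: float, baseline: float) -> float:
    """Percentage change of value relative to baseline (positive = higher)."""
    return (value - baseline) / baseline * 100.0

# Throughput numbers taken verbatim from the results above.
baseline_qps = 0.026092947564538397      # random (the baseline algorithm)
least_request_qps = 0.03462250168909363  # least-request

# Matches the reported throughput_qps_delta_pct of 32.689... for least-request.
print(round(delta_pct(least_request_qps, baseline_qps), 2))  # prints 32.69
```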

@gemini-code-assist
Contributor

Summary of Changes

Hello @alphabetc1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request addresses an issue where benchmark scripts were not running as expected by updating the core benchmarking module and correcting model references. It also includes general cleanup by ignoring generated output files and removing previously committed stale data, alongside a minor documentation formatting fix. These changes ensure the benchmark system functions correctly and maintains a clean repository state.

Highlights

  • Benchmark Module Update: The benchmark module has been switched from sglang.bench_serving to sglang.multimodal_gen.benchmarks.bench_serving, ensuring the correct entry point for diffusion model benchmarks is used.
  • Model Reference Updates: Example model references in benchmark scripts (bench_router.py and bench_routing_algorithms.py) have been updated from Wan-AI/Wan2.2-T2V-A14B-Diffusers to Qwen/Qwen-Image.
  • .gitignore and File Cleanup: The outputs/ directory has been added to .gitignore, and accidentally committed generated files (images and stale benchmark results) have been removed.
  • Documentation Fix: A missing trailing newline in development.md has been corrected.
Changelog
  • .gitignore
    • Added 'outputs/' directory to the ignore list for generated files.
  • development.md
    • Fixed a missing trailing newline at the end of the file.
  • tests/benchmarks/diffusion_router/bench_router.py
    • Updated the default model reference in the example command from 'Wan-AI/Wan2.2-T2V-A14B-Diffusers' to 'Qwen/Qwen-Image'.
    • Changed the benchmark module from sglang.bench_serving to sglang.multimodal_gen.benchmarks.bench_serving.
    • Removed the --backend sglang argument.
    • Renamed the --dataset-name argument to --dataset.
    • Refactored the --max-concurrency argument handling to be conditional.
    • Adjusted the example max-concurrency value from 4 to 2.
  • tests/benchmarks/diffusion_router/bench_routing_algorithms.py
    • Updated the default model reference in the example command from 'Wan-AI/Wan2.2-T2V-A14B-Diffusers' to 'Qwen/Qwen-Image'.
  • tests/benchmarks/diffusion_router/outputs/routing_algo_compare_20260220_215441/routing_algorithm_comparison.csv
    • Removed an accidentally committed benchmark results CSV file.
  • tests/benchmarks/diffusion_router/outputs/routing_algo_compare_20260220_215441/routing_algorithm_comparison.json
    • Removed an accidentally committed benchmark results JSON file.
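The conditional --max-concurrency handling noted in the changelog can be sketched as follows. `build_bench_cmd` and its parameters are hypothetical names for illustration, not the actual code in bench_router.py; the flag names come from the example command in the PR description:

```python
def build_bench_cmd(model, num_prompts, max_concurrency=None):
    """Assemble a bench_serving invocation; add --max-concurrency only when set."""
    cmd = [
        "python", "-m", "sglang.multimodal_gen.benchmarks.bench_serving",
        "--model", model,
        "--num-prompts", str(num_prompts),
    ]
    if max_concurrency is not None:  # conditional handling: omit the flag entirely when unset
        cmd += ["--max-concurrency", str(max_concurrency)]
    return cmd

print(" ".join(build_bench_cmd("Qwen/Qwen-Image", 10, 2)))
```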

@alphabetc1 alphabetc1 force-pushed the fix/benchmark branch 2 times, most recently from d6643db to 516f7a4, on February 21, 2026 09:10
@alphabetc1 alphabetc1 changed the title [CI] fix: benchmark cannot run as expected fix: benchmark cannot run as expected Feb 21, 2026
@zhaochenyang20 zhaochenyang20 merged commit 7e0a78c into zhaochenyang20:main Feb 21, 2026
1 check passed
@alphabetc1 alphabetc1 deleted the fix/benchmark branch February 22, 2026 04:47


Development

Successfully merging this pull request may close these issues.

Two benchmark can not run as expected