
Conversation

Collaborator

Honny1 commented Nov 3, 2025

Summary by CodeRabbit

  • New Features

    • Added a broad container benchmarking suite (many new CPU, memory, file I/O, network, image and lifecycle benchmarks) and improved synthetic-benchmark reporting with per-run logs.
  • Bug Fixes

    • Improved Windows stop reliability for container engines by adding a delay to avoid SSH termination issues.
  • Documentation

    • Added autogenerated documentation pages for all new benchmarks and reorganized the benchmarks index.

openshift-ci bot added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) on Nov 3, 2025

openshift-ci bot commented Nov 3, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign tosokin for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci bot commented Nov 3, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all


coderabbitai bot commented Nov 3, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

Adds ~50 new autogenerated Ansible toolbox roles, defaults, tasks and RST docs for many Container_Bench benchmarks; migrates benchmark roles to use an exec_props dict, expands test matrix/images and Windows/Chocolatey support, refactors shared image-pull tasks, and extends visualization parsing/reporting to handle synthetic benchmark artifacts.

Changes

Cohort / File(s) Change Summary
Docs (auto-generated RST)
docs/toolbox.generated/Container_Bench.*.rst (24 files), docs/toolbox.generated/index.rst
Added many autogenerated ReST pages for new container_bench benchmarks and reorganized container_bench index entries.
Test config & manager
projects/container_bench/testing/config.yaml, projects/container_bench/testing/config_manager.py
Expanded benchmark suite and images, added Chocolatey config keys and ConfigManager.get_choco_config(), adjusted Podman machine and repo_version selections.
Installer / Windows support
projects/container_bench/testing/install.py, projects/container_bench/testing/container_engine.py
Added Windows branch to install dependencies via Chocolatey; appended Start-Sleep on Windows stop commands to mitigate SSH issues.
Toolbox API
projects/container_bench/toolbox/container_bench.py
Renamed/added many public benchmark methods and migrated their signatures to accept a single exec_props dict; methods invoke RunAnsibleRole(locals()). A hedged sketch follows this table.
Ansible role defaults
projects/container_bench/toolbox/*/defaults/main/config.yml (≈30 files)
Replaced many per-role flat exec vars with standardized *_exec_props dict and set ansible_os_family: Linux in autogenerated defaults.
Ansible role tasks (benchmarks, sysbench, iperf, artifact)
projects/container_bench/toolbox/container_bench_*/tasks/main.yml (many files)
New/rewritten task flows that map exec_props → facts, set images/commands/log paths, and include OS-specific benchmark bodies (Unix/Windows).
Shared tasks — image handling
projects/container_bench/toolbox/shared_tasks/pull_image.yml, pull_single_image.yml
Refactored to support images_to_pull lists, unified per-image loop, tarfile existence checks, and consolidated load vs pull logic across OSes.
Shared benchmark bodies
projects/container_bench/toolbox/shared_tasks/{unix_default_benchmark_body.yml,windows_default_benchmark_body.yml,unix_synthetic_benchmark_body.yml,windows_synthetic_benchmark_body.yml}
Adjusted pull guards to consider images_to_pull; added new synthetic benchmark bodies for Unix/Windows with timestamping, artifact saving and cleanup.
Visualization — parsers & config
projects/container_bench/visualizations/benchmark/store/parsers.py, .../plotting/utils/config.py
Added synthetic benchmark detection/parsers, constants and artifact handling; expose metric_type and populate synthetic-specific fields for downstream rendering.
Visualization — reporting
projects/container_bench/visualizations/benchmark/plotting/{comparison_report.py,report.py}
Added synthetic benchmark table/log rendering, new style constants and card builders; integrated synthetic paths into report generation.
Tests
projects/container_bench/testing/test_container_bench.py
Updated test helpers to produce exec_props nested structure for benchmark invocation.
Removed / refactored mapping
projects/container_bench/toolbox/shared_tasks/map_variables.yml, .../container_bench_helloworld_*
Removed old long-name → short alias mapping and removed/rewrote helloworld/exec-specific defaults/tasks in favor of exec_props-based roles.
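
The exec_props migration summarized in the Toolbox API row above can be illustrated with a minimal sketch. This is not the PR's container_bench.py: the method name, the exec_props keys (taken from the CLI invocations quoted in the CI logs below) and the decorator usage are assumptions based on the walkthrough and the ansible_toolbox helpers listed later in the code-graph section.

# Hedged sketch of an exec_props-style toolbox method; not the PR's implementation.
from projects.core.library.ansible_toolbox import (
    AnsibleRole,
    AnsibleMappedParams,
    RunAnsibleRole,
)


class Container_Bench:
    @AnsibleRole("container_bench_pull_image_benchmark")
    @AnsibleMappedParams
    def pull_image_benchmark(self, exec_props: dict):
        """
        Runs the pull-image benchmark.

        exec_props is a single dict expected to carry keys such as
        binary_path, rootfull, additional_args and exec_time_path; the
        authoritative schema is the *_exec_props entry in the role's
        defaults/main/config.yml.
        """
        # locals() hands every argument, including exec_props, to the
        # Ansible role runner, which maps it onto the role defaults.
        return RunAnsibleRole(locals())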

Sequence Diagram(s)

sequenceDiagram
    participant Caller as Toolbox/API
    participant Role as Ansible Role (container_bench_*)
    participant Shared as Shared Tasks
    participant Host as Target Host

    Caller->>Role: call <benchmark>(exec_props)
    Role->>Role: set_fact exec_props → binary_path, additional_args, exec_time_path, rootfull
    Role->>Role: set_fact image(s), prepare/cleanup, benchmark_command, log paths
    alt Non-Windows
        Role->>Shared: include unix_*_benchmark_body.yml
        Shared->>Host: pull images (image_to_pull or images_to_pull)
        Host->>Host: run prepare_commands, record timestamp, execute benchmark_command, write log & timestamp, cleanup
    else Windows
        Role->>Shared: include windows_*_benchmark_body.yml
        Shared->>Host: pull images (image_to_pull or images_to_pull)
        Host->>Host: run PowerShell prepare, timestamp, execute benchmark_command, write log & timestamp, cleanup
    end
    Role-->>Caller: RunAnsibleRole result
sequenceDiagram
    participant Parser
    participant FS as Filesystem
    participant Store
    participant Viz

    Parser->>FS: look for metrics.json
    alt metrics.json present
        Parser->>Store: parse performance metrics (container_bench)
    else
        Parser->>FS: match dirname → synthetic mapping
        Parser->>FS: read benchmark log and _ts.yaml
        Parser->>Store: parse synthetic metric and timestamp
    end
    Store->>Viz: render appropriate report branch by metric_type
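
The parser branching shown in the diagram above can be sketched as follows. This is illustrative only, not the parsers.py implementation: the mapping contents, function name and return shape are assumptions; only metrics.json, the artifacts directory and the *_ts.yaml suffix come from the rest of this review.

import yaml
from pathlib import Path

SYNTHETIC_DIR_MAPPING = {          # hypothetical dirname fragment -> metric type
    "sysbench_cpu_benchmark": "sysbench_cpu",
    "iperf_net_host_benchmark": "iperf",
}


def parse_run_dir(run_dir: Path):
    # Branch 1: a metrics.json file means the regular container_bench metrics path.
    metrics_file = run_dir / "metrics.json"
    if metrics_file.exists():
        return {"metric_type": "container_bench", "metrics_file": metrics_file}

    # Branch 2: otherwise match the directory name against the synthetic-benchmark
    # mapping and read the benchmark log plus the *_ts.yaml timestamp file.
    artifacts_dir = run_dir / "artifacts"
    for fragment, metric_type in SYNTHETIC_DIR_MAPPING.items():
        if fragment not in run_dir.name or not artifacts_dir.is_dir():
            continue
        logs = sorted(artifacts_dir.glob("*.log"))
        ts_files = sorted(artifacts_dir.glob("*_ts.yaml"))
        start_time = None
        if ts_files:
            with open(ts_files[0]) as fd:
                start_time = (yaml.safe_load(fd) or {}).get("start_time")
        return {
            "metric_type": metric_type,
            "log_file": logs[0] if logs else None,
            "start_time": start_time,
        }
    return None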

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Areas requiring extra attention:

  • projects/container_bench/testing/config.yaml — many structural and matrix changes.
  • projects/container_bench/toolbox/container_bench.py — extensive public API renames and new methods.
  • projects/container_bench/visualizations/benchmark/store/parsers.py — new synthetic parsing logic and file/timestamp handling.
  • Windows flows and PowerShell tasks (container_engine.py, install.py, windows_* benchmark bodies) — path/command correctness and timing.

Possibly related PRs

Suggested labels

ok-to-test

Poem

🐇 I hopped through YAML, roles and logs,

Swapped many vars for tidy exec_props cogs.
Sysbench hums and iperf sings,
Windows naps with Start‑Sleep wings.
New docs and charts — a carrot of new things.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title '[container_benchmarks] Benchmark Implementation' is clear and directly reflects the main change: implementation of container benchmark functionality with multiple new benchmark methods, configuration updates, and documentation.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 84.00%, which meets the required threshold of 80.00%.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b4292c6 and 312dd8e.

📒 Files selected for processing (91)
  • docs/toolbox.generated/Container_Bench.artifact_add_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.commit_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.create_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.create_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.exec_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.image_build_large_build_context_benchmark.rst (2 hunks)
  • docs/toolbox.generated/Container_Bench.iperf_host_to_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.iperf_net_bridge_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.iperf_net_host_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.list_images_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.load_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.parallel_remove_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.pull_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.remove_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.remove_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.run_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.run_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.save_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.start_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.start_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_cpu_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_fileIO_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_fileIO_mount_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_memory_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_memory_read_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_memory_write_benchmark.rst (1 hunks)
  • docs/toolbox.generated/index.rst (1 hunks)
  • projects/container_bench/testing/config.yaml (6 hunks)
  • projects/container_bench/testing/config_manager.py (3 hunks)
  • projects/container_bench/testing/container_engine.py (2 hunks)
  • projects/container_bench/testing/install.py (1 hunks)
  • projects/container_bench/testing/test_container_bench.py (1 hunks)
  • projects/container_bench/toolbox/container_bench.py (1 hunks)
  • projects/container_bench/toolbox/container_bench_artifact_add_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_artifact_add_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_commit_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_commit_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_create_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_create_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_exec_benchmark/defaults/main/config.yml (0 hunks)
  • projects/container_bench/toolbox/container_bench_exec_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_exec_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_helloworld_benchmark/defaults/main/config.yml (0 hunks)
  • projects/container_bench/toolbox/container_bench_helloworld_benchmark/tasks/main.yml (0 hunks)
  • projects/container_bench/toolbox/container_bench_image_build_large_build_context_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_image_build_large_build_context_benchmark/tasks/main.yaml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_host_to_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_host_to_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_net_bridge_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_net_bridge_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_net_host_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_net_host_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_list_images_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_list_images_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_load_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_load_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_pull_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_run_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_run_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_save_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_save_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_start_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_start_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_cpu_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_cpu_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_fileio_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_fileio_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_fileio_mount_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_fileio_mount_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_memory_read_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_memory_read_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_memory_write_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_memory_write_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/map_variables.yml (0 hunks)
  • projects/container_bench/toolbox/shared_tasks/pull_image.yml (3 hunks)
  • projects/container_bench/toolbox/shared_tasks/pull_single_image.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/unix_default_benchmark_body.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/unix_synthetic_benchmark_body.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/windows_default_benchmark_body.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/windows_synthetic_benchmark_body.yml (1 hunks)
  • projects/container_bench/visualizations/benchmark/plotting/comparison_report.py (4 hunks)
  • projects/container_bench/visualizations/benchmark/plotting/report.py (3 hunks)
  • projects/container_bench/visualizations/benchmark/plotting/utils/config.py (2 hunks)
  • projects/container_bench/visualizations/benchmark/store/parsers.py (4 hunks)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Collaborator Author

Honny1 commented Nov 3, 2025

/test cont_bench-jump-ci
/cluster windows


topsail-bot bot commented Nov 3, 2025

🔴 Test of 'container_bench test test_ci' failed after 03 hours 06 minutes 59 seconds. 🔴

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Failure indicator:

/tmp/topsail_202511031762194880/000__matbenchmarking/FAILURE | MatrixBenchmark benchmark failed.
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/
Traceback (most recent call last):
  File "/opt/topsail/src/projects/container_bench/testing/test_container_bench.py", line 170, in matbench_run
    raise RuntimeError(msg)
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/

/tmp/topsail_202511031762194880/000__matbenchmarking/container_bench_podman/003__test/000__podman_artifact_add_benchmark_run_dir/008__container_bench__artifact_add_benchmark_run_metrics/FAILURE | [008__container_bench__artifact_add_benchmark_run_metrics] ./run_toolbox.py container_bench artifact_add_benchmark --binary_path=C:/Users/jrodak-topsail/podman-custom/podman-windows-amd64.exe --rootfull=False --additional_args= --exec_time_path=C:/Users/jrodak-topsail/utils/exec_time.py --> 2
/tmp/topsail_202511031762194880/000__matbenchmarking/container_bench_podman/003__test/000__podman_artifact_add_benchmark_run_dir/FAILURE | CalledProcessError: Command 'set -o errexit;set -o pipefail;set -o nounset;set -o errtrace;ARTIFACT_DIR="/tmp/topsail_202511031762194880/000__matbenchmarking/container_bench_podman/003__test/000__podman_artifact_add_benchmark_run_dir" ARTIFACT_TOOLBOX_NAME_SUFFIX="_run_metrics" ./run_toolbox.py container_bench artifact_add_benchmark --binary_path='C:/Users/jrodak-topsail/podman-custom/podman-windows-amd64.exe' --rootfull='False' --additional_args='' --exec_time_path='C:/Users/jrodak-topsail/utils/exec_time.py'' returned non-zero exit status 2.
Traceback (most recent call last):

[...]

Collaborator Author

Honny1 commented Nov 4, 2025

/test cont_bench-jump-ci
/cluster windows


topsail-bot bot commented Nov 4, 2025

🟢 Test of 'container_bench test test_ci' succeeded after 01 hours 57 minutes 31 seconds. 🟢

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Collaborator Author

Honny1 commented Nov 6, 2025

/test cont_bench-jump-ci
/cluster darwin


topsail-bot bot commented Nov 6, 2025

🔴 Test of 'container_bench test test_ci' failed after 00 hours 04 minutes 49 seconds. 🔴

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Failure indicator:

/tmp/topsail_202511061762389787/000__matbenchmarking/FAILURE | MatrixBenchmark benchmark failed.
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/
Traceback (most recent call last):
  File "/opt/topsail/src/projects/container_bench/testing/test_container_bench.py", line 170, in matbench_run
    raise RuntimeError(msg)
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/

/tmp/topsail_202511061762389787/000__matbenchmarking/container_bench_podman/000__test/000__podman_artifact_add_benchmark_run_dir/002__container_bench__artifact_add_benchmark_run_metrics/FAILURE | [002__container_bench__artifact_add_benchmark_run_metrics] ./run_toolbox.py container_bench artifact_add_benchmark --binary_path=/Users/jrodak-topsail/topsail.ci/podman-v5.6.0/usr/bin/podman --rootfull=False --additional_args= --exec_time_path=/Users/jrodak-topsail/topsail.ci/utils/exec_time.py --> 2
/tmp/topsail_202511061762389787/000__matbenchmarking/container_bench_podman/000__test/000__podman_artifact_add_benchmark_run_dir/FAILURE | CalledProcessError: Command 'set -o errexit;set -o pipefail;set -o nounset;set -o errtrace;ARTIFACT_DIR="/tmp/topsail_202511061762389787/000__matbenchmarking/container_bench_podman/000__test/000__podman_artifact_add_benchmark_run_dir" ARTIFACT_TOOLBOX_NAME_SUFFIX="_run_metrics" ./run_toolbox.py container_bench artifact_add_benchmark --binary_path='/Users/jrodak-topsail/topsail.ci/podman-v5.6.0/usr/bin/podman' --rootfull='False' --additional_args='' --exec_time_path='/Users/jrodak-topsail/topsail.ci/utils/exec_time.py'' returned non-zero exit status 2.
Traceback (most recent call last):

[...]

Collaborator Author

Honny1 commented Nov 6, 2025

/test cont_bench-jump-ci
/cluster darwin


topsail-bot bot commented Nov 6, 2025

🟢 Test of 'container_bench test test_ci' succeeded after 01 hours 10 minutes 03 seconds. 🟢

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Collaborator Author

Honny1 commented Nov 6, 2025

/test cont_bench-jump-ci
/cluster linux


topsail-bot bot commented Nov 6, 2025

🔴 Test of 'container_bench test pre_cleanup_ci' failed after 00 hours 00 minutes 05 seconds. 🔴

• Link to the test results.

• No reports index generated...

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Failure indicator: Empty. (See run.log)

Collaborator Author

Honny1 commented Nov 6, 2025

/test cont_bench-jump-ci
/cluster linux


topsail-bot bot commented Nov 6, 2025

🔴 Test of 'container_bench test test_ci' failed after 00 hours 22 minutes 50 seconds. 🔴

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Failure indicator:

/tmp/topsail_202511061762427190/000__matbenchmarking/FAILURE | MatrixBenchmark benchmark failed.
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/
Traceback (most recent call last):
  File "/opt/topsail/src/projects/container_bench/testing/test_container_bench.py", line 170, in matbench_run
    raise RuntimeError(msg)
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/

/tmp/topsail_202511061762427190/000__matbenchmarking/container_bench_podman/000__test/004__podman_sysbench_fileio_container_benchmark_run_dir/002__container_bench__sysbench_fileio_container_benchmark_run_metrics/FAILURE | [002__container_bench__sysbench_fileio_container_benchmark_run_metrics] ./run_toolbox.py container_bench sysbench_fileio_container_benchmark --binary_path=podman --rootfull=False --additional_args= --runtime crun --exec_time_path=/home/jrodak-topsail/utils/exec_time.py --> 1
/tmp/topsail_202511061762427190/000__matbenchmarking/container_bench_podman/000__test/004__podman_sysbench_fileio_container_benchmark_run_dir/FAILURE | CalledProcessError: Command 'set -o errexit;set -o pipefail;set -o nounset;set -o errtrace;ARTIFACT_DIR="/tmp/topsail_202511061762427190/000__matbenchmarking/container_bench_podman/000__test/004__podman_sysbench_fileio_container_benchmark_run_dir" ARTIFACT_TOOLBOX_NAME_SUFFIX="_run_metrics" ./run_toolbox.py container_bench sysbench_fileio_container_benchmark --binary_path='podman' --rootfull='False' --additional_args=' --runtime crun' --exec_time_path='/home/jrodak-topsail/utils/exec_time.py'' returned non-zero exit status 1.
Traceback (most recent call last):

[...]

Collaborator Author

Honny1 commented Nov 6, 2025

/test cont_bench-jump-ci
/cluster linux


topsail-bot bot commented Nov 6, 2025

🟢 Test of 'container_bench test test_ci' succeeded after 00 hours 56 minutes 24 seconds. 🟢

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Collaborator Author

Honny1 commented Nov 6, 2025

/test cont_bench-jump-ci
/cluster linux

Honny1 force-pushed the cont-benchmarks branch 2 times, most recently from f9911f1 to 035cd53, on November 6, 2025 at 15:46
Collaborator Author

Honny1 commented Nov 6, 2025

/test cont_bench-jump-ci
/cluster linux


topsail-bot bot commented Nov 6, 2025

🔴 Test of 'container_bench test test_ci' failed after 00 hours 36 minutes 14 seconds. 🔴

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Failure indicator:

/tmp/topsail_202511061762444429/000__matbenchmarking/FAILURE | MatrixBenchmark benchmark failed.
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/
Traceback (most recent call last):
  File "/opt/topsail/src/projects/container_bench/testing/test_container_bench.py", line 170, in matbench_run
    raise RuntimeError(msg)
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/

/tmp/topsail_202511061762444429/000__matbenchmarking/container_bench_podman/000__test/014__podman_pull_image_benchmark_run_dir/002__container_bench__pull_image_benchmark_run_metrics/FAILURE | [002__container_bench__pull_image_benchmark_run_metrics] ./run_toolbox.py container_bench pull_image_benchmark --binary_path=podman --rootfull=False --additional_args= --runtime crun --exec_time_path=/home/jrodak-topsail/utils/exec_time.py --> 2
/tmp/topsail_202511061762444429/000__matbenchmarking/container_bench_podman/000__test/014__podman_pull_image_benchmark_run_dir/FAILURE | CalledProcessError: Command 'set -o errexit;set -o pipefail;set -o nounset;set -o errtrace;ARTIFACT_DIR="/tmp/topsail_202511061762444429/000__matbenchmarking/container_bench_podman/000__test/014__podman_pull_image_benchmark_run_dir" ARTIFACT_TOOLBOX_NAME_SUFFIX="_run_metrics" ./run_toolbox.py container_bench pull_image_benchmark --binary_path='podman' --rootfull='False' --additional_args=' --runtime crun' --exec_time_path='/home/jrodak-topsail/utils/exec_time.py'' returned non-zero exit status 2.
Traceback (most recent call last):

[...]

Collaborator Author

Honny1 commented Nov 10, 2025

/test cont_bench-jump-ci
/cluster windows


topsail-bot bot commented Nov 10, 2025

🔴 Test of 'container_bench test test_ci' failed after 00 hours 01 minutes 48 seconds. 🔴

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Failure indicator:

/tmp/topsail_202511101762767791/000__matbenchmarking/FAILURE | MatrixBenchmark benchmark failed.
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/
Traceback (most recent call last):
  File "/opt/topsail/src/projects/container_bench/testing/test_container_bench.py", line 170, in matbench_run
    raise RuntimeError(msg)
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/

/tmp/topsail_202511101762767791/000__matbenchmarking/container_bench_podman/000__test/000__podman_iperf_host_to_container_benchmark_run_dir/002__container_bench__iperf_host_to_container_benchmark_run_metrics/FAILURE | [002__container_bench__iperf_host_to_container_benchmark_run_metrics] ./run_toolbox.py container_bench iperf_host_to_container_benchmark --binary_path=podman --rootfull=False --additional_args= --exec_time_path=C:/Users/jrodak-topsail/utils/exec_time.py --> 2
/tmp/topsail_202511101762767791/000__matbenchmarking/container_bench_podman/000__test/000__podman_iperf_host_to_container_benchmark_run_dir/003__remote__retrieve/FAILURE | [003__remote__retrieve] ./run_toolbox.py remote retrieve --path=/tmp/topsail_202511101762767791/000__matbenchmarking/container_bench_podman/000__test/000__podman_iperf_host_to_container_benchmark_run_dir --dest=/tmp/topsail_202511101762767791/000__matbenchmarking/container_bench_podman/000__test/000__podman_iperf_host_to_container_benchmark_run_dir --> 2
/tmp/topsail_202511101762767791/000__matbenchmarking/container_bench_podman/000__test/000__podman_iperf_host_to_container_benchmark_run_dir/FAILURE | CalledProcessError: Command 'set -o errexit;set -o pipefail;set -o nounset;set -o errtrace;ARTIFACT_DIR="/tmp/topsail_202511101762767791/000__matbenchmarking/container_bench_podman/000__test/000__podman_iperf_host_to_container_benchmark_run_dir" ARTIFACT_TOOLBOX_NAME_SUFFIX="_run_metrics" ./run_toolbox.py container_bench iperf_host_to_container_benchmark --binary_path='podman' --rootfull='False' --additional_args='' --exec_time_path='C:/Users/jrodak-topsail/utils/exec_time.py'' returned non-zero exit status 2.

[...]

Collaborator Author

Honny1 commented Nov 10, 2025

/test cont_bench-jump-ci
/cluster windows

Collaborator Author

Honny1 commented Nov 10, 2025

/test cont_bench-jump-ci
/cluster windows

1 similar comment
Collaborator Author

Honny1 commented Nov 10, 2025

/test cont_bench-jump-ci
/cluster windows


topsail-bot bot commented Nov 10, 2025

🔴 Test of 'container_bench test test_ci' failed after 01 hours 57 minutes 20 seconds. 🔴

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Failure indicator:

/tmp/topsail_202511101762780909/000__matbenchmarking/FAILURE | MatrixBenchmark benchmark failed.
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/
Traceback (most recent call last):
  File "/opt/topsail/src/projects/container_bench/testing/test_container_bench.py", line 170, in matbench_run
    raise RuntimeError(msg)
RuntimeError: _run_test_matbenchmarking: matbench benchmark failed :/

/tmp/topsail_202511101762780909/000__matbenchmarking/container_bench_docker/000__test/010__docker_iperf_host_to_container_benchmark_run_dir/002__container_bench__iperf_host_to_container_benchmark_run_metrics/FAILURE | [002__container_bench__iperf_host_to_container_benchmark_run_metrics] ./run_toolbox.py container_bench iperf_host_to_container_benchmark --binary_path=docker --rootfull=False --additional_args= --exec_time_path=C:/Users/jrodak-topsail/utils/exec_time.py --> 2
/tmp/topsail_202511101762780909/000__matbenchmarking/container_bench_docker/000__test/010__docker_iperf_host_to_container_benchmark_run_dir/003__remote__retrieve/FAILURE | [003__remote__retrieve] ./run_toolbox.py remote retrieve --path=/tmp/topsail_202511101762780909/000__matbenchmarking/container_bench_docker/000__test/010__docker_iperf_host_to_container_benchmark_run_dir --dest=/tmp/topsail_202511101762780909/000__matbenchmarking/container_bench_docker/000__test/010__docker_iperf_host_to_container_benchmark_run_dir --> 2
/tmp/topsail_202511101762780909/000__matbenchmarking/container_bench_docker/000__test/010__docker_iperf_host_to_container_benchmark_run_dir/FAILURE | CalledProcessError: Command 'set -o errexit;set -o pipefail;set -o nounset;set -o errtrace;ARTIFACT_DIR="/tmp/topsail_202511101762780909/000__matbenchmarking/container_bench_docker/000__test/010__docker_iperf_host_to_container_benchmark_run_dir" ARTIFACT_TOOLBOX_NAME_SUFFIX="_run_metrics" ./run_toolbox.py container_bench iperf_host_to_container_benchmark --binary_path='docker' --rootfull='False' --additional_args='' --exec_time_path='C:/Users/jrodak-topsail/utils/exec_time.py'' returned non-zero exit status 2.

[...]

Collaborator Author

Honny1 commented Nov 10, 2025

/test cont_bench-jump-ci
/cluster windows


topsail-bot bot commented Nov 10, 2025

🟢 Test of 'container_bench test test_ci' succeeded after 02 hours 55 minutes 39 seconds. 🟢

• Link to the test results.

• Link to the reports index.

Test configuration:

PR_POSITIONAL_ARGS: cont_bench-jump-ci
PR_POSITIONAL_ARG_0: cont_bench-jump-ci

Honny1 marked this pull request as ready for review on November 11, 2025 at 07:49
openshift-ci bot removed the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) on Nov 11, 2025

coderabbitai bot left a comment


Actionable comments posted: 7

♻️ Duplicate comments (8)
docs/toolbox.generated/Container_Bench.start_benchmark.rst (1)

1-39: Naming distinction requires verification.

This file documents start_benchmark while the prior file documents start_container_benchmark. Verify that the split is intentional and reflects two separate benchmarks rather than a naming inconsistency.

docs/toolbox.generated/Container_Bench.sysbench_cpu_benchmark.rst (1)

1-39: Same auto-generated file concern as Container_Bench.start_container_benchmark.rst.

projects/container_bench/toolbox/container_bench_sysbench_fileio_mount_benchmark/tasks/main.yml (1)

2-10: Same assert with when: false code smell as container_bench_commit_container_benchmark/tasks/main.yml. See prior review comment.

docs/toolbox.generated/Container_Bench.sysbench_fileIO_container_benchmark.rst (1)

1-39: Possible filename/title naming inconsistency.

The filename uses sysbench_fileIO_container_benchmark (camelCase "fileIO"), but the document title on line 9 uses sysbench_fileio_container_benchmark (lowercase "fileio"). Also note that file 6 uses container_bench_sysbench_fileio_mount_benchmark (lowercase).

Verify consistency: should this be fileio (lowercase) or fileIO (camelCase) across filenames, task names, and documentation titles? Inconsistency can lead to user confusion and tooling issues.

docs/toolbox.generated/Container_Bench.remove_image_benchmark.rst (1)

1-39: Same concern about auto-generated files in version control as the prior documentation files.

projects/container_bench/toolbox/container_bench_save_image_benchmark/tasks/main.yml (2)

2-9: Same validation pattern issue as load_image_benchmark.

This file has the same non-executing assertion with when: false. The earlier comment about verification applies here as well.


23-31: Verify cleanup_commands usage in shared tasks.

projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml (1)

2-9: Same validation assertion issue as previous benchmark files.

🧹 Nitpick comments (21)
projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/tasks/main.yml (1)

28-38: Consider quoting variables in shell commands for defense-in-depth.

Lines 31 and 37 interpolate {{ binary_path }} and {{ additional_args }} without quoting. While these are trusted Ansible variables, adding quotes (e.g., "{{ binary_path }}") would provide defense-in-depth against unintended word splitting if these variables ever contain spaces or special characters. This is especially relevant for the additional_args parameter.

Example: Change line 31 from:

{{ binary_path }} {{ additional_args }} rmi -f ...

to:

"{{ binary_path }}" {{ additional_args }} rmi -f ...

This is a defensive coding practice rather than a critical issue, since Ansible controls these variables.

projects/container_bench/toolbox/container_bench_exec_container_benchmark/tasks/main.yml (1)

18-20: Use the image_to_pull variable to avoid duplication.

Line 20 hardcodes the image reference that's already defined in image_to_pull on Line 18. Consider using the variable for consistency and maintainability.

Apply this change to use the variable:

     prepare_commands: |
-      {{ binary_path }} {{ additional_args }} run --name benchmark_exec -d registry.fedoraproject.org/fedora:42 sleep infinity
+      {{ binary_path }} {{ additional_args }} run --name benchmark_exec -d {{ image_to_pull }} sleep infinity
projects/container_bench/visualizations/benchmark/store/parsers.py (1)

192-196: Narrow the caught exception types

Catching a bare Exception risks hiding bugs unrelated to missing files or YAML parsing issues. Please scope the handler to the concrete cases we expect (e.g. OSError, yaml.YAMLError) so genuine regressions surface.

-    except Exception:
+    except (OSError, yaml.YAMLError):
         return None
projects/container_bench/visualizations/benchmark/plotting/comparison_report.py (2)

62-67: Consider floating-point comparison tolerance.

The exact equality comparison (read_value == best_read) may fail due to floating-point precision issues if values are computed rather than directly assigned. If values originate from calculations, consider using a small epsilon tolerance for comparison.
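
A tolerance-based check along these lines would address the concern; the function name is illustrative and the tolerances are placeholders to be tuned for the metric's scale, not values from the PR.

import math


def is_best_value(read_value: float, best_read: float) -> bool:
    # math.isclose treats values within a relative/absolute tolerance as equal,
    # avoiding the false negatives that an exact `read_value == best_read`
    # comparison can produce after floating-point arithmetic.
    return math.isclose(read_value, best_read, rel_tol=1e-9, abs_tol=1e-9)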


347-347: Remove unused enumerate index.

The loop index from enumerate(configurations) is not used. Consider using for config in configurations: instead.

Apply this diff:

-    for _, config in enumerate(configurations):
+    for config in configurations:
projects/container_bench/testing/container_engine.py (2)

261-263: Document or verify the 30-second sleep duration.

The hardcoded 30-second sleep is a workaround for Windows SSH session termination issues. Consider:

  1. Is 30 seconds the minimum required duration, or could it be shorter?
  2. Could a polling approach verify the machine has stopped rather than using a fixed delay?
  3. Document why 30 seconds specifically (e.g., based on testing, Windows process cleanup time, etc.).

A fixed sleep may be too short under heavy load or unnecessarily long on faster systems.


340-342: Document or verify the 30-second sleep duration.

Same concern as the podman machine stop: the 30-second hardcoded sleep could benefit from:

  1. Justification for the specific duration
  2. Verification that Docker Desktop has actually stopped (polling approach)
  3. Documentation of testing or Windows-specific behavior that necessitates this delay

Consider whether both stops require the same duration or if they have different requirements.
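
As a sketch of the polling alternative (not the container_engine.py implementation): run_command stands in for whatever SSH helper the module uses, and both the status command and the "Running" marker are assumptions to be replaced with the engine's real status output.

import time


def wait_until_stopped(run_command, status_command: str,
                       timeout: float = 60.0, interval: float = 5.0) -> bool:
    # Polls the engine status on the remote host until it no longer reports
    # "Running", or until the timeout expires; returns True once stopped.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = run_command(status_command)   # assumed to return an object with .stdout
        if "Running" not in result.stdout:
            return True
        time.sleep(interval)
    return False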

projects/container_bench/toolbox/container_bench_sysbench_memory_write_benchmark/tasks/main.yml (1)

3-10: Review comment overstates the validation gap—variables are validated downstream, though indirectly.

The assertion with when: false is confirmed dead code, but this doesn't create an unvalidated variable problem:

  1. map_variables.yml validates implicitly: Accessing vars[benchmark_role_name + '_binary_path'] will fail with a Jinja2 error if the variable is undefined—this happens before any benchmark execution.

  2. unix/windows benchmark body files validate explicitly: Both include explicit fail tasks that check prepare_commands, benchmark_command, cleanup_commands, and log_file_name are defined. These facts are set by the set_fact task, ensuring validation.

  3. Defaults provide fallback values: All four required variables have default entries in defaults/main/config.yml, even if some are empty. The variables are technically "defined" even if not properly configured by the user.

The actual concern is not about missing validation but about clarity: The assertion task is dead code that misleads readers into thinking validation occurs at task execution time. Removing the when: false and the assertion (relying on the downstream implicit and explicit validation) would be clearer, or converting it to an explicit early-fail if early validation is desired.

The cleanup_commands image name quoting is acceptable as-is.

projects/container_bench/toolbox/container_bench_artifact_add_benchmark/tasks/main.yml (1)

8-15: Assert with when: false is unusual; consider moving validation to runtime.

The assertion block never executes (when: false), so it doesn't validate that required variables are actually set. If these variables should be mandatory, either move the assertion to a conditional block that runs (e.g., when: ansible_os_family is defined) or rely on the map_variables.yml task to fail if variables are missing.

Consider one of these alternatives:

Option 1: Remove the assertion if variables are guaranteed by map_variables.yml

-- name: Validate required variables are defined
-  assert:
-    that:
-      - container_bench_artifact_add_benchmark_exec_time_path is defined
-      - container_bench_artifact_add_benchmark_binary_path is defined
-      - container_bench_artifact_add_benchmark_rootfull is defined
-      - container_bench_artifact_add_benchmark_additional_args is defined
-  when: false  # This task never runs but validates variable definitions

Option 2: Validate only when running (remove when: false)

- name: Validate required variables are defined
  assert:
    that:
      - container_bench_artifact_add_benchmark_exec_time_path is defined
      - container_bench_artifact_add_benchmark_binary_path is defined
      - container_bench_artifact_add_benchmark_rootfull is defined
      - container_bench_artifact_add_benchmark_additional_args is defined
-  when: false  # This task never runs but validates variable definitions
projects/container_bench/toolbox/container_bench_remove_image_benchmark/tasks/main.yml (1)

2-9: Assert with when: false pattern repeated here; consider runtime validation.

Similar to the artifact_add_benchmark file, the assertion block never executes. See the recommendation in the artifact_add_benchmark review for handling this pattern.

docs/toolbox.generated/Container_Bench.start_container_benchmark.rst (1)

1-39: Auto-generated files should not be committed to version control.

Lines 4-5 indicate this file is auto-generated via repo generate_toolbox_rst_documentation. Auto-generated files typically should not be committed to version control; instead, they should be generated as part of the CI/build pipeline and potentially cached in artifacts. Including them in the PR makes diffs harder to review and increases merge conflicts.

Consider whether these documentation files should be excluded from version control (e.g., via .gitignore or similar) and regenerated during CI runs instead.

projects/container_bench/toolbox/container_bench_create_container_benchmark/tasks/main.yml (1)

2-9: The when: false validation pattern requires clarification.

The validation task uses when: false to prevent execution while still checking variable definitions. This is clever but non-intuitive. Consider adding an inline comment or using a more standard pattern (e.g., using block with early validation). However, if this pattern is used consistently across the codebase, it's acceptable.

projects/container_bench/toolbox/shared_tasks/unix_synthetic_benchmark_body.yml (2)

43-47: No separator between stdout and stderr in saved output.

Line 45-46 combines benchmark_output.stdout and benchmark_output.stderr without a separator. For benchmarks producing both stdout and stderr, this could make parsing ambiguous. Consider adding a header or separator:

  - name: Save benchmark output to file
    copy:
      content: |
-       {{ benchmark_output.stdout }}{{ benchmark_output.stderr }}
+       === STDOUT ===
+       {{ benchmark_output.stdout }}
+       === STDERR ===
+       {{ benchmark_output.stderr }}
      dest: "{{ artifact_extra_logs_dir }}/artifacts/{{ log_file_name }}"
      mode: '0644'

This improves debuggability and aligns with standard logging practices.


49-54: Timestamp file naming uses string replacement on log_file_name.

Line 53 relies on log_file_name containing .log to create the timestamp filename. If a task sets log_file_name without the .log extension, the replace filter simply leaves the name unchanged and the timestamp file ends up with an unexpected name. Consider a more robust approach:

  - name: Save benchmark timestamp
    copy:
      content: |
        start_time: {{ benchmark_start_time.stdout | trim }}
-     dest: "{{ artifact_extra_logs_dir }}/artifacts/{{ log_file_name | replace('.log', '_ts.yaml') }}"
+     dest: "{{ artifact_extra_logs_dir }}/artifacts/{{ log_file_name }}_ts.yaml"
      mode: '0644'

This approach appends _ts.yaml regardless of the log_file_name format.

projects/container_bench/toolbox/shared_tasks/pull_image.yml (1)

22-30: Clarify precedence when both image sources are provided.

Line 29 silently prioritizes image_name over images_to_pull if both are defined. This should be documented explicitly in the usage comment (lines 3-11) to prevent confusion. Either enforce mutual exclusivity or document the precedence clearly.

Additionally, the logic at line 29 could be simplified for readability:

  • Current: images_list: "{{ [image_name] if (image_name is defined and image_name | length > 0) else images_to_pull }}"
  • Consider: using a conditional include or explicit error if both are provided
projects/container_bench/toolbox/container_bench_sysbench_fileio_container_benchmark/tasks/main.yml (1)

2-10: Clarify intent of no-op validation task.

The validation task uses when: false, meaning it never executes at runtime. While this pattern may serve static analysis tools, it creates confusion about actual runtime validation. The comment suggests it validates variable definitions, but no validation occurs.

Options:

  1. Remove the task if it's truly not needed
  2. Remove when: false to enable actual runtime validation
  3. Add a comment explaining this is for static analysis only

Consider which approach aligns with the broader benchmark infrastructure.

projects/container_bench/toolbox/container_bench_start_container_benchmark/tasks/main.yml (1)

2-9: Clarify intent of no-op validation task (same pattern as container_bench_sysbench_fileio_container_benchmark).

projects/container_bench/toolbox/container_bench_list_images_benchmark/tasks/main.yml (1)

2-9: Clarify intent of no-op validation task (consistent pattern across all benchmark tasks).

This is the third occurrence of the when: false validation pattern. Consider consolidating the approach across all benchmark tasks.

projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml (1)

44-49: TLS handling asymmetry between podman and other container runtimes.

Only podman gets the --tls-verify=false flag; Docker and other runtimes pull without it. This assumes Docker/docker-compatible runtimes will succeed with default TLS validation, which may fail with self-signed certificates.

Consider making TLS verification a configurable parameter, similar to additional_args. This would allow flexibility for environments with non-standard certificate setups:

container_bench_pull_image_benchmark_tls_verify: true  # In defaults

Then use:

{% set tls_flag = '--tls-verify=false' if not container_bench_pull_image_benchmark_tls_verify else '--tls-verify=true' %}

This makes the behavior explicit and configurable.

projects/container_bench/toolbox/container_bench_remove_container_benchmark/tasks/main.yml (1)

2-9: Clarify the purpose of the non-executing validation task.

The validation task has when: false, which means it never executes. While the comment indicates this is intentional for validating variable definitions (likely for IDE/linting support), this pattern may be confusing to maintainers. Consider documenting this pattern in a shared location or using a more explicit approach.

If this is a common pattern across multiple benchmark tasks, consider adding a comment referencing shared documentation that explains this approach.

projects/container_bench/testing/config.yaml (1)

354-371: Document commented matbenchmarking fields.

Lines 368-371 show several matbenchmarking iterable test fields commented out (test.podman.repo_version, test.podman.machine_provider, test.podman.runtime). Consider adding a comment explaining why these are disabled and under what conditions they should be re-enabled.

This would help future maintainers understand the test matrix configuration decisions.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d18c72c and e500823.

📒 Files selected for processing (83)
  • docs/toolbox.generated/Container_Bench.artifact_add_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.commit_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.create_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.create_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.exec_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.iperf_host_to_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.iperf_net_bridge_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.iperf_net_host_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.list_images_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.load_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.parallel_remove_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.pull_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.remove_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.remove_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.run_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.run_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.save_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.start_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.start_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_cpu_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_fileIO_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_fileIO_mount_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_memory_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_memory_read_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.sysbench_memory_write_benchmark.rst (1 hunks)
  • docs/toolbox.generated/index.rst (1 hunks)
  • projects/container_bench/testing/config.yaml (7 hunks)
  • projects/container_bench/testing/config_manager.py (2 hunks)
  • projects/container_bench/testing/container_engine.py (2 hunks)
  • projects/container_bench/testing/install.py (1 hunks)
  • projects/container_bench/toolbox/container_bench.py (2 hunks)
  • projects/container_bench/toolbox/container_bench_artifact_add_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_artifact_add_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_commit_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_commit_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_create_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_create_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_exec_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_exec_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_host_to_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_host_to_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_net_bridge_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_net_bridge_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_net_host_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_net_host_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_list_images_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_list_images_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_load_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_load_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_pull_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_run_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_run_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_save_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_save_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_start_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_start_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_cpu_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_cpu_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_fileio_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_fileio_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_fileio_mount_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_fileio_mount_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_memory_read_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_memory_read_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_memory_write_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_sysbench_memory_write_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/pull_image.yml (3 hunks)
  • projects/container_bench/toolbox/shared_tasks/pull_single_image.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/unix_default_benchmark_body.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/unix_synthetic_benchmark_body.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/windows_default_benchmark_body.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/windows_synthetic_benchmark_body.yml (1 hunks)
  • projects/container_bench/visualizations/benchmark/plotting/comparison_report.py (4 hunks)
  • projects/container_bench/visualizations/benchmark/plotting/report.py (3 hunks)
  • projects/container_bench/visualizations/benchmark/plotting/utils/config.py (1 hunks)
  • projects/container_bench/visualizations/benchmark/store/parsers.py (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
projects/container_bench/testing/install.py (2)
projects/container_bench/testing/config_manager.py (3)
  • SystemType (5-8)
  • ConfigManager (82-263)
  • get_choco_config (203-208)
projects/container_bench/testing/remote_access.py (1)
  • run_with_ansible_ssh_conf (86-143)
projects/container_bench/testing/container_engine.py (1)
projects/container_bench/testing/config_manager.py (2)
  • ConfigManager (82-263)
  • is_windows (92-93)
projects/container_bench/toolbox/container_bench.py (2)
projects/core/library/ansible_toolbox.py (3)
  • AnsibleRole (60-85)
  • AnsibleMappedParams (88-90)
  • RunAnsibleRole (110-369)
projects/container_bench/testing/container_engine.py (1)
  • additional_args (110-117)
projects/container_bench/visualizations/benchmark/plotting/report.py (2)
projects/container_bench/visualizations/benchmark/plotting/utils/shared.py (1)
  • create_summary_info_card (155-162)
projects/matrix_benchmarking/visualizations/helpers/plotting/html.py (1)
  • info_card (6-34)
projects/container_bench/visualizations/benchmark/plotting/comparison_report.py (2)
projects/container_bench/visualizations/benchmark/plotting/utils/config.py (1)
  • generate_display_config_label (318-337)
projects/container_bench/visualizations/benchmark/plotting/utils/shared.py (2)
  • create_config_cell (70-75)
  • create_table_header (65-67)
🪛 Ruff (0.14.4)
projects/container_bench/visualizations/benchmark/store/parsers.py

169-169: Consider moving this statement to an else block (TRY300)

195-195: Do not catch blind exception: Exception (BLE001)
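
For reference, the two findings usually describe the same try/except shape. A generic before/after sketch (illustrative only, not the actual parsers.py code):

# Before: the success-path return sits inside try (TRY300) and the handler catches everything (BLE001).
def parse_duration(path):
    try:
        with open(path) as f:
            value = float(f.read().strip())
        return value
    except Exception:
        return None

# After: narrow the exceptions to what the block can actually raise,
# and move the success path into an else block.
def parse_duration_fixed(path):
    try:
        with open(path) as f:
            value = float(f.read().strip())
    except (OSError, ValueError):
        return None
    else:
        return value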

@Honny1 Honny1 force-pushed the cont-benchmarks branch 2 times, most recently from e8bfd11 to b4292c6 on November 11, 2025 at 16:23
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

♻️ Duplicate comments (1)
projects/container_bench/toolbox/container_bench_create_container_benchmark/tasks/main.yml (1)

16-22: ⚠️ Unresolved: Missing required log_file_name variable (duplicate of prior critical issue).

A previous review flagged that the set_fact block is missing the log_file_name variable, which is required by the downstream unix_default_benchmark_body.yml and windows_default_benchmark_body.yml tasks. This critical issue remains unresolved in the current code.

Add log_file_name to the set_fact block:

 - name: Set common commands (shared across all platforms)
   set_fact:
     image_to_pull: "quay.io/podman/hello:latest"
     prepare_commands: "" # Image is pulled using shared pull_image task below
     cleanup_commands: "" # Cleanup handled by shared tasks
+    log_file_name: "create_container.log"
     benchmark_command: |
       {{ binary_path }} {{ additional_args }} create quay.io/podman/hello:latest podman_hello_world

Verify that the filename create_container.log aligns with the naming conventions used elsewhere in the benchmark suite.

🧹 Nitpick comments (2)
projects/container_bench/toolbox/container_bench_load_image_benchmark/tasks/main.yml (1)

2-9: Validation task with when: false is unusual but acceptable if documented.

The validation task is configured to never run (when: false). According to the comment, this is intended for static validation only. While this pattern works, it's unconventional. Ensure this aligns with other benchmarks in the codebase and that the pattern is documented or explained for future maintainers.

Consider whether this pattern should be:

  • Applied consistently across all benchmark roles for uniformity
  • Replaced with a more conventional pre-flight check that actually runs
  • Retained with explicit documentation in a style guide or README

This is a minor note assuming the pattern is already established elsewhere in the codebase.
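
For illustration, a minimal sketch of the pattern in question, assuming the same assert-style inputs the other benchmark roles declare (the variable names here are only an example):

# Variable names follow the role naming convention; shown here for illustration only.
- name: Validate required variables (static-analysis aid, intentionally never executed)
  assert:
    that:
    - container_bench_load_image_benchmark_exec_time_path is defined
    - container_bench_load_image_benchmark_binary_path is defined
    - container_bench_load_image_benchmark_rootfull is defined
    - container_bench_load_image_benchmark_additional_args is defined
  when: false  # skipped at runtime; documents the role's inputs for the linter

A pre-flight check that actually runs would simply drop the when: false line.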

projects/container_bench/toolbox/container_bench.py (1)

22-22: Consider improving docstring consistency.

For better consistency with other benchmark descriptions, consider these minor refinements:

  • Line 22: "Runs the container create benchmark with the given runtime" (instead of "create benchmark")
  • Line 41: "Runs the container start benchmark with the given runtime" (instead of "start benchmark")
  • Line 383: "Runs a container benchmark with the given runtime" (to avoid repetition of "run")

These changes would align the descriptions with the more explicit naming pattern used in other benchmarks.

Also applies to: 41-41, 383-383

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e500823 and b4292c6.

📒 Files selected for processing (53)
  • docs/toolbox.generated/Container_Bench.commit_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.create_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.create_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.exec_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.iperf_host_to_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.iperf_net_host_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.list_images_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.load_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.parallel_remove_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.pull_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.remove_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.remove_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.run_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.run_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.save_image_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.start_benchmark.rst (1 hunks)
  • docs/toolbox.generated/Container_Bench.start_container_benchmark.rst (1 hunks)
  • docs/toolbox.generated/index.rst (1 hunks)
  • projects/container_bench/testing/config.yaml (7 hunks)
  • projects/container_bench/testing/container_engine.py (2 hunks)
  • projects/container_bench/toolbox/container_bench.py (2 hunks)
  • projects/container_bench/toolbox/container_bench_commit_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_commit_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_create_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_create_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_exec_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_exec_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_host_to_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_net_bridge_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_iperf_net_host_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_list_images_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_list_images_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_load_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_load_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_pull_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_remove_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_run_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_run_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_save_image_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_save_image_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_start_container_benchmark/defaults/main/config.yml (1 hunks)
  • projects/container_bench/toolbox/container_bench_start_container_benchmark/tasks/main.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/pull_image.yml (3 hunks)
  • projects/container_bench/toolbox/shared_tasks/pull_single_image.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/unix_default_benchmark_body.yml (1 hunks)
  • projects/container_bench/toolbox/shared_tasks/windows_default_benchmark_body.yml (1 hunks)
  • projects/container_bench/visualizations/benchmark/plotting/utils/config.py (2 hunks)
✅ Files skipped from review due to trivial changes (4)
  • docs/toolbox.generated/Container_Bench.iperf_net_host_benchmark.rst
  • projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml
  • docs/toolbox.generated/Container_Bench.start_container_benchmark.rst
  • docs/toolbox.generated/Container_Bench.iperf_host_to_container_benchmark.rst
🚧 Files skipped from review as they are similar to previous changes (35)
  • projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/tasks/main.yml
  • projects/container_bench/toolbox/container_bench_save_image_benchmark/defaults/main/config.yml
  • projects/container_bench/toolbox/container_bench_start_container_benchmark/defaults/main/config.yml
  • docs/toolbox.generated/Container_Bench.pull_image_benchmark.rst
  • docs/toolbox.generated/Container_Bench.create_benchmark.rst
  • projects/container_bench/toolbox/container_bench_save_image_benchmark/tasks/main.yml
  • docs/toolbox.generated/Container_Bench.start_benchmark.rst
  • projects/container_bench/toolbox/container_bench_run_container_benchmark/defaults/main/config.yml
  • docs/toolbox.generated/Container_Bench.parallel_remove_image_benchmark.rst
  • projects/container_bench/toolbox/container_bench_create_container_benchmark/defaults/main/config.yml
  • projects/container_bench/toolbox/shared_tasks/windows_default_benchmark_body.yml
  • projects/container_bench/toolbox/container_bench_start_container_benchmark/tasks/main.yml
  • docs/toolbox.generated/Container_Bench.run_container_benchmark.rst
  • projects/container_bench/toolbox/container_bench_exec_container_benchmark/defaults/main/config.yml
  • projects/container_bench/toolbox/shared_tasks/pull_image.yml
  • projects/container_bench/toolbox/container_bench_remove_image_benchmark/tasks/main.yml
  • docs/toolbox.generated/Container_Bench.load_image_benchmark.rst
  • projects/container_bench/visualizations/benchmark/plotting/utils/config.py
  • projects/container_bench/toolbox/container_bench_remove_container_benchmark/defaults/main/config.yml
  • projects/container_bench/testing/container_engine.py
  • docs/toolbox.generated/Container_Bench.remove_image_benchmark.rst
  • projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/defaults/main/config.yml
  • projects/container_bench/toolbox/container_bench_commit_container_benchmark/tasks/main.yml
  • docs/toolbox.generated/Container_Bench.commit_container_benchmark.rst
  • projects/container_bench/toolbox/container_bench_iperf_host_to_container_benchmark/tasks/main.yml
  • docs/toolbox.generated/Container_Bench.remove_container_benchmark.rst
  • projects/container_bench/toolbox/container_bench_remove_image_benchmark/defaults/main/config.yml
  • projects/container_bench/toolbox/container_bench_list_images_benchmark/tasks/main.yml
  • projects/container_bench/toolbox/container_bench_iperf_net_host_benchmark/tasks/main.yml
  • docs/toolbox.generated/Container_Bench.save_image_benchmark.rst
  • docs/toolbox.generated/Container_Bench.create_container_benchmark.rst
  • projects/container_bench/toolbox/container_bench_remove_container_benchmark/tasks/main.yml
  • docs/toolbox.generated/Container_Bench.list_images_benchmark.rst
  • projects/container_bench/toolbox/container_bench_iperf_net_bridge_benchmark/tasks/main.yml
  • projects/container_bench/toolbox/shared_tasks/unix_default_benchmark_body.yml
🧰 Additional context used
🧬 Code graph analysis (1)
projects/container_bench/toolbox/container_bench.py (2)
projects/core/library/ansible_toolbox.py (3)
  • AnsibleRole (60-85)
  • AnsibleMappedParams (88-90)
  • RunAnsibleRole (110-369)
projects/container_bench/testing/container_engine.py (1)
  • additional_args (110-117)
🔇 Additional comments (28)
projects/container_bench/toolbox/container_bench_list_images_benchmark/defaults/main/config.yml (3)

1-1: Incomplete auto-generation header comment.

The trailing ellipsis "..." in the header comment suggests the message may be truncated or incomplete.

Please verify if this is the intended header text or if it should be completed.


8-9: Mandatory parameters lack default values—verify this is intentional.

Lines 8-9 and 12-13 define mandatory parameters (exec_time_path and binary_path) but leave them empty without defaults. This can cause issues if:

  • The playbook/tasks fail silently when these variables are unset
  • Users are unaware these must be provided via inventory or vars

Confirm whether this design pattern is intentional and whether the related documentation (referenced in the AI summary) adequately guides users to set these values.

Also applies to: 12-13


22-22: Verify ansible_os_family hardcoding with Windows support.

Per the PR summary, Windows support is being added to the container benchmarks. However, ansible_os_family: Linux is hardcoded as a default. This may need to be either:

  • Set conditionally based on the target OS, or
  • Removed and left for the role to detect/set at runtime

Verify this aligns with the intended Windows/multi-OS support strategy.

projects/container_bench/testing/config.yaml (5)

157-157: Verify the intent behind reducing Podman machine CPUs from 12 to 8.

This change may impact benchmark consistency and reproducibility. Confirm whether this is intentional for resource optimization or a mistake, and ensure it doesn't skew comparative results across different test runs or environments.


263-267: Clarify why artifact_add_benchmark supports only Podman.

This is the only benchmark in the new suite that doesn't support both Podman and Docker. If this is intentional, consider adding a comment explaining the limitation. If it should support Docker, update the configuration accordingly.


368-371: Clarify why multiple matbenchmarking iterable_test_fields are commented out.

Lines 368–371 show that test.podman.repo_version, test.podman.rootful, test.podman.machine_provider, and test.podman.runtime are commented out. This significantly reduces the test matrix dimensions. Confirm whether this is intentional (e.g., to speed up CI) or accidental, and document the rationale.


185-267: Benchmark definitions are well-structured and consistent.

The new benchmark suite follows a consistent pattern with runs: 1 and explicit supported_container_engines declarations. The definitions are clear and align with the expanded container image set. All benchmarks appear to be properly referenced in the test configuration.


71-74: Chocolatey configuration for Windows support looks appropriate.

The new choco section follows the same structure as existing package manager configurations (brew, dnf), making it a natural extension for Windows support. The inclusion of podman-cli as a dependency aligns with the PR's Windows enablement objective.

projects/container_bench/toolbox/shared_tasks/pull_single_image.yml (1)

49-49: Unquoted current_image_name in pull commands poses similar risk.

Lines 49 and 53 use {{ current_image_name }} in shell/win_shell commands without quoting. While image names are less likely to contain spaces than file paths, container registries allow special characters in image names (colons, slashes for namespaces, etc.). If current_image_name contains spaces or shell metacharacters, the pull command will fail or execute unexpectedly.

Apply this fix:

 - name: Pull image from registry (Unix/Linux/macOS)
   become: "{{ rootfull | default(omit) }}"
-  shell: "{{ binary_path }} {{ additional_args | default('') }} pull {{ current_image_name }}"
+  shell: "{{ binary_path }} {{ additional_args | default('') }} pull {{ current_image_name | quote }}"
   when: ansible_os_family != 'Windows' and not tarfile_exists

 - name: Pull image from registry (Windows)
-  ansible.windows.win_shell: "{{ binary_path }} {{ additional_args | default('') }} pull {{ current_image_name }}"
+  ansible.windows.win_shell: "{{ binary_path }} {{ additional_args | default('') }} pull {{ current_image_name | quote }}"
   when: ansible_os_family == 'Windows' and not tarfile_exists

Also applies to: 53-53

projects/container_bench/toolbox/container_bench_commit_container_benchmark/defaults/main/config.yml (2)

1-3: Auto-generated file structure is correct.

The header, source component reference, and generation command are properly documented per Ansible toolbox conventions.


20-22: No changes required—ansible_os_family default does not block Windows support.

The concern is based on a misunderstanding of Ansible's behavior. When the role executes on Windows, Ansible automatically gathers facts during the gather_facts phase and sets ansible_os_family to Windows, overriding the default value. The default ansible_os_family: Linux in the config file serves only as a fallback when facts are not gathered, not as the operative value during execution.

The role properly supports Windows with explicit conditionals:

  • Non-Windows tasks execute when ansible_os_family != 'Windows'
  • Windows-specific tasks execute when ansible_os_family == 'Windows'

Windows execution paths are protected and will function correctly on Windows systems.
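
For context, a sketch of the conditional split described above (the include wiring shown here is illustrative; the shared body files are the ones added in this PR):

- name: Run the benchmark body (Unix/Linux/macOS)
  include_tasks: unix_default_benchmark_body.yml  # actual include path may differ
  when: ansible_os_family != 'Windows'

- name: Run the benchmark body (Windows)
  include_tasks: windows_default_benchmark_body.yml
  when: ansible_os_family == 'Windows'

With gathered facts overriding the Linux default, only one of the two branches fires per host.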

Likely an incorrect or invalid review comment.

projects/container_bench/toolbox/container_bench_run_container_benchmark/tasks/main.yml (3)

18-22: Verify image tag change and benchmark command syntax.

The image was updated from quay.io/podman/hello to quay.io/podman/hello:latest. While this is explicit and reasonable, confirm that:

  1. The :latest tag is available and stable for the benchmark.
  2. The benchmark command syntax {{ binary_path }} {{ additional_args }} run --rm quay.io/podman/hello:latest is correct for all supported container runtimes (podman/docker).

5-8: Variable names are correctly aligned—no action needed.

The four variable names (container_bench_run_container_benchmark_exec_time_path, container_bench_run_container_benchmark_binary_path, container_bench_run_container_benchmark_rootfull, container_bench_run_container_benchmark_additional_args) are consistently defined in defaults/main/config.yml, validated in the assertion block, and properly used via the variable mapping mechanism in shared tasks. The naming follows the role naming convention and contains no inconsistencies or conflicts.


22-22: Variable setup is correct; no action needed.

The variables binary_path and additional_args referenced on line 22 are properly configured:

  • They are dynamically mapped by map_variables.yml (included at line 12) from the longer-named variables defined in defaults/main/config.yml
  • The role's assertion (lines 5–9) documents the required variables
  • The execution order is correct: the mapping task completes before line 22 uses the short variable names

The intentionally empty defaults for these variables is the correct pattern—they are mandatory parameters that callers must provide.

projects/container_bench/toolbox/container_bench_load_image_benchmark/defaults/main/config.yml (1)

1-22: Configuration file looks good.

The defaults are properly structured, variables are consistently named, and the file follows Ansible conventions. The auto-generated header and component reference align with the load_image_benchmark role renaming.

projects/container_bench/toolbox/container_bench_load_image_benchmark/tasks/main.yml (1)

11-14: All shared task files exist and are correctly structured with proper variable handling.

Verification confirms:

  • All three referenced shared task files exist at the specified paths
  • map_variables.yml correctly creates aliases: binary_path, additional_args, exec_time_path, rootfull
  • main.yml defines all required variables before including the shared tasks:
    • image_to_pull (line 17)
    • prepare_commands (lines 17-18) using mapped binary_path and additional_args
    • benchmark_command (lines 18-19) using mapped binary_path and additional_args
    • cleanup_commands (lines 21-28, platform-specific)
  • The unix_default_benchmark_body.yml validates these variables (lines 6-13) and uses them correctly in subsequent tasks
  • Variable scope and Ansible's play variable availability ensure all included tasks have access to these variables

The implementation is correct and well-structured.

docs/toolbox.generated/Container_Bench.run_benchmark.rst (1)

1-39: LGTM! Auto-generated documentation is well-structured.

The auto-generated documentation follows the expected format and clearly describes the run_benchmark tool and its parameters.

docs/toolbox.generated/Container_Bench.exec_container_benchmark.rst (1)

1-39: LGTM! Auto-generated documentation is consistent.

The documentation is well-structured and provides clear parameter descriptions for the exec_container_benchmark tool.

docs/toolbox.generated/index.rst (1)

56-80: LGTM! Documentation index properly expanded.

The container_bench section has been appropriately updated to include all the new benchmark tools. The formatting is consistent with existing entries.

projects/container_bench/toolbox/container_bench_exec_container_benchmark/tasks/main.yml (2)

5-14: LGTM! Variable renaming is consistent.

The variable names have been properly updated from container_bench_exec_benchmark_* to container_bench_exec_container_benchmark_*, and the role name matches the file path structure.


18-20: Fedora 42 image is confirmed available.

The registry successfully lists Fedora tags 40, 41, 42, 43, and 44, confirming that registry.fedoraproject.org/fedora:42 is both accessible and a valid reference. The code change is correct and requires no modification.

projects/container_bench/toolbox/container_bench_pull_image_benchmark/defaults/main/config.yml (1)

1-22: LGTM! Configuration properly updated.

The configuration file has been correctly updated with the new pull_image_benchmark naming. All variable names follow the consistent pattern container_bench_pull_image_benchmark_*.

projects/container_bench/toolbox/container_bench.py (6)

12-29: LGTM! Method renamed and properly structured.

The create_container_benchmark method (renamed from helloworld_benchmark) follows the correct pattern with appropriate decorators and consistent parameter signature.


31-48: LGTM! Method renamed and properly structured.

The start_container_benchmark method (renamed from exec_benchmark) is correctly implemented with the standard benchmark pattern.


50-200: LGTM! New benchmark methods are well-structured.

All the new container and image management benchmarks (remove_container_benchmark, pull_image_benchmark, list_images_benchmark, remove_image_benchmark, commit_container_benchmark, load_image_benchmark, save_image_benchmark, parallel_remove_image_benchmark) follow the consistent pattern and have appropriate role mappings.


202-295: LGTM! Sysbench benchmarks are properly implemented.

All sysbench benchmark methods (sysbench_cpu_benchmark, sysbench_memory_read_benchmark, sysbench_memory_write_benchmark, sysbench_fileio_container_benchmark, sysbench_fileio_mount_benchmark) are correctly structured with consistent naming and parameters.


297-352: LGTM! Network benchmarks are correctly implemented.

The iperf3 network benchmarks (iperf_net_bridge_benchmark, iperf_net_host_benchmark, iperf_host_to_container_benchmark) are properly structured. The previous docstring issues noted in past reviews have been successfully addressed.


354-409: LGTM! Additional benchmarks are well-structured.

The artifact_add_benchmark, run_container_benchmark, and exec_container_benchmark methods follow the established pattern and have correct role mappings.

Comment on lines 278 to 280
supported_container_engines:
- podman
- docker
Contributor

at first glance, it looks like this could be "optimized" to take less space in the config file, for example by listing only the unsupported engines, or by treating a missing supported_container_engines key as "supported by all engines"

(just a thought)
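
A minimal sketch of the second idea, where a missing key means "supported everywhere"; the helper and the ALL_ENGINES constant are hypothetical, not existing config-manager code:

# Hypothetical helper, illustrating the "missing key means all engines" interpretation.
ALL_ENGINES = ["podman", "docker"]

def supported_engines(benchmark_cfg: dict) -> list[str]:
    # An absent supported_container_engines key would mean the benchmark runs on every engine.
    return benchmark_cfg.get("supported_container_engines", ALL_ENGINES)

print(supported_engines({"runs": 1}))                                   # ['podman', 'docker']
print(supported_engines({"runs": 1,
                         "supported_container_engines": ["podman"]}))   # ['podman']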

Comment on lines 3 to 8
assert:
that:
- container_bench_create_container_benchmark_exec_time_path is defined
- container_bench_create_container_benchmark_binary_path is defined
- container_bench_create_container_benchmark_rootfull is defined
- container_bench_create_container_benchmark_additional_args is defined
Contributor

I'm not sure I understand why you assert that, when this already ensures that it will be there:

    def create_container_benchmark(
            self,
            exec_time_path,
            binary_path,
            rootfull=False,
            additional_args="",
    ):
is it to run without the Python wrapper?

Collaborator Author

The problem with the linter is that it only checks if variables are used in main.yml. It ignores the part where I map those variables to shorter versions for use in other parts. I guess that's the trade-off for being DRY.

Contributor

hum, I guess I see. Can you show me where you map them?

Contributor

nvm, I found it

Contributor

- name: Map long variable names to short aliases for readability
  set_fact:
    binary_path: "{{ vars[benchmark_role_name + '_binary_path'] }}"
    additional_args: "{{ vars[benchmark_role_name + '_additional_args'] }}"
    exec_time_path: "{{ vars[benchmark_role_name + '_exec_time_path'] }}"
    rootfull: "{{ vars[benchmark_role_name + '_rootfull'] }}"

got it.
One thing you may do to avoid that is to pass a dict from Python to Ansible,
something like

run_toolbox(exec_props=dict(path=..., args=...))

then in Ansible

set_fact:
  exec_props: "{{ my_role_exec_props }}"  # this one needs to change for each role

set_fact:
  path: "{{ exec_props['path'] }}"
  args: "{{ exec_props['args'] }}"

(not sure which version is cleaner though ^^)
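
To round out the dict-passing idea, a minimal Python-side sketch, assuming the AnsibleRole/AnsibleMappedParams decorators and the RunAnsibleRole(locals()) forwarding used by the existing toolbox methods; the exact wiring is illustrative, not the merged implementation:

# Illustrative sketch only; AnsibleRole, AnsibleMappedParams and RunAnsibleRole
# come from projects/core/library/ansible_toolbox.py (see the code-graph reference above).
class Container_Bench:
    @AnsibleRole("container_bench_create_container_benchmark")
    @AnsibleMappedParams
    def create_container_benchmark(self, exec_props: dict):
        """
        Runs the container create benchmark with the given runtime.

        Args:
            exec_props: dict bundling exec_time_path, binary_path, rootfull and additional_args
        """
        return RunAnsibleRole(locals())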

@kpouget
Contributor

kpouget commented Nov 12, 2025

thanks @Honny1, good work. Merging 👍🏻

@kpouget kpouget enabled auto-merge November 12, 2025 09:40
@kpouget kpouget merged commit 924e95a into openshift-psap:main Nov 12, 2025
7 of 8 checks passed