[container_benchmarks] Benchmark Implementation #856
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Skipping CI for Draft Pull Request.
Caution: Review failed. The pull request is closed.
Walkthrough: Adds ~50 new autogenerated Ansible toolbox roles, defaults, tasks and RST docs for many Container_Bench benchmarks; migrates benchmark roles to use an exec_props dict.
Sequence Diagram(s)
sequenceDiagram
participant Caller as Toolbox/API
participant Role as Ansible Role (container_bench_*)
participant Shared as Shared Tasks
participant Host as Target Host
Caller->>Role: call <benchmark>(exec_props)
Role->>Role: set_fact exec_props → binary_path, additional_args, exec_time_path, rootfull
Role->>Role: set_fact image(s), prepare/cleanup, benchmark_command, log paths
alt Non-Windows
Role->>Shared: include unix_*_benchmark_body.yml
Shared->>Host: pull images (image_to_pull or images_to_pull)
Host->>Host: run prepare_commands, record timestamp, execute benchmark_command, write log & timestamp, cleanup
else Windows
Role->>Shared: include windows_*_benchmark_body.yml
Shared->>Host: pull images (image_to_pull or images_to_pull)
Host->>Host: run PowerShell prepare, timestamp, execute benchmark_command, write log & timestamp, cleanup
end
Role-->>Caller: RunAnsibleRole result
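A hedged sketch of what one such role's tasks/main.yml looks like under this flow (the include paths, image, commands, and log file name are illustrative placeholders, not the PR's actual values):

```yaml
# Illustrative role body following the diagrammed flow above.
- name: Map exec_props to short aliases
  include_tasks: ../shared_tasks/map_variables.yml  # path assumed for illustration

- name: Set benchmark facts
  set_fact:
    image_to_pull: "quay.io/podman/hello:latest"
    prepare_commands: ""
    cleanup_commands: ""
    benchmark_command: "{{ binary_path }} {{ additional_args }} run --rm quay.io/podman/hello:latest"
    log_file_name: "benchmark.log"

- name: Run the benchmark body (Unix)
  include_tasks: ../shared_tasks/unix_default_benchmark_body.yml
  when: ansible_os_family != 'Windows'

- name: Run the benchmark body (Windows)
  include_tasks: ../shared_tasks/windows_default_benchmark_body.yml
  when: ansible_os_family == 'Windows'
```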
sequenceDiagram
participant Parser
participant FS as Filesystem
participant Store
participant Viz
Parser->>FS: look for metrics.json
alt metrics.json present
Parser->>Store: parse performance metrics (container_bench)
else
Parser->>FS: match dirname → synthetic mapping
Parser->>FS: read benchmark log and _ts.yaml
Parser->>Store: parse synthetic metric and timestamp
end
Store->>Viz: render appropriate report branch by metric_type
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
📜 Recent review details: Configuration used: CodeRabbit UI · Review profile: CHILL · Plan: Pro · 📒 Files selected for processing (91)
4d4e95a to 16baf03 (Compare)
/test cont_bench-jump-ci
🔴 Test of 'container_bench test test_ci' failed after 03 hours 06 minutes 59 seconds. 🔴 • Link to the test results. • Link to the reports index. Test configuration:
16baf03 to 5e73be5 (Compare)
/test cont_bench-jump-ci
🟢 Test of 'container_bench test test_ci' succeeded after 01 hours 57 minutes 31 seconds. 🟢 • Link to the test results. • Link to the reports index. Test configuration:
/test cont_bench-jump-ci
🔴 Test of 'container_bench test test_ci' failed after 00 hours 04 minutes 49 seconds. 🔴 • Link to the test results. • Link to the reports index. Test configuration:
5e73be5 to 7337117 (Compare)
/test cont_bench-jump-ci
🟢 Test of 'container_bench test test_ci' succeeded after 01 hours 10 minutes 03 seconds. 🟢 • Link to the test results. • Link to the reports index. Test configuration:
/test cont_bench-jump-ci
🔴 Test of 'container_bench test pre_cleanup_ci' failed after 00 hours 00 minutes 05 seconds. 🔴 • Link to the test results. • No reports index generated... Test configuration: Failure indicator: Empty. (See run.log)
7337117 to d89a71d (Compare)
/test cont_bench-jump-ci
🔴 Test of 'container_bench test test_ci' failed after 00 hours 22 minutes 50 seconds. 🔴 • Link to the test results. • Link to the reports index. Test configuration:
d89a71d to 879f398 (Compare)
/test cont_bench-jump-ci
🟢 Test of 'container_bench test test_ci' succeeded after 00 hours 56 minutes 24 seconds. 🟢 • Link to the test results. • Link to the reports index. Test configuration:
879f398 to 6477790 (Compare)
/test cont_bench-jump-ci
f9911f1 to 035cd53 (Compare)
/test cont_bench-jump-ci
🔴 Test of 'container_bench test test_ci' failed after 00 hours 36 minutes 14 seconds. 🔴 • Link to the test results. • Link to the reports index. Test configuration:
035cd53 to ab149ce (Compare)
/test cont_bench-jump-ci
🔴 Test of 'container_bench test test_ci' failed after 00 hours 01 minutes 48 seconds. 🔴 • Link to the test results. • Link to the reports index. Test configuration:
9b0be15 to 80361ad (Compare)
/test cont_bench-jump-ci
80361ad to 595f563 (Compare)
/test cont_bench-jump-ci
1 similar comment
/test cont_bench-jump-ci
🔴 Test of 'container_bench test test_ci' failed after 01 hours 57 minutes 20 seconds. 🔴 • Link to the test results. • Link to the reports index. Test configuration:
595f563 to e500823 (Compare)
/test cont_bench-jump-ci
🟢 Test of 'container_bench test test_ci' succeeded after 02 hours 55 minutes 39 seconds. 🟢 • Link to the test results. • Link to the reports index. Test configuration:
Actionable comments posted: 7
♻️ Duplicate comments (8)
docs/toolbox.generated/Container_Bench.start_benchmark.rst (1)
1-39: Naming distinction requires verification. This file documents `start_benchmark` while the prior file documents `start_container_benchmark`. Verify this distinction is intentional and reflects an actual distinction between two separate benchmark components, not a naming inconsistency.
docs/toolbox.generated/Container_Bench.sysbench_cpu_benchmark.rst (1)
1-39: Same auto-generated file concern as Container_Bench.start_container_benchmark.rst.
projects/container_bench/toolbox/container_bench_sysbench_fileio_mount_benchmark/tasks/main.yml (1)
2-10: Same assert with `when: false` code smell as container_bench_commit_container_benchmark/tasks/main.yml. See prior review comment.
docs/toolbox.generated/Container_Bench.sysbench_fileIO_container_benchmark.rst (1)
1-39: Possible filename/title naming inconsistency. The filename uses `sysbench_fileIO_container_benchmark` (camelCase "fileIO"), but the document title on line 9 uses `sysbench_fileio_container_benchmark` (lowercase "fileio"). Also note that file 6 uses `container_bench_sysbench_fileio_mount_benchmark` (lowercase).
Verify consistency: should this be `fileio` (lowercase) or `fileIO` (camelCase) across filenames, task names, and documentation titles? Inconsistency can lead to user confusion and tooling issues.
docs/toolbox.generated/Container_Bench.remove_image_benchmark.rst (1)
1-39: Same auto-generated-file-in-version-control concern as prior documentation files.
projects/container_bench/toolbox/container_bench_save_image_benchmark/tasks/main.yml (2)
2-9: Same validation pattern issue as load_image_benchmark. This file has the same non-executing assertion with `when: false`. The earlier comment about verification applies here as well.
23-31: Verify cleanup_commands usage in shared tasks.
projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml (1)
2-9: Same validation assertion issue as previous benchmark files.
🧹 Nitpick comments (21)
projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/tasks/main.yml (1)
28-38: Consider quoting variables in shell commands for defense-in-depth. Lines 31 and 37 interpolate `{{ binary_path }}` and `{{ additional_args }}` without quoting. While these are trusted Ansible variables, adding quotes (e.g., `"{{ binary_path }}"`) would provide defense-in-depth against unintended word splitting if these variables ever contain spaces or special characters. This is especially relevant for the `additional_args` parameter.
Example: Change line 31 from:

    {{ binary_path }} {{ additional_args }} rmi -f ...

to:

    "{{ binary_path }}" {{ additional_args }} rmi -f ...

This is a defensive coding practice rather than a critical issue, since Ansible controls these variables.
projects/container_bench/toolbox/container_bench_exec_container_benchmark/tasks/main.yml (1)
18-20: Use the `image_to_pull` variable to avoid duplication. Line 20 hardcodes the image reference that's already defined in `image_to_pull` on Line 18. Consider using the variable for consistency and maintainability.
Apply this change to use the variable:

      prepare_commands: |
    -   {{ binary_path }} {{ additional_args }} run --name benchmark_exec -d registry.fedoraproject.org/fedora:42 sleep infinity
    +   {{ binary_path }} {{ additional_args }} run --name benchmark_exec -d {{ image_to_pull }} sleep infinity

projects/container_bench/visualizations/benchmark/store/parsers.py (1)
192-196: Narrow the caught exception types. Catching a bare `Exception` risks hiding bugs unrelated to missing files or YAML parsing issues. Please scope the handler to the concrete cases we expect (e.g. `OSError`, `yaml.YAMLError`) so genuine regressions surface.

    -    except Exception:
    +    except (OSError, yaml.YAMLError):
             return None

projects/container_bench/visualizations/benchmark/plotting/comparison_report.py (2)
62-67: Consider floating-point comparison tolerance. The exact equality comparison (`read_value == best_read`) may fail due to floating-point precision issues if values are computed rather than directly assigned. If values originate from calculations, consider using a small epsilon tolerance for comparison.
347-347: Remove unused enumerate index. The loop index from `enumerate(configurations)` is not used. Consider using `for config in configurations:` instead.
Apply this diff:

    -    for _, config in enumerate(configurations):
    +    for config in configurations:

projects/container_bench/testing/container_engine.py (2)
261-263: Document or verify the 30-second sleep duration. The hardcoded 30-second sleep is a workaround for Windows SSH session termination issues. Consider:
- Is 30 seconds the minimum required duration, or could it be shorter?
- Could a polling approach verify the machine has stopped rather than using a fixed delay?
- Document why 30 seconds specifically (e.g., based on testing, Windows process cleanup time, etc.).
A fixed sleep may be too short under heavy load or unnecessarily long on faster systems.
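One possible polling replacement, sketched below under the assumption that the machine is the default `podman-machine-default` and that `podman machine inspect --format '{{.State}}'` reports `stopped` once shutdown completes:

```yaml
# Hedged sketch: poll the machine state instead of a fixed 30-second sleep.
# The machine name and the 60-second ceiling (30 retries x 2s) are assumptions.
- name: Wait until the podman machine reports 'stopped'
  shell: "podman machine inspect --format '{% raw %}{{.State}}{% endraw %}' podman-machine-default"
  register: machine_state
  until: machine_state.stdout | trim == 'stopped'
  retries: 30
  delay: 2
```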
340-342: Document or verify the 30-second sleep duration. Same concern as the podman machine stop: the 30-second hardcoded sleep could benefit from:
- Justification for the specific duration
- Verification that Docker Desktop has actually stopped (polling approach)
- Documentation of testing or Windows-specific behavior that necessitates this delay
Consider whether both stops require the same duration or if they have different requirements.
projects/container_bench/toolbox/container_bench_sysbench_memory_write_benchmark/tasks/main.yml (1)
3-10: Review comment overstates the validation gap; variables are validated downstream, though indirectly. The assertion with `when: false` is confirmed dead code, but this doesn't create an unvalidated variable problem:
- map_variables.yml validates implicitly: accessing `vars[benchmark_role_name + '_binary_path']` will fail with a Jinja2 error if the variable is undefined, and this happens before any benchmark execution.
- unix/windows benchmark body files validate explicitly: both include explicit `fail` tasks that check `prepare_commands`, `benchmark_command`, `cleanup_commands`, and `log_file_name` are defined. These facts are set by the `set_fact` task, ensuring validation.
- Defaults provide fallback values: all four required variables have default entries in `defaults/main/config.yml`, even if some are empty. The variables are technically "defined" even if not properly configured by the user.
The actual concern is not about missing validation but about clarity: the assertion task is dead code that misleads readers into thinking validation occurs at task execution time. Removing the `when: false` and the assertion (relying on the downstream implicit and explicit validation) would be clearer, or converting it to an explicit early-fail if early validation is desired.
The cleanup_commands image name quoting is acceptable as-is.
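If the explicit early-fail route is taken, a minimal sketch (variable names taken from this role's defaults file; the length checks and message wording are assumptions, since the defaults define the variables as empty strings):

```yaml
# Sketch of an early-fail check that actually runs, replacing the dead
# `when: false` assertion.
- name: Validate required variables are provided
  assert:
    that:
      - container_bench_sysbench_memory_write_benchmark_binary_path | length > 0
      - container_bench_sysbench_memory_write_benchmark_exec_time_path | length > 0
    fail_msg: "binary_path and exec_time_path must be set by the caller"
```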
projects/container_bench/toolbox/container_bench_artifact_add_benchmark/tasks/main.yml (1)
8-15: Assert with `when: false` is unusual; consider moving validation to runtime. The assertion block never executes (`when: false`), so it doesn't validate that required variables are actually set. If these variables should be mandatory, either move the assertion to a conditional block that runs (e.g., `when: ansible_os_family is defined`) or rely on the map_variables.yml task to fail if variables are missing.
Consider one of these alternatives:
Option 1: Remove the assertion if variables are guaranteed by map_variables.yml

    -- name: Validate required variables are defined
    -  assert:
    -    that:
    -      - container_bench_artifact_add_benchmark_exec_time_path is defined
    -      - container_bench_artifact_add_benchmark_binary_path is defined
    -      - container_bench_artifact_add_benchmark_rootfull is defined
    -      - container_bench_artifact_add_benchmark_additional_args is defined
    -  when: false # This task never runs but validates variable definitions

Option 2: Validate only when running (remove `when: false`)

     - name: Validate required variables are defined
       assert:
         that:
           - container_bench_artifact_add_benchmark_exec_time_path is defined
           - container_bench_artifact_add_benchmark_binary_path is defined
           - container_bench_artifact_add_benchmark_rootfull is defined
           - container_bench_artifact_add_benchmark_additional_args is defined
    -  when: false # This task never runs but validates variable definitions

projects/container_bench/toolbox/container_bench_remove_image_benchmark/tasks/main.yml (1)
2-9: Assert with `when: false` pattern repeated here; consider runtime validation. Similar to the artifact_add_benchmark file, the assertion block never executes. See the recommendation in the artifact_add_benchmark review for handling this pattern.
docs/toolbox.generated/Container_Bench.start_container_benchmark.rst (1)
1-39: Auto-generated files should not be committed to version control. Lines 4-5 indicate this file is auto-generated via `repo generate_toolbox_rst_documentation`. Auto-generated files typically should not be committed to version control; instead, they should be generated as part of the CI/build pipeline and potentially cached in artifacts. Including them in the PR makes diffs harder to review and increases merge conflicts.
Consider whether these documentation files should be excluded from version control (e.g., via `.gitignore` or similar) and regenerated during CI runs instead.
projects/container_bench/toolbox/container_bench_create_container_benchmark/tasks/main.yml (1)
2-9: The `when: false` validation pattern requires clarification. The validation task uses `when: false` to prevent execution while still checking variable definitions. This is clever but non-intuitive. Consider adding an inline comment or using a more standard pattern (e.g., using `block` with early validation). However, if this pattern is used consistently across the codebase, it's acceptable.
projects/container_bench/toolbox/shared_tasks/unix_synthetic_benchmark_body.yml (2)
43-47: No separator between stdout and stderr in saved output. Lines 45-46 combine `benchmark_output.stdout` and `benchmark_output.stderr` without a separator. For benchmarks producing both stdout and stderr, this could make parsing ambiguous. Consider adding a header or separator:

     - name: Save benchmark output to file
       copy:
         content: |
    -      {{ benchmark_output.stdout }}{{ benchmark_output.stderr }}
    +      === STDOUT ===
    +      {{ benchmark_output.stdout }}
    +      === STDERR ===
    +      {{ benchmark_output.stderr }}
         dest: "{{ artifact_extra_logs_dir }}/artifacts/{{ log_file_name }}"
         mode: '0644'

This improves debuggability and aligns with standard logging practices.
49-54: Timestamp file naming uses string replacement on `log_file_name`. Line 53 relies on `log_file_name` containing `.log` to create the timestamp filename. If a task sets `log_file_name` without the `.log` extension, the filter will fail silently or produce unexpected results. Consider a more robust approach:

     - name: Save benchmark timestamp
       copy:
         content: |
           start_time: {{ benchmark_start_time.stdout | trim }}
    -    dest: "{{ artifact_extra_logs_dir }}/artifacts/{{ log_file_name | replace('.log', '_ts.yaml') }}"
    +    dest: "{{ artifact_extra_logs_dir }}/artifacts/{{ log_file_name }}_ts.yaml"
         mode: '0644'

This approach appends `_ts.yaml` regardless of the log_file_name format.
projects/container_bench/toolbox/shared_tasks/pull_image.yml (1)
22-30: Clarify precedence when both image sources are provided. Line 29 silently prioritizes `image_name` over `images_to_pull` if both are defined. This should be documented explicitly in the usage comment (lines 3-11) to prevent confusion. Either enforce mutual exclusivity or document the precedence clearly.
Additionally, the logic at line 29 could be simplified for readability:
- Current:
images_list: "{{ [image_name] if (image_name is defined and image_name | length > 0) else images_to_pull }}"- Consider: using a conditional include or explicit error if both are provided
projects/container_bench/toolbox/container_bench_sysbench_fileio_container_benchmark/tasks/main.yml (1)
2-10: Clarify intent of no-op validation task.The validation task uses
when: false, meaning it never executes at runtime. While this pattern may serve static analysis tools, it creates confusion about actual runtime validation. The comment suggests it validates variable definitions, but no validation occurs.Options:
- Remove the task if it's truly not needed
- Remove
when: falseto enable actual runtime validation- Add a comment explaining this is for static analysis only
Consider which approach aligns with the broader benchmark infrastructure.
projects/container_bench/toolbox/container_bench_start_container_benchmark/tasks/main.yml (1)
2-9: Clarify intent of no-op validation task (same pattern as container_bench_sysbench_fileio_container_benchmark).projects/container_bench/toolbox/container_bench_list_images_benchmark/tasks/main.yml (1)
2-9: Clarify intent of no-op validation task (consistent pattern across all benchmark tasks).This is the third occurrence of the
when: falsevalidation pattern. Consider consolidating the approach across all benchmark tasks.projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml (1)
44-49: TLS handling asymmetry between podman and other container runtimes. Only podman gets the `--tls-verify=false` flag; Docker and other runtimes pull without this flag. This assumes Docker/docker-compatible runtimes will succeed with default TLS validation, which may fail with self-signed certificates.
Consider making TLS verification a configurable parameter, similar to `additional_args`. This would allow flexibility for environments with non-standard certificate setups:

    container_bench_pull_image_benchmark_tls_verify: true  # In defaults

Then use:

    {% set tls_flag = '--tls-verify=false' if not container_bench_pull_image_benchmark_tls_verify else '--tls-verify=true' %}

This makes the behavior explicit and configurable.
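Wired into the role, that could look roughly like the following sketch (the variable name mirrors the reviewer's suggestion; the command shape mirrors the role's pull command and is not the PR's final code):

```yaml
# Sketch: configurable TLS verification instead of hardcoding
# --tls-verify=false. Default `true` keeps verification on.
# Note: --tls-verify is a podman flag; gate on the engine if docker is used.
- name: Set pull command with configurable TLS verification
  set_fact:
    benchmark_command: >-
      {{ binary_path }} {{ additional_args }} pull
      {{ '--tls-verify=false' if not (container_bench_pull_image_benchmark_tls_verify | default(true)) else '' }}
      {{ image_to_pull }}
```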
projects/container_bench/toolbox/container_bench_remove_container_benchmark/tasks/main.yml (1)
2-9: Clarify the purpose of the non-executing validation task. The validation task has `when: false`, which means it never executes. While the comment indicates this is intentional for validating variable definitions (likely for IDE/linting support), this pattern may be confusing to maintainers. Consider documenting this pattern in a shared location or using a more explicit approach.
If this is a common pattern across multiple benchmark tasks, consider adding a comment referencing shared documentation that explains this approach.
projects/container_bench/testing/config.yaml (1)
354-371: Document commented matbenchmarking fields. Lines 368-371 show several matbenchmarking iterable test fields commented out (test.podman.repo_version, test.podman.machine_provider, test.podman.runtime). Consider adding a comment explaining why these are disabled and under what conditions they should be re-enabled.
This would help future maintainers understand the test matrix configuration decisions.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (83)
- docs/toolbox.generated/Container_Bench.artifact_add_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.commit_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.create_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.create_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.exec_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.iperf_host_to_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.iperf_net_bridge_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.iperf_net_host_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.list_images_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.load_image_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.parallel_remove_image_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.pull_image_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.remove_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.remove_image_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.run_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.run_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.save_image_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.start_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.start_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.sysbench_cpu_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.sysbench_fileIO_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.sysbench_fileIO_mount_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.sysbench_memory_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.sysbench_memory_read_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.sysbench_memory_write_benchmark.rst (1 hunks)
- docs/toolbox.generated/index.rst (1 hunks)
- projects/container_bench/testing/config.yaml (7 hunks)
- projects/container_bench/testing/config_manager.py (2 hunks)
- projects/container_bench/testing/container_engine.py (2 hunks)
- projects/container_bench/testing/install.py (1 hunks)
- projects/container_bench/toolbox/container_bench.py (2 hunks)
- projects/container_bench/toolbox/container_bench_artifact_add_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_artifact_add_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_commit_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_commit_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_create_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_create_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_exec_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_exec_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_iperf_host_to_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_iperf_host_to_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_iperf_net_bridge_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_iperf_net_bridge_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_iperf_net_host_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_iperf_net_host_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_list_images_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_list_images_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_load_image_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_load_image_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_pull_image_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_remove_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_remove_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_remove_image_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_remove_image_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_run_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_run_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_save_image_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_save_image_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_start_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_start_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_sysbench_cpu_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_sysbench_cpu_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_sysbench_fileio_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_sysbench_fileio_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_sysbench_fileio_mount_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_sysbench_fileio_mount_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_sysbench_memory_read_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_sysbench_memory_read_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_sysbench_memory_write_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_sysbench_memory_write_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/shared_tasks/pull_image.yml (3 hunks)
- projects/container_bench/toolbox/shared_tasks/pull_single_image.yml (1 hunks)
- projects/container_bench/toolbox/shared_tasks/unix_default_benchmark_body.yml (1 hunks)
- projects/container_bench/toolbox/shared_tasks/unix_synthetic_benchmark_body.yml (1 hunks)
- projects/container_bench/toolbox/shared_tasks/windows_default_benchmark_body.yml (1 hunks)
- projects/container_bench/toolbox/shared_tasks/windows_synthetic_benchmark_body.yml (1 hunks)
- projects/container_bench/visualizations/benchmark/plotting/comparison_report.py (4 hunks)
- projects/container_bench/visualizations/benchmark/plotting/report.py (3 hunks)
- projects/container_bench/visualizations/benchmark/plotting/utils/config.py (1 hunks)
- projects/container_bench/visualizations/benchmark/store/parsers.py (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
projects/container_bench/testing/install.py (2)
projects/container_bench/testing/config_manager.py (3)
- SystemType (5-8), ConfigManager (82-263), get_choco_config (203-208)
projects/container_bench/testing/remote_access.py (1)
- run_with_ansible_ssh_conf (86-143)
projects/container_bench/testing/container_engine.py (1)
projects/container_bench/testing/config_manager.py (2)
- ConfigManager (82-263), is_windows (92-93)
projects/container_bench/toolbox/container_bench.py (2)
projects/core/library/ansible_toolbox.py (3)
- AnsibleRole (60-85), AnsibleMappedParams (88-90), RunAnsibleRole (110-369)
projects/container_bench/testing/container_engine.py (1)
- additional_args (110-117)
projects/container_bench/visualizations/benchmark/plotting/report.py (2)
projects/container_bench/visualizations/benchmark/plotting/utils/shared.py (1)
- create_summary_info_card (155-162)
projects/matrix_benchmarking/visualizations/helpers/plotting/html.py (1)
- info_card (6-34)
projects/container_bench/visualizations/benchmark/plotting/comparison_report.py (2)
projects/container_bench/visualizations/benchmark/plotting/utils/config.py (1)
- generate_display_config_label (318-337)
projects/container_bench/visualizations/benchmark/plotting/utils/shared.py (2)
- create_config_cell (70-75), create_table_header (65-67)
🪛 Ruff (0.14.4)
projects/container_bench/visualizations/benchmark/store/parsers.py
169-169: Consider moving this statement to an else block (TRY300)
195-195: Do not catch blind exception: Exception (BLE001)
projects/container_bench/toolbox/container_bench_artifact_add_benchmark/tasks/main.yml
projects/container_bench/toolbox/container_bench_create_container_benchmark/tasks/main.yml
projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml
projects/container_bench/toolbox/container_bench_sysbench_memory_write_benchmark/tasks/main.yml
e8bfd11 to b4292c6 (Compare)
Actionable comments posted: 5
♻️ Duplicate comments (1)
projects/container_bench/toolbox/container_bench_create_container_benchmark/tasks/main.yml (1)
16-22: ⚠️ Unresolved: Missing required `log_file_name` variable (duplicate of prior critical issue). A previous review flagged that the `set_fact` block is missing the `log_file_name` variable, which is required by the downstream `unix_default_benchmark_body.yml` and `windows_default_benchmark_body.yml` tasks. This critical issue remains unresolved in the current code.
Add `log_file_name` to the `set_fact` block:

     - name: Set common commands (shared across all platforms)
       set_fact:
         image_to_pull: "quay.io/podman/hello:latest"
         prepare_commands: ""  # Image is pulled using shared pull_image task below
         cleanup_commands: ""  # Cleanup handled by shared tasks
    +    log_file_name: "create_container.log"
         benchmark_command: |
           {{ binary_path }} {{ additional_args }} create quay.io/podman/hello:latest podman_hello_world

Verify that the filename `create_container.log` aligns with the naming conventions used elsewhere in the benchmark suite.
🧹 Nitpick comments (2)
projects/container_bench/toolbox/container_bench_load_image_benchmark/tasks/main.yml (1)
2-9: Validation task with `when: false` is unusual but acceptable if documented. The validation task is configured to never run (`when: false`). According to the comment, this is intended for static validation only. While this pattern works, it's unconventional. Ensure this aligns with other benchmarks in the codebase and that the pattern is documented or explained for future maintainers.
Consider whether this pattern should be:
- Applied consistently across all benchmark roles for uniformity
- Replaced with a more conventional pre-flight check that actually runs
- Retained with explicit documentation in a style guide or README
This is a minor note assuming the pattern is already established elsewhere in the codebase.
projects/container_bench/toolbox/container_bench.py (1)
22-22: Consider improving docstring consistency. For better consistency with other benchmark descriptions, consider these minor refinements:
- Line 22: "Runs the container create benchmark with the given runtime" (instead of "create benchmark")
- Line 41: "Runs the container start benchmark with the given runtime" (instead of "start benchmark")
- Line 383: "Runs a container benchmark with the given runtime" (to avoid repetition of "run")
These changes would align the descriptions with the more explicit naming pattern used in other benchmarks.
Also applies to: 41-41, 383-383
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (53)
- docs/toolbox.generated/Container_Bench.commit_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.create_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.create_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.exec_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.iperf_host_to_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.iperf_net_host_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.list_images_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.load_image_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.parallel_remove_image_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.pull_image_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.remove_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.remove_image_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.run_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.run_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.save_image_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.start_benchmark.rst (1 hunks)
- docs/toolbox.generated/Container_Bench.start_container_benchmark.rst (1 hunks)
- docs/toolbox.generated/index.rst (1 hunks)
- projects/container_bench/testing/config.yaml (7 hunks)
- projects/container_bench/testing/container_engine.py (2 hunks)
- projects/container_bench/toolbox/container_bench.py (2 hunks)
- projects/container_bench/toolbox/container_bench_commit_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_commit_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_create_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_create_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_exec_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_exec_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_iperf_host_to_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_iperf_net_bridge_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_iperf_net_host_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_list_images_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_list_images_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_load_image_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_load_image_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_pull_image_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_remove_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_remove_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_remove_image_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_remove_image_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_run_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_run_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_save_image_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_save_image_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_start_container_benchmark/defaults/main/config.yml (1 hunks)
- projects/container_bench/toolbox/container_bench_start_container_benchmark/tasks/main.yml (1 hunks)
- projects/container_bench/toolbox/shared_tasks/pull_image.yml (3 hunks)
- projects/container_bench/toolbox/shared_tasks/pull_single_image.yml (1 hunks)
- projects/container_bench/toolbox/shared_tasks/unix_default_benchmark_body.yml (1 hunks)
- projects/container_bench/toolbox/shared_tasks/windows_default_benchmark_body.yml (1 hunks)
- projects/container_bench/visualizations/benchmark/plotting/utils/config.py (2 hunks)
✅ Files skipped from review due to trivial changes (4)
- docs/toolbox.generated/Container_Bench.iperf_net_host_benchmark.rst
- projects/container_bench/toolbox/container_bench_pull_image_benchmark/tasks/main.yml
- docs/toolbox.generated/Container_Bench.start_container_benchmark.rst
- docs/toolbox.generated/Container_Bench.iperf_host_to_container_benchmark.rst
🚧 Files skipped from review as they are similar to previous changes (35)
- projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/tasks/main.yml
- projects/container_bench/toolbox/container_bench_save_image_benchmark/defaults/main/config.yml
- projects/container_bench/toolbox/container_bench_start_container_benchmark/defaults/main/config.yml
- docs/toolbox.generated/Container_Bench.pull_image_benchmark.rst
- docs/toolbox.generated/Container_Bench.create_benchmark.rst
- projects/container_bench/toolbox/container_bench_save_image_benchmark/tasks/main.yml
- docs/toolbox.generated/Container_Bench.start_benchmark.rst
- projects/container_bench/toolbox/container_bench_run_container_benchmark/defaults/main/config.yml
- docs/toolbox.generated/Container_Bench.parallel_remove_image_benchmark.rst
- projects/container_bench/toolbox/container_bench_create_container_benchmark/defaults/main/config.yml
- projects/container_bench/toolbox/shared_tasks/windows_default_benchmark_body.yml
- projects/container_bench/toolbox/container_bench_start_container_benchmark/tasks/main.yml
- docs/toolbox.generated/Container_Bench.run_container_benchmark.rst
- projects/container_bench/toolbox/container_bench_exec_container_benchmark/defaults/main/config.yml
- projects/container_bench/toolbox/shared_tasks/pull_image.yml
- projects/container_bench/toolbox/container_bench_remove_image_benchmark/tasks/main.yml
- docs/toolbox.generated/Container_Bench.load_image_benchmark.rst
- projects/container_bench/visualizations/benchmark/plotting/utils/config.py
- projects/container_bench/toolbox/container_bench_remove_container_benchmark/defaults/main/config.yml
- projects/container_bench/testing/container_engine.py
- docs/toolbox.generated/Container_Bench.remove_image_benchmark.rst
- projects/container_bench/toolbox/container_bench_parallel_remove_image_benchmark/defaults/main/config.yml
- projects/container_bench/toolbox/container_bench_commit_container_benchmark/tasks/main.yml
- docs/toolbox.generated/Container_Bench.commit_container_benchmark.rst
- projects/container_bench/toolbox/container_bench_iperf_host_to_container_benchmark/tasks/main.yml
- docs/toolbox.generated/Container_Bench.remove_container_benchmark.rst
- projects/container_bench/toolbox/container_bench_remove_image_benchmark/defaults/main/config.yml
- projects/container_bench/toolbox/container_bench_list_images_benchmark/tasks/main.yml
- projects/container_bench/toolbox/container_bench_iperf_net_host_benchmark/tasks/main.yml
- docs/toolbox.generated/Container_Bench.save_image_benchmark.rst
- docs/toolbox.generated/Container_Bench.create_container_benchmark.rst
- projects/container_bench/toolbox/container_bench_remove_container_benchmark/tasks/main.yml
- docs/toolbox.generated/Container_Bench.list_images_benchmark.rst
- projects/container_bench/toolbox/container_bench_iperf_net_bridge_benchmark/tasks/main.yml
- projects/container_bench/toolbox/shared_tasks/unix_default_benchmark_body.yml
🧰 Additional context used
🧬 Code graph analysis (1)
projects/container_bench/toolbox/container_bench.py (2)
projects/core/library/ansible_toolbox.py (3)
- AnsibleRole (60-85), AnsibleMappedParams (88-90), RunAnsibleRole (110-369)
projects/container_bench/testing/container_engine.py (1)
- additional_args (110-117)
🔇 Additional comments (28)
projects/container_bench/toolbox/container_bench_list_images_benchmark/defaults/main/config.yml (3)
1-1: Incomplete auto-generation header comment. The trailing ellipsis "..." in the header comment suggests the message may be truncated or incomplete.
Please verify if this is the intended header text or if it should be completed.
8-9: Mandatory parameters lack default values; verify this is intentional. Lines 8-9 and 12-13 define mandatory parameters (`exec_time_path` and `binary_path`) but leave them empty without defaults. This can cause issues if:
- The playbook/tasks fail silently when these variables are unset
- Users are unaware these must be provided via inventory or vars
Confirm whether this design pattern is intentional and whether the related documentation (referenced in the AI summary) adequately guides users to set these values.
Also applies to: 12-13
22-22: Verify `ansible_os_family` hardcoding with Windows support. Per the PR summary, Windows support is being added to the container benchmarks. However, `ansible_os_family: Linux` is hardcoded as a default. This may need to be either:
- Set conditionally based on the target OS, or
- Removed and left for the role to detect/set at runtime
Verify this aligns with the intended Windows/multi-OS support strategy.
projects/container_bench/testing/config.yaml (5)
157-157: Verify the intent behind reducing Podman machine CPUs from 12 to 8. This change may impact benchmark consistency and reproducibility. Confirm whether this is intentional for resource optimization or a mistake, and ensure it doesn't skew comparative results across different test runs or environments.
263-267: Clarify why `artifact_add_benchmark` supports only Podman. This is the only benchmark in the new suite that doesn't support both Podman and Docker. If this is intentional, consider adding a comment explaining the limitation. If it should support Docker, update the configuration accordingly.
368-371: Clarify why multiple matbenchmarking iterable_test_fields are commented out. Lines 368-371 show that `test.podman.repo_version`, `test.podman.rootful`, `test.podman.machine_provider`, and `test.podman.runtime` are commented out. This significantly reduces the test matrix dimensions. Confirm whether this is intentional (e.g., to speed up CI) or accidental, and document the rationale.
185-267: Benchmark definitions are well-structured and consistent. The new benchmark suite follows a consistent pattern with `runs: 1` and explicit `supported_container_engines` declarations. The definitions are clear and align with the expanded container image set. All benchmarks appear to be properly referenced in the test configuration.
71-74: Chocolatey configuration for Windows support looks appropriate. The new `choco` section follows the same structure as existing package manager configurations (brew, dnf), making it a natural extension for Windows support. The inclusion of `podman-cli` as a dependency aligns with the PR's Windows enablement objective.
projects/container_bench/toolbox/shared_tasks/pull_single_image.yml (1)
49-49: Unquoted `current_image_name` in pull commands poses similar risk. Lines 49 and 53 use `{{ current_image_name }}` in shell/win_shell commands without quoting. While image names are less likely to contain spaces than file paths, container registries allow special characters in image names (colons, slashes for namespaces, etc.). If `current_image_name` contains spaces or shell metacharacters, the pull command will fail or execute unexpectedly.
Apply this fix:

     - name: Pull image from registry (Unix/Linux/macOS)
       become: "{{ rootfull | default(omit) }}"
    -  shell: "{{ binary_path }} {{ additional_args | default('') }} pull {{ current_image_name }}"
    +  shell: "{{ binary_path }} {{ additional_args | default('') }} pull {{ current_image_name | quote }}"
       when: ansible_os_family != 'Windows' and not tarfile_exists

     - name: Pull image from registry (Windows)
    -  ansible.windows.win_shell: "{{ binary_path }} {{ additional_args | default('') }} pull {{ current_image_name }}"
    +  ansible.windows.win_shell: "{{ binary_path }} {{ additional_args | default('') }} pull {{ current_image_name | quote }}"
       when: ansible_os_family == 'Windows' and not tarfile_exists

Also applies to: 53-53
projects/container_bench/toolbox/container_bench_commit_container_benchmark/defaults/main/config.yml (2)
1-3: Auto-generated file structure is correct. The header, source component reference, and generation command are properly documented per Ansible toolbox conventions.
20-22: No changes required; the `ansible_os_family` default does not block Windows support. The concern is based on a misunderstanding of Ansible's behavior. When the role executes on Windows, Ansible automatically gathers facts during the `gather_facts` phase and sets `ansible_os_family` to `Windows`, overriding the default value. The default `ansible_os_family: Linux` in the config file serves only as a fallback when facts are not gathered, not as the operative value during execution.
The role properly supports Windows with explicit conditionals:
- Non-Windows tasks execute when `ansible_os_family != 'Windows'`
- Windows-specific tasks execute when `ansible_os_family == 'Windows'`
Windows execution paths are protected and will function correctly on Windows systems.
Likely an incorrect or invalid review comment.
projects/container_bench/toolbox/container_bench_run_container_benchmark/tasks/main.yml (3)
18-22: Verify image tag change and benchmark command syntax. The image was updated from `quay.io/podman/hello` to `quay.io/podman/hello:latest`. While this is explicit and reasonable, confirm that:
- The `:latest` tag is available and stable for the benchmark.
- The benchmark command syntax `{{ binary_path }} {{ additional_args }} run --rm quay.io/podman/hello:latest` is correct for all supported container runtimes (podman/docker).
5-8: Variable names are correctly aligned; no action needed. The four variable names (`container_bench_run_container_benchmark_exec_time_path`, `container_bench_run_container_benchmark_binary_path`, `container_bench_run_container_benchmark_rootfull`, `container_bench_run_container_benchmark_additional_args`) are consistently defined in `defaults/main/config.yml`, validated in the assertion block, and properly used via the variable mapping mechanism in shared tasks. The naming follows the role naming convention and contains no inconsistencies or conflicts.
22-22: Variable setup is correct; no action needed. The variables `binary_path` and `additional_args` referenced on line 22 are properly configured:
- They are dynamically mapped by `map_variables.yml` (included at line 12) from the longer-named variables defined in `defaults/main/config.yml`
- The role's assertion (lines 5-9) documents the required variables
- The execution order is correct: the mapping task completes before line 22 uses the short variable names
The intentionally empty defaults for these variables is the correct pattern: they are mandatory parameters that callers must provide.
projects/container_bench/toolbox/container_bench_load_image_benchmark/defaults/main/config.yml (1)
1-22: Configuration file looks good. The defaults are properly structured, variables are consistently named, and the file follows Ansible conventions. The auto-generated header and component reference align with the load_image_benchmark role renaming.
projects/container_bench/toolbox/container_bench_load_image_benchmark/tasks/main.yml (1)
11-14: All shared task files exist and are correctly structured with proper variable handling. Verification confirms:
- All three referenced shared task files exist at the specified paths
- `map_variables.yml` correctly creates aliases: `binary_path`, `additional_args`, `exec_time_path`, `rootfull`
- `main.yml` defines all required variables before including the shared tasks: `image_to_pull` (line 17), `prepare_commands` (lines 17-18) using mapped `binary_path` and `additional_args`, `benchmark_command` (lines 18-19) using mapped `binary_path` and `additional_args`, and `cleanup_commands` (lines 21-28, platform-specific)
- The `unix_default_benchmark_body.yml` validates these variables (lines 6-13) and uses them correctly in subsequent tasks
- Variable scope and Ansible's play variable availability ensure all included tasks have access to these variables
The implementation is correct and well-structured.
docs/toolbox.generated/Container_Bench.run_benchmark.rst (1)
1-39: LGTM! Auto-generated documentation is well-structured. The auto-generated documentation follows the expected format and clearly describes the `run_benchmark` tool and its parameters.
docs/toolbox.generated/Container_Bench.exec_container_benchmark.rst (1)
1-39: LGTM! Auto-generated documentation is consistent. The documentation is well-structured and provides clear parameter descriptions for the `exec_container_benchmark` tool.
docs/toolbox.generated/index.rst (1)
56-80: LGTM! Documentation index properly expanded. The container_bench section has been appropriately updated to include all the new benchmark tools. The formatting is consistent with existing entries.
projects/container_bench/toolbox/container_bench_exec_container_benchmark/tasks/main.yml (2)
5-14: LGTM! Variable renaming is consistent. The variable names have been properly updated from `container_bench_exec_benchmark_*` to `container_bench_exec_container_benchmark_*`, and the role name matches the file path structure.
18-20: Fedora 42 image is confirmed available. The registry successfully lists Fedora tags 40, 41, 42, 43, and 44, confirming that `registry.fedoraproject.org/fedora:42` is both accessible and a valid reference. The code change is correct and requires no modification.
projects/container_bench/toolbox/container_bench_pull_image_benchmark/defaults/main/config.yml (1)
1-22: LGTM! Configuration properly updated. The configuration file has been correctly updated with the new `pull_image_benchmark` naming. All variable names follow the consistent pattern `container_bench_pull_image_benchmark_*`.
projects/container_bench/toolbox/container_bench.py (6)
12-29: LGTM! Method renamed and properly structured. The `create_container_benchmark` method (renamed from `helloworld_benchmark`) follows the correct pattern with appropriate decorators and consistent parameter signature.
31-48: LGTM! Method renamed and properly structured. The `start_container_benchmark` method (renamed from `exec_benchmark`) is correctly implemented with the standard benchmark pattern.
50-200: LGTM! New benchmark methods are well-structured. All the new container and image management benchmarks (`remove_container_benchmark`, `pull_image_benchmark`, `list_images_benchmark`, `remove_image_benchmark`, `commit_container_benchmark`, `load_image_benchmark`, `save_image_benchmark`, `parallel_remove_image_benchmark`) follow the consistent pattern and have appropriate role mappings.
202-295: LGTM! Sysbench benchmarks are properly implemented. All sysbench benchmark methods (`sysbench_cpu_benchmark`, `sysbench_memory_read_benchmark`, `sysbench_memory_write_benchmark`, `sysbench_fileio_container_benchmark`, `sysbench_fileio_mount_benchmark`) are correctly structured with consistent naming and parameters.
297-352: LGTM! Network benchmarks are correctly implemented. The iperf3 network benchmarks (`iperf_net_bridge_benchmark`, `iperf_net_host_benchmark`, `iperf_host_to_container_benchmark`) are properly structured. The previous docstring issues noted in past reviews have been successfully addressed.
354-409: LGTM! Additional benchmarks are well-structured. The `artifact_add_benchmark`, `run_container_benchmark`, and `exec_container_benchmark` methods follow the established pattern and have correct role mappings.
.../container_bench/toolbox/container_bench_commit_container_benchmark/defaults/main/config.yml (Outdated)
projects/container_bench/toolbox/container_bench_load_image_benchmark/tasks/main.yml
projects/container_bench/toolbox/container_bench_run_container_benchmark/tasks/main.yml (Outdated)
    supported_container_engines:
    - podman
    - docker
at first glance, it looks like this could be "optimized" to take less space in the config file, like specifying the unsupported engines instead, or treating a missing supported_container_engines as "all engines supported"
(just a thought)
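A hedged sketch of that second idea (`benchmark` and `engine` are illustrative loop variables, and the full engine list is an assumption):

```yaml
# Sketch: a missing supported_container_engines key means
# "all engines supported", so the common case needs no config at all.
- name: Decide whether this engine runs the benchmark
  set_fact:
    engine_supported: "{{ engine in (benchmark.supported_container_engines | default(['podman', 'docker'])) }}"
```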
    assert:
      that:
        - container_bench_create_container_benchmark_exec_time_path is defined
        - container_bench_create_container_benchmark_binary_path is defined
        - container_bench_create_container_benchmark_rootfull is defined
        - container_bench_create_container_benchmark_additional_args is defined
I'm not sure I understand why you assert that, when this already ensures that it will be there:
    def create_container_benchmark(
        self,
        exec_time_path,
        binary_path,
        rootfull=False,
        additional_args="",
    ):
is it to run without the Python wrapper?
The problem with the linter is that it only checks if variables are used in main.yml. It ignores the part where I map those variables to shorter versions for use in other parts. I guess that's the trade-off for being DRY.
hum, I guess I see. Can you show me where you map them?
nvm, I found it
    - name: Map long variable names to short aliases for readability
      set_fact:
        binary_path: "{{ vars[benchmark_role_name + '_binary_path'] }}"
        additional_args: "{{ vars[benchmark_role_name + '_additional_args'] }}"
        exec_time_path: "{{ vars[benchmark_role_name + '_exec_time_path'] }}"
        rootfull: "{{ vars[benchmark_role_name + '_rootfull'] }}"
got it.
One thing you may do to avoid that is to pass a dict from Python to ansible
something like
run_toolbox(exec_props=dict(path=..., args=..., path=...))
then in Ansible
    - set_fact:
        exec_props: "{{ my_role_exec_props }}"  # this one needs to change for each role
    - set_fact:
        path: "{{ exec_props['path'] }}"
        args: "{{ exec_props['args'] }}"
(not sure which version is cleaner though ^^)
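For reference, a hedged end-to-end sketch of that dict-passing idea (the role name, key names, and Python call are illustrative, not the PR's final API):

```yaml
# Illustrative only: the caller passes a single exec_props dict,
# e.g. from Python:
#   run_toolbox(exec_props=dict(binary_path="/usr/bin/podman", additional_args=""))
- name: Bind this role's exec_props dict (name differs per role)
  set_fact:
    exec_props: "{{ container_bench_my_benchmark_exec_props }}"

- name: Unpack exec_props into short aliases
  set_fact:
    binary_path: "{{ exec_props['binary_path'] }}"
    additional_args: "{{ exec_props['additional_args'] | default('') }}"
```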
Signed-off-by: Jan Rodák <[email protected]>
Signed-off-by: Jan Rodák <[email protected]>
…port Signed-off-by: Jan Rodák <[email protected]>
Signed-off-by: Jan Rodák <[email protected]>
Signed-off-by: Jan Rodák <[email protected]>
…dict parameter also refactor supported_container_engine to simplify config. Signed-off-by: Jan Rodák <[email protected]>
5ed893a to 312dd8e (Compare)
thanks @Honny1, good work. Merging 👍🏻