test(workbenches): add a test to validate ImageStreams #1126
dbasunag merged 2 commits into opendatahub-io:main from
Conversation
Force-pushed c5b2157 to b3098a7
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough
Adds a new pytest module that implements health validation for OpenShift ImageStream resources: per-tag checks for resolved image references and ImportSuccess conditions, comparison of …
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
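The per-tag health check described in the walkthrough can be sketched roughly as follows. This is a hedged illustration, not the PR's actual code: the helper name `tag_errors` and the plain-dict resource shape are assumptions, though the `.status.tags[].items` and `.status.tags[].conditions` fields mirror the OpenShift ImageStream status API.

```python
def tag_errors(imagestream: dict) -> list[str]:
    """Collect per-tag health problems for one ImageStream.

    For every tag under .status.tags, checks that the tag resolved to at
    least one image reference and that no ImportSuccess condition reports
    a failed import.
    """
    errors = []
    name = imagestream["metadata"]["name"]
    for tag in imagestream.get("status", {}).get("tags", []):
        tag_name = tag["tag"]
        # A healthy tag carries at least one resolved image in .items.
        if not tag.get("items"):
            errors.append(f"{name}:{tag_name} has no resolved image reference")
        # ImportSuccess=False on a tag condition means the image import failed.
        for condition in tag.get("conditions", []) or []:
            if condition.get("type") == "ImportSuccess" and condition.get("status") == "False":
                errors.append(f"{name}:{tag_name} import failed: {condition.get('message')}")
    return errors
```

A test built this way can aggregate errors across all ImageStreams and fail once with the full list, which is the pattern the review comments below discuss.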
🧹 Nitpick comments (2)
tests/workbenches/test_imagestream_health.py (2)
147-148: Consider using pytest.fail() instead of a raw AssertionError.

pytest.fail() is the idiomatic way to fail a test with a message in pytest; it provides better integration with pytest's reporting and avoids the AssertionError appearing in tracebacks as if it were an assertion statement.

Suggested change:

```diff
 if errors:
-    raise AssertionError("\n".join(errors))
+    pytest.fail("\n".join(errors))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/workbenches/test_imagestream_health.py` around lines 147 - 148, Replace the explicit raise AssertionError("\n".join(errors)) with pytest.fail("\n".join(errors)) to use pytest's idiomatic test-failure reporting; ensure the module imports pytest (add import pytest if not already present) and keep the error message exactly as "\n".join(errors) so existing messages remain unchanged.
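As a minimal illustration of this suggestion (the helper name `report_errors` is hypothetical, not from the PR), the aggregated errors can be routed through pytest's own failure mechanism:

```python
import pytest

def report_errors(errors: list[str]) -> None:
    """Fail the current test with all collected errors, or do nothing."""
    if errors:
        # pytest.fail raises pytest's Failed outcome with the joined message,
        # so the report shows the error list without an AssertionError traceback.
        pytest.fail("\n".join(errors))
```

Calling `report_errors([])` is a no-op, while a non-empty list fails the test with every message on its own line.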
151-157: Hardcoded expected ImageStream counts are brittle.

The counts 11 and 7 will break whenever a notebook or runtime image is added/removed upstream. Consider either:

- Making these counts configurable via py_config so different product versions can override them, or
- Removing the exact-count assertion and relying solely on the per-tag health checks (or using a minimum bound like >= 1).

If exact counts are intentional as a regression guard, add a comment documenting why and where the canonical list lives so future maintainers know what to update.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/workbenches/test_imagestream_health.py` around lines 151 - 157, The test hardcodes exact ImageStream counts (the pytest param expected_imagestream_count values 11 and 7) which is brittle; update the paramization in tests/workbenches/test_imagestream_health.py so the expected counts come from a configurable source (e.g., a py_config value like py_config.expected_imagestream_counts[label_selector]) with a sensible default fallback (or change the assertion that uses expected_imagestream_count to assert a minimum like >= 1 instead of exact equality); alternatively, if exact counts must be enforced, add a clear comment next to the label_selector/expected_imagestream_count entries pointing to the canonical list location and why they are fixed.
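A hedged sketch of the configurable-count idea, assuming py_config behaves like a dict: the key name "expected_imagestream_counts" and the helper are illustrative, not the PR's code.

```python
# Fallback defaults mirror today's hard-coded values; a config entry
# (key name is an assumption) can override them per product version.
DEFAULT_COUNTS = {
    "opendatahub.io/notebook-image=true": 11,
    "opendatahub.io/runtime-image=true": 7,
}

def expected_count(py_config: dict, label_selector: str) -> int:
    """Return the configured expected count, falling back to the default."""
    overrides = py_config.get("expected_imagestream_counts", {})
    return overrides.get(label_selector, DEFAULT_COUNTS[label_selector])
```

With this shape, an environment-specific config file can adjust the counts without touching the test module.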
Force-pushed b3098a7 to 089fa74, then 089fa74 to d685a79
I implemented one proposal from coderabbitai; the other one I don't want to implement now. Also rebased against the latest main.
Actionable comments posted: 3
🧹 Nitpick comments (1)
tests/workbenches/test_imagestream_health.py (1)
151-156: Hard-coded expected counts will cause false failures across RHOAI/ODH versions and environments.

11 and 7 are specific to one particular release. A deployment with a different version, a minimal install, or an upgrade that adds/removes an image will fail this smoke test for the wrong reason. The namespace is already sourced from py_config; these counts should follow the same pattern (e.g., py_config["expected_notebook_imagestream_count"]).

♻️ Suggested approach:

```diff
-@pytest.mark.parametrize(
-    "label_selector, expected_imagestream_count",
-    [
-        pytest.param("opendatahub.io/notebook-image=true", 11, id="notebook_imagestreams"),
-        pytest.param("opendatahub.io/runtime-image=true", 7, id="runtime_imagestreams"),
-    ],
-)
+@pytest.mark.parametrize(
+    "label_selector, expected_imagestream_count",
+    [
+        pytest.param(
+            "opendatahub.io/notebook-image=true",
+            py_config.get("expected_notebook_imagestream_count", 11),
+            id="notebook_imagestreams",
+        ),
+        pytest.param(
+            "opendatahub.io/runtime-image=true",
+            py_config.get("expected_runtime_imagestream_count", 7),
+            id="runtime_imagestreams",
+        ),
+    ],
+)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/workbenches/test_imagestream_health.py` around lines 151 - 156, The test uses hard-coded expected counts (11 and 7) in the pytest.mark.parametrize tuple causing brittle failures; update the parameters to read expected counts from the test config (use py_config keys such as py_config["expected_notebook_imagestream_count"] and py_config["expected_runtime_imagestream_count"]) instead of literals, and ensure the parametrized ids remain descriptive (e.g., "notebook_imagestreams", "runtime_imagestreams"); modify the pytest.mark.parametrize for ("label_selector, expected_imagestream_count") to reference those py_config entries so the test adapts to different RHOAI/ODH versions/environments.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/workbenches/test_imagestream_health.py`:
- Around line 163-167: The test docstring in the ImageStream health test uses a
Given-When-Then narrative but lacks the Google-style Args: section required by
the project guideline; update the test's docstring (the triple-quoted string for
the ImageStream health test in tests/workbenches/test_imagestream_health.py) to
follow Google-format by adding an "Args:" section (e.g., "Args: None" if the
test takes no parameters) and, if applicable, document any fixtures/parameters
used by the test so the docstring conforms to the guideline and linter checks.
- Around line 12-16: Replace the locally-defined ImageStream subclass with an
import from ocp_resources (add "from ocp_resources.image_stream import
ImageStream" and remove the local class declaration) so the test uses the
upstream implementation; replace the hard-coded expected counts (the literals 11
and 7 used in the assertions that validate notebook and runtime image counts)
with values obtained from configuration or discovered from the
deployment/fixture instead of fixed constants; and update the test
function/class docstring to include Google-style "Args:" and "Returns:" sections
for full compliance.
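The suggested import swap could look like the sketch below. The `ocp_resources.image_stream.ImageStream` path comes from openshift-python-wrapper as the comment states; the keyword names passed to `.get()` reflect my understanding of that wrapper's API and should be verified against the installed version, and the guard around the import is only there so the sketch degrades gracefully.

```python
try:
    # Upstream class from openshift-python-wrapper, as the review suggests.
    from ocp_resources.image_stream import ImageStream
except ImportError:  # hedge: the wrapper may not be installed in every env
    ImageStream = None

def list_imagestreams(client, namespace: str, label_selector: str):
    """Return ImageStreams matching a label selector via the upstream class."""
    if ImageStream is None:
        raise RuntimeError("openshift-python-wrapper is not installed")
    # Keyword names follow the wrapper's Resource.get API; verify them
    # against the installed openshift-python-wrapper version.
    return list(
        ImageStream.get(
            dyn_client=client,
            namespace=namespace,
            label_selector=label_selector,
        )
    )
```

Using the upstream class avoids drift between a locally re-declared resource and the one the rest of the test suite uses.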
- Line 18: Add the missing "workbenches" component marker to the pytest.ini
markers list and apply it to the test by updating the pytestmark in
tests/workbenches/test_imagestream_health.py (the pytestmark variable) so the
test has both the smoke and workbenches markers; modify pytest.ini to include a
"workbenches" entry under markers and change pytestmark in
test_imagestream_health.py to include pytest.mark.workbenches alongside
pytest.mark.smoke.
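The marker wiring the comment asks for can be sketched as below; the marker descriptions in the ini excerpt are assumptions, only the marker names come from the review.

```python
import pytest

# pytest.ini additionally needs the component marker registered, e.g.:
#   [pytest]
#   markers =
#       smoke: fast sanity checks
#       workbenches: workbenches component tests
# Module-level markers: the existing smoke marker plus the component marker.
pytestmark = [pytest.mark.smoke, pytest.mark.workbenches]
```

With both the registration and the module-level `pytestmark`, `pytest -m workbenches` selects the test without triggering unknown-marker warnings.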
andyatmiami left a comment:
Nothing here particularly blocking in terms of review. I see @harshad16 already approved, which I'm comfortable treating as a means to merge this. I had commented earlier on this but forgot to submit.
Thank you for your reviews, guys! Andy, I'll look into the rest of your comments tomorrow. I may be implementing some of your proposals! 🙂

Rebased against the latest main branch.
Force-pushed 59aae6f to 59c0a15
dbasunag left a comment:
My comments are only for your consideration and are non-blocking.
Force-pushed 59c0a15 to 065e588
Thank you, @dbasunag. I just pushed a change to make ruff happy again.

Status of building tag latest: success.
Description
Introduces a new test to check the state of ImageStream resources related to workbench images.
Note: after we merge this, we shall backport to 3.3 at least.
How Has This Been Tested?
Example of a problematic ImageStream:
Example success call:
Merge criteria:
Summary by CodeRabbit