Reduce smoke for model_registry component #1122
Conversation

/hold

/build-push-pr-image

Status of building tag pr-1122: success.
📝 Walkthrough

Bulk metadata-only pytest marker updates across many model_registry tests: smoke/sanity markers were removed, replaced, or moved (into `pytest.param` entries), and new tier markers (`tier1`/`tier2`/`tier3`) and some new tests/classes were added. No test logic or assertions were changed.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ❌ 2 failed (warnings) | ✅ 1 passed
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tests/model_registry/model_catalog/metadata/test_sources_endpoint.py (1)
8-21: ⚠️ Potential issue | 🟡 Minor

Add the `model_registry` component marker alongside the new `sanity` tier marker.

The test is now part of the `sanity` suite but carries no component marker. Per coding guidelines, relevant markers from `pytest.ini` (including component markers like `model_registry`) should be applied to tests in the smoke/sanity tiers. The `model_registry` component marker belongs either on the individual test or in the module-level `pytestmark`.

🏷️ Proposed fix: add the component marker to the module-level `pytestmark`

```diff
-pytestmark = [pytest.mark.usefixtures("updated_dsc_component_state_scope_session", "model_registry_namespace")]
+pytestmark = [
+    pytest.mark.model_registry,
+    pytest.mark.usefixtures("updated_dsc_component_state_scope_session", "model_registry_namespace"),
+]
```

Based on learnings: "pytest markers for tier or component are not required for tests that are not part of smoke or sanity test suites" (the converse applies here) and the coding guideline: "Apply relevant markers from `pytest.ini`: tier (`smoke`, `sanity`, `tier1`, `tier2`), component (`model_serving`, `model_registry`, `llama_stack`)…"
pytest.ini: tier (smoke,sanity,tier1,tier2), component (model_serving,model_registry,llama_stack)…"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_registry/model_catalog/metadata/test_sources_endpoint.py` around lines 8-21: The test is marked with the sanity tier but lacks the required component marker; update the module-level `pytestmark` list (the `pytestmark` variable) to include the `model_registry` marker alongside the existing fixtures, e.g., add `pytest.mark.model_registry` to the list so the TestSourcesEndpoint class (and its sanity parametrized test) is properly tagged for the model_registry component.

tests/model_registry/image_validation/test_verify_rhoai_images.py (1)
44-50: ⚠️ Potential issue | 🟡 Minor

Both test methods are missing mandatory docstrings.

The coding guidelines require every test to have a Google-format docstring in Given-When-Then form. Neither `test_verify_pod_images` nor `test_verify_model_registry_pod_images` has one. Example for `test_verify_pod_images`:

```diff
 def test_verify_pod_images(
     self: Self,
     admin_client: DynamicClient,
     resource_pods: list[Pod],
     related_images_refs: Set[str],
 ):
+    """Verify that pod images meet registry and digest requirements.
+
+    Given a set of running pods selected by label,
+    When their container image references are inspected,
+    Then each image must be hosted in registry.redhat.io, use a sha256 digest,
+    and be listed in the CSV's relatedImages section.
+    """
     validate_images(pods_to_validate=resource_pods, related_images_refs=related_images_refs)
```

As per coding guidelines: "Every test MUST have a docstring explaining what it tests" and "Use Given-When-Then format in test docstrings for behavioral clarity".
Also applies to: 78-84
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_registry/image_validation/test_verify_rhoai_images.py` around lines 44-50: Add Google-style docstrings in Given-When-Then form to the test functions test_verify_pod_images and test_verify_model_registry_pod_images: for each test, insert a triple-quoted docstring immediately below the def line that states the Given preconditions, the When action (what the test invokes, e.g., calling validate_images or the model-registry validation helper), and the Then expected outcome (what validate_images should assert or validate). Ensure the docstrings are concise, follow the Google format, and cover the inputs (admin_client, resource_pods, related_images_refs) and the expected validation results.

tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py (1)
77-96: ⚠️ Potential issue | 🟡 Minor

`smoke` on the inner param leaks into all four outer DB-backend combinations, including `mariadb`

`marks=pytest.mark.smoke` at line 83 is placed on the inner parametrize param. Pytest evaluates marks per test instance independently, so the `smoke` marker gets attached to every combination of outer params × this inner param:

| Outer param | Inner param | Effective markers |
| --- | --- | --- |
| `{}` (default) | `test_validate_registered_model` | `sanity`, `smoke` |
| `postgres` | `test_validate_registered_model` | `sanity`, `smoke` |
| `db_name=default` | `test_validate_registered_model` | `tier1`, `smoke` |
| `mariadb` | `test_validate_registered_model` | `smoke` only ← unintended |

Running `pytest -m smoke` will now include the `mariadb` backend (previously unmarked, deliberately excluded from tiered runs) and the `tier1` backend. If the intent is to keep smoke coverage to just the default-DB and postgres backends, the `smoke` marker should be moved to the outer params that carry `sanity`, or a dedicated outer param with `smoke` should be introduced.

💡 Alternative: keep smoke scoped to the default-DB outer param only

```diff
-        pytest.param(
-            {},
-            {},
-            MODEL_REGISTER_DATA,
-            marks=(pytest.mark.sanity),
-        ),
+        pytest.param(
+            {},
+            {},
+            MODEL_REGISTER_DATA,
+            marks=(pytest.mark.smoke, pytest.mark.sanity),
+        ),
```

Then remove `marks=pytest.mark.smoke` from the inner param (line 83) so the `smoke` label is driven entirely by the outer param, not broadcast across all backends.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py` around lines 77 - 96, The smoke marker on the inner `@pytest.mark.parametrize` (the param that yields MODEL_REGISTER / "register_model") is leaking into all DB-backend combinations; remove marks=pytest.mark.smoke from that inner param and instead apply pytest.mark.smoke on the appropriate outer parametrization entries that represent the intended backends (or add a separate outer param entry scoped to smoke), e.g. move the marks=pytest.mark.smoke decoration to the outer param list that contains the default-DB/postgres entries so only those combinations get the smoke marker; update the parametrization around MODEL_REGISTER / MODEL_VERSION / MODEL_ARTIFACT accordingly.
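The broadcasting behavior described above can be reproduced in isolation. A minimal sketch, assuming pytest is installed; the `backend`/`payload` names are illustrative stand-ins, not the real test's parameters:

```python
# Demonstrate that a mark on an inner pytest.param is applied per test
# instance, so it reaches every combination with the outer parametrize.
import pathlib
import subprocess
import sys
import tempfile
import textwrap

TEST_SRC = textwrap.dedent("""
    import pytest

    @pytest.mark.parametrize("backend", ["default", "postgres", "mariadb"])
    @pytest.mark.parametrize(
        "payload",
        [pytest.param("register_model", marks=pytest.mark.smoke)],
    )
    def test_demo(backend, payload):
        assert payload == "register_model"
""")

with tempfile.TemporaryDirectory() as tmp:
    pathlib.Path(tmp, "test_demo.py").write_text(TEST_SRC)
    # Collect only smoke-marked tests: all three backends are selected,
    # including mariadb, because the inner param's mark broadcasts.
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-m", "smoke", "--collect-only", "-q", tmp],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
```

All three backend combinations appear in the collected set, which is exactly the leak the comment flags.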
🧹 Nitpick comments (6)
tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py (2)
58-64: Missing `@pytest.mark.model_registry` component marker on a `sanity` test.

Per coding guidelines, `sanity` tests should carry the relevant component marker. Both tests in this class are now `sanity` but neither declares `@pytest.mark.model_registry`.

```diff
+@pytest.mark.model_registry
 @pytest.mark.sanity
 @pytest.mark.usefixtures("sa_namespace", "service_account", "mr_access_role", "mr_access_role_binding")
 def test_service_account_access_granted(
```

As per coding guidelines: "Apply relevant markers from `pytest.ini`: tier (`smoke`, `sanity`, `tier1`, `tier2`), component (`model_serving`, `model_registry`, `llama_stack`), infrastructure (`gpu`, `parallel`, `slow`)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py` around lines 58 - 64, Add the missing component marker to the sanity tests by annotating the test function test_service_account_access_granted (and the sibling test(s) in the same class) with `@pytest.mark.model_registry` in addition to the existing `@pytest.mark.sanity` so they include the required component marker; update the decorators for the test functions (e.g., above test_service_account_access_granted and its counterpart) to include `@pytest.mark.model_registry` alongside `@pytest.mark.sanity` and keep existing usefixtures unchanged.
59-64: `service_account` is redundant in `usefixtures`.

`service_account` is listed in both `@pytest.mark.usefixtures` (line 59) and the function signature (line 62). Since its return value is consumed in the test body (line 73), the function-parameter form is sufficient; the `usefixtures` entry is redundant and should be removed.

♻️ Proposed fix

```diff
-@pytest.mark.usefixtures("sa_namespace", "service_account", "mr_access_role", "mr_access_role_binding")
+@pytest.mark.usefixtures("sa_namespace", "mr_access_role", "mr_access_role_binding")
 def test_service_account_access_granted(
     self: Self,
     service_account: ServiceAccount,
```

Based on learnings: "prefer declaring dependencies via explicit fixture parameters in test function signatures rather than always using `pytest.mark.usefixtures`… reserve `usefixtures` for cases where you need fixture side effects without needing to access the fixture value in the test."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py` around lines 59-64: The decorator currently lists the service_account fixture redundantly; remove "service_account" from the `@pytest.mark.usefixtures` call and keep it as an explicit parameter in the test signature of test_service_account_access_granted so the test consumes the fixture value (leave "sa_namespace", "mr_access_role", and "mr_access_role_binding" in usefixtures).

tests/model_registry/model_catalog/metadata/test_sources_endpoint.py (2)
22-27: Add `-> None` return type annotation (mypy strict).

The test function lacks a return type annotation, which violates the mypy strict requirement enforced across this repo.

♻️ Proposed fix

```diff
 def test_sources_endpoint_returns_all_sources_regardless_of_enabled_field(
     self,
     sparse_override_catalog_source: dict,
     model_catalog_rest_url: list[str],
     model_registry_rest_headers: dict[str, str],
-):
+) -> None:
```

As per coding guidelines: "Add type annotations to test code and fixtures (mypy strict enforced)."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_registry/model_catalog/metadata/test_sources_endpoint.py` around lines 22 - 27, The test function test_sources_endpoint_returns_all_sources_regardless_of_enabled_field is missing a return type annotation; update its signature to include the mypy-required return type annotation by changing the def line to add "-> None" after the parameter list (keep all existing parameters and formatting intact) so the function signature becomes def test_sources_endpoint_returns_all_sources_regardless_of_enabled_field(... ) -> None:.
28-30: Docstring should follow Given-When-Then + Google format.

The current docstring is a bare Jira reference and satisfies neither the Google-format requirement nor the Given-When-Then behavioral clarity guideline.

📝 Proposed docstring

```diff
-    """
-    RHOAIENG-41633: Test that sources endpoint returns ALL sources regardless of enabled field value.
-    """
+    """Test that sources endpoint returns ALL sources regardless of enabled field value.
+
+    RHOAIENG-41633
+
+    Given:
+        A catalog source with ``enabled`` set to False via sparse_override_catalog_source.
+    When:
+        A GET request is made to the sources endpoint.
+    Then:
+        All sources (both enabled and disabled) are returned, with correct status values.
+    """
```

As per coding guidelines: "Use Given-When-Then format in test docstrings for behavioral clarity" and "Write Google-format docstrings for tests and fixtures."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_registry/model_catalog/metadata/test_sources_endpoint.py` around lines 28-30: Replace the Jira-only docstring for the test that verifies the sources endpoint returns all sources (the docstring at the top of test_sources_endpoint.py) with a Google-style Given-When-Then docstring: state Given (there are sources with enabled true and false), When (the client calls the sources endpoint), Then (the response contains all sources regardless of the enabled field), and include a short summary line and Args/Returns sections if applicable to match the Google docstring convention.

tests/model_registry/model_catalog/rbac/test_catalog_rbac.py (1)
72-78: Both params carry the same `sanity` mark; simplify to a method-level decorator.

Using `pytest.param(..., marks=pytest.mark.sanity)` on every entry when all entries share the same mark adds noise without benefit. A single method-level decorator plus a plain list is cleaner.

♻️ Proposed simplification

```diff
+@pytest.mark.sanity
 @pytest.mark.parametrize(
     "configmap_name",
     [
-        pytest.param(DEFAULT_MODEL_CATALOG_CM, marks=pytest.mark.sanity),
-        pytest.param(DEFAULT_CUSTOM_MODEL_CATALOG, marks=pytest.mark.sanity),
+        DEFAULT_MODEL_CATALOG_CM,
+        DEFAULT_CUSTOM_MODEL_CATALOG,
     ],
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_registry/model_catalog/rbac/test_catalog_rbac.py` around lines 72-78: The pytest parametrization currently applies pytest.mark.sanity to each entry (DEFAULT_MODEL_CATALOG_CM and DEFAULT_CUSTOM_MODEL_CATALOG) which is redundant; remove the per-item marks and instead add a single `@pytest.mark.sanity` decorator on the test function that uses pytest.mark.parametrize("configmap_name", [DEFAULT_MODEL_CATALOG_CM, DEFAULT_CUSTOM_MODEL_CATALOG]); update the param list to plain values and add the method-level `@pytest.mark.sanity` above the test to keep behavior the same while simplifying the code.

tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py (1)
98-108: Missing docstrings on four modified test methods

The following test methods that received marker changes in this PR are missing docstrings, violating the project guideline requiring a Given-When-Then docstring for every test:

- `test_validate_model_registry_resource` (line 98)
- `test_default_postgres_db_pod_log` (line 160)
- `test_model_registry_validate_api_version` (line 180)
- `test_model_registry_validate_kuberbacproxy_enabled` (line 196)

📝 Example docstrings in GWT format

```diff
 def test_validate_model_registry_resource(
     self: Self,
     registered_model_rest_api: dict[str, Any],
     expected_params: dict[str, str],
     data_key: str,
 ):
+    """
+    Given a registered model REST API response,
+    When validating resource attributes for a given data key,
+    Then the actual resource data matches the expected parameters.
+    """
```

```diff
 def test_default_postgres_db_pod_log(
     self: Self,
     ...
 ):
+    """
+    Given the default postgres deployment is running,
+    When reading the postgres container log,
+    Then it contains the expected connection-accepted string.
+    """
```

```diff
 def test_model_registry_validate_api_version(
     self: Self,
     ...
 ):
+    """
+    Given a deployed ModelRegistry instance,
+    When reading its apiVersion field,
+    Then it matches the expected group/version string.
+    """
```

```diff
 def test_model_registry_validate_kuberbacproxy_enabled(
     self: Self,
     ...
 ):
+    """
+    Given a deployed ModelRegistry instance,
+    When inspecting the spec,
+    Then istio is not enabled and kubeRBACProxy serviceRoute is 'enabled'.
+    """
```

As per coding guidelines: "Every test MUST have a docstring explaining what it tests" and "Use Given-When-Then format in test docstrings for behavioral clarity".
Also applies to: 160-177, 180-193, 196-203
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py` around lines 98 - 108, Four test functions (test_validate_model_registry_resource, test_default_postgres_db_pod_log, test_model_registry_validate_api_version, test_model_registry_validate_kuberbacproxy_enabled) are missing the required Given-When-Then docstrings; add a concise GWT docstring at the start of each test function describing the initial state (Given), the action being performed (When), and the expected outcome/assertion (Then) so they comply with project guidelines and make test intent explicit; update the three listed test functions by inserting a one- or two-sentence GWT docstring in each function body immediately after the def line.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@tests/model_registry/image_validation/test_verify_rhoai_images.py`:
- Around line 44-50: Add Google-style docstrings in Given-When-Then form to the
test functions test_verify_pod_images and test_verify_model_registry_pod_images:
for each test, insert a triple-quoted docstring immediately below the def line
that states the Given preconditions, the When action (what the test invokes,
e.g., calling validate_images or the model-registry validation helper), and the
Then expected outcome (what validate_images should assert or validate). Ensure
the docstrings are concise, follow the Google format, and cover the inputs
(admin_client, resource_pods, related_images_refs) and the expected validation
results.
In `@tests/model_registry/model_catalog/metadata/test_sources_endpoint.py`:
- Around line 8-21: The test is marked with the sanity tier but lacks the
required component marker; update the module-level pytestmark list (the
pytestmark variable) to include the model_registry marker alongside the existing
fixtures, e.g., add pytest.mark.model_registry to the list so the
TestSourcesEndpoint class (and its sanity parametrized test) is properly tagged
for the model_registry component.
In
`@tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py`:
- Around line 77-96: The smoke marker on the inner `@pytest.mark.parametrize` (the
param that yields MODEL_REGISTER / "register_model") is leaking into all
DB-backend combinations; remove marks=pytest.mark.smoke from that inner param
and instead apply pytest.mark.smoke on the appropriate outer parametrization
entries that represent the intended backends (or add a separate outer param
entry scoped to smoke), e.g. move the marks=pytest.mark.smoke decoration to the
outer param list that contains the default-DB/postgres entries so only those
combinations get the smoke marker; update the parametrization around
MODEL_REGISTER / MODEL_VERSION / MODEL_ARTIFACT accordingly.
---
Nitpick comments:
In `@tests/model_registry/model_catalog/metadata/test_sources_endpoint.py`:
- Around line 22-27: The test function
test_sources_endpoint_returns_all_sources_regardless_of_enabled_field is missing
a return type annotation; update its signature to include the mypy-required
return type annotation by changing the def line to add "-> None" after the
parameter list (keep all existing parameters and formatting intact) so the
function signature becomes def
test_sources_endpoint_returns_all_sources_regardless_of_enabled_field(... ) ->
None:.
- Around line 28-30: Replace the Jira-only docstring for the test that verifies
the sources endpoint returns all sources (the docstring at the top of
test_sources_endpoint.py) with a Google-style Given-When-Then docstring: state
Given (there are sources with enabled true and false), When (the client calls
the sources endpoint), Then (the response contains all sources regardless of the
enabled field), and include a short summary line and Args/Returns sections if
applicable to match the Google docstring convention.
In `@tests/model_registry/model_catalog/rbac/test_catalog_rbac.py`:
- Around line 72-78: The pytest parametrization currently applies
pytest.mark.sanity to each entry (DEFAULT_MODEL_CATALOG_CM and
DEFAULT_CUSTOM_MODEL_CATALOG) which is redundant; remove the per-item marks and
instead add a single `@pytest.mark.sanity` decorator on the test function that
uses pytest.mark.parametrize("configmap_name", [DEFAULT_MODEL_CATALOG_CM,
DEFAULT_CUSTOM_MODEL_CATALOG]); update the param list to plain values and add
the method-level `@pytest.mark.sanity` above the test to keep behavior the same
while simplifying the code.
In `@tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py`:
- Around line 58-64: Add the missing component marker to the sanity tests by
annotating the test function test_service_account_access_granted (and the
sibling test(s) in the same class) with `@pytest.mark.model_registry` in addition
to the existing `@pytest.mark.sanity` so they include the required component
marker; update the decorators for the test functions (e.g., above
test_service_account_access_granted and its counterpart) to include
`@pytest.mark.model_registry` alongside `@pytest.mark.sanity` and keep existing
usefixtures unchanged.
- Around line 59-64: The decorator currently lists the service_account fixture
redundantly; remove "service_account" from the `@pytest.mark.usefixtures` call and
keep it as an explicit parameter in the test signature of
test_service_account_access_granted so the test consumes the fixture value
(leave "sa_namespace", "mr_access_role", and "mr_access_role_binding" in
usefixtures).
In
`@tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py`:
- Around line 98-108: Four test functions
(test_validate_model_registry_resource, test_default_postgres_db_pod_log,
test_model_registry_validate_api_version,
test_model_registry_validate_kuberbacproxy_enabled) are missing the required
Given-When-Then docstrings; add a concise GWT docstring at the start of each
test function describing the initial state (Given), the action being performed
(When), and the expected outcome/assertion (Then) so they comply with project
guidelines and make test intent explicit; update the three listed test functions
by inserting a one- or two-sentence GWT docstring in each function body
immediately after the def line.
🧹 Nitpick comments (1)
tests/model_registry/model_catalog/rbac/test_catalog_rbac.py (1)
71-77: Both params carry the same mark; consider hoisting to the test level.

Since both `DEFAULT_MODEL_CATALOG_CM` and `DEFAULT_CUSTOM_MODEL_CATALOG` receive `pytest.mark.sanity`, the per-`pytest.param` marks are redundant. A single test-level decorator is cleaner and makes the intent immediately obvious:

♻️ Proposed simplification

```diff
+@pytest.mark.sanity
 @pytest.mark.parametrize(
     "configmap_name",
-    [
-        pytest.param(DEFAULT_MODEL_CATALOG_CM, marks=pytest.mark.sanity),
-        pytest.param(DEFAULT_CUSTOM_MODEL_CATALOG, marks=pytest.mark.sanity),
-    ],
+    [DEFAULT_MODEL_CATALOG_CM, DEFAULT_CUSTOM_MODEL_CATALOG],
 )
```

If the two cases are expected to diverge in the future (e.g., one becoming `tier1`), keeping `pytest.param` entries is fine, but a brief comment explaining the deliberate duplication would help.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_registry/model_catalog/rbac/test_catalog_rbac.py` around lines 71 - 77, Both pytest.param entries in the pytest.mark.parametrize call apply the same pytest.mark.sanity mark; remove the redundant per-parameter marks and hoist the mark to the test itself. Edit the pytest.mark.parametrize that enumerates DEFAULT_MODEL_CATALOG_CM and DEFAULT_CUSTOM_MODEL_CATALOG to use plain pytest.param values (or a simple list) and add a single `@pytest.mark.sanity` decorator above the test function that uses this parameterization (or apply the mark at module-level if appropriate) so the two values remain covered without duplicated marks; if you intend them to diverge later, add a short comment near the DEFAULT_MODEL_CATALOG_CM / DEFAULT_CUSTOM_MODEL_CATALOG usage explaining the deliberate duplication.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/model_registry/model_catalog/rbac/test_catalog_rbac.py`:
- Around line 71-77: Both pytest.param entries in the pytest.mark.parametrize
call apply the same pytest.mark.sanity mark; remove the redundant per-parameter
marks and hoist the mark to the test itself. Edit the pytest.mark.parametrize
that enumerates DEFAULT_MODEL_CATALOG_CM and DEFAULT_CUSTOM_MODEL_CATALOG to use
plain pytest.param values (or a simple list) and add a single
`@pytest.mark.sanity` decorator above the test function that uses this
parameterization (or apply the mark at module-level if appropriate) so the two
values remain covered without duplicated marks; if you intend them to diverge
later, add a short comment near the DEFAULT_MODEL_CATALOG_CM /
DEFAULT_CUSTOM_MODEL_CATALOG usage explaining the deliberate duplication.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (8)
- tests/model_registry/image_validation/test_verify_rhoai_images.py
- tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
- tests/model_registry/model_catalog/metadata/test_sources_endpoint.py
- tests/model_registry/model_catalog/rbac/test_catalog_rbac.py
- tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py
- tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
- tests/model_registry/model_registry/rest_api/test_model_registry_secure_db.py
- tests/model_registry/model_registry/rest_api/test_multiple_mr.py
💤 Files with no reviewable changes (2)
- tests/model_registry/model_registry/rest_api/test_multiple_mr.py
- tests/model_registry/model_registry/rest_api/test_model_registry_secure_db.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
- tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
🧹 Nitpick comments (2)
tests/model_registry/model_catalog/metadata/test_catalog_preview.py (1)
267-283: Consider adding defensive JSON parsing.

If the server returns a non-JSON error response, `exc_info.value.response.json()` at line 278 will raise a `JSONDecodeError`, masking the actual error. This is likely fine if the API contract guarantees JSON error responses, but defensive handling could improve debuggability.

🛡️ Optional: defensive JSON parsing

```diff
 assert exc_info.value.response.status_code == expected_status_code, (
     f"Expected status code {expected_status_code}, got {exc_info.value.response.status_code}"
 )
-error_response = exc_info.value.response.json()
-error_message = error_response.get("message", "")
+try:
+    error_response = exc_info.value.response.json()
+    error_message = error_response.get("message", "")
+except requests.exceptions.JSONDecodeError:
+    error_message = exc_info.value.response.text
 assert expected_error_message in error_message, (
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_registry/model_catalog/metadata/test_catalog_preview.py` around lines 267-283: The test currently calls exc_info.value.response.json() without guarding against non-JSON responses; update the assertion block in test_catalog_preview.py (around execute_model_catalog_post_command usage and exc_info) to defensively parse the response by trying response.json() in a try/except (catch ValueError/JSONDecodeError) and falling back to response.text (or an empty string) so the subsequent check against expected_error_message still runs and any JSON decode problems don't mask the original HTTP error; keep references to exc_info.value.response, execute_model_catalog_post_command, and expected_error_message when implementing the fix.

tests/model_registry/model_catalog/metadata/test_labels_endpoint.py (1)
44-54: Docstring references outdated marker category.

The docstring on line 53 still says "Sanity test:" but the marker has been changed to `tier1`. Consider updating the docstring to reflect the new categorization for consistency.

📝 Suggested docstring update

```diff
 def test_labels_endpoint_configmap_updates(
     ...
 ):
     """
-    Sanity test: Edit the editable ConfigMap and verify changes are reflected in API.
+    Tier1 test: Edit the editable ConfigMap and verify changes are reflected in API.
     """
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_registry/model_catalog/metadata/test_labels_endpoint.py` around lines 44 - 54, Update the docstring in test_labels_endpoint_configmap_updates to reflect the new pytest marker (tier1) instead of the outdated "Sanity test:" wording; locate the test_labels_endpoint_configmap_updates function and modify the first line of its triple-quoted docstring to something like "Tier1 test: Edit the editable ConfigMap and verify changes are reflected in API." or otherwise remove "Sanity test:" so the description matches the `@pytest.mark.tier1` marker.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/model_registry/model_catalog/metadata/test_catalog_preview.py`:
- Around line 267-283: The test currently calls exc_info.value.response.json()
without guarding against non-JSON responses; update the assertion block in
test_catalog_preview.py (around execute_model_catalog_post_command usage and
exc_info) to defensively parse the response by trying response.json() in a
try/except (catch ValueError/JSONDecodeError) and falling back to response.text
(or an empty string) so the subsequent check against expected_error_message
still runs and any JSON decode problems don't mask the original HTTP error; keep
references to exc_info.value.response, execute_model_catalog_post_command, and
expected_error_message when implementing the fix.
In `@tests/model_registry/model_catalog/metadata/test_labels_endpoint.py`:
- Around line 44-54: Update the docstring in
test_labels_endpoint_configmap_updates to reflect the new pytest marker (tier1)
instead of the outdated "Sanity test:" wording; locate the
test_labels_endpoint_configmap_updates function and modify the first line of its
triple-quoted docstring to something like "Tier1 test: Edit the editable
ConfigMap and verify changes are reflected in API." or otherwise remove "Sanity
test:" so the description matches the `@pytest.mark.tier1` marker.
📒 Files selected for processing (22)
- tests/model_registry/image_validation/test_verify_rhoai_images.py
- tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
- tests/model_registry/model_catalog/catalog_config/test_model_catalog_negative.py
- tests/model_registry/model_catalog/huggingface/test_huggingface_model_deployment.py
- tests/model_registry/model_catalog/huggingface/test_huggingface_negative.py
- tests/model_registry/model_catalog/metadata/test_catalog_preview.py
- tests/model_registry/model_catalog/metadata/test_labels_endpoint.py
- tests/model_registry/model_catalog/metadata/test_sources_endpoint.py
- tests/model_registry/model_catalog/rbac/test_catalog_rbac.py
- tests/model_registry/model_catalog/search/test_model_search.py
- tests/model_registry/model_catalog/sorting/test_model_sorting.py
- tests/model_registry/model_catalog/sorting/test_sorting_functionality.py
- tests/model_registry/model_registry/negative_tests/test_db_migration.py
- tests/model_registry/model_registry/negative_tests/test_model_registry_creation_negative.py
- tests/model_registry/model_registry/python_client/test_model_registry_creation.py
- tests/model_registry/model_registry/rbac/test_mr_rbac.py
- tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py
- tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
- tests/model_registry/model_registry/rest_api/test_model_registry_secure_db.py
- tests/model_registry/model_registry/rest_api/test_multiple_mr.py
- tests/model_registry/scc/test_model_catalog_scc.py
- tests/model_registry/scc/test_model_registry_scc.py
✅ Files skipped from review due to trivial changes (1)
- tests/model_registry/model_catalog/search/test_model_search.py
🚧 Files skipped from review as they are similar to previous changes (4)
- tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
- tests/model_registry/model_catalog/metadata/test_sources_endpoint.py
- tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
- tests/model_registry/image_validation/test_verify_rhoai_images.py
|
/build-push-pr-image |
|
Status of building tag pr-1122: success. |
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py`:
- Line 38: The test suite uses an unregistered pytest marker, 'tier3', which causes collection failures with --strict-markers. Either register "tier3" in the markers section of pytest.ini (alongside the existing entries for tier1 and tier2), or remove/replace the 'tier3' usages in the affected tests (e.g., model_registry tests such as test_db_migration.py, test_model_registry_creation_negative.py, test_model_registry_secure_db.py, and the file under review, tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py) so they use an existing registered marker.
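A marker registry consistent with the tiers this PR describes might look like the following sketch of pytest.ini (the descriptions are paraphrased from the PR description, not copied from the repo):

```ini
[pytest]
addopts = --strict-markers
markers =
    smoke: very high critical-priority tests (full run under ~5 minutes)
    tier1: high-priority tests excluding smoke
    tier2: medium/low-priority positive tests
    tier3: negative and destructive tests
```

With --strict-markers, any marker used in a test but missing from this block fails collection, which is exactly the failure mode the comment above describes.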
📒 Files selected for processing (4)
- tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
- tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
- tests/model_registry/scc/test_model_catalog_scc.py
- tests/model_registry/scc/test_model_registry_scc.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
|
/build-push-pr-image |
|
Status of building tag pr-1122: success. |
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pytest.ini`:
- Around lines 13-17: Remove the obsolete "sanity" marker entry from the pytest marker registry and ensure "tier1" remains defined with an accurate description of the intended priority. Specifically, delete the line registering the "sanity" marker from the markers block in pytest.ini and confirm the "tier1" entry (and its description) is present and accurate, so the old marker is no longer allowed under --strict-markers. If you want a temporary alias instead, keep the line but add a clear comment noting that "sanity" is an alias, with a removal date.
📒 Files selected for processing (1)
pytest.ini
|
/build-push-pr-image |
|
Status of building tag pr-1122: success. |
|
Status of building tag latest: success. |
Description
Rework the smoke suite to reduce its run time to 5 minutes, rename sanity to tier1 and cascade tier1 into tier2, and add a tier3 marker for negative tests, following what was decided on Slack.
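One possible way to wire the tier1-into-tier2 cascade (an assumption about the implementation, not taken from the PR diff) is to stack both markers, so a `pytest -m tier2` run also collects the high-priority tests:

```python
# Hypothetical module: stacking markers makes `-m tier2` select these
# tests as well as `-m tier1`. The test name and body are placeholders.
import pytest

pytestmark = [pytest.mark.tier1, pytest.mark.tier2]

def test_register_model():
    """Tier1 test: registering a model succeeds (placeholder body)."""
```

Whether the repo implements the cascade this way or via marker-expression logic in CI is not visible in this conversation.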
BVT (rhoai health) --> Smoke (very high critical-priority tests) --> Tier 1 (high-priority tests excluding Smoke) --> Tier 2 (medium/low-priority positive tests) --> Tier 3 (negative and destructive tests)

How Has This Been Tested?
Merge criteria:
Summary by CodeRabbit