
Reduce smoke for model_registry component#1122

Merged
fege merged 11 commits into opendatahub-io:main from fege:reduce_smoke
Mar 9, 2026

Conversation

@fege
Contributor

@fege fege commented Feb 18, 2026

Description

Rework the smoke suite to reduce its run time to 5 minutes, rename sanity to tier1 and cascade tier1 into tier2, and add a tier3 marker for negative tests, following the tiering decided on Slack: BVT (RHOAI health) --> Smoke (very high / critical-priority tests) --> Tier 1 (high-priority tests, excluding smoke) --> Tier 2 (medium/low-priority positive tests) --> Tier 3 (negative and destructive tests).
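The cascading tier selection described above can be sketched as a small helper. This is illustrative only: the marker names match the PR, but the function and the exact `-m` expressions per run level are assumptions, not code from the repo.

```python
# Tiers ordered from highest to lowest priority, per the scheme above.
TIER_ORDER = ["smoke", "tier1", "tier2", "tier3"]


def markers_for(level: str) -> list[str]:
    """Markers selected when running at the given tier: the tier's own
    marker plus every higher-priority tier it cascades from."""
    return TIER_ORDER[: TIER_ORDER.index(level) + 1]


# A tier2 run would then translate to: pytest -m "smoke or tier1 or tier2"
print(" or ".join(markers_for("tier2")))  # smoke or tier1 or tier2
```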

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Tests
    • Reorganized test categorization and priorities: many smoke/sanity marks converted to tier1/tier2/tier3 and some smoke marks moved to specific parameter cases.
    • Removed several module/class-level smoke markers and applied more granular marks to individual tests and parameters.
    • Added new negative/error-handling tests and several tier3/tier2-marked cases to expand coverage.
  • Configuration
    • Updated available test markers and their descriptions, and introduced several new markers for broader test classification.

@fege fege requested review from dbasunag and lugi0 as code owners February 18, 2026 15:10
@fege
Contributor Author

fege commented Feb 18, 2026

/hold

@github-actions

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, add /wip in a comment. To remove it, comment /wip cancel on the PR.
  • To block merging of a PR, add /hold in a comment. To un-block merging, comment /hold cancel.
  • To mark a PR as approved, add /lgtm in a comment. To remove, add /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified on the PR; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name> on the PR. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, add /build-push-pr-image in a comment. This creates an image with the tag
    pr-<pr_number> in the quay repository. The image tag is deleted when the PR is merged or closed.
Supported labels

{'/wip', '/hold', '/lgtm', '/cherry-pick', '/build-push-pr-image', '/verified'}

@fege
Contributor Author

fege commented Feb 18, 2026

/build-push-pr-image

@fege fege changed the title from "Reduce smoke" to "Reduce smoke for model_registry component" Feb 18, 2026
@github-actions

Status of building tag pr-1122: success.
Status of pushing tag pr-1122 to image registry: success.

@coderabbitai
Contributor

coderabbitai Bot commented Feb 18, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough


Bulk metadata-only pytest marker updates across many model_registry tests: smoke/sanity markers were removed, replaced, or moved (into pytest.param entries) and new tier markers (tier1/tier2/tier3) and some new tests/classes were added. No test logic or assertions were changed.

Changes

Cohort / File(s) Summary
Image validation & pod-image params
tests/model_registry/image_validation/test_verify_rhoai_images.py
Removed module-level smoke from pytestmark; adjusted parameter marks for test_verify_pod_images (one param set now smoke, another tier2); two resource-pod tests changed to tier1.
Rest API & secure DB
tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py, tests/model_registry/model_registry/rest_api/test_model_registry_secure_db.py, tests/model_registry/model_registry/rest_api/test_multiple_mr.py
Reassigned many test marks (replacing smoke/sanity with tier1/tier2/tier3), added tier2/tier3 marks, and moved some marks into specific pytest.param entries.
Catalog config & cleanup
tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py, tests/model_registry/model_catalog/catalog_config/test_model_catalog_negative.py
Reclassified numerous parametrized tests from smoke/sanity to tier1/tier2; added @pytest.mark.tier3 to TestDefaultCatalogNegative.
Catalog metadata / preview / endpoints
tests/model_registry/model_catalog/metadata/test_sources_endpoint.py, tests/model_registry/model_catalog/metadata/test_catalog_preview.py, tests/model_registry/model_catalog/metadata/test_labels_endpoint.py
Changed test_sources_endpoint_returns_all_sources... to tier1; added new TestCatalogPreviewErrorHandling with test_catalog_preview_invalid_config; changed labels endpoint test to tier1.
RBAC tests
tests/model_registry/model_catalog/rbac/test_catalog_rbac.py, tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py, tests/model_registry/model_registry/rbac/test_mr_rbac.py
Converted parameterizations to pytest.param with per-param marks (assigning smoke/tier1 per config); moved several tests from smoke/sanity to tier1.
HuggingFace & negative tests
tests/model_registry/model_catalog/huggingface/test_huggingface_model_deployment.py, tests/model_registry/model_catalog/huggingface/test_huggingface_negative.py, tests/model_registry/model_registry/negative_tests/*
Updated deployment test marker from sanity to tier1; added @pytest.mark.tier3 to negative test classes/methods.
Search & sorting
tests/model_registry/model_catalog/search/test_model_search.py, tests/model_registry/model_catalog/sorting/test_model_sorting.py, tests/model_registry/model_catalog/sorting/test_sorting_functionality.py
Replaced several sanity marks with tier1; added tier3 to specific invalid-query / unsupported-field tests.
Python client & SCC
tests/model_registry/model_registry/python_client/test_model_registry_creation.py, tests/model_registry/scc/test_model_catalog_scc.py, tests/model_registry/scc/test_model_registry_scc.py
Replaced multiple sanity markers with tier1 in python-client tests; changed SCC tests from sanity to tier2.
Misc small marker-only edits
tests/model_registry/model_registry/negative_tests/test_db_migration.py, tests/model_registry/model_registry/negative_tests/test_model_registry_creation_negative.py
Changed a few sanity markers to tier3; minor per-test marker adjustments elsewhere.
Test config
pytest.ini
Updated marker definitions and descriptions; added new markers (tier3, fuzzer, ocp_interop, downstream_only, cluster_health, operator_health, component_health, skip_must_gather, install).
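For context, the marker section of pytest.ini after this change would have roughly the following shape. The marker names are taken from the summary above; the descriptions are illustrative paraphrases, not the repo's exact wording:

```ini
[pytest]
markers =
    smoke: very high / critical-priority tests (target run time under ~5 minutes)
    tier1: high-priority tests, excluding smoke
    tier2: medium/low-priority positive tests
    tier3: negative and destructive tests
    fuzzer: fuzz tests
    ocp_interop: OpenShift interoperability tests
    downstream_only: tests that run only downstream
    cluster_health: cluster health checks
    operator_health: operator health checks
    component_health: component health checks
    skip_must_gather: skip must-gather collection on failure
    install: installation tests
```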

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Docstring Coverage — ⚠️ Warning — Docstring coverage is 79.17%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check — ⚠️ Warning — The PR description covers the main objective (test marker reorganization) but lacks the required testing details, and the merge criteria are unchecked. Resolution: add "How it has been tested" details (Locally/Jenkins) and confirm all merge criteria: commits squashed, testing instructions provided, and manual verification completed.
✅ Passed checks (1 passed)
  • Title check — ✅ Passed — The title clearly summarizes the main change: reducing the scope of smoke tests in the model_registry component by reclassifying them to tier1/tier2/tier3.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
tests/model_registry/model_catalog/metadata/test_sources_endpoint.py (1)

8-21: ⚠️ Potential issue | 🟡 Minor

Add model_registry component marker alongside the new sanity tier marker.

The test is now part of the sanity suite but carries no component marker. Per coding guidelines, relevant markers from pytest.ini (including component markers like model_registry) should be applied to tests in the smoke/sanity tiers. The model_registry component marker belongs either at the individual test or in the module-level pytestmark.

🏷️ Proposed fix: add component marker to module-level `pytestmark`
-pytestmark = [pytest.mark.usefixtures("updated_dsc_component_state_scope_session", "model_registry_namespace")]
+pytestmark = [
+    pytest.mark.model_registry,
+    pytest.mark.usefixtures("updated_dsc_component_state_scope_session", "model_registry_namespace"),
+]

Based on learnings: "pytest markers for tier or component are not required for tests that are not part of smoke or sanity test suites" — the converse applies here — and the coding guideline: "Apply relevant markers from pytest.ini: tier (smoke, sanity, tier1, tier2), component (model_serving, model_registry, llama_stack)…"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_catalog/metadata/test_sources_endpoint.py` around
lines 8 - 21, The test is marked with the sanity tier but lacks the required
component marker; update the module-level pytestmark list (the pytestmark
variable) to include the model_registry marker alongside the existing fixtures,
e.g., add pytest.mark.model_registry to the list so the TestSourcesEndpoint
class (and its sanity parametrized test) is properly tagged for the
model_registry component.
tests/model_registry/image_validation/test_verify_rhoai_images.py (1)

44-50: ⚠️ Potential issue | 🟡 Minor

Both test methods are missing mandatory docstrings.

The coding guidelines require every test to have a Google-format docstring in Given-When-Then form. Neither test_verify_pod_images nor test_verify_model_registry_pod_images has one. Example for test_verify_pod_images:

     def test_verify_pod_images(
         self: Self,
         admin_client: DynamicClient,
         resource_pods: list[Pod],
         related_images_refs: Set[str],
     ):
+        """Verify that pod images meet registry and digest requirements.
+
+        Given a set of running pods selected by label,
+        When their container image references are inspected,
+        Then each image must be hosted in registry.redhat.io, use a sha256 digest,
+        and be listed in the CSV's relatedImages section.
+        """
         validate_images(pods_to_validate=resource_pods, related_images_refs=related_images_refs)

As per coding guidelines: "Every test MUST have a docstring explaining what it tests" and "Use Given-When-Then format in test docstrings for behavioral clarity".

Also applies to: 78-84

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/image_validation/test_verify_rhoai_images.py` around
lines 44 - 50, Add Google-style docstrings in Given-When-Then form to the test
functions test_verify_pod_images and test_verify_model_registry_pod_images: for
each test, insert a triple-quoted docstring immediately below the def line that
states the Given preconditions, the When action (what the test invokes, e.g.,
calling validate_images or the model-registry validation helper), and the Then
expected outcome (what validate_images should assert or validate). Ensure the
docstrings are concise, follow the Google format, and cover the inputs
(admin_client, resource_pods, related_images_refs) and the expected validation
results.
tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py (1)

77-96: ⚠️ Potential issue | 🟡 Minor

smoke on the inner param leaks into all four outer DB-backend combinations, including mariadb

marks=pytest.mark.smoke at line 83 is placed on the inner parametrize param. Pytest evaluates marks per test-instance independently, so the smoke marker gets attached to every combination of outer params × this inner param:

  • {} (default) × test_validate_registered_model → sanity, smoke
  • postgres × test_validate_registered_model → sanity, smoke
  • db_name=default × test_validate_registered_model → tier1, smoke
  • mariadb × test_validate_registered_model → smoke only ← unintended

Running pytest -m smoke will now include the mariadb backend (previously unmarked, deliberately excluded from tiered runs) and the tier1 backend. If the intent is to keep smoke coverage to just the default-DB and postgres backends, the smoke marker should be moved to the outer params that carry sanity, or a dedicated outer param with smoke should be introduced.

💡 Alternative: keep smoke scoped to the default-DB outer param only
-        pytest.param(
-            {},
-            {},
-            MODEL_REGISTER_DATA,
-            marks=(pytest.mark.sanity),
-        ),
+        pytest.param(
+            {},
+            {},
+            MODEL_REGISTER_DATA,
+            marks=(pytest.mark.smoke, pytest.mark.sanity),
+        ),

Then remove marks=pytest.mark.smoke from the inner param (line 83) so the smoke label is driven entirely by the outer param, not broadcast across all backends.
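The broadcast behaviour can be modelled with a plain cartesian product. This is a stdlib sketch of the effect, not pytest internals; the ids and mark sets mirror the table above:

```python
from itertools import product

# Outer DB-backend params and the inner case param, each with its own marks.
outer = [
    ("default", {"sanity"}),
    ("postgres", {"sanity"}),
    ("db_name=default", {"tier1"}),
    ("mariadb", set()),  # deliberately unmarked
]
inner = [("register_model", {"smoke"})]  # inner param carries smoke

# Each generated test instance unions the marks of the params it is built
# from, so the inner smoke mark reaches every outer combination.
instances = {
    f"{outer_id}-{inner_id}": outer_marks | inner_marks
    for (outer_id, outer_marks), (inner_id, inner_marks) in product(outer, inner)
}

# mariadb now carries smoke even though its outer param was unmarked:
print(sorted(instances["mariadb-register_model"]))  # ['smoke']
```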

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py`
around lines 77 - 96, The smoke marker on the inner `@pytest.mark.parametrize`
(the param that yields MODEL_REGISTER / "register_model") is leaking into all
DB-backend combinations; remove marks=pytest.mark.smoke from that inner param
and instead apply pytest.mark.smoke on the appropriate outer parametrization
entries that represent the intended backends (or add a separate outer param
entry scoped to smoke), e.g. move the marks=pytest.mark.smoke decoration to the
outer param list that contains the default-DB/postgres entries so only those
combinations get the smoke marker; update the parametrization around
MODEL_REGISTER / MODEL_VERSION / MODEL_ARTIFACT accordingly.
🧹 Nitpick comments (6)
tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py (2)

58-64: Missing @pytest.mark.model_registry component marker on a sanity test.

Per coding guidelines, sanity tests should carry the relevant component marker. Both tests in this class are now sanity but neither declares @pytest.mark.model_registry.

+    `@pytest.mark.model_registry`
     `@pytest.mark.sanity`
     `@pytest.mark.usefixtures`("sa_namespace", "service_account", "mr_access_role", "mr_access_role_binding")
     def test_service_account_access_granted(

As per coding guidelines: "Apply relevant markers from pytest.ini: tier (smoke, sanity, tier1, tier2), component (model_serving, model_registry, llama_stack), infrastructure (gpu, parallel, slow)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py` around lines 58
- 64, Add the missing component marker to the sanity tests by annotating the
test function test_service_account_access_granted (and the sibling test(s) in
the same class) with `@pytest.mark.model_registry` in addition to the existing
`@pytest.mark.sanity` so they include the required component marker; update the
decorators for the test functions (e.g., above
test_service_account_access_granted and its counterpart) to include
`@pytest.mark.model_registry` alongside `@pytest.mark.sanity` and keep existing
usefixtures unchanged.

59-64: service_account is redundant in usefixtures.

service_account is listed in both @pytest.mark.usefixtures (line 59) and the function signature (line 62). Since its return value is consumed in the test body (line 73), the function-parameter form is sufficient — the usefixtures entry is redundant and should be removed.

♻️ Proposed fix
-    `@pytest.mark.usefixtures`("sa_namespace", "service_account", "mr_access_role", "mr_access_role_binding")
+    `@pytest.mark.usefixtures`("sa_namespace", "mr_access_role", "mr_access_role_binding")
     def test_service_account_access_granted(
         self: Self,
         service_account: ServiceAccount,

Based on learnings: "prefer declaring dependencies via explicit fixture parameters in test function signatures rather than always using pytest.mark.usefixtures… reserve usefixtures for cases where you need fixture side effects without needing to access the fixture value in the test."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py` around lines 59
- 64, The decorator currently lists the service_account fixture redundantly;
remove "service_account" from the `@pytest.mark.usefixtures` call and keep it as
an explicit parameter in the test signature of
test_service_account_access_granted so the test consumes the fixture value
(leave "sa_namespace", "mr_access_role", and "mr_access_role_binding" in
usefixtures).
tests/model_registry/model_catalog/metadata/test_sources_endpoint.py (2)

22-27: Add -> None return type annotation (mypy strict).

The test function lacks a return type annotation, which violates the mypy strict requirement enforced across this repo.

♻️ Proposed fix
     def test_sources_endpoint_returns_all_sources_regardless_of_enabled_field(
         self,
         sparse_override_catalog_source: dict,
         model_catalog_rest_url: list[str],
         model_registry_rest_headers: dict[str, str],
-    ):
+    ) -> None:

As per coding guidelines: "Add type annotations to test code and fixtures (mypy strict enforced)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_catalog/metadata/test_sources_endpoint.py` around
lines 22 - 27, The test function
test_sources_endpoint_returns_all_sources_regardless_of_enabled_field is missing
a return type annotation; update its signature to include the mypy-required
return type annotation by changing the def line to add "-> None" after the
parameter list (keep all existing parameters and formatting intact) so the
function signature becomes def
test_sources_endpoint_returns_all_sources_regardless_of_enabled_field(... ) ->
None:.

28-30: Docstring should follow Given-When-Then + Google format.

The current docstring is a bare Jira reference and satisfies neither the Google-format requirement nor the Given-When-Then behavioral clarity guideline.

📝 Proposed docstring
-        """
-        RHOAIENG-41633: Test that sources endpoint returns ALL sources regardless of enabled field value.
-        """
+        """Test that sources endpoint returns ALL sources regardless of enabled field value.
+
+        RHOAIENG-41633
+
+        Given:
+            A catalog source with ``enabled`` set to False via sparse_override_catalog_source.
+        When:
+            A GET request is made to the sources endpoint.
+        Then:
+            All sources (both enabled and disabled) are returned, with correct status values.
+        """

As per coding guidelines: "Use Given-When-Then format in test docstrings for behavioral clarity" and "Write Google-format docstrings for tests and fixtures."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_catalog/metadata/test_sources_endpoint.py` around
lines 28 - 30, Replace the Jira-only docstring for the test that verifies the
sources endpoint returns all sources (the docstring at the top of
test_sources_endpoint.py) with a Google-style Given-When-Then docstring: state
Given (there are sources with enabled true and false), When (the client calls
the sources endpoint), Then (the response contains all sources regardless of the
enabled field), and include a short summary line and Args/Returns sections if
applicable to match the Google docstring convention.
tests/model_registry/model_catalog/rbac/test_catalog_rbac.py (1)

72-78: Both params carry the same sanity mark — simplify to a method-level decorator.

Using pytest.param(..., marks=pytest.mark.sanity) on every entry when all entries share the same mark adds noise without benefit. A single method-level decorator + a plain list is cleaner.

♻️ Proposed simplification
+    `@pytest.mark.sanity`
     `@pytest.mark.parametrize`(
         "configmap_name",
         [
-            pytest.param(DEFAULT_MODEL_CATALOG_CM, marks=pytest.mark.sanity),
-            pytest.param(DEFAULT_CUSTOM_MODEL_CATALOG, marks=pytest.mark.sanity),
+            DEFAULT_MODEL_CATALOG_CM,
+            DEFAULT_CUSTOM_MODEL_CATALOG,
         ],
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_catalog/rbac/test_catalog_rbac.py` around lines 72
- 78, The pytest parametrization currently applies pytest.mark.sanity to each
entry (DEFAULT_MODEL_CATALOG_CM and DEFAULT_CUSTOM_MODEL_CATALOG) which is
redundant; remove the per-item marks and instead add a single
`@pytest.mark.sanity` decorator on the test function that uses
pytest.mark.parametrize("configmap_name", [DEFAULT_MODEL_CATALOG_CM,
DEFAULT_CUSTOM_MODEL_CATALOG]); update the param list to plain values and add
the method-level `@pytest.mark.sanity` above the test to keep behavior the same
while simplifying the code.
tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py (1)

98-108: Missing docstrings on four modified test methods

The following test methods that received marker changes in this PR are missing docstrings, violating the project guideline requiring a Given-When-Then docstring for every test:

  • test_validate_model_registry_resource (line 98)
  • test_default_postgres_db_pod_log (line 160)
  • test_model_registry_validate_api_version (line 180)
  • test_model_registry_validate_kuberbacproxy_enabled (line 196)
📝 Example docstrings in GWT format
 def test_validate_model_registry_resource(
     self: Self,
     registered_model_rest_api: dict[str, Any],
     expected_params: dict[str, str],
     data_key: str,
 ):
+    """
+    Given a registered model REST API response,
+    When validating resource attributes for a given data key,
+    Then the actual resource data matches the expected parameters.
+    """
 def test_default_postgres_db_pod_log(
     self: Self,
     ...
 ):
+    """
+    Given the default postgres deployment is running,
+    When reading the postgres container log,
+    Then it contains the expected connection-accepted string.
+    """
 def test_model_registry_validate_api_version(
     self: Self,
     ...
 ):
+    """
+    Given a deployed ModelRegistry instance,
+    When reading its apiVersion field,
+    Then it matches the expected group/version string.
+    """
 def test_model_registry_validate_kuberbacproxy_enabled(
     self: Self,
     ...
 ):
+    """
+    Given a deployed ModelRegistry instance,
+    When inspecting the spec,
+    Then istio is not enabled and kubeRBACProxy serviceRoute is 'enabled'.
+    """

As per coding guidelines: "Every test MUST have a docstring explaining what it tests" and "Use Given-When-Then format in test docstrings for behavioral clarity".

Also applies to: 160-177, 180-193, 196-203

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py`
around lines 98 - 108, Four test functions
(test_validate_model_registry_resource, test_default_postgres_db_pod_log,
test_model_registry_validate_api_version,
test_model_registry_validate_kuberbacproxy_enabled) are missing the required
Given-When-Then docstrings; add a concise GWT docstring at the start of each
test function describing the initial state (Given), the action being performed
(When), and the expected outcome/assertion (Then) so they comply with project
guidelines and make test intent explicit; update the four listed test functions
by inserting a one- or two-sentence GWT docstring in each function body
immediately after the def line.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@tests/model_registry/image_validation/test_verify_rhoai_images.py`:
- Around line 44-50: Add Google-style docstrings in Given-When-Then form to the
test functions test_verify_pod_images and test_verify_model_registry_pod_images:
for each test, insert a triple-quoted docstring immediately below the def line
that states the Given preconditions, the When action (what the test invokes,
e.g., calling validate_images or the model-registry validation helper), and the
Then expected outcome (what validate_images should assert or validate). Ensure
the docstrings are concise, follow the Google format, and cover the inputs
(admin_client, resource_pods, related_images_refs) and the expected validation
results.

In `@tests/model_registry/model_catalog/metadata/test_sources_endpoint.py`:
- Around line 8-21: The test is marked with the sanity tier but lacks the
required component marker; update the module-level pytestmark list (the
pytestmark variable) to include the model_registry marker alongside the existing
fixtures, e.g., add pytest.mark.model_registry to the list so the
TestSourcesEndpoint class (and its sanity parametrized test) is properly tagged
for the model_registry component.

In
`@tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py`:
- Around line 77-96: The smoke marker on the inner `@pytest.mark.parametrize` (the
param that yields MODEL_REGISTER / "register_model") is leaking into all
DB-backend combinations; remove marks=pytest.mark.smoke from that inner param
and instead apply pytest.mark.smoke on the appropriate outer parametrization
entries that represent the intended backends (or add a separate outer param
entry scoped to smoke), e.g. move the marks=pytest.mark.smoke decoration to the
outer param list that contains the default-DB/postgres entries so only those
combinations get the smoke marker; update the parametrization around
MODEL_REGISTER / MODEL_VERSION / MODEL_ARTIFACT accordingly.

---

Nitpick comments:
In `@tests/model_registry/model_catalog/metadata/test_sources_endpoint.py`:
- Around line 22-27: The test function
test_sources_endpoint_returns_all_sources_regardless_of_enabled_field is missing
a return type annotation; update its signature to include the mypy-required
return type annotation by changing the def line to add "-> None" after the
parameter list (keep all existing parameters and formatting intact) so the
function signature becomes def
test_sources_endpoint_returns_all_sources_regardless_of_enabled_field(... ) ->
None:.
- Around line 28-30: Replace the Jira-only docstring for the test that verifies
the sources endpoint returns all sources (the docstring at the top of
test_sources_endpoint.py) with a Google-style Given-When-Then docstring: state
Given (there are sources with enabled true and false), When (the client calls
the sources endpoint), Then (the response contains all sources regardless of the
enabled field), and include a short summary line and Args/Returns sections if
applicable to match the Google docstring convention.

In `@tests/model_registry/model_catalog/rbac/test_catalog_rbac.py`:
- Around line 72-78: The pytest parametrization currently applies
pytest.mark.sanity to each entry (DEFAULT_MODEL_CATALOG_CM and
DEFAULT_CUSTOM_MODEL_CATALOG) which is redundant; remove the per-item marks and
instead add a single `@pytest.mark.sanity` decorator on the test function that
uses pytest.mark.parametrize("configmap_name", [DEFAULT_MODEL_CATALOG_CM,
DEFAULT_CUSTOM_MODEL_CATALOG]); update the param list to plain values and add
the method-level `@pytest.mark.sanity` above the test to keep behavior the same
while simplifying the code.

In `@tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py`:
- Around line 58-64: Add the missing component marker to the sanity tests by
annotating the test function test_service_account_access_granted (and the
sibling test(s) in the same class) with `@pytest.mark.model_registry` in addition
to the existing `@pytest.mark.sanity` so they include the required component
marker; update the decorators for the test functions (e.g., above
test_service_account_access_granted and its counterpart) to include
`@pytest.mark.model_registry` alongside `@pytest.mark.sanity` and keep existing
usefixtures unchanged.
- Around line 59-64: The decorator currently lists the service_account fixture
redundantly; remove "service_account" from the `@pytest.mark.usefixtures` call and
keep it as an explicit parameter in the test signature of
test_service_account_access_granted so the test consumes the fixture value
(leave "sa_namespace", "mr_access_role", and "mr_access_role_binding" in
usefixtures).

In
`@tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py`:
- Around line 98-108: Four test functions
(test_validate_model_registry_resource, test_default_postgres_db_pod_log,
test_model_registry_validate_api_version,
test_model_registry_validate_kuberbacproxy_enabled) are missing the required
Given-When-Then docstrings; add a concise GWT docstring at the start of each
test function describing the initial state (Given), the action being performed
(When), and the expected outcome/assertion (Then) so they comply with project
guidelines and make test intent explicit; update the four listed test functions
by inserting a one- or two-sentence GWT docstring in each function body
immediately after the def line.

dbasunag previously approved these changes Feb 18, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
tests/model_registry/model_catalog/rbac/test_catalog_rbac.py (1)

71-77: Both params carry the same mark; consider hoisting to the test level.

Since both DEFAULT_MODEL_CATALOG_CM and DEFAULT_CUSTOM_MODEL_CATALOG receive pytest.mark.sanity, the per-pytest.param marks are redundant. A single test-level decorator is cleaner and makes the intent immediately obvious:

♻️ Proposed simplification
+   `@pytest.mark.sanity`
    `@pytest.mark.parametrize`(
        "configmap_name",
-       [
-           pytest.param(DEFAULT_MODEL_CATALOG_CM, marks=pytest.mark.sanity),
-           pytest.param(DEFAULT_CUSTOM_MODEL_CATALOG, marks=pytest.mark.sanity),
-       ],
+       [DEFAULT_MODEL_CATALOG_CM, DEFAULT_CUSTOM_MODEL_CATALOG],
    )

If the two cases are expected to diverge in the future (e.g., one becoming tier1), keeping pytest.param entries is fine — but a brief comment explaining the deliberate duplication would help.
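A self-contained sketch of the hoisted form (the ConfigMap names below are placeholders, not the repository's real constants):

```python
import pytest

# Placeholder values; the real constants live in the test suite's own modules.
DEFAULT_MODEL_CATALOG_CM = "default-model-catalog"
DEFAULT_CUSTOM_MODEL_CATALOG = "default-custom-model-catalog"


@pytest.mark.sanity
@pytest.mark.parametrize(
    "configmap_name",
    [DEFAULT_MODEL_CATALOG_CM, DEFAULT_CUSTOM_MODEL_CATALOG],
)
def test_catalog_configmap_access(configmap_name):
    assert configmap_name  # real assertions elided


# Both marks land on the function itself, so every generated parameter
# case inherits the sanity mark:
mark_names = {mark.name for mark in test_catalog_configmap_access.pytestmark}
print(sorted(mark_names))  # ['parametrize', 'sanity']
```

Per-`pytest.param` marks stay useful precisely when the cases should diverge; when they are identical, the function-level decorator states the intent once.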

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_catalog/rbac/test_catalog_rbac.py` around lines 71
- 77, Both pytest.param entries in the pytest.mark.parametrize call apply the
same pytest.mark.sanity mark; remove the redundant per-parameter marks and hoist
the mark to the test itself. Edit the pytest.mark.parametrize that enumerates
DEFAULT_MODEL_CATALOG_CM and DEFAULT_CUSTOM_MODEL_CATALOG to use plain
pytest.param values (or a simple list) and add a single `@pytest.mark.sanity`
decorator above the test function that uses this parameterization (or apply the
mark at module-level if appropriate) so the two values remain covered without
duplicated marks; if you intend them to diverge later, add a short comment near
the DEFAULT_MODEL_CATALOG_CM / DEFAULT_CUSTOM_MODEL_CATALOG usage explaining the
deliberate duplication.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 02b545d and d8ee0ab.

📒 Files selected for processing (8)
  • tests/model_registry/image_validation/test_verify_rhoai_images.py
  • tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
  • tests/model_registry/model_catalog/metadata/test_sources_endpoint.py
  • tests/model_registry/model_catalog/rbac/test_catalog_rbac.py
  • tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py
  • tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
  • tests/model_registry/model_registry/rest_api/test_model_registry_secure_db.py
  • tests/model_registry/model_registry/rest_api/test_multiple_mr.py
💤 Files with no reviewable changes (2)
  • tests/model_registry/model_registry/rest_api/test_multiple_mr.py
  • tests/model_registry/model_registry/rest_api/test_model_registry_secure_db.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
  • tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py

Contributor

@coderabbitai coderabbitai Bot left a comment

🧹 Nitpick comments (2)
tests/model_registry/model_catalog/metadata/test_catalog_preview.py (1)

267-283: Consider adding defensive JSON parsing.

If the server returns a non-JSON error response, exc_info.value.response.json() at line 278 will raise a JSONDecodeError, masking the actual error. This is likely fine if the API contract guarantees JSON error responses, but defensive handling could improve debuggability.

🛡️ Optional: defensive JSON parsing
         assert exc_info.value.response.status_code == expected_status_code, (
             f"Expected status code {expected_status_code}, got {exc_info.value.response.status_code}"
         )

-        error_response = exc_info.value.response.json()
-        error_message = error_response.get("message", "")
+        try:
+            error_response = exc_info.value.response.json()
+            error_message = error_response.get("message", "")
+        except requests.exceptions.JSONDecodeError:
+            error_message = exc_info.value.response.text

         assert expected_error_message in error_message, (
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_catalog/metadata/test_catalog_preview.py` around
lines 267 - 283, The test currently calls exc_info.value.response.json() without
guarding against non-JSON responses; update the assertion block in
test_catalog_preview.py (around execute_model_catalog_post_command usage and
exc_info) to defensively parse the response by trying response.json() in a
try/except (catch ValueError/JSONDecodeError) and falling back to response.text
(or an empty string) so the subsequent check against expected_error_message
still runs and any JSON decode problems don't mask the original HTTP error; keep
references to exc_info.value.response, execute_model_catalog_post_command, and
expected_error_message when implementing the fix.
tests/model_registry/model_catalog/metadata/test_labels_endpoint.py (1)

44-54: Docstring references outdated marker category.

The docstring on line 53 still says "Sanity test:" but the marker has been changed to tier1. Consider updating the docstring to reflect the new categorization for consistency.

📝 Suggested docstring update
     def test_labels_endpoint_configmap_updates(
         ...
     ):
         """
-        Sanity test: Edit the editable ConfigMap and verify changes are reflected in API.
+        Tier1 test: Edit the editable ConfigMap and verify changes are reflected in API.
         """
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_catalog/metadata/test_labels_endpoint.py` around
lines 44 - 54, Update the docstring in test_labels_endpoint_configmap_updates to
reflect the new pytest marker (tier1) instead of the outdated "Sanity test:"
wording; locate the test_labels_endpoint_configmap_updates function and modify
the first line of its triple-quoted docstring to something like "Tier1 test:
Edit the editable ConfigMap and verify changes are reflected in API." or
otherwise remove "Sanity test:" so the description matches the
`@pytest.mark.tier1` marker.


📥 Commits

Reviewing files that changed from the base of the PR and between d8ee0ab and 44086b9.

📒 Files selected for processing (22)
  • tests/model_registry/image_validation/test_verify_rhoai_images.py
  • tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
  • tests/model_registry/model_catalog/catalog_config/test_model_catalog_negative.py
  • tests/model_registry/model_catalog/huggingface/test_huggingface_model_deployment.py
  • tests/model_registry/model_catalog/huggingface/test_huggingface_negative.py
  • tests/model_registry/model_catalog/metadata/test_catalog_preview.py
  • tests/model_registry/model_catalog/metadata/test_labels_endpoint.py
  • tests/model_registry/model_catalog/metadata/test_sources_endpoint.py
  • tests/model_registry/model_catalog/rbac/test_catalog_rbac.py
  • tests/model_registry/model_catalog/search/test_model_search.py
  • tests/model_registry/model_catalog/sorting/test_model_sorting.py
  • tests/model_registry/model_catalog/sorting/test_sorting_functionality.py
  • tests/model_registry/model_registry/negative_tests/test_db_migration.py
  • tests/model_registry/model_registry/negative_tests/test_model_registry_creation_negative.py
  • tests/model_registry/model_registry/python_client/test_model_registry_creation.py
  • tests/model_registry/model_registry/rbac/test_mr_rbac.py
  • tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py
  • tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
  • tests/model_registry/model_registry/rest_api/test_model_registry_secure_db.py
  • tests/model_registry/model_registry/rest_api/test_multiple_mr.py
  • tests/model_registry/scc/test_model_catalog_scc.py
  • tests/model_registry/scc/test_model_registry_scc.py
✅ Files skipped from review due to trivial changes (1)
  • tests/model_registry/model_catalog/search/test_model_search.py
🚧 Files skipped from review as they are similar to previous changes (4)
  • tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
  • tests/model_registry/model_catalog/metadata/test_sources_endpoint.py
  • tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
  • tests/model_registry/image_validation/test_verify_rhoai_images.py

@fege
Contributor Author

fege commented Feb 27, 2026

/build-push-pr-image

@github-actions

Status of building tag pr-1122: success.
Status of pushing tag pr-1122 to image registry: success.

@fege
Contributor Author

fege commented Feb 27, 2026

/build-push-pr-image

@github-actions

Status of building tag pr-1122: success.
Status of pushing tag pr-1122 to image registry: success.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py`:
- Line 38: The test suite is using an unregistered pytest marker 'tier3' causing
failures with --strict-markers; either register "tier3" in the markers section
of pytest.ini (alongside existing entries for tier1 and tier2) or remove/replace
usages of the 'tier3' marker in the affected tests (e.g., tests in
model_registry like test_db_migration.py,
test_model_registry_creation_negative.py, test_model_registry_secure_db.py, and
the file under review
tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py)
so they use an existing registered marker.
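With `--strict-markers`, every marker must be registered before it can be used. A markers block along these lines keeps `tier3` collectable (the descriptions and the `addopts` line are illustrative, not the repository's exact text):

```ini
[pytest]
addopts = --strict-markers
markers =
    smoke: very high-priority critical tests
    tier1: high-priority tests, excluding smoke
    tier2: medium/low-priority positive tests
    tier3: negative and destructive tests
```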


📥 Commits

Reviewing files that changed from the base of the PR and between 44086b9 and df0c819.

📒 Files selected for processing (4)
  • tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
  • tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
  • tests/model_registry/scc/test_model_catalog_scc.py
  • tests/model_registry/scc/test_model_registry_scc.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py

@fege fege requested a review from a team as a code owner February 27, 2026 13:53
@fege
Contributor Author

fege commented Feb 27, 2026

/build-push-pr-image

@github-actions

Status of building tag pr-1122: success.
Status of pushing tag pr-1122 to image registry: success.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pytest.ini`:
- Around line 13-17: Remove the obsolete "sanity" marker entry from the pytest
marker registry and ensure "tier1" remains defined and describes the intended
priority; specifically delete the line registering the sanity marker (the
"sanity" symbol) from the markers block in pytest.ini and confirm the "tier1"
marker entry (and its description) is present and accurate so old usage is no
longer allowed under --strict-markers; if you want a temporary alias instead,
replace the deleted line with a clear note that "sanity" is an alias and include
a removal date in the comment.


📥 Commits

Reviewing files that changed from the base of the PR and between df0c819 and ba9e777.

📒 Files selected for processing (1)
  • pytest.ini

Comment thread pytest.ini
@fege
Contributor Author

fege commented Feb 27, 2026

/build-push-pr-image

@github-actions

Status of building tag pr-1122: success.
Status of pushing tag pr-1122 to image registry: success.

@fege fege enabled auto-merge (squash) March 9, 2026 17:01
@fege fege merged commit ec4297d into opendatahub-io:main Mar 9, 2026
8 checks passed
@github-actions

github-actions Bot commented Mar 9, 2026

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.
