
Add tests with artifacts property#882

Merged
dbasunag merged 7 commits into opendatahub-io:main from dbasunag:advanced_search
Dec 2, 2025

Conversation

@dbasunag
Collaborator

@dbasunag dbasunag commented Nov 25, 2025

Description

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Tests

    • Added advanced parameterized tests to validate complex model-artifact filtering with AND/OR semantics.
    • Added a fixture that supplies model lists from filter queries.
    • Introduced reusable validators to evaluate per-artifact criteria (AND/OR) across model artifacts.
  • Chores

    • Made test HTTP helper logging conditional to avoid logging empty request parameters.


@github-actions

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, comment /wip. To remove the WIP state, comment /wip cancel.
  • To block merging of a PR, comment /hold. To unblock merging, comment /hold cancel.
  • To mark a PR as approved, comment /lgtm. To remove approval, comment /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name>. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, comment /build-push-pr-image. This creates an image tagged
    pr-<pr_number> in the quay repository; the tag is deleted when the PR is merged or closed.
Supported labels

{'/lgtm', '/cherry-pick', '/verified', '/build-push-pr-image', '/hold', '/wip'}

@coderabbitai
Contributor

coderabbitai bot commented Nov 25, 2025

📝 Walkthrough

Walkthrough

Adds a pytest fixture to fetch model names from catalog filter queries, introduces per-criterion AND/OR artifact-validation helpers, adds a parameterized test exercising advanced filter queries against model artifacts, and makes params logging conditional in an HTTP helper.

Changes

Cohort / File(s) Change Summary
Fixture for filter-query models
tests/model_registry/model_catalog/conftest.py
Adds models_from_filter_query pytest fixture that reads request.param as filter_query, calls get_models_from_catalog_api with additional_params containing filterQuery, asserts items exist, maps to model names, logs and returns the list.
Artifact validation helpers
tests/model_registry/model_catalog/utils.py
Adds _validate_single_criterion(artifact_name, custom_properties, validation), _get_artifact_validation_results(artifact, expected_validations), validate_model_artifacts_match_criteria_and(all_model_artifacts, expected_validations, model_name) and validate_model_artifacts_match_criteria_or(all_model_artifacts, expected_validations, model_name) to evaluate artifact custom properties with type conversions and support exact/min/max/contains semantics; returns per-criterion messages and aggregated AND/OR results.
Advanced search test changes
tests/model_registry/model_catalog/test_model_search.py
Imports the new validators, removes test_idp_user from pytestmark fixtures, and adds test_filter_query_advanced_model_search which parameterizes over filter queries, pages artifacts dynamically, selects AND/OR validator by logic_type, accumulates failures, and asserts none remain.
Logging tweak
tests/model_registry/utils.py
Modifies execute_get_call to log params only when params is truthy (avoids logging None).
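The comparison semantics summarized above (exact/min/max/contains with per-type coercion) can be sketched roughly as follows; the function and message strings here are illustrative stand-ins, not the actual helpers in tests/model_registry/model_catalog/utils.py:

```python
def check_criterion(custom_properties: dict, key_name: str, key_type: str,
                    expected, comparison: str) -> tuple[bool, str]:
    """Evaluate one filter criterion against an artifact's customProperties (sketch)."""
    raw = custom_properties.get(key_name, {}).get(key_type)
    if raw is None:
        return False, f"{key_name}: property missing"
    # Coerce the stored value based on the key type before comparing
    try:
        if key_type == "int_value":
            value = int(raw)
        elif key_type == "double_value":
            value = float(raw)
        else:
            value = str(raw)
    except (ValueError, TypeError):
        return False, f"{key_name}: conversion error"
    if comparison == "exact":
        ok = value == expected
    elif comparison == "min":
        ok = value >= expected
    elif comparison == "max":
        ok = value <= expected
    elif comparison == "contains":
        ok = isinstance(value, str) and str(expected) in value
    else:
        return False, f"{key_name}: unknown comparison {comparison}"
    return ok, f"{key_name}: {comparison} check {'passed' if ok else 'failed'}"
```

The AND/OR helpers then aggregate these per-criterion results across a model's artifacts.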

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Review correctness and edge cases in _validate_single_criterion (type conversions, date/number parsing, "contains" behavior).
  • Verify aggregation, short-circuiting, and logging in validate_model_artifacts_match_criteria_and / _or.
  • Confirm models_from_filter_query handles empty/malformed API responses and constructs additional_params as expected.
  • Check test paging logic and parameterization in test_filter_query_advanced_model_search.

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
Check name Status Explanation Resolution
Title check ❓ Inconclusive The title 'Add tests with artifacts property' is vague and generic, using non-descriptive phrasing that doesn't convey meaningful information about what the PR actually implements. Revise the title to be more specific about the advanced search functionality being tested, such as 'Add advanced model search tests with AND/OR filter criteria validation' or similar.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Docstring Coverage ✅ Passed Docstring coverage is 90.00% which is sufficient. The required threshold is 80.00%.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
tests/model_registry/model_catalog/conftest.py (1)

20-20: models_from_filter_query fixture cleanly reuses catalog API helper

The fixture wiring to get_models_from_catalog_api and the assertion/logging around returned models look correct and align with how other tests build filterQuery via additional_params.

If you want to tighten readability/typing a bit, you could annotate the fixture arguments and return type:

-@pytest.fixture
-def models_from_filter_query(
-    request,
-    model_catalog_rest_url: list[str],
-    model_registry_rest_headers: dict[str, str],
-):
+@pytest.fixture
+def models_from_filter_query(
+    request: pytest.FixtureRequest,
+    model_catalog_rest_url: list[str],
+    model_registry_rest_headers: dict[str, str],
+) -> list[str]:

Purely optional, as the current implementation is already clear.

Also applies to: 204-227

tests/model_registry/model_catalog/test_model_search.py (1)

21-22: Advanced artifact filter test correctly uses AND/OR validators; watch catalog assumption and consider simplifying logic selection

The new test_filter_query_advanced_model_search parametrization plus imports of validate_model_artifacts_match_criteria_and / _or are wired correctly:

  • models_from_filter_query supplies model names per filterQuery.
  • Artifacts are fetched via fetch_all_artifacts_with_dynamic_paging, and the appropriate validator is chosen based on logic_type.
  • Aggregating failing model_names into errors gives a helpful assertion message.

Two minor points to consider:

  1. Assumption on catalog ID

    The test always fetches artifacts from sources/{VALIDATED_CATALOG_ID} while the filterQuery itself doesn’t constrain source/catalog. That’s fine as long as all these filter queries only ever match models in the validated catalog. If new catalogs gain compatible custom properties, this could start failing with HTTP/ResourceNotFoundError even though the filter logic is correct.

    If you expect broader usage later, you might want to:

    • Either pass a sourceLabel/source_id constraint into the filter query, or
    • Derive the source_id from the models response instead of hard-coding VALIDATED_CATALOG_ID.
  2. Simplify validator selection and shorten the error message (Ruff TRY003)

    You can avoid the explicit if/elif/else and the long ValueError message by dispatching via a map:

```diff
-        validation_result = None
-
-        # Select validation function based on logic type
-        if logic_type == "and":
-            validation_result = validate_model_artifacts_match_criteria_and(
-                all_model_artifacts=all_model_artifacts, expected_validations=expected_value, model_name=model_name
-            )
-        elif logic_type == "or":
-            validation_result = validate_model_artifacts_match_criteria_or(
-                all_model_artifacts=all_model_artifacts, expected_validations=expected_value, model_name=model_name
-            )
-        else:
-            raise ValueError(f"Invalid logic_type: {logic_type}. Must be 'and' or 'or'")
+        validators = {
+            "and": validate_model_artifacts_match_criteria_and,
+            "or": validate_model_artifacts_match_criteria_or,
+        }
+        try:
+            validator = validators[logic_type]
+        except KeyError:
+            raise ValueError(f"Invalid logic_type: {logic_type}") from None
+        validation_result = validator(
+            all_model_artifacts=all_model_artifacts,
+            expected_validations=expected_value,
+            model_name=model_name,
+        )
```
    This keeps the error message short (addressing TRY003) and makes it easy to extend if you add more logic types later.
    
    

Overall, the test structure and use of the new validators look solid.

Also applies to: 583-679

tests/model_registry/model_catalog/utils.py (1)

971-1105: Criterion-based artifact validators look correct; consider a small robustness tweak

The new helpers _validate_single_criterion, validate_model_artifacts_match_criteria_and, and validate_model_artifacts_match_criteria_or are consistent with their intended behavior:

  • Type coercion based on key_type and comparison modes (exact, min, max, contains) is straightforward.
  • The AND helper short-circuits on first failing condition per artifact and returns True as soon as one artifact satisfies all validations.
  • The OR helper returns True on the first successful criterion across all artifacts and logs failures clearly.

One low-risk robustness improvement is around customProperties access; both validators currently do:

custom_properties = artifact["customProperties"]

If an artifact ever lacks customProperties or has it as None, this will raise rather than reporting a clean “missing” validation failure. Since _validate_single_criterion already handles a missing key gracefully, you can make the outer functions more defensive:

-        artifact_name = artifact.get("name")
-        custom_properties = artifact["customProperties"]
+        artifact_name = artifact.get("name")
+        custom_properties = artifact.get("customProperties") or {}

Apply the same pattern in the OR validator. That keeps the behavior for well-formed artifacts identical while avoiding KeyError/TypeError if future catalog entries are incomplete but you still want the test to fail via the returned False rather than an unexpected exception.

Everything else in these helpers looks good.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ee22b90 and 2da5bc8.

📒 Files selected for processing (4)
  • tests/model_registry/model_catalog/conftest.py (2 hunks)
  • tests/model_registry/model_catalog/test_model_search.py (2 hunks)
  • tests/model_registry/model_catalog/utils.py (1 hunks)
  • tests/model_registry/utils.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
tests/model_registry/model_catalog/conftest.py (2)
tests/model_registry/model_catalog/utils.py (1)
  • get_models_from_catalog_api (694-747)
tests/model_registry/conftest.py (2)
  • model_catalog_rest_url (646-655)
  • model_registry_rest_headers (323-324)
tests/model_registry/model_catalog/test_model_search.py (4)
tests/model_registry/model_catalog/utils.py (4)
  • validate_model_artifacts_match_criteria_and (1024-1068)
  • validate_model_artifacts_match_criteria_or (1071-1105)
  • ResourceNotFoundError (30-31)
  • fetch_all_artifacts_with_dynamic_paging (750-781)
tests/model_registry/utils.py (1)
  • get_model_catalog_pod (660-663)
tests/model_registry/model_catalog/conftest.py (1)
  • models_from_filter_query (205-227)
tests/model_registry/conftest.py (2)
  • model_catalog_rest_url (646-655)
  • model_registry_rest_headers (323-324)
🪛 Ruff (0.14.5)
tests/model_registry/model_catalog/test_model_search.py

669-669: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (2)
tests/model_registry/utils.py (1)

701-702: Conditional params logging is appropriate

Only logging params when truthy reduces noisy logs without changing request behavior; this is fine even when params is {} (no log is acceptable here).

tests/model_registry/model_catalog/test_model_search.py (1)

29-29: Module-wide pytestmark usage is consistent

Applying updated_dsc_component_state_scope_session and model_registry_namespace at module level keeps individual test signatures lean and aligns with typical pytest patterns.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
tests/model_registry/model_catalog/utils.py (3)

971-1021: Clarify type handling and comparison semantics in _validate_single_criterion

The helper assumes validation["value"] is already the correct Python type and that contains is only used with string value. If a caller accidentally passes a mismatched type (e.g., string "10" for an int_value/min comparison, or an int for contains), you can get surprising results or a TypeError.

Consider tightening this a bit:

  • Normalize expected_val to the same Python type you derive for artifact_value (e.g., cast to int/float for numeric key types) so JSON‑originated strings don’t silently mis‑compare.
  • For contains, explicitly coerce expected_val to str or assert that it is a string, to avoid runtime errors.
  • Optionally add an else branch for unsupported comparison_type that logs a warning and returns a clear "unknown comparison" message.

This keeps the helper defensive against misconfigured tests while preserving its current behavior for well‑formed inputs.
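The normalization suggested in the first bullet could look roughly like this (hypothetical helper; the real code may structure it differently):

```python
def normalize_expected(expected, key_type: str):
    # Coerce JSON/YAML-originated values to the same Python type used
    # for the artifact value, so "10" compares as 10 for numeric keys.
    if key_type == "int_value":
        return int(expected)
    if key_type == "double_value":
        return float(expected)
    return str(expected)
```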


1024-1068: Make validate_model_artifacts_match_criteria_and more defensive against edge cases

Two small robustness points:

  • custom_properties = artifact["customProperties"] will raise KeyError if an artifact is missing that field. Since _validate_single_criterion already handles missing keys gracefully, you could safely use artifact.get("customProperties", {}) here to avoid blowing up on slightly different API shapes.
  • When expected_validations is empty, conditions_passed == len(expected_validations) is 0 == 0, so the first artifact will cause the function to return True. If an empty validations list is not a valid use‑case, adding a quick assert expected_validations (or an early return with a clear log) would prevent accidental vacuous “success”.

These tweaks keep failures in the test data or API shape from surfacing as opaque KeyErrors or unexpected green tests.


1071-1105: Align OR‑criteria helper with AND helper for consistency and resilience

validate_model_artifacts_match_criteria_or is very similar to the AND variant but slightly less defensive:

  • artifact_name = artifact.get("name") yields None in log messages if the name is missing. Using the same default as the AND variant (.get("name", "missing_artifact_name")) would make logs easier to read.
  • As with the AND helper, custom_properties = artifact["customProperties"] can be safely changed to .get("customProperties", {}) to avoid a hard failure if some artifacts don’t carry custom properties.

Functionally it’s correct; these are just small consistency/robustness improvements with low risk.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 2da5bc8 and 965f689.

📒 Files selected for processing (1)
  • tests/model_registry/model_catalog/utils.py (1 hunks)

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
tests/model_registry/model_catalog/utils.py (3)

1010-1061: Tighten key_type handling and comparison robustness in _validate_single_criterion

The structure and intent look good, but a couple of details are worth tightening up:

  • key_type is used both as the nested dict key (custom_properties.get(key_name, {}).get(key_type, None)) and as a discriminator in the conversion branch (if key_type == "int_value", etc.). This only works if your artifact payloads actually use "int_value", "double_value", "string_value" as keys. If the API returns "intValue", "doubleValue", "stringValue" (or any other variant) and you pass those through, you’ll always hit the “unknown key_type” path and silently fail validations. It’s safer to either:
    • Normalize key_type once (e.g., map camelCase to the expected internal names), or
    • Accept both variants in the conversion branch (e.g., if key_type in ("int_value", "intValue"): etc.).
  • For numeric comparisons (min/max), this assumes expected_val is already numeric. If these come from YAML/JSON as strings, you’ll get a TypeError when comparing int/float to str. Consider normalizing expected_val alongside artifact_value for numeric key types.
  • If comparison_type is misspelled or unsupported, condition_met stays False without any explicit signal. Logging or raising on unknown comparison_type would help catch misconfigured test data early.

A small normalization step around key_type and an explicit guard for unsupported comparison_type would make this helper much more robust to configuration drift.

-    # Convert value to appropriate type
-    try:
-        if key_type == "int_value":
-            artifact_value = int(raw_value)
-        elif key_type == "double_value":
-            artifact_value = float(raw_value)
-        elif key_type == "string_value":
-            artifact_value = str(raw_value)
-        else:
-            LOGGER.warning(f"Unknown key_type: {key_type}")
-            return False, f"{key_name}: unknown type {key_type}"
-    except (ValueError, TypeError):
-        return False, f"{key_name}: conversion error"
+    # Normalize and convert value to appropriate type
+    int_keys = ("int_value", "intValue")
+    double_keys = ("double_value", "doubleValue")
+    string_keys = ("string_value", "stringValue")
+
+    try:
+        if key_type in int_keys:
+            artifact_value = int(raw_value)
+        elif key_type in double_keys:
+            artifact_value = float(raw_value)
+        elif key_type in string_keys:
+            artifact_value = str(raw_value)
+        else:
+            LOGGER.warning(f"Unknown key_type: {key_type}")
+            return False, f"{key_name}: unknown type {key_type}"
+    except (ValueError, TypeError):
+        return False, f"{key_name}: conversion error"
+
+    if comparison_type not in {"exact", "min", "max", "contains"}:
+        LOGGER.warning(f"Unknown comparison_type: {comparison_type}")
+        return False, f"{key_name}: unknown comparison {comparison_type}"

1063-1107: Confirm semantics for empty validation lists in validate_model_artifacts_match_criteria_and

The AND logic per artifact is clear and matches the docstring: the model passes if at least one artifact satisfies all validations, and you short‑circuit on the first failing condition per artifact, which is efficient.

One edge case to be aware of: if expected_validations is empty, conditions_passed == len(expected_validations) will be True for the first artifact, so the function returns True (“passes all 0 validations”). If you never call this with an empty list, that’s fine; otherwise you may want an explicit guard to treat “no criteria” as a configuration error instead of an automatic pass.

 def validate_model_artifacts_match_criteria_and(
@@
-    for artifact in all_model_artifacts:
+    if not expected_validations:
+        raise ValueError("expected_validations must be non-empty for AND logic")
+
+    for artifact in all_model_artifacts:
@@
-        if conditions_passed == len(expected_validations):
+        if conditions_passed == len(expected_validations):
             LOGGER.info(

1110-1144: Align OR semantics and logging with AND helper, and handle nameless artifacts

The OR helper is straightforward and efficient, but a couple of small tweaks could improve consistency and avoid surprises:

  • artifact_name = artifact.get("name") can yield None, which then flows into _validate_single_criterion and log messages. Using the same default as the AND variant ("missing_artifact_name") would keep logs cleaner and symmetric.
  • The current semantics are “model passes if ∃ artifact, ∃ validation such that the artifact satisfies that validation”. That’s likely what you want for OR filters, but it’s slightly weaker than “∃ artifact such that it satisfies (v1 OR v2 OR …) as a group”. If you ever need grouped ORs over multiple criteria for the same artifact, you may want to compute all condition results per artifact (like in the earlier combined suggestion) and apply any() on that set before moving to the next artifact.

If the current semantics match how the backend evaluates OR filter queries, only the artifact_name default needs adjusting.

-    for artifact in all_model_artifacts:
-        artifact_name = artifact.get("name")
+    for artifact in all_model_artifacts:
+        artifact_name = artifact.get("name", "missing_artifact_name")
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 965f689 and 1f02dad.

📒 Files selected for processing (1)
  • tests/model_registry/model_catalog/utils.py (1 hunks)

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
tests/model_registry/model_catalog/utils.py (1)

1048-1058: Handle unknown comparison types explicitly.

Unknown comparison types silently result in condition_met = False without logging or error indication, making debugging difficult.

Apply this diff to add explicit handling:

     # Perform comparison based on type
     condition_met = False
     if comparison_type == "exact":
         condition_met = artifact_value == expected_val
     elif comparison_type == "min":
         condition_met = artifact_value >= expected_val
     elif comparison_type == "max":
         condition_met = artifact_value <= expected_val
     elif comparison_type == "contains" and key_type == "string_value":
         condition_met = expected_val in artifact_value
+    else:
+        LOGGER.warning(f"Unknown comparison type '{comparison_type}' for {key_name}")
+        return False, f"{key_name}: unknown comparison '{comparison_type}'"
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 1f02dad and 1d674dd.

📒 Files selected for processing (1)
  • tests/model_registry/model_catalog/utils.py (1 hunks)
🔇 Additional comments (2)
tests/model_registry/model_catalog/utils.py (2)

1086-1104: LGTM: AND validation logic is correct.

The function correctly implements "at least one artifact satisfies ALL criteria" semantics with appropriate logging.


1107-1123: LGTM: OR validation logic is correct.

The function correctly implements "at least one artifact satisfies AT LEAST ONE criterion" semantics with appropriate logging for both success and failure cases.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/model_registry/model_catalog/test_model_search.py (1)

669-669: Extract the error message to improve maintainability.

The long error message in the ValueError should be extracted as suggested by static analysis.

Apply this diff:

+            INVALID_LOGIC_TYPE_MSG = "Invalid logic_type: {}. Must be 'and' or 'or'"
             else:
-                raise ValueError(f"Invalid logic_type: {logic_type}. Must be 'and' or 'or'")
+                raise ValueError(INVALID_LOGIC_TYPE_MSG.format(logic_type))
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 5dc6fe8 and a99f9e1.

📒 Files selected for processing (4)
  • tests/model_registry/model_catalog/conftest.py (2 hunks)
  • tests/model_registry/model_catalog/test_model_search.py (2 hunks)
  • tests/model_registry/model_catalog/utils.py (1 hunks)
  • tests/model_registry/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/model_registry/utils.py
  • tests/model_registry/model_catalog/conftest.py
  • tests/model_registry/model_catalog/utils.py
🧰 Additional context used
🧬 Code graph analysis (1)
tests/model_registry/model_catalog/test_model_search.py (3)
tests/model_registry/model_catalog/utils.py (3)
  • validate_model_artifacts_match_criteria_and (1086-1104)
  • validate_model_artifacts_match_criteria_or (1107-1123)
  • fetch_all_artifacts_with_dynamic_paging (789-820)
tests/model_registry/model_catalog/conftest.py (1)
  • models_from_filter_query (205-227)
tests/model_registry/conftest.py (2)
  • model_catalog_rest_url (640-649)
  • model_registry_rest_headers (324-325)
🪛 Ruff (0.14.6)
tests/model_registry/model_catalog/test_model_search.py

669-669: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (4)
tests/model_registry/model_catalog/test_model_search.py (4)

21-22: LGTM!

The new validator imports are correctly added and utilized by the new test below.


583-632: LGTM!

The test parametrization is well-structured with descriptive test IDs and comprehensive coverage of AND/OR filter logic scenarios.


29-29: Verify that removing test_idp_user from pytestmark doesn't break tests.

The test_idp_user fixture has been removed from the pytestmark list. Confirm that none of the test methods in this file depend on this fixture, either directly or indirectly, by searching for any references to test_idp_user within the test implementations.


648-648: Verify catalog ID assumption or make it dynamic.

The test hardcodes VALIDATED_CATALOG_ID when constructing the artifacts URL. Verify that the models_from_filter_query fixture guarantees all returned models belong to this catalog. If models from other catalogs are possible, the artifact fetch will fail with 404 errors.

Consider one of these solutions:

Solution 1: Ensure the fixture only returns models from the validated catalog by modifying the filter query to include a catalog constraint.

Solution 2: Make the test fetch the catalog ID dynamically for each model. Modify the fixture to return tuples of (model_name, catalog_id) instead of just model names.
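Solution 2 could start from a small mapping helper like this sketch (the source_id field name is an assumption about the catalog response shape, not a confirmed API field):

```python
def models_with_sources(items: list[dict]) -> list[tuple[str, str]]:
    # Derive (model_name, source_id) pairs from the catalog response so
    # the artifacts URL can be built per model instead of hard-coding
    # VALIDATED_CATALOG_ID in the test.
    return [(item["name"], item["source_id"]) for item in items]
```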

@dbasunag
Collaborator Author

dbasunag commented Dec 1, 2025

/verified

@rhods-ci-bot rhods-ci-bot added Verified Verified pr in Jenkins commented-by-fege labels Dec 1, 2025
Contributor

@fege fege left a comment


/lgtm

@dbasunag dbasunag enabled auto-merge (squash) December 2, 2025 13:16
Contributor

@kpunwatk kpunwatk left a comment


/lgtm

@dbasunag dbasunag merged commit 6c29e38 into opendatahub-io:main Dec 2, 2025
12 checks passed
@dbasunag dbasunag deleted the advanced_search branch December 2, 2025 15:20
@github-actions

github-actions bot commented Dec 2, 2025

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.

mwaykole pushed a commit to mwaykole/opendatahub-tests that referenced this pull request Jan 23, 2026
* Add tests with artifacts property

* default name for artifacts missing name

* Add suggested code based on Federico's comment
