Add tests with artifacts property #882
📝 Walkthrough

Adds a pytest fixture to fetch model names from catalog filter queries, introduces per-criterion AND/OR artifact-validation helpers, adds a parameterized test exercising advanced filter queries against model artifacts, and makes params logging conditional in an HTTP helper.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ❌ 1 failed (inconclusive), ✅ 2 passed
Actionable comments posted: 0
🧹 Nitpick comments (3)
tests/model_registry/model_catalog/conftest.py (1)
20-20: `models_from_filter_query` fixture cleanly reuses catalog API helper

The fixture wiring to `get_models_from_catalog_api` and the assertion/logging around returned models look correct and align with how other tests build `filterQuery` via `additional_params`.

If you want to tighten readability/typing a bit, you could annotate the fixture arguments and return type:
```diff
-@pytest.fixture
-def models_from_filter_query(
-    request,
-    model_catalog_rest_url: list[str],
-    model_registry_rest_headers: dict[str, str],
-):
+@pytest.fixture
+def models_from_filter_query(
+    request: pytest.FixtureRequest,
+    model_catalog_rest_url: list[str],
+    model_registry_rest_headers: dict[str, str],
+) -> list[str]:
```

Purely optional, as the current implementation is already clear.
Also applies to: 204-227
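As a sketch of the pattern the fixture follows, the core logic can be reduced to "resolve a `filterQuery` into model names, failing loudly on an empty result". The helper and data below are illustrative stand-ins based on this review's context, not the project's exact code:

```python
# Hypothetical sketch of the fixture's core logic. `get_models` stands in
# for the real get_models_from_catalog_api helper.
def model_names_for_filter_query(get_models, filter_query: str) -> list[str]:
    models = get_models(filter_query)
    # Mirror the fixture's assertion: an empty result is a test failure
    assert models, f"No models matched filterQuery: {filter_query!r}"
    return [model["name"] for model in models]


# Fake catalog API helper for illustration only
def fake_get_models(filter_query: str) -> list[dict]:
    catalog = [
        {"name": "granite-8b", "license": "apache-2.0"},
        {"name": "mistral-7b", "license": "apache-2.0"},
    ]
    return [m for m in catalog if m["license"] in filter_query]


print(model_names_for_filter_query(fake_get_models, 'license = "apache-2.0"'))
# ['granite-8b', 'mistral-7b']
```

In the real fixture, `request.param` carries the `filterQuery` via indirect parametrization, which is what lets each parametrized test case drive a different query through the same fixture.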
tests/model_registry/model_catalog/test_model_search.py (1)
21-22: Advanced artifact filter test correctly uses AND/OR validators; watch catalog assumption and consider simplifying logic selectionThe new
test_filter_query_advanced_model_searchparametrization plus imports ofvalidate_model_artifacts_match_criteria_and/_orare wired correctly:
models_from_filter_querysupplies model names perfilterQuery.- Artifacts are fetched via
fetch_all_artifacts_with_dynamic_paging, and the appropriate validator is chosen based onlogic_type.- Aggregating failing
model_names intoerrorsgives a helpful assertion message.Two minor points to consider:
Assumption on catalog ID

The test always fetches artifacts from `sources/{VALIDATED_CATALOG_ID}` while the `filterQuery` itself doesn't constrain source/catalog. That's fine as long as all these filter queries only ever match models in the validated catalog. If new catalogs gain compatible custom properties, this could start failing with HTTP/`ResourceNotFoundError` even though the filter logic is correct.

If you expect broader usage later, you might want to:

- Either pass a `sourceLabel`/`source_id` constraint into the filter query, or
- Derive the `source_id` from the models response instead of hard-coding `VALIDATED_CATALOG_ID`.

Simplify validator selection and shorten the error message (Ruff TRY003)

You can avoid the explicit `if`/`elif`/`else` and the long `ValueError` message by dispatching via a map:
```python
validation_result = None

# Select validation function based on logic type
if logic_type == "and":
    validation_result = validate_model_artifacts_match_criteria_and(
        all_model_artifacts=all_model_artifacts,
        expected_validations=expected_value,
        model_name=model_name,
    )
elif logic_type == "or":
    validation_result = validate_model_artifacts_match_criteria_or(
        all_model_artifacts=all_model_artifacts,
        expected_validations=expected_value,
        model_name=model_name,
    )
else:
    raise ValueError(f"Invalid logic_type: {logic_type}. Must be 'and' or 'or'")
```
```python
validators = {
    "and": validate_model_artifacts_match_criteria_and,
    "or": validate_model_artifacts_match_criteria_or,
}
try:
    validator = validators[logic_type]
except KeyError:
    raise ValueError(f"Invalid logic_type: {logic_type}") from None
validation_result = validator(
    all_model_artifacts=all_model_artifacts,
    expected_validations=expected_value,
    model_name=model_name,
)
```

This keeps the error message short (addressing TRY003) and makes it easy to extend if you add more logic types later.

Overall, the test structure and use of the new validators look solid.
Also applies to: 583-679
tests/model_registry/model_catalog/utils.py (1)
971-1105: Criterion-based artifact validators look correct; consider a small robustness tweak

The new helpers `_validate_single_criterion`, `validate_model_artifacts_match_criteria_and`, and `validate_model_artifacts_match_criteria_or` are consistent with their intended behavior:
- Type coercion based on `key_type` and comparison modes (`exact`, `min`, `max`, `contains`) is straightforward.
- The AND helper short-circuits on the first failing condition per artifact and returns `True` as soon as one artifact satisfies all validations.
- The OR helper returns `True` on the first successful criterion across all artifacts and logs failures clearly.

One low-risk robustness improvement is around `customProperties` access; both validators currently do:

```python
custom_properties = artifact["customProperties"]
```

If an artifact ever lacks `customProperties` or has it as `None`, this will raise rather than reporting a clean "missing" validation failure. Since `_validate_single_criterion` already handles a missing key gracefully, you can make the outer functions more defensive:

```diff
- artifact_name = artifact.get("name")
- custom_properties = artifact["customProperties"]
+ artifact_name = artifact.get("name")
+ custom_properties = artifact.get("customProperties") or {}
```

Apply the same pattern in the OR validator. That keeps the behavior for well-formed artifacts identical while avoiding `KeyError`/`TypeError` if future catalog entries are incomplete but you still want the test to fail via the returned `False` rather than an unexpected exception.

Everything else in these helpers looks good.
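To make the comparison semantics concrete, here is an illustrative reimplementation of a per-criterion check in the spirit of `_validate_single_criterion`. The signature, `key_type` names, and return shape are assumptions drawn from this review, not the project's exact code:

```python
def check_criterion(custom_properties: dict, key_name: str, key_type: str,
                    comparison_type: str, expected_val):
    """Return (condition_met, failure_reason) for one criterion."""
    # Defensive nested lookup: missing key or None block yields "missing"
    raw_value = (custom_properties.get(key_name) or {}).get(key_type)
    if raw_value is None:
        return False, f"{key_name}: missing"
    # Coerce the artifact value based on the declared key_type
    try:
        if key_type == "int_value":
            artifact_value = int(raw_value)
        elif key_type == "double_value":
            artifact_value = float(raw_value)
        else:
            artifact_value = str(raw_value)
    except (ValueError, TypeError):
        return False, f"{key_name}: conversion error"
    # Dispatch table makes unsupported comparison types an explicit failure
    comparisons = {
        "exact": lambda: artifact_value == expected_val,
        "min": lambda: artifact_value >= expected_val,
        "max": lambda: artifact_value <= expected_val,
        "contains": lambda: str(expected_val) in str(artifact_value),
    }
    if comparison_type not in comparisons:
        return False, f"{key_name}: unknown comparison {comparison_type!r}"
    met = comparisons[comparison_type]()
    return met, None if met else f"{key_name}: {artifact_value!r} failed {comparison_type}"


props = {"parameters": {"int_value": "8000000000"}}
print(check_criterion(props, "parameters", "int_value", "min", 7_000_000_000))
# (True, None)
```

Note how the dispatch table makes the "unknown comparison" case an explicit, logged-style failure rather than a silent `False`, which is the same concern raised again in the later review rounds below.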
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- tests/model_registry/model_catalog/conftest.py (2 hunks)
- tests/model_registry/model_catalog/test_model_search.py (2 hunks)
- tests/model_registry/model_catalog/utils.py (1 hunks)
- tests/model_registry/utils.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
tests/model_registry/model_catalog/conftest.py (2)
tests/model_registry/model_catalog/utils.py (1)
- get_models_from_catalog_api (694-747)

tests/model_registry/conftest.py (2)
- model_catalog_rest_url (646-655)
- model_registry_rest_headers (323-324)
tests/model_registry/model_catalog/test_model_search.py (4)
tests/model_registry/model_catalog/utils.py (4)
- validate_model_artifacts_match_criteria_and (1024-1068)
- validate_model_artifacts_match_criteria_or (1071-1105)
- ResourceNotFoundError (30-31)
- fetch_all_artifacts_with_dynamic_paging (750-781)

tests/model_registry/utils.py (1)
- get_model_catalog_pod (660-663)

tests/model_registry/model_catalog/conftest.py (1)
- models_from_filter_query (205-227)

tests/model_registry/conftest.py (2)
- model_catalog_rest_url (646-655)
- model_registry_rest_headers (323-324)
🪛 Ruff (0.14.5)
tests/model_registry/model_catalog/test_model_search.py
669-669: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (2)
tests/model_registry/utils.py (1)
701-702: Conditional params logging is appropriate

Only logging `params` when truthy reduces noisy logs without changing request behavior; this is fine even when `params` is `{}` (no log is acceptable here).

tests/model_registry/model_catalog/test_model_search.py (1)
29-29: Module-wide pytestmark usage is consistent

Applying `updated_dsc_component_state_scope_session` and `model_registry_namespace` at module level keeps individual test signatures lean and aligns with typical pytest patterns.
Actionable comments posted: 0
🧹 Nitpick comments (3)
tests/model_registry/model_catalog/utils.py (3)
971-1021: Clarify type handling and comparison semantics in `_validate_single_criterion`

The helper assumes `validation["value"]` is already the correct Python type and that `contains` is only used with a string `value`. If a caller accidentally passes a mismatched type (e.g., string `"10"` for an `int_value`/`min` comparison, or an int for `contains`), you can get surprising results or a `TypeError`.

Consider tightening this a bit:

- Normalize `expected_val` to the same Python type you derive for `artifact_value` (e.g., cast to `int`/`float` for numeric key types) so JSON-originated strings don't silently mis-compare.
- For `contains`, explicitly coerce `expected_val` to `str` or assert that it is a string, to avoid runtime errors.
- Optionally add an `else` branch for unsupported `comparison_type` that logs a warning and returns a clear "unknown comparison" message.

This keeps the helper defensive against misconfigured tests while preserving its current behavior for well-formed inputs.
1024-1068: Make `validate_model_artifacts_match_criteria_and` more defensive against edge cases

Two small robustness points:

- `custom_properties = artifact["customProperties"]` will raise `KeyError` if an artifact is missing that field. Since `_validate_single_criterion` already handles missing keys gracefully, you could safely use `artifact.get("customProperties", {})` here to avoid blowing up on slightly different API shapes.
- When `expected_validations` is empty, `conditions_passed == len(expected_validations)` is `0 == 0`, so the first artifact will cause the function to return `True`. If an empty validations list is not a valid use-case, adding a quick `assert expected_validations` (or an early return with a clear log) would prevent accidental vacuous "success".

These tweaks keep failures in the test data or API shape from surfacing as opaque `KeyError`s or unexpected green tests.
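The vacuous-pass edge case is easy to demonstrate in isolation:

```python
# With an empty criteria list, "all conditions passed" is trivially true
# for the first artifact, so an AND validator would return True having
# checked nothing. Python's all() exhibits the same vacuous-truth behavior.
expected_validations: list[dict] = []
conditions_passed = 0

print(conditions_passed == len(expected_validations))  # True (0 == 0)
print(all([]))  # True as well
```

This is why an explicit guard on empty `expected_validations` is worth the extra line: without it, a typo that empties the criteria list turns every model into a pass.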
1071-1105: Align OR-criteria helper with AND helper for consistency and resilience

`validate_model_artifacts_match_criteria_or` is very similar to the AND variant but slightly less defensive:

- `artifact_name = artifact.get("name")` yields `None` in log messages if the name is missing. Using the same default as the AND variant (`.get("name", "missing_artifact_name")`) would make logs easier to read.
- As with the AND helper, `custom_properties = artifact["customProperties"]` can be safely changed to `.get("customProperties", {})` to avoid a hard failure if some artifacts don't carry custom properties.

Functionally it's correct; these are just small consistency/robustness improvements with low risk.
Actionable comments posted: 0
🧹 Nitpick comments (3)
tests/model_registry/model_catalog/utils.py (3)
1010-1061: Tighten `key_type` handling and comparison robustness in `_validate_single_criterion`

The structure and intent look good, but a couple of details are worth tightening up:

- `key_type` is used both as the nested dict key (`custom_properties.get(key_name, {}).get(key_type, None)`) and as a discriminator in the conversion branch (`if key_type == "int_value"`, etc.). This only works if your artifact payloads actually use `"int_value"`, `"double_value"`, `"string_value"` as keys. If the API returns `"intValue"`, `"doubleValue"`, `"stringValue"` (or any other variant) and you pass those through, you'll always hit the "unknown key_type" path and silently fail validations. It's safer to either:
  - Normalize `key_type` once (e.g., map camelCase to the expected internal names), or
  - Accept both variants in the conversion branch (e.g., `if key_type in ("int_value", "intValue"):` etc.).
- For numeric comparisons (`min`/`max`), this assumes `expected_val` is already numeric. If these come from YAML/JSON as strings, you'll get a `TypeError` when comparing `int`/`float` to `str`. Consider normalizing `expected_val` alongside `artifact_value` for numeric key types.
- If `comparison_type` is misspelled or unsupported, `condition_met` stays `False` without any explicit signal. Logging or raising on unknown `comparison_type` would help catch misconfigured test data early.

A small normalization step around `key_type` and an explicit guard for unsupported `comparison_type` would make this helper much more robust to configuration drift.

```diff
-    # Convert value to appropriate type
-    try:
-        if key_type == "int_value":
-            artifact_value = int(raw_value)
-        elif key_type == "double_value":
-            artifact_value = float(raw_value)
-        elif key_type == "string_value":
-            artifact_value = str(raw_value)
-        else:
-            LOGGER.warning(f"Unknown key_type: {key_type}")
-            return False, f"{key_name}: unknown type {key_type}"
-    except (ValueError, TypeError):
-        return False, f"{key_name}: conversion error"
+    # Normalize and convert value to appropriate type
+    int_keys = ("int_value", "intValue")
+    double_keys = ("double_value", "doubleValue")
+    string_keys = ("string_value", "stringValue")
+
+    try:
+        if key_type in int_keys:
+            artifact_value = int(raw_value)
+        elif key_type in double_keys:
+            artifact_value = float(raw_value)
+        elif key_type in string_keys:
+            artifact_value = str(raw_value)
+        else:
+            LOGGER.warning(f"Unknown key_type: {key_type}")
+            return False, f"{key_name}: unknown type {key_type}"
+    except (ValueError, TypeError):
+        return False, f"{key_name}: conversion error"
+
+    if comparison_type not in {"exact", "min", "max", "contains"}:
+        LOGGER.warning(f"Unknown comparison_type: {comparison_type}")
+        return False, f"{key_name}: unknown comparison {comparison_type}"
```
1063-1107: Confirm semantics for empty validation lists in `validate_model_artifacts_match_criteria_and`

The AND logic per artifact is clear and matches the docstring: the model passes if at least one artifact satisfies all validations, and you short-circuit on the first failing condition per artifact, which is efficient.

One edge case to be aware of: if `expected_validations` is empty, `conditions_passed == len(expected_validations)` will be `True` for the first artifact, so the function returns `True` ("passes all 0 validations"). If you never call this with an empty list, that's fine; otherwise you may want an explicit guard to treat "no criteria" as a configuration error instead of an automatic pass.

```diff
 def validate_model_artifacts_match_criteria_and(
@@
-    for artifact in all_model_artifacts:
+    if not expected_validations:
+        raise ValueError("expected_validations must be non-empty for AND logic")
+
+    for artifact in all_model_artifacts:
@@
-        if conditions_passed == len(expected_validations):
+        if conditions_passed == len(expected_validations):
             LOGGER.info(
```
1110-1144: Align OR semantics and logging with AND helper, and handle nameless artifacts

The OR helper is straightforward and efficient, but a couple of small tweaks could improve consistency and avoid surprises:

- `artifact_name = artifact.get("name")` can yield `None`, which then flows into `_validate_single_criterion` and log messages. Using the same default as the AND variant (`"missing_artifact_name"`) would keep logs cleaner and symmetric.
- The current semantics are "model passes if ∃ artifact, ∃ validation such that the artifact satisfies that validation". That's likely what you want for OR filters, but it's slightly weaker than "∃ artifact such that it satisfies (v1 OR v2 OR …) as a group". If you ever need grouped ORs over multiple criteria for the same artifact, you may want to compute all condition results per artifact (like in the earlier combined suggestion) and apply `any()` on that set before moving to the next artifact.

If the current semantics match how the backend evaluates OR filter queries, only the `artifact_name` default needs adjusting.

```diff
-    for artifact in all_model_artifacts:
-        artifact_name = artifact.get("name")
+    for artifact in all_model_artifacts:
+        artifact_name = artifact.get("name", "missing_artifact_name")
```
Actionable comments posted: 2
🧹 Nitpick comments (1)
tests/model_registry/model_catalog/utils.py (1)
1048-1058: Handle unknown comparison types explicitly.

Unknown comparison types silently result in `condition_met = False` without logging or error indication, making debugging difficult.

Apply this diff to add explicit handling:

```diff
     # Perform comparison based on type
     condition_met = False
     if comparison_type == "exact":
         condition_met = artifact_value == expected_val
     elif comparison_type == "min":
         condition_met = artifact_value >= expected_val
     elif comparison_type == "max":
         condition_met = artifact_value <= expected_val
     elif comparison_type == "contains" and key_type == "string_value":
         condition_met = expected_val in artifact_value
+    else:
+        LOGGER.warning(f"Unknown comparison type '{comparison_type}' for {key_name}")
+        return False, f"{key_name}: unknown comparison '{comparison_type}'"
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- tests/model_registry/model_catalog/utils.py (1 hunks)
🔇 Additional comments (2)
tests/model_registry/model_catalog/utils.py (2)
1086-1104: LGTM: AND validation logic is correct.

The function correctly implements "at least one artifact satisfies ALL criteria" semantics with appropriate logging.
1107-1123: LGTM: OR validation logic is correct.

The function correctly implements "at least one artifact satisfies AT LEAST ONE criterion" semantics with appropriate logging for both success and failure cases.
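The AND semantics approved above ("at least one artifact satisfies ALL criteria") can be sketched with plain data. The artifact shape and property names below are illustrative, not the catalog's exact payload:

```python
def satisfies_all(artifact: dict, criteria: list[tuple[str, str, str]]) -> bool:
    # An artifact passes only if every (key_name, key_type, expected) matches;
    # defensive .get() chains mirror the review's robustness suggestions.
    props = artifact.get("customProperties") or {}
    return all(
        (props.get(name) or {}).get(key_type) == expected
        for name, key_type, expected in criteria
    )


artifacts = [
    {"customProperties": {"license": {"string_value": "apache-2.0"}}},
    {"customProperties": {"license": {"string_value": "apache-2.0"},
                          "quantization": {"string_value": "fp16"}}},
]
criteria = [("license", "string_value", "apache-2.0"),
            ("quantization", "string_value", "fp16")]

# Model-level AND result: does any single artifact satisfy every criterion?
print(any(satisfies_all(a, criteria) for a in artifacts))  # True
```

Here the first artifact fails (no `quantization` property) but the second satisfies both criteria, so the model as a whole passes, which matches the short-circuiting behavior the reviewers describe.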
Force-pushed from 1d674dd to 5dc6fe8
Force-pushed from c3a049d to 5b2610b
Actionable comments posted: 0
🧹 Nitpick comments (1)
tests/model_registry/model_catalog/test_model_search.py (1)
669-669: Extract the error message to improve maintainability.

The long error message in the `ValueError` should be extracted as suggested by static analysis.

Apply this diff:

```diff
+INVALID_LOGIC_TYPE_MSG = "Invalid logic_type: {}. Must be 'and' or 'or'"
     else:
-        raise ValueError(f"Invalid logic_type: {logic_type}. Must be 'and' or 'or'")
+        raise ValueError(INVALID_LOGIC_TYPE_MSG.format(logic_type))
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- tests/model_registry/model_catalog/conftest.py (2 hunks)
- tests/model_registry/model_catalog/test_model_search.py (2 hunks)
- tests/model_registry/model_catalog/utils.py (1 hunks)
- tests/model_registry/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/model_registry/utils.py
- tests/model_registry/model_catalog/conftest.py
- tests/model_registry/model_catalog/utils.py
🧰 Additional context used
🧬 Code graph analysis (1)
tests/model_registry/model_catalog/test_model_search.py (3)
tests/model_registry/model_catalog/utils.py (3)
- validate_model_artifacts_match_criteria_and (1086-1104)
- validate_model_artifacts_match_criteria_or (1107-1123)
- fetch_all_artifacts_with_dynamic_paging (789-820)

tests/model_registry/model_catalog/conftest.py (1)
- models_from_filter_query (205-227)

tests/model_registry/conftest.py (2)
- model_catalog_rest_url (640-649)
- model_registry_rest_headers (324-325)
🪛 Ruff (0.14.6)
tests/model_registry/model_catalog/test_model_search.py
669-669: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (4)
tests/model_registry/model_catalog/test_model_search.py (4)
21-22: LGTM!

The new validator imports are correctly added and utilized by the new test below.
583-632: LGTM!

The test parametrization is well-structured with descriptive test IDs and comprehensive coverage of AND/OR filter logic scenarios.
29-29: Verify that removing `test_idp_user` from pytestmark doesn't break tests.

The `test_idp_user` fixture has been removed from the `pytestmark` list. Confirm that none of the test methods in this file depend on this fixture, either directly or indirectly, by searching for any references to `test_idp_user` within the test implementations.
648-648: Verify catalog ID assumption or make it dynamic.

The test hardcodes `VALIDATED_CATALOG_ID` when constructing the artifacts URL. Verify that the `models_from_filter_query` fixture guarantees all returned models belong to this catalog. If models from other catalogs are possible, the artifact fetch will fail with 404 errors.

Consider one of these solutions:

- Solution 1: Ensure the fixture only returns models from the validated catalog by modifying the filter query to include a catalog constraint.
- Solution 2: Make the test fetch the catalog ID dynamically for each model. Modify the fixture to return tuples of `(model_name, catalog_id)` instead of just model names.
/verified

Status of building tag latest: success.
- Add tests with artifacts property
- default name for artifacts missing name
- Add suggested code based on Federico's comment
Summary by CodeRabbit

Tests
- Added a parameterized test exercising advanced filter queries against model artifacts, with per-criterion AND/OR validation helpers and a fixture to fetch model names from catalog filter queries.

Chores
- Made params logging conditional in an HTTP helper.