Add tests for labels endpoint #905
Conversation
Signed-off-by: lugi0 <lgiorgi@redhat.com>
The following are automatically added/executed:
Available user actions:
Supported labels: `/wip`, `/verified`, `/build-push-pr-image`, `/hold`, `/lgtm`, `/cherry-pick`
📝 Walkthrough

Adds tests and supporting utilities to validate model catalog label synchronization: two tests for the labels endpoint (default data and ConfigMap updates), plus helper functions to read labels from ConfigMaps and the API, a verifier, and a fixture to patch the editable ConfigMap.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ❌ Failed checks (1 warning) | ✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 0
🧹 Nitpick comments (4)
tests/model_registry/model_catalog/test_labels_endpoint.py (1)
48-95: Ensure `_check_updated_labels` works with `TimeoutSampler` semantics

The overall pattern (temporary patch via `ResourceEditor`, then polling the API until it reflects the new label) is solid. One potential gotcha is that `_check_updated_labels` only performs assertions and doesn't return anything (i.e., returns `None`):

- If `TimeoutSampler` treats a truthy return value as success and keeps retrying on exceptions or falsy results, then a `None` return could cause an unnecessary timeout even when all assertions pass.
- Making the success condition explicit avoids coupling the test behavior to subtle library semantics.

I suggest making `_check_updated_labels` return a truthy value on success:

```diff
     with ResourceEditor(patches={sources_cm: patches}):

         def _check_updated_labels():
             # Get updated expected labels from ConfigMaps
             expected_labels = get_labels_from_configmaps(
                 admin_client=admin_client, namespace=model_registry_namespace
             )
@@
             # Verify they match (including the new label)
             verify_labels_match(expected_labels=expected_labels, api_labels=api_labels)
+            # Explicitly signal success to TimeoutSampler
+            return True
```

Please double-check how `TimeoutSampler` in this project decides when to stop (truthiness vs. "no exception"). If it already stops on "no exception" regardless of return value, this change is defensive but harmless; if it checks truthiness, it's necessary to avoid spurious timeouts.

tests/model_registry/model_catalog/utils.py (3)
1127-1152: Consider slightly hardening ConfigMap YAML parsing

`get_labels_from_configmaps` is straightforward and matches how other helpers in this file treat ConfigMaps. Two minor robustness points you might consider:

- `yaml.safe_load(...)` can return `None` for empty content, in which case `if "labels" in default_data:` would raise a `TypeError`.
- If `"sources.yaml"` is missing in either ConfigMap, the current code raises `KeyError` with no context.

If you'd like clearer failures and a bit more resilience, something along these lines makes the helper safer without changing its external behavior when the ConfigMaps are well-formed:

```diff
-    default_cm = ConfigMap(name="model-catalog-default-sources", client=admin_client, namespace=namespace)
-    default_data = yaml.safe_load(default_cm.instance.data["sources.yaml"])
-    if "labels" in default_data:
-        labels.extend(default_data["labels"])
+    default_cm = ConfigMap(name="model-catalog-default-sources", client=admin_client, namespace=namespace)
+    default_raw = default_cm.instance.data.get("sources.yaml")
+    assert default_raw is not None, f"'sources.yaml' missing in {default_cm.name}"
+    default_data = yaml.safe_load(default_raw) or {}
+    if "labels" in default_data:
+        labels.extend(default_data["labels"])
@@
-    sources_cm = ConfigMap(name="model-catalog-sources", client=admin_client, namespace=namespace)
-    sources_data = yaml.safe_load(sources_cm.instance.data["sources.yaml"])
-    if "labels" in sources_data:
-        labels.extend(sources_data["labels"])
+    sources_cm = ConfigMap(name="model-catalog-sources", client=admin_client, namespace=namespace)
+    sources_raw = sources_cm.instance.data.get("sources.yaml")
+    assert sources_raw is not None, f"'sources.yaml' missing in {sources_cm.name}"
+    sources_data = yaml.safe_load(sources_raw) or {}
+    if "labels" in sources_data:
+        labels.extend(sources_data["labels"])
```

Not mandatory, but it gives more actionable assertion messages if cluster config drifts.
1155-1169: Guard against missing `items` key in labels API response (optional)

`get_labels_from_api` assumes that `execute_get_command` always returns a dict with an `"items"` key and will raise `KeyError` otherwise. If you prefer a clearer failure mode in tests, you could add an assertion before accessing `"items"`:

```diff
     url = f"{model_catalog_rest_url}labels"
     headers = get_rest_headers(token=user_token)
     response = execute_get_command(url=url, headers=headers)
-    return response["items"]
+    assert "items" in response, f"Labels endpoint response missing 'items': {response}"
+    return response["items"]
```

This keeps behavior functionally similar (still fails fast), but the error message is more targeted when the API contract is broken.
1172-1196: `verify_labels_match` logic is correct; consider minor performance/readability tweak

The current nested-loop comparison correctly enforces that every expected label (by `name`, `displayName`, and `description`) is present in the API response. For the label counts you're likely dealing with, the O(n²) scan is fine.

If you ever expect larger label sets or just want to simplify the logic, you could prebuild a set of tuples from `api_labels` and compare against it:

```diff
-    for expected_label in expected_labels:
-        found = False
-        for api_label in api_labels:
-            if (
-                expected_label.get("name") == api_label.get("name")
-                and expected_label.get("displayName") == api_label.get("displayName")
-                and expected_label.get("description") == api_label.get("description")
-            ):
-                found = True
-                break
-
-        assert found, f"Expected label not found in API response: {expected_label}"
+    api_label_keys = {
+        (lbl.get("name"), lbl.get("displayName"), lbl.get("description")) for lbl in api_labels
+    }
+
+    for expected_label in expected_labels:
+        key = (
+            expected_label.get("name"),
+            expected_label.get("displayName"),
+            expected_label.get("description"),
+        )
+        assert key in api_label_keys, f"Expected label not found in API response: {expected_label}"
```

Not required, but it tightens the intent and scales better if the label set ever grows.
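To make the set-of-tuples comparison concrete, here is a self-contained sketch with inline sample data; the `labels_match` helper and the sample labels are illustrative only, not code from this PR:

```python
from typing import Any


def labels_match(expected_labels: list[dict[str, Any]], api_labels: list[dict[str, Any]]) -> bool:
    """Return True iff every expected (name, displayName, description)
    triple is present in the API labels."""
    # One pass to build the lookup set, then O(1) membership checks
    api_label_keys = {
        (lbl.get("name"), lbl.get("displayName"), lbl.get("description")) for lbl in api_labels
    }
    return all(
        (exp.get("name"), exp.get("displayName"), exp.get("description")) in api_label_keys
        for exp in expected_labels
    )


api = [
    {"name": "nvidia", "displayName": "NVIDIA", "description": "GPU-optimized models"},
    {"name": "ibm", "displayName": "IBM", "description": "Granite models"},
]
assert labels_match([api[0]], api)
assert not labels_match([{"name": "missing"}], api)
```

Unlike the assertion-per-label version, this returns a boolean, so a caller can decide whether to assert immediately or keep polling.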
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
`tests/model_registry/model_catalog/test_labels_endpoint.py` (1 hunks)
`tests/model_registry/model_catalog/utils.py` (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/model_registry/model_catalog/utils.py (1)
  tests/model_registry/utils.py (2): `get_rest_headers` (666-671), `execute_get_command` (715-723)
🔇 Additional comments (2)
tests/model_registry/model_catalog/test_labels_endpoint.py (1)
25-47: Default-data smoke test is clear and uses shared helpers correctly

The smoke test wiring (fixtures, ConfigMap-derived expectations, API call via `get_labels_from_api`, and `verify_labels_match`) looks consistent with the new utilities and keeps the assertions centralized. No issues from my side here.

tests/model_registry/model_catalog/utils.py (1)

26-26: Reusing `get_rest_headers` keeps HTTP access consistent

Pulling in `get_rest_headers` and using it for the labels endpoint keeps header construction aligned with the rest of the model registry tests. This avoids hand-rolled headers and makes future changes (e.g., auth tweaks) easier.
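For readers outside the repo, such a helper usually reduces to building bearer-token headers. The body below is a hypothetical sketch (the real `get_rest_headers` in `tests/model_registry/utils.py` may add or change fields); only the `token` keyword argument is taken from how the new code calls it:

```python
def get_rest_headers(token: str) -> dict[str, str]:
    # Hypothetical sketch: standard bearer-token auth plus a JSON content type.
    # The actual helper in tests/model_registry/utils.py may differ.
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }


headers = get_rest_headers(token="sha256~example-token")
```

Centralizing this in one function is what makes auth changes (a new header, a different scheme) a one-line edit across the test suite.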
Signed-off-by: lugi0 <lgiorgi@redhat.com>
Actionable comments posted: 0
🧹 Nitpick comments (3)
tests/model_registry/model_catalog/test_labels_endpoint.py (1)
56-56: Remove unnecessary variable assignment.

The assignment `_ = labels_configmap_patch` is unnecessary. Pytest fixtures execute their side effects (the ConfigMap patching via `ResourceEditor`) simply by being listed as parameters. The variable assignment serves no purpose.

Apply this diff:

```diff
-    _ = labels_configmap_patch
     def _check_updated_labels():
```

tests/model_registry/model_catalog/utils.py (2)
1156-1170: Consider adding error handling for API response structure.

The function assumes the API response contains an `"items"` key. If the API returns an error response, this could raise a `KeyError`. Consider adding validation:

```diff
 def get_labels_from_api(model_catalog_rest_url: str, user_token: str) -> List[Dict[str, Any]]:
     """
     Get labels from the API endpoint.

     Args:
         model_catalog_rest_url: Base URL for model catalog API
         user_token: Authentication token

     Returns:
         List of label dictionaries from API response
     """
     url = f"{model_catalog_rest_url}labels"
     headers = get_rest_headers(token=user_token)
     response = execute_get_command(url=url, headers=headers)
+    assert "items" in response, f"API response missing 'items' key: {response}"
     return response["items"]
```
1173-1197: Consider optimizing label matching with a set-based approach.

The nested loop has O(n×m) complexity. For test code with small datasets this is acceptable, but a set/dict-based approach would be more efficient. Example optimization using a set of tuples:

```python
def verify_labels_match(expected_labels: List[Dict[str, Any]], api_labels: List[Dict[str, Any]]) -> None:
    """
    Verify that all expected labels are present in the API response.

    Args:
        expected_labels: Labels expected from ConfigMaps
        api_labels: Labels returned by API

    Raises:
        AssertionError: If any expected label is not found in API response
    """
    LOGGER.info(f"Verifying {len(expected_labels)} expected labels against {len(api_labels)} API labels")

    # Create a set of label tuples for efficient lookup
    api_label_set = {
        (label.get("name"), label.get("displayName"), label.get("description")) for label in api_labels
    }

    for expected_label in expected_labels:
        expected_tuple = (
            expected_label.get("name"),
            expected_label.get("displayName"),
            expected_label.get("description"),
        )
        assert expected_tuple in api_label_set, f"Expected label not found in API response: {expected_label}"
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
`tests/model_registry/model_catalog/conftest.py` (2 hunks)
`tests/model_registry/model_catalog/test_labels_endpoint.py` (1 hunks)
`tests/model_registry/model_catalog/utils.py` (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
tests/model_registry/model_catalog/conftest.py (2)
  tests/conftest.py (1): `admin_client` (78-79)
  tests/model_registry/conftest.py (1): `model_registry_namespace` (75-76)
tests/model_registry/model_catalog/test_labels_endpoint.py (5)
  utilities/infra.py (1): `get_openshift_token` (605-619)
  tests/model_registry/model_catalog/utils.py (3): `get_labels_from_configmaps` (1128-1153), `get_labels_from_api` (1156-1170), `verify_labels_match` (1173-1197)
  tests/conftest.py (1): `admin_client` (78-79)
  tests/model_registry/conftest.py (2): `model_registry_namespace` (75-76), `model_catalog_rest_url` (640-649)
  tests/model_registry/model_catalog/conftest.py (1): `labels_configmap_patch` (231-251)
tests/model_registry/model_catalog/utils.py (3)
  tests/model_registry/utils.py (2): `execute_get_command` (714-722), `get_rest_headers` (665-670)
  tests/conftest.py (1): `admin_client` (78-79)
  tests/model_registry/conftest.py (1): `model_catalog_rest_url` (640-649)
🔇 Additional comments (4)
tests/model_registry/model_catalog/conftest.py (1)
230-251: LGTM! Fixture correctly implements ConfigMap patching.

The fixture properly uses `ResourceEditor` to temporarily patch the ConfigMap with a test label, automatically restoring the original state after the test completes. The function scope ensures test isolation.

tests/model_registry/model_catalog/test_labels_endpoint.py (2)

22-43: Test logic is correct and well-structured.

The smoke test properly validates that labels from ConfigMaps match the API response using the helper utilities.

58-71: Polling logic correctly handles ConfigMap propagation delay.

The use of `TimeoutSampler` with a 60-second timeout and 5-second intervals is appropriate for waiting for ConfigMap changes to propagate to the API.

tests/model_registry/model_catalog/utils.py (1)

1128-1153: Function correctly aggregates labels from both ConfigMaps.

The function properly reads labels from both the default and custom ConfigMaps, safely handling cases where the labels key might not exist.
Status of building tag latest: success.
* feat: add tests for labels endpoint

  Signed-off-by: lugi0 <lgiorgi@redhat.com>

* fix: address review comments

  Signed-off-by: lugi0 <lgiorgi@redhat.com>

---------

Signed-off-by: lugi0 <lgiorgi@redhat.com>