
Add tests for labels endpoint #905

Merged
dbasunag merged 3 commits into opendatahub-io:main from lugi0:feat/labels
Dec 3, 2025

Conversation


@lugi0 lugi0 commented Dec 3, 2025

Description

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Tests
    • Added end-to-end tests for the model catalog labels endpoint to validate label synchronization between configuration sources and the REST API, including checks that label updates propagate correctly.
    • Added test helpers and a fixture to read/compare labels and to simulate editable configuration changes for update verification.


Signed-off-by: lugi0 <lgiorgi@redhat.com>
@lugi0 lugi0 requested review from a team, dbasunag and fege as code owners December 3, 2025 14:31

github-actions bot commented Dec 3, 2025

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, add /wip in a comment; to remove it, comment /wip cancel.
  • To block merging of a PR, add /hold in a comment; to unblock merging, comment /hold cancel.
  • To mark a PR as approved, add /lgtm in a comment; to remove approval, add /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name>. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, add /build-push-pr-image in a comment. This creates an image tagged
    pr-<pr_number> in the quay repository; the tag is deleted when the PR is merged or closed.
Supported labels

{'/wip', '/verified', '/build-push-pr-image', '/hold', '/lgtm', '/cherry-pick'}


coderabbitai bot commented Dec 3, 2025

📝 Walkthrough


Adds tests and supporting utilities to validate model catalog label synchronization: two tests for the labels endpoint (default data and configmap-updates) plus helper functions to read labels from ConfigMaps and the API, a verifier, and a fixture to patch the editable ConfigMap.

Changes

  • tests/model_registry/model_catalog/test_labels_endpoint.py — New test module with two tests: test_labels_endpoint_default_data (smoke), which reads expected labels from ConfigMaps and compares them to the REST API response, and test_labels_endpoint_configmap_updates (sanity), which patches the editable ConfigMap and polls the API (TimeoutSampler) for up to 60s to verify updates are reflected.
  • tests/model_registry/model_catalog/utils.py — Adds three functions: get_labels_from_configmaps(admin_client, namespace) to aggregate labels from the model-catalog-default-sources and model-catalog-sources ConfigMaps; get_labels_from_api(model_catalog_rest_url, user_token) to GET labels from the REST endpoint; and verify_labels_match(expected_labels, api_labels) to assert label equality by name, displayName, and description.
  • tests/model_registry/model_catalog/conftest.py — Adds a labels_configmap_patch fixture that creates/targets the editable ConfigMap (using DEFAULT_CUSTOM_MODEL_CATALOG), ensures a labels list in sources.yaml, appends a test label, and yields the applied patch payload via ResourceEditor. Also imports DEFAULT_CUSTOM_MODEL_CATALOG.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Review YAML parsing and modification of sources.yaml to ensure label insertion is correct and idempotent.
  • Verify ResourceEditor patch payload structure and that the fixture targets the correct ConfigMap name/namespace.
  • Validate TimeoutSampler usage (60s total, 5s interval) and the checker function’s retry semantics.
  • Confirm label comparison handles missing/null fields and matches API contract (fields: name, displayName, description).
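The polling mentioned above (60s total, 5s interval) uses the project's TimeoutSampler; a rough stdlib stand-in for that retry pattern, with a hypothetical function name, looks like:

```python
import time
from typing import Callable


def poll_until(check: Callable[[], bool], timeout: float = 60.0, interval: float = 5.0) -> bool:
    """Call `check` repeatedly until it returns truthy or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        # Stop retrying once another sleep would overrun the deadline.
        if time.monotonic() + interval > deadline:
            return False
        time.sleep(interval)
```

Unlike a bare sleep-and-assert, this keeps re-checking so the test passes as soon as the ConfigMap change propagates, rather than always waiting the full window.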

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 77.78%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title check ✅ Passed — The title 'Add tests for labels endpoint' directly and clearly describes the main change: adding new test functionality for the labels endpoint in the model catalog.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
tests/model_registry/model_catalog/test_labels_endpoint.py (1)

48-95: Ensure _check_updated_labels works with TimeoutSampler semantics

The overall pattern (temporary patch via ResourceEditor, then polling the API until it reflects the new label) is solid. One potential gotcha is that _check_updated_labels only performs assertions and doesn’t return anything (i.e., returns None):

  • If TimeoutSampler treats a truthy return value as success and keeps retrying on exceptions or falsy results, then a None return could cause an unnecessary timeout even when all assertions pass.
  • Making the success condition explicit avoids coupling the test behavior to subtle library semantics.

I suggest making _check_updated_labels return a truthy value on success:

         with ResourceEditor(patches={sources_cm: patches}):
 
             def _check_updated_labels():
                 # Get updated expected labels from ConfigMaps
                 expected_labels = get_labels_from_configmaps(
                     admin_client=admin_client, namespace=model_registry_namespace
                 )
@@
                 # Verify they match (including the new label)
                 verify_labels_match(expected_labels=expected_labels, api_labels=api_labels)
+                # Explicitly signal success to TimeoutSampler
+                return True

Please double-check how TimeoutSampler in this project decides when to stop (truthiness vs. “no exception”). If it already stops on “no exception” regardless of return value, this change is defensive but harmless; if it checks truthiness, it’s necessary to avoid spurious timeouts.

tests/model_registry/model_catalog/utils.py (3)

1127-1152: Consider slightly hardening ConfigMap YAML parsing

get_labels_from_configmaps is straightforward and matches how other helpers in this file treat ConfigMaps. Two minor robustness points you might consider:

  • yaml.safe_load(...) can return None for empty content, in which case if "labels" in default_data: would raise a TypeError.
  • If "sources.yaml" is missing in either ConfigMap, the current code raises KeyError with no context.

If you’d like clearer failures and a bit more resilience, something along these lines makes the helper safer without changing its external behavior when the ConfigMaps are well-formed:

-    default_cm = ConfigMap(name="model-catalog-default-sources", client=admin_client, namespace=namespace)
-    default_data = yaml.safe_load(default_cm.instance.data["sources.yaml"])
-    if "labels" in default_data:
-        labels.extend(default_data["labels"])
+    default_cm = ConfigMap(name="model-catalog-default-sources", client=admin_client, namespace=namespace)
+    default_raw = default_cm.instance.data.get("sources.yaml")
+    assert default_raw is not None, f"'sources.yaml' missing in {default_cm.name}"
+    default_data = yaml.safe_load(default_raw) or {}
+    if "labels" in default_data:
+        labels.extend(default_data["labels"])
@@
-    sources_cm = ConfigMap(name="model-catalog-sources", client=admin_client, namespace=namespace)
-    sources_data = yaml.safe_load(sources_cm.instance.data["sources.yaml"])
-    if "labels" in sources_data:
-        labels.extend(sources_data["labels"])
+    sources_cm = ConfigMap(name="model-catalog-sources", client=admin_client, namespace=namespace)
+    sources_raw = sources_cm.instance.data.get("sources.yaml")
+    assert sources_raw is not None, f"'sources.yaml' missing in {sources_cm.name}"
+    sources_data = yaml.safe_load(sources_raw) or {}
+    if "labels" in sources_data:
+        labels.extend(sources_data["labels"])

Not mandatory, but it gives more actionable assertion messages if cluster config drifts.


1155-1169: Guard against missing items key in labels API response (optional)

get_labels_from_api assumes that execute_get_command always returns a dict with an "items" key and will raise KeyError otherwise. If you prefer a clearer failure mode in tests, you could add an assertion before accessing "items":

     url = f"{model_catalog_rest_url}labels"
     headers = get_rest_headers(token=user_token)
     response = execute_get_command(url=url, headers=headers)
-    return response["items"]
+    assert "items" in response, f"Labels endpoint response missing 'items': {response}"
+    return response["items"]

This keeps behavior functionally similar (still fails fast) but the error message is more targeted when the API contract is broken.


1172-1196: verify_labels_match logic is correct; consider minor performance/readability tweak

The current nested-loop comparison correctly enforces that every expected label (by name, displayName, and description) is present in the API response. For the label counts you’re likely dealing with, the O(n²) scan is fine.

If you ever expect larger label sets or just want to simplify the logic, you could prebuild a set of tuples from api_labels and compare against it:

-    for expected_label in expected_labels:
-        found = False
-        for api_label in api_labels:
-            if (
-                expected_label.get("name") == api_label.get("name")
-                and expected_label.get("displayName") == api_label.get("displayName")
-                and expected_label.get("description") == api_label.get("description")
-            ):
-                found = True
-                break
-
-        assert found, f"Expected label not found in API response: {expected_label}"
+    api_label_keys = {
+        (lbl.get("name"), lbl.get("displayName"), lbl.get("description")) for lbl in api_labels
+    }
+
+    for expected_label in expected_labels:
+        key = (
+            expected_label.get("name"),
+            expected_label.get("displayName"),
+            expected_label.get("description"),
+        )
+        assert key in api_label_keys, f"Expected label not found in API response: {expected_label}"

Not required, but it tightens the intent and scales better if the label set ever grows.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2c19b9 and 3796087.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • tests/model_registry/model_catalog/test_labels_endpoint.py (1 hunks)
  • tests/model_registry/model_catalog/utils.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/model_registry/model_catalog/utils.py (1)
tests/model_registry/utils.py (2)
  • get_rest_headers (666-671)
  • execute_get_command (715-723)
🔇 Additional comments (2)
tests/model_registry/model_catalog/test_labels_endpoint.py (1)

25-47: Default-data smoke test is clear and uses shared helpers correctly

The smoke test wiring (fixtures, ConfigMap-derived expectations, API call via get_labels_from_api, and verify_labels_match) looks consistent with the new utilities and keeps the assertions centralized. No issues from my side here.

tests/model_registry/model_catalog/utils.py (1)

26-26: Reusing get_rest_headers keeps HTTP access consistent

Pulling in get_rest_headers and using it for the labels endpoint keeps header construction aligned with the rest of the model registry tests. This avoids hand-rolled headers and makes future changes (e.g., auth tweaks) easier.


@dbasunag dbasunag left a comment


Thanks @lugi0 for the PR. A few comments, looks good otherwise!

Signed-off-by: lugi0 <lgiorgi@redhat.com>

@fege fege left a comment


/lgtm


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
tests/model_registry/model_catalog/test_labels_endpoint.py (1)

56-56: Remove unnecessary variable assignment.

The assignment _ = labels_configmap_patch is unnecessary. Pytest fixtures execute their side effects (the ConfigMap patching via ResourceEditor) simply by being listed as parameters. The variable assignment serves no purpose.

Apply this diff:

-        _ = labels_configmap_patch
-
         def _check_updated_labels():
tests/model_registry/model_catalog/utils.py (2)

1156-1170: Consider adding error handling for API response structure.

The function assumes the API response contains an "items" key. If the API returns an error response, this could raise a KeyError.

Consider adding validation:

 def get_labels_from_api(model_catalog_rest_url: str, user_token: str) -> List[Dict[str, Any]]:
     """
     Get labels from the API endpoint.
 
     Args:
         model_catalog_rest_url: Base URL for model catalog API
         user_token: Authentication token
 
     Returns:
         List of label dictionaries from API response
     """
     url = f"{model_catalog_rest_url}labels"
     headers = get_rest_headers(token=user_token)
     response = execute_get_command(url=url, headers=headers)
+    assert "items" in response, f"API response missing 'items' key: {response}"
     return response["items"]

1173-1197: Consider optimizing label matching with a set-based approach.

The nested loop has O(n×m) complexity. For test code with small datasets, this is acceptable, but a set/dict-based approach would be more efficient.

Example optimization using a set of tuples:

def verify_labels_match(expected_labels: List[Dict[str, Any]], api_labels: List[Dict[str, Any]]) -> None:
    """
    Verify that all expected labels are present in the API response.

    Args:
        expected_labels: Labels expected from ConfigMaps
        api_labels: Labels returned by API

    Raises:
        AssertionError: If any expected label is not found in API response
    """
    LOGGER.info(f"Verifying {len(expected_labels)} expected labels against {len(api_labels)} API labels")

    # Create a set of label tuples for efficient lookup
    api_label_set = {
        (label.get("name"), label.get("displayName"), label.get("description"))
        for label in api_labels
    }

    for expected_label in expected_labels:
        expected_tuple = (
            expected_label.get("name"),
            expected_label.get("displayName"),
            expected_label.get("description"),
        )
        assert expected_tuple in api_label_set, f"Expected label not found in API response: {expected_label}"
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3796087 and 97fc3d1.

📒 Files selected for processing (3)
  • tests/model_registry/model_catalog/conftest.py (2 hunks)
  • tests/model_registry/model_catalog/test_labels_endpoint.py (1 hunks)
  • tests/model_registry/model_catalog/utils.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
tests/model_registry/model_catalog/conftest.py (2)
tests/conftest.py (1)
  • admin_client (78-79)
tests/model_registry/conftest.py (1)
  • model_registry_namespace (75-76)
tests/model_registry/model_catalog/test_labels_endpoint.py (5)
utilities/infra.py (1)
  • get_openshift_token (605-619)
tests/model_registry/model_catalog/utils.py (3)
  • get_labels_from_configmaps (1128-1153)
  • get_labels_from_api (1156-1170)
  • verify_labels_match (1173-1197)
tests/conftest.py (1)
  • admin_client (78-79)
tests/model_registry/conftest.py (2)
  • model_registry_namespace (75-76)
  • model_catalog_rest_url (640-649)
tests/model_registry/model_catalog/conftest.py (1)
  • labels_configmap_patch (231-251)
tests/model_registry/model_catalog/utils.py (3)
tests/model_registry/utils.py (2)
  • execute_get_command (714-722)
  • get_rest_headers (665-670)
tests/conftest.py (1)
  • admin_client (78-79)
tests/model_registry/conftest.py (1)
  • model_catalog_rest_url (640-649)
🔇 Additional comments (4)
tests/model_registry/model_catalog/conftest.py (1)

230-251: LGTM! Fixture correctly implements ConfigMap patching.

The fixture properly uses ResourceEditor to temporarily patch the ConfigMap with a test label, automatically restoring the original state after the test completes. The function scope ensures test isolation.
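The patch-yield-restore pattern the fixture relies on (via ResourceEditor against a live ConfigMap in the actual code) can be approximated with a plain context manager over an in-memory dict; the names here are purely illustrative:

```python
from contextlib import contextmanager
from typing import Any, Iterator


@contextmanager
def patched_labels(sources: dict[str, Any], test_label: dict[str, Any]) -> Iterator[dict[str, Any]]:
    """Temporarily append `test_label` to sources['labels'], restoring the original on exit."""
    original = list(sources.get("labels", []))
    sources["labels"] = original + [test_label]
    try:
        yield sources
    finally:
        # Restore even if the body raises, mirroring ResourceEditor's cleanup.
        sources["labels"] = original
```

The try/finally is what gives the fixture its test isolation: the extra label disappears whether the test passes or fails.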

tests/model_registry/model_catalog/test_labels_endpoint.py (2)

22-43: Test logic is correct and well-structured.

The smoke test properly validates that labels from ConfigMaps match the API response using the helper utilities.


58-71: Polling logic correctly handles ConfigMap propagation delay.

The use of TimeoutSampler with 60-second timeout and 5-second intervals is appropriate for waiting for ConfigMap changes to propagate to the API.

tests/model_registry/model_catalog/utils.py (1)

1128-1153: Function correctly aggregates labels from both ConfigMaps.

The function properly reads labels from both the default and custom ConfigMaps, safely handling cases where the labels key might not exist.

@dbasunag dbasunag merged commit 23e2790 into opendatahub-io:main Dec 3, 2025
10 checks passed

github-actions bot commented Dec 3, 2025

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.

mwaykole pushed a commit to mwaykole/opendatahub-tests that referenced this pull request Jan 23, 2026
* feat: add tests for labels endpoint

Signed-off-by: lugi0 <lgiorgi@redhat.com>

* fix: address review comments

Signed-off-by: lugi0 <lgiorgi@redhat.com>

---------

Signed-off-by: lugi0 <lgiorgi@redhat.com>
