
feat: Add provider list smoke test within evalhub #1286

Merged
sheltoncyril merged 2 commits into opendatahub-io:main from kpunwatk:add_providerlist
Mar 25, 2026

Conversation

@kpunwatk
Contributor

@kpunwatk kpunwatk commented Mar 24, 2026

Pull Request

Summary

Related Issues

  • Fixes:
  • JIRA:

How it has been tested

  • Locally
  • Jenkins

Additional Requirements

  • If this PR introduces a new test image, did you create a PR to mirror it in disconnected environment?
  • If this PR introduces new marker(s)/adds a new component, was relevant ticket created to update relevant Jenkins job?

Summary by CodeRabbit

  • Tests
    • Added validation tests for the EvalHub providers endpoint.
    • Added RBAC fixtures to provision scoped service account and role binding for provider access during tests.
    • Renamed test grouping and updated test parameters for clearer test organization and intent.

@github-actions

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, comment /wip on the PR. To remove the WIP status, comment /wip cancel.
  • To block merging of a PR, comment /hold. To unblock merging, comment /hold cancel.
  • To mark a PR as approved, comment /lgtm. To remove approval, comment /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified. To un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name>. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, comment /build-push-pr-image. This creates an image tagged
    pr-<pr_number> in the quay repository; the tag is deleted when the PR is merged or closed.
Supported labels

{'/build-push-pr-image', '/cherry-pick', '/wip', '/lgtm', '/verified', '/hold'}

@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai bot commented Mar 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Walkthrough

Adds provider-endpoint validation to EvalHub tests: new test method, utilities to call /providers with tenant-aware headers, RBAC fixtures (ServiceAccount and RoleBinding) for provider access, and a new constant naming the required ClusterRole.

Changes

  • Tests: health & providers (tests/model_explainability/evalhub/test_evalhub_health.py)
    Renamed the test class to TestEvalHub, changed the parametrized namespace to test-evalhub-health-providers, and added test_evalhub_providers_list to validate the providers endpoint.
  • Fixtures / RBAC (tests/model_explainability/evalhub/conftest.py)
    Added class-scoped fixtures evalhub_scoped_sa (ServiceAccount) and evalhub_providers_role_binding (RoleBinding) to provision RBAC resources in the test namespace.
  • Provider utilities (tests/model_explainability/evalhub/utils.py)
    Added validate_evalhub_providers() to GET the providers endpoint (HTTPS, verify=ca_bundle_file, timeout=10), and introduced TENANT_HEADER and _build_headers() to include X-Tenant when provided.
  • Constants (tests/model_explainability/evalhub/constants.py)
    Added EVALHUB_PROVIDERS_ACCESS_CLUSTER_ROLE = "trustyai-service-operator-evalhub-providers-access".

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Security & Logic Issues

  • Overly strict response assertion: validate_evalhub_providers asserts items exists and is non-empty, forcing failure on a legitimate empty provider list. Action: relax assertion to allow empty lists or make presence configurable. (CWE-20: Improper Input Validation)

  • Insufficient input validation for TLS and host inputs: ca_bundle_file and host are used without validation, risking cryptic failures or TLS misuse. Action: validate file existence/readability and canonicalize/validate host (scheme/hostname). (CWE-295: Improper Certificate Validation)

  • Tenant header handling may mask authz issues: _build_headers() conditionally omits X-Tenant; inconsistent usage could permit unintended access or hide authorization failures. Action: document header requirements and fail fast when tenant is required. (CWE-285: Improper Authorization)

  • No explicit error handling around network requests: relying on requests.get(...).raise_for_status() is acceptable, but consider catching and enriching exceptions to provide clearer test diagnostics. Action: wrap network calls to add context in raised errors. (CWE-200: Information Exposure — ensure error messages do not leak sensitive tokens)
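The last bullet's suggestion (wrap network calls to add context) can be sketched as below. The getter is injected so the pattern is testable without a live endpoint; all names here are illustrative, not from the PR.

```python
from typing import Any, Callable


def get_json_with_context(getter: Callable[..., Any], url: str, **kwargs: Any) -> Any:
    """Call getter(url, **kwargs), e.g. requests.get, and re-raise any failure
    with the target URL attached so test logs point at the broken endpoint.
    The message deliberately omits headers and kwargs to avoid leaking tokens."""
    try:
        response = getter(url, **kwargs)
        response.raise_for_status()
        return response.json()
    except Exception as exc:
        raise RuntimeError(f"Request to {url} failed: {exc}") from exc
```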

🚥 Pre-merge checks | ✅ 1 | ❌ 1

❌ Failed checks (1 warning)

  • Description check: ⚠️ Warning. The PR description is entirely empty: no summary of changes, no linked issues, and unchecked testing checkboxes. Required information is missing.
    Resolution: fill in the Summary section with the actual changes made, link any related issues, and document which testing methods were used (Locally/Jenkins).

✅ Passed checks (1 passed)

  • Title check: ✅ Passed. The title accurately describes the main change: adding a provider list smoke test to the evalhub test suite.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/model_explainability/evalhub/utils.py (1)

80-81: Logging full response body may expose sensitive data in test logs.

If the API returns any authentication details, tokens, or internal configuration in error responses, these will be written to logs. For a test utility this is often acceptable, but consider whether this endpoint could return sensitive information.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_explainability/evalhub/utils.py` around lines 80 - 81, The test
utility currently logs the full HTTP response body via LOGGER.info(f"Response
body: {response.text}"), which can expose sensitive data; change this to log a
sanitized or truncated body instead: implement or call a sanitizer (e.g.,
sanitize_response_body(response) or a small utility) that strips/obfuscates
common secrets (authorization tokens, passwords, api keys) and/or limits output
length (e.g., first N chars) and use that sanitized string in LOGGER.info;
update the LOGGER call and reference the response variable so you only log safe
content.
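A sanitizer along the lines the prompt describes might look like the following; the regex and the 500-character cap are illustrative choices, not code from this PR.

```python
import re

# Masks values that follow common credential markers (Bearer tokens, token/api_key fields).
_SECRET_PATTERN = re.compile(
    r"(?i)(bearer\s+|token[\"']?\s*[:=]\s*[\"']?|api[_-]?key[\"']?\s*[:=]\s*[\"']?)[A-Za-z0-9._\-]+"
)


def sanitize_response_body(body: str, max_len: int = 500) -> str:
    """Mask credential-like values and truncate the body before it reaches the logs."""
    masked = _SECRET_PATTERN.sub(r"\1***", body)
    if len(masked) > max_len:
        masked = masked[:max_len] + "...[truncated]"
    return masked
```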
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/model_explainability/evalhub/utils.py`:
- Around line 51-93: The function validate_evalhub_providers currently validates
the response but never returns it, contradicting the declared -> dict and
docstring; update validate_evalhub_providers to return the parsed paginated
response (the local variable data) after performing the assertions so callers
receive the dict, ensuring callers of validate_evalhub_providers get the
validated response object.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 0a5ed13c-b226-4b1d-a8b5-413591900f15

📥 Commits

Reviewing files that changed from the base of the PR and between b28f0ed and 795be46.

📒 Files selected for processing (2)
  • tests/model_explainability/evalhub/test_evalhub_health.py
  • tests/model_explainability/evalhub/utils.py

Contributor

@sheltoncyril sheltoncyril left a comment


/lgtm

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/model_explainability/evalhub/utils.py`:
- Around line 80-87: The code currently logs full authenticated response content
via LOGGER.info(f"Response body: {response.text}") and LOGGER.info(f"EvalHub
providers response: {data}"); remove or reduce these to avoid exposing secrets:
keep only non-sensitive metadata such as response.status_code
(LOGGER.info(f"Response status: {response.status_code}")), and replace the full
payload logs with either a debug-level sanitized summary (e.g., number of
providers or top-level keys) or a masked/filtered version before logging; update
the uses around response.text and response.json() so raw payloads are not
written to INFO logs and ensure any detailed payload logging is gated behind
LOGGER.debug or explicit sanitization in the same block where
response.raise_for_status() is called.
- Around line 89-93: The current assertions only check presence and truthiness
of response keys; tighten them by asserting types and structure: ensure
data["items"] is a non-empty list and each element is a dict (e.g., assert
isinstance(data["items"], list) and len(data["items"]) > 0 and
all(isinstance(it, dict) for it in data["items"])), ensure data["total_count"]
and data["limit"] are integers (and optionally non-negative) via isinstance
checks, and keep the existing presence assertions for data; update the
assertions around the variable data and data["items"] to reflect these
type/structure checks to avoid accepting malformed payloads like "items": "ok".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 1febe8fa-986d-4442-a315-0082875c43ec

📥 Commits

Reviewing files that changed from the base of the PR and between 795be46 and 1964fcf.

📒 Files selected for processing (1)
  • tests/model_explainability/evalhub/utils.py

sheltoncyril previously approved these changes Mar 24, 2026
Contributor

@sheltoncyril sheltoncyril left a comment


/lgtm

fege previously approved these changes Mar 24, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/model_explainability/evalhub/utils.py (1)

84-88: ⚠️ Potential issue | 🟡 Minor

Strengthen providers schema assertions to reject malformed payloads.

Current checks can pass invalid shapes (for example, non-list items). Enforce type and minimal value constraints before asserting non-empty items.

Remediation diff
-    assert "items" in data, "Response missing 'items' field"
-    assert "total_count" in data, "Response missing 'total_count' field"
-    assert "limit" in data, "Response missing 'limit' field"
-
-    assert data["items"], "Providers list should not be empty"
+    assert isinstance(data, dict), "Response must be a JSON object"
+    assert "items" in data, "Response missing 'items' field"
+    assert "total_count" in data, "Response missing 'total_count' field"
+    assert "limit" in data, "Response missing 'limit' field"
+    assert isinstance(data["items"], list), "'items' must be a list"
+    assert all(isinstance(item, dict) for item in data["items"]), "All items must be objects"
+    assert isinstance(data["total_count"], int) and data["total_count"] >= 0, "'total_count' must be a non-negative integer"
+    assert isinstance(data["limit"], int) and data["limit"] >= 0, "'limit' must be a non-negative integer"
+    assert data["items"], "Providers list should not be empty"

As per coding guidelines (REVIEW PRIORITIES, item 3): bug-prone patterns and error handling gaps.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_explainability/evalhub/utils.py` around lines 84 - 88, The
current assertions on the response payload are too weak: before asserting
non-empty providers, validate types and minimal values to reject malformed
shapes. Specifically, ensure data contains keys "items", "total_count", and
"limit", then assert data["items"] is a list and has length > 0, assert
data["total_count"] is an int (>= 0), and assert data["limit"] is an int (>= 0)
so that the subsequent assert data["items"] check cannot pass for non-list or
invalid numeric types; update the checks around the data variable and the
"items"/"total_count"/"limit" validations accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/model_explainability/evalhub/utils.py`:
- Line 70: The f-string assigned to url in
tests/model_explainability/evalhub/utils.py is unterminated; update the
assignment for url so the f-string is properly closed (e.g., finish the string
with a closing quote) and ensure it concatenates host and EVALHUB_PROVIDERS_PATH
correctly (reference: the url variable, host identifier, and
EVALHUB_PROVIDERS_PATH constant).

---

Duplicate comments:
In `@tests/model_explainability/evalhub/utils.py`:
- Around line 84-88: The current assertions on the response payload are too
weak: before asserting non-empty providers, validate types and minimal values to
reject malformed shapes. Specifically, ensure data contains keys "items",
"total_count", and "limit", then assert data["items"] is a list and has length >
0, assert data["total_count"] is an int (>= 0), and assert data["limit"] is an
int (>= 0) so that the subsequent assert data["items"] check cannot pass for
non-list or invalid numeric types; update the checks around the data variable
and the "items"/"total_count"/"limit" validations accordingly.
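Collected into one runnable helper, the strengthened checks this comment asks for could look like the sketch below (the function name validate_providers_payload is hypothetical):

```python
def validate_providers_payload(data: object) -> None:
    """Assert the /providers response has the expected paginated shape
    before any non-empty check, so malformed payloads fail loudly."""
    assert isinstance(data, dict), "Response must be a JSON object"
    for key in ("items", "total_count", "limit"):
        assert key in data, f"Response missing '{key}' field"
    assert isinstance(data["items"], list), "'items' must be a list"
    assert all(isinstance(item, dict) for item in data["items"]), "All items must be objects"
    assert isinstance(data["total_count"], int) and data["total_count"] >= 0, \
        "'total_count' must be a non-negative integer"
    assert isinstance(data["limit"], int) and data["limit"] >= 0, \
        "'limit' must be a non-negative integer"
    assert data["items"], "Providers list should not be empty"
```

With this shape check first, a payload like {"items": "ok"} fails on the type assertion instead of slipping past a bare truthiness test.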

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 3c88a73b-d780-4fb8-9a54-eb77c201f499

📥 Commits

Reviewing files that changed from the base of the PR and between 1964fcf and 54ff4e6.

📒 Files selected for processing (1)
  • tests/model_explainability/evalhub/utils.py

sheltoncyril previously approved these changes Mar 24, 2026
Contributor

@sheltoncyril sheltoncyril left a comment


/lgtm

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/model_explainability/evalhub/test_evalhub_health.py (2)

39-39: Missing type annotation for model_namespace.

Other fixture parameters have explicit type hints (str, Route), but model_namespace lacks one. Consider adding the appropriate type (likely Namespace from ocp_resources).

Suggested fix
+from ocp_resources.namespace import Namespace
+
 ...
 
     def test_evalhub_providers_list(
         self,
         current_client_token: str,
         evalhub_ca_bundle_file: str,
         evalhub_route: Route,
-        model_namespace,
+        model_namespace: Namespace,
     ) -> None:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_explainability/evalhub/test_evalhub_health.py` at line 39, Add an
explicit type annotation for the fixture parameter model_namespace (likely use
Namespace from ocp_resources) where it is declared in the test
(test_evalhub_health / the fixture signature that includes model_namespace) so
it matches other fixtures that use explicit types like str and Route; import
Namespace from ocp_resources if not already imported and annotate the parameter
as model_namespace: Namespace.

21-32: Consistency: test_evalhub_health_endpoint should also type-hint model_namespace if applicable.

This existing test doesn't use model_namespace as a parameter, but it's parametrized at class level with indirect=True. If the fixture is implicitly available but unused, consider whether this test should validate tenant-scoped health or whether the class-level parametrization should be scoped only to tests that need it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_explainability/evalhub/test_evalhub_health.py` around lines 21 -
32, Test test_evalhub_health_endpoint omits the model_namespace parameter while
the test class is parametrized with model_namespace (indirect=True); either add
model_namespace to the test signature or remove/limit the class-level
parametrization to avoid an unused fixture—update the test function
test_evalhub_health_endpoint to accept model_namespace: str and pass it where
appropriate (or if tenant-scoped validation is not needed, restrict the
class-level parametrization so only tests that require model_namespace receive
it).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/model_explainability/evalhub/test_evalhub_health.py`:
- Around line 34-48: The test method test_evalhub_providers_list is missing
explicit RBAC fixtures; update its signature to include the fixtures
evalhub_scoped_sa and evalhub_providers_role_binding as parameters so the
ServiceAccount and RoleBinding are created, and annotate the existing
model_namespace parameter with type Namespace for consistency; keep the call to
validate_evalhub_providers unchanged (use the same host, token, ca_bundle_file,
tenant arguments).

---

Nitpick comments:
In `@tests/model_explainability/evalhub/test_evalhub_health.py`:
- Line 39: Add an explicit type annotation for the fixture parameter
model_namespace (likely use Namespace from ocp_resources) where it is declared
in the test (test_evalhub_health / the fixture signature that includes
model_namespace) so it matches other fixtures that use explicit types like str
and Route; import Namespace from ocp_resources if not already imported and
annotate the parameter as model_namespace: Namespace.
- Around line 21-32: Test test_evalhub_health_endpoint omits the model_namespace
parameter while the test class is parametrized with model_namespace
(indirect=True); either add model_namespace to the test signature or
remove/limit the class-level parametrization to avoid an unused fixture—update
the test function test_evalhub_health_endpoint to accept model_namespace: str
and pass it where appropriate (or if tenant-scoped validation is not needed,
restrict the class-level parametrization so only tests that require
model_namespace receive it).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: ed01137b-daed-4241-8cb5-873e49e60786

📥 Commits

Reviewing files that changed from the base of the PR and between 54ff4e6 and 3abea8f.

📒 Files selected for processing (4)
  • tests/model_explainability/evalhub/conftest.py
  • tests/model_explainability/evalhub/constants.py
  • tests/model_explainability/evalhub/test_evalhub_health.py
  • tests/model_explainability/evalhub/utils.py
✅ Files skipped from review due to trivial changes (1)
  • tests/model_explainability/evalhub/constants.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/model_explainability/evalhub/conftest.py
  • tests/model_explainability/evalhub/utils.py

Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
tests/model_explainability/evalhub/test_evalhub_health.py (1)

34-48: ⚠️ Potential issue | 🟠 Major

RBAC fixtures not requested; test will fail with 403.

Per prior review and conftest.py, evalhub_scoped_sa and evalhub_providers_role_binding create the ServiceAccount and RoleBinding needed for providers access. Neither is autouse=True nor chained through existing parameters. Without them, current_client_token won't have providers permissions.

Proposed fix
     def test_evalhub_providers_list(
         self,
         current_client_token: str,
         evalhub_ca_bundle_file: str,
         evalhub_route: Route,
-        model_namespace,
+        model_namespace: "Namespace",
+        evalhub_scoped_sa,  # noqa: ARG002 - triggers SA creation
+        evalhub_providers_role_binding,  # noqa: ARG002 - triggers RoleBinding creation
     ) -> None:
#!/bin/bash
# Verify RBAC fixtures exist and lack autouse
rg -n "def evalhub_scoped_sa|def evalhub_providers_role_binding|autouse" tests/model_explainability/evalhub/conftest.py -B2 -A3
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_explainability/evalhub/test_evalhub_health.py` around lines 34 -
48, The test test_evalhub_providers_list fails with 403 because it does not
request the RBAC fixtures that grant providers access; add the evalhub_scoped_sa
and evalhub_providers_role_binding fixtures to the test function signature so
the ServiceAccount and RoleBinding are created before the call to
validate_evalhub_providers (i.e., include evalhub_scoped_sa and
evalhub_providers_role_binding as parameters to test_evalhub_providers_list
alongside current_client_token), ensuring the current_client_token has providers
permissions; do not change autouse flags—just request the fixtures in the test
signature.
tests/model_explainability/evalhub/utils.py (1)

86-89: ⚠️ Potential issue | 🟡 Minor

Assertion only checks truthiness; malformed payloads pass (CWE-20).

data.get("items") passes for {"items": "not-a-list"}. Prior review flagged this. Add type assertion.

Proposed fix
     data = response.json()
-    assert data.get("items"), f"Smoke test failed: Providers list is empty for tenant {tenant}"
+    assert isinstance(data.get("items"), list), f"Smoke test failed: 'items' must be a list for tenant {tenant}"
+    assert data["items"], f"Smoke test failed: Providers list is empty for tenant {tenant}"

     return data
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_explainability/evalhub/utils.py` around lines 86 - 89, The
assertion only checks truthiness of data.get("items") and allows non-list types;
update the check after response.json() to assert that data.get("items") is an
instance of list and non-empty (e.g., use isinstance(data.get("items"), list)
and len(data["items"]) > 0) and update the assertion message to indicate an
unexpected or malformed "items" payload for the tenant; reference the variables
data and the "items" key to locate the check to replace.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: b5160677-5638-47c7-b9b1-b30764bf746d

📥 Commits

Reviewing files that changed from the base of the PR and between 3abea8f and 50ebdfc.

📒 Files selected for processing (4)
  • tests/model_explainability/evalhub/conftest.py
  • tests/model_explainability/evalhub/constants.py
  • tests/model_explainability/evalhub/test_evalhub_health.py
  • tests/model_explainability/evalhub/utils.py
✅ Files skipped from review due to trivial changes (1)
  • tests/model_explainability/evalhub/constants.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/model_explainability/evalhub/conftest.py

	modified:   tests/model_explainability/evalhub/conftest.py
	modified:   tests/model_explainability/evalhub/constants.py
	modified:   tests/model_explainability/evalhub/test_evalhub_health.py
	modified:   tests/model_explainability/evalhub/utils.py

Signed-off-by: Karishma Punwatkar <kpunwatk@redhat.com>
@kpunwatk
Contributor Author

Hi @fege @sheltoncyril, could you please re-approve the PR?

Contributor

@sheltoncyril sheltoncyril left a comment


/lgtm

@sheltoncyril sheltoncyril merged commit 84eb111 into opendatahub-io:main Mar 25, 2026
10 checks passed
@github-actions

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.

