feat: Add provider list smoke test within evalhub #1286
sheltoncyril merged 2 commits into opendatahub-io:main
Conversation
The following are automatically added/executed:
Available user actions:
Supported labels: {'/build-push-pr-image', '/cherry-pick', '/wip', '/lgtm', '/verified', '/hold'}
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
📝 Walkthrough
Adds provider-endpoint validation to EvalHub tests: a new test method, utilities to call /providers with tenant-aware headers, RBAC fixtures (ServiceAccount and RoleBinding) for provider access, and a new constant naming the required ClusterRole.
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Security & Logic Issues
🚥 Pre-merge checks: ✅ 1 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/model_explainability/evalhub/utils.py (1)
80-81: Logging full response body may expose sensitive data in test logs. If the API returns any authentication details, tokens, or internal configuration in error responses, these will be written to logs. For a test utility this is often acceptable, but consider whether this endpoint could return sensitive information.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_explainability/evalhub/utils.py` around lines 80 - 81, The test utility currently logs the full HTTP response body via LOGGER.info(f"Response body: {response.text}"), which can expose sensitive data; change this to log a sanitized or truncated body instead: implement or call a sanitizer (e.g., sanitize_response_body(response) or a small utility) that strips/obfuscates common secrets (authorization tokens, passwords, api keys) and/or limits output length (e.g., first N chars) and use that sanitized string in LOGGER.info; update the LOGGER call and reference the response variable so you only log safe content.
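The sanitizer the comment suggests could be sketched as follows; the function name, mask pattern, and 500-character limit are illustrative assumptions, not the repository's actual implementation:

```python
import re

MAX_BODY_CHARS = 500  # assumption: truncate logged bodies to this length

# Mask values of common secret-bearing JSON fields; extend the key list as needed.
_SECRET_PATTERN = re.compile(
    r'("?(?:authorization|token|password|api[_-]?key)"?\s*[:=]\s*)"[^"]*"',
    re.IGNORECASE,
)


def sanitize_response_body(body: str, limit: int = MAX_BODY_CHARS) -> str:
    """Mask common secret fields and truncate the body before logging."""
    masked = _SECRET_PATTERN.sub(r'\1"***"', body)
    if len(masked) > limit:
        masked = masked[:limit] + "...[truncated]"
    return masked
```

The sanitized string would then replace `response.text` in the `LOGGER.info` call.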
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/model_explainability/evalhub/utils.py`:
- Around line 51-93: The function validate_evalhub_providers currently validates
the response but never returns it, contradicting the declared -> dict and
docstring; update validate_evalhub_providers to return the parsed paginated
response (the local variable data) after performing the assertions so callers
receive the dict, ensuring callers of validate_evalhub_providers get the
validated response object.
---
Nitpick comments:
In `@tests/model_explainability/evalhub/utils.py`:
- Around line 80-81: The test utility currently logs the full HTTP response body
via LOGGER.info(f"Response body: {response.text}"), which can expose sensitive
data; change this to log a sanitized or truncated body instead: implement or
call a sanitizer (e.g., sanitize_response_body(response) or a small utility)
that strips/obfuscates common secrets (authorization tokens, passwords, api
keys) and/or limits output length (e.g., first N chars) and use that sanitized
string in LOGGER.info; update the LOGGER call and reference the response
variable so you only log safe content.
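Gating the payload behind DEBUG while keeping status metadata at INFO, as the nitpick suggests, might look like this (a sketch; the helper name and logger name are assumptions):

```python
import logging

LOGGER = logging.getLogger("evalhub.test_utils")  # assumed logger name


def log_response_safely(status_code: int, body: str) -> None:
    """Log non-sensitive metadata at INFO; gate the raw payload behind DEBUG."""
    LOGGER.info("Response status: %s", status_code)
    # Only emit a truncated body when DEBUG is explicitly enabled.
    if LOGGER.isEnabledFor(logging.DEBUG):
        LOGGER.debug("Response body (first 200 chars): %s", body[:200])
```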
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 0a5ed13c-b226-4b1d-a8b5-413591900f15
📒 Files selected for processing (2)
tests/model_explainability/evalhub/test_evalhub_health.py
tests/model_explainability/evalhub/utils.py
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/model_explainability/evalhub/utils.py`:
- Around line 80-87: The code currently logs full authenticated response content
via LOGGER.info(f"Response body: {response.text}") and LOGGER.info(f"EvalHub
providers response: {data}"); remove or reduce these to avoid exposing secrets:
keep only non-sensitive metadata such as response.status_code
(LOGGER.info(f"Response status: {response.status_code}")), and replace the full
payload logs with either a debug-level sanitized summary (e.g., number of
providers or top-level keys) or a masked/filtered version before logging; update
the uses around response.text and response.json() so raw payloads are not
written to INFO logs and ensure any detailed payload logging is gated behind
LOGGER.debug or explicit sanitization in the same block where
response.raise_for_status() is called.
- Around line 89-93: The current assertions only check presence and truthiness
of response keys; tighten them by asserting types and structure: ensure
data["items"] is a non-empty list and each element is a dict (e.g., assert
isinstance(data["items"], list) and len(data["items"]) > 0 and
all(isinstance(it, dict) for it in data["items"])), ensure data["total_count"]
and data["limit"] are integers (and optionally non-negative) via isinstance
checks, and keep the existing presence assertions for data; update the
assertions around the variable data and data["items"] to reflect these
type/structure checks to avoid accepting malformed payloads like "items": "ok".
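The tightened checks described above can be collected into a helper like this (a sketch; the helper name is an assumption):

```python
from typing import Any


def assert_providers_schema(data: Any) -> None:
    """Reject malformed payloads such as {"items": "ok"} via type checks."""
    assert isinstance(data, dict), "Response must be a JSON object"
    for key in ("items", "total_count", "limit"):
        assert key in data, f"Response missing '{key}' field"
    assert isinstance(data["items"], list) and len(data["items"]) > 0, (
        "'items' must be a non-empty list"
    )
    assert all(isinstance(item, dict) for item in data["items"]), (
        "every provider entry must be an object"
    )
    assert isinstance(data["total_count"], int) and data["total_count"] >= 0, (
        "'total_count' must be a non-negative integer"
    )
    assert isinstance(data["limit"], int) and data["limit"] >= 0, (
        "'limit' must be a non-negative integer"
    )
```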
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 1febe8fa-986d-4442-a315-0082875c43ec
📒 Files selected for processing (1)
tests/model_explainability/evalhub/utils.py
Actionable comments posted: 1
♻️ Duplicate comments (1)
tests/model_explainability/evalhub/utils.py (1)
84-88: ⚠️ Potential issue | 🟡 Minor: Strengthen providers schema assertions to reject malformed payloads.
Current checks can pass invalid shapes (for example, non-list
items). Enforce type and minimal value constraints before asserting non-empty items.
Remediation diff
- assert "items" in data, "Response missing 'items' field"
- assert "total_count" in data, "Response missing 'total_count' field"
- assert "limit" in data, "Response missing 'limit' field"
-
- assert data["items"], "Providers list should not be empty"
+ assert isinstance(data, dict), "Response must be a JSON object"
+ assert "items" in data, "Response missing 'items' field"
+ assert "total_count" in data, "Response missing 'total_count' field"
+ assert "limit" in data, "Response missing 'limit' field"
+ assert isinstance(data["items"], list), "'items' must be a list"
+ assert all(isinstance(item, dict) for item in data["items"]), "All items must be objects"
+ assert isinstance(data["total_count"], int) and data["total_count"] >= 0, "'total_count' must be a non-negative integer"
+ assert isinstance(data["limit"], int) and data["limit"] >= 0, "'limit' must be a non-negative integer"
+ assert data["items"], "Providers list should not be empty"
As per coding guidelines,
REVIEW PRIORITIES: 3. Bug-prone patterns and error handling gaps.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_explainability/evalhub/utils.py` around lines 84 - 88, The current assertions on the response payload are too weak: before asserting non-empty providers, validate types and minimal values to reject malformed shapes. Specifically, ensure data contains keys "items", "total_count", and "limit", then assert data["items"] is a list and has length > 0, assert data["total_count"] is an int (>= 0), and assert data["limit"] is an int (>= 0) so that the subsequent assert data["items"] check cannot pass for non-list or invalid numeric types; update the checks around the data variable and the "items"/"total_count"/"limit" validations accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/model_explainability/evalhub/utils.py`:
- Line 70: The f-string assigned to url in
tests/model_explainability/evalhub/utils.py is unterminated; update the
assignment for url so the f-string is properly closed (e.g., finish the string
with a closing quote) and ensure it concatenates host and EVALHUB_PROVIDERS_PATH
correctly (reference: the url variable, host identifier, and
EVALHUB_PROVIDERS_PATH constant).
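A correctly terminated version of that assignment might look like the sketch below; the constant's value is an assumption for illustration, not the value in the repository's constants module:

```python
EVALHUB_PROVIDERS_PATH = "/v1/eval/providers"  # assumed value; see constants.py


def build_providers_url(host: str) -> str:
    # The f-string is closed properly and joins the host with the providers path.
    return f"https://{host}{EVALHUB_PROVIDERS_PATH}"
```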
---
Duplicate comments:
In `@tests/model_explainability/evalhub/utils.py`:
- Around line 84-88: The current assertions on the response payload are too
weak: before asserting non-empty providers, validate types and minimal values to
reject malformed shapes. Specifically, ensure data contains keys "items",
"total_count", and "limit", then assert data["items"] is a list and has length >
0, assert data["total_count"] is an int (>= 0), and assert data["limit"] is an
int (>= 0) so that the subsequent assert data["items"] check cannot pass for
non-list or invalid numeric types; update the checks around the data variable
and the "items"/"total_count"/"limit" validations accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 3c88a73b-d780-4fb8-9a54-eb77c201f499
📒 Files selected for processing (1)
tests/model_explainability/evalhub/utils.py
54ff4e6 to d0d8333
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/model_explainability/evalhub/test_evalhub_health.py (2)
39-39: Missing type annotation for model_namespace. Other fixture parameters have explicit type hints (str, Route), but model_namespace lacks one. Consider adding the appropriate type (likely Namespace from ocp_resources).
Suggested fix
+from ocp_resources.namespace import Namespace
+
 ...
 def test_evalhub_providers_list(
     self,
     current_client_token: str,
     evalhub_ca_bundle_file: str,
     evalhub_route: Route,
-    model_namespace,
+    model_namespace: Namespace,
 ) -> None:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_explainability/evalhub/test_evalhub_health.py` at line 39, Add an explicit type annotation for the fixture parameter model_namespace (likely use Namespace from ocp_resources) where it is declared in the test (test_evalhub_health / the fixture signature that includes model_namespace) so it matches other fixtures that use explicit types like str and Route; import Namespace from ocp_resources if not already imported and annotate the parameter as model_namespace: Namespace.
21-32: Consistency: test_evalhub_health_endpoint should also type-hint model_namespace if applicable. This existing test doesn't use model_namespace as a parameter, but it's parametrized at class level with indirect=True. If the fixture is implicitly available but unused, consider whether this test should validate tenant-scoped health or whether the class-level parametrization should be scoped only to tests that need it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_explainability/evalhub/test_evalhub_health.py` around lines 21 - 32, Test test_evalhub_health_endpoint omits the model_namespace parameter while the test class is parametrized with model_namespace (indirect=True); either add model_namespace to the test signature or remove/limit the class-level parametrization to avoid an unused fixture—update the test function test_evalhub_health_endpoint to accept model_namespace: str and pass it where appropriate (or if tenant-scoped validation is not needed, restrict the class-level parametrization so only tests that require model_namespace receive it).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/model_explainability/evalhub/test_evalhub_health.py`:
- Around line 34-48: The test method test_evalhub_providers_list is missing
explicit RBAC fixtures; update its signature to include the fixtures
evalhub_scoped_sa and evalhub_providers_role_binding as parameters so the
ServiceAccount and RoleBinding are created, and annotate the existing
model_namespace parameter with type Namespace for consistency; keep the call to
validate_evalhub_providers unchanged (use the same host, token, ca_bundle_file,
tenant arguments).
---
Nitpick comments:
In `@tests/model_explainability/evalhub/test_evalhub_health.py`:
- Line 39: Add an explicit type annotation for the fixture parameter
model_namespace (likely use Namespace from ocp_resources) where it is declared
in the test (test_evalhub_health / the fixture signature that includes
model_namespace) so it matches other fixtures that use explicit types like str
and Route; import Namespace from ocp_resources if not already imported and
annotate the parameter as model_namespace: Namespace.
- Around line 21-32: Test test_evalhub_health_endpoint omits the model_namespace
parameter while the test class is parametrized with model_namespace
(indirect=True); either add model_namespace to the test signature or
remove/limit the class-level parametrization to avoid an unused fixture—update
the test function test_evalhub_health_endpoint to accept model_namespace: str
and pass it where appropriate (or if tenant-scoped validation is not needed,
restrict the class-level parametrization so only tests that require
model_namespace receive it).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: ed01137b-daed-4241-8cb5-873e49e60786
📒 Files selected for processing (4)
tests/model_explainability/evalhub/conftest.py
tests/model_explainability/evalhub/constants.py
tests/model_explainability/evalhub/test_evalhub_health.py
tests/model_explainability/evalhub/utils.py
✅ Files skipped from review due to trivial changes (1)
- tests/model_explainability/evalhub/constants.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/model_explainability/evalhub/conftest.py
- tests/model_explainability/evalhub/utils.py
3abea8f to 50ebdfc
♻️ Duplicate comments (2)
tests/model_explainability/evalhub/test_evalhub_health.py (1)
34-48: ⚠️ Potential issue | 🟠 Major: RBAC fixtures not requested; test will fail with 403.
Per prior review and conftest.py, evalhub_scoped_sa and evalhub_providers_role_binding create the ServiceAccount and RoleBinding needed for providers access. Neither is autouse=True nor chained through existing parameters. Without them, current_client_token won't have providers permissions.
Proposed fix
 def test_evalhub_providers_list(
     self,
     current_client_token: str,
     evalhub_ca_bundle_file: str,
     evalhub_route: Route,
-    model_namespace,
+    model_namespace: "Namespace",
+    evalhub_scoped_sa,  # noqa: ARG002 - triggers SA creation
+    evalhub_providers_role_binding,  # noqa: ARG002 - triggers RoleBinding creation
 ) -> None:

#!/bin/bash
# Verify RBAC fixtures exist and lack autouse
rg -n "def evalhub_scoped_sa|def evalhub_providers_role_binding|autouse" tests/model_explainability/evalhub/conftest.py -B2 -A3
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_explainability/evalhub/test_evalhub_health.py` around lines 34 - 48, The test test_evalhub_providers_list fails with 403 because it does not request the RBAC fixtures that grant providers access; add the evalhub_scoped_sa and evalhub_providers_role_binding fixtures to the test function signature so the ServiceAccount and RoleBinding are created before the call to validate_evalhub_providers (i.e., include evalhub_scoped_sa and evalhub_providers_role_binding as parameters to test_evalhub_providers_list alongside current_client_token), ensuring the current_client_token has providers permissions; do not change autouse flags—just request the fixtures in the test signature.
tests/model_explainability/evalhub/utils.py (1)
86-89: ⚠️ Potential issue | 🟡 Minor: Assertion only checks truthiness; malformed payloads pass (CWE-20).
data.get("items") passes for {"items": "not-a-list"}. Prior review flagged this. Add type assertion.
Proposed fix
  data = response.json()
- assert data.get("items"), f"Smoke test failed: Providers list is empty for tenant {tenant}"
+ assert isinstance(data.get("items"), list), f"Smoke test failed: 'items' must be a list for tenant {tenant}"
+ assert data["items"], f"Smoke test failed: Providers list is empty for tenant {tenant}"
  return data
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_explainability/evalhub/utils.py` around lines 86 - 89, The assertion only checks truthiness of data.get("items") and allows non-list types; update the check after response.json() to assert that data.get("items") is an instance of list and non-empty (e.g., use isinstance(data.get("items"), list) and len(data["items"]) > 0) and update the assertion message to indicate an unexpected or malformed "items" payload for the tenant; reference the variables data and the "items" key to locate the check to replace.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/model_explainability/evalhub/test_evalhub_health.py`:
- Around line 34-48: The test test_evalhub_providers_list fails with 403 because
it does not request the RBAC fixtures that grant providers access; add the
evalhub_scoped_sa and evalhub_providers_role_binding fixtures to the test
function signature so the ServiceAccount and RoleBinding are created before the
call to validate_evalhub_providers (i.e., include evalhub_scoped_sa and
evalhub_providers_role_binding as parameters to test_evalhub_providers_list
alongside current_client_token), ensuring the current_client_token has providers
permissions; do not change autouse flags—just request the fixtures in the test
signature.
In `@tests/model_explainability/evalhub/utils.py`:
- Around line 86-89: The assertion only checks truthiness of data.get("items")
and allows non-list types; update the check after response.json() to assert that
data.get("items") is an instance of list and non-empty (e.g., use
isinstance(data.get("items"), list) and len(data["items"]) > 0) and update the
assertion message to indicate an unexpected or malformed "items" payload for the
tenant; reference the variables data and the "items" key to locate the check to
replace.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: b5160677-5638-47c7-b9b1-b30764bf746d
📒 Files selected for processing (4)
tests/model_explainability/evalhub/conftest.py
tests/model_explainability/evalhub/constants.py
tests/model_explainability/evalhub/test_evalhub_health.py
tests/model_explainability/evalhub/utils.py
✅ Files skipped from review due to trivial changes (1)
- tests/model_explainability/evalhub/constants.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/model_explainability/evalhub/conftest.py
modified: tests/model_explainability/evalhub/conftest.py
modified: tests/model_explainability/evalhub/constants.py
modified: tests/model_explainability/evalhub/test_evalhub_health.py
modified: tests/model_explainability/evalhub/utils.py
Signed-off-by: Karishma Punwatkar <kpunwatk@redhat.com>
50ebdfc to 9c8cff4
Hi @fege @sheltoncyril, could you please re-approve the PR?
Status of building tag latest: success.
Pull Request
Summary
Related Issues
How it has been tested
Additional Requirements
Summary by CodeRabbit