feat(evalhub): add EvalHub service health check tests #1145
sheltoncyril merged 6 commits into opendatahub-io:main
Conversation
The following are automatically added/executed:
Available user actions:
Supported labels: /lgtm, /wip, /verified, /hold, /cherry-pick, /build-push-pr-image
📝 Walkthrough
Adds test infrastructure and utilities for EvalHub: new pytest fixtures to manage the EvalHub CR lifecycle, constants and a health-check utility, a health test, and a NamespacedResource wrapper class for the EvalHub custom resource.
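The walkthrough mentions a health-check utility for the EvalHub service. A minimal polling sketch of such a helper (purely illustrative: `wait_for_healthy` and the stand-in check are hypothetical names, not the PR's actual code) could look like:

```python
import time

def wait_for_healthy(check, timeout=60.0, interval=1.0):
    """Poll check() until it returns True or the timeout elapses.

    check is any zero-argument callable returning a bool, e.g. one that
    issues GET /health against the EvalHub route using the CA bundle.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Stand-in check that succeeds on the third attempt:
attempts = []

def fake_check():
    attempts.append(1)
    return len(attempts) >= 3

print(wait_for_healthy(fake_check, timeout=5.0, interval=0.01))  # True
```

In a real test the callable would wrap an HTTPS request to the service endpoint; the polling loop itself is independent of that detail.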
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🧹 Nitpick comments (1)
utilities/resources/evalhub.py (1)
3-6: Avoid utility-layer imports from tests.* modules.
At Line 3, utilities/resources/evalhub.py depends on tests.model_explainability.evalhub.constants, which creates a reversed module boundary. Please move shared EvalHub CR metadata to a non-test shared module and import from there in both places.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@utilities/resources/evalhub.py` around lines 3 - 6, The file utilities/resources/evalhub.py must not import from tests.model_explainability.evalhub.constants; extract the shared CR metadata constants (EVALHUB_KIND, EVALHUB_PLURAL) into a new non-test shared module (e.g., a module named evalhub_constants or shared.evalhub_constants) and update both utilities/resources/evalhub.py and tests.model_explainability.evalhub.constants to import EVALHUB_KIND and EVALHUB_PLURAL from that new shared module instead of from tests.* so the utility layer no longer depends on test code.
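A minimal sketch of the suggested extraction (the shared module path and the constant values here are assumptions for illustration; the PR's real names may differ):

```python
# Hypothetical shared module, e.g. utilities/constants/evalhub.py, holding
# the CR metadata so that neither layer has to import from tests.*:
EVALHUB_KIND = "EvalHub"      # assumed CR kind
EVALHUB_PLURAL = "evalhubs"   # assumed CR plural

# Both utilities/resources/evalhub.py and the test constants module would
# then import from the shared location instead of from test code:
#   from utilities.constants.evalhub import EVALHUB_KIND, EVALHUB_PLURAL
print(EVALHUB_KIND, EVALHUB_PLURAL)
```

This keeps the dependency direction one-way: tests may import utilities, but utilities never import tests.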
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/model_explainability/evalhub/conftest.py`:
- Around line 64-69: The fixture evalhub_ca_bundle_file may return an empty path
from create_ca_bundle_file(admin_client, ca_type="openshift"), causing later TLS
checks to fail with opaque errors; update the evalhub_ca_bundle_file fixture to
immediately fail-fast by asserting the returned CA bundle path is non-empty and
the file exists (e.g., check truthiness and os.path.exists/Path.exists on the
returned value) so failures point to the real missing OpenShift CA bundle;
reference create_ca_bundle_file and admin_client when adding these assertions.
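The fail-fast idea above can be sketched as follows. The sketch is self-contained: `create_ca_bundle_file` is stubbed with a temp file, since the PR's real helper talks to the cluster, and the fixture is shown as a plain function:

```python
from pathlib import Path
import tempfile

def create_ca_bundle_file(client, ca_type):
    # Stand-in for the PR's helper: write a dummy CA bundle to a temp
    # file and return its path, so the sketch runs without a cluster.
    f = tempfile.NamedTemporaryFile(suffix=".crt", delete=False)
    f.write(b"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n")
    f.close()
    return f.name

def evalhub_ca_bundle_file(admin_client=None):
    # In the PR this is a pytest fixture; shown as a plain function here.
    ca_bundle = create_ca_bundle_file(admin_client, ca_type="openshift")
    # Fail fast so a missing OpenShift CA bundle surfaces here rather
    # than as an opaque TLS error later in the test.
    assert ca_bundle, "create_ca_bundle_file returned an empty path"
    assert Path(ca_bundle).exists(), f"CA bundle file does not exist: {ca_bundle}"
    return ca_bundle

print(Path(evalhub_ca_bundle_file()).exists())  # True
```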
---
Nitpick comments:
In `@utilities/resources/evalhub.py`:
- Around line 3-6: The file utilities/resources/evalhub.py must not import from
tests.model_explainability.evalhub.constants; extract the shared CR metadata
constants (EVALHUB_KIND, EVALHUB_PLURAL) into a new non-test shared module
(e.g., a module named evalhub_constants or shared.evalhub_constants) and update
both utilities/resources/evalhub.py and
tests.model_explainability.evalhub.constants to import EVALHUB_KIND and
EVALHUB_PLURAL from that new shared module instead of from tests.* so the
utility layer no longer depends on test code.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (6)
tests/model_explainability/evalhub/__init__.py
tests/model_explainability/evalhub/conftest.py
tests/model_explainability/evalhub/constants.py
tests/model_explainability/evalhub/test_evalhub_health.py
tests/model_explainability/evalhub/utils.py
utilities/resources/evalhub.py
Thanks for the PR, @ruivieira! This looks great—I've just left one small comment.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@utilities/resources/evalhub.py`:
- Around line 38-40: The code currently unconditionally overwrites
self.res["spec"] with an empty dict when neither self.kind_dict nor
self.yaml_file are present, which can drop existing spec fields; change this to
non-destructive initialization (e.g., use self.res.setdefault("spec", {}) or
guard with if "spec" not in self.res: self.res["spec"] = {}) so existing spec
content is preserved, then assign _spec = self.res["spec"] as before in the
block that references _spec.
📒 Files selected for processing (1)
utilities/resources/evalhub.py
Status of building tag latest: success.
Summary
Add initial test coverage for the EvalHub service, including:
utilities/resources/
Related Issues
How it has been tested
Additional Requirements
Summary by CodeRabbit
Tests
Chores