test: add new tests for embedding models#1254
Conversation
Signed-off-by: Debarati Basu-Nag <dbasunag@redhat.com>
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)

📝 Walkthrough
Adds a class-scoped pytest fixture that fetches models filtered by the text-embedding task, plus tests that consume it.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ❌ Failed checks (1 warning) | ✅ Passed checks (1 passed)
Actionable comments posted: 2
🧹 Nitpick comments (1)
tests/model_registry/model_catalog/search/test_model_search.py (1)
458-472: Don't bind this task-filter test to a single source unless the query is source-scoped.

Line 468 requires every `tasks='text-embedding'` result to come from `OTHER_MODELS_CATALOG_ID`, but the fixture only filters on task. That couples the test to today's catalog inventory, so a legitimate embedding model added to another source will fail CI even when search behaves correctly. Either constrain the fixture/query to the intended source first, or drop the source-specific assertion from this class.

As per coding guidelines, `**`: REVIEW PRIORITIES: 3. Bug-prone patterns and error handling gaps.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_registry/model_catalog/search/test_model_search.py` around lines 458-472, the test `test_embedding_models_source_id` currently asserts that every item in `embedding_models_response` (a fixture filtered only by `task='text-embedding'`) has `source_id == OTHER_MODELS_CATALOG_ID`, coupling the test to the current catalog inventory. Either narrow the query/fixture to explicitly request the `OTHER_MODELS_CATALOG_ID` source (so `embedding_models_response` only contains models from that source), or remove the source-specific assertion entirely: (A) modify the fixture/query that builds `embedding_models_response` to include source filtering for `OTHER_MODELS_CATALOG_ID` before checking items, or (B) delete the assertion that compares `model["source_id"]` to `OTHER_MODELS_CATALOG_ID` and keep only task-based validations.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/model_registry/model_catalog/search/conftest.py`:
- Around line 14-18: The fixture in conftest calls
get_models_from_catalog_api(...) without changing the default page_size, so it
only returns page 1 and breaks whole-catalog assertions; update the fixture to
iterate/paginate through all pages returned by get_models_from_catalog_api (use
its page/token parameters or repeatedly call with incremented page/page_size
until no more results) or explicitly pass a sufficiently large page_size to
get_models_from_catalog_api to return the full result set; ensure you use the
same identifiers (model_catalog_rest_url, model_registry_rest_headers,
get_models_from_catalog_api) so downstream tests see all models matching the
"&filterQuery=tasks='text-embedding'" filter.
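The pagination fix described above could be sketched roughly as follows. This is a minimal illustration, not the repository's actual fixture: `fetch_page` stands in for `get_models_from_catalog_api` (whose real signature is not shown in this review), and the assumed payload shape (`{"items": [...], "size": ...}`) is inferred from the assertions discussed later in the review.

```python
from typing import Callable


def fetch_all_models(fetch_page: Callable[[int, int], dict], page_size: int = 100) -> list[dict]:
    """Collect items across every page so whole-catalog assertions see the full set.

    fetch_page(page_number, page_size) is a hypothetical stand-in for
    get_models_from_catalog_api; it is assumed to return a dict with an
    "items" list for the requested page.
    """
    items: list[dict] = []
    page = 1
    while True:
        payload = fetch_page(page, page_size)
        batch = payload.get("items", [])
        items.extend(batch)
        # A page shorter than page_size is taken to be the last page.
        if len(batch) < page_size:
            break
        page += 1
    return items
```

A fixture would then return `fetch_all_models(...)` instead of a single page, so downstream tests see every model matching the task filter. The alternative the prompt mentions, passing one sufficiently large `page_size`, avoids the loop but silently breaks again once the catalog outgrows that number.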
In `@tests/model_registry/model_catalog/search/test_model_search.py`:
- Around line 447-456: The test test_filter_query_by_text_embedding_task
currently only asserts embedding_models_response.get("size", 0) > 0; update it
to also assert that the returned payload contains a non-empty items list by
checking embedding_models_response.get("items", []) is truthy (or len(...) > 0).
Locate the test function test_filter_query_by_text_embedding_task and add an
explicit assertion on embedding_models_response.get("items", []) to ensure items
are present and not just relying on the reported size.
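The strengthened assertion the prompt asks for might look like the sketch below. The helper name `check_embedding_response` is hypothetical; only the response keys (`size`, `items`) come from the review text.

```python
def check_embedding_response(embedding_models_response: dict) -> None:
    # Assert both the reported size and the actual items payload, so a
    # response that claims size > 0 while returning an empty items list
    # fails instead of passing silently.
    assert embedding_models_response.get("size", 0) > 0
    items = embedding_models_response.get("items", [])
    assert items, "expected at least one embedding model in 'items'"
```

Checking `items` directly guards against a server bug where the count and the payload disagree, which the size-only assertion would miss.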
---
Nitpick comments:
In `@tests/model_registry/model_catalog/search/test_model_search.py`:
- Around line 458-472: The test test_embedding_models_source_id currently
asserts every item in embedding_models_response (fixture filtered only by
task='text-embedding') has source_id == OTHER_MODELS_CATALOG_ID, coupling the
test to current catalog inventory; either narrow the query/fixture to explicitly
request the OTHER_MODELS_CATALOG_ID source (so embedding_models_response only
contains models from that source) or remove the source-specific assertion
entirely — update the test_embedding_models_source_id to either (A) modify the
fixture/query that builds embedding_models_response to include source filtering
for OTHER_MODELS_CATALOG_ID before checking items, or (B) delete the assertion
that compares model["source_id"] to OTHER_MODELS_CATALOG_ID and keep only
task-based validations.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 5f721509-29f7-43c6-a391-90a9a43705a3
📒 Files selected for processing (2)
tests/model_registry/model_catalog/search/conftest.py
tests/model_registry/model_catalog/search/test_model_search.py
Signed-off-by: Debarati Basu-Nag <dbasunag@redhat.com>
Status of building tag latest: success.
* test: add new tests for embedding models

  Signed-off-by: Debarati Basu-Nag <dbasunag@redhat.com>

* fix: address review comments

  Signed-off-by: Debarati Basu-Nag <dbasunag@redhat.com>

---------

Signed-off-by: Debarati Basu-Nag <dbasunag@redhat.com>
Signed-off-by: Shehan Saleem <ssaleem@redhat.com>
Pull Request
Summary
Related Issues
How it has been tested
Additional Requirements
Summary by CodeRabbit