feat: add TrustyAIService Prometheus integration tests #301

adolfo-ab merged 2 commits into opendatahub-io:main
Conversation
Walkthrough

The changes introduce new Prometheus metric validation tests for drift and fairness metrics, refactor the metric validation utilities for greater flexibility, and consolidate Prometheus fixture usage. The metric validation logic now allows checking both metric values and labels. Related imports and fixtures are updated or removed to match the refactored utilities and test structure.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Test as Test Method
    participant Prom as Prometheus Instance
    participant Utils as Monitoring Utilities
    Test->>Prom: Query metric with metric_query
    Prom-->>Test: Metric results
    Test->>Utils: Extract field/label (get_metrics_value/get_metric_label)
    Utils-->>Test: Field/label value
    Test->>Utils: Validate value with validate_metrics_field (with timeout)
    Utils-->>Test: Assertion success or TimeoutExpiredError
```
lugi0
left a comment
couple questions but looks good overall
Actionable comments posted: 0
🧹 Nitpick comments (2)
tests/conftest.py (1)
526-533: Good centralization of the Prometheus fixture.

Creating a session-scoped Prometheus fixture in the global conftest file makes it available to all tests, promoting reuse and consistency across the test suite. The fixture is properly configured with the necessary parameters.

However, consider adding a docstring to describe the fixture's purpose and configuration.

```diff
 @pytest.fixture(scope="session")
 def prometheus(admin_client: DynamicClient) -> Prometheus:
+    """
+    Returns a configured Prometheus client for querying metrics from the cluster.
+
+    The client is configured to use the thanos-querier with the current user's token.
+    """
     return Prometheus(
         client=admin_client,
         resource_name="thanos-querier",
         verify_ssl=False,
         bearer_token=get_openshift_token(),
     )
```

tests/model_explainability/trustyai_service/fairness/test_fairness.py (1)
120-134: Well-implemented Prometheus integration test for fairness metrics.

This test verifies that TrustyAI's fairness metrics are properly exposed in Prometheus with the correct metadata. It makes good use of:

- The centralized Prometheus fixture
- The flexible `validate_metrics_field` function
- Partial function application to create a specialized label getter

To maintain consistency with other test methods, consider adding a docstring.

```diff
 def test_fairness_metric_prometheus(
     self,
     admin_client,
     model_namespace,
     trustyai_service_with_pvc_storage,
     onnx_loan_model,
     prometheus,
 ):
+    """Verify that TrustyAI fairness metrics are correctly exposed in Prometheus with proper labels."""
     validate_metrics_field(
         prometheus=prometheus,
         metrics_query=f'trustyai_{TrustyAIServiceMetrics.Fairness.SPD}{{namespace="{model_namespace.name}"}}',
         expected_value=TrustyAIServiceMetrics.Fairness.SPD.upper(),
         field_getter=partial(get_metric_label, label_name="metricName"),
     )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- tests/conftest.py (2 hunks)
- tests/model_explainability/trustyai_service/drift/test_drift.py (3 hunks)
- tests/model_explainability/trustyai_service/fairness/test_fairness.py (3 hunks)
- tests/model_serving/model_server/conftest.py (0 hunks)
- tests/model_serving/model_server/metrics/test_model_metrics.py (3 hunks)
- tests/model_serving/model_server/metrics/test_non_admin_users.py (2 hunks)
- utilities/monitoring.py (2 hunks)
💤 Files with no reviewable changes (1)
- tests/model_serving/model_server/conftest.py
🧰 Additional context used
🧬 Code Graph Analysis (5)
tests/conftest.py (1)
utilities/infra.py (1)
- get_openshift_token (475-483)
tests/model_serving/model_server/metrics/test_model_metrics.py (1)
utilities/monitoring.py (2)
- get_metrics_value (10-24)
- validate_metrics_field (53-88)
tests/model_serving/model_server/metrics/test_non_admin_users.py (1)
utilities/monitoring.py (1)
- validate_metrics_field (53-88)
tests/model_explainability/trustyai_service/fairness/test_fairness.py (5)
utilities/monitoring.py (2)
- validate_metrics_field (53-88)
- get_metric_label (27-50)

tests/conftest.py (3)
- admin_client (51-52)
- model_namespace (98-118)
- prometheus (527-533)

tests/model_explainability/trustyai_service/conftest.py (1)
- trustyai_service_with_pvc_storage (66-89)

tests/model_explainability/trustyai_service/fairness/conftest.py (1)
- onnx_loan_model (40-69)

tests/model_explainability/trustyai_service/trustyai_service_utils.py (2)
- TrustyAIServiceMetrics (32-37)
- Fairness (33-34)
utilities/monitoring.py (1)
tests/conftest.py (1)
- prometheus (527-533)
🔇 Additional comments (11)
tests/model_serving/model_server/metrics/test_non_admin_users.py (2)
10-10: Good refactoring to use the more flexible validation function.

The import change from `validate_metrics_value` to `validate_metrics_field` aligns with the broader refactoring in the monitoring utilities, providing more flexibility for metric validation.
70-74: Properly adapted function call to use the new API.

The call to `validate_metrics_field` maintains the same validation logic while leveraging the improved, more flexible API. This change ensures consistent metric validation behavior with enhanced capabilities.

tests/model_serving/model_server/metrics/test_model_metrics.py (3)
17-17: Appropriate update to import both the validation function and the value getter.

The import now includes both `validate_metrics_field` and `get_metrics_value`, providing access to both the high-level validation function and the lower-level value getter if needed.
60-64: Successfully migrated to the enhanced metric validation API.

This change properly replaces the deprecated `validate_metrics_value` with the more flexible `validate_metrics_field` function, maintaining the same validation logic for successful model requests.
81-85: Consistent update to use the new validation API for total requests.

The migration to `validate_metrics_field` is done consistently across all metric validations, ensuring a uniform approach to metric testing throughout the test suite.

tests/conftest.py (1)
17-17: Added required import for the Prometheus client.

This import supports the new centralized Prometheus fixture, ensuring the necessary class is available to create the Prometheus client instance.
tests/model_explainability/trustyai_service/fairness/test_fairness.py (2)
1-1: Added `partial` function import for creating specialized field getters.

Good use of `functools.partial` to create specialized field getter functions for the metric validation.
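As a minimal illustration of the pattern (the `get_metric_label` body here is a toy stand-in; only its `label_name` keyword and the `field_getter` callback shape are taken from this PR's call sites):

```python
from functools import partial

# Toy stand-in for utilities/monitoring.py's get_metric_label; the real
# implementation also logs and reads from Prometheus query results.
def get_metric_label(metric, label_name):
    return metric["metric"][label_name]

# Bind label_name up front so the result is a one-argument callable,
# matching the field_getter shape passed to validate_metrics_field.
get_metric_name = partial(get_metric_label, label_name="metricName")

sample = {"metric": {"metricName": "SPD", "namespace": "demo"}}
print(get_metric_name(sample))  # -> SPD
```

Each metric attribute to validate then needs only a new `partial` binding rather than a new wrapper function.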
17-17: Updated imports for the metric validation utilities.

The import now includes both `validate_metrics_field` and `get_metric_label`, providing access to the necessary functions for validating Prometheus metrics.

tests/model_explainability/trustyai_service/drift/test_drift.py (1)
109-123: Well-implemented Prometheus integration test for drift metrics.

This test effectively validates that TrustyAI drift metrics are properly exposed in Prometheus, ensuring that monitoring systems can track model drift. The use of `partial` with `get_metric_label` is a clean approach to customize the field extraction.

utilities/monitoring.py (2)
10-51: Good extraction of metric value and label retrieval into separate functions.

These helper functions improve code modularity by clearly separating concerns:

- `get_metrics_value` cleanly extracts numeric values from Prometheus query results
- `get_metric_label` provides focused label extraction with appropriate logging

This separation makes the code more maintainable and facilitates the new label-based validations needed for TrustyAI metrics.
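A sketch of the two helpers described above, assuming the standard Prometheus instant-query result shape (`[{"metric": {<labels>}, "value": [<timestamp>, "<value-string>"]}, ...]`); the actual implementations live in utilities/monitoring.py and their exact signatures are not shown in this PR:

```python
def get_metrics_value(results):
    """Return the numeric value of the first sample, or None if empty."""
    if not results:
        return None
    # Prometheus returns values as strings paired with a timestamp.
    return float(results[0]["value"][1])

def get_metric_label(results, label_name):
    """Return the requested label from the first sample's label set."""
    if not results:
        return None
    return results[0]["metric"].get(label_name)

results = [{"metric": {"metricName": "SPD"}, "value": [1714000000, "0.03"]}]
print(get_metrics_value(results))               # -> 0.03
print(get_metric_label(results, "metricName"))  # -> SPD
```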
53-88: Well-designed refactoring of the metrics validation function.

The `validate_metrics_field` function generalizes the validation logic by:

- Using a customizable `field_getter` to support validation of any metric attribute
- Defaulting to value checking for backward compatibility
- Providing clear error messages and logging
- Using `TimeoutSampler` appropriately to handle eventual consistency

This flexible approach elegantly supports both the new label-based validations and existing value checks.
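A simplified sketch of this validation shape, with a plain polling loop standing in for `TimeoutSampler` and an assumed `prometheus.query` client method; only the parameter names (`prometheus`, `metrics_query`, `expected_value`, `field_getter`) are taken from the call sites in this PR:

```python
import time

class TimeoutExpiredError(Exception):
    """Simplified stand-in for the timeout error raised by the real sampler."""

def validate_metrics_field(prometheus, metrics_query, expected_value,
                           field_getter, timeout=5, interval=1):
    """Poll Prometheus until field_getter(results) matches expected_value.

    The real utilities/monitoring.py version uses TimeoutSampler for the
    retry loop; this loop only illustrates the eventual-consistency handling.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        results = prometheus.query(metrics_query)  # assumed client method
        if field_getter(results) == expected_value:
            return
        time.sleep(interval)
    raise TimeoutExpiredError(
        f"{metrics_query!r} never yielded {expected_value!r} within {timeout}s"
    )

# Usage with a fake client that always returns one SPD sample:
class FakePrometheus:
    def query(self, query):
        return [{"metric": {"metricName": "SPD"}, "value": [0, "0.03"]}]

validate_metrics_field(
    prometheus=FakePrometheus(),
    metrics_query='trustyai_spd{namespace="demo"}',
    expected_value="SPD",
    field_getter=lambda r: r[0]["metric"]["metricName"],
)
```

Because `field_getter` is the only part that knows which attribute to read, the same loop serves both value checks and the new label checks.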
7b617c6 to d291201
/verified

Status of building tag latest: success.
Add tests to check the integration between TrustyAIService and Prometheus, for both drift and fairness metrics.
How Has This Been Tested?
Running the tests against a working cluster.