feat: add keda scaling tests #479
Conversation
📝 Walkthrough

Summary by CodeRabbit

This change introduces comprehensive support for testing KEDA-based autoscaling of inference services in a Kubernetes environment. It adds new test modules, fixtures, and utility functions for configuring, deploying, and validating KEDA scaling with both CPU and GPU model servers. Several utility and configuration files are updated to support autoscaling and monitoring.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes
Actionable comments posted: 13
🧹 Nitpick comments (6)
utilities/infra.py (1)
1006-1027: Fix return type annotation and documentation inconsistency. The function implementation is solid, but there are minor documentation and type annotation issues:
- The return type is annotated as `list[Any]`, but the function returns a single ScaledObject resource, not a list
- The docstring says "A list of all matching ScaledObjects", but only one ScaledObject is returned
Apply this diff to fix the return type and documentation:

```diff
-def get_isvc_keda_scaledobject(client: DynamicClient, isvc: InferenceService) -> list[Any]:
+def get_isvc_keda_scaledobject(client: DynamicClient, isvc: InferenceService) -> Any:
     """
     Get KEDA ScaledObject resources associated with an InferenceService.

     Args:
         client (DynamicClient): OCP Client to use.
         isvc (InferenceService): InferenceService object.

     Returns:
-        list[Any]: A list of all matching ScaledObjects
+        Any: The ScaledObject associated with the InferenceService

     Raises:
         ResourceNotFoundError: if no ScaledObjects are found.
     """
```

tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (2)
22-29: Consider extracting the metrics query to a constant or function. While the multi-line f-string is readable, consider extracting it to a function that takes namespace and model name as parameters for better reusability and testability.
```diff
-OVMS_METRICS_QUERY = (
-    f"sum by (name) (rate(ovms_inference_time_us_sum{{"
-    f"namespace='{OVMS_MODEL_NAMESPACE}', name='{OVMS_MODEL_NAME}'"
-    f"}}[5m])) / "
-    f"sum by (name) (rate(ovms_inference_time_us_count{{"
-    f"namespace='{OVMS_MODEL_NAMESPACE}', name='{OVMS_MODEL_NAME}'"
-    f"}}[5m]))"
-)
+def create_ovms_metrics_query(namespace: str, model_name: str) -> str:
+    """Create OVMS metrics query for average inference time."""
+    return (
+        f"sum by (name) (rate(ovms_inference_time_us_sum{{"
+        f"namespace='{namespace}', name='{model_name}'"
+        f"}}[5m])) / "
+        f"sum by (name) (rate(ovms_inference_time_us_count{{"
+        f"namespace='{namespace}', name='{model_name}'"
+        f"}}[5m]))"
+    )
+
+
+OVMS_METRICS_QUERY = create_ovms_metrics_query(OVMS_MODEL_NAMESPACE, OVMS_MODEL_NAME)
```
58-59: Consider renaming Generator parameters for clarity. The parameter names `stressed_ovms_keda_inference_service` and `stressed_keda_vllm_inference_service` suggest they are InferenceService objects, but they're actually Generators. Consider using names that better reflect their nature. For consistency with pytest conventions, you could keep the current names but add a comment explaining that these are generator fixtures that yield InferenceService objects.
Also applies to: 82-83
tests/model_serving/model_server/utils.py (1)
330-331: Add return type annotation. The function is missing a return type annotation.
```diff
-def verify_final_pod_count(unprivileged_client: DynamicClient, isvc: InferenceService, final_pod_count: int):
+def verify_final_pod_count(unprivileged_client: DynamicClient, isvc: InferenceService, final_pod_count: int) -> None:
```

tests/model_serving/model_server/keda/conftest.py (2)
15-16: Consolidate duplicate imports from utilities.constants. You have multiple import statements from the same module. Consolidate them into a single import.
```diff
-from utilities.constants import (
-    KServeDeploymentType,
-    RuntimeTemplates,
-    Labels,
-)
-from utilities.constants import (
-    ModelAndFormat,
-)
+from utilities.constants import (
+    KServeDeploymentType,
+    RuntimeTemplates,
+    Labels,
+    ModelAndFormat,
+    THANOS_QUERIER_ADDRESS,
+)
```

And remove line 29:
```diff
-from utilities.constants import THANOS_QUERIER_ADDRESS
```

Also applies to: 24-26
196-196: Fix typo in error message. "Accelartor" should be "Accelerator".
```diff
-    pytest.skip("Accelartor type is not provide,vLLM test can not be run on CPU")
+    pytest.skip("Accelerator type is not provided, vLLM test cannot be run on CPU")
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- pytest.ini (1 hunks)
- tests/model_serving/model_server/keda/conftest.py (1 hunks)
- tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (1 hunks)
- tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py (1 hunks)
- tests/model_serving/model_server/utils.py (3 hunks)
- utilities/constants.py (1 hunks)
- utilities/inference_utils.py (3 hunks)
- utilities/infra.py (1 hunks)
- utilities/monitoring.py (3 hunks)
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
utilities/infra.py (1)
Learnt from: Snomaan6846
PR: #328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name volumes_mounts (with underscore), not volume_mounts, for passing volume mount configurations to InferenceService resources.
utilities/inference_utils.py (2)
Learnt from: Snomaan6846
PR: #328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name volumes_mounts (with underscore), not volume_mounts, for passing volume mount configurations to InferenceService resources.
Learnt from: israel-hdez
PR: #346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper create_isvc (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.
tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (2)
Learnt from: adolfo-ab
PR: #334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
Learnt from: Snomaan6846
PR: #444
File: tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py:48-714
Timestamp: 2025-07-16T12:20:29.672Z
Learning: In tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py, the same get_deployment_config_dict() function is called twice in each pytest.param because different fixtures (mlserver_inference_service and mlserver_serving_runtime) need the same deployment configuration data. This duplication is intentional to provide identical configuration to multiple fixtures.
tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py (2)
Learnt from: Snomaan6846
PR: #444
File: tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py:48-714
Timestamp: 2025-07-16T12:20:29.672Z
Learning: In tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py, the same get_deployment_config_dict() function is called twice in each pytest.param because different fixtures (mlserver_inference_service and mlserver_serving_runtime) need the same deployment configuration data. This duplication is intentional to provide identical configuration to multiple fixtures.
Learnt from: adolfo-ab
PR: #334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
tests/model_serving/model_server/utils.py (2)
Learnt from: israel-hdez
PR: #346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper create_isvc (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.
Learnt from: Snomaan6846
PR: #328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name volumes_mounts (with underscore), not volume_mounts, for passing volume mount configurations to InferenceService resources.
tests/model_serving/model_server/keda/conftest.py (8)
Learnt from: Snomaan6846
PR: #444
File: tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py:48-714
Timestamp: 2025-07-16T12:20:29.672Z
Learning: In tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py, the same get_deployment_config_dict() function is called twice in each pytest.param because different fixtures (mlserver_inference_service and mlserver_serving_runtime) need the same deployment configuration data. This duplication is intentional to provide identical configuration to multiple fixtures.
Learnt from: israel-hdez
PR: #346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper create_isvc (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.
Learnt from: adolfo-ab
PR: #334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:595-612
Timestamp: 2025-07-17T15:42:04.167Z
Learning: In tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fixtures) do not require the admin_client parameter or explicit dependencies on related fixtures like db_secret_1, db_pvc_1, and db_service_1, even though the original model_registry_db_deployment fixture includes these parameters.
Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.
Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:579-591
Timestamp: 2025-07-17T15:43:04.876Z
Learning: In tests/model_registry/conftest.py, the db_service_1 and db_service_2 fixtures do not require the admin_client parameter for Service resource creation, despite the existing model_registry_db_service fixture using client=admin_client. This inconsistency was confirmed as intentional by user lugi0.
Learnt from: dbasunag
PR: #401
File: tests/model_registry/rest_api/mariadb/conftest.py:89-110
Timestamp: 2025-07-04T00:17:47.799Z
Learning: In tests/model_registry/rest_api/mariadb/conftest.py, the model_registry_with_mariadb fixture should always use OAUTH_PROXY_CONFIG_DICT for the oauth_proxy parameter regardless of the is_model_registry_oauth parameter value, based on expected product behavior for MariaDB-backed ModelRegistry instances.
Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:0-0
Timestamp: 2025-07-17T15:42:26.275Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated Model Registry instance fixtures) do not require admin_client, db_deployment_1, or db_secret_1 parameters as explicit dependencies, even though these dependencies exist implicitly through the fixture dependency chain.
🧬 Code Graph Analysis (1)
tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (5)
- tests/model_serving/model_server/utils.py (2): verify_keda_scaledobject (237-265), verify_final_pod_count (330-342)
- utilities/constants.py (3): ModelFormat (12-19), ModelVersion (60-62), RunTimeConfigs (234-238)
- utilities/monitoring.py (1): validate_metrics_field (53-95)
- tests/conftest.py (2): unprivileged_client (363-403), prometheus (598-606)
- tests/model_serving/model_server/keda/conftest.py (1): stressed_ovms_keda_inference_service (159-190)
🪛 GitHub Actions: Tox Tests
tests/model_serving/model_server/utils.py
[error] 17-17: ImportError: cannot import name 'verify_inference_response' from partially initialized module 'tests.model_serving.model_server.utils' due to a circular import. This causes multiple test modules to fail importing this utility.
tests/model_serving/model_server/keda/conftest.py
[error] 20-20: ImportError caused by circular import involving 'verify_inference_response' from 'tests.model_serving.model_server.utils'.
🔇 Additional comments (5)
utilities/constants.py (1)
325-326: LGTM! Well-placed constant for KEDA monitoring integration. The addition of `THANOS_QUERIER_ADDRESS` provides a centralized endpoint for Prometheus queries needed by the new KEDA autoscaling tests. The endpoint address is correctly formatted for the OpenShift monitoring service.

pytest.ini (1)
33-33: LGTM! Proper test categorization for KEDA scaling tests. The new `keda` marker follows the established pattern and will enable selective execution of KEDA-related autoscaling tests.

utilities/inference_utils.py (1)
580-580: LGTM! Clean integration of autoscaling parameter. The `auto_scaling` parameter is well-implemented with proper type annotation, documentation, and follows the established pattern of conditional parameter usage in the function.

Also applies to: 615-615, 665-666
utilities/monitoring.py (1)
59-59: LGTM! Flexible metric validation enhancement. The `greater_than` parameter addition is well-implemented with proper type annotation, documentation, and backward compatibility. The float conversion for numerical comparison and updated logging messages make the functionality clear and robust.

Also applies to: 71-71, 84-91
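The direction-aware comparison described above can be illustrated with a minimal, standalone sketch; the function name and signature here are hypothetical, not the actual `validate_metrics_field` API in utilities/monitoring.py:

```python
# Hypothetical sketch of a greater_than-style metric check; Prometheus
# returns sample values as strings, so convert before comparing.
def metric_satisfies(value: str, expected: float, greater_than: bool = True) -> bool:
    """Return True when the metric satisfies the configured comparison direction."""
    numeric = float(value)
    return numeric > expected if greater_than else numeric < expected


# Scale-up style check: observed value must exceed the threshold.
print(metric_satisfies("1500.0", 1000.0))                     # True
# Scale-down style check: observed value must fall below the threshold.
print(metric_satisfies("800.0", 1000.0, greater_than=False))  # True
```

Keeping the default `greater_than=True` preserves backward compatibility for existing callers that only ever checked for values above a threshold.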
tests/model_serving/model_server/keda/conftest.py (1)
20-23: Ignore false circular import in keda/conftest.py. Verification shows that tests/model_serving/model_server/utils.py does not import from any keda modules, so the import of `run_vllm_concurrent_load` and `run_ovms_concurrent_load` in tests/model_serving/model_server/keda/conftest.py does not introduce a cycle. The actual circular dependency is between utils.py and serverless/utils.py, not this conftest file.

No changes needed here; please disregard this suggestion. Likely an incorrect or invalid review comment.
Replacing #327 as the branch was damaged by cursor beyond recovery 😢 |
Signed-off-by: Vedant Mahabaleshwarkar <vmahabal@redhat.com>
PTAL @mwaykole |
Actionable comments posted: 0
♻️ Duplicate comments (9)
tests/model_serving/model_server/utils.py (5)
3-3: Circular import issue persists. The circular import between this module and serverless/utils.py remains unresolved. This will cause import failures.
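The standard remedy for such a load-time cycle is to defer one side of the import into a function body so it resolves at call time, after both modules have finished initializing. A self-contained sketch (module names `a_utils`/`b_utils` are hypothetical stand-ins for utils.py and serverless/utils.py, not the repo's actual layout):

```python
import os
import sys
import tempfile
import textwrap

# Build two small modules on disk to demonstrate the deferred-import fix.
tmp = tempfile.mkdtemp()

with open(os.path.join(tmp, "a_utils.py"), "w") as f:
    f.write(textwrap.dedent("""
        def verify_inference_response():
            return "verified"

        def verify_with_load():
            # Deferred import: resolved at call time, after both modules
            # have finished loading, so no cycle exists at import time.
            from b_utils import run_concurrent_load
            return run_concurrent_load()
    """))

with open(os.path.join(tmp, "b_utils.py"), "w") as f:
    f.write(textwrap.dedent("""
        # A top-level import in this one direction is now safe.
        from a_utils import verify_inference_response

        def run_concurrent_load():
            return verify_inference_response()
    """))

sys.path.insert(0, tmp)
import a_utils

print(a_utils.verify_with_load())  # verified
```

The alternative fix is to move the shared helper into a third module that both files import, which avoids function-local imports entirely.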
236-265: LGTM! KEDA ScaledObject verification function looks good. The function correctly retrieves and validates KEDA ScaledObject configurations. However, there's a minor type conversion issue in the threshold comparison.
Apply this fix for proper type comparison:
```diff
-    assert threshold == expected_threshold, f"Threshold {threshold} does not match expected {expected_threshold}"
+    assert str(threshold) == str(expected_threshold), f"Threshold {threshold} does not match expected {expected_threshold}"
```
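The mismatch this suggestion guards against is easy to reproduce: KEDA trigger metadata is a string map, so the threshold read back from the ScaledObject arrives as a string while the test configured an integer. A minimal illustration (variable names are illustrative):

```python
# Threshold as read back from the ScaledObject trigger metadata (a string map).
threshold = "10"
# Threshold as configured by the test (an integer).
expected_threshold = 10

print(threshold == expected_threshold)            # False: str vs int never match
print(str(threshold) == str(expected_threshold))  # True after normalizing both sides
```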
267-283: Fix docstring parameters. The docstring contains parameters that don't exist in the function signature.
Apply this fix:
""" Run VLLM concurrent load. Args: - pod_name: Pod name isvc: InferenceService instance - response_snapshot: Response snapshot - chat_query: Chat query - completion_query: Completion query num_concurrent: Number of concurrent requests duration: Duration in seconds to run the load test """
301-311: Complete the docstring with missing parameters. The docstring is missing documentation for the `num_concurrent` and `duration` parameters.

Apply this fix:
""" Run OVMS concurrent load. Args: isvc: InferenceService instance + num_concurrent: Number of concurrent requests + duration: Duration in seconds to run the load test """
354-367: Fix logic to properly wait for pod count stabilization. The current implementation exits immediately when any pods are found, regardless of whether the count matches the expected value. This defeats the purpose of waiting for stabilization.
Apply this fix:
```diff
     for pods in inference_service_pods_sampler(
         client=unprivileged_client,
         isvc=isvc,
         timeout=Timeout.TIMEOUT_5MIN,
         sleep=10,
     ):
         if pods:
-            assert len(pods) == final_pod_count, (
-                f"Final pod count {len(pods)} does not match expected {final_pod_count}"
-            )
+            if len(pods) == final_pod_count:
+                LOGGER.info(f"Pod count stabilized at {final_pod_count}")
+                return
+            LOGGER.info(f"Current pod count: {len(pods)}, waiting for {final_pod_count}")
+
+    # If we exit the loop without returning, we've timed out
+    raise TimeoutError(f"Timed out waiting for pod count to reach {final_pod_count}")
```

No extra import is needed: `TimeoutError` here is the Python builtin. Importing it from timeout_sampler would shadow the builtin, and the library's own exception class is `TimeoutExpiredError`, not `TimeoutError`.

tests/model_serving/model_server/keda/conftest.py (4)
42-46: Remove unused parameters from docstring. The docstring mentions parameters that don't exist in the function signature.
Apply this fix:
"""Create KEDA auto-scaling configuration for inference services. Args: query: The Prometheus query to use for scaling - model_name: Name of the model - namespace: Kubernetes namespace target_value: Target value for the metric Returns: dict: Auto-scaling configuration """
145-150: Remove unused arguments from function call. The `model_name` and `namespace` parameters are not used by the `create_keda_auto_scaling_config` function.

Apply this fix:
isvc_kwargs["auto_scaling"] = create_keda_auto_scaling_config( query=request.param.get("metrics_query"), - model_name=request.param["name"], - namespace=model_namespace.name, target_value=str(request.param.get("metrics_threshold")), )
154-154: Fix undefined variable 'response_snapshot'. The `response_snapshot` variable is not defined in this fixture's scope.

Apply this fix:
```diff
-    run_vllm_concurrent_load(isvc=isvc, response_snapshot=response_snapshot)
+    run_vllm_concurrent_load(isvc=isvc)
```
181-186: Remove unused arguments from second function call. Same issue as the first occurrence: unused parameters being passed.
Apply this fix:
```diff
     auto_scaling=create_keda_auto_scaling_config(
         query=request.param["metrics_query"],
-        model_name=model_name,
-        namespace=unprivileged_model_namespace.name,
         target_value=str(request.param["metrics_threshold"]),
     ),
```
🧹 Nitpick comments (1)
tests/model_serving/model_server/keda/conftest.py (1)
194-196: Fix typo in error message. There's a spelling error in the skip message.
Apply this fix:
```diff
     if not supported_accelerator_type:
-        pytest.skip("Accelartor type is not provide,vLLM test can not be run on CPU")
+        pytest.skip("Accelerator type is not provided, vLLM test cannot be run on CPU")
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- pytest.ini (1 hunks)
- tests/model_serving/model_server/keda/conftest.py (1 hunks)
- tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (1 hunks)
- tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py (1 hunks)
- tests/model_serving/model_server/serverless/test_concurrency_auto_scale.py (1 hunks)
- tests/model_serving/model_server/serverless/utils.py (0 hunks)
- tests/model_serving/model_server/utils.py (3 hunks)
- utilities/constants.py (1 hunks)
- utilities/inference_utils.py (3 hunks)
- utilities/infra.py (1 hunks)
- utilities/monitoring.py (3 hunks)
💤 Files with no reviewable changes (1)
- tests/model_serving/model_server/serverless/utils.py
✅ Files skipped from review due to trivial changes (3)
- pytest.ini
- tests/model_serving/model_server/serverless/test_concurrency_auto_scale.py
- utilities/constants.py
🚧 Files skipped from review as they are similar to previous changes (5)
- utilities/monitoring.py
- utilities/inference_utils.py
- utilities/infra.py
- tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py
- tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
tests/model_serving/model_server/utils.py (3)
Learnt from: israel-hdez
PR: #346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper create_isvc (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.
Learnt from: Snomaan6846
PR: #328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name volumes_mounts (with underscore), not volume_mounts, for passing volume mount configurations to InferenceService resources.
Learnt from: fege
PR: #409
File: tests/model_registry/negative_tests/test_db_migration.py:47-57
Timestamp: 2025-07-09T08:50:22.172Z
Learning: In negative tests for model registry database migration failures (like test_db_migration_negative), the model registry pod is expected to be in a failed state (not Running) when the database is dirty. The test fetches logs from the failed pod to verify the expected error message about dirty database version, so waiting for Running status would be inappropriate.
tests/model_serving/model_server/keda/conftest.py (10)
Learnt from: Snomaan6846
PR: #444
File: tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py:48-714
Timestamp: 2025-07-16T12:20:29.672Z
Learning: In tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py, the same get_deployment_config_dict() function is called twice in each pytest.param because different fixtures (mlserver_inference_service and mlserver_serving_runtime) need the same deployment configuration data. This duplication is intentional to provide identical configuration to multiple fixtures.
Learnt from: adolfo-ab
PR: #334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:595-612
Timestamp: 2025-07-17T15:42:04.167Z
Learning: In tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fixtures) do not require the admin_client parameter or explicit dependencies on related fixtures like db_secret_1, db_pvc_1, and db_service_1, even though the original model_registry_db_deployment fixture includes these parameters.
Learnt from: israel-hdez
PR: #346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper create_isvc (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.
Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.
Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:579-591
Timestamp: 2025-07-17T15:43:04.876Z
Learning: In tests/model_registry/conftest.py, the db_service_1 and db_service_2 fixtures do not require the admin_client parameter for Service resource creation, despite the existing model_registry_db_service fixture using client=admin_client. This inconsistency was confirmed as intentional by user lugi0.
Learnt from: dbasunag
PR: #401
File: tests/model_registry/rest_api/mariadb/conftest.py:89-110
Timestamp: 2025-07-04T00:17:47.799Z
Learning: In tests/model_registry/rest_api/mariadb/conftest.py, the model_registry_with_mariadb fixture should always use OAUTH_PROXY_CONFIG_DICT for the oauth_proxy parameter regardless of the is_model_registry_oauth parameter value, based on expected product behavior for MariaDB-backed ModelRegistry instances.
Learnt from: Snomaan6846
PR: #328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name volumes_mounts (with underscore), not volume_mounts, for passing volume mount configurations to InferenceService resources.
Learnt from: dbasunag
PR: #429
File: tests/model_registry/rbac/test_mr_rbac_sa.py:45-45
Timestamp: 2025-07-30T14:15:25.605Z
Learning: In tests/model_registry/rbac/test_mr_rbac_sa.py, bounds checking for model_registry_instance_rest_endpoint list access is not needed because upstream fixture validation already ensures endpoints exist before the tests execute. The Model Registry setup process validates endpoint availability, making additional bounds checks redundant.
Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:0-0
Timestamp: 2025-07-17T15:42:26.275Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated Model Registry instance fixtures) do not require admin_client, db_deployment_1, or db_secret_1 parameters as explicit dependencies, even though these dependencies exist implicitly through the fixture dependency chain.
🧬 Code Graph Analysis (1)
tests/model_serving/model_server/utils.py (5)
- utilities/constants.py (2): Timeout (212-223), Protocols (91-98)
- utilities/inference_utils.py (2): UserInference (141-543), Inference (45-138)
- utilities/infra.py (2): get_isvc_keda_scaledobject (1006-1026), get_pods_by_isvc_label (565-592)
- tests/model_serving/conftest.py (1): protocol (26-36)
- tests/conftest.py (1): unprivileged_client (363-403)
🔇 Additional comments (3)
tests/model_serving/model_server/keda/conftest.py (3)
73-93: LGTM! Well-structured vLLM serving runtime fixture. The fixture properly configures the vLLM serving runtime with appropriate accelerator detection and template mapping.

95-156: LGTM! Comprehensive KEDA vLLM inference service fixture. The fixture properly sets up GPU resources, runtime arguments, and scaling configuration. The logic for handling multi-GPU setups and quantization is well implemented.

158-191: LGTM! OVMS KEDA inference service fixture looks good. The fixture correctly configures OVMS with KEDA autoscaling and waits for readiness before running load tests.
Signed-off-by: Vedant Mahabaleshwarkar <vmahabal@redhat.com>
Actionable comments posted: 1
♻️ Duplicate comments (4)
tests/model_serving/model_server/keda/conftest.py (4)
42-46: Remove unused parameters from docstring. The `model_name` and `namespace` parameters are not used in the function but are mentioned in the docstring.
145-150: Remove unused arguments from create_keda_auto_scaling_config calls. The `model_name` and `namespace` parameters are not used by the function.
154-158: Fix undefined variable 'response_snapshot'. The `response_snapshot` variable is not defined in this scope. It should be passed as a parameter or removed if not needed.
185-190: Remove unused arguments from create_keda_auto_scaling_config calls. The `model_name` and `namespace` parameters are not used by the function.
🧹 Nitpick comments (1)
tests/model_serving/model_server/keda/conftest.py (1)
200-203: Fix typo in skip message. There's a spelling error in the skip message.
```diff
-    pytest.skip("Accelartor type is not provide,vLLM test can not be run on CPU")
+    pytest.skip("Accelerator type is not provided, vLLM test cannot be run on CPU")
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- tests/model_serving/model_server/keda/conftest.py (1 hunks)
- tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (1 hunks)
- tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py (1 hunks)
- tests/model_serving/model_server/utils.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/model_serving/model_server/utils.py
- tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py
- tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py
🧰 Additional context used
🧠 Learnings (13)
📓 Common learnings
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
📚 Learning: in tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deplo...
Learnt from: Snomaan6846
PR: opendatahub-io/opendatahub-tests#444
File: tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py:48-714
Timestamp: 2025-07-16T12:20:29.672Z
Learning: In tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py, the same get_deployment_config_dict() function is called twice in each pytest.param because different fixtures (mlserver_inference_service and mlserver_serving_runtime) need the same deployment configuration data. This duplication is intentional to provide identical configuration to multiple fixtures.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: for trustyai image validation tests: operator image tests require admin_client, related_images_refs,...
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fi...
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:595-612
Timestamp: 2025-07-17T15:42:04.167Z
Learning: In tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fixtures) do not require the admin_client parameter or explicit dependencies on related fixtures like db_secret_1, db_pvc_1, and db_service_1, even though the original model_registry_db_deployment fixture includes these parameters.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: the helper `create_isvc` (used in tests/model_serving utilities) already waits until the created inf...
Learnt from: israel-hdez
PR: opendatahub-io/opendatahub-tests#346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper `create_isvc` (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fix...
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/conftest.py, the db_service_1 and db_service_2 fixtures do not require the a...
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:579-591
Timestamp: 2025-07-17T15:43:04.876Z
Learning: In tests/model_registry/conftest.py, the db_service_1 and db_service_2 fixtures do not require the admin_client parameter for Service resource creation, despite the existing model_registry_db_service fixture using client=admin_client. This inconsistency was confirmed as intentional by user lugi0.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/rest_api/mariadb/conftest.py, the model_registry_with_mariadb fixture should...
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#401
File: tests/model_registry/rest_api/mariadb/conftest.py:89-110
Timestamp: 2025-07-04T00:17:47.799Z
Learning: In tests/model_registry/rest_api/mariadb/conftest.py, the model_registry_with_mariadb fixture should always use OAUTH_PROXY_CONFIG_DICT for the oauth_proxy parameter regardless of the is_model_registry_oauth parameter value, based on expected product behavior for MariaDB-backed ModelRegistry instances.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated m...
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:0-0
Timestamp: 2025-07-17T15:42:26.275Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated Model Registry instance fixtures) do not require admin_client, db_deployment_1, or db_secret_1 parameters as explicit dependencies, even though these dependencies exist implicitly through the fixture dependency chain.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: the create_isvc function in utilities/inference_utils.py uses the parameter name `volumes_mounts` (w...
Learnt from: Snomaan6846
PR: opendatahub-io/opendatahub-tests#328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name `volumes_mounts` (with underscore), not `volume_mounts`, for passing volume mount configurations to InferenceService resources.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/rbac/test_mr_rbac_sa.py, bounds checking for model_registry_instance_rest_en...
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#429
File: tests/model_registry/rbac/test_mr_rbac_sa.py:45-45
Timestamp: 2025-07-30T14:15:25.605Z
Learning: In tests/model_registry/rbac/test_mr_rbac_sa.py, bounds checking for model_registry_instance_rest_endpoint list access is not needed because upstream fixture validation already ensures endpoints exist before the tests execute. The Model Registry setup process validates endpoint availability, making additional bounds checks redundant.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: in the opendatahub-tests repository, prefer keeping test parameterization configurations inline rath...
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#338
File: tests/model_registry/rbac/test_mr_rbac.py:24-53
Timestamp: 2025-06-06T12:22:57.057Z
Learning: In the opendatahub-tests repository, prefer keeping test parameterization configurations inline rather than extracting them to separate variables/constants, as it makes triaging easier by avoiding the need to jump between different parts of the file to understand the test setup.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
📚 Learning: in the opendatahub-tests repository, the oauth cluster configuration's identityproviders field shoul...
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#354
File: tests/model_registry/rbac/conftest.py:212-224
Timestamp: 2025-06-16T13:00:51.478Z
Learning: In the opendatahub-tests repository, the OAuth cluster configuration's identityProviders field should not be empty/None and doesn't require defensive programming checks when concatenating with lists.
Applied to files:
tests/model_serving/model_server/keda/conftest.py
🔇 Additional comments (3)
tests/model_serving/model_server/keda/conftest.py (3)
74-92: LGTM! The fixture correctly creates a vLLM serving runtime with appropriate template selection based on accelerator type and proper fallback handling.
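The selection-with-fallback pattern described above can be sketched roughly as follows; the template names and accelerator keys here are illustrative assumptions, not the fixture's actual values:

```python
def select_vllm_runtime_template(accelerator_type):
    # Map known accelerator types to a runtime template name; fall back to a
    # default template when the type is missing or unrecognized.
    templates = {
        "nvidia": "vllm-cuda-runtime-template",
        "amd": "vllm-rocm-runtime-template",
        "gaudi": "vllm-gaudi-runtime-template",
    }
    key = (accelerator_type or "").lower()
    return templates.get(key, "vllm-runtime-template")
```

Keeping the fallback in one place means a cluster without a labeled accelerator still gets a usable runtime rather than a KeyError.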
206-208: LGTM! The fixture correctly configures the JSON snapshot extension for response validation.
193-196: Response snapshot parameter is not supported in `run_concurrent_load_for_keda_scaling`. The function defined in
`tests/model_serving/model_server/utils.py:264` only accepts:

- `isvc`
- `inference_config`
- `num_concurrent`
- `duration`

It does not have a `response_snapshot` parameter, so adding one in the call at `tests/model_serving/model_server/keda/conftest.py:193–196` would result in a signature mismatch.
If you need to capture or verify response snapshots, please extend the function's signature (and update all its callers); otherwise, you can disregard the original suggestion.

Likely an incorrect or invalid review comment.
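If snapshot verification is wanted, the signature could be extended along these lines. This is a sketch only: the real function at `utils.py:264` issues actual inference requests, whereas `send_request` below is a stand-in, and the optional parameter keeps existing callers working unchanged:

```python
import threading
import time

def run_concurrent_load_for_keda_scaling(isvc, inference_config, num_concurrent,
                                         duration, response_snapshot=None):
    def send_request():
        # Stand-in for the real inference call against the InferenceService.
        return {"model": isvc, **inference_config}

    results, lock = [], threading.Lock()
    deadline = time.monotonic() + duration

    def worker():
        # Keep firing requests until the load window closes.
        while time.monotonic() < deadline:
            response = send_request()
            with lock:
                results.append(response)

    threads = [threading.Thread(target=worker) for _ in range(num_concurrent)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Optional verification hook: only enforced when a snapshot is supplied.
    if response_snapshot is not None:
        assert all(r == response_snapshot for r in results)
    return results
```

Making the new parameter keyword-optional is what lets the existing call sites stay untouched while the KEDA conftest opts in.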
Status of building tag latest: success.
Description
Fixes: https://issues.redhat.com/browse/RHOAIENG-25645
JIRA REQUIREMENTS:
How Has This Been Tested?
Manually tested as follows:
`uv run pytest -k TestOVMSKedaScaling --tc=distribution:upstream --tc=use_unprivileged_client:False` and `uv run pytest -k TestVllmKedaScaling --tc=distribution:upstream --tc=use_unprivileged_client:False`

Merge criteria: