
feat: add keda scaling tests #479

Merged
mwaykole merged 3 commits into opendatahub-io:main from VedantMahabaleshwarkar:keda
Aug 1, 2025

Conversation


@VedantMahabaleshwarkar VedantMahabaleshwarkar commented Jul 31, 2025

Description

Fixes: https://issues.redhat.com/browse/RHOAIENG-25645

JIRA REQUIREMENTS:

  • Ability to deploy KServe InferenceServices with KEDA autoscaling configurations.
    • added
  • Ability to generate controllable load against a KServe InferenceService.
    • added
  • Ability to monitor the scaling behavior (number of pods) of a KServe InferenceService.
    • added
    • but currently disabled until the correct scaling metric is decided
  • Integration with the existing opendatahub-tests framework.
    • added
  • Tests should be automated and runnable in a CI/CD pipeline.
    • added

How Has This Been Tested?

Manually tested as follows:

  • On a ROSA cluster with a GPU node and the latest ODH installed.
  • Added the model S3 environment variables.
  • Ran `uv run pytest -k TestOVMSKedaScaling --tc=distribution:upstream --tc=use_unprivileged_client:False` and `uv run pytest -k TestVllmKedaScaling --tc=distribution:upstream --tc=use_unprivileged_client:False`.

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that the changes work.

@VedantMahabaleshwarkar VedantMahabaleshwarkar requested a review from a team as a code owner July 31, 2025 00:04

coderabbitai Bot commented Jul 31, 2025

📝 Walkthrough

Summary by CodeRabbit

  • New Features

    • Introduced integration tests for KEDA-based autoscaling of inference services, covering both CPU and GPU scenarios.
    • Added new test fixtures and utilities to support KEDA scaling tests, including concurrent load simulation and pod count verification.
    • Added a pytest marker for KEDA-related tests.
    • Introduced a global constant for the Thanos Querier address.
  • Bug Fixes

    • None
  • Refactor

    • Updated utility functions to support autoscaling configuration and improved metrics validation with new parameters.
    • Removed a redundant utility function and updated import paths for consistency.
  • Documentation

    • Updated function docstrings to reflect new parameters and behaviors.

Walkthrough

This change introduces comprehensive support for testing KEDA-based autoscaling of inference services in a Kubernetes environment. It adds new test modules, fixtures, and utility functions for configuring, deploying, and validating KEDA scaling with both CPU and GPU model servers. Several utility and configuration files are updated to support autoscaling and monitoring.

Changes

Cohort / File(s) Change Summary
Pytest Marker Addition
pytest.ini
Adds a keda marker to the pytest configuration for marking KEDA scaling tests.
KEDA Test Fixtures and Utilities
tests/model_serving/model_server/keda/conftest.py
Introduces fixtures and helper functions for setting up and managing KEDA-enabled inference services, including vLLM and OVMS runtimes, GPU support, and concurrent load testing.
KEDA Scaling Tests: OVMS (CPU)
tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py
Adds integration tests for KEDA autoscaling of OVMS inference services, verifying ScaledObject configuration, Prometheus metrics, and pod scaling behavior.
KEDA Scaling Tests: vLLM (GPU)
tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py
Adds integration tests for KEDA autoscaling of vLLM inference services with GPU support, including ScaledObject, metrics, and pod count validation.
Inference Service Testing Utilities
tests/model_serving/model_server/utils.py
Adds utility functions for verifying KEDA ScaledObjects, running concurrent load on inference services, and checking final pod counts after scaling.
Constant Addition
utilities/constants.py
Adds a constant for the Thanos Querier address used in monitoring.
Inference Service Creation Extension
utilities/inference_utils.py
Updates create_isvc to accept an optional auto_scaling parameter for autoscaling configuration.
KEDA ScaledObject Retrieval Utility
utilities/infra.py
Adds a function to retrieve KEDA ScaledObjects associated with an inference service.
Metrics Validation Enhancement
utilities/monitoring.py
Extends metrics validation to support "greater than or equal" checks for metric thresholds.
Test Import and Utility Cleanup
tests/model_serving/model_server/serverless/test_concurrency_auto_scale.py, tests/model_serving/model_server/serverless/utils.py
Removes redundant inference_service_pods_sampler from serverless utils and updates import paths accordingly.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

Suggested labels

size/l

Suggested reviewers

  • mwaykole
  • dbasunag



@github-actions

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, add /wip in a comment. To remove it, comment /wip cancel on the PR.
  • To block merging of a PR, add /hold in a comment. To un-block merging, comment /hold cancel.
  • To mark a PR as approved, add /lgtm in a comment. To remove, add /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified on the PR; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name> on the PR. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, add /build-push-pr-image in a comment. This creates an image tagged
    pr-<pr_number> in the quay repository; the tag is deleted when the PR is merged or closed.
Supported labels

{'/lgtm', '/hold', '/wip', '/cherry-pick', '/build-push-pr-image', '/verified'}


@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 13

🧹 Nitpick comments (6)
utilities/infra.py (1)

1006-1027: Fix return type annotation and documentation inconsistency.

The function implementation is solid, but there are minor documentation and type annotation issues:

  1. The return type is annotated as list[Any] but the function returns a single ScaledObject resource, not a list
  2. The docstring says "A list of all matching ScaledObjects" but only one ScaledObject is returned

Apply this diff to fix the return type and documentation:

-def get_isvc_keda_scaledobject(client: DynamicClient, isvc: InferenceService) -> list[Any]:
+def get_isvc_keda_scaledobject(client: DynamicClient, isvc: InferenceService) -> Any:
     """
     Get KEDA ScaledObject resources associated with an InferenceService.

     Args:
         client (DynamicClient): OCP Client to use.
         isvc (InferenceService): InferenceService object.

     Returns:
-        list[Any]: A list of all matching ScaledObjects
+        Any: The ScaledObject associated with the InferenceService

     Raises:
         ResourceNotFoundError: if no ScaledObjects are found.
     """
tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (2)

22-29: Consider extracting the metrics query to a constant or function.

While the multi-line f-string is readable, consider extracting it to a function that takes namespace and model name as parameters for better reusability and testability.

-OVMS_METRICS_QUERY = (
-    f"sum by (name) (rate(ovms_inference_time_us_sum{{"
-    f"namespace='{OVMS_MODEL_NAMESPACE}', name='{OVMS_MODEL_NAME}'"
-    f"}}[5m])) / "
-    f"sum by (name) (rate(ovms_inference_time_us_count{{"
-    f"namespace='{OVMS_MODEL_NAMESPACE}', name='{OVMS_MODEL_NAME}'"
-    f"}}[5m]))"
-)
+def create_ovms_metrics_query(namespace: str, model_name: str) -> str:
+    """Create OVMS metrics query for average inference time."""
+    return (
+        f"sum by (name) (rate(ovms_inference_time_us_sum{{"
+        f"namespace='{namespace}', name='{model_name}'"
+        f"}}[5m])) / "
+        f"sum by (name) (rate(ovms_inference_time_us_count{{"
+        f"namespace='{namespace}', name='{model_name}'"
+        f"}}[5m]))"
+    )
+
+OVMS_METRICS_QUERY = create_ovms_metrics_query(OVMS_MODEL_NAMESPACE, OVMS_MODEL_NAME)
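Standalone, the suggested helper can be exercised as follows; the namespace and model name are placeholders:

```python
def create_ovms_metrics_query(namespace: str, model_name: str) -> str:
    """Create the OVMS average-inference-time PromQL query for a given model."""
    return (
        f"sum by (name) (rate(ovms_inference_time_us_sum{{"
        f"namespace='{namespace}', name='{model_name}'"
        f"}}[5m])) / "
        f"sum by (name) (rate(ovms_inference_time_us_count{{"
        f"namespace='{namespace}', name='{model_name}'"
        f"}}[5m]))"
    )


# Placeholder values standing in for OVMS_MODEL_NAMESPACE / OVMS_MODEL_NAME.
query = create_ovms_metrics_query(namespace="keda-test", model_name="ovms-model")
print(query)
```

Parameterizing the query this way lets the GPU/vLLM tests reuse the same builder pattern with their own metric names.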

58-59: Consider renaming Generator parameters for clarity.

The parameter names stressed_ovms_keda_inference_service and stressed_keda_vllm_inference_service suggest they are InferenceService objects, but they're actually Generators. Consider using names that better reflect their nature.

For consistency with pytest conventions, you could keep the current names but add a comment explaining that these are generator fixtures that yield InferenceService objects.

Also applies to: 82-83

tests/model_serving/model_server/utils.py (1)

330-331: Add return type annotation.

The function is missing a return type annotation.

-def verify_final_pod_count(unprivileged_client: DynamicClient, isvc: InferenceService, final_pod_count: int):
+def verify_final_pod_count(unprivileged_client: DynamicClient, isvc: InferenceService, final_pod_count: int) -> None:
tests/model_serving/model_server/keda/conftest.py (2)

15-16: Consolidate duplicate imports from utilities.constants.

You have multiple import statements from the same module. Consolidate them into a single import.

-from utilities.constants import (
-    KServeDeploymentType,
-    RuntimeTemplates,
-    Labels,
-)
-from utilities.constants import (
-    ModelAndFormat,
-)
+from utilities.constants import (
+    KServeDeploymentType,
+    RuntimeTemplates,
+    Labels,
+    ModelAndFormat,
+    THANOS_QUERIER_ADDRESS,
+)

And remove line 29:

-from utilities.constants import THANOS_QUERIER_ADDRESS

Also applies to: 24-26


196-196: Fix typo in error message.

"Accelartor" should be "Accelerator".

-        pytest.skip("Accelartor type is not provide,vLLM test can not be run on CPU")
+        pytest.skip("Accelerator type is not provided, vLLM test cannot be run on CPU")
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between baa3472 and 5ae3512.

📒 Files selected for processing (9)
  • pytest.ini (1 hunks)
  • tests/model_serving/model_server/keda/conftest.py (1 hunks)
  • tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (1 hunks)
  • tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py (1 hunks)
  • tests/model_serving/model_server/utils.py (3 hunks)
  • utilities/constants.py (1 hunks)
  • utilities/inference_utils.py (3 hunks)
  • utilities/infra.py (1 hunks)
  • utilities/monitoring.py (3 hunks)
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
utilities/infra.py (1)

Learnt from: Snomaan6846
PR: #328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name volumes_mounts (with underscore), not volume_mounts, for passing volume mount configurations to InferenceService resources.

utilities/inference_utils.py (2)

Learnt from: Snomaan6846
PR: #328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name volumes_mounts (with underscore), not volume_mounts, for passing volume mount configurations to InferenceService resources.

Learnt from: israel-hdez
PR: #346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper create_isvc (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.

tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (2)

Learnt from: adolfo-ab
PR: #334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.

Learnt from: Snomaan6846
PR: #444
File: tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py:48-714
Timestamp: 2025-07-16T12:20:29.672Z
Learning: In tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py, the same get_deployment_config_dict() function is called twice in each pytest.param because different fixtures (mlserver_inference_service and mlserver_serving_runtime) need the same deployment configuration data. This duplication is intentional to provide identical configuration to multiple fixtures.

tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py (2)

Learnt from: Snomaan6846
PR: #444
File: tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py:48-714
Timestamp: 2025-07-16T12:20:29.672Z
Learning: In tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py, the same get_deployment_config_dict() function is called twice in each pytest.param because different fixtures (mlserver_inference_service and mlserver_serving_runtime) need the same deployment configuration data. This duplication is intentional to provide identical configuration to multiple fixtures.

Learnt from: adolfo-ab
PR: #334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.

tests/model_serving/model_server/utils.py (2)

Learnt from: israel-hdez
PR: #346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper create_isvc (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.

Learnt from: Snomaan6846
PR: #328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name volumes_mounts (with underscore), not volume_mounts, for passing volume mount configurations to InferenceService resources.

tests/model_serving/model_server/keda/conftest.py (8)

Learnt from: Snomaan6846
PR: #444
File: tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py:48-714
Timestamp: 2025-07-16T12:20:29.672Z
Learning: In tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py, the same get_deployment_config_dict() function is called twice in each pytest.param because different fixtures (mlserver_inference_service and mlserver_serving_runtime) need the same deployment configuration data. This duplication is intentional to provide identical configuration to multiple fixtures.

Learnt from: israel-hdez
PR: #346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper create_isvc (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.

Learnt from: adolfo-ab
PR: #334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.

Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:595-612
Timestamp: 2025-07-17T15:42:04.167Z
Learning: In tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fixtures) do not require the admin_client parameter or explicit dependencies on related fixtures like db_secret_1, db_pvc_1, and db_service_1, even though the original model_registry_db_deployment fixture includes these parameters.

Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.

Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:579-591
Timestamp: 2025-07-17T15:43:04.876Z
Learning: In tests/model_registry/conftest.py, the db_service_1 and db_service_2 fixtures do not require the admin_client parameter for Service resource creation, despite the existing model_registry_db_service fixture using client=admin_client. This inconsistency was confirmed as intentional by user lugi0.

Learnt from: dbasunag
PR: #401
File: tests/model_registry/rest_api/mariadb/conftest.py:89-110
Timestamp: 2025-07-04T00:17:47.799Z
Learning: In tests/model_registry/rest_api/mariadb/conftest.py, the model_registry_with_mariadb fixture should always use OAUTH_PROXY_CONFIG_DICT for the oauth_proxy parameter regardless of the is_model_registry_oauth parameter value, based on expected product behavior for MariaDB-backed ModelRegistry instances.

Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:0-0
Timestamp: 2025-07-17T15:42:26.275Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated Model Registry instance fixtures) do not require admin_client, db_deployment_1, or db_secret_1 parameters as explicit dependencies, even though these dependencies exist implicitly through the fixture dependency chain.

🧬 Code Graph Analysis (1)
tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (5)
tests/model_serving/model_server/utils.py (2)
  • verify_keda_scaledobject (237-265)
  • verify_final_pod_count (330-342)
utilities/constants.py (3)
  • ModelFormat (12-19)
  • ModelVersion (60-62)
  • RunTimeConfigs (234-238)
utilities/monitoring.py (1)
  • validate_metrics_field (53-95)
tests/conftest.py (2)
  • unprivileged_client (363-403)
  • prometheus (598-606)
tests/model_serving/model_server/keda/conftest.py (1)
  • stressed_ovms_keda_inference_service (159-190)
🪛 GitHub Actions: Tox Tests
tests/model_serving/model_server/utils.py

[error] 17-17: ImportError: cannot import name 'verify_inference_response' from partially initialized module 'tests.model_serving.model_server.utils' due to a circular import. This causes multiple test modules to fail importing this utility.

tests/model_serving/model_server/keda/conftest.py

[error] 20-20: ImportError caused by circular import involving 'verify_inference_response' from 'tests.model_serving.model_server.utils'.

🔇 Additional comments (5)
utilities/constants.py (1)

325-326: LGTM! Well-placed constant for KEDA monitoring integration.

The addition of THANOS_QUERIER_ADDRESS provides a centralized endpoint for Prometheus queries needed by the new KEDA autoscaling tests. The endpoint address is correctly formatted for the OpenShift monitoring service.

pytest.ini (1)

33-33: LGTM! Proper test categorization for KEDA scaling tests.

The new keda marker follows the established pattern and will enable selective execution of KEDA-related autoscaling tests.

utilities/inference_utils.py (1)

580-580: LGTM! Clean integration of autoscaling parameter.

The auto_scaling parameter is well-implemented with proper type annotation, documentation, and follows the established pattern of conditional parameter usage in the function.

Also applies to: 615-615, 665-666

utilities/monitoring.py (1)

59-59: LGTM! Flexible metric validation enhancement.

The greater_than parameter addition is well-implemented with proper type annotation, documentation, and backward compatibility. The float conversion for numerical comparison and updated logging messages make the functionality clear and robust.

Also applies to: 71-71, 84-91
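The comparison logic being praised here can be reduced to a few lines. This is a simplified sketch of the behavior described, not the actual `validate_metrics_field` implementation in utilities/monitoring.py, whose signature and surrounding retry logic differ:

```python
def check_metric_value(actual: str, expected: str, greater_than: bool = False) -> bool:
    """Compare a Prometheus metric value against an expected value.

    Metric values come back from the query API as strings, hence the float
    conversion. With greater_than=True the check passes when actual >= expected;
    with the default it falls back to an exact string match.
    """
    if greater_than:
        return float(actual) >= float(expected)
    return actual == expected


# A load test only needs the metric to cross the threshold, not hit it exactly.
assert check_metric_value("12.5", "10", greater_than=True)
```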

tests/model_serving/model_server/keda/conftest.py (1)

20-23: Ignore false circular import in keda/conftest.py

Verification shows that tests/model_serving/model_server/utils.py does not import from any keda modules, so the import of run_vllm_concurrent_load and run_ovms_concurrent_load in tests/model_serving/model_server/keda/conftest.py does not introduce a cycle. The actual circular dependency is between utils.py and serverless/utils.py, not this conftest file.
No changes needed here—please disregard this suggestion.

Likely an incorrect or invalid review comment.

@VedantMahabaleshwarkar
Contributor Author

Replacing #327 as the branch was damaged by cursor beyond recovery 😢

Signed-off-by: Vedant Mahabaleshwarkar <vmahabal@redhat.com>
@VedantMahabaleshwarkar
Contributor Author

PTAL @mwaykole


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (9)
tests/model_serving/model_server/utils.py (5)

3-3: Circular import issue persists.

The circular import between this module and serverless/utils.py remains unresolved. This will cause import failures.


236-265: LGTM! KEDA ScaledObject verification function looks good.

The function correctly retrieves and validates KEDA ScaledObject configurations. However, there's a minor type conversion issue in the threshold comparison.

Apply this fix for proper type comparison:

-    assert threshold == expected_threshold, f"Threshold {threshold} does not match expected {expected_threshold}"
+    assert str(threshold) == str(expected_threshold), f"Threshold {threshold} does not match expected {expected_threshold}"
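The `str()` cast matters because a threshold read back from a Kubernetes resource spec is typically a string, while the expected value in a test is often an int, so a direct `==` silently fails on type. A quick illustration:

```python
# Threshold as it comes back from a ScaledObject spec (string) versus the
# expected value a test would naturally write down (int).
threshold_from_cluster = "4"
expected_threshold = 4

# Direct comparison is False purely because of the type mismatch.
assert threshold_from_cluster != expected_threshold

# Normalizing both sides to str makes the comparison about the value.
assert str(threshold_from_cluster) == str(expected_threshold)
```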

267-283: Fix docstring parameters.

The docstring contains parameters that don't exist in the function signature.

Apply this fix:

     """
     Run VLLM concurrent load.

     Args:
-        pod_name: Pod name
         isvc: InferenceService instance
-        response_snapshot: Response snapshot
-        chat_query: Chat query
-        completion_query: Completion query
         num_concurrent: Number of concurrent requests
         duration: Duration in seconds to run the load test
     """

301-311: Complete the docstring with missing parameters.

The docstring is missing documentation for num_concurrent and duration parameters.

Apply this fix:

     """
     Run OVMS concurrent load.

     Args:
         isvc: InferenceService instance
+        num_concurrent: Number of concurrent requests
+        duration: Duration in seconds to run the load test
     """

354-367: Fix logic to properly wait for pod count stabilization.

The current implementation exits immediately when any pods are found, regardless of whether the count matches the expected value. This defeats the purpose of waiting for stabilization.

Apply this fix:

     for pods in inference_service_pods_sampler(
         client=unprivileged_client,
         isvc=isvc,
         timeout=Timeout.TIMEOUT_5MIN,
         sleep=10,
     ):
         if pods:
-            assert len(pods) == final_pod_count, (
-                f"Final pod count {len(pods)} does not match expected {final_pod_count}"
-            )
+            if len(pods) == final_pod_count:
+                LOGGER.info(f"Pod count stabilized at {final_pod_count}")
+                return
+            LOGGER.info(f"Current pod count: {len(pods)}, waiting for {final_pod_count}")
+    
+    # If we exit the loop without returning, we've timed out
+    raise TimeoutError(f"Timed out waiting for pod count to reach {final_pod_count}")

Don't forget to add the import:

+from timeout_sampler import TimeoutError
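The stabilization pattern behind this fix can be sketched in isolation. The generator-based `sampler` below is a stand-in for `inference_service_pods_sampler` (which yields pod lists on a schedule), and the sketch uses Python's built-in `TimeoutError` with an explicit deadline rather than the timeout_sampler library:

```python
import time


def wait_for_pod_count(sampler, expected: int, timeout: float = 10.0, sleep: float = 0.1) -> None:
    """Poll pod lists from sampler until the count matches expected, or raise on timeout."""
    deadline = time.monotonic() + timeout
    for pods in sampler:
        if pods and len(pods) == expected:
            return  # pod count stabilized at the expected value
        if time.monotonic() > deadline:
            raise TimeoutError(f"Timed out waiting for pod count to reach {expected}")
        time.sleep(sleep)
    raise TimeoutError(f"Sampler exhausted before pod count reached {expected}")


# Simulated scale-down: three pods, then two, then the expected one.
fake_sampler = iter([["p1", "p2", "p3"], ["p1", "p2"], ["p1"]])
wait_for_pod_count(fake_sampler, expected=1)
```

The key difference from the original code is the same one the review points out: a non-matching pod list keeps the loop polling instead of failing an assertion immediately.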
tests/model_serving/model_server/keda/conftest.py (4)

42-46: Remove unused parameters from docstring.

The docstring mentions parameters that don't exist in the function signature.

Apply this fix:

     """Create KEDA auto-scaling configuration for inference services.

     Args:
         query: The Prometheus query to use for scaling
-        model_name: Name of the model
-        namespace: Kubernetes namespace
         target_value: Target value for the metric

     Returns:
         dict: Auto-scaling configuration
     """

145-150: Remove unused arguments from function call.

The model_name and namespace parameters are not used by the create_keda_auto_scaling_config function.

Apply this fix:

     isvc_kwargs["auto_scaling"] = create_keda_auto_scaling_config(
         query=request.param.get("metrics_query"),
-        model_name=request.param["name"],
-        namespace=model_namespace.name,
         target_value=str(request.param.get("metrics_threshold")),
     )

154-154: Fix undefined variable 'response_snapshot'.

The response_snapshot variable is not defined in this fixture's scope.

Apply this fix:

-        run_vllm_concurrent_load(isvc=isvc, response_snapshot=response_snapshot)
+        run_vllm_concurrent_load(isvc=isvc)

181-186: Remove unused arguments from second function call.

Same issue as the first occurrence - unused parameters being passed.

Apply this fix:

         auto_scaling=create_keda_auto_scaling_config(
             query=request.param["metrics_query"],
-            model_name=model_name,
-            namespace=unprivileged_model_namespace.name,
             target_value=str(request.param["metrics_threshold"]),
         ),
🧹 Nitpick comments (1)
tests/model_serving/model_server/keda/conftest.py (1)

194-196: Fix typo in error message.

There's a spelling error in the skip message.

Apply this fix:

     if not supported_accelerator_type:
-        pytest.skip("Accelartor type is not provide,vLLM test can not be run on CPU")
+        pytest.skip("Accelerator type is not provided, vLLM test cannot be run on CPU")
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5ae3512 and 717d71d.

📒 Files selected for processing (11)
  • pytest.ini (1 hunks)
  • tests/model_serving/model_server/keda/conftest.py (1 hunks)
  • tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (1 hunks)
  • tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py (1 hunks)
  • tests/model_serving/model_server/serverless/test_concurrency_auto_scale.py (1 hunks)
  • tests/model_serving/model_server/serverless/utils.py (0 hunks)
  • tests/model_serving/model_server/utils.py (3 hunks)
  • utilities/constants.py (1 hunks)
  • utilities/inference_utils.py (3 hunks)
  • utilities/infra.py (1 hunks)
  • utilities/monitoring.py (3 hunks)
💤 Files with no reviewable changes (1)
  • tests/model_serving/model_server/serverless/utils.py
✅ Files skipped from review due to trivial changes (3)
  • pytest.ini
  • tests/model_serving/model_server/serverless/test_concurrency_auto_scale.py
  • utilities/constants.py
🚧 Files skipped from review as they are similar to previous changes (5)
  • utilities/monitoring.py
  • utilities/inference_utils.py
  • utilities/infra.py
  • tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py
  • tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
tests/model_serving/model_server/utils.py (3)

Learnt from: israel-hdez
PR: #346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper create_isvc (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.

Learnt from: Snomaan6846
PR: #328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name volumes_mounts (with underscore), not volume_mounts, for passing volume mount configurations to InferenceService resources.

Learnt from: fege
PR: #409
File: tests/model_registry/negative_tests/test_db_migration.py:47-57
Timestamp: 2025-07-09T08:50:22.172Z
Learning: In negative tests for model registry database migration failures (like test_db_migration_negative), the model registry pod is expected to be in a failed state (not Running) when the database is dirty. The test fetches logs from the failed pod to verify the expected error message about dirty database version, so waiting for Running status would be inappropriate.

tests/model_serving/model_server/keda/conftest.py (10)

Learnt from: Snomaan6846
PR: #444
File: tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py:48-714
Timestamp: 2025-07-16T12:20:29.672Z
Learning: In tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py, the same get_deployment_config_dict() function is called twice in each pytest.param because different fixtures (mlserver_inference_service and mlserver_serving_runtime) need the same deployment configuration data. This duplication is intentional to provide identical configuration to multiple fixtures.

Learnt from: adolfo-ab
PR: #334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.

Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:595-612
Timestamp: 2025-07-17T15:42:04.167Z
Learning: In tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fixtures) do not require the admin_client parameter or explicit dependencies on related fixtures like db_secret_1, db_pvc_1, and db_service_1, even though the original model_registry_db_deployment fixture includes these parameters.

Learnt from: israel-hdez
PR: #346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper create_isvc (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.

Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.

Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:579-591
Timestamp: 2025-07-17T15:43:04.876Z
Learning: In tests/model_registry/conftest.py, the db_service_1 and db_service_2 fixtures do not require the admin_client parameter for Service resource creation, despite the existing model_registry_db_service fixture using client=admin_client. This inconsistency was confirmed as intentional by user lugi0.

Learnt from: dbasunag
PR: #401
File: tests/model_registry/rest_api/mariadb/conftest.py:89-110
Timestamp: 2025-07-04T00:17:47.799Z
Learning: In tests/model_registry/rest_api/mariadb/conftest.py, the model_registry_with_mariadb fixture should always use OAUTH_PROXY_CONFIG_DICT for the oauth_proxy parameter regardless of the is_model_registry_oauth parameter value, based on expected product behavior for MariaDB-backed ModelRegistry instances.

Learnt from: Snomaan6846
PR: #328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name volumes_mounts (with underscore), not volume_mounts, for passing volume mount configurations to InferenceService resources.

Learnt from: dbasunag
PR: #429
File: tests/model_registry/rbac/test_mr_rbac_sa.py:45-45
Timestamp: 2025-07-30T14:15:25.605Z
Learning: In tests/model_registry/rbac/test_mr_rbac_sa.py, bounds checking for model_registry_instance_rest_endpoint list access is not needed because upstream fixture validation already ensures endpoints exist before the tests execute. The Model Registry setup process validates endpoint availability, making additional bounds checks redundant.

Learnt from: lugi0
PR: #446
File: tests/model_registry/conftest.py:0-0
Timestamp: 2025-07-17T15:42:26.275Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated Model Registry instance fixtures) do not require admin_client, db_deployment_1, or db_secret_1 parameters as explicit dependencies, even though these dependencies exist implicitly through the fixture dependency chain.

🧬 Code Graph Analysis (1)
tests/model_serving/model_server/utils.py (5)
utilities/constants.py (2)
  • Timeout (212-223)
  • Protocols (91-98)
utilities/inference_utils.py (2)
  • UserInference (141-543)
  • Inference (45-138)
utilities/infra.py (2)
  • get_isvc_keda_scaledobject (1006-1026)
  • get_pods_by_isvc_label (565-592)
tests/model_serving/conftest.py (1)
  • protocol (26-36)
tests/conftest.py (1)
  • unprivileged_client (363-403)
🔇 Additional comments (3)
tests/model_serving/model_server/keda/conftest.py (3)

73-93: LGTM! Well-structured vLLM serving runtime fixture.

The fixture properly configures the vLLM serving runtime with appropriate accelerator detection and template mapping.


95-156: LGTM! Comprehensive KEDA vLLM inference service fixture.

The fixture properly sets up GPU resources, runtime arguments, and scaling configuration. The logic for handling multi-GPU setups and quantization is well implemented.


158-191: LGTM! OVMS KEDA inference service fixture looks good.

The fixture correctly configures OVMS with KEDA autoscaling and waits for readiness before running load tests.

mwaykole
mwaykole previously approved these changes Jul 31, 2025
Signed-off-by: Vedant Mahabaleshwarkar <vmahabal@redhat.com>
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
tests/model_serving/model_server/keda/conftest.py (4)

42-46: Remove unused parameters from docstring.

The model_name and namespace parameters are not used in the function but are mentioned in the docstring.


145-150: Remove unused arguments from create_keda_auto_scaling_config calls.

The model_name and namespace parameters are not used by the function.


154-158: Fix undefined variable 'response_snapshot'.

The response_snapshot variable is not defined in this scope. It should be passed as a parameter or removed if not needed.


185-190: Remove unused arguments from create_keda_auto_scaling_config calls.

The model_name and namespace parameters are not used by the function.

🧹 Nitpick comments (1)
tests/model_serving/model_server/keda/conftest.py (1)

200-203: Fix typo in skip message.

There's a spelling error in the skip message.

-        pytest.skip("Accelartor type is not provide,vLLM test can not be run on CPU")
+        pytest.skip("Accelerator type is not provided, vLLM test cannot be run on CPU")
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 717d71d and 5a5172a.

📒 Files selected for processing (4)
  • tests/model_serving/model_server/keda/conftest.py (1 hunks)
  • tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py (1 hunks)
  • tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py (1 hunks)
  • tests/model_serving/model_server/utils.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/model_serving/model_server/utils.py
  • tests/model_serving/model_server/keda/test_isvc_keda_scaling_cpu.py
  • tests/model_serving/model_server/keda/test_isvc_keda_scaling_gpu.py
🧰 Additional context used
🧠 Learnings (13)
📓 Common learnings
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
📚 Learning: in tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deplo...
Learnt from: Snomaan6846
PR: opendatahub-io/opendatahub-tests#444
File: tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py:48-714
Timestamp: 2025-07-16T12:20:29.672Z
Learning: In tests/model_serving/model_runtime/mlserver/basic_model_deployment/test_mlserver_basic_model_deployment.py, the same get_deployment_config_dict() function is called twice in each pytest.param because different fixtures (mlserver_inference_service and mlserver_serving_runtime) need the same deployment configuration data. This duplication is intentional to provide identical configuration to multiple fixtures.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: for trustyai image validation tests: operator image tests require admin_client, related_images_refs,...
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fi...
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:595-612
Timestamp: 2025-07-17T15:42:04.167Z
Learning: In tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fixtures) do not require the admin_client parameter or explicit dependencies on related fixtures like db_secret_1, db_pvc_1, and db_service_1, even though the original model_registry_db_deployment fixture includes these parameters.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: the helper `create_isvc` (used in tests/model_serving utilities) already waits until the created inf...
Learnt from: israel-hdez
PR: opendatahub-io/opendatahub-tests#346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper `create_isvc` (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fix...
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/conftest.py, the db_service_1 and db_service_2 fixtures do not require the a...
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:579-591
Timestamp: 2025-07-17T15:43:04.876Z
Learning: In tests/model_registry/conftest.py, the db_service_1 and db_service_2 fixtures do not require the admin_client parameter for Service resource creation, despite the existing model_registry_db_service fixture using client=admin_client. This inconsistency was confirmed as intentional by user lugi0.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/rest_api/mariadb/conftest.py, the model_registry_with_mariadb fixture should...
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#401
File: tests/model_registry/rest_api/mariadb/conftest.py:89-110
Timestamp: 2025-07-04T00:17:47.799Z
Learning: In tests/model_registry/rest_api/mariadb/conftest.py, the model_registry_with_mariadb fixture should always use OAUTH_PROXY_CONFIG_DICT for the oauth_proxy parameter regardless of the is_model_registry_oauth parameter value, based on expected product behavior for MariaDB-backed ModelRegistry instances.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated m...
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:0-0
Timestamp: 2025-07-17T15:42:26.275Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated Model Registry instance fixtures) do not require admin_client, db_deployment_1, or db_secret_1 parameters as explicit dependencies, even though these dependencies exist implicitly through the fixture dependency chain.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: the create_isvc function in utilities/inference_utils.py uses the parameter name `volumes_mounts` (w...
Learnt from: Snomaan6846
PR: opendatahub-io/opendatahub-tests#328
File: tests/model_serving/model_runtime/mlserver/conftest.py:182-189
Timestamp: 2025-06-16T05:39:15.070Z
Learning: The create_isvc function in utilities/inference_utils.py uses the parameter name `volumes_mounts` (with underscore), not `volume_mounts`, for passing volume mount configurations to InferenceService resources.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: in tests/model_registry/rbac/test_mr_rbac_sa.py, bounds checking for model_registry_instance_rest_en...
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#429
File: tests/model_registry/rbac/test_mr_rbac_sa.py:45-45
Timestamp: 2025-07-30T14:15:25.605Z
Learning: In tests/model_registry/rbac/test_mr_rbac_sa.py, bounds checking for model_registry_instance_rest_endpoint list access is not needed because upstream fixture validation already ensures endpoints exist before the tests execute. The Model Registry setup process validates endpoint availability, making additional bounds checks redundant.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: in the opendatahub-tests repository, prefer keeping test parameterization configurations inline rath...
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#338
File: tests/model_registry/rbac/test_mr_rbac.py:24-53
Timestamp: 2025-06-06T12:22:57.057Z
Learning: In the opendatahub-tests repository, prefer keeping test parameterization configurations inline rather than extracting them to separate variables/constants, as it makes triaging easier by avoiding the need to jump between different parts of the file to understand the test setup.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
📚 Learning: in the opendatahub-tests repository, the oauth cluster configuration's identityproviders field shoul...
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#354
File: tests/model_registry/rbac/conftest.py:212-224
Timestamp: 2025-06-16T13:00:51.478Z
Learning: In the opendatahub-tests repository, the OAuth cluster configuration's identityProviders field should not be empty/None and doesn't require defensive programming checks when concatenating with lists.

Applied to files:

  • tests/model_serving/model_server/keda/conftest.py
🔇 Additional comments (3)
tests/model_serving/model_server/keda/conftest.py (3)

74-92: LGTM!

The fixture correctly creates a vLLM serving runtime with appropriate template selection based on accelerator type and proper fallback handling.


206-208: LGTM!

The fixture correctly configures the JSON snapshot extension for response validation.


193-196: Response snapshot parameter is not supported in run_concurrent_load_for_keda_scaling

The function defined in tests/model_serving/model_server/utils.py:264 only accepts:

  • isvc
  • inference_config
  • num_concurrent
  • duration

It does not have a response_snapshot parameter, so adding one in the call at tests/model_serving/model_server/keda/conftest.py:193–196 would result in a signature mismatch.
If you need to capture or verify response snapshots, please extend the function’s signature (and update all its callers); otherwise, you can disregard the original suggestion.

Likely an incorrect or invalid review comment.
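The mismatch is easy to reproduce against a stub that mirrors the documented signature (a sketch; only the four parameters listed above are taken from the review, and the default values are invented for illustration):

```python
def run_concurrent_load_for_keda_scaling(isvc, inference_config, num_concurrent=4, duration=60):
    # Stub mirroring only the documented parameters; the real helper lives in
    # tests/model_serving/model_server/utils.py.
    return {"isvc": isvc, "workers": num_concurrent, "duration": duration}


# Passing the extra keyword fails before any load is generated:
try:
    run_concurrent_load_for_keda_scaling(isvc=None, inference_config={}, response_snapshot={})
except TypeError as exc:
    print(f"signature mismatch: {exc}")
```

Extending the signature (and updating every caller) is the right fix if snapshots are genuinely needed; otherwise dropping the argument, as suggested, is the simpler change.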

@mwaykole mwaykole enabled auto-merge (squash) August 1, 2025 08:15
@mwaykole mwaykole disabled auto-merge August 1, 2025 08:15
@mwaykole mwaykole merged commit b1a2d7f into opendatahub-io:main Aug 1, 2025
8 checks passed
@github-actions

github-actions Bot commented Aug 1, 2025

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.

@coderabbitai coderabbitai Bot mentioned this pull request Aug 6, 2025
