feat: add model car (OCI image) deployment tests #1162
Jooho wants to merge 6 commits into opendatahub-io:main from
Conversation
The following are automatically added/executed:
Available user actions:
Supported labels: /lgtm, /verified, /cherry-pick, /wip, /hold, /build-push-pr-image
/hold wait for opendatahub-io/odh-model-controller#703
📝 Walkthrough

Adds MLServer “model car” (OCI image) e2e test support: new pytest fixture and parameterized tests, multiple MLServer model snapshots, utilities and constants to generate OCI storage/namespace/configs, and deployment-mode handling adjustments. Includes new README and snapshot artifacts.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Actionable comments posted: 4
🧹 Nitpick comments (1)
tests/model_serving/model_runtime/mlserver/model_car/test_mlserver_model_car.py (1)
119-125: Select a ready predictor pod instead of the first listed pod.

Line 125 uses `pods[0]`, which is non-deterministic during restarts/rollouts and can produce flaky inference failures.

Proposed fix:

```diff
 pods = get_pods_by_isvc_label(
     client=mlserver_model_car_inference_service.client,
     isvc=mlserver_model_car_inference_service,
 )
 if not pods:
     raise RuntimeError(f"No pods found for InferenceService {mlserver_model_car_inference_service.name}")
-pod = pods[0]
+ready_pods = [
+    p
+    for p in pods
+    if any(
+        cs.ready
+        for cs in (getattr(p.instance.status, "containerStatuses", None) or [])
+    )
+]
+if not ready_pods:
+    raise RuntimeError(
+        f"No ready pods found for InferenceService {mlserver_model_car_inference_service.name}"
+    )
+pod = ready_pods[0]
```

As per coding guidelines, "REVIEW PRIORITIES: 3. Bug-prone patterns and error handling gaps".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_serving/model_runtime/mlserver/model_car/test_mlserver_model_car.py` around lines 119 - 125, The test currently picks pods[0] which is non-deterministic; change it to choose a ready predictor pod from the list returned by get_pods_by_isvc_label (mlserver_model_car_inference_service) by filtering pods for readiness (e.g., pod.status.phase == "Running" and pod.status.conditions contains condition type "Ready" == "True" or a container_status with ready==True) and/or label identifying the predictor container, then assign that ready pod to pod; if no ready predictor pod is found, raise a clear RuntimeError stating no ready predictor pod for the InferenceService instead of using pods[0].
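The readiness filter in the proposed fix can be exercised in isolation. The sketch below uses plain dicts shaped like the fields the fix inspects (`status.containerStatuses[].ready`); the dict layout and the helper name `pick_ready_pod` are illustrative assumptions, not the kubernetes client's actual objects:

```python
def pick_ready_pod(pods: list[dict]) -> dict:
    """Return the first pod that reports at least one ready container.

    `pods` here are plain dicts mirroring status.containerStatuses[].ready;
    the real fixture receives kubernetes client objects instead.
    """
    ready_pods = [
        pod
        for pod in pods
        if any(
            container.get("ready")
            for container in pod.get("status", {}).get("containerStatuses", [])
        )
    ]
    if not ready_pods:
        raise RuntimeError("No ready pods found for InferenceService")
    return ready_pods[0]


pods = [
    {"name": "predictor-restarting", "status": {"containerStatuses": [{"ready": False}]}},
    {"name": "predictor-ready", "status": {"containerStatuses": [{"ready": True}]}},
]
print(pick_ready_pod(pods)["name"])  # selects the ready pod, not pods[0]
```

Because a pod with an empty or missing `containerStatuses` list is simply filtered out, the helper raises rather than silently returning an unready pod.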
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/model_serving/model_runtime/mlserver/conftest.py`:
- Around line 169-173: The test fixture currently allows storage_uri to be None
which delays the failure to create_isvc and obscures the real error; add an
explicit validation right after storage_uri = params.get("storage-uri") to raise
a ValueError (or similar) when storage_uri is missing/empty, mirroring the
existing model_format check so callers fail fast before invoking create_isvc.
- Around line 170-184: The code reads params["deployment-mode"] into
deployment_mode which misses callers using "deployment_type" and can silently
mis-handle Standard deployments; change the lookup to normalize both keys (e.g.,
check "deployment-mode" then "deployment_type" or map/alias them) before passing
deployment_mode into create_isvc, and change the wait_for_predictor_pods default
in the create_isvc call to a deterministic True (use
params.get("wait_for_predictor_pods", True)) so pod readiness is awaited; update
references to the variables deployment_mode and the create_isvc call sites
accordingly.
In `@tests/model_serving/model_runtime/mlserver/utils.py`:
- Around line 206-220: Validate model_format_name before using getattr: ensure
model_format_name is a non-empty string and that ModelCarImage has the dynamic
attribute f"MLSERVER_{model_format_name.upper()}" (use hasattr or catch
AttributeError) and if missing raise a clear ValueError mentioning the
unsupported model_format_name and the expected constant name; then proceed to
fetch the storage_uri via getattr(ModelCarImage, ...) and build the config as
before.
In `@utilities/constants.py`:
- Around line 300-304: Constants MLSERVER_SKLEARN, MLSERVER_XGBOOST,
MLSERVER_LIGHTGBM currently point to a personal quay.io namespace and
MLSERVER_ONNX is empty; replace these hardcoded URIs with
organization-owned/mirrored registry URIs (e.g., change quay.io/jooholee/... to
the org mirror like quay.io/opendatahub-io/...) or make them configurable via
environment variables and fall back to sensible defaults, and ensure
MLSERVER_ONNX is populated before use (or add a runtime guard in the code that
references MLSERVER_ONNX to raise a clear error if it is empty). Update the
constants MLSERVER_SKLEARN, MLSERVER_XGBOOST, MLSERVER_LIGHTGBM, and
MLSERVER_ONNX and/or add validation logic where these symbols are consumed to
prevent CI/production breakage if the values are missing or still pointing at
personal namespaces.
---
Nitpick comments:
In `@tests/model_serving/model_runtime/mlserver/model_car/test_mlserver_model_car.py`:
- Around line 119-125: The test currently picks pods[0] which is
non-deterministic; change it to choose a ready predictor pod from the list
returned by get_pods_by_isvc_label (mlserver_model_car_inference_service) by
filtering pods for readiness (e.g., pod.status.phase == "Running" and
pod.status.conditions contains condition type "Ready" == "True" or a
container_status with ready==True) and/or label identifying the predictor
container, then assign that ready pod to pod; if no ready predictor pod is
found, raise a clear RuntimeError stating no ready predictor pod for the
InferenceService instead of using pods[0].
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: cf351258-cc36-4c0f-9b97-1de88a52e681
📒 Files selected for processing (11)
- tests/model_serving/model_runtime/mlserver/conftest.py
- tests/model_serving/model_runtime/mlserver/model_car/__init__.py
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[lightgbm-raw-deployment-modelcar].json
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[lightgbm-raw-deployment-modelcar_text_type].json
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[sklearn-raw-deployment-modelcar].json
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[xgboost-raw-deployment-modelcar].json
- tests/model_serving/model_runtime/mlserver/model_car/test_mlserver_model_car.py
- tests/model_serving/model_runtime/mlserver/utils.py
- utilities/constants.py
- utilities/general.py
- utilities/inference_utils.py
```python
storage_uri = params.get("storage-uri")
deployment_mode = params.get("deployment-mode", KServeDeploymentType.RAW_DEPLOYMENT)
model_format = params.get("model-format")
if not model_format:
    raise ValueError("model-format is required in params")
```
Fail fast when storage_uri is missing.

Line 169 allows `storage_uri=None` and defers failure to `create_isvc`, which obscures the root cause.

Proposed fix:

```diff
-    storage_uri = params.get("storage-uri")
+    storage_uri = params.get("storage_uri") or params.get("storage-uri")
     deployment_mode = params.get("deployment-mode", KServeDeploymentType.RAW_DEPLOYMENT)
     model_format = params.get("model-format")
+    if not storage_uri:
+        raise ValueError("storage_uri (or storage-uri) is required in params")
     if not model_format:
         raise ValueError("model-format is required in params")
```

As per coding guidelines, "REVIEW PRIORITIES: 3. Bug-prone patterns and error handling gaps".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/model_serving/model_runtime/mlserver/conftest.py` around lines 169 -
173, The test fixture currently allows storage_uri to be None which delays the
failure to create_isvc and obscures the real error; add an explicit validation
right after storage_uri = params.get("storage-uri") to raise a ValueError (or
similar) when storage_uri is missing/empty, mirroring the existing model_format
check so callers fail fast before invoking create_isvc.
```python
deployment_mode = params.get("deployment-mode", KServeDeploymentType.RAW_DEPLOYMENT)
model_format = params.get("model-format")
if not model_format:
    raise ValueError("model-format is required in params")

with create_isvc(
    client=admin_client,
    name=f"{model_format}-modelcar",
    namespace=model_namespace.name,
    runtime=mlserver_serving_runtime.name,
    storage_uri=storage_uri,
    model_format=model_format,
    deployment_mode=deployment_mode,
    external_route=params.get("enable_external_route"),
    wait_for_predictor_pods=params.get("wait_for_predictor_pods", False),
```
Normalize deployment-mode keys and avoid non-deterministic pod readiness defaults.

Line 170 uses `deployment-mode`, but related fixtures consume `deployment_type`; this can silently ignore Standard mode inputs. Also, Line 184 defaults `wait_for_predictor_pods` to `False`, which makes the test path race-prone.

Proposed fix:

```diff
-    deployment_mode = params.get("deployment-mode", KServeDeploymentType.RAW_DEPLOYMENT)
+    deployment_mode = (
+        params.get("deployment_type")
+        or params.get("deployment-mode")
+        or KServeDeploymentType.RAW_DEPLOYMENT
+    )
@@
-        external_route=params.get("enable_external_route"),
-        wait_for_predictor_pods=params.get("wait_for_predictor_pods", False),
+        external_route=params.get("enable_external_route", False),
+        wait_for_predictor_pods=params.get("wait_for_predictor_pods", True),
```

As per coding guidelines, "REVIEW PRIORITIES: 2. Architectural issues and anti-patterns" and "3. Bug-prone patterns and error handling gaps".
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
deployment_mode = (
    params.get("deployment_type")
    or params.get("deployment-mode")
    or KServeDeploymentType.RAW_DEPLOYMENT
)
model_format = params.get("model-format")
if not model_format:
    raise ValueError("model-format is required in params")

with create_isvc(
    client=admin_client,
    name=f"{model_format}-modelcar",
    namespace=model_namespace.name,
    runtime=mlserver_serving_runtime.name,
    storage_uri=storage_uri,
    model_format=model_format,
    deployment_mode=deployment_mode,
    external_route=params.get("enable_external_route", False),
    wait_for_predictor_pods=params.get("wait_for_predictor_pods", True),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/model_serving/model_runtime/mlserver/conftest.py` around lines 170 -
184, The code reads params["deployment-mode"] into deployment_mode which misses
callers using "deployment_type" and can silently mis-handle Standard
deployments; change the lookup to normalize both keys (e.g., check
"deployment-mode" then "deployment_type" or map/alias them) before passing
deployment_mode into create_isvc, and change the wait_for_predictor_pods default
in the create_isvc call to a deterministic True (use
params.get("wait_for_predictor_pods", True)) so pod readiness is awaited; update
references to the variables deployment_mode and the create_isvc call sites
accordingly.
```python
if modelcar:
    from utilities.constants import ModelCarImage

    # Get OCI image URI from ModelCarImage constant
    storage_uri = getattr(ModelCarImage, f"MLSERVER_{model_format_name.upper()}")


def get_model_namespace_dict(model_format_name: str, deployment_type: str) -> dict[str, str]:
    config: dict[str, Any] = {
        "storage-uri": storage_uri,
        "model-format": model_format_name,
    }

    if env_variables:
        config["model_env_variables"] = env_variables

    return config
```
Missing validation for `model_format_name` before dynamic attribute lookup.

`getattr(ModelCarImage, f"MLSERVER_{model_format_name.upper()}")` will raise `AttributeError` with a cryptic message if `model_format_name` doesn't have a corresponding constant. Add validation or a meaningful error.

Proposed fix:
```diff
 if modelcar:
     from utilities.constants import ModelCarImage
-    # Get OCI image URI from ModelCarImage constant
-    storage_uri = getattr(ModelCarImage, f"MLSERVER_{model_format_name.upper()}")
+    # Get OCI image URI from ModelCarImage constant
+    attr_name = f"MLSERVER_{model_format_name.upper()}"
+    if not hasattr(ModelCarImage, attr_name):
+        raise ValueError(f"Unsupported model format for modelcar: {model_format_name}. "
+                         f"No ModelCarImage.{attr_name} constant defined.")
+    storage_uri = getattr(ModelCarImage, attr_name)
+    if not storage_uri:
+        raise ValueError(f"ModelCarImage.{attr_name} is empty. OCI URI not yet configured.")
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
if modelcar:
    from utilities.constants import ModelCarImage
    # Get OCI image URI from ModelCarImage constant
    attr_name = f"MLSERVER_{model_format_name.upper()}"
    if not hasattr(ModelCarImage, attr_name):
        raise ValueError(f"Unsupported model format for modelcar: {model_format_name}. "
                         f"No ModelCarImage.{attr_name} constant defined.")
    storage_uri = getattr(ModelCarImage, attr_name)
    if not storage_uri:
        raise ValueError(f"ModelCarImage.{attr_name} is empty. OCI URI not yet configured.")

config: dict[str, Any] = {
    "storage-uri": storage_uri,
    "model-format": model_format_name,
}

if env_variables:
    config["model_env_variables"] = env_variables

return config
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/model_serving/model_runtime/mlserver/utils.py` around lines 206 - 220,
Validate model_format_name before using getattr: ensure model_format_name is a
non-empty string and that ModelCarImage has the dynamic attribute
f"MLSERVER_{model_format_name.upper()}" (use hasattr or catch AttributeError)
and if missing raise a clear ValueError mentioning the unsupported
model_format_name and the expected constant name; then proceed to fetch the
storage_uri via getattr(ModelCarImage, ...) and build the config as before.
utilities/constants.py
Outdated
```python
# MLServer model car images - update URIs when images are available
MLSERVER_SKLEARN: str = "oci://quay.io/jooholee/mlserver-sklearn@sha256:e0b0b1d17ae852bccccce35df7b0126d24f04626bb7fa41e9f19039d6b285340"
MLSERVER_XGBOOST: str = "oci://quay.io/jooholee/mlserver-xgboost@sha256:de50a0cd9c36fe300b3bffe25875daef886da9b0d0f8b5776a3b5395860ec785"
MLSERVER_LIGHTGBM: str = "oci://quay.io/jooholee/mlserver-lightgbm@sha256:ce80860b4427348f580f95f9826b1fdfb105e4240eacf69fb143883f4aa667f4"
MLSERVER_ONNX: str = ""  # TODO: Add OCI URI when image ready
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if any code currently references MLSERVER_ONNX
rg -n "MLSERVER_ONNX" --type=py
```

Repository: opendatahub-io/opendatahub-tests
Length of output: 168

🏁 Script executed:

```shell
rg -n "quay\.io|registry|OCI.*URI" --type=py -A 1 -B 1
```

Repository: opendatahub-io/opendatahub-tests
Length of output: 50389
Personal quay.io namespace creates supply-chain and availability risk.
Images in quay.io/jooholee/ (MLSERVER_SKLEARN, MLSERVER_XGBOOST, MLSERVER_LIGHTGBM) and similar personal namespaces like quay.io/mwaykole/ are not suitable for production or CI pipelines. If the personal account is deleted or access revoked, these images become unavailable. Mirror to an organization-owned registry (e.g., quay.io/opendatahub-io/) before these constants are actively used in tests.
The empty MLSERVER_ONNX value requires population before any code path references it. The TODO comment indicates awareness, but this must be resolved before the constant is used.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@utilities/constants.py` around lines 300 - 304, Constants MLSERVER_SKLEARN,
MLSERVER_XGBOOST, MLSERVER_LIGHTGBM currently point to a personal quay.io
namespace and MLSERVER_ONNX is empty; replace these hardcoded URIs with
organization-owned/mirrored registry URIs (e.g., change quay.io/jooholee/... to
the org mirror like quay.io/opendatahub-io/...) or make them configurable via
environment variables and fall back to sensible defaults, and ensure
MLSERVER_ONNX is populated before use (or add a runtime guard in the code that
references MLSERVER_ONNX to raise a clear error if it is empty). Update the
constants MLSERVER_SKLEARN, MLSERVER_XGBOOST, MLSERVER_LIGHTGBM, and
MLSERVER_ONNX and/or add validation logic where these symbols are consumed to
prevent CI/production breakage if the values are missing or still pointing at
personal namespaces.
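The env-var fallback the prompt suggests could look like the following sketch. The variable naming scheme (`MODELCAR_IMAGE_<FORMAT>`) and the helper name `resolve_modelcar_uri` are assumptions for illustration, not code from this PR:

```python
import os


def resolve_modelcar_uri(model_format: str, defaults: dict[str, str]) -> str:
    """Resolve an OCI image URI: env override first, then the default constant.

    Raises ValueError when neither source yields a non-empty URI (e.g. the
    empty MLSERVER_ONNX placeholder), so misconfiguration fails early
    instead of mid-deployment.
    """
    env_key = f"MODELCAR_IMAGE_{model_format.upper()}"
    uri = os.environ.get(env_key) or defaults.get(model_format, "")
    if not uri:
        raise ValueError(
            f"No OCI image URI configured for {model_format!r}: "
            f"set {env_key} or populate the default constant"
        )
    return uri


# Hypothetical defaults mirroring the constants under review
DEFAULT_IMAGES = {
    "sklearn": "oci://quay.io/opendatahub-io/mlserver-sklearn:example",
    "onnx": "",  # still empty, as in the PR
}
print(resolve_modelcar_uri("sklearn", DEFAULT_IMAGES))
```

With this shape, CI can point the tests at an organization-owned mirror via environment variables without editing `utilities/constants.py`, and the empty ONNX entry fails loudly the moment it is consumed.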
- Add MLServer model car tests for sklearn, xgboost, and lightgbm using OCI images.
- Add mlserver_model_car_inference_service fixture with env variable support.
- Add Standard deployment mode support in constants and utils

Signed-off-by: Jooho Lee <jlee@redhat.com>
force-pushed from 2cc3167 to 56d9f09
force-pushed from 56d9f09 to 84b53ab
for more information, see https://pre-commit.ci Signed-off-by: Jooho Lee <jlee@redhat.com>
Signed-off-by: Jooho Lee <jlee@redhat.com>
Signed-off-by: Jooho Lee <jlee@redhat.com>
Signed-off-by: Jooho Lee <jlee@redhat.com>
force-pushed from b23d7ef to a875c9c
for more information, see https://pre-commit.ci
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/model_serving/model_runtime/mlserver/utils.py (1)
272-275: ⚠️ Potential issue | 🟠 Major

Fail fast for unsupported deployment type instead of returning empty config.

Lines 272-275 return `{}` for unknown `deployment_type`, which defers errors and obscures root cause. Raise `ValueError` immediately with supported options.

Proposed fix:

```diff
 def get_deployment_config_dict(
     model_format_name: str,
     deployment_type: str = RAW_DEPLOYMENT_TYPE,
 ) -> dict[str, str]:
@@
-    deployment_config_dict = {}
-
-    if deployment_type == RAW_DEPLOYMENT_TYPE:
-        deployment_config_dict = {"name": model_format_name, **BASE_RAW_DEPLOYMENT_CONFIG}
-
-    return deployment_config_dict
+    if deployment_type == RAW_DEPLOYMENT_TYPE:
+        return {"name": model_format_name, **BASE_RAW_DEPLOYMENT_CONFIG}
+
+    raise ValueError(
+        f"Unsupported deployment_type: {deployment_type}. "
+        f"Supported values: ({RAW_DEPLOYMENT_TYPE},)"
+    )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_serving/model_runtime/mlserver/utils.py` around lines 272 - 275, The code currently falls through and returns an empty deployment_config_dict when deployment_type is not recognized; change this to fail fast by raising a ValueError that names the unsupported deployment_type and lists the supported options (e.g., RAW_DEPLOYMENT_TYPE and any others expected) instead of returning {} — update the branch around RAW_DEPLOYMENT_TYPE/model_format_name/BASE_RAW_DEPLOYMENT_CONFIG in tests/model_serving/model_runtime/mlserver/utils.py to raise ValueError with a clear message when deployment_type is unknown.
♻️ Duplicate comments (1)
tests/model_serving/model_runtime/mlserver/utils.py (1)
206-211: ⚠️ Potential issue | 🟡 Minor

Validate `ModelCarImage` constant existence before dynamic lookup.

Line 210 does an unchecked dynamic `getattr`, so unsupported `model_format_name` fails with an opaque `AttributeError`. Validate and raise a clear `ValueError` that includes the expected constant name.

Proposed fix:

```diff
 if modelcar:
     from utilities.constants import ModelCarImage
-    # Get OCI image URI from ModelCarImage constant
-    storage_uri = getattr(ModelCarImage, f"MLSERVER_{model_format_name.upper()}")
+    # Get OCI image URI from ModelCarImage constant
+    attr_name = f"MLSERVER_{model_format_name.upper()}"
+    if not model_format_name.strip():
+        raise ValueError("model_format_name must be a non-empty string")
+    if not hasattr(ModelCarImage, attr_name):
+        raise ValueError(
+            f"Unsupported model format for modelcar: {model_format_name}. "
+            f"Expected ModelCarImage.{attr_name}"
+        )
+    storage_uri = getattr(ModelCarImage, attr_name)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_serving/model_runtime/mlserver/utils.py` around lines 206 - 211, The dynamic lookup using getattr(ModelCarImage, f"MLSERVER_{model_format_name.upper()}") can raise an opaque AttributeError for unsupported formats; before calling getattr in the modelcar branch, compute const_name = f"MLSERVER_{model_format_name.upper()}" and check hasattr(ModelCarImage, const_name); if missing, raise a clear ValueError referencing model_format_name and the expected constant name (const_name), otherwise assign storage_uri = getattr(ModelCarImage, const_name).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@tests/model_serving/model_runtime/mlserver/utils.py`:
- Around line 272-275: The code currently falls through and returns an empty
deployment_config_dict when deployment_type is not recognized; change this to
fail fast by raising a ValueError that names the unsupported deployment_type and
lists the supported options (e.g., RAW_DEPLOYMENT_TYPE and any others expected)
instead of returning {} — update the branch around
RAW_DEPLOYMENT_TYPE/model_format_name/BASE_RAW_DEPLOYMENT_CONFIG in
tests/model_serving/model_runtime/mlserver/utils.py to raise ValueError with a
clear message when deployment_type is unknown.
---
Duplicate comments:
In `@tests/model_serving/model_runtime/mlserver/utils.py`:
- Around line 206-211: The dynamic lookup using getattr(ModelCarImage,
f"MLSERVER_{model_format_name.upper()}") can raise an opaque AttributeError for
unsupported formats; before calling getattr in the modelcar branch, compute
const_name = f"MLSERVER_{model_format_name.upper()}" and check
hasattr(ModelCarImage, const_name); if missing, raise a clear ValueError
referencing model_format_name and the expected constant name (const_name),
otherwise assign storage_uri = getattr(ModelCarImage, const_name).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 38c6c746-6970-4469-8180-07916196ebad
📒 Files selected for processing (13)
- tests/model_serving/model_runtime/mlserver/conftest.py
- tests/model_serving/model_runtime/mlserver/model_car/README.md
- tests/model_serving/model_runtime/mlserver/model_car/__init__.py
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[lightgbm-raw-deployment-modelcar].json
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[lightgbm-raw-deployment-modelcar_text_type].json
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[onnx-raw-deployment-modelcar].json
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[sklearn-raw-deployment-modelcar].json
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[xgboost-raw-deployment-modelcar].json
- tests/model_serving/model_runtime/mlserver/model_car/test_mlserver_model_car.py
- tests/model_serving/model_runtime/mlserver/utils.py
- utilities/constants.py
- utilities/general.py
- utilities/inference_utils.py
✅ Files skipped from review due to trivial changes (8)
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[xgboost-raw-deployment-modelcar].json
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[lightgbm-raw-deployment-modelcar].json
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[lightgbm-raw-deployment-modelcar_text_type].json
- tests/model_serving/model_runtime/mlserver/model_car/README.md
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[onnx-raw-deployment-modelcar].json
- tests/model_serving/model_runtime/mlserver/conftest.py
- tests/model_serving/model_runtime/mlserver/model_car/test_mlserver_model_car.py
- utilities/constants.py
🚧 Files skipped from review as they are similar to previous changes (2)
- utilities/general.py
- tests/model_serving/model_runtime/mlserver/model_car/__snapshots__/test_mlserver_model_car/TestMLServerModelCar.test_mlserver_model_car_inference[sklearn-raw-deployment-modelcar].json
Pull Request
Summary
Related Issues
How it has been tested
Additional Requirements
Summary by CodeRabbit
New Features
Tests
Documentation