Add negative test for modelServer #1152
Signed-off-by: Milind waykole <mwaykole@redhat.com>
Made-with: Cursor
📝 Walkthrough

This PR refactors KServe negative test fixtures to use package-scoped resources instead of per-test/per-class scopes, introduces four new negative test modules for validating error handling (invalid model names, malformed JSON, missing fields, wrong data types), and updates test utilities with a revised inference request function and a pod health assertion helper.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/model_serving/model_server/kserve/negative/utils.py (1)
88-97: ⚠️ Potential issue | 🟠 Major: Add timeout flags to the curl command to prevent test hangs.
The curl invocation lacks `--connect-timeout` and `--max-time` flags, so transient networking issues can stall the test process indefinitely.

🔧 Suggested patch:

```diff
 cmd = (
     f"curl -s -w '\\n%{{http_code}}' "
+    f"--connect-timeout 10 --max-time 30 "
     f"-X POST {endpoint} "
     f"-H 'Content-Type: {content_type}' "
     f"--data-raw {shlex.quote(body)} "
     f"--insecure"
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_serving/model_server/kserve/negative/utils.py` around lines 88 - 97, The curl command built in the cmd variable currently lacks timeouts and can hang; update the command string assembled where cmd is defined to include connection and overall timeouts (for example add --connect-timeout 5 and --max-time 30) before --insecure, so the curl invocation used by run_command (the call to run_command(command=shlex.split(cmd), verify_stderr=False, check=False)) will fail fast on transient network issues; ensure the timeout flags are inside the f-string and properly ordered/quoted with the existing arguments.
🧹 Nitpick comments (2)
tests/model_serving/model_server/kserve/negative/test_unsupported_content_type.py (1)
87-91: Broaden pod-health validation to both unsupported content types. The health test currently exercises only one invalid header path. Consider checking both values used in the parametrized status test to avoid blind spots.
♻️ Suggested patch:

```diff
-        send_inference_request(
-            inference_service=negative_test_ovms_isvc,
-            body=json.dumps(VALID_OVMS_INFERENCE_BODY),
-            content_type="text/xml",
-        )
+        for content_type in ("text/xml", "application/x-www-form-urlencoded"):
+            send_inference_request(
+                inference_service=negative_test_ovms_isvc,
+                body=json.dumps(VALID_OVMS_INFERENCE_BODY),
+                content_type=content_type,
+            )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_serving/model_server/kserve/negative/test_unsupported_content_type.py` around lines 87 - 91, the test currently calls send_inference_request only with content_type "text/xml", leaving the other unsupported header untested; update the test to iterate or parametrize over both unsupported content-types used in the parametrized status test and call send_inference_request for each value (use the same two content-type values as the parametrized status test) so pod-health is validated for both cases; modify the test to loop or add a small subtest/param for the other content type and assert the same health outcome for each.

tests/model_serving/model_server/kserve/negative/test_malformed_json_payload.py (1)
74-76: Align assertions with the stated expectation about the parse-failure signal. The test docs say the response should indicate a JSON parse failure, but the assertion checks only the status. Consider validating a stable error substring to cover the full expected behavior.
♻️ Suggested patch:

```diff
 assert status_code in MALFORMED_JSON_EXPECTED_CODES, (
     f"Expected 400 or 412 for malformed JSON, got {status_code}. Response: {response_body}"
 )
+assert "json" in response_body.lower(), (
+    f"Expected JSON parse-related error details, got: {response_body}"
+)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_serving/model_server/kserve/negative/test_malformed_json_payload.py` around lines 74 - 76, The current assertion only checks status_code against MALFORMED_JSON_EXPECTED_CODES but doesn't verify the response indicates a JSON parse failure; update the test in test_malformed_json_payload.py to also assert that response_body (stringified) contains a stable error substring such as "parse" or "malformed json" (case-insensitive) to confirm the parse-failure signal, using the existing variables status_code, MALFORMED_JSON_EXPECTED_CODES and response_body to locate and enhance the assertion so both status and error message are validated.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/model_serving/model_server/kserve/negative/test_invalid_model_name.py`:
- Around line 25-26: The TestInvalidModelName test class is missing the pytest
marker for rawdeployment; add the `@pytest.mark.rawdeployment` decorator alongside
the existing `@pytest.mark.tier1` above the TestInvalidModelName class declaration
so it matches the other negative tests (e.g., test_unsupported_content_type.py)
and will be picked up by CI filters that target rawdeployment.
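As a minimal sketch of the fix this comment describes (marker and class names are taken from the comment; the test body is a hypothetical placeholder, not the actual test):

```python
import pytest

# Hypothetical sketch: rawdeployment added alongside tier1 on the class,
# so CI filters such as `pytest -m rawdeployment` select this module.
@pytest.mark.rawdeployment
@pytest.mark.tier1
class TestInvalidModelName:
    def test_invalid_name_rejected(self):
        # Placeholder body; the real test sends an inference request with a
        # nonexistent model name and asserts on the error response.
        assert True
```

Note that custom markers like `tier1` and `rawdeployment` must be registered (e.g. under `markers` in `pytest.ini` or `pyproject.toml`) to avoid `PytestUnknownMarkWarning`.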
In
`@tests/model_serving/model_server/kserve/negative/test_wrong_input_data_type.py`:
- Around line 35-36: The module-level pytest marker list for the
TestWrongInputDataType test class is missing the rawdeployment marker; update
the module to include pytest.mark.rawdeployment alongside pytest.mark.tier1 (so
the class TestWrongInputDataType is marked with both tier1 and rawdeployment) to
match sibling negative tests; also scan other negative test modules (e.g.,
test_invalid_model_name.py) and add rawdeployment where similar markers are
expected for CI marker-based selection consistency.
ℹ️ Review info
Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: dd715c29-d607-49ca-8872-7070d0e2bc07
📒 Files selected for processing (7)
- tests/model_serving/model_server/kserve/negative/conftest.py
- tests/model_serving/model_server/kserve/negative/test_invalid_model_name.py
- tests/model_serving/model_server/kserve/negative/test_malformed_json_payload.py
- tests/model_serving/model_server/kserve/negative/test_missing_required_fields.py
- tests/model_serving/model_server/kserve/negative/test_unsupported_content_type.py
- tests/model_serving/model_server/kserve/negative/test_wrong_input_data_type.py
- tests/model_serving/model_server/kserve/negative/utils.py
Status of building tag latest: success.