
Add negative test for modelServer #1152

Merged
mwaykole merged 5 commits into opendatahub-io:main from mwaykole:neg12
Mar 4, 2026
Conversation


@mwaykole mwaykole commented Feb 26, 2026

Pull Request

Summary

Related Issues

  • Fixes:
  • JIRA:

How it has been tested

  • Locally
  • Jenkins

Additional Requirements

  • If this PR introduces a new test image, did you create a PR to mirror it in disconnected environment?
  • If this PR introduces new marker(s)/adds a new component, was relevant ticket created to update relevant Jenkins job?

Summary by CodeRabbit

Release Notes

Tests

  • Added comprehensive negative test coverage for model serving error scenarios, validating error responses for invalid model requests, malformed JSON payloads, missing required fields, incorrect input data types, and unsupported content types.
  • Improved test infrastructure by refactoring fixtures to use shared resources and consolidating test utilities for enhanced reusability across modules.

Signed-off-by: Milind waykole <mwaykole@redhat.com>
@github-actions

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, comment /wip; to remove the label, comment /wip cancel.
  • To block merging of a PR, comment /hold; to un-block merging, comment /hold cancel.
  • To mark a PR as approved, comment /lgtm; to remove approval, comment /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name>. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, comment /build-push-pr-image. This creates an image tagged
    pr-<pr_number> in the quay repository; the tag is deleted when the PR is merged or closed.
Supported labels

{'/wip', '/hold', '/cherry-pick', '/verified', '/build-push-pr-image', '/lgtm'}

@mwaykole mwaykole marked this pull request as ready for review March 4, 2026 06:04
@mwaykole mwaykole enabled auto-merge (squash) March 4, 2026 06:05

coderabbitai Bot commented Mar 4, 2026

📝 Walkthrough

This PR refactors KServe negative test fixtures to use package-scoped resources instead of per-test/per-class scopes, introduces four new negative test modules for validating error handling (invalid model names, malformed JSON, missing fields, wrong data types), and updates test utilities with a revised inference request function and pod health assertion helper.

Changes

  • Fixture Refactoring — tests/model_serving/model_server/kserve/negative/conftest.py:
    Introduces package-scoped fixtures negative_test_namespace and negative_test_s3_secret to replace per-class resources. Updates ovms_serving_runtime and negative_test_ovms_isvc to use the shared fixtures instead of unprivileged_model_namespace and ci_endpoint_s3_secret. Adjusts storage configuration and imports accordingly.
  • New Negative Test Modules — tests/model_serving/model_server/kserve/negative/test_invalid_model_name.py, test_malformed_json_payload.py, test_missing_required_fields.py, test_wrong_input_data_type.py:
    Adds four test classes with parameterized and non-parameterized tests validating error handling for invalid models, malformed JSON, missing required fields, and wrong input data types. Each includes assertions for correct HTTP status codes and pod health verification using the new utilities.
  • Test Utility Updates — tests/model_serving/model_server/kserve/negative/utils.py:
    Introduces an assert_pods_healthy() helper for pod state validation; adds the VALID_OVMS_INFERENCE_BODY constant; refactors send_inference_request_with_content_type() to send_inference_request() with an updated signature (body as string, optional model_name override, default content_type) and simplified response handling.
  • Test Refactoring — tests/model_serving/model_server/kserve/negative/test_unsupported_content_type.py:
    Simplifies by removing the Jira marker and parameterized fixture setup; replaces custom pod state verification with assert_pods_healthy(); uses the new send_inference_request() utility and the imported VALID_OVMS_INFERENCE_BODY constant instead of local definitions.
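The new assert_pods_healthy() helper runs against live cluster pods, which can't be reproduced here; the sketch below illustrates the same kind of check over plain status dicts instead of the real Kubernetes API (the function body, field names, and sample data are assumptions, not the PR's code):

```python
def assert_pods_healthy(pods: list[dict]) -> None:
    """Fail loudly if any pod is not Running with every container ready."""
    for pod in pods:
        phase = pod.get("phase")
        assert phase == "Running", f"pod {pod.get('name')} is in phase {phase}"
        assert all(pod.get("containers_ready", [])), (
            f"pod {pod.get('name')} has unready containers"
        )


# A healthy predictor pod passes; a crashed one would raise AssertionError.
assert_pods_healthy(
    [{"name": "ovms-predictor-0", "phase": "Running", "containers_ready": [True, True]}]
)
```

In the actual tests the pod states would come from the cluster after each negative request, verifying that a rejected payload never crashes or restarts the serving pod.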

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Description check — ⚠️ Warning: The PR description is entirely template boilerplate, with all sections empty, checkboxes unchecked, and no actual content describing the changes, testing approach, or related issues.
    Resolution: Complete all template sections: add a detailed summary of the negative tests added, link any related GitHub issues or JIRA tickets, check off testing methods, and address additional requirements if applicable.
  • Title check — ❓ Inconclusive: The title 'Add negative test for modelServer' is vague and does not clearly specify which negative tests are being added or what specific behaviors are being tested.
    Resolution: Replace with a more specific title that identifies the scope of the negative tests, e.g., 'Add negative tests for OVMS model inference errors' or 'Add KServe OVMS negative tests for invalid models, malformed payloads, and unsupported content types'.

✅ Passed checks (1 passed)

  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.






@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/model_serving/model_server/kserve/negative/utils.py (1)

88-97: ⚠️ Potential issue | 🟠 Major

Add timeout flags to curl command to prevent test hangs.

The curl invocation lacks --connect-timeout and --max-time flags, so transient networking issues can indefinitely stall the test process.

🔧 Suggested patch
     cmd = (
         f"curl -s -w '\\n%{{http_code}}' "
+        f"--connect-timeout 10 --max-time 30 "
         f"-X POST {endpoint} "
         f"-H 'Content-Type: {content_type}' "
         f"--data-raw {shlex.quote(body)} "
         f"--insecure"
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_serving/model_server/kserve/negative/utils.py` around lines 88 -
97, The curl command built in the cmd variable currently lacks timeouts and can
hang; update the command string assembled where cmd is defined to include
connection and overall timeouts (for example add --connect-timeout 5 and
--max-time 30) before --insecure, so the curl invocation used by run_command
(the call to run_command(command=shlex.split(cmd), verify_stderr=False,
check=False)) will fail fast on transient network issues; ensure the timeout
flags are inside the f-string and properly ordered/quoted with the existing
arguments.
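Applying that suggestion, the command construction might end up looking roughly like this self-contained sketch (the helper name, endpoint, and exact timeout values are illustrative, not the repository's actual code):

```python
import shlex


def build_inference_curl(endpoint: str, body: str,
                         content_type: str = "application/json") -> str:
    """Assemble a curl command that POSTs `body` and prints the HTTP status
    on a trailing line; the timeout flags keep transient network issues
    from hanging the test run."""
    return (
        f"curl -s -w '\\n%{{http_code}}' "
        f"--connect-timeout 10 --max-time 30 "
        f"-X POST {endpoint} "
        f"-H 'Content-Type: {content_type}' "
        f"--data-raw {shlex.quote(body)} "
        f"--insecure"
    )


cmd = build_inference_curl(
    "https://example.invalid/v2/models/demo/infer", '{"inputs": []}'
)
```

shlex.quote() keeps a malformed or quote-laden test body from breaking out of the shell command, which matters precisely because these tests send intentionally broken payloads.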
🧹 Nitpick comments (2)
tests/model_serving/model_server/kserve/negative/test_unsupported_content_type.py (1)

87-91: Broaden pod-health validation to both unsupported content types.

The health test currently exercises only one invalid header path. Consider checking both values used in the parametrized status test to avoid blind spots.

♻️ Suggested patch
-        send_inference_request(
-            inference_service=negative_test_ovms_isvc,
-            body=json.dumps(VALID_OVMS_INFERENCE_BODY),
-            content_type="text/xml",
-        )
+        for content_type in ("text/xml", "application/x-www-form-urlencoded"):
+            send_inference_request(
+                inference_service=negative_test_ovms_isvc,
+                body=json.dumps(VALID_OVMS_INFERENCE_BODY),
+                content_type=content_type,
+            )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@tests/model_serving/model_server/kserve/negative/test_unsupported_content_type.py`
around lines 87 - 91, The test currently calls send_inference_request only with
content_type "text/xml", leaving the other unsupported header untested; update
the test to iterate or parametrize over both unsupported content-types used in
the parametrized status test and call send_inference_request for each value (use
the same two content-type values as the parametrized status test) so pod-health
is validated for both cases; modify the test to loop or add a small
subtest/param for the other content type and assert the same health outcome for
each.
tests/model_serving/model_server/kserve/negative/test_malformed_json_payload.py (1)

74-76: Align assertions with stated expectation about parse-failure signal.

The test docs say the response should indicate JSON parse failure, but the assertion checks only status. Consider validating a stable error substring to cover the full expected behavior.

♻️ Suggested patch
         assert status_code in MALFORMED_JSON_EXPECTED_CODES, (
             f"Expected 400 or 412 for malformed JSON, got {status_code}. Response: {response_body}"
         )
+        assert "json" in response_body.lower(), (
+            f"Expected JSON parse-related error details, got: {response_body}"
+        )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@tests/model_serving/model_server/kserve/negative/test_malformed_json_payload.py`
around lines 74 - 76, The current assertion only checks status_code against
MALFORMED_JSON_EXPECTED_CODES but doesn't verify the response indicates a JSON
parse failure; update the test in test_malformed_json_payload.py to also assert
that response_body (stringified) contains a stable error substring such as
"parse" or "malformed json" (case-insensitive) to confirm the parse-failure
signal, using the existing variables status_code, MALFORMED_JSON_EXPECTED_CODES
and response_body to locate and enhance the assertion so both status and error
message are validated.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/model_serving/model_server/kserve/negative/test_invalid_model_name.py`:
- Around line 25-26: The TestInvalidModelName test class is missing the pytest
marker for rawdeployment; add the `@pytest.mark.rawdeployment` decorator alongside
the existing `@pytest.mark.tier1` above the TestInvalidModelName class declaration
so it matches the other negative tests (e.g., test_unsupported_content_type.py)
and will be picked up by CI filters that target rawdeployment.

In
`@tests/model_serving/model_server/kserve/negative/test_wrong_input_data_type.py`:
- Around line 35-36: The module-level pytest marker list for the
TestWrongInputDataType test class is missing the rawdeployment marker; update
the module to include pytest.mark.rawdeployment alongside pytest.mark.tier1 (so
the class TestWrongInputDataType is marked with both tier1 and rawdeployment) to
match sibling negative tests; also scan other negative test modules (e.g.,
test_invalid_model_name.py) and add rawdeployment where similar markers are
expected for CI marker-based selection consistency.
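Both inline comments reduce to adding one class-level decorator; a minimal sketch of the marker layout the reviewer is asking for (the test body is a placeholder, the class and marker names come from the comments above):

```python
import pytest


@pytest.mark.tier1
@pytest.mark.rawdeployment  # the marker the review flags as missing
class TestInvalidModelName:
    def test_invalid_model_name_returns_error(self):
        """Placeholder; the real assertions live in the PR's test module."""
```

With both markers present, CI invocations such as `pytest -m rawdeployment` select these classes consistently with the sibling negative tests.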


ℹ️ Review info
Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: dd715c29-d607-49ca-8872-7070d0e2bc07

📥 Commits

Reviewing files that changed from the base of the PR and between 394a748 and a515a44.

📒 Files selected for processing (7)
  • tests/model_serving/model_server/kserve/negative/conftest.py
  • tests/model_serving/model_server/kserve/negative/test_invalid_model_name.py
  • tests/model_serving/model_server/kserve/negative/test_malformed_json_payload.py
  • tests/model_serving/model_server/kserve/negative/test_missing_required_fields.py
  • tests/model_serving/model_server/kserve/negative/test_unsupported_content_type.py
  • tests/model_serving/model_server/kserve/negative/test_wrong_input_data_type.py
  • tests/model_serving/model_server/kserve/negative/utils.py

@mwaykole mwaykole disabled auto-merge March 4, 2026 08:56
@mwaykole mwaykole enabled auto-merge (squash) March 4, 2026 08:56
@mwaykole mwaykole merged commit 39d3462 into opendatahub-io:main Mar 4, 2026
11 checks passed

github-actions Bot commented Mar 4, 2026

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.
