
Add HAP detectors to existing test cases#508

Merged
sheltoncyril merged 3 commits into opendatahub-io:main from kpunwatk:several_detectors
Aug 15, 2025

Conversation

Contributor

@kpunwatk kpunwatk commented Aug 11, 2025

Addresses: https://issues.redhat.com/browse/RHOAIENG-28245
This PR introduces HAP (Hate speech and profanity) detectors to existing test cases within the guardrail tests.
[Screenshot attached: 2025-08-13 16:59:43]

Description

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that the changes work.

@kpunwatk kpunwatk requested a review from a team as a code owner August 11, 2025 11:29
Contributor

coderabbitai bot commented Aug 11, 2025

📝 Walkthrough

Summary by CodeRabbit

  • Tests
    • Added multi-detector guardrails tests combining prompt-injection and harmful-content detectors, with end-to-end checks for both unsafe inputs and safe prompts.
    • Introduced fixtures to provision a detector service and route during test runs, enabling reliable orchestration and verification paths.
  • Chores
    • Added a MinIO pod configuration to support model assets required by the new detector tests, improving test environment parity and setup consistency.

Walkthrough

Adds two pytest fixtures to provision a HAP detector InferenceService and its Route, extends guardrails tests with a multi-detector (prompt_injection + hap) test class and related assertions, and adds a MinIO PodConfig constant for a QWEN HAP/BPIV2 test image.

Changes

Cohort / File(s) Summary of changes
Guardrails test fixtures
tests/model_explainability/guardrails/conftest.py
Adds fixtures hap_detector_isvc (creates an InferenceService named hap-detector using RAW_DEPLOYMENT with model_format guardrails-detector-huggingface, tied to a HuggingFace ServingRuntime, using a MinIO data connection and storage path granite-guardian-hap-38m; fixed resources 1 CPU/4Gi RAM, no GPUs, replica min/max 1, predictor pods not waited on, auth disabled) and hap_detector_route (creates and waits for a Route named hap-detector-route for that service).
Guardrails multi-detector tests
tests/model_explainability/guardrails/test_guardrails.py
Adds HF_DETECTORS mapping to include prompt_injection and hap for input and output; introduces TestGuardrailsOrchestratorWithSeveralDetectors with tests test_guardrails_several_detector_unsuitable_input and test_guardrails_several_detector_negative_detection that run the orchestrator configured with two detectors and assert per-detector detection fields or negative detection.
MinIO pod config
utilities/constants.py
Adds MinIo.PodConfig.QWEN_HAP_BPIV2_MINIO_CONFIG: dict[str, Any] reusing MINIO_BASE_CONFIG, specifying image and sha256 for a QWEN HAP/BPIV2 MinIO test image.
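Taken together, the new constants might look like the following minimal sketch. The image digest is the one pinned in the reviewed diff; the contents of MINIO_BASE_CONFIG and the exact shape of the HF_DETECTORS mapping are assumptions inferred from the summaries above, not the repository's actual definitions.

```python
from typing import Any

# Assumed placeholder; the real MINIO_BASE_CONFIG in utilities/constants.py
# is not shown in this PR summary.
MINIO_BASE_CONFIG: dict[str, Any] = {"name": "minio"}

# Image + sha256 digest as pinned in the reviewed diff, merged over the base
# config following the existing MinIO preset pattern.
QWEN_HAP_BPIV2_MINIO_CONFIG: dict[str, Any] = {
    "image": "quay.io/trustyai_testing/qwen2.5-0.5b-instruct-hap-bpiv2-minio@"
    "sha256:eac1ca56f62606e887c80b4a358b3061c8d67f0b071c367c0aa12163967d5b2b",
    **MINIO_BASE_CONFIG,
}

# Assumed shape of the multi-detector mapping: each detector name maps to a
# (possibly empty) parameter dict, for both the input and output stages.
HF_DETECTORS: dict[str, dict[str, dict[str, Any]]] = {
    "input": {"prompt_injection": {}, "hap": {}},
    "output": {"prompt_injection": {}, "hap": {}},
}
```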

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these settings in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 5aa9b6d and 178d585.

📒 Files selected for processing (3)
  • tests/model_explainability/guardrails/conftest.py (1 hunks)
  • tests/model_explainability/guardrails/test_guardrails.py (2 hunks)
  • utilities/constants.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/model_explainability/guardrails/conftest.py
  • utilities/constants.py
  • tests/model_explainability/guardrails/test_guardrails.py


@kpunwatk
Contributor Author

/wip

@kpunwatk kpunwatk changed the title Add HAP detectors to existing test cases [WIP] Add HAP detectors to existing test cases Aug 11, 2025
@github-actions

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, add /wip in a comment. To remove, comment /wip cancel.
  • To block merging of a PR, add /hold in a comment. To unblock, comment /hold cancel.
  • To mark a PR as approved, add /lgtm in a comment. To remove, add /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name>. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, add /build-push-pr-image in a comment. This creates an image tagged
    pr-<pr_number> in the quay repository; that tag is deleted when the PR is merged or closed.
Supported labels

{'/hold', '/lgtm', '/build-push-pr-image', '/wip', '/verified', '/cherry-pick'}

@kpunwatk
Contributor Author

/wip

Contributor

@adolfo-ab adolfo-ab left a comment


I think we need to split into several scenarios, for example:

  1. Unsuitable input:
    1.1. Prompt that triggers all the detectors at the same time (HAP, prompt injection, pii), assert that the three detectors are triggered.
    1.2. Prompt that triggers a single detector (just HAP, for example), assert that only that detector is triggered, and not the other 2

  2. Unsuitable output
    2.1. Prompt that triggers a single detection at the output (you can use the same one we use for PII output detection in the TestGuardrailsOrchestratorWithBuiltInDetectors test class)

  3. No detections (a test with a harmless prompt, assert that no detections are triggered)
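A small helper capturing the exclusivity checks suggested above might look like the sketch below; the function name and the shape of the detections dict are hypothetical, not the repository's actual utilities.

```python
def assert_only_expected_detections(detections: dict, expected: set) -> None:
    """Assert that exactly the expected detectors fired and no others.

    `detections` is assumed to map detector name -> list of hits, where an
    empty list means that detector did not trigger.
    """
    triggered = {name for name, hits in detections.items() if hits}
    assert triggered == expected, f"expected {expected}, got {triggered}"


# Scenario 1.2: a HAP-only prompt must not also trip the other two detectors.
assert_only_expected_detections(
    detections={"hap": [{"score": 0.98}], "prompt_injection": [], "pii": []},
    expected={"hap"},
)

# Scenario 3: a harmless prompt yields no detections at all.
assert_only_expected_detections(
    detections={"hap": [], "prompt_injection": [], "pii": []},
    expected=set(),
)
```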

@kpunwatk
Contributor Author

/wip cancel

@kpunwatk kpunwatk changed the title [WIP] Add HAP detectors to existing test cases Add HAP detectors to existing test cases Aug 13, 2025
Collaborator

@dbasunag dbasunag left a comment


Please rebase and see if tests can be split or simplified.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
tests/model_explainability/guardrails/conftest.py (1)

343-353: Optional: The Route for the HAP detector may be unnecessary

These tests route traffic via the Guardrails Orchestrator using internal Service names (hap-detector-predictor). The detector Routes are not used in the requests themselves (only pulled in as fixtures to ensure readiness). If not needed elsewhere, consider dropping the detector Route resources to reduce cluster surface and flake potential.

-@pytest.fixture(scope="class")
-def hap_detector_route(
-    admin_client: DynamicClient,
-    model_namespace: Namespace,
-    hap_detector_isvc: InferenceService,
-) -> Generator[Route, Any, Any]:
-    yield Route(
-        name="hap-detector-route",
-        namespace=model_namespace.name,
-        service=hap_detector_isvc.name,
-        wait_for_resource=True,
-    )
+# If external exposure is not required, you can safely remove this fixture.
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 83f6a9a and 7819b11.

📒 Files selected for processing (3)
  • tests/model_explainability/guardrails/conftest.py (1 hunks)
  • tests/model_explainability/guardrails/test_guardrails.py (2 hunks)
  • utilities/constants.py (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
📚 Learning: 2025-06-11T16:40:11.593Z
Learnt from: israel-hdez
PR: opendatahub-io/opendatahub-tests#346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper `create_isvc` (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.

Applied to files:

  • tests/model_explainability/guardrails/conftest.py
🔇 Additional comments (3)
utilities/constants.py (1)

318-324: Add HAP/BPIV2 MinIO image config — consistent with existing pattern

The new QWEN_HAP_BPIV2_MINIO_CONFIG mirrors the existing image+sha256 pinning and reuses MINIO_BASE_CONFIG. This slots in cleanly with other MinIO presets.

tests/model_explainability/guardrails/conftest.py (1)

311-339: HAP detector InferenceService fixture looks correct and aligns with existing prompt-injection fixture

  • Reuses the same HF ServingRuntime and creates a dedicated isvc with a clear name, pinned resources, and auth disabled.
  • Good call on wait_for_predictor_pods=False since create_isvc already waits for READY=True. Leveraging the existing helper avoids redundant waits. I’m explicitly using the retrieved learning here.

No changes required from my side.

tests/model_explainability/guardrails/test_guardrails.py (1)

325-367: Paramset uses QWEN_HAP_BPIV2_MINIO_CONFIG — good alignment with new MinIO preset

This parametrization correctly switches to MinIo.PodConfig.QWEN_HAP_BPIV2_MINIO_CONFIG for the multi-detector scenario. The orchestrator config also registers both services (prompt-injection-detector-predictor and hap-detector-predictor), matching the fixture names.

No changes needed.

@kpunwatk kpunwatk force-pushed the several_detectors branch 2 times, most recently from 5a54272 to 506fcbf on August 14, 2025 10:04
@kpunwatk
Contributor Author

Hi @adolfo-ab @sheltoncyril @dbasunag, please review again; the relevant changes have been updated. Thanks!

sheltoncyril
sheltoncyril previously approved these changes Aug 14, 2025
Contributor

@sheltoncyril sheltoncyril left a comment


/lgtm

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/model_explainability/guardrails/test_guardrails.py (2)

375-385: Docstring grammar and clarity

Tighten wording and fix plural agreement.

-    These tests verify that the GuardrailsOrchestrator works as expected when using two HuggingFace detectors
-    (prompt injection and hap).
+    These tests verify that the GuardrailsOrchestrator works as expected when using two HuggingFace detectors
+    (prompt injection and HAP).
@@
-        - Deploy a prompt injection detector and HAP detectors using the HuggingFace SR.
-        - Check that the detectors works when we have an unsuitable input.
-        - Check that the detector works when we have a harmless input (no detection).
+        - Deploy a Prompt Injection detector and a HAP detector using the HuggingFace SR.
+        - Check that the detectors work when we have an unsuitable input.
+        - Check that the detectors work when we have a harmless input (no detection).

404-414: Add timeouts to HTTP calls to avoid indefinite hangs in CI

requests.post defaults to no timeout; a slow or stuck route will stall the test. Add a reasonable timeout (e.g., 30–60s) to each request in this suite.

         response_prompt = requests.post(
             url=f"https://{guardrails_orchestrator_route.host}/{CHAT_COMPLETIONS_DETECTION_ENDPOINT}",
             headers=get_auth_headers(token=current_client_token),
             json=get_chat_detections_payload(
                 content=prompt_injection,
                 model=MNT_MODELS,
                 detectors=HF_DETECTORS,
             ),
-            verify=openshift_ca_bundle_file,
+            verify=openshift_ca_bundle_file,
+            timeout=Timeout.TIMEOUT_1MIN,
         )
@@
         response_hap = requests.post(
             url=f"https://{guardrails_orchestrator_route.host}/{CHAT_COMPLETIONS_DETECTION_ENDPOINT}",
             headers=get_auth_headers(token=current_client_token),
             json=get_chat_detections_payload(
                 content=hap_prompt,
                 model=MNT_MODELS,
                 detectors=HF_DETECTORS,
             ),
-            verify=openshift_ca_bundle_file,
+            verify=openshift_ca_bundle_file,
+            timeout=Timeout.TIMEOUT_1MIN,
         )
@@
         response = requests.post(
             url=f"https://{guardrails_orchestrator_route.host}/{CHAT_COMPLETIONS_DETECTION_ENDPOINT}",
             headers=get_auth_headers(token=current_client_token),
             json=get_chat_detections_payload(content=HARMLESS_PROMPT, model=MNT_MODELS, detectors=HF_DETECTORS),
-            verify=openshift_ca_bundle_file,
+            verify=openshift_ca_bundle_file,
+            timeout=Timeout.TIMEOUT_1MIN,
         )

Also applies to: 424-433, 453-458
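The same pattern could be factored into a small wrapper so every call in the suite carries an explicit timeout; this helper is hypothetical (not the repo's actual utility), and the 60-second value simply mirrors the suggested Timeout.TIMEOUT_1MIN.

```python
import requests

# Assumed constant mirroring the suggested Timeout.TIMEOUT_1MIN.
DEFAULT_TIMEOUT_SEC = 60


def post_detection(
    url: str,
    headers: dict,
    payload: dict,
    ca_bundle: str,
    timeout: float = DEFAULT_TIMEOUT_SEC,
):
    # requests.post has no default timeout; passing one makes a hung
    # connection raise requests.exceptions.Timeout instead of blocking
    # the CI job indefinitely.
    return requests.post(
        url=url,
        headers=headers,
        json=payload,
        verify=ca_bundle,
        timeout=timeout,
    )
```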

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these settings in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 7cb77cf and 7854b50.

📒 Files selected for processing (3)
  • tests/model_explainability/guardrails/conftest.py (1 hunks)
  • tests/model_explainability/guardrails/test_guardrails.py (2 hunks)
  • utilities/constants.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/model_explainability/guardrails/conftest.py
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
🧬 Code Graph Analysis (1)
tests/model_explainability/guardrails/test_guardrails.py (4)
utilities/constants.py (3)
  • MinIo (272-332)
  • PodConfig (255-262)
  • PodConfig (288-328)
tests/conftest.py (3)
  • current_client_token (82-83)
  • minio_pod (490-530)
  • minio_data_connection (556-568)
tests/model_explainability/guardrails/conftest.py (4)
  • guardrails_orchestrator_route (104-113)
  • prompt_injection_detector_route (205-215)
  • hap_detector_route (343-353)
  • guardrails_orchestrator (34-60)
tests/model_explainability/guardrails/utils.py (4)
  • get_auth_headers (11-12)
  • get_chat_detections_payload (15-40)
  • verify_builtin_detector_unsuitable_input_response (110-148)
  • verify_negative_detection_response (190-226)
🔇 Additional comments (2)
utilities/constants.py (1)

318-324: New MinIO image config for HAP/BPIV2 — LGTM

The constant is consistent with existing patterns (digest-pinned image, MINIO_BASE_CONFIG merge). No issues spotted.

tests/model_explainability/guardrails/test_guardrails.py (1)

36-46: Good split between single- and multi-detector constants

This preserves backward compatibility for single-detector HF tests while enabling the multi-detector scenarios. Naming is clear.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
tests/model_explainability/guardrails/test_guardrails.py (1)

331-371: Avoid namespace collision with single-detector HF tests

Both suites use the same model namespace ("test-guardrails-huggingface"); this risks resource conflicts/flakiness when run together. Use a distinct namespace for the multi-detector suite.

Apply this diff:

-            {"name": "test-guardrails-huggingface"},
+            {"name": "test-guardrails-huggingface-multi"},
🧹 Nitpick comments (2)
utilities/constants.py (1)

318-321: Nit: collapse split string literals for readability

Inline the digest into the image string to avoid relying on implicit string literal concatenation.

Apply this diff:

-        QWEN_HAP_BPIV2_MINIO_CONFIG: dict[str, Any] = {
-            "image": "quay.io/trustyai_testing/qwen2.5-0.5b-instruct-hap-bpiv2-minio@"
-            "sha256:eac1ca56f62606e887c80b4a358b3061c8d67f0b071c367c0aa12163967d5b2b",
+        QWEN_HAP_BPIV2_MINIO_CONFIG: dict[str, Any] = {
+            "image": "quay.io/trustyai_testing/qwen2.5-0.5b-instruct-hap-bpiv2-minio@sha256:eac1ca56f62606e887c80b4a358b3061c8d67f0b071c367c0aa12163967d5b2b",
tests/model_explainability/guardrails/test_guardrails.py (1)

375-385: Docstring grammar nits

Fix subject-verb agreement and clarify that both detectors are exercised.

Apply this diff:

 class TestGuardrailsOrchestratorWithSeveralDetectors:
     """
-    These tests verify that the GuardrailsOrchestrator works as expected when using two HuggingFace detectors
-    (prompt injection and hap).
+    These tests verify that the GuardrailsOrchestrator works as expected when using two HuggingFace detectors
+    (prompt injection and HAP).
     Steps:
         - Deploy an LLM (Qwen2.5-0.5B-Instruct) using the vLLM SR.
         - Deploy the GuardrailsOrchestrator.
-        - Deploy a prompt injection detector and HAP detectors using the HuggingFace SR.
-        - Check that the detectors works when we have an unsuitable input.
-        - Check that the detector works when we have a harmless input (no detection).
+        - Deploy prompt injection and HAP detectors using the HuggingFace SR.
+        - Check that the detectors work for unsuitable input.
+        - Check that the detectors return no detection for harmless input.
     """
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these settings in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 35f9f0c and 5aa9b6d.

📒 Files selected for processing (3)
  • tests/model_explainability/guardrails/conftest.py (1 hunks)
  • tests/model_explainability/guardrails/test_guardrails.py (2 hunks)
  • utilities/constants.py (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: adolfo-ab
PR: opendatahub-io/opendatahub-tests#334
File: tests/model_explainability/trustyai_service/test_trustyai_service.py:52-65
Timestamp: 2025-06-05T10:05:17.642Z
Learning: For TrustyAI image validation tests: operator image tests require admin_client, related_images_refs, and trustyai_operator_configmap fixtures, while service image tests would require different fixtures like trustyai_service_with_pvc_storage, model_namespace, and current_client_token.
📚 Learning: 2025-06-11T16:40:11.593Z
Learnt from: israel-hdez
PR: opendatahub-io/opendatahub-tests#346
File: tests/model_serving/model_server/inference_graph/conftest.py:85-92
Timestamp: 2025-06-11T16:40:11.593Z
Learning: The helper `create_isvc` (used in tests/model_serving utilities) already waits until the created InferenceService reports Condition READY=True before returning, so additional readiness waits in fixtures are unnecessary.

Applied to files:

  • tests/model_explainability/guardrails/conftest.py
🧬 Code Graph Analysis (2)
tests/model_explainability/guardrails/conftest.py (2)
utilities/inference_utils.py (1)
  • create_isvc (547-761)
utilities/constants.py (1)
  • KServeDeploymentType (6-9)
tests/model_explainability/guardrails/test_guardrails.py (4)
tests/conftest.py (3)
  • current_client_token (82-83)
  • minio_pod (490-530)
  • minio_data_connection (556-568)
tests/model_explainability/conftest.py (1)
  • qwen_isvc (181-204)
tests/model_explainability/guardrails/conftest.py (6)
  • guardrails_orchestrator_route (104-113)
  • prompt_injection_detector_route (205-215)
  • hap_detector_route (343-353)
  • openshift_ca_bundle_file (220-223)
  • orchestrator_config (64-73)
  • guardrails_orchestrator (34-60)
tests/model_explainability/guardrails/utils.py (4)
  • get_auth_headers (11-12)
  • get_chat_detections_payload (15-40)
  • verify_builtin_detector_unsuitable_input_response (110-148)
  • verify_negative_detection_response (190-226)
🔇 Additional comments (7)
utilities/constants.py (1)

318-324: Add pinned HAP MinIO image config — LGTM

The new QWEN_HAP_BPIV2_MINIO_CONFIG is consistent with existing MinIO configs and correctly pins the image by digest. This enables reproducible test environments for the new HAP detector scenarios.

tests/model_explainability/guardrails/conftest.py (3)

311-339: HAP detector InferenceService fixture — LGTM

Parameters mirror the prompt-injection detector, using the HF runtime and MinIO storage as expected. Auth disabled and 1x replica sizing is appropriate for tests.


325-327: Verify MinIO storage path exists in the new image

Ensure the path "granite-guardian-hap-38m" exists in the MinIO bucket provided by QWEN_HAP_BPIV2_MINIO_CONFIG; otherwise, the detector will fail to load.

Would you like me to add a preflight check in the fixture that probes MinIO for the path and skips the test gracefully if missing?


342-353: Route for HAP detector — LGTM

The Route wiring to the detector service aligns with the pattern used for the prompt-injection detector and should work as intended.

tests/model_explainability/guardrails/test_guardrails.py (3)

36-46: Detector constants split is correct and clear — LGTM

Keeping PROMPT_INJECTION_DETECTORS for single-detector tests and introducing HF_DETECTORS for the dual-detector suite preserves backward compatibility and clarity.


401-421: Good use of a loop to exercise both detectors

The looped POST + verification reduces duplication and keeps both detectors’ assertions consistent.


442-449: Negative detection payload correctly uses multi-detector configuration

The harmless prompt uses HF_DETECTORS, ensuring both detectors contribute to the “no detections” assertion.

Contributor

@sheltoncyril sheltoncyril left a comment


/lgtm

@sheltoncyril sheltoncyril requested a review from adolfo-ab August 15, 2025 10:28
@sheltoncyril sheltoncyril dismissed adolfo-ab’s stale review August 15, 2025 10:34

All comments have been addressed, just looking to merge as requested by @kpunwatk

@sheltoncyril sheltoncyril merged commit 66a1327 into opendatahub-io:main Aug 15, 2025
10 checks passed
@github-actions

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.

mwaykole pushed a commit to mwaykole/opendatahub-tests that referenced this pull request Jan 23, 2026
* Add HAP detectors to existing test cases
	modified:   tests/model_explainability/guardrails/conftest.py
	modified:   tests/model_explainability/guardrails/test_guardrails.py
	modified:   utilities/constants.py

	modified:   tests/model_explainability/guardrails/conftest.py
	modified:   tests/model_explainability/guardrails/test_guardrails.py
	modified:   utilities/constants.py

	modified:   tests/model_explainability/guardrails/conftest.py
	modified:   tests/model_explainability/guardrails/test_guardrails.py
	modified:   utilities/constants.py

	modified:   tests/model_explainability/guardrails/test_guardrails.py

	modified:   tests/model_explainability/guardrails/test_guardrails.py

	modified:   tests/model_explainability/guardrails/test_guardrails.py

	modified:   tests/model_explainability/guardrails/test_guardrails.py

	modified:   tests/model_explainability/guardrails/test_guardrails.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>