test(lmeval): add GPU integration tests with vLLM runtime and fix accelerator typo #1275

Open
ssaleem-rh wants to merge 9 commits into opendatahub-io:main from ssaleem-rh:lmeval_gpu

Conversation


@ssaleem-rh ssaleem-rh commented Mar 23, 2026

Pull Request

Summary

Introduces GPU-based LMEval integration testing and fixes a typo in the accelerator environment variable name.

  • Added test_lmeval_gpu to validate LMEval functionality on GPU-backed model deployments using vLLM
  • Introduced GPU-specific fixtures:
    • ServingRuntime
    • InferenceService
    • LMEvalJob
    • Pod setup for evaluation
  • Added wait_for_vllm_model_ready utility to ensure model readiness
  • Relocated skip_if_no_supported_accelerator_type fixture to tests/conftest.py for reuse across test modules
  • Tests are skipped automatically when no supported accelerator is available
  • Fixed typo in environment variable and related error message:
    • SUPPORTED_ACCLERATOR_TYPE → SUPPORTED_ACCELERATOR_TYPE
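The wait_for_vllm_model_ready utility described above can be sketched as a log-polling loop. This is a minimal sketch, not the project's implementation: the readiness marker strings are assumptions, and get_logs stands in for the predictor pod's log accessor.

```python
import time

# Assumed readiness markers; the real ones live in
# tests/model_explainability/lm_eval/utils.py.
READINESS_MARKERS = ("Application startup complete", "Uvicorn running on")


def wait_for_vllm_model_ready(get_logs, max_wait_time=600.0, poll_interval=5.0):
    """Poll a log source until a vLLM readiness marker appears, else time out.

    get_logs stands in for predictor_pod.log(); it returns the pod's log text.
    """
    deadline = time.monotonic() + max_wait_time
    while time.monotonic() < deadline:
        logs = get_logs()
        if any(marker in logs for marker in READINESS_MARKERS):
            return True
        time.sleep(poll_interval)
    raise TimeoutError(f"vLLM model failed to load within {max_wait_time} seconds")
```

The real utility additionally retries transient API errors and surfaces pod diagnostics on failure.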

Related Issues

How it has been tested

  • Locally
  • Jenkins

Test Results:

Successfully executed test_lmeval_gpu on a GPU-enabled cluster


Additional Requirements

  • If this PR introduces a new test image, did you create a PR to mirror it in disconnected environment?
  • If this PR introduces new marker(s)/adds a new component, was relevant ticket created to update relevant Jenkins job?

Summary by CodeRabbit

  • New Features

    • GPU-accelerated model evaluation with KServe inference service integration and multi-vendor GPU support (NVIDIA, AMD, Gaudi)
    • Automatic model readiness polling for GPU workflows before evaluation runs
  • Bug Fixes

    • Fixed environment variable name for accelerator-type configuration (SUPPORTED_ACCELERATOR_TYPE)
  • Tests

    • Added GPU-specific evaluation tests and a session-level skip when no supported accelerator is available; removed a duplicate prior fixture

…CELERATOR_TYPE

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Shehan Saleem <ssaleem@redhat.com>
Add test_lmeval_gpu to verify LMEval works with GPU-backed
model deployments via vLLM runtime. Includes:
- New test for GPU model evaluation with SmolLM-1.7B
- wait_for_vllm_model_ready utility for model readiness checks
- GPU-specific fixtures: ServingRuntime, InferenceService, LMEvalJob, and pod; skip when no supported accelerator

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Shehan Saleem <ssaleem@redhat.com>
…TED_ACCLERATOR_TYPE → SUPPORTED_ACCELERATOR_TYPE in runtime option.

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Shehan Saleem <ssaleem@redhat.com>
- Relocate skip_if_no_supported_accelerator_type fixture to tests/conftest.py for reuse across test modules
- Introduce ACCELERATOR_IDENTIFIER constant and update imports
- Relax type annotation in lmeval_vllm_inference_service to str | None

Signed-off-by: ssaleem-rh <ssaleem@redhat.com>

Signed-off-by: Shehan Saleem <ssaleem@redhat.com>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Shehan Saleem <ssaleem@redhat.com>
@github-actions

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, add /wip in a comment; to remove the label, comment /wip cancel.
  • To block merging of the PR, add /hold in a comment; to unblock merging, comment /hold cancel.
  • To mark a PR as approved, add /lgtm in a comment; to remove approval, comment /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark the PR as verified, comment /verified; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name>. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, add /build-push-pr-image in a comment. This creates an image tagged
    pr-<pr_number> in the quay repository; the tag is deleted when the PR is merged or closed.
Supported labels

{'/cherry-pick', '/build-push-pr-image', '/wip', '/hold', '/lgtm', '/verified'}

Previously removed as unnecessary, but required to suppress BLE001 warning.

Signed-off-by: Shehan Saleem <ssaleem@redhat.com>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
@coderabbitai
Contributor

coderabbitai bot commented Mar 23, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 38ee1859-c173-49fd-b15d-de865f7a43ce

📥 Commits

Reviewing files that changed from the base of the PR and between 6205fa6 and feda520.

📒 Files selected for processing (2)
  • tests/model_explainability/lm_eval/utils.py
  • utilities/exceptions.py
✅ Files skipped from review due to trivial changes (1)
  • utilities/exceptions.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/model_explainability/lm_eval/utils.py

📝 Walkthrough

Walkthrough

Corrects a misspelled pytest env-var default, centralizes a session skip fixture, adds GPU LMEval support (accelerator identifiers, vLLM ServingRuntime/InferenceService/LMEvalJob fixtures, pod readiness waiter, GPU test), and removes a duplicate skip fixture. Attention: HF-related env vars and PVC/service provisioning may expose secrets or data (CWE-200); verify secret handling.

Changes

Cohort / File(s) Summary
Root pytest config
conftest.py, tests/conftest.py
Fixed the default env var for --supported-accelerator-type from SUPPORTED_ACCLERATOR_TYPE to SUPPORTED_ACCELERATOR_TYPE; added a session-scoped skip_if_no_supported_accelerator_type fixture in tests/conftest.py.
GPU evaluation fixtures
tests/model_explainability/lm_eval/conftest.py
Added fixtures to provision a vLLM ServingRuntime and InferenceService (selects GPU resource key via ACCELERATOR_IDENTIFIER), create an LMEvalJob targeting /v1/completions, and yield the job pod; configures CPU/memory/GPU requests/limits, vLLM args, HF env vars, skips unsupported accelerators.
Accelerator constants
tests/model_explainability/lm_eval/constants.py
Added ACCELERATOR_IDENTIFIER: dict[str, str] mapping "nvidia"→"nvidia.com/gpu", "amd"→"amd.com/gpu", "gaudi"→"habana.ai/gaudi".
Tests and readiness utility
tests/model_explainability/lm_eval/test_lm_eval.py, tests/model_explainability/lm_eval/utils.py
Added GPU-marked test test_lmeval_gpu that awaits vLLM readiness then validates job pod/logs; added wait_for_vllm_model_ready() which polls predictor pod logs for readiness markers with retry/timeout and surfaces diagnostics on failure.
Exception types
utilities/exceptions.py
Added new exception ResourceNotFoundError used by utilities.
Fixture consolidation
tests/model_serving/model_runtime/conftest.py
Removed duplicate session-scoped skip_if_no_supported_accelerator_type fixture (now centralized in tests/conftest.py).
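The ACCELERATOR_IDENTIFIER mapping above lets the fixtures select the vendor-specific Kubernetes resource key at runtime. A sketch of how such a mapping can drive a pod's GPU requests/limits — gpu_resource_spec is a hypothetical helper for illustration, not code from the PR:

```python
# Mapping as described in tests/model_explainability/lm_eval/constants.py.
ACCELERATOR_IDENTIFIER: dict[str, str] = {
    "nvidia": "nvidia.com/gpu",
    "amd": "amd.com/gpu",
    "gaudi": "habana.ai/gaudi",
}


def gpu_resource_spec(accelerator_type: str, count: int = 1) -> dict:
    """Hypothetical helper: build pod requests/limits keyed by the vendor resource name."""
    key = ACCELERATOR_IDENTIFIER[accelerator_type.lower()]
    return {"requests": {key: str(count)}, "limits": {key: str(count)}}
```

Kubernetes extended resources require the vendor key to match the device plugin, which is why the fixtures select it per accelerator rather than hardcoding nvidia.com/gpu.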

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2
✅ Passed checks (2 passed)
  • Title check — ✅ Passed: The title accurately summarizes the main changes: adding GPU integration tests for LMEval with vLLM and fixing the accelerator environment variable typo.
  • Description check — ✅ Passed: The description addresses all key template sections with substantive content: the Summary details the changes, Related Issues includes the JIRA ticket, the testing checkbox is marked with evidence, and the requirements section is completed.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/conftest.py`:
- Around line 990-994: The fixture skip_if_no_supported_accelerator_type
currently only skips when supported_accelerator_type is missing; update it to
also skip when the provided supported_accelerator_type is not a GPU-type (e.g.,
values like "spyre" or "cpu_x86"). In the skip_if_no_supported_accelerator_type
fixture, change the condition to check that supported_accelerator_type is truthy
AND matches an expected GPU indicator (for example contains "gpu" or "cuda" or
is in a set of known GPU identifiers) and call pytest.skip with a clear message
when it does not; keep the parameter name supported_accelerator_type and the
pytest.skip call but make the value check stricter so non-GPU strings are
skipped.
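A sketch of the stricter check the prompt asks for — the known-GPU set is an assumption derived from the ACCELERATOR_IDENTIFIER mapping, and supported_accelerator_type is expected to come from an existing fixture:

```python
import pytest

# Assumed set of GPU identifiers, mirroring ACCELERATOR_IDENTIFIER;
# non-GPU values such as "spyre" or "cpu_x86" must not pass.
KNOWN_GPU_TYPES = {"nvidia", "amd", "gaudi"}


def is_supported_gpu(accelerator_type):
    """Return True only for a known GPU accelerator identifier."""
    return bool(accelerator_type) and accelerator_type.lower() in KNOWN_GPU_TYPES


@pytest.fixture(scope="session")
def skip_if_no_supported_accelerator_type(supported_accelerator_type):
    # Skip both when the value is missing and when it names a non-GPU accelerator.
    if not is_supported_gpu(supported_accelerator_type):
        pytest.skip(f"Unsupported or missing accelerator type: {supported_accelerator_type!r}")
```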

In `@tests/model_explainability/lm_eval/conftest.py`:
- Around line 582-597: The fixture lmeval_vllm_serving_runtime currently
hardcodes RuntimeTemplates.VLLM_CUDA; change it to choose the runtime template
based on the cluster/accelerator type (e.g., detect NVIDIA vs AMD/ROCm vs Gaudi
from whatever cluster/fixture value is available) by creating a small mapping
(accelerator_type -> runtime_template) and use that variable instead of
RuntimeTemplates.VLLM_CUDA when calling ServingRuntimeFromTemplate; ensure the
chosen template covers ROCm and Gaudi variants (e.g., VLLM_ROCM, VLLM_GAUDI or
their project equivalents) and keep other args (runtime_image,
support_tgis_open_ai_endpoints, deployment_type) unchanged so the correct
ServingRuntime backend is provisioned for each accelerator.

In `@tests/model_explainability/lm_eval/test_lm_eval.py`:
- Around line 203-214: The GPU test test_lmeval_gpu is executing online model
downloads and must be skipped on disconnected clusters; add the pytest marker
`@pytest.mark.skip_on_disconnected` above the test_lmeval_gpu definition
(alongside the existing `@pytest.mark.gpu` and `@pytest.mark.parametrize`
decorators) to guard this GPU path so it will be skipped when Hub egress is not
available.

In `@tests/model_explainability/lm_eval/utils.py`:
- Around line 160-182: The generic except Exception around predictor_pod.log()
in the waiting loop and the final logs retrieval should be narrowed to only
transient/k8s-unavailable errors; change both handlers to catch
kubernetes.client.exceptions.ApiException and the project ResourceNotFoundError
(or whatever local pod-missing exception is used) and let any other exception
propagate so permanent errors fail fast; also add the necessary imports for
ApiException and ResourceNotFoundError and keep the same logging behavior for
the caught exceptions while re-raising unexpected exceptions instead of
swallowing them and retrying until max_wait_time; target the
predictor_pod.log(...) calls and the UnexpectedFailureError raise for locating
the changes.
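The narrowing described above might look like the following sketch. The exception classes here are stand-ins (the real ones would be kubernetes.client.exceptions.ApiException and the project's ResourceNotFoundError), so only the control flow is illustrative:

```python
class ApiException(Exception):
    """Stand-in for kubernetes.client.exceptions.ApiException."""


class ResourceNotFoundError(Exception):
    """Stand-in for the project's pod-missing exception."""


def read_pod_logs(get_logs):
    """Return logs, treating only transient API/pod-missing errors as retryable."""
    try:
        return get_logs()
    except (ApiException, ResourceNotFoundError) as exc:
        # Transient: pod not created yet or API momentarily unavailable; retry later.
        print(f"retryable error while reading logs: {exc!r}")
        return ""
    # Any other exception propagates, so permanent errors fail fast
    # instead of being swallowed until max_wait_time.
```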

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 2db22a44-3620-4fa0-8bb3-715d65beb111

📥 Commits

Reviewing files that changed from the base of the PR and between 90b3ed2 and 368ce13.

📒 Files selected for processing (7)
  • conftest.py
  • tests/conftest.py
  • tests/model_explainability/lm_eval/conftest.py
  • tests/model_explainability/lm_eval/constants.py
  • tests/model_explainability/lm_eval/test_lm_eval.py
  • tests/model_explainability/lm_eval/utils.py
  • tests/model_serving/model_runtime/conftest.py
💤 Files with no reviewable changes (1)
  • tests/model_serving/model_runtime/conftest.py

Add support for NVIDIA, AMD, and Gaudi in LMEval GPU tests.
Update the skip_if_no_supported_accelerator_type fixture to validate supported GPU types.
Improve exception handling in the vLLM readiness check and mark tests with skip_on_disconnected.

Signed-off-by: Shehan Saleem <ssaleem@redhat.com>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/model_explainability/lm_eval/conftest.py (2)

675-678: Instantiating Service only to retrieve its name is unnecessary.

The service name follows a predictable pattern ({isvc_name}-predictor). Creating a Service object solely to access .name adds overhead and obscures intent.

Simplify to direct string construction
-    model_service = Service(
-        name=f"{lmeval_vllm_inference_service.name}-predictor",
-        namespace=lmeval_vllm_inference_service.namespace,
-    )
+    model_service_name = f"{lmeval_vllm_inference_service.name}-predictor"

Then at line 696:

-                "value": f"http://{model_service.name}.{model_namespace.name}.svc.cluster.local:80/v1/completions",
+                "value": f"http://{model_service_name}.{model_namespace.name}.svc.cluster.local:80/v1/completions",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_explainability/lm_eval/conftest.py` around lines 675 - 678,
Replace the unnecessary Service instantiation used only to obtain a name:
instead of creating model_service = Service(...) just build the predictor name
string directly from lmeval_vllm_inference_service.name (e.g.
f"{lmeval_vllm_inference_service.name}-predictor") and use that string where
model_service.name was referenced (see model_service variable and its later
usage around the predictor creation at line ~696); remove the unused Service
object to simplify intent and eliminate overhead.

621-626: Extract model_path to a module-level constant to avoid duplication.

"HuggingFaceTB/SmolLM-1.7B" is defined here and again at line 674 in lmevaljob_gpu. If the model changes, both locations must be updated.

Suggested refactor

Add near the top of the file with other constants:

SMOLLM_MODEL_PATH: str = "HuggingFaceTB/SmolLM-1.7B"

Then reference SMOLLM_MODEL_PATH in both fixtures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_explainability/lm_eval/conftest.py` around lines 621 - 626,
Extract the literal "HuggingFaceTB/SmolLM-1.7B" into a module-level constant
(e.g., SMOLLM_MODEL_PATH) near the top with other constants, then replace uses
of the literal in the fixtures (referenced by model_path in current fixture and
in lmevaljob_gpu) to reference SMOLLM_MODEL_PATH instead so both fixtures share
the single source of truth.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/model_explainability/lm_eval/conftest.py`:
- Around line 595-599: The code silently defaults supported_accelerator_type to
"nvidia" which can provision CUDA incorrectly; change the logic so it no longer
defaults — if supported_accelerator_type is None either (a) declare this fixture
dependent on skip_if_no_supported_accelerator_type to guarantee a value, or (b)
immediately call pytest.skip (or raise) when supported_accelerator_type is None
before computing accelerator_type/template_name; update the block that currently
sets accelerator_type, template_name and the subsequent skip to first check
supported_accelerator_type and skip with a clear message rather than falling
back to "nvidia".

---

Nitpick comments:
In `@tests/model_explainability/lm_eval/conftest.py`:
- Around line 675-678: Replace the unnecessary Service instantiation used only
to obtain a name: instead of creating model_service = Service(...) just build
the predictor name string directly from lmeval_vllm_inference_service.name (e.g.
f"{lmeval_vllm_inference_service.name}-predictor") and use that string where
model_service.name was referenced (see model_service variable and its later
usage around the predictor creation at line ~696); remove the unused Service
object to simplify intent and eliminate overhead.
- Around line 621-626: Extract the literal "HuggingFaceTB/SmolLM-1.7B" into a
module-level constant (e.g., SMOLLM_MODEL_PATH) near the top with other
constants, then replace uses of the literal in the fixtures (referenced by
model_path in current fixture and in lmevaljob_gpu) to reference
SMOLLM_MODEL_PATH instead so both fixtures share the single source of truth.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: aa15b6ad-be6f-4a1f-a355-46d489d4cfc3

📥 Commits

Reviewing files that changed from the base of the PR and between f5a7677 and 6205fa6.

📒 Files selected for processing (4)
  • tests/conftest.py
  • tests/model_explainability/lm_eval/conftest.py
  • tests/model_explainability/lm_eval/test_lm_eval.py
  • tests/model_explainability/lm_eval/utils.py
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/conftest.py
  • tests/model_explainability/lm_eval/test_lm_eval.py
  • tests/model_explainability/lm_eval/utils.py

Comment on lines +595 to +599
accelerator_type = supported_accelerator_type.lower() if supported_accelerator_type else "nvidia"
template_name = accelerator_to_template.get(accelerator_type)

if not template_name:
    pytest.skip(f"Unsupported accelerator type for vLLM: {supported_accelerator_type}")
Contributor

@coderabbitai coderabbitai bot Mar 30, 2026


⚠️ Potential issue | 🟡 Minor

Silent fallback to "nvidia" when supported_accelerator_type is None can cause confusing failures.

Line 595 defaults to "nvidia" when supported_accelerator_type is None, but the CLI option (per root conftest.py) returns None when the environment variable is unset. If a test runs on a non-NVIDIA cluster without the accelerator type configured, this fixture will provision a CUDA runtime and fail with a misleading error instead of skipping gracefully.

Consider either:

  1. Requiring the value explicitly (skip/error when None)
  2. Relying on the skip_if_no_supported_accelerator_type fixture as a dependency to guarantee the value is never None here
Option 1: Fail fast when accelerator type is missing
-    accelerator_type = supported_accelerator_type.lower() if supported_accelerator_type else "nvidia"
-    template_name = accelerator_to_template.get(accelerator_type)
-
-    if not template_name:
-        pytest.skip(f"Unsupported accelerator type for vLLM: {supported_accelerator_type}")
+    if not supported_accelerator_type:
+        pytest.skip("supported_accelerator_type is required for GPU-backed vLLM tests")
+
+    accelerator_type = supported_accelerator_type.lower()
+    template_name = accelerator_to_template.get(accelerator_type)
+
+    if not template_name:
+        pytest.skip(f"Unsupported accelerator type for vLLM: {accelerator_type}")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change — before:

accelerator_type = supported_accelerator_type.lower() if supported_accelerator_type else "nvidia"
template_name = accelerator_to_template.get(accelerator_type)

if not template_name:
    pytest.skip(f"Unsupported accelerator type for vLLM: {supported_accelerator_type}")

After:

if not supported_accelerator_type:
    pytest.skip("supported_accelerator_type is required for GPU-backed vLLM tests")

accelerator_type = supported_accelerator_type.lower()
template_name = accelerator_to_template.get(accelerator_type)

if not template_name:
    pytest.skip(f"Unsupported accelerator type for vLLM: {accelerator_type}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_explainability/lm_eval/conftest.py` around lines 595 - 599, The
code silently defaults supported_accelerator_type to "nvidia" which can
provision CUDA incorrectly; change the logic so it no longer defaults — if
supported_accelerator_type is None either (a) declare this fixture dependent on
skip_if_no_supported_accelerator_type to guarantee a value, or (b) immediately
call pytest.skip (or raise) when supported_accelerator_type is None before
computing accelerator_type/template_name; update the block that currently sets
accelerator_type, template_name and the subsequent skip to first check
supported_accelerator_type and skip with a clear message rather than falling
back to "nvidia".

Collaborator


@ssaleem-rh this seems like a legit comment. Can you please address it?

Contributor


Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

Collaborator


I still see no changes here and find this comment resolved again. @kpunwatk can you please work with @ssaleem-rh here?

@ssaleem-rh ssaleem-rh requested a review from sheltoncyril March 30, 2026 13:28
kpunwatk
kpunwatk previously approved these changes Mar 30, 2026
Contributor

@kpunwatk kpunwatk left a comment


/lgtm

Comment on lines +595 to +599
accelerator_type = supported_accelerator_type.lower() if supported_accelerator_type else "nvidia"
template_name = accelerator_to_template.get(accelerator_type)

if not template_name:
    pytest.skip(f"Unsupported accelerator type for vLLM: {supported_accelerator_type}")
Collaborator


@ssaleem-rh this seems like a legit comment. Can you please address it?

Replace manual timeout and stabilization loop with TimeoutSampler.
Add specific exceptions (ResourceNotFoundError, UnexpectedResourceCountError).
Use component=predictor label selector for pod filtering.
Use collect_pod_information for better logging.

Signed-off-by: Shehan Saleem <ssaleem@redhat.com>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
@ssaleem-rh
Author

Addressed all review comments. Please take another look.

Comment on lines +595 to +599
accelerator_type = supported_accelerator_type.lower() if supported_accelerator_type else "nvidia"
template_name = accelerator_to_template.get(accelerator_type)

if not template_name:
    pytest.skip(f"Unsupported accelerator type for vLLM: {supported_accelerator_type}")
Collaborator


I still see no changes here and find this comment resolved again. @kpunwatk can you please work with @ssaleem-rh here?

except TimeoutExpiredError as e:
    LOGGER.error(f"vLLM pod failed to start within {max_wait_time} seconds")
    collect_pod_information(pod=predictor_pod)
    raise UnexpectedFailureError(f"vLLM model failed to load within {max_wait_time} seconds") from e
Collaborator


Please re-raise TimeoutExpiredError
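A bare raise inside the handler achieves that — diagnostics are collected, then the original TimeoutExpiredError propagates with its traceback instead of being wrapped in a new exception type. All names below are stand-ins for the project's helpers, sketching the pattern only:

```python
class TimeoutExpiredError(Exception):
    """Stand-in for the TimeoutSampler's timeout exception."""


def collect_pod_information(pod_name):
    """Stand-in for the real diagnostics helper."""
    print(f"collecting diagnostics for pod {pod_name}")


def await_ready(sampler, pod_name):
    """Drain the sampler; on timeout, log diagnostics and re-raise the original error."""
    try:
        for sample in sampler:
            if sample:
                return sample
    except TimeoutExpiredError:
        collect_pod_information(pod_name)
        raise  # bare raise preserves the original exception and its traceback
```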

"""Unexpected value found"""


class ResourceNotFoundError(Exception):
Collaborator


Please use from kubernetes.dynamic.exceptions import ResourceNotFoundError
