
test(workbenches): add a test to validate ImageStreams#1126

Merged
dbasunag merged 2 commits into opendatahub-io:main from jstourac:validateImageStreams
Feb 25, 2026

Conversation

@jstourac
Member

@jstourac jstourac commented Feb 18, 2026

Description

Introduces a new test to check the state of ImageStream resources related to workbench images.

Note: after we merge this, we should backport it to 3.3 at least.

How Has This Been Tested?

uv run pytest tests/workbenches/test_imagestream_health.py

Example of a problematic ImageStream:

status:
  dockerImageRepository: ''
  tags:
    - tag: '2023.2'
      items:
        - created: '2026-02-16T09:39:52Z'
          dockerImageReference: 'quay.io/modh/codeserver@sha256:b1a048f3711149e36a89e0eda1a5601130fb536ecc0aabae42ab6e4d26977354'
          image: 'sha256:b1a048f3711149e36a89e0eda1a5601130fb536ecc0aabae42ab6e4d26977354'
          generation: 2
    - tag: '2024.1'
      items:
        - created: '2026-02-16T09:39:52Z'
          dockerImageReference: 'quay.io/modh/codeserver@sha256:1fd51b0e8a14995f1f7273a4b0b40f6e7e27e225ab179959747846e54079d61e'
          image: 'sha256:1fd51b0e8a14995f1f7273a4b0b40f6e7e27e225ab179959747846e54079d61e'
          generation: 2
    - tag: '2024.2'
      items:
        - created: '2026-02-16T09:39:52Z'
          dockerImageReference: 'quay.io/modh/codeserver@sha256:92f2a10dde5c96b29324426b4325401e8f4a0d257e439927172d5fe909289c44'
          image: 'sha256:92f2a10dde5c96b29324426b4325401e8f4a0d257e439927172d5fe909289c44'
          generation: 2
    - tag: '2025.1'
      items: null
      conditions:
        - type: ImportSuccess
          status: 'False'
          lastTransitionTime: '2026-02-16T09:39:52Z'
          reason: InternalError
          message: 'Internal error occurred: [<redacted>:8443/rhoai/odh-workbench-codeserver-datascience-cpu-py311-rhel9@sha256:755f8dacf495f6abb29233edb422ca473ba82cc23370d4fcbaa4f938e90a9c25: name unknown: repository not found, registry.redhat.io/rhoai/odh-workbench-codeserver-datascience-cpu-py311-rhel9@sha256:755f8dacf495f6abb29233edb422ca473ba82cc23370d4fcbaa4f938e90a9c25: Get "https://registry.redhat.io/v2/": context deadline exceeded]'
          generation: 2
    - tag: '2025.2'
      items: null
      conditions:
        - type: ImportSuccess
          status: 'False'
          lastTransitionTime: '2026-02-16T09:39:52Z'
          reason: InternalError
          message: 'Internal error occurred: [dockerimage.image.openshift.io "<redacted>:8443/rhoai/odh-workbench-codeserver-datascience-cpu-py312-rhel9@sha256:4fedf0efff3b52f398665ec9bd22546f1517155bf4653a90f911e0c8700fba4f" not found, registry.redhat.io/rhoai/odh-workbench-codeserver-datascience-cpu-py312-rhel9@sha256:4fedf0efff3b52f398665ec9bd22546f1517155bf4653a90f911e0c8700fba4f: Get "https://registry.redhat.io/v2/": context deadline exceeded]'
          generation: 2
    - tag: '3.4'
      items:
        - created: '2026-02-16T09:39:52Z'
          dockerImageReference: 'registry.redhat.io/rhoai/odh-workbench-codeserver-datascience-cpu-py312-rhel9@sha256:4c104ebad514a1683e1da6f9ed2f3547de741c0ae163811a0a2dcbdec53e4987'
          image: 'sha256:4c104ebad514a1683e1da6f9ed2f3547de741c0ae163811a0a2dcbdec53e4987'
          generation: 2
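
The failure mode above (a tag with `items: null` and an `ImportSuccess=False` condition) can be detected with a per-tag check like the following sketch. This is a hypothetical helper for illustration only; the names and exact logic differ from what `tests/workbenches/test_imagestream_health.py` actually implements.

```python
def validate_tag_health(tag: dict) -> list[str]:
    """Return a list of error strings for one entry of an ImageStream's status.tags.

    Hypothetical sketch of the kind of per-tag check the new test performs.
    """
    errors = []
    name = tag.get("tag", "<unknown>")
    # A healthy tag has at least one resolved item with an image reference and digest.
    items = tag.get("items") or []
    if not items:
        errors.append(f"tag {name}: no imported items")
    for item in items:
        if not item.get("dockerImageReference"):
            errors.append(f"tag {name}: missing dockerImageReference")
        if not item.get("image"):
            errors.append(f"tag {name}: missing image digest")
    # A failed import surfaces as an ImportSuccess condition with status 'False'.
    for cond in tag.get("conditions") or []:
        if cond.get("type") == "ImportSuccess" and cond.get("status") == "False":
            errors.append(
                f"tag {name}: import failed ({cond.get('reason')}): {cond.get('message')}"
            )
    return errors


# Shapes taken from the example status above (digests shortened).
healthy = {
    "tag": "3.4",
    "items": [{"dockerImageReference": "registry.example/img@sha256:abc", "image": "sha256:abc"}],
}
broken = {
    "tag": "2025.1",
    "items": None,
    "conditions": [
        {"type": "ImportSuccess", "status": "False", "reason": "InternalError", "message": "..."}
    ],
}
print(validate_tag_health(healthy))  # []
print(validate_tag_health(broken))
```

Run against the example status above, the `2025.1` and `2025.2` tags would each produce two errors (no items, failed import), while `3.4` passes.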

Example success call:

---------------------------------------------------------------------------------------------------------------- test_workbench_imagestreams_health[runtime_imagestreams] ----------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------ SETUP ------------------------------------------------------------------------------------------------------------------------------------------
2026-02-19T17:21:14.276184 conftest INFO Executing function fixture: label_selector
2026-02-19T17:21:14.276361 conftest INFO Executing function fixture: expected_imagestream_count
------------------------------------------------------------------------------------------------------------------------------------------ CALL ------------------------------------------------------------------------------------------------------------------------------------------
2026-02-19T17:21:14.389101 tests.workbenches.test_imagestream_health INFO Checking ImageStreams for label selector 'opendatahub.io/runtime-image=true': expected_count=7, actual_count=7
2026-02-19T17:21:14.389296 tests.workbenches.test_imagestream_health INFO ImageStreams matched for 'opendatahub.io/runtime-image=true': runtime-datascience, runtime-minimal, runtime-pytorch, runtime-pytorch-llmcompressor, runtime-rocm-pytorch, runtime-rocm-tensorflow, runtime-tensorflow
2026-02-19T17:21:14.501450 tests.workbenches.test_imagestream_health INFO Validating ImageStream runtime-datascience (label selector: opendatahub.io/runtime-image=true)
2026-02-19T17:21:14.501676 tests.workbenches.test_imagestream_health INFO ImageStream runtime-datascience tag coverage: spec_tags=['datascience'], status_tags=['datascience']
2026-02-19T17:21:14.501744 tests.workbenches.test_imagestream_health INFO Checked ImageStream tag runtime-datascience:datascience (items_count=1, import_success=N/A)
2026-02-19T17:21:14.613141 tests.workbenches.test_imagestream_health INFO Validating ImageStream runtime-minimal (label selector: opendatahub.io/runtime-image=true)
2026-02-19T17:21:14.613274 tests.workbenches.test_imagestream_health INFO ImageStream runtime-minimal tag coverage: spec_tags=['minimal'], status_tags=['minimal']
2026-02-19T17:21:14.613319 tests.workbenches.test_imagestream_health INFO Checked ImageStream tag runtime-minimal:minimal (items_count=1, import_success=N/A)
2026-02-19T17:21:14.724429 tests.workbenches.test_imagestream_health INFO Validating ImageStream runtime-pytorch (label selector: opendatahub.io/runtime-image=true)
2026-02-19T17:21:14.724544 tests.workbenches.test_imagestream_health INFO ImageStream runtime-pytorch tag coverage: spec_tags=['pytorch'], status_tags=['pytorch']
2026-02-19T17:21:14.724590 tests.workbenches.test_imagestream_health INFO Checked ImageStream tag runtime-pytorch:pytorch (items_count=1, import_success=N/A)
2026-02-19T17:21:14.835460 tests.workbenches.test_imagestream_health INFO Validating ImageStream runtime-pytorch-llmcompressor (label selector: opendatahub.io/runtime-image=true)
2026-02-19T17:21:14.835576 tests.workbenches.test_imagestream_health INFO ImageStream runtime-pytorch-llmcompressor tag coverage: spec_tags=['pytorch-llmcompressor'], status_tags=['pytorch-llmcompressor']
2026-02-19T17:21:14.835621 tests.workbenches.test_imagestream_health INFO Checked ImageStream tag runtime-pytorch-llmcompressor:pytorch-llmcompressor (items_count=1, import_success=N/A)
2026-02-19T17:21:14.946648 tests.workbenches.test_imagestream_health INFO Validating ImageStream runtime-rocm-pytorch (label selector: opendatahub.io/runtime-image=true)
2026-02-19T17:21:14.946857 tests.workbenches.test_imagestream_health INFO ImageStream runtime-rocm-pytorch tag coverage: spec_tags=['rocm-pytorch'], status_tags=['rocm-pytorch']
2026-02-19T17:21:14.946964 tests.workbenches.test_imagestream_health INFO Checked ImageStream tag runtime-rocm-pytorch:rocm-pytorch (items_count=1, import_success=N/A)
2026-02-19T17:21:15.058609 tests.workbenches.test_imagestream_health INFO Validating ImageStream runtime-rocm-tensorflow (label selector: opendatahub.io/runtime-image=true)
2026-02-19T17:21:15.058839 tests.workbenches.test_imagestream_health INFO ImageStream runtime-rocm-tensorflow tag coverage: spec_tags=['rocm-tensorflow'], status_tags=['rocm-tensorflow']
2026-02-19T17:21:15.058959 tests.workbenches.test_imagestream_health INFO Checked ImageStream tag runtime-rocm-tensorflow:rocm-tensorflow (items_count=1, import_success=N/A)
2026-02-19T17:21:15.171177 tests.workbenches.test_imagestream_health INFO Validating ImageStream runtime-tensorflow (label selector: opendatahub.io/runtime-image=true)
2026-02-19T17:21:15.171387 tests.workbenches.test_imagestream_health INFO ImageStream runtime-tensorflow tag coverage: spec_tags=['tensorflow'], status_tags=['tensorflow']
2026-02-19T17:21:15.171437 tests.workbenches.test_imagestream_health INFO Checked ImageStream tag runtime-tensorflow:tensorflow (items_count=1, import_success=N/A)
.

TEST: test_workbench_imagestreams_health[runtime_imagestreams] STATUS: PASSED

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that the changes work.

Summary by CodeRabbit

  • Tests
    • Added health-validation tests for workbench ImageStream resources: fetches streams by label, verifies expected counts, compares declared tags vs reported status, performs per-tag health checks for image references and import success, and aggregates detailed errors into a single test failure for easier diagnosis.
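
The flow described in that summary (match streams by label, verify the count, then aggregate all errors into a single failure) can be sketched as pure logic. The function and data shape below are hypothetical and simplified; the PR's actual module works against live ImageStream objects from a DynamicClient and fails the test with the joined error list.

```python
def check_imagestreams(streams: dict, label_selector: str, expected_count: int) -> list[str]:
    """Hypothetical sketch: verify the number of streams matched by a label,
    then collect per-stream/per-tag problems into one aggregated error list.

    `streams` maps stream name -> {tag name -> list of imported items (or None)}.
    """
    errors = []
    if len(streams) != expected_count:
        errors.append(
            f"{label_selector}: expected {expected_count} ImageStreams, found {len(streams)}"
        )
    for name, tags in streams.items():
        for tag_name, items in tags.items():
            if not items:
                errors.append(f"{name}:{tag_name}: no imported items")
    # The real test joins these with newlines and fails once, so a single run
    # reports every broken stream/tag instead of stopping at the first.
    return errors


ok = {"runtime-minimal": {"minimal": [{"image": "sha256:abc"}]}}
print(check_imagestreams(ok, "opendatahub.io/runtime-image=true", 1))  # []
```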

@github-actions

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, add /wip in a comment. To remove it, comment /wip cancel.
  • To block merging of a PR, add /hold in a comment. To un-block merging, comment /hold cancel.
  • To mark a PR as approved, add /lgtm in a comment. To remove, add /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name>. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, add /build-push-pr-image in a comment. This creates an image with tag
    pr-<pr_number> in the quay repository. This image tag is deleted when the PR is merged or closed.
Supported labels

{'/wip', '/verified', '/build-push-pr-image', '/cherry-pick', '/lgtm', '/hold'}

@jstourac jstourac changed the title from "Validate image streams" to "test(workbenches): add a test to validate ImageStreams" on Feb 19, 2026
@jstourac jstourac force-pushed the validateImageStreams branch 2 times, most recently from c5b2157 to b3098a7, on February 19, 2026 16:22
@jstourac jstourac marked this pull request as ready for review February 19, 2026 16:22
@jstourac jstourac enabled auto-merge (rebase) February 19, 2026 16:22
@coderabbitai
Contributor

coderabbitai Bot commented Feb 19, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Adds a new pytest module that implements health validation for OpenShift ImageStream resources: per-tag checks for resolved image references and ImportSuccess conditions, comparison of spec.tags vs status.tags, aggregation of tag- and stream-level errors, and a parameterized test that lists ImageStreams by label and fails with combined diagnostics.

Changes

Cohort / File(s) Summary
New ImageStream health validation test module
tests/workbenches/test_imagestream_health.py
Adds a test module with an ImageStream resource class and three functions: _validate_imagestream_tag_health() to validate per-tag items, dockerImageReference, image and ImportSuccess conditions; _validate_imagestreams_with_label() to check expected count, compare spec.tags vs status.tags, iterate tags and aggregate errors; and test_workbench_imagestreams_health() which is a parameterized pytest that lists ImageStreams via a DynamicClient and fails with a combined error message when issues are found.
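
One part of that walkthrough, the comparison of spec.tags against status.tags, can be sketched as a small set-difference check. This is a hypothetical helper, not the module's actual code; the real `_validate_imagestreams_with_label()` also handles counts and per-tag health.

```python
def compare_spec_status_tags(spec_tags: list[str], status_tags: list[str]) -> list[str]:
    """Hypothetical sketch of the spec.tags vs status.tags comparison:
    every tag declared in spec should be reported in status, and vice versa."""
    errors = []
    missing = sorted(set(spec_tags) - set(status_tags))
    if missing:
        errors.append(f"tags declared in spec but absent from status: {missing}")
    extra = sorted(set(status_tags) - set(spec_tags))
    if extra:
        errors.append(f"tags in status not declared in spec: {extra}")
    return errors


# Tag names borrowed from the codeserver example earlier in this PR.
print(compare_spec_status_tags(["2023.2", "2024.1", "2025.1"], ["2023.2", "2024.1"]))
```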

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately and concisely summarizes the main change—adding a test to validate ImageStreams for workbenches.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.
Description check ✅ Passed The PR description includes a clear summary of changes, testing instructions with examples, and demonstrates manual testing with successful and problematic ImageStream examples.



Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
tests/workbenches/test_imagestream_health.py (2)

147-148: Consider using pytest.fail() instead of raw AssertionError.

pytest.fail() is the idiomatic way to fail a test with a message in pytest — it provides better integration with pytest's reporting and avoids the AssertionError appearing in tracebacks as if it were an assertion statement.

Suggested change
     if errors:
-        raise AssertionError("\n".join(errors))
+        pytest.fail("\n".join(errors))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/workbenches/test_imagestream_health.py` around lines 147 - 148, Replace
the explicit raise AssertionError("\n".join(errors)) with
pytest.fail("\n".join(errors)) to use pytest's idiomatic test-failure reporting;
ensure the module imports pytest (add import pytest if not already present) and
keep the error message exactly as "\n".join(errors) so existing messages remain
unchanged.

151-157: Hardcoded expected ImageStream counts are brittle.

The counts 11 and 7 will break whenever a notebook or runtime image is added/removed upstream. Consider either:

  • Making these counts configurable via py_config so different product versions can override them, or
  • Removing the exact-count assertion and relying solely on the per-tag health checks (or using a minimum bound like >= 1).

If exact counts are intentional as a regression guard, add a comment documenting why and where the canonical list lives so future maintainers know what to update.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/workbenches/test_imagestream_health.py` around lines 151 - 157, The
test hardcodes exact ImageStream counts (the pytest param
expected_imagestream_count values 11 and 7) which is brittle; update the
paramization in tests/workbenches/test_imagestream_health.py so the expected
counts come from a configurable source (e.g., a py_config value like
py_config.expected_imagestream_counts[label_selector]) with a sensible default
fallback (or change the assertion that uses expected_imagestream_count to assert
a minimum like >= 1 instead of exact equality); alternatively, if exact counts
must be enforced, add a clear comment next to the
label_selector/expected_imagestream_count entries pointing to the canonical list
location and why they are fixed.

jiridanek
jiridanek previously approved these changes Feb 19, 2026
@jstourac
Member Author

I implemented one proposal from coderabbitai, the other one I don't want to implement now. Also rebased against the latest main.

jiridanek
jiridanek previously approved these changes Feb 23, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
tests/workbenches/test_imagestream_health.py (1)

151-156: Hard-coded expected counts will cause false failures across RHOAI/ODH versions and environments.

11 and 7 are specific to one particular release. A deployment with a different version, a minimal install, or an upgrade that adds/removes an image will fail this smoke test for the wrong reason. The namespace is already sourced from py_config; these counts should follow the same pattern (e.g., py_config["expected_notebook_imagestream_count"]).

♻️ Suggested approach
-@pytest.mark.parametrize(
-    "label_selector, expected_imagestream_count",
-    [
-        pytest.param("opendatahub.io/notebook-image=true", 11, id="notebook_imagestreams"),
-        pytest.param("opendatahub.io/runtime-image=true", 7, id="runtime_imagestreams"),
-    ],
-)
+@pytest.mark.parametrize(
+    "label_selector, expected_imagestream_count",
+    [
+        pytest.param(
+            "opendatahub.io/notebook-image=true",
+            py_config.get("expected_notebook_imagestream_count", 11),
+            id="notebook_imagestreams",
+        ),
+        pytest.param(
+            "opendatahub.io/runtime-image=true",
+            py_config.get("expected_runtime_imagestream_count", 7),
+            id="runtime_imagestreams",
+        ),
+    ],
+)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/workbenches/test_imagestream_health.py` around lines 151 - 156, The
test uses hard-coded expected counts (11 and 7) in the pytest.mark.parametrize
tuple causing brittle failures; update the parameters to read expected counts
from the test config (use py_config keys such as
py_config["expected_notebook_imagestream_count"] and
py_config["expected_runtime_imagestream_count"]) instead of literals, and ensure
the parametrized ids remain descriptive (e.g., "notebook_imagestreams",
"runtime_imagestreams"); modify the pytest.mark.parametrize for
("label_selector, expected_imagestream_count") to reference those py_config
entries so the test adapts to different RHOAI/ODH versions/environments.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/workbenches/test_imagestream_health.py`:
- Around line 163-167: The test docstring in the ImageStream health test uses a
Given-When-Then narrative but lacks the Google-style Args: section required by
the project guideline; update the test's docstring (the triple-quoted string for
the ImageStream health test in tests/workbenches/test_imagestream_health.py) to
follow Google-format by adding an "Args:" section (e.g., "Args: None" if the
test takes no parameters) and, if applicable, document any fixtures/parameters
used by the test so the docstring conforms to the guideline and linter checks.
- Around line 12-16: Replace the locally-defined ImageStream subclass with an
import from ocp_resources (add "from ocp_resources.image_stream import
ImageStream" and remove the local class declaration) so the test uses the
upstream implementation; replace the hard-coded expected counts (the literals 11
and 7 used in the assertions that validate notebook and runtime image counts)
with values obtained from configuration or discovered from the
deployment/fixture instead of fixed constants; and update the test
function/class docstring to include Google-style "Args:" and "Returns:" sections
for full compliance.
- Line 18: Add the missing "workbenches" component marker to the pytest.ini
markers list and apply it to the test by updating the pytestmark in
tests/workbenches/test_imagestream_health.py (the pytestmark variable) so the
test has both the smoke and workbenches markers; modify pytest.ini to include a
"workbenches" entry under markers and change pytestmark in
test_imagestream_health.py to include pytest.mark.workbenches alongside
pytest.mark.smoke.

---

Nitpick comments:
In `@tests/workbenches/test_imagestream_health.py`:
- Around line 151-156: The test uses hard-coded expected counts (11 and 7) in
the pytest.mark.parametrize tuple causing brittle failures; update the
parameters to read expected counts from the test config (use py_config keys such
as py_config["expected_notebook_imagestream_count"] and
py_config["expected_runtime_imagestream_count"]) instead of literals, and ensure
the parametrized ids remain descriptive (e.g., "notebook_imagestreams",
"runtime_imagestreams"); modify the pytest.mark.parametrize for
("label_selector, expected_imagestream_count") to reference those py_config
entries so the test adapts to different RHOAI/ODH versions/environments.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b3098a7 and d685a79.

📒 Files selected for processing (1)
  • tests/workbenches/test_imagestream_health.py

harshad16
harshad16 previously approved these changes Feb 23, 2026
Member

@harshad16 harshad16 left a comment


/lgtm
👍


@andyatmiami andyatmiami left a comment


@jstourac

nothing here particularly blocking in terms of review..

i see @harshad16 already approved - which i'm comfortable treating as means to merge this. i had commented earlier on this but forgot to submit..

@jstourac jstourac disabled auto-merge February 23, 2026 19:32
@jstourac
Member Author

Thank you for your reviews, guys! Andy, I'll look into the rest of your comments tomorrow. I may implement some of your proposals! 🙂

@jstourac
Member Author

Rebased against the latest main branch.

dbasunag
dbasunag previously approved these changes Feb 25, 2026
Collaborator

@dbasunag dbasunag left a comment


My comments are only for your considerations and are non-blocking.

@jstourac
Member Author

Thank you, @dbasunag . I just pushed a change to make ruff happy again.

@dbasunag dbasunag enabled auto-merge (squash) February 25, 2026 15:27
@dbasunag dbasunag merged commit d03a8b2 into opendatahub-io:main Feb 25, 2026
8 checks passed
@github-actions

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.

@jstourac jstourac deleted the validateImageStreams branch February 25, 2026 15:54
jstourac added a commit to jstourac/opendatahub-tests that referenced this pull request Feb 25, 2026
…o#1126)

Co-authored-by: Debarati Basu-Nag <dbasunag@redhat.com>
(cherry picked from commit d03a8b2)
dbasunag added a commit that referenced this pull request Feb 25, 2026
(cherry picked from commit d03a8b2)

Co-authored-by: Debarati Basu-Nag <dbasunag@redhat.com>