
fix: Use structlog as the logging package, since simple_logger is not agent friendly#1176

Merged
dbasunag merged 7 commits into opendatahub-io:main from dbasunag:logging
Mar 24, 2026

Conversation


@dbasunag dbasunag commented Mar 5, 2026

Pull Request

Summary

Related Issues

  • Fixes:
  • JIRA:

How it has been tested

  • Locally
  • Jenkins

Additional Requirements

  • If this PR introduces a new test image, did you create a PR to mirror it in disconnected environment?
  • If this PR introduces new marker(s)/adds a new component, was relevant ticket created to update relevant Jenkins job?

Summary by CodeRabbit

  • Chores
    • Switched project-wide logging to a structured JSON format for more consistent, machine-friendly logs.
    • Added a centralized logging utility providing standardized formatting, duplicate-message filtering, and improved console/readable output.
    • Updated tests and utilities to use the new centralized logger so logs and diagnostics are consistent across runs.
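The duplicate-message filtering mentioned above is not shown on this page; a minimal sketch of how such a filter could be built on the stdlib logging module (the class name mirrors the PR's DuplicateFilter, but the implementation here is an assumption, not the PR's code):

```python
import logging


class DuplicateFilter(logging.Filter):
    """Suppress consecutive log records that carry the same formatted message."""

    def __init__(self) -> None:
        super().__init__()
        self._last: str | None = None

    def filter(self, record: logging.LogRecord) -> bool:
        # Allow the record only if its message differs from the previous one.
        current = record.getMessage()
        if current == self._last:
            return False
        self._last = current
        return True
```

Attached to a handler via `handler.addFilter(DuplicateFilter())`, repeated identical messages would be dropped until a different message arrives.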


github-actions bot commented Mar 5, 2026

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, add /wip in a comment; to remove it, comment /wip cancel.
  • To block merging of a PR, add /hold in a comment; to un-block merging, comment /hold cancel.
  • To mark a PR as approved, add /lgtm in a comment; to remove approval, add /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name>. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, add /build-push-pr-image in a comment. This creates an image tagged
    pr-<pr_number> in the quay repository; the tag is deleted when the PR is merged or closed.
Supported labels

{'/wip', '/verified', '/hold', '/lgtm', '/cherry-pick', '/build-push-pr-image'}

Signed-off-by: Debarati Basu-Nag <dbasunag@redhat.com>

coderabbitai bot commented Mar 23, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Replaces python-simple-logger with structlog, adds a new structlog-based logger module utilities/opendatahub_logger.py, and updates ~180 test and utility files to import get_logger from utilities/opendatahub_logger instead of simple_logger.logger.

Changes

Cohort / File(s) — Summary

  • Dependency management — pyproject.toml
    Removed python-simple-logger; added structlog>=24.1.0.
  • New logging infrastructure — utilities/opendatahub_logger.py
    Added a new module implementing structlog-based JSON logging, DuplicateFilter, WrapperLogFormatter, JSON formatters, StructlogWrapper, and a get_logger() factory (new public APIs).
  • Logger helper updates — utilities/logger.py
    Now imports DuplicateFilter and WrapperLogFormatter from utilities.opendatahub_logger and removed the secondary_log_colors argument from the formatter construction.
  • Utility modules — utilities/*.py (examples: utilities/infra.py, utilities/general.py, utilities/inference_utils.py, utilities/monitoring.py, utilities/operator_utils.py, utilities/registry_utils.py, utilities/certificates_utils.py, utilities/jira.py, utilities/llmd_utils.py, utilities/kueue_utils.py, utilities/must_gather_collector.py, utilities/plugins/*, ...)
    Replaced imports of get_logger from simple_logger.logger with utilities.opendatahub_logger.get_logger and reinitialized module-level LOGGER variables; no other logic changes reported.
  • Test files — tests/** (many files; grouped examples: tests/conftest.py, tests/**/conftest.py, tests/model_registry/**, tests/model_serving/**, tests/workbenches/**, tests/llama_stack/**, tests/model_explainability/**, ...)
    Swapped get_logger imports across ~180 test and fixture modules to utilities.opendatahub_logger.get_logger; preserved LOGGER = get_logger(name=__name__). Several model-catalog tests adjusted helper import paths (e.g., tests.model_registry.utils → tests.model_registry.model_catalog.utils).
  • Model-catalog helper reimports — tests/model_registry/model_catalog/...
    Multiple files updated to change where shared helpers are imported from (consolidated to tests.model_registry.model_catalog.utils) in addition to the logger import swap.
  • Small formatting/import grouping edits — assorted test modules
    A few files restructured imports into parenthesized multi-line form (no semantics changed).

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 1

❌ Failed checks (1 warning)

  • Description check — ⚠️ Warning
    Explanation: The PR description is entirely templated, with no substantive content filled in: no actual summary, no issue links, no testing details, and no explanation of why this change is needed.
    Resolution: Fill in the description template with a brief summary explaining the logging migration, link any related issues/JIRA tickets, confirm testing status (Locally/Jenkins), and address the additional requirements.

✅ Passed checks (1 passed)

  • Title check — ✅ Passed
    Explanation: The title clearly summarizes the main change: replacing simple_logger with structlog as the logging package due to agent compatibility issues.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
utilities/jira.py (1)

111-111: ⚠️ Potential issue | 🔴 Critical

Invalid Python 3 exception syntax will cause SyntaxError.

Line 111 uses comma-separated exception types without parentheses, which is Python 2 syntax. This will fail at module import time.

except NewConnectionError, JIRAError, RequestsConnectionError:

Should be:

except (NewConnectionError, JIRAError, RequestsConnectionError):
🐛 Proposed fix
-    except NewConnectionError, JIRAError, RequestsConnectionError:
+    except (NewConnectionError, JIRAError, RequestsConnectionError):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utilities/jira.py` at line 111, Update the invalid Python 2-style exception
clause in utilities/jira.py: replace the comma-separated except clause that
references NewConnectionError, JIRAError, and RequestsConnectionError with a
Python 3 compatible tuple form (i.e., use except (NewConnectionError, JIRAError,
RequestsConnectionError):), and if the block needs the exception object for
logging or handling, capture it with an "as e" (e.g., except (... ) as e) inside
the same try/except surrounding the JIRA interaction or the function where this
clause appears.
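The tuple form (and the optional `as e` binding the prompt mentions) can be illustrated with generic stdlib exceptions; the JIRA-specific exception classes are not reproduced here:

```python
def classify(exc_type: type) -> str:
    """Raise an instance of exc_type and catch it with a Python 3 tuple clause."""
    try:
        raise exc_type("boom")
    # Python 3 requires parentheses around multiple exception types;
    # `except A, B:` is Python 2 syntax and fails at compile time.
    except (ValueError, KeyError) as e:
        return f"handled: {type(e).__name__}"
```

Without the parentheses, Python 2 interpreted the second name as the variable to bind the exception to, which is why the old comma form cannot mean "catch either type" in Python 3.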
utilities/monitoring.py (1)

78-97: ⚠️ Potential issue | 🟡 Minor

Potential UnboundLocalError if TimeoutSampler raises before first yield.

If TimeoutSampler raises TimeoutExpiredError before yielding any sample (e.g., immediate timeout or exception in func), sample will be undefined at Line 96.

Proposed fix
+    sample = None
     try:
         for sample in TimeoutSampler(
             wait_timeout=timeout,
             sleep=15,
             func=field_getter,
             prometheus=prometheus,
             metrics_query=metrics_query,
         ):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utilities/monitoring.py` around lines 78 - 97, The except block can reference
sample before it's defined if TimeoutSampler raises before first yield; update
the logic around TimeoutSampler/TimeoutExpiredError by initializing a sentinel
(e.g., last_value = None) before the try and assigning last_value = sample
inside the loop (or set sample = None before try), then change the except
handler to log last_value (or the sentinel) instead of sample and include
expected_value; ensure you update all LOGGER.error and LOGGER.info references
that use sample to use the initialized variable (symbols: TimeoutSampler,
TimeoutExpiredError, sample, last_value, LOGGER, expected_value).
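The sentinel pattern the fix describes, sketched with a plain iterator standing in for TimeoutSampler (which is not reproduced here):

```python
def last_sample_or_none(samples):
    """Consume samples until one matches; on timeout, return the last value
    seen (None if the iterator yielded nothing before failing)."""
    sample = None  # sentinel: bound even if the loop body never runs
    try:
        for sample in samples:
            if sample == "ready":
                return sample
        raise TimeoutError("no matching sample")
    except TimeoutError:
        # Safe: `sample` is always defined, so no UnboundLocalError here.
        return sample
```

Without the `sample = None` line, an iterator that raises before its first yield would leave `sample` unbound when the except block runs.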
tests/model_serving/model_server/kserve/inference_service_lifecycle/test_isvc_replicas_update.py (1)

45-45: ⚠️ Potential issue | 🟡 Minor

Missing f-string prefix in assertion message.

The string lacks the f prefix, so {pod.name for pod in isvc_pods} will be printed literally instead of being evaluated.

Proposed fix
-        assert len(isvc_pods) == 2, "Expected 2 inference pods, existing pods: {pod.name for pod in isvc_pods}"
+        assert len(isvc_pods) == 2, f"Expected 2 inference pods, existing pods: {[pod.name for pod in isvc_pods]}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@tests/model_serving/model_server/kserve/inference_service_lifecycle/test_isvc_replicas_update.py`
at line 45, The assertion message in test_isvc_replicas_update uses a
format-expression but is missing the f-string prefix, so the set comprehension
"{pod.name for pod in isvc_pods}" is not evaluated; update the assertion in the
test (the line asserting len(isvc_pods) == 2) to use an f-string (prefix the
string with f) so the actual pod names are interpolated into the error message
for easier debugging.
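The difference the fix points out, shown with a stand-in list of names:

```python
names = ["pod-a", "pod-b"]

# Without the f prefix the braces are literal text in the message:
plain = "existing pods: {sorted(names)}"

# With the f prefix the expression is evaluated into the message:
formatted = f"existing pods: {sorted(names)}"
```

The un-prefixed form is easy to miss because it is still valid Python; only the rendered assertion message reveals the mistake.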
utilities/infra.py (1)

972-973: ⚠️ Potential issue | 🔴 Critical

Invalid Python 3 syntax - SyntaxError at runtime.

except A, B: is Python 2 syntax. Python 3 requires parentheses for multiple exception types.

Proposed fix
-        except ResourceNotFoundError, NotFoundError:
+        except (ResourceNotFoundError, NotFoundError):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utilities/infra.py` around lines 972 - 973, The except clause uses Python 2
syntax "except ResourceNotFoundError, NotFoundError:" which raises a
SyntaxError; change it to the Python 3 form using a tuple: "except
(ResourceNotFoundError, NotFoundError):" in the try/except that logs "Pod
{pod.name} is deleted" (look for the except block referencing
ResourceNotFoundError, NotFoundError and LOGGER).
utilities/plugins/openai_plugin.py (2)

107-109: ⚠️ Potential issue | 🔴 Critical

Invalid Python 3 syntax - SyntaxError at runtime.

except A, B: is Python 2 syntax. Python 3 requires parentheses for multiple exception types.

Proposed fix
-        except requests.exceptions.RequestException, json.JSONDecodeError:
+        except (requests.exceptions.RequestException, json.JSONDecodeError):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utilities/plugins/openai_plugin.py` around lines 107 - 109, The except clause
uses Python 2 syntax ("except A, B:") which causes a SyntaxError; update the
exception handling in openai_plugin.py to use a tuple for multiple exceptions
(i.e., except (requests.exceptions.RequestException, json.JSONDecodeError):) and
ensure the block references LOGGER and re-raises as before (you can also add
exc_info=True to LOGGER.error for better diagnostics). Locate the try/except
around streaming requests where requests.exceptions.RequestException and
json.JSONDecodeError are caught and replace the comma-separated form with the
parenthesized tuple of exception types.

140-141: ⚠️ Potential issue | 🔴 Critical

Same Python 3 syntax error as Line 107.

Proposed fix
-        except requests.exceptions.RequestException, json.JSONDecodeError:
+        except (requests.exceptions.RequestException, json.JSONDecodeError):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utilities/plugins/openai_plugin.py` around lines 140 - 141, The except clause
uses invalid Python 3 syntax; update the exception handling in openai_plugin.py
to catch both exceptions using a tuple (e.g., change the incorrect "except
requests.exceptions.RequestException, json.JSONDecodeError:" to a tuple form) so
the block reads "except (requests.exceptions.RequestException,
json.JSONDecodeError):" and keep the existing LOGGER.exception("Request error")
inside that block; ensure json is the expected module (json.JSONDecodeError) or
use json.decoder.JSONDecodeError if your imports differ.
🧹 Nitpick comments (12)
utilities/kueue_utils.py (1)

11-13: Add configure_third_party=False to module-level logger initialization.

Line 13 calls get_logger(name=__name__) with default configure_third_party=True, triggering global logging reconfiguration on every import. This pattern is pervasive across 130+ modules in the codebase. Centralize third-party logging setup in a single bootstrap location and opt out at module level:

LOGGER = get_logger(name=__name__, configure_third_party=False)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utilities/kueue_utils.py` around lines 11 - 13, The module-level logger is
calling get_logger with default configure_third_party=True which causes global
logging reconfiguration on import; update the LOGGER initialization to call
get_logger(name=__name__, configure_third_party=False) so this module opts out
of configuring third-party logging and centralizes that setup in your bootstrap
code — change the LOGGER assignment that uses get_logger to pass
configure_third_party=False.
tests/model_registry/mcp_servers/search/test_keyword_search.py (1)

7-9: Unused logger.

LOGGER is defined but never used in this file. Remove if not needed.

Proposed fix
 from tests.model_registry.mcp_servers.constants import CALCULATOR_SERVER_NAME
 from tests.model_registry.utils import execute_get_command
-from utilities.opendatahub_logger import get_logger
-
-LOGGER = get_logger(name=__name__)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/mcp_servers/search/test_keyword_search.py` around lines
7 - 9, Remove the unused LOGGER import and variable: delete the get_logger
import from utilities.opendatahub_logger and the LOGGER =
get_logger(name=__name__) line in
tests/model_registry/mcp_servers/search/test_keyword_search.py since LOGGER is
never referenced; if logging is needed later, reintroduce get_logger and LOGGER
where actually used (search for LOGGER references before removal).
tests/model_serving/model_runtime/vllm/toolcalling/test_granite_3_2_8b_instruct_preview.py (1)

21-23: Unused logger.

LOGGER is defined but never used in this file. Remove the dead code.

Proposed fix
 from utilities.constants import KServeDeploymentType
-from utilities.opendatahub_logger import get_logger
-
-LOGGER = get_logger(name=__name__)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@tests/model_serving/model_runtime/vllm/toolcalling/test_granite_3_2_8b_instruct_preview.py`
around lines 21 - 23, The file defines an unused logger variable LOGGER created
via get_logger(name=__name__) which is dead code; remove the import of
get_logger and the LOGGER definition (symbols: get_logger and LOGGER) from
tests/model_serving/model_runtime/vllm/toolcalling/test_granite_3_2_8b_instruct_preview.py
so there are no unused imports or variables left behind, keeping the rest of the
test file unchanged.
tests/model_explainability/evalhub/conftest.py (1)

12-15: Unused logger.

LOGGER is defined but never used in this file. Remove if not needed, or confirm it's intended for future use.

Proposed fix
 from utilities.certificates_utils import create_ca_bundle_file
 from utilities.constants import Timeout
-from utilities.opendatahub_logger import get_logger
 from utilities.resources.evalhub import EvalHub
-
-LOGGER = get_logger(name=__name__)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_explainability/evalhub/conftest.py` around lines 12 - 15, LOGGER
is defined via get_logger(name=__name__) in
tests/model_explainability/evalhub/conftest.py but never used; either remove the
unused LOGGER and the get_logger import (and remove any unused EvalHub import if
applicable) or, if the logger is intentionally reserved, keep it and add a short
comment or a usage (e.g., a minimal debug log) to avoid lint warnings; update
references to the LOGGER symbol accordingly and ensure no unused-import warnings
remain.
tests/model_registry/test_security.py (1)

35-40: verify=False disables TLS certificate validation (CWE-295).

This is acceptable for test environments with self-signed certificates, but ensure this pattern doesn't propagate to production code. Consider using a test CA bundle or REQUESTS_CA_BUNDLE environment variable for more realistic security testing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/test_security.py` around lines 35 - 40, The test
currently calls requests.get(...) with verify=False which disables TLS
validation; update the test in tests/model_registry/test_security.py around the
requests.get invocation to avoid using verify=False in test logic — instead
configure a test CA bundle (pass verify='/path/to/test_ca_bundle.pem') or set
REQUESTS_CA_BUNDLE in the test environment, or use a test fixture/mocking (e.g.,
requests_mock) to simulate TLS without disabling verification, ensuring the call
in the test (requests.get) preserves certificate validation while still allowing
self-signed cert testing.
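Both alternatives the comment mentions are real requests features; the bundle path below is a placeholder, not a file from the PR:

```python
import os

import requests

# Option 1: per-session CA bundle keeps TLS validation on against a
# self-signed test cluster.
session = requests.Session()
session.verify = "/tmp/test_ca_bundle.pem"

# Option 2: process-wide default via environment variable, honored by requests.
os.environ["REQUESTS_CA_BUNDLE"] = "/tmp/test_ca_bundle.pem"
```

Either way, certificate validation stays enabled, unlike `verify=False`, which silently accepts any certificate.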
tests/model_serving/model_server/upgrade/utils.py (1)

201-205: Consider moving import to module level.

Importing inside the function adds overhead on each invocation. Additionally, the variable naming (logger) is inconsistent with other modules in this PR which use uppercase LOGGER.

♻️ Suggested refactor

Move the import to the top of the file with other imports:

+from utilities.opendatahub_logger import get_logger
+
+logger = get_logger(name=__name__)
+
 # ... existing code ...

 def verify_metrics_retained(
     prometheus: Prometheus,
     query: str,
     min_value: int,
     timeout: int = 240,
 ) -> None:
     # ...
     from timeout_sampler import TimeoutExpiredError, TimeoutSampler

-    from utilities.opendatahub_logger import get_logger
-
-    logger = get_logger(name=__name__)

     try:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_serving/model_server/upgrade/utils.py` around lines 201 - 205,
The import of TimeoutExpiredError, TimeoutSampler and get_logger and the
instantiation logger = get_logger(name=__name__) should be moved out of any
function and placed at module level with the other imports; also rename the
variable to LOGGER to match project convention (use LOGGER =
get_logger(name=__name__)), and update any local references to logger to use
LOGGER and adjust import locations so TimeoutExpiredError and TimeoutSampler are
imported once at the top of the module.
tests/model_serving/model_runtime/rhoai_upgrade/test_upgrade.py (1)

10-12: Avoid import-order-dependent global logging setup from test modules.

Line 12 calls get_logger with defaults; in utilities/opendatahub_logger.py, default configure_third_party=True can trigger global logging reconfiguration during module import. In test suites, this makes logging setup depend on import order. Prefer disabling third-party setup at leaf modules and perform it once in a dedicated bootstrap fixture.

Suggested change
-LOGGER = get_logger(name=__name__)
+LOGGER = get_logger(name=__name__, configure_third_party=False)

As per coding guidelines, "REVIEW PRIORITIES: 3. Bug-prone patterns and error handling gaps".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_serving/model_runtime/rhoai_upgrade/test_upgrade.py` around lines
10 - 12, The test module currently calls get_logger(...) at import time (LOGGER
= get_logger(name=__name__)), which can trigger global logging reconfiguration
via the default configure_third_party=True; change the call in the test to
disable third-party configuration by calling get_logger with
configure_third_party=False (or remove the module-level LOGGER and obtain a
logger inside a test fixture/setup that runs once), ensuring tests do not
perform global logging reconfiguration on import; reference get_logger and the
module-level LOGGER in the test to locate and update the call.
tests/model_serving/model_runtime/vllm/basic_model_deployment/test_granite_7b_redhat_lab.py (1)

18-20: Avoid import-time global logging reconfiguration from leaf test modules.

get_logger() defaults to configure_third_party=True, so this module-level call can trigger global logging setup during import. With this pattern repeated across many files, logging behavior becomes import-order dependent. Configure third-party logging once in a central bootstrap (e.g., top-level conftest.py) and use get_logger(name=__name__, configure_third_party=False) in leaf modules.

Suggested change
-from utilities.opendatahub_logger import get_logger
+from utilities.opendatahub_logger import get_logger

-LOGGER = get_logger(name=__name__)
+LOGGER = get_logger(name=__name__, configure_third_party=False)

As per coding guidelines, REVIEW PRIORITIES: 2. Architectural issues and anti-patterns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@tests/model_serving/model_runtime/vllm/basic_model_deployment/test_granite_7b_redhat_lab.py`
around lines 18 - 20, The module-level call to get_logger() (creating LOGGER)
triggers global logging configuration because get_logger defaults to
configure_third_party=True; change the call to get_logger(name=__name__,
configure_third_party=False) so the test file (and its LOGGER variable) does not
reconfigure third-party logging at import time, and ensure third-party logging
is configured centrally (e.g., in top-level conftest.py) instead of in leaf
modules.
tests/model_registry/model_catalog/search/utils.py (1)

22-22: Import execute_get_command from the owning module, not a transitive re-export.

This import now depends on tests.model_registry.model_catalog.utils re-export behavior. Use the defining module to keep dependency boundaries explicit and reduce brittle coupling.

Proposed change
-from tests.model_registry.model_catalog.utils import execute_database_query, execute_get_command, parse_psql_output
+from tests.model_registry.model_catalog.utils import execute_database_query, parse_psql_output
+from tests.model_registry.utils import execute_get_command
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/model_registry/model_catalog/search/utils.py` at line 22, The file
currently imports execute_get_command transitively from
tests.model_registry.model_catalog.utils; change the import so
execute_get_command is imported directly from the module that defines it (not
via the re-export). In the import line that references execute_database_query,
execute_get_command, parse_psql_output (in search.utils), remove
execute_get_command from that re-export and add a separate import that points to
its owning module (the module that defines execute_get_command). Keep the other
imports unchanged and run tests to ensure no import errors.
utilities/opendatahub_logger.py (3)

350-363: Test function in production module.

test_third_party_logging() is a demonstration function that uses print() without assertions. Move to a proper test file under tests/ or convert to a documented example. As-is, it won't be discovered by pytest and clutters the production API.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utilities/opendatahub_logger.py` around lines 350 - 363, The function
test_third_party_logging() is a demo test left in production; remove it from the
module and either move its logic into a real pytest test under tests/ (e.g.,
create tests/test_third_party_logging.py using pytest capturing/assertions and
import get_logger) or convert it to an example in the README or a documented
examples/ file; update or remove any imports it relies on (get_logger, logging)
and ensure no leftover top-level test_* function names remain in
utilities/opendatahub_logger.py to prevent confusion with pytest discovery.

267-288: Global monkey-patch of logging.getLogger has side effects.

Patching logging.getLogger at line 288 is a significant global mutation that affects all code including third-party libraries. The patched version also stores state via logger._json_configured (line 284) which won't survive pickling or certain testing scenarios. Consider documenting this behavior prominently or using a less invasive approach like logging.setLoggerClass().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utilities/opendatahub_logger.py` around lines 267 - 288, The code globally
monkey-patches logging.getLogger by assigning patched_getLogger over
original_getLogger which causes wide side effects and stores state on logger via
logger._json_configured; instead, avoid global monkey-patching and switch to a
less invasive approach: replace the assignment of logging.getLogger with using
logging.setLoggerClass or a custom Logger subclass that applies
ThirdPartyJSONFormatter in its __init__/handle methods, remove or avoid relying
on logger._json_configured for persistent state (use logger.manager or
handler-level flags), and document any remaining global behavior; update
references to original_getLogger, patched_getLogger, ThirdPartyJSONFormatter,
and logger._json_configured when refactoring.
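logging.setLoggerClass is a stdlib hook; a minimal sketch of the suggested alternative (the JSON-formatter wiring is omitted, and the subclass name is made up for illustration):

```python
import logging


class JSONConfiguredLogger(logging.Logger):
    """Alternative to patching logging.getLogger: every logger created after
    setLoggerClass() is an instance of this subclass, so per-logger setup
    happens in __init__ instead of via monkey-patched lookup."""

    def __init__(self, name: str, level: int = logging.NOTSET) -> None:
        super().__init__(name, level)
        self.json_configured = True  # marker that per-logger setup ran once


logging.setLoggerClass(JSONConfiguredLogger)

# Loggers created from here on use the subclass; pre-existing ones are untouched.
lib_logger = logging.getLogger("example.thirdparty.lib")
```

This avoids replacing a global function and keeps the configuration state on a normal attribute of a Logger subclass rather than on a patched lookup path.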

204-226: Invalid # noqa: FCN001 rule codes.

FCN001 is not a recognized Ruff rule. Either remove these noqa comments or add FCN001 to lint.external in your Ruff configuration if this rule comes from another linter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utilities/opendatahub_logger.py` around lines 204 - 226, The methods info,
debug, warning, warn, error, critical, and exception in this file include
invalid "# noqa: FCN001" comments; remove those invalid noqa markers (or if
FCN001 is from an external linter, add "FCN001" to lint.external in the Ruff
config) so the linter no longer reports unknown rule codes—locate the calls to
self._log and the warn wrapper (methods named info, debug, warning, warn, error,
critical, exception, and the helper _log) and either delete the trailing "#
noqa: FCN001" on each line or update your Ruff config accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pyproject.toml`:
- Line 53: Update the structlog dependency in pyproject.toml: replace the
current constraint "structlog>=24.1.0" with "structlog>=25.4.0" so the package
supports Python 3.14 and satisfies the project requires-python "==3.14.*"
constraint; locate the dependency entry that currently reads structlog>=24.1.0
and change it to structlog>=25.4.0, then run dependency resolution to verify no
transitive conflicts.

In
`@tests/model_registry/model_catalog/catalog_config/test_catalog_source_merge.py`:
- Line 4: The import for execute_get_command is pointing to the wrong module;
update the import statement that currently references
tests.model_registry.model_catalog.utils to instead import execute_get_command
from tests.model_registry.utils so the test module uses the correct function
definition (look for the import line that names execute_get_command in the
test_catalog_source_merge file and change its module path).

In `@tests/model_registry/model_catalog/catalog_config/utils.py`:
- Line 26: The StructlogWrapper.info() calls (e.g., LOGGER.info("Found expected
number of models: %s for source: %s", expected_count, source_label)) are logging
literal "%s" because StructlogWrapper._log() drops positional args; fix _log()
in the StructlogWrapper class to interpolate positional format args before
calling structlog (e.g., compute msg_str = msg % args if args else msg, then
pass event=msg_str and **kwargs to the underlying logger) so existing callers
continue to work, or alternatively update callers (like the LOGGER.info call) to
use f-strings/keyword args consistently; prefer updating StructlogWrapper._log()
to format msg with args to preserve backward compatibility.
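The suggested _log fix can be sketched without structlog; the class below is a made-up stand-in that only records events, not the PR's StructlogWrapper:

```python
class StructlogWrapperSketch:
    """Sketch of the suggested fix: interpolate stdlib-style positional
    %-format args before handing the event string to the underlying logger."""

    def __init__(self) -> None:
        self.events: list[str] = []

    def _log(self, msg: str, *args: object, **kwargs: object) -> None:
        # Preserves callers written for logging.Logger, e.g. info("x: %s", v).
        event = msg % args if args else msg
        self.events.append(event)

    def info(self, msg: str, *args: object, **kwargs: object) -> None:
        self._log(msg, *args, **kwargs)
```

Formatting in the wrapper keeps all existing `LOGGER.info("…%s…", value)` call sites working unchanged, which is why the review prefers it over rewriting callers.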

In `@tests/model_registry/model_catalog/huggingface/test_huggingface_negative.py`:
- Around line 7-9: The module-level call LOGGER = get_logger(name=__name__)
causes global side effects during import by invoking setup_global_json_logging;
change the initialization to avoid invoking third-party reconfiguration at
import time by calling get_logger with configure_third_party=False (i.e., LOGGER
= get_logger(name=__name__, configure_third_party=False)), or alternatively
remove module-level logger initialization and obtain the logger from a
session-scoped fixture that centralizes logging setup to prevent repeated global
patches; target the LOGGER symbol and the get_logger call when making this
change.

In
`@tests/model_serving/model_runtime/vllm/basic_model_deployment/test_merlinite_7b_lab.py`:
- Around line 18-20: The test module performs process-wide logging
reconfiguration by calling get_logger(name=__name__) with the default
configure_third_party=True; to prevent cross-test coupling change the call that
initializes LOGGER to pass configure_third_party=False so
get_logger(name=__name__, configure_third_party=False) is used (location: LOGGER
initialization in this file referencing utilities.opendatahub_logger.get_logger)
ensuring the test import does not mutate global logging configuration.

In `@tests/workbenches/test_imagestream_health.py`:
- Around line 10-13: The module-level LOGGER initialization is triggering global
logging reconfiguration via get_logger() (which uses the module-level flag
_global_logging_configured); change the call to get_logger to disable
third-party reconfiguration by passing configure_third_party=False when creating
LOGGER (i.e., update LOGGER = get_logger(name=__name__) to pass
configure_third_party=False), and move global/third-party logging setup into a
session-scoped pytest fixture (create a single session fixture that calls
get_logger(..., configure_third_party=True) once) so tests do not reconfigure
logging on import order.

In `@utilities/opendatahub_logger.py`:
- Around line 150-160: Syntax error: the except clause uses Python 2-style
"except TypeError, ValueError" which must be changed to modern tuple syntax;
update the exception handler around the json.dumps(log_entry) call in
utilities/opendatahub_logger.py so it reads except (TypeError, ValueError): and
leave the fallback_entry construction and return json.dumps(fallback_entry)
unchanged (locate the try/except that surrounds json.dumps(log_entry) and modify
only the except clause).
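The fix is the standard Python 3 tuple form. A minimal sketch of the pattern the comment describes (the `fallback_entry` shape here is an assumption, not the repository's exact structure):

```python
import json


def to_json(log_entry: dict) -> str:
    """Serialize a log entry, falling back to a safe stub on failure."""
    try:
        return json.dumps(log_entry)
    # Python 3 requires a parenthesized tuple to catch multiple exception
    # types; "except TypeError, ValueError:" is Python 2 syntax and raises
    # a SyntaxError under any Python 3 interpreter.
    except (TypeError, ValueError):
        fallback_entry = {
            "message": str(log_entry.get("message", "")),
            "error": "serialization failed",
        }
        return json.dumps(fallback_entry)
```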
- Around line 297-303: The function that defines parameters level, log_to_file,
log_file, filename, log_to_console, json_format, and configure_third_party
currently ignores most of them; update it so either (A) the parameters are
actually applied (e.g., use level to set logger level, honor
log_to_file/log_file/filename to attach a FileHandler, use log_to_console to
attach a StreamHandler, and respect json_format or map it to a formatter) or (B)
emit a clear deprecation/warning when any of those parameters are passed with
non-default values (using the warnings module) and document that
configure_third_party remains the only effective flag; locate the parameter list
in utilities/opendatahub_logger.py and change the implementation in that same
function to apply the chosen fix for level, log_to_file, log_file, filename,
log_to_console, and json_format.
- Around line 179-185: Structlog is being reconfigured on each logger
instantiation in StructlogWrapper.__init__ by calling structlog.configure(),
which causes global side effects and race conditions; move that call into a
one-time module-level initialization guarded by the existing
_global_logging_configured flag (or create one) so configuration is executed
only once (e.g., perform structlog.configure(...) at import time or inside a
protected init function), then have StructlogWrapper.__init__ and get_logger()
only bind/return loggers without calling structlog.configure again.

---

Outside diff comments:
In
`@tests/model_serving/model_server/kserve/inference_service_lifecycle/test_isvc_replicas_update.py`:
- Line 45: The assertion message in test_isvc_replicas_update uses a
format-expression but is missing the f-string prefix, so the set comprehension
"{pod.name for pod in isvc_pods}" is not evaluated; update the assertion in the
test (the line asserting len(isvc_pods) == 2) to use an f-string (prefix the
string with f) so the actual pod names are interpolated into the error message
for easier debugging.
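The difference is easy to reproduce; the pod objects below are hypothetical stand-ins for the ISVC pods the test collects (a list comprehension in brackets is used so the expression is valid inside the f-string on all Python 3 versions):

```python
from types import SimpleNamespace

# Hypothetical stand-ins for the pods returned by the fixture.
isvc_pods = [SimpleNamespace(name="predictor-0"), SimpleNamespace(name="predictor-1")]

# Without the f prefix the braces are emitted literally, hiding the pod names:
plain = "Expected 2 pods, got: {pod.name for pod in isvc_pods}"

# With the f prefix the comprehension is evaluated into the message:
interpolated = f"Expected 2 pods, got: {[pod.name for pod in isvc_pods]}"

assert len(isvc_pods) == 2, interpolated
```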

In `@utilities/infra.py`:
- Around line 972-973: The except clause uses Python 2 syntax "except
ResourceNotFoundError, NotFoundError:" which raises a SyntaxError; change it to
the Python 3 form using a tuple: "except (ResourceNotFoundError,
NotFoundError):" in the try/except that logs "Pod {pod.name} is deleted" (look
for the except block referencing ResourceNotFoundError, NotFoundError and
LOGGER).

In `@utilities/jira.py`:
- Line 111: Update the invalid Python 2-style exception clause in
utilities/jira.py: replace the comma-separated except clause that references
NewConnectionError, JIRAError, and RequestsConnectionError with a Python 3
compatible tuple form (i.e., use except (NewConnectionError, JIRAError,
RequestsConnectionError):), and if the block needs the exception object for
logging or handling, capture it with an "as e" (e.g., except (... ) as e) inside
the same try/except surrounding the JIRA interaction or the function where this
clause appears.

In `@utilities/monitoring.py`:
- Around line 78-97: The except block can reference sample before it's defined
if TimeoutSampler raises before first yield; update the logic around
TimeoutSampler/TimeoutExpiredError by initializing a sentinel (e.g., last_value
= None) before the try and assigning last_value = sample inside the loop (or set
sample = None before try), then change the except handler to log last_value (or
the sentinel) instead of sample and include expected_value; ensure you update
all LOGGER.error and LOGGER.info references that use sample to use the
initialized variable (symbols: TimeoutSampler, TimeoutExpiredError, sample,
last_value, LOGGER, expected_value).
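The sentinel pattern can be sketched as follows; `wait_for_value` and the local `TimeoutExpiredError` are simplified stand-ins for the real TimeoutSampler-based code, not its actual API:

```python
from typing import Any, Iterator


class TimeoutExpiredError(Exception):
    """Stand-in for the real timeout error raised by the sampler."""


def wait_for_value(samples: Iterator[Any], expected_value: Any) -> Any:
    """Return the first sample equal to expected_value, else raise.

    last_value is initialized before the try block, so the except handler
    never touches an unbound name even if the sampler raises before its
    first yield.
    """
    last_value = None  # sentinel defined before the try
    try:
        for sample in samples:
            last_value = sample
            if sample == expected_value:
                return sample
        raise TimeoutExpiredError
    except TimeoutExpiredError:
        # Safe even when the iterator produced nothing at all.
        print(f"timed out: last value={last_value}, expected={expected_value}")
        raise
```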

In `@utilities/plugins/openai_plugin.py`:
- Around line 107-109: The except clause uses Python 2 syntax ("except A, B:")
which causes a SyntaxError; update the exception handling in openai_plugin.py to
use a tuple for multiple exceptions (i.e., except
(requests.exceptions.RequestException, json.JSONDecodeError):) and ensure the
block references LOGGER and re-raises as before (you can also add exc_info=True
to LOGGER.error for better diagnostics). Locate the try/except around streaming
requests where requests.exceptions.RequestException and json.JSONDecodeError are
caught and replace the comma-separated form with the parenthesized tuple of
exception types.
- Around line 140-141: The except clause uses Python 2-style syntax that is
invalid in Python 3; update the exception handling in openai_plugin.py to
catch both exceptions using a
tuple (e.g., change the incorrect "except requests.exceptions.RequestException,
json.JSONDecodeError:" to a tuple form) so the block reads "except
(requests.exceptions.RequestException, json.JSONDecodeError):" and keep the
existing LOGGER.exception("Request error") inside that block; ensure json is the
expected module (json.JSONDecodeError) or use json.decoder.JSONDecodeError if
your imports differ.

---

Nitpick comments:
In `@tests/model_explainability/evalhub/conftest.py`:
- Around line 12-15: LOGGER is defined via get_logger(name=__name__) in
tests/model_explainability/evalhub/conftest.py but never used; either remove the
unused LOGGER and the get_logger import (and remove any unused EvalHub import if
applicable) or, if the logger is intentionally reserved, keep it and add a short
comment or a usage (e.g., a minimal debug log) to avoid lint warnings; update
references to the LOGGER symbol accordingly and ensure no unused-import warnings
remain.

In `@tests/model_registry/mcp_servers/search/test_keyword_search.py`:
- Around line 7-9: Remove the unused LOGGER import and variable: delete the
get_logger import from utilities.opendatahub_logger and the LOGGER =
get_logger(name=__name__) line in
tests/model_registry/mcp_servers/search/test_keyword_search.py since LOGGER is
never referenced; if logging is needed later, reintroduce get_logger and LOGGER
where actually used (search for LOGGER references before removal).

In `@tests/model_registry/model_catalog/search/utils.py`:
- Line 22: The file currently imports execute_get_command transitively from
tests.model_registry.model_catalog.utils; change the import so
execute_get_command is imported directly from the module that defines it (not
via the re-export). In the import line that references execute_database_query,
execute_get_command, parse_psql_output (in search.utils), remove
execute_get_command from that re-export and add a separate import that points to
its owning module (the module that defines execute_get_command). Keep the other
imports unchanged and run tests to ensure no import errors.

In `@tests/model_registry/test_security.py`:
- Around line 35-40: The test currently calls requests.get(...) with
verify=False which disables TLS validation; update the test in
tests/model_registry/test_security.py around the requests.get invocation to
avoid using verify=False in test logic — instead configure a test CA bundle
(pass verify='/path/to/test_ca_bundle.pem') or set REQUESTS_CA_BUNDLE in the
test environment, or use a test fixture/mocking (e.g., requests_mock) to
simulate TLS without disabling verification, ensuring the call in the test
(requests.get) preserves certificate validation while still allowing self-signed
cert testing.

In `@tests/model_serving/model_runtime/rhoai_upgrade/test_upgrade.py`:
- Around line 10-12: The test module currently calls get_logger(...) at import
time (LOGGER = get_logger(name=__name__)), which can trigger global logging
reconfiguration via the default configure_third_party=True; change the call in
the test to disable third-party configuration by calling get_logger with
configure_third_party=False (or remove the module-level LOGGER and obtain a
logger inside a test fixture/setup that runs once), ensuring tests do not
perform global logging reconfiguration on import; reference get_logger and the
module-level LOGGER in the test to locate and update the call.

In
`@tests/model_serving/model_runtime/vllm/basic_model_deployment/test_granite_7b_redhat_lab.py`:
- Around line 18-20: The module-level call to get_logger() (creating LOGGER)
triggers global logging configuration because get_logger defaults to
configure_third_party=True; change the call to get_logger(name=__name__,
configure_third_party=False) so the test file (and its LOGGER variable) does not
reconfigure third-party logging at import time, and ensure third-party logging
is configured centrally (e.g., in top-level conftest.py) instead of in leaf
modules.

In
`@tests/model_serving/model_runtime/vllm/toolcalling/test_granite_3_2_8b_instruct_preview.py`:
- Around line 21-23: The file defines an unused logger variable LOGGER created
via get_logger(name=__name__) which is dead code; remove the import of
get_logger and the LOGGER definition (symbols: get_logger and LOGGER) from
tests/model_serving/model_runtime/vllm/toolcalling/test_granite_3_2_8b_instruct_preview.py
so there are no unused imports or variables left behind, keeping the rest of the
test file unchanged.

In `@tests/model_serving/model_server/upgrade/utils.py`:
- Around line 201-205: The import of TimeoutExpiredError, TimeoutSampler and
get_logger and the instantiation logger = get_logger(name=__name__) should be
moved out of any function and placed at module level with the other imports;
also rename the variable to LOGGER to match project convention (use LOGGER =
get_logger(name=__name__)), and update any local references to logger to use
LOGGER and adjust import locations so TimeoutExpiredError and TimeoutSampler are
imported once at the top of the module.

In `@utilities/kueue_utils.py`:
- Around line 11-13: The module-level logger is calling get_logger with default
configure_third_party=True which causes global logging reconfiguration on
import; update the LOGGER initialization to call get_logger(name=__name__,
configure_third_party=False) so this module opts out of configuring third-party
logging and centralizes that setup in your bootstrap code — change the LOGGER
assignment that uses get_logger to pass configure_third_party=False.

In `@utilities/opendatahub_logger.py`:
- Around line 350-363: The function test_third_party_logging() is a demo test
left in production; remove it from the module and either move its logic into a
real pytest test under tests/ (e.g., create tests/test_third_party_logging.py
using pytest capturing/assertions and import get_logger) or convert it to an
example in the README or a documented examples/ file; update or remove any
imports it relies on (get_logger, logging) and ensure no leftover top-level
test_* function names remain in utilities/opendatahub_logger.py to prevent
confusion with pytest discovery.
- Around line 267-288: The code globally monkey-patches logging.getLogger by
assigning patched_getLogger over original_getLogger which causes wide side
effects and stores state on logger via logger._json_configured; instead, avoid
global monkey-patching and switch to a less invasive approach: replace the
assignment of logging.getLogger with using logging.setLoggerClass or a custom
Logger subclass that applies ThirdPartyJSONFormatter in its __init__/handle
methods, remove or avoid relying on logger._json_configured for persistent state
(use logger.manager or handler-level flags), and document any remaining global
behavior; update references to original_getLogger, patched_getLogger,
ThirdPartyJSONFormatter, and logger._json_configured when refactoring.
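The `logging.setLoggerClass` alternative can be sketched like this; `JSONFormatter` and `JSONLogger` are minimal stand-ins for `ThirdPartyJSONFormatter` and the refactored logger, and only loggers created after the `setLoggerClass` call pick up the subclass:

```python
import json
import logging


class JSONFormatter(logging.Formatter):
    """Minimal stand-in for ThirdPartyJSONFormatter."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "logger": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
        })


class JSONLogger(logging.Logger):
    """Logger subclass that attaches the JSON formatter at creation time,
    instead of monkey-patching logging.getLogger globally."""

    def __init__(self, name: str, level: int = logging.NOTSET) -> None:
        super().__init__(name, level)
        handler = logging.StreamHandler()
        handler.setFormatter(JSONFormatter())
        self.addHandler(handler)


# Registered once at setup; affects only loggers created afterwards.
logging.setLoggerClass(JSONLogger)
```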
- Around line 204-226: The methods info, debug, warning, warn, error, critical,
and exception in this file include invalid "# noqa: FCN001" comments; remove
those invalid noqa markers (or if FCN001 is from an external linter, add
"FCN001" to lint.external in the Ruff config) so the linter no longer reports
unknown rule codes—locate the calls to self._log and the warn wrapper (methods
named info, debug, warning, warn, error, critical, exception, and the helper
_log) and either delete the trailing "# noqa: FCN001" on each line or update
your Ruff config accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: ab7d1aa6-93d5-4962-84ee-f368942f6669

📥 Commits

Reviewing files that changed from the base of the PR and between 5957e10 and 818d41a.

⛔ Files ignored due to path filters (2)
  • .github/workflows/scripts/pr_workflow.py is excluded by !.github/**
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (188)
  • pyproject.toml
  • tests/cluster_health/test_cluster_health.py
  • tests/cluster_health/test_operator_health.py
  • tests/conftest.py
  • tests/fixtures/inference.py
  • tests/llama_stack/conftest.py
  • tests/llama_stack/inference/test_completions.py
  • tests/llama_stack/safety/test_trustyai_fms_provider.py
  • tests/llama_stack/utils.py
  • tests/llama_stack/vector_io/test_vector_stores.py
  • tests/model_explainability/evalhub/conftest.py
  • tests/model_explainability/evalhub/utils.py
  • tests/model_explainability/guardrails/test_guardrails.py
  • tests/model_explainability/guardrails/utils.py
  • tests/model_explainability/lm_eval/test_lm_eval.py
  • tests/model_explainability/lm_eval/utils.py
  • tests/model_explainability/trustyai_service/trustyai_service_utils.py
  • tests/model_explainability/trustyai_service/utils.py
  • tests/model_registry/component_health/test_mr_health_check.py
  • tests/model_registry/component_health/test_mr_operator_health.py
  • tests/model_registry/conftest.py
  • tests/model_registry/image_validation/test_verify_rhoai_images.py
  • tests/model_registry/image_validation/utils.py
  • tests/model_registry/mcp_servers/config/conftest.py
  • tests/model_registry/mcp_servers/config/test_included_excluded_servers.py
  • tests/model_registry/mcp_servers/config/test_invalid_yaml.py
  • tests/model_registry/mcp_servers/config/test_multi_source.py
  • tests/model_registry/mcp_servers/config/test_named_queries.py
  • tests/model_registry/mcp_servers/conftest.py
  • tests/model_registry/mcp_servers/search/test_filtering.py
  • tests/model_registry/mcp_servers/search/test_keyword_search.py
  • tests/model_registry/mcp_servers/search/test_ordering.py
  • tests/model_registry/mcp_servers/test_data_integrity.py
  • tests/model_registry/model_catalog/catalog_config/conftest.py
  • tests/model_registry/model_catalog/catalog_config/test_catalog_source_merge.py
  • tests/model_registry/model_catalog/catalog_config/test_custom_model_catalog.py
  • tests/model_registry/model_catalog/catalog_config/test_default_model_catalog.py
  • tests/model_registry/model_catalog/catalog_config/test_default_source_inclusion_exclusion_cleanup.py
  • tests/model_registry/model_catalog/catalog_config/test_model_catalog_negative.py
  • tests/model_registry/model_catalog/catalog_config/utils.py
  • tests/model_registry/model_catalog/conftest.py
  • tests/model_registry/model_catalog/db_check/conftest.py
  • tests/model_registry/model_catalog/db_check/test_model_catalog_db_validation.py
  • tests/model_registry/model_catalog/db_check/utils.py
  • tests/model_registry/model_catalog/huggingface/conftest.py
  • tests/model_registry/model_catalog/huggingface/test_huggingface_exclude_models.py
  • tests/model_registry/model_catalog/huggingface/test_huggingface_model_deployment.py
  • tests/model_registry/model_catalog/huggingface/test_huggingface_model_search.py
  • tests/model_registry/model_catalog/huggingface/test_huggingface_model_type_classification.py
  • tests/model_registry/model_catalog/huggingface/test_huggingface_model_validation.py
  • tests/model_registry/model_catalog/huggingface/test_huggingface_models_multiple_sources.py
  • tests/model_registry/model_catalog/huggingface/test_huggingface_negative.py
  • tests/model_registry/model_catalog/huggingface/test_huggingface_source_error_validation.py
  • tests/model_registry/model_catalog/huggingface/utils.py
  • tests/model_registry/model_catalog/metadata/test_catalog_preview.py
  • tests/model_registry/model_catalog/metadata/test_custom_properties.py
  • tests/model_registry/model_catalog/metadata/test_filter_options_endpoint.py
  • tests/model_registry/model_catalog/metadata/test_labels_endpoint.py
  • tests/model_registry/model_catalog/metadata/test_sources_endpoint.py
  • tests/model_registry/model_catalog/metadata/utils.py
  • tests/model_registry/model_catalog/rbac/test_catalog_rbac.py
  • tests/model_registry/model_catalog/search/test_model_artifact_search.py
  • tests/model_registry/model_catalog/search/test_model_search.py
  • tests/model_registry/model_catalog/search/utils.py
  • tests/model_registry/model_catalog/sorting/test_model_artifacts_sorting.py
  • tests/model_registry/model_catalog/sorting/test_model_sorting.py
  • tests/model_registry/model_catalog/sorting/test_sorting_functionality.py
  • tests/model_registry/model_catalog/sorting/utils.py
  • tests/model_registry/model_catalog/upgrade/test_model_catalog_upgrade.py
  • tests/model_registry/model_catalog/utils.py
  • tests/model_registry/model_registry/async_job/test_async_upload_e2e.py
  • tests/model_registry/model_registry/async_job/utils.py
  • tests/model_registry/model_registry/conftest.py
  • tests/model_registry/model_registry/negative_tests/test_db_migration.py
  • tests/model_registry/model_registry/negative_tests/test_model_registry_creation_negative.py
  • tests/model_registry/model_registry/python_client/signing/conftest.py
  • tests/model_registry/model_registry/python_client/signing/test_signing_infrastructure.py
  • tests/model_registry/model_registry/python_client/signing/test_signing_negative.py
  • tests/model_registry/model_registry/python_client/signing/utils.py
  • tests/model_registry/model_registry/python_client/test_model_registry_creation.py
  • tests/model_registry/model_registry/rbac/conftest.py
  • tests/model_registry/model_registry/rbac/group_utils.py
  • tests/model_registry/model_registry/rbac/test_mr_rbac.py
  • tests/model_registry/model_registry/rbac/test_mr_rbac_sa.py
  • tests/model_registry/model_registry/rest_api/conftest.py
  • tests/model_registry/model_registry/rest_api/test_model_registry_rest_api.py
  • tests/model_registry/model_registry/rest_api/test_model_registry_secure_db.py
  • tests/model_registry/model_registry/rest_api/test_multiple_mr.py
  • tests/model_registry/model_registry/rest_api/utils.py
  • tests/model_registry/model_registry/upgrade/conftest.py
  • tests/model_registry/model_registry/upgrade/test_model_registry_upgrade.py
  • tests/model_registry/scc/conftest.py
  • tests/model_registry/scc/test_model_catalog_scc.py
  • tests/model_registry/scc/test_model_registry_scc.py
  • tests/model_registry/scc/utils.py
  • tests/model_registry/test_security.py
  • tests/model_registry/utils.py
  • tests/model_serving/maas_billing/conftest.py
  • tests/model_serving/maas_billing/maas_subscription/component_health/test_maas_api_health.py
  • tests/model_serving/maas_billing/maas_subscription/component_health/test_maas_controller_health.py
  • tests/model_serving/maas_billing/maas_subscription/conftest.py
  • tests/model_serving/maas_billing/maas_subscription/test_api_key_authorization.py
  • tests/model_serving/maas_billing/maas_subscription/test_api_key_crud.py
  • tests/model_serving/maas_billing/maas_subscription/test_cascade_deletion.py
  • tests/model_serving/maas_billing/maas_subscription/test_maas_auth_enforcement.py
  • tests/model_serving/maas_billing/maas_subscription/test_maas_sub_enforcement.py
  • tests/model_serving/maas_billing/maas_subscription/test_multiple_auth_policies_per_model.py
  • tests/model_serving/maas_billing/maas_subscription/test_multiple_subscriptions_no_header.py
  • tests/model_serving/maas_billing/maas_subscription/test_multiple_subscriptions_per_model.py
  • tests/model_serving/maas_billing/maas_subscription/test_subscription_without_auth_policy.py
  • tests/model_serving/maas_billing/maas_subscription/utils.py
  • tests/model_serving/maas_billing/test_maas_endpoints.py
  • tests/model_serving/maas_billing/test_maas_rbac_e2e.py
  • tests/model_serving/maas_billing/test_maas_request_rate_limits.py
  • tests/model_serving/maas_billing/test_maas_token_rate_limits.py
  • tests/model_serving/maas_billing/test_maas_token_revoke.py
  • tests/model_serving/maas_billing/utils.py
  • tests/model_serving/model_runtime/image_validation/test_verify_serving_runtime_images.py
  • tests/model_serving/model_runtime/model_validation/conftest.py
  • tests/model_serving/model_runtime/model_validation/test_modelvalidation.py
  • tests/model_serving/model_runtime/openvino/conftest.py
  • tests/model_serving/model_runtime/openvino/test_ovms_model_deployment.py
  • tests/model_serving/model_runtime/rhoai_upgrade/test_upgrade.py
  • tests/model_serving/model_runtime/triton/basic_model_deployment/conftest.py
  • tests/model_serving/model_runtime/triton/basic_model_deployment/test_dali_model.py
  • tests/model_serving/model_runtime/triton/basic_model_deployment/test_fil_model.py
  • tests/model_serving/model_runtime/triton/basic_model_deployment/test_keras_model.py
  • tests/model_serving/model_runtime/triton/basic_model_deployment/test_onnx_model.py
  • tests/model_serving/model_runtime/triton/basic_model_deployment/test_python_model.py
  • tests/model_serving/model_runtime/triton/basic_model_deployment/test_pytorch_model.py
  • tests/model_serving/model_runtime/triton/basic_model_deployment/test_tensorflow_model.py
  • tests/model_serving/model_runtime/utils.py
  • tests/model_serving/model_runtime/vllm/basic_model_deployment/test_elyza_japanese_llama_2_7b_instruct.py
  • tests/model_serving/model_runtime/vllm/basic_model_deployment/test_granite_2b_instruct_preview_4k_r240917a.py
  • tests/model_serving/model_runtime/vllm/basic_model_deployment/test_granite_7b_redhat_lab.py
  • tests/model_serving/model_runtime/vllm/basic_model_deployment/test_granite_7b_starter.py
  • tests/model_serving/model_runtime/vllm/basic_model_deployment/test_llama31_8B_instruct.py
  • tests/model_serving/model_runtime/vllm/basic_model_deployment/test_llama3_8B_instruct.py
  • tests/model_serving/model_runtime/vllm/basic_model_deployment/test_llama_2_13b_chat.py
  • tests/model_serving/model_runtime/vllm/basic_model_deployment/test_merlinite_7b_lab.py
  • tests/model_serving/model_runtime/vllm/conftest.py
  • tests/model_serving/model_runtime/vllm/multimodal/test_granite_31_2b_vision.py
  • tests/model_serving/model_runtime/vllm/quantization/test_openhermes-2_5_mistral-7b_awq.py
  • tests/model_serving/model_runtime/vllm/speculative_decoding/test_granite_7b_lab_draft.py
  • tests/model_serving/model_runtime/vllm/speculative_decoding/test_granite_7b_lab_ngram.py
  • tests/model_serving/model_runtime/vllm/toolcalling/test_granite_3_2_8b_instruct_preview.py
  • tests/model_serving/model_runtime/vllm/utils.py
  • tests/model_serving/model_server/conftest.py
  • tests/model_serving/model_server/kserve/authentication/conftest.py
  • tests/model_serving/model_server/kserve/autoscaling/keda/conftest.py
  • tests/model_serving/model_server/kserve/autoscaling/keda/test_isvc_keda_scaling_cpu.py
  • tests/model_serving/model_server/kserve/autoscaling/keda/test_isvc_keda_scaling_gpu.py
  • tests/model_serving/model_server/kserve/inference_service_lifecycle/test_isvc_replicas_update.py
  • tests/model_serving/model_server/kserve/inference_service_lifecycle/utils.py
  • tests/model_serving/model_server/kserve/ingress/conftest.py
  • tests/model_serving/model_server/kserve/ingress/test_internal_endpoint.py
  • tests/model_serving/model_server/kserve/ingress/utils.py
  • tests/model_serving/model_server/kserve/multi_node/conftest.py
  • tests/model_serving/model_server/kserve/multi_node/test_nvidia_multi_node.py
  • tests/model_serving/model_server/kserve/multi_node/test_oci_multi_node.py
  • tests/model_serving/model_server/kserve/multi_node/utils.py
  • tests/model_serving/model_server/kserve/platform/dsc_deployment_mode/utils.py
  • tests/model_serving/model_server/kserve/platform/test_custom_resources.py
  • tests/model_serving/model_server/llmd/conftest.py
  • tests/model_serving/model_server/llmd/utils.py
  • tests/model_serving/model_server/upgrade/conftest.py
  • tests/model_serving/model_server/upgrade/utils.py
  • tests/model_serving/model_server/utils.py
  • tests/workbenches/conftest.py
  • tests/workbenches/notebook-controller/test_custom_images.py
  • tests/workbenches/test_imagestream_health.py
  • tests/workbenches/utils.py
  • utilities/certificates_utils.py
  • utilities/data_science_cluster_utils.py
  • utilities/general.py
  • utilities/inference_utils.py
  • utilities/infra.py
  • utilities/jira.py
  • utilities/kueue_utils.py
  • utilities/llmd_utils.py
  • utilities/logger.py
  • utilities/monitoring.py
  • utilities/must_gather_collector.py
  • utilities/opendatahub_logger.py
  • utilities/operator_utils.py
  • utilities/plugins/openai_plugin.py
  • utilities/plugins/tgis_grpc_plugin.py
  • utilities/registry_utils.py

jgarciao previously approved these changes Mar 24, 2026
@dbasunag
Collaborator Author

/build-push-pr-image

@github-actions

Status of building tag : skipped.
Status of pushing tag to image registry: skipped.

fege previously approved these changes Mar 24, 2026
Contributor

@fege fege left a comment


/lgtm

Signed-off-by: Debarati Basu-Nag <dbasunag@redhat.com>
@dbasunag
Collaborator Author

/build-push-pr-image

@github-actions

Status of building tag pr-1176: success.
Status of pushing tag pr-1176 to image registry: success.

@jgarciao jgarciao self-requested a review March 24, 2026 17:38
Contributor

@sheltoncyril sheltoncyril left a comment


/lgtm

@dbasunag dbasunag merged commit a84ef19 into opendatahub-io:main Mar 24, 2026
12 checks passed
@dbasunag dbasunag deleted the logging branch March 24, 2026 18:28
@github-actions

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.


5 participants