fix: Use structlog as the logging package, since simple_logger is not agent friendly #1176
dbasunag merged 7 commits into opendatahub-io:main
Conversation
Signed-off-by: Debarati Basu-Nag <dbasunag@redhat.com>
Note: Reviews paused — this branch appears to be under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 1 passed
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
utilities/jira.py (1)

111: ⚠️ Potential issue | 🔴 Critical — Invalid Python 3 exception syntax will cause SyntaxError.

Line 111 uses comma-separated exception types without parentheses, which is Python 2 syntax. This will fail at module import time.

except NewConnectionError, JIRAError, RequestsConnectionError:

Should be:

except (NewConnectionError, JIRAError, RequestsConnectionError):

🐛 Proposed fix

- except NewConnectionError, JIRAError, RequestsConnectionError:
+ except (NewConnectionError, JIRAError, RequestsConnectionError):

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `utilities/jira.py` at line 111, replace the comma-separated except clause referencing NewConnectionError, JIRAError, and RequestsConnectionError with the Python 3 tuple form (i.e., except (NewConnectionError, JIRAError, RequestsConnectionError):), and if the block needs the exception object for logging or handling, capture it with "as e".

utilities/monitoring.py (1)
78-97: ⚠️ Potential issue | 🟡 Minor — Potential UnboundLocalError if TimeoutSampler raises before first yield.

If TimeoutSampler raises TimeoutExpiredError before yielding any sample (e.g., immediate timeout or exception in func), sample will be undefined at line 96.

Proposed fix

+ sample = None
  try:
      for sample in TimeoutSampler(
          wait_timeout=timeout,
          sleep=15,
          func=field_getter,
          prometheus=prometheus,
          metrics_query=metrics_query,
      ):

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `utilities/monitoring.py` around lines 78-97, the except block can reference sample before it is defined if TimeoutSampler raises before the first yield; initialize a sentinel (e.g., sample = None) before the try, then have the except handler log the sentinel value together with expected_value, updating all LOGGER.error and LOGGER.info references that use sample accordingly.

tests/model_serving/model_server/kserve/inference_service_lifecycle/test_isvc_replicas_update.py (1)
45: ⚠️ Potential issue | 🟡 Minor — Missing f-string prefix in assertion message.

The string lacks the f prefix, so {pod.name for pod in isvc_pods} will be printed literally instead of being evaluated.

Proposed fix

- assert len(isvc_pods) == 2, "Expected 2 inference pods, existing pods: {pod.name for pod in isvc_pods}"
+ assert len(isvc_pods) == 2, f"Expected 2 inference pods, existing pods: {[pod.name for pod in isvc_pods]}"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_serving/model_server/kserve/inference_service_lifecycle/test_isvc_replicas_update.py` at line 45, prefix the assertion message with f so the actual pod names are interpolated into the error message for easier debugging.

utilities/infra.py (1)
972-973: ⚠️ Potential issue | 🔴 Critical — Invalid Python 3 syntax; SyntaxError at runtime.

except A, B: is Python 2 syntax. Python 3 requires parentheses for multiple exception types.

Proposed fix

- except ResourceNotFoundError, NotFoundError:
+ except (ResourceNotFoundError, NotFoundError):

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `utilities/infra.py` around lines 972-973, change the Python 2-style "except ResourceNotFoundError, NotFoundError:" to the tuple form "except (ResourceNotFoundError, NotFoundError):" in the try/except that logs "Pod {pod.name} is deleted".

utilities/plugins/openai_plugin.py (2)
107-109: ⚠️ Potential issue | 🔴 Critical — Invalid Python 3 syntax; SyntaxError at runtime.

except A, B: is Python 2 syntax. Python 3 requires parentheses for multiple exception types.

Proposed fix

- except requests.exceptions.RequestException, json.JSONDecodeError:
+ except (requests.exceptions.RequestException, json.JSONDecodeError):

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `utilities/plugins/openai_plugin.py` around lines 107-109, locate the try/except around streaming requests where requests.exceptions.RequestException and json.JSONDecodeError are caught and replace the comma-separated form with the parenthesized tuple of exception types; keep the LOGGER reference and re-raise as before (optionally add exc_info=True to LOGGER.error for better diagnostics).
140-141: ⚠️ Potential issue | 🔴 Critical — Same Python 3 syntax error as line 107.

Proposed fix

- except requests.exceptions.RequestException, json.JSONDecodeError:
+ except (requests.exceptions.RequestException, json.JSONDecodeError):

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `utilities/plugins/openai_plugin.py` around lines 140-141, change the comma-separated except clause to the tuple form "except (requests.exceptions.RequestException, json.JSONDecodeError):" and keep the existing LOGGER.exception("Request error") inside that block; confirm json.JSONDecodeError matches your imports (use json.decoder.JSONDecodeError if they differ).
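Several of the findings above are the same Python 2 to Python 3 except-clause migration. A standalone sketch of the difference, using builtin exceptions rather than the project's actual imports:

```python
# Python 2 allowed `except TypeError, e:` (the comma bound the instance).
# In Python 3 the comma form is a SyntaxError; multiple exception types
# must be grouped in a tuple, with `as` binding the instance.

def parse_port(raw: str) -> int:
    try:
        port = int(raw)
        if not 0 < port < 65536:
            raise ValueError(f"port out of range: {port}")
        return port
    # Correct Python 3 form: tuple of types, optional `as` binding.
    except (TypeError, ValueError) as exc:
        print(f"invalid port {raw!r}: {exc}")
        return -1


print(parse_port("8080"))  # valid -> 8080
print(parse_port("abc"))   # ValueError from int() -> -1
```

Because the comma form fails at compile time, a module containing it breaks on import even if the except block is never reached — which is why these findings are flagged Critical.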
🧹 Nitpick comments (12)
utilities/kueue_utils.py (1)

11-13: Add configure_third_party=False to the module-level logger initialization.

Line 13 calls get_logger(name=__name__) with the default configure_third_party=True, triggering global logging reconfiguration on every import. This pattern is pervasive across 130+ modules in the codebase. Centralize third-party logging setup in a single bootstrap location and opt out at module level:

LOGGER = get_logger(name=__name__, configure_third_party=False)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `utilities/kueue_utils.py` around lines 11-13, change the LOGGER assignment that uses get_logger to pass configure_third_party=False so this module opts out of configuring third-party logging, and centralize that setup in your bootstrap code.

tests/model_registry/mcp_servers/search/test_keyword_search.py (1)
7-9: Unused logger.

LOGGER is defined but never used in this file. Remove if not needed.

Proposed fix

  from tests.model_registry.mcp_servers.constants import CALCULATOR_SERVER_NAME
  from tests.model_registry.utils import execute_get_command
- from utilities.opendatahub_logger import get_logger
-
- LOGGER = get_logger(name=__name__)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_registry/mcp_servers/search/test_keyword_search.py` around lines 7-9, delete the get_logger import and the LOGGER = get_logger(name=__name__) line, since LOGGER is never referenced; if logging is needed later, reintroduce get_logger and LOGGER where actually used (search for LOGGER references before removal).

tests/model_serving/model_runtime/vllm/toolcalling/test_granite_3_2_8b_instruct_preview.py (1)
21-23: Unused logger.

LOGGER is defined but never used in this file. Remove the dead code.

Proposed fix

  from utilities.constants import KServeDeploymentType
- from utilities.opendatahub_logger import get_logger
-
- LOGGER = get_logger(name=__name__)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_serving/model_runtime/vllm/toolcalling/test_granite_3_2_8b_instruct_preview.py` around lines 21-23, remove the get_logger import and the unused LOGGER definition so no unused imports or variables are left behind, keeping the rest of the test file unchanged.

tests/model_explainability/evalhub/conftest.py (1)
12-15: Unused logger.

LOGGER is defined but never used in this file. Remove if not needed, or confirm it's intended for future use.

Proposed fix

  from utilities.certificates_utils import create_ca_bundle_file
  from utilities.constants import Timeout
- from utilities.opendatahub_logger import get_logger
  from utilities.resources.evalhub import EvalHub
-
- LOGGER = get_logger(name=__name__)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_explainability/evalhub/conftest.py` around lines 12-15, either remove the unused LOGGER and the get_logger import, or, if the logger is intentionally reserved, add a short comment or a minimal usage (e.g., a debug log) to avoid lint warnings; ensure no unused-import warnings remain.

tests/model_registry/test_security.py (1)
35-40: verify=False disables TLS certificate validation (CWE-295).

This is acceptable for test environments with self-signed certificates, but ensure this pattern doesn't propagate to production code. Consider using a test CA bundle or the REQUESTS_CA_BUNDLE environment variable for more realistic security testing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_registry/test_security.py` around lines 35-40, avoid verify=False in the requests.get call: pass a test CA bundle (verify='/path/to/test_ca_bundle.pem'), set REQUESTS_CA_BUNDLE in the test environment, or use a fixture/mocking (e.g., requests_mock) to simulate TLS, so certificate validation is preserved while still allowing self-signed cert testing.

tests/model_serving/model_server/upgrade/utils.py (1)
201-205: Consider moving the import to module level.

Importing inside the function adds overhead on each invocation. Additionally, the variable naming (logger) is inconsistent with other modules in this PR, which use uppercase LOGGER.

♻️ Suggested refactor

Move the import to the top of the file with other imports:

+ from utilities.opendatahub_logger import get_logger
+
+ logger = get_logger(name=__name__)
+
  # ... existing code ...
  def verify_metrics_retained(
      prometheus: Prometheus,
      query: str,
      min_value: int,
      timeout: int = 240,
  ) -> None:
      # ...
      from timeout_sampler import TimeoutExpiredError, TimeoutSampler
-     from utilities.opendatahub_logger import get_logger
-
-     logger = get_logger(name=__name__)
      try:

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_serving/model_server/upgrade/utils.py` around lines 201-205, move the TimeoutExpiredError, TimeoutSampler, and get_logger imports out of the function to module level with the other imports, rename the variable to LOGGER to match project convention (LOGGER = get_logger(name=__name__)), and update any local references to logger accordingly.

tests/model_serving/model_runtime/rhoai_upgrade/test_upgrade.py (1)
10-12: Avoid import-order-dependent global logging setup from test modules.

Line 12 calls get_logger with defaults; in utilities/opendatahub_logger.py, the default configure_third_party=True can trigger global logging reconfiguration during module import. In test suites, this makes logging setup depend on import order. Prefer disabling third-party setup at leaf modules and perform it once in a dedicated bootstrap fixture.

Suggested change

- LOGGER = get_logger(name=__name__)
+ LOGGER = get_logger(name=__name__, configure_third_party=False)

As per coding guidelines, "REVIEW PRIORITIES: 3. Bug-prone patterns and error handling gaps".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_serving/model_runtime/rhoai_upgrade/test_upgrade.py` around lines 10-12, call get_logger with configure_third_party=False (or remove the module-level LOGGER and obtain a logger inside a fixture that runs once) so tests do not perform global logging reconfiguration on import.

tests/model_serving/model_runtime/vllm/basic_model_deployment/test_granite_7b_redhat_lab.py (1)
18-20: Avoid import-time global logging reconfiguration from leaf test modules.

get_logger() defaults to configure_third_party=True, so this module-level call can trigger global logging setup during import. With this pattern repeated across many files, logging behavior becomes import-order dependent. Configure third-party logging once in a central bootstrap (e.g., the top-level conftest.py) and use get_logger(name=__name__, configure_third_party=False) in leaf modules.

Suggested change

  from utilities.opendatahub_logger import get_logger

- LOGGER = get_logger(name=__name__)
+ LOGGER = get_logger(name=__name__, configure_third_party=False)

As per coding guidelines, REVIEW PRIORITIES: 2. Architectural issues and anti-patterns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_serving/model_runtime/vllm/basic_model_deployment/test_granite_7b_redhat_lab.py` around lines 18-20, change the get_logger call to pass configure_third_party=False so the test file does not reconfigure third-party logging at import time, and ensure third-party logging is configured centrally (e.g., in the top-level conftest.py) instead of in leaf modules.

tests/model_registry/model_catalog/search/utils.py (1)
22: Import execute_get_command from the owning module, not a transitive re-export.

This import now depends on tests.model_registry.model_catalog.utils re-export behavior. Use the defining module to keep dependency boundaries explicit and reduce brittle coupling.

Proposed change

- from tests.model_registry.model_catalog.utils import execute_database_query, execute_get_command, parse_psql_output
+ from tests.model_registry.model_catalog.utils import execute_database_query, parse_psql_output
+ from tests.model_registry.utils import execute_get_command

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/model_registry/model_catalog/search/utils.py` at line 22, import execute_get_command directly from the module that defines it rather than via the re-export: remove it from the model_catalog.utils import line, add a separate import from its owning module, keep the other imports unchanged, and run tests to ensure no import errors.

utilities/opendatahub_logger.py (3)
350-363: Test function in production module.

test_third_party_logging() is a demonstration function that uses print() without assertions. Move it to a proper test file under tests/ or convert it to a documented example. As-is, it won't be discovered by pytest and clutters the production API.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `utilities/opendatahub_logger.py` around lines 350-363, remove test_third_party_logging() from the module and either move its logic into a real pytest test under tests/ (using capturing and assertions) or convert it to a documented example; update or remove any imports it relies on (get_logger, logging) and ensure no leftover top-level test_* function names remain, to prevent confusion with pytest discovery.
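Converting such a print()-based demo into an assertion-based test might look like the following sketch; the logger name, message, and extra field are illustrative, since the original demo's body isn't shown here:

```python
import logging

def test_third_party_json_logging():
    # Hypothetical replacement for the print()-based demo: exercise a
    # third-party-style logger and assert on captured records instead.
    logger = logging.getLogger("urllib3.connectionpool")
    logger.setLevel(logging.WARNING)
    captured: list[logging.LogRecord] = []

    class ListHandler(logging.Handler):
        def emit(self, record: logging.LogRecord) -> None:
            captured.append(record)

    handler = ListHandler()
    logger.addHandler(handler)
    try:
        logger.warning("retrying request", extra={"attempt": 2})
    finally:
        logger.removeHandler(handler)

    assert len(captured) == 1
    assert captured[0].getMessage() == "retrying request"
    assert captured[0].attempt == 2  # `extra` fields land on the record
```

In a real suite, pytest's caplog fixture replaces the hand-rolled ListHandler, but the standalone form above makes the capture mechanics explicit.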
267-288: Global monkey-patch of logging.getLogger has side effects.

Patching logging.getLogger at line 288 is a significant global mutation that affects all code, including third-party libraries. The patched version also stores state via logger._json_configured (line 284), which won't survive pickling or certain testing scenarios. Consider documenting this behavior prominently or using a less invasive approach like logging.setLoggerClass().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `utilities/opendatahub_logger.py` around lines 267-288, avoid globally monkey-patching logging.getLogger: use logging.setLoggerClass or a custom Logger subclass that applies ThirdPartyJSONFormatter, avoid relying on logger._json_configured for persistent state (use logger.manager or handler-level flags), document any remaining global behavior, and update references to original_getLogger, patched_getLogger, ThirdPartyJSONFormatter, and logger._json_configured when refactoring.
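A minimal sketch of the setLoggerClass alternative suggested above; the record-tagging behavior is illustrative, not the module's actual ThirdPartyJSONFormatter wiring:

```python
import logging

class TaggedLogger(logging.Logger):
    """Logger subclass applied to every logger created *after* registration.

    Unlike monkey-patching logging.getLogger, this hooks the documented
    extension point and leaves the logging module's public API untouched.
    """

    def makeRecord(self, *args, **kwargs):
        record = super().makeRecord(*args, **kwargs)
        record.json_ready = True  # stand-in for real JSON-formatter wiring
        return record

# Register once, at bootstrap time, before other modules create loggers.
logging.setLoggerClass(TaggedLogger)

log = logging.getLogger("example.service")
assert isinstance(log, TaggedLogger)
```

Note that setLoggerClass only affects loggers created after registration, which is exactly why this belongs in a single bootstrap location rather than in every module.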
204-226: Invalid # noqa: FCN001 rule codes.

FCN001 is not a recognized Ruff rule. Either remove these noqa comments or add FCN001 to lint.external in your Ruff configuration if this rule comes from another linter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `utilities/opendatahub_logger.py` around lines 204-226, the info, debug, warning, warn, error, critical, and exception methods include invalid "# noqa: FCN001" comments; either delete the trailing "# noqa: FCN001" on each line or, if FCN001 comes from an external linter, add it to lint.external in the Ruff config so the linter no longer reports unknown rule codes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyproject.toml`:
- Line 53: Update the structlog dependency in pyproject.toml: replace the
current constraint "structlog>=24.1.0" with "structlog>=25.4.0" so the package
supports Python 3.14 and satisfies the project requires-python "==3.14.*"
constraint; locate the dependency entry that currently reads structlog>=24.1.0
and change it to structlog>=25.4.0, then run dependency resolution to verify no
transitive conflicts.
In
`@tests/model_registry/model_catalog/catalog_config/test_catalog_source_merge.py`:
- Line 4: The import for execute_get_command is pointing to the wrong module;
update the import statement that currently references
tests.model_registry.model_catalog.utils to instead import execute_get_command
from tests.model_registry.utils so the test module uses the correct function
definition (look for the import line that names execute_get_command in the
test_catalog_source_merge file and change its module path).
In `@tests/model_registry/model_catalog/catalog_config/utils.py`:
- Line 26: The StructlogWrapper.info() calls (e.g., LOGGER.info("Found expected
number of models: %s for source: %s", expected_count, source_label)) are logging
literal "%s" because StructlogWrapper._log() drops positional args; fix _log()
in the StructlogWrapper class to interpolate positional format args before
calling structlog (e.g., compute msg_str = msg % args if args else msg, then
pass event=msg_str and **kwargs to the underlying logger) so existing callers
continue to work, or alternatively update callers (like the LOGGER.info call) to
use f-strings/keyword args consistently; prefer updating StructlogWrapper._log()
to format msg with args to preserve backward compatibility.
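The %s-interpolation fix described above can be sketched as follows; StructlogWrapper here is a hypothetical stand-in for the project's class, showing only the _log formatting change:

```python
import logging

class StructlogWrapper:
    """Hypothetical stdlib-compatible wrapper (illustrative only).

    The key detail: positional args must be interpolated into the message
    before handing off to the backend, otherwise callers using the stdlib
    style LOGGER.info("found %s models", n) would log a literal "%s".
    """

    def __init__(self, name: str) -> None:
        self.name = name
        self._events: list[str] = []  # stands in for the structlog backend

    def _log(self, level: int, msg: str, *args, **kwargs) -> None:
        # Interpolate stdlib-style positional args before emitting;
        # kwargs would be passed through as structlog key-value pairs.
        event = msg % args if args else msg
        self._events.append(event)

    def info(self, msg: str, *args, **kwargs) -> None:
        self._log(logging.INFO, msg, *args, **kwargs)


log = StructlogWrapper(__name__)
log.info("Found expected number of models: %s for source: %s", 3, "hf")
print(log._events[-1])  # → Found expected number of models: 3 for source: hf
```

Fixing _log() rather than the call sites preserves backward compatibility for every existing stdlib-style caller, which is why the prompt prefers that option.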
In `@tests/model_registry/model_catalog/huggingface/test_huggingface_negative.py`:
- Around line 7-9: The module-level call LOGGER = get_logger(name=__name__)
causes global side effects during import by invoking setup_global_json_logging;
change the initialization to avoid invoking third-party reconfiguration at
import time by calling get_logger with configure_third_party=False (i.e., LOGGER
= get_logger(name=__name__, configure_third_party=False)), or alternatively
remove module-level logger initialization and obtain the logger from a
session-scoped fixture that centralizes logging setup to prevent repeated global
patches; target the LOGGER symbol and the get_logger call when making this
change.
In
`@tests/model_serving/model_runtime/vllm/basic_model_deployment/test_merlinite_7b_lab.py`:
- Around line 18-20: The test module performs process-wide logging
reconfiguration by calling get_logger(name=__name__) with the default
configure_third_party=True; to prevent cross-test coupling change the call that
initializes LOGGER to pass configure_third_party=False so
get_logger(name=__name__, configure_third_party=False) is used (location: LOGGER
initialization in this file referencing utilities.opendatahub_logger.get_logger)
ensuring the test import does not mutate global logging configuration.
In `@tests/workbenches/test_imagestream_health.py`:
- Around line 10-13: The module-level LOGGER initialization is triggering global
logging reconfiguration via get_logger() (which uses the module-level flag
_global_logging_configured); change the call to get_logger to disable
third-party reconfiguration by passing configure_third_party=False when creating
LOGGER (i.e., update LOGGER = get_logger(name=__name__) to pass
configure_third_party=False), and move global/third-party logging setup into a
session-scoped pytest fixture (create a single session fixture that calls
get_logger(..., configure_third_party=True) once) so tests do not reconfigure
logging on import order.
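The session-scoped fixture suggested above might look like this sketch in a top-level conftest.py; the quieted logger names are illustrative, and the project's get_logger(..., configure_third_party=True) call would slot in where the setLevel loop is:

```python
# conftest.py (top level) -- hypothetical sketch
import logging

import pytest

@pytest.fixture(scope="session", autouse=True)
def configure_third_party_logging():
    # Runs exactly once per test session: centralize third-party logging
    # setup here instead of reconfiguring it on every leaf-module import.
    for name in ("urllib3", "kubernetes"):
        logging.getLogger(name).setLevel(logging.WARNING)
    yield
```

Leaf test modules would then call get_logger(name=__name__, configure_third_party=False), making logging behavior independent of test import order.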
In `@utilities/opendatahub_logger.py`:
- Around line 150-160: Syntax error: the except clause uses Python 2-style
"except TypeError, ValueError" which must be changed to modern tuple syntax;
update the exception handler around the json.dumps(log_entry) call in
utilities/opendatahub_logger.py so it reads except (TypeError, ValueError): and
leave the fallback_entry construction and return json.dumps(fallback_entry)
unchanged (locate the try/except that surrounds json.dumps(log_entry) and modify
only the except clause).
- Around line 297-303: The function that defines parameters level, log_to_file,
log_file, filename, log_to_console, json_format, and configure_third_party
currently ignores most of them; update it so either (A) the parameters are
actually applied (e.g., use level to set logger level, honor
log_to_file/log_file/filename to attach a FileHandler, use log_to_console to
attach a StreamHandler, and respect json_format or map it to a formatter) or (B)
emit a clear deprecation/warning when any of those parameters are passed with
non-default values (using the warnings module) and document that
configure_third_party remains the only effective flag; locate the parameter list
in utilities/opendatahub_logger.py and change the implementation in that same
function to apply the chosen fix for level, log_to_file, log_file, filename,
log_to_console, and json_format.
- Around line 179-185: Structlog is being reconfigured on each logger
instantiation in StructlogWrapper.__init__ by calling structlog.configure(),
which causes global side effects and race conditions; move that call into a
one-time module-level initialization guarded by the existing
_global_logging_configured flag (or create one) so configuration is executed
only once (e.g., perform structlog.configure(...) at import time or inside a
protected init function), then have StructlogWrapper.__init__ and get_logger()
only bind/return loggers without calling structlog.configure again.
---
Outside diff comments:
In
`@tests/model_serving/model_server/kserve/inference_service_lifecycle/test_isvc_replicas_update.py`:
- Line 45: The assertion message in test_isvc_replicas_update uses a
format-expression but is missing the f-string prefix, so the set comprehension
"{pod.name for pod in isvc_pods}" is not evaluated; update the assertion in the
test (the line asserting len(isvc_pods) == 2) to use an f-string (prefix the
string with f) so the actual pod names are interpolated into the error message
for easier debugging.
In `@utilities/infra.py`:
- Around line 972-973: The except clause uses Python 2 syntax "except
ResourceNotFoundError, NotFoundError:" which raises a SyntaxError; change it to
the Python 3 form using a tuple: "except (ResourceNotFoundError,
NotFoundError):" in the try/except that logs "Pod {pod.name} is deleted" (look
for the except block referencing ResourceNotFoundError, NotFoundError and
LOGGER).
In `@utilities/jira.py`:
- Line 111: Update the invalid Python 2-style exception clause in
utilities/jira.py: replace the comma-separated except clause that references
NewConnectionError, JIRAError, and RequestsConnectionError with a Python 3
compatible tuple form (i.e., use except (NewConnectionError, JIRAError,
RequestsConnectionError):), and if the block needs the exception object for
logging or handling, capture it with an "as e" (e.g., except (... ) as e) inside
the same try/except surrounding the JIRA interaction or the function where this
clause appears.
In `@utilities/monitoring.py`:
- Around line 78-97: The except block can reference sample before it's defined
if TimeoutSampler raises before first yield; update the logic around
TimeoutSampler/TimeoutExpiredError by initializing a sentinel (e.g., last_value
= None) before the try and assigning last_value = sample inside the loop (or set
sample = None before try), then change the except handler to log last_value (or
the sentinel) instead of sample and include expected_value; ensure you update
all LOGGER.error and LOGGER.info references that use sample to use the
initialized variable (symbols: TimeoutSampler, TimeoutExpiredError, sample,
last_value, LOGGER, expected_value).
In `@utilities/plugins/openai_plugin.py`:
- Around line 107-109: The except clause uses Python 2 syntax ("except A, B:")
which causes a SyntaxError; update the exception handling in openai_plugin.py to
use a tuple for multiple exceptions (i.e., except
(requests.exceptions.RequestException, json.JSONDecodeError):) and ensure the
block references LOGGER and re-raises as before (you can also add exc_info=True
to LOGGER.error for better diagnostics). Locate the try/except around streaming
requests where requests.exceptions.RequestException and json.JSONDecodeError are
caught and replace the comma-separated form with the parenthesized tuple of
exception types.
- Around line 140-141: The except clause uses invalid Python 3 syntax; update
the exception handling in openai_plugin.py to catch both exceptions using a
tuple (e.g., change the incorrect "except requests.exceptions.RequestException,
json.JSONDecodeError:" to a tuple form) so the block reads "except
(requests.exceptions.RequestException, json.JSONDecodeError):" and keep the
existing LOGGER.exception("Request error") inside that block; ensure json is the
expected module (json.JSONDecodeError) or use json.decoder.JSONDecodeError if
your imports differ.
---
Nitpick comments:
In `@tests/model_explainability/evalhub/conftest.py`:
- Around line 12-15: LOGGER is defined via get_logger(name=__name__) in
tests/model_explainability/evalhub/conftest.py but never used; either remove the
unused LOGGER and the get_logger import (and remove any unused EvalHub import if
applicable) or, if the logger is intentionally reserved, keep it and add a short
comment or a usage (e.g., a minimal debug log) to avoid lint warnings; update
references to the LOGGER symbol accordingly and ensure no unused-import warnings
remain.
In `@tests/model_registry/mcp_servers/search/test_keyword_search.py`:
- Around line 7-9: Remove the unused LOGGER import and variable: delete the
get_logger import from utilities.opendatahub_logger and the LOGGER =
get_logger(name=__name__) line in
tests/model_registry/mcp_servers/search/test_keyword_search.py since LOGGER is
never referenced; if logging is needed later, reintroduce get_logger and LOGGER
where actually used (search for LOGGER references before removal).
In `@tests/model_registry/model_catalog/search/utils.py`:
- Line 22: The file currently imports execute_get_command transitively from
tests.model_registry.model_catalog.utils; change the import so
execute_get_command is imported directly from the module that defines it (not
via the re-export). In the import line that references execute_database_query,
execute_get_command, parse_psql_output (in search.utils), remove
execute_get_command from that re-export and add a separate import that points to
its owning module (the module that defines execute_get_command). Keep the other
imports unchanged and run tests to ensure no import errors.
In `@tests/model_registry/test_security.py`:
- Around line 35-40: The test currently calls requests.get(...) with
verify=False which disables TLS validation; update the test in
tests/model_registry/test_security.py around the requests.get invocation to
avoid using verify=False in test logic — instead configure a test CA bundle
(pass verify='/path/to/test_ca_bundle.pem') or set REQUESTS_CA_BUNDLE in the
test environment, or use a test fixture/mocking (e.g., requests_mock) to
simulate TLS without disabling verification, ensuring the call in the test
(requests.get) preserves certificate validation while still allowing self-signed
cert testing.
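One way to follow the comment without touching each call site is to resolve the bundle path from the environment, keeping `verify` enabled. This is a sketch; `ca_bundle_for_tests`, the bundle paths, and the default are all illustrative, and note that `requests` only honors `REQUESTS_CA_BUNDLE` on its own when no explicit `verify=` argument is passed:

```python
import os


def ca_bundle_for_tests(default="/etc/ssl/certs/ca-certificates.crt"):
    # Prefer a test CA bundle from the environment so requests keeps
    # certificate validation on while still trusting self-signed test certs.
    return os.environ.get("REQUESTS_CA_BUNDLE", default)


os.environ["REQUESTS_CA_BUNDLE"] = "/tmp/test_ca_bundle.pem"
print(ca_bundle_for_tests())
```

The test would then call `requests.get(url, verify=ca_bundle_for_tests())` instead of `verify=False`.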
In `@tests/model_serving/model_runtime/rhoai_upgrade/test_upgrade.py`:
- Around line 10-12: The test module currently calls get_logger(...) at import
time (LOGGER = get_logger(name=__name__)), which can trigger global logging
reconfiguration via the default configure_third_party=True; change the call in
the test to disable third-party configuration by calling get_logger with
configure_third_party=False (or remove the module-level LOGGER and obtain a
logger inside a test fixture/setup that runs once), ensuring tests do not
perform global logging reconfiguration on import; reference get_logger and the
module-level LOGGER in the test to locate and update the call.
In
`@tests/model_serving/model_runtime/vllm/basic_model_deployment/test_granite_7b_redhat_lab.py`:
- Around line 18-20: The module-level call to get_logger() (creating LOGGER)
triggers global logging configuration because get_logger defaults to
configure_third_party=True; change the call to get_logger(name=__name__,
configure_third_party=False) so the test file (and its LOGGER variable) does not
reconfigure third-party logging at import time, and ensure third-party logging
is configured centrally (e.g., in top-level conftest.py) instead of in leaf
modules.
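The pattern both comments describe can be sketched as follows. The signature of the project's `get_logger` is assumed from the review text; the `basicConfig` call stands in for whatever global setup the real helper performs:

```python
import logging


def get_logger(name, configure_third_party=True):
    # Sketch of the reviewed API: only the default path performs global
    # (third-party) logging setup; leaf test modules opt out so that
    # importing them never reconfigures logging process-wide.
    if configure_third_party:
        logging.basicConfig(level=logging.INFO)
    return logging.getLogger(name)


# What a leaf test module would do at import time:
LOGGER = get_logger(name="tests.leaf_module", configure_third_party=False)
print(LOGGER.name)
```

The one place that should pass `configure_third_party=True` (or rely on the default) is central bootstrap code such as the top-level conftest.py.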
In
`@tests/model_serving/model_runtime/vllm/toolcalling/test_granite_3_2_8b_instruct_preview.py`:
- Around line 21-23: The file defines an unused logger variable LOGGER created
via get_logger(name=__name__) which is dead code; remove the import of
get_logger and the LOGGER definition (symbols: get_logger and LOGGER) from
tests/model_serving/model_runtime/vllm/toolcalling/test_granite_3_2_8b_instruct_preview.py
so there are no unused imports or variables left behind, keeping the rest of the
test file unchanged.
In `@tests/model_serving/model_server/upgrade/utils.py`:
- Around line 201-205: The import of TimeoutExpiredError, TimeoutSampler and
get_logger and the instantiation logger = get_logger(name=__name__) should be
moved out of any function and placed at module level with the other imports;
also rename the variable to LOGGER to match project convention (use LOGGER =
get_logger(name=__name__)), and update any local references to logger to use
LOGGER and adjust import locations so TimeoutExpiredError and TimeoutSampler are
imported once at the top of the module.
In `@utilities/kueue_utils.py`:
- Around line 11-13: The module-level logger is calling get_logger with default
configure_third_party=True which causes global logging reconfiguration on
import; update the LOGGER initialization to call get_logger(name=__name__,
configure_third_party=False) so this module opts out of configuring third-party
logging and centralizes that setup in your bootstrap code — change the LOGGER
assignment that uses get_logger to pass configure_third_party=False.
In `@utilities/opendatahub_logger.py`:
- Around line 350-363: The function test_third_party_logging() is a demo test
left in production; remove it from the module and either move its logic into a
real pytest test under tests/ (e.g., create tests/test_third_party_logging.py
using pytest capturing/assertions and import get_logger) or convert it to an
example in the README or a documented examples/ file; update or remove any
imports it relies on (get_logger, logging) and ensure no leftover top-level
test_* function names remain in utilities/opendatahub_logger.py to prevent
confusion with pytest discovery.
- Around line 267-288: The code globally monkey-patches logging.getLogger by
assigning patched_getLogger over original_getLogger which causes wide side
effects and stores state on logger via logger._json_configured; instead, avoid
global monkey-patching and switch to a less invasive approach: replace the
assignment of logging.getLogger with using logging.setLoggerClass or a custom
Logger subclass that applies ThirdPartyJSONFormatter in its __init__/handle
methods, remove or avoid relying on logger._json_configured for persistent state
(use logger.manager or handler-level flags), and document any remaining global
behavior; update references to original_getLogger, patched_getLogger,
ThirdPartyJSONFormatter, and logger._json_configured when refactoring.
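The `setLoggerClass` route suggested above avoids replacing `logging.getLogger` entirely. A minimal sketch (the subclass name and where the JSON formatting hooks in are assumptions; the real refactor would attach `ThirdPartyJSONFormatter` to handlers created for these loggers):

```python
import logging


class ThirdPartyJSONLogger(logging.Logger):
    # Installed via logging.setLoggerClass instead of monkey-patching
    # logging.getLogger: every logger created after the call uses this
    # class, so per-logger JSON setup can live here rather than in a
    # patched getLogger or in ad-hoc flags like logger._json_configured.
    pass


logging.setLoggerClass(ThirdPartyJSONLogger)
log = logging.getLogger("some.third.party")
print(type(log).__name__)
```

Loggers that already existed before the `setLoggerClass` call keep their original class, so this should run as early as possible in bootstrap code.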
- Around line 204-226: The methods info, debug, warning, warn, error, critical,
and exception in this file include invalid "# noqa: FCN001" comments; remove
those invalid noqa markers (or if FCN001 is from an external linter, add
"FCN001" to lint.external in the Ruff config) so the linter no longer reports
unknown rule codes—locate the calls to self._log and the warn wrapper (methods
named info, debug, warning, warn, error, critical, exception, and the helper
_log) and either delete the trailing "# noqa: FCN001" on each line or update
your Ruff config accordingly.
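If FCN001 is in fact emitted by an external tool rather than Ruff, the alternative the comment mentions is registering it as an external code so Ruff stops flagging the `# noqa: FCN001` markers as unknown. A sketch of the pyproject.toml fragment (assuming a Ruff version that uses the `[tool.ruff.lint]` table):

```toml
# pyproject.toml — declare FCN001 as owned by an external tool so that
# "# noqa: FCN001" comments are no longer reported as unknown rule codes.
[tool.ruff.lint]
external = ["FCN001"]
```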
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: ab7d1aa6-93d5-4962-84ee-f368942f6669
⛔ Files ignored due to path filters (2)
- `.github/workflows/scripts/pr_workflow.py` is excluded by `!.github/**`
- `uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (188)
pyproject.toml, plus test modules (and their conftest.py files) under tests/cluster_health/, tests/fixtures/, tests/llama_stack/, tests/model_explainability/, tests/model_registry/, tests/model_serving/, tests/workbenches/, and the utilities modules certificates_utils.py, data_science_cluster_utils.py, general.py, inference_utils.py, infra.py, jira.py, kueue_utils.py, llmd_utils.py, logger.py, monitoring.py, must_gather_collector.py, opendatahub_logger.py, operator_utils.py, registry_utils.py, plugins/openai_plugin.py, and plugins/tgis_grpc_plugin.py.
tests/model_registry/model_catalog/catalog_config/test_catalog_source_merge.py (outdated, resolved)
tests/model_registry/model_catalog/huggingface/test_huggingface_negative.py (resolved)
tests/model_serving/model_runtime/vllm/basic_model_deployment/test_merlinite_7b_lab.py (resolved)
|
/build-push-pr-image |
|
Status of building tag : skipped. |
Signed-off-by: Debarati Basu-Nag <dbasunag@redhat.com>
|
/build-push-pr-image |
|
Status of building tag pr-1176: success. |
|
Status of building tag latest: success. |
Pull Request
Summary
Related Issues
How it has been tested
Additional Requirements
Summary by CodeRabbit