Updates for GA of Model Registry #571
📝 Walkthrough

Introduces conditional model registry namespace reconfiguration in a session-scoped fixture with readiness waits and guaranteed cleanup. Removes upgrade-specific fixtures and updates upgrade tests to no longer depend on them.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Supported labels: `/hold`, `/cherry-pick`, `/wip`, `/lgtm`, `/verified`, `/build-push-pr-image`
```python
for component_name in component_patch:
    dsc_resource.wait_for_condition(
        condition=DscComponents.COMPONENT_MAPPING[component_name], status="True"
```

```python
original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
```
This can be moved inside the `if`, I believe.
```python
original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
original_namespace.delete(wait=True)
```
Do we actually need to delete it? It could be left there, since we are creating it again a few lines later.
Cleaned it up that way so there is no confusion about leftover resources.
Actionable comments posted: 2
♻️ Duplicate comments (3)
tests/model_registry/conftest.py (3)
174-176: Scope `original_namespace_name` to the custom-namespace branch

Avoid computing it when `--custom-namespace` is not set.

```diff
-original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
-if pytestconfig.option.custom_namespace:
+if pytestconfig.option.custom_namespace:
+    original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
```
195-197: Do we need to delete the original namespace up front?

Deleting and later recreating the original MR namespace adds churn and RBAC risk. If the goal is only to move MR to a custom namespace, consider keeping the original namespace intact. At minimum, guard against accidental deletion when the custom and original namespaces match.

```diff
-original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
-original_namespace.delete(wait=True)
+if original_namespace_name != py_config["model_registry_namespace"]:
+    original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
+    original_namespace.delete(wait=True)
```
175-239: Safety guard: don't delete the model-registry namespace when custom == original

If `py_config["model_registry_namespace"] == original_namespace_name`, the finally block will delete the active namespace; skip that delete (or assert/raise) when they are equal.
Location: tests/model_registry/conftest.py — updated_dsc_component_state_scope_session fixture (≈ lines 175–239).
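The equality guard described above can be sketched as a tiny predicate. This is a hypothetical illustration of the check, not the fixture's actual code; the function name and namespace values are assumptions:

```python
def is_safe_to_delete(custom_namespace: str, original_namespace: str) -> bool:
    """Return True only when deleting the original namespace cannot touch
    the namespace the custom configuration points at."""
    # If the two names are equal, the teardown would delete the active
    # model-registry namespace, so the caller should skip (or raise).
    return custom_namespace != original_namespace


# The fixture's finally block would then delete only when this returns True.
print(is_safe_to_delete("custom-mr-ns", "rhoai-model-registries"))  # True
print(is_safe_to_delete("rhoai-model-registries", "rhoai-model-registries"))  # False
```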
🧹 Nitpick comments (1)
tests/model_registry/conftest.py (1)
198-214: After re-enabling MR in the custom namespace, also wait for DSC to be READY

You already wait for pods; add a DSC readiness wait to catch operator-level reconciliation issues earlier.

```diff
 namespace = Namespace(name=py_config["model_registry_namespace"], wait_for_resource=True)
 namespace.wait_for_status(status=Namespace.Status.ACTIVE)
+# Wait for DSC to reconcile with MR re-enabled and new namespace
+for _ in range(60):  # ~5 minutes
+    try:
+        wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+        break
+    except Exception:
+        time.sleep(5)
+else:
+    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tests/model_registry/conftest.py (2 hunks)
- tests/model_registry/upgrade/conftest.py (0 hunks)
- tests/model_registry/upgrade/test_model_registry_upgrade.py (1 hunks)
💤 Files with no reviewable changes (1)
- tests/model_registry/upgrade/conftest.py
🧰 Additional context used
🧠 Learnings (8)
📓 Common learnings
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#487
File: tests/model_registry/async_job/constants.py:23-31
Timestamp: 2025-08-08T15:58:03.524Z
Learning: In opendatahub-io/opendatahub-tests, for tests under tests/model_registry/async_job (e.g., constants.py and related fixtures), runs start from a clean, known Model Registry state. Therefore, using static MODEL_ID, MODEL_VERSION_ID, and MODEL_ARTIFACT_ID values in MODEL_SYNC_CONFIG is intentional and acceptable; dynamic ID injection is not required for these async job tests (per guidance from user lugi0).
Learnt from: fege
PR: opendatahub-io/opendatahub-tests#409
File: tests/model_registry/negative_tests/test_db_migration.py:47-57
Timestamp: 2025-07-09T08:50:22.172Z
Learning: In negative tests for model registry database migration failures (like test_db_migration_negative), the model registry pod is expected to be in a failed state (not Running) when the database is dirty. The test fetches logs from the failed pod to verify the expected error message about dirty database version, so waiting for Running status would be inappropriate.
📚 Learning: 2025-07-17T15:42:23.880Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.
Applied to files:
- tests/model_registry/upgrade/test_model_registry_upgrade.py
- tests/model_registry/conftest.py
📚 Learning: 2025-08-11T12:14:34.102Z
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#504
File: tests/model_registry/upgrade/conftest.py:20-30
Timestamp: 2025-08-11T12:14:34.102Z
Learning: In tests/model_registry/upgrade/conftest.py, when using ResourceEditor to patch DataScienceCluster resources with a partial components dictionary like `{DscComponents.MODELREGISTRY: {"managementState": ...}}`, the patch is applied correctly without overwriting other components in spec.components. ResourceEditor performs the appropriate merge operation for this resource type.
Applied to files:
- tests/model_registry/upgrade/test_model_registry_upgrade.py
- tests/model_registry/conftest.py
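The merge behavior this learning relies on can be illustrated with a plain-Python sketch. `merge_patch` is a hypothetical stand-in for the recursive merge ResourceEditor performs on the DSC spec, not its actual implementation; the component dictionaries are example data:

```python
def merge_patch(spec: dict, patch: dict) -> dict:
    """Recursively merge `patch` into a copy of `spec`: only patched keys
    change, sibling keys (other components) are preserved."""
    merged = dict(spec)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_patch(merged[key], value)
        else:
            merged[key] = value
    return merged


components = {
    "modelregistry": {"managementState": "Managed", "registriesNamespace": "rhoai-model-registries"},
    "dashboard": {"managementState": "Managed"},
}
patched = merge_patch(components, {"modelregistry": {"managementState": "Removed"}})
# dashboard is untouched and registriesNamespace is preserved;
# only managementState flips to "Removed"
```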
📚 Learning: 2025-07-17T15:42:26.275Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:0-0
Timestamp: 2025-07-17T15:42:26.275Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated Model Registry instance fixtures) do not require admin_client, db_deployment_1, or db_secret_1 parameters as explicit dependencies, even though these dependencies exist implicitly through the fixture dependency chain.
Applied to files:
tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-08-11T12:14:12.803Z
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#504
File: tests/model_registry/rest_api/test_model_registry_secure_db.py:16-16
Timestamp: 2025-08-11T12:14:12.803Z
Learning: In tests/model_registry/rest_api/test_model_registry_secure_db.py, both the class-level `model_registry_instance` fixture and the method-level `deploy_secure_mysql_and_mr` fixture are intentionally required together for the secure database test setup. These fixtures serve different purposes and are not redundant.
Applied to files:
tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-07-17T15:42:04.167Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:595-612
Timestamp: 2025-07-17T15:42:04.167Z
Learning: In tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fixtures) do not require the admin_client parameter or explicit dependencies on related fixtures like db_secret_1, db_pvc_1, and db_service_1, even though the original model_registry_db_deployment fixture includes these parameters.
Applied to files:
tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-08-08T15:58:03.524Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#487
File: tests/model_registry/async_job/constants.py:23-31
Timestamp: 2025-08-08T15:58:03.524Z
Learning: In opendatahub-io/opendatahub-tests, for tests under tests/model_registry/async_job (e.g., constants.py and related fixtures), runs start from a clean, known Model Registry state. Therefore, using static MODEL_ID, MODEL_VERSION_ID, and MODEL_ARTIFACT_ID values in MODEL_SYNC_CONFIG is intentional and acceptable; dynamic ID injection is not required for these async job tests (per guidance from user lugi0).
Applied to files:
tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-07-30T14:15:25.605Z
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#429
File: tests/model_registry/rbac/test_mr_rbac_sa.py:45-45
Timestamp: 2025-07-30T14:15:25.605Z
Learning: In tests/model_registry/rbac/test_mr_rbac_sa.py, bounds checking for model_registry_instance_rest_endpoint list access is not needed because upstream fixture validation already ensures endpoints exist before the tests execute. The Model Registry setup process validates endpoint availability, making additional bounds checks redundant.
Applied to files:
tests/model_registry/upgrade/test_model_registry_upgrade.py
🧬 Code graph analysis (1)
tests/model_registry/conftest.py (3)
- utilities/infra.py (2): get_data_science_cluster (865-866), wait_for_dsc_status_ready (1045-1051)
- utilities/constants.py (2): DscComponents (157-178), ManagementState (163-165)
- tests/model_registry/utils.py (1): wait_for_pods_running (248-281)
🔇 Additional comments (2)
tests/model_registry/upgrade/test_model_registry_upgrade.py (1)
31-31: LGTM — no residual fixture references foundSearched repository for "pre_upgrade_dsc_patch" and "post_upgrade_dsc_patch" (ripgrep --hidden --no-ignore-vcs and git grep); no occurrences found.
tests/model_registry/conftest.py (1)
55-55: Incorrect — wait_for_dsc_status_ready already retries (not one-shot)

utilities/infra.py::wait_for_dsc_status_ready is decorated with `@Retry(wait_timeout=120, sleep=5, exceptions_dict={ResourceNotReadyError: []})` and raises ResourceNotReadyError to trigger retries, so it will block and retry (for up to 120s) rather than performing a single immediate check. The import/call in tests/model_registry/conftest.py is therefore valid.
Likely an incorrect or invalid review comment.
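A decorator with the behavior described above (retry a failing check until a wait timeout elapses) can be sketched like this. `retry` is a simplified, hypothetical analogue of the `@Retry` helper mentioned in the comment, not the project's actual implementation:

```python
import time
from functools import wraps


def retry(wait_timeout: float, sleep: float, exceptions: tuple = (Exception,)):
    """Re-invoke the wrapped callable until it succeeds or `wait_timeout`
    seconds elapse, sleeping `sleep` seconds between attempts and
    re-raising the last failure on timeout."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            deadline = time.monotonic() + wait_timeout
            while True:
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if time.monotonic() >= deadline:
                        raise
                    time.sleep(sleep)
        return wrapper
    return decorator
```

A readiness check decorated this way blocks for up to the timeout instead of failing on the first look, which is why a single call site in the fixture can be sufficient.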
```python
try:
    # first disable MR
    resource_editor.update(backup_resources=True)
    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
```
Replace single-shot DSC readiness check with retry loop
Prevents transient failures after patching DSC.
```diff
-wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+# Wait for DSC to reconcile after disabling MR (retry to avoid flakes)
+for _ in range(60):  # ~5 minutes
+    try:
+        wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+        break
+    except Exception:
+        time.sleep(5)
+else:
+    # Final raise with the same helper for context
+    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
```

Add once near the imports:

```python
import time
```

🤖 Prompt for AI Agents
In tests/model_registry/conftest.py around line 193, the single-shot call to
wait_for_dsc_status_ready(dsc_resource=dsc_resource) can fail transiently after
patching the DSC; replace it with a retry loop that attempts the readiness check
repeatedly until success or a timeout. Add import time near the top imports,
then implement a loop that records a start time, calls
wait_for_dsc_status_ready(dsc_resource=dsc_resource) inside a try/except (or
checks the return), sleeps a short interval (e.g., 1–2s) between attempts, and
breaks on success; if the timeout is exceeded, raise an informative exception so
the test fails deterministically.
```python
finally:
    resource_editor.restore()
    Namespace(name=py_config["model_registry_namespace"]).delete(wait=True)
    # create the original namespace object again, so that we can wait for it to be created first
    original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
    original_namespace.wait_for_status(status=Namespace.Status.ACTIVE)
    wait_for_pods_running(
        admin_client=admin_client,
        namespace_name=py_config["applications_namespace"],
        number_of_consecutive_checks=6,
    )
```
Restore order: recreate original NS before restoring DSC; add readiness + MR-NS pod waits
Current flow restores DSC while the original namespace may not exist, causing noisy reconciliation. Also, you only wait for pods in the applications namespace on restore.
```diff
-    yield dsc_resource
-finally:
-    resource_editor.restore()
-    Namespace(name=py_config["model_registry_namespace"]).delete(wait=True)
-    # create the original namespace object again, so that we can wait for it to be created first
-    original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
-    original_namespace.wait_for_status(status=Namespace.Status.ACTIVE)
-    wait_for_pods_running(
-        admin_client=admin_client,
-        namespace_name=py_config["applications_namespace"],
-        number_of_consecutive_checks=6,
-    )
+    yield dsc_resource
+finally:
+    # First, delete the custom namespace while MR is still Removed (after exiting the inner editor)
+    if py_config["model_registry_namespace"] != original_namespace_name:
+        Namespace(name=py_config["model_registry_namespace"]).delete(wait=True)
+    # Ensure the original namespace exists before pointing DSC back to it
+    original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
+    original_namespace.wait_for_status(status=Namespace.Status.ACTIVE)
+    # Restore DSC to its original spec (likely Managed in the original namespace)
+    resource_editor.restore()
+    # Wait for DSC to become READY again (retry to avoid flakes)
+    for _ in range(60):
+        try:
+            wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+            break
+        except Exception:
+            time.sleep(5)
+    else:
+        wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+    # Wait for pods to settle in both app and MR namespaces
+    wait_for_pods_running(
+        admin_client=admin_client,
+        namespace_name=py_config["applications_namespace"],
+        number_of_consecutive_checks=6,
+    )
+    wait_for_pods_running(
+        admin_client=admin_client,
+        namespace_name=original_namespace_name,
+        number_of_consecutive_checks=6,
+    )
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In tests/model_registry/conftest.py around lines 225 to 235, the teardown
currently restores the DSC before ensuring the original namespace exists and
only waits for pods in the applications namespace; change the order to recreate
the original Namespace object (with wait_for_resource=True) and wait for it to
reach ACTIVE before calling resource_editor.restore(), and after restore add
wait_for_pods_running checks for both the model-registry namespace
(py_config["model_registry_namespace"]) and the applications namespace
(py_config["applications_namespace"]) with appropriate
number_of_consecutive_checks (e.g., 6) to ensure readiness of MR-NS pods as well
as app pods.
Actionable comments posted: 1
♻️ Duplicate comments (1)
tests/model_registry/conftest.py (1)
174-176: Scope the original namespace lookup inside the conditional.

The variable is only used in the custom-namespace path. Move it into the if-block.

```diff
-original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
-if pytestconfig.option.custom_namespace:
+if pytestconfig.option.custom_namespace:
+    original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
```
🧹 Nitpick comments (2)
tests/model_registry/conftest.py (2)
175-211: Guard against no-op/unsafe rebind when custom namespace equals the original.

If `py_config["model_registry_namespace"]` matches the original, we'll delete and recreate the same namespace unnecessarily.

```diff
-if pytestconfig.option.custom_namespace:
+if pytestconfig.option.custom_namespace:
+    target_ns = py_config["model_registry_namespace"]
+    original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
+    if target_ns == original_namespace_name:
+        LOGGER.warning("custom_namespace requested equals original MR namespace; skipping rebind.")
+        yield dsc_resource
+        return
```
237-239: Optionally assert DSC readiness in the default path.

For symmetry and early failure, check DSC Ready before yielding when no custom namespace is used.

```diff
-LOGGER.info("Model Registry is enabled by default and does not require any setup.")
-yield dsc_resource
+LOGGER.info("Model Registry is enabled by default and does not require any setup.")
+try:
+    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+finally:
+    yield dsc_resource
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tests/model_registry/conftest.py (2 hunks)
- tests/model_registry/upgrade/conftest.py (0 hunks)
- tests/model_registry/upgrade/test_model_registry_upgrade.py (1 hunks)
💤 Files with no reviewable changes (1)
- tests/model_registry/upgrade/conftest.py
🧰 Additional context used
🧠 Learnings (8)
📓 Common learnings
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:0-0
Timestamp: 2025-07-17T15:42:26.275Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated Model Registry instance fixtures) do not require admin_client, db_deployment_1, or db_secret_1 parameters as explicit dependencies, even though these dependencies exist implicitly through the fixture dependency chain.
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#487
File: tests/model_registry/async_job/constants.py:23-31
Timestamp: 2025-08-08T15:58:03.524Z
Learning: In opendatahub-io/opendatahub-tests, for tests under tests/model_registry/async_job (e.g., constants.py and related fixtures), runs start from a clean, known Model Registry state. Therefore, using static MODEL_ID, MODEL_VERSION_ID, and MODEL_ARTIFACT_ID values in MODEL_SYNC_CONFIG is intentional and acceptable; dynamic ID injection is not required for these async job tests (per guidance from user lugi0).
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#504
File: tests/model_registry/rest_api/test_model_registry_secure_db.py:16-16
Timestamp: 2025-08-11T12:14:12.803Z
Learning: In tests/model_registry/rest_api/test_model_registry_secure_db.py, both the class-level `model_registry_instance` fixture and the method-level `deploy_secure_mysql_and_mr` fixture are intentionally required together for the secure database test setup. These fixtures serve different purposes and are not redundant.
📚 Learning: 2025-08-11T12:14:34.102Z
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#504
File: tests/model_registry/upgrade/conftest.py:20-30
Timestamp: 2025-08-11T12:14:34.102Z
Learning: In tests/model_registry/upgrade/conftest.py, when using ResourceEditor to patch DataScienceCluster resources with a partial components dictionary like `{DscComponents.MODELREGISTRY: {"managementState": ...}}`, the patch is applied correctly without overwriting other components in spec.components. ResourceEditor performs the appropriate merge operation for this resource type.
Applied to files:
- tests/model_registry/conftest.py
- tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-07-17T15:42:23.880Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.
Applied to files:
- tests/model_registry/conftest.py
- tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-07-17T15:42:26.275Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:0-0
Timestamp: 2025-07-17T15:42:26.275Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated Model Registry instance fixtures) do not require admin_client, db_deployment_1, or db_secret_1 parameters as explicit dependencies, even though these dependencies exist implicitly through the fixture dependency chain.
Applied to files:
tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-08-11T12:14:12.803Z
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#504
File: tests/model_registry/rest_api/test_model_registry_secure_db.py:16-16
Timestamp: 2025-08-11T12:14:12.803Z
Learning: In tests/model_registry/rest_api/test_model_registry_secure_db.py, both the class-level `model_registry_instance` fixture and the method-level `deploy_secure_mysql_and_mr` fixture are intentionally required together for the secure database test setup. These fixtures serve different purposes and are not redundant.
Applied to files:
tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-07-17T15:42:04.167Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:595-612
Timestamp: 2025-07-17T15:42:04.167Z
Learning: In tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fixtures) do not require the admin_client parameter or explicit dependencies on related fixtures like db_secret_1, db_pvc_1, and db_service_1, even though the original model_registry_db_deployment fixture includes these parameters.
Applied to files:
tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-08-08T15:58:03.524Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#487
File: tests/model_registry/async_job/constants.py:23-31
Timestamp: 2025-08-08T15:58:03.524Z
Learning: In opendatahub-io/opendatahub-tests, for tests under tests/model_registry/async_job (e.g., constants.py and related fixtures), runs start from a clean, known Model Registry state. Therefore, using static MODEL_ID, MODEL_VERSION_ID, and MODEL_ARTIFACT_ID values in MODEL_SYNC_CONFIG is intentional and acceptable; dynamic ID injection is not required for these async job tests (per guidance from user lugi0).
Applied to files:
tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-07-30T14:15:25.605Z
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#429
File: tests/model_registry/rbac/test_mr_rbac_sa.py:45-45
Timestamp: 2025-07-30T14:15:25.605Z
Learning: In tests/model_registry/rbac/test_mr_rbac_sa.py, bounds checking for model_registry_instance_rest_endpoint list access is not needed because upstream fixture validation already ensures endpoints exist before the tests execute. The Model Registry setup process validates endpoint availability, making additional bounds checks redundant.
Applied to files:
tests/model_registry/upgrade/test_model_registry_upgrade.py
🧬 Code graph analysis (1)
tests/model_registry/conftest.py (3)
- utilities/infra.py (2): get_data_science_cluster (865-866), wait_for_dsc_status_ready (1045-1051)
- utilities/constants.py (2): DscComponents (157-178), ManagementState (163-165)
- tests/model_registry/utils.py (1): wait_for_pods_running (248-281)
🔇 Additional comments (3)
tests/model_registry/conftest.py (2)
55-55: Import looks good and is used.
195-197: Pass admin_client to Namespace to avoid context drift.

Namespace operations here omit the client; this can hit the wrong cluster/context. Be consistent with other places that pass client=admin_client.

```diff
-original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
+original_namespace = Namespace(client=admin_client, name=original_namespace_name, wait_for_resource=True)
 original_namespace.delete(wait=True)
@@
-namespace = Namespace(name=py_config["model_registry_namespace"], wait_for_resource=True)
+namespace = Namespace(client=admin_client, name=py_config["model_registry_namespace"], wait_for_resource=True)
 namespace.wait_for_status(status=Namespace.Status.ACTIVE)
@@
-Namespace(name=py_config["model_registry_namespace"]).delete(wait=True)
+Namespace(client=admin_client, name=py_config["model_registry_namespace"]).delete(wait=True)
@@
-original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
+original_namespace = Namespace(client=admin_client, name=original_namespace_name, wait_for_resource=True)
```

Also applies to: 212-214, 227-227, 229-231
⛔ Skipped due to learnings
Learnt from: fege
PR: opendatahub-io/opendatahub-tests#574
File: tests/model_registry/cluster_health/test_mr_default.py:28-29
Timestamp: 2025-09-02T07:29:44.473Z
Learning: In tests/model_registry/cluster_health/test_mr_default.py, passing admin_client to Namespace constructor is not necessary according to fege, indicating the current approach without explicit client parameter is acceptable for this test.

tests/model_registry/upgrade/test_model_registry_upgrade.py (1)
31-31: Approve — upgrade patch fixtures removed; no stale references remain. Repo-wide search (rg -uu and fallback grep) found no occurrences of pre_upgrade_dsc_patch or post_upgrade_dsc_patch.
```python
# first disable MR
resource_editor.update(backup_resources=True)
wait_for_dsc_status_ready(dsc_resource=dsc_resource)
# now delete the original namespace:
```
“wait_for_dsc_status_ready” is a single-shot check, not a wait — high flake risk.
This helper returns immediately and raises if not yet Ready. After patching DSC, readiness can take time; a single check will intermittently fail.
Recommended: replace this call with a real poll here, or update utilities.infra.wait_for_dsc_status_ready to poll with a timeout. If changing here, for example:
```diff
-resource_editor.update(backup_resources=True)
-wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+resource_editor.update(backup_resources=True)
+# Poll for DSC readiness to avoid flakes
+for _ in range(36):  # ~3 minutes @5s
+    try:
+        wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+        break
+    except Exception:
+        import time; time.sleep(5)
+else:
+    # Final raise with context
+    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
```

Alternatively, upgrade the util to actually wait and keep the single call here.
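If the util were upgraded instead, a polling wrapper could look like the following sketch. `wait_until_ready` and its defaults are assumptions, with `check` standing in for a call such as `wait_for_dsc_status_ready(dsc_resource=dsc_resource)`:

```python
import time


def wait_until_ready(check, timeout: float = 120.0, interval: float = 5.0):
    """Poll `check()` (which raises while the resource is not Ready) until
    it succeeds or `timeout` seconds pass, then fail with context."""
    deadline = time.monotonic() + timeout
    last_error = None
    while time.monotonic() < deadline:
        try:
            return check()
        except Exception as exc:  # real code would narrow this to ResourceNotReadyError
            last_error = exc
            time.sleep(interval)
    raise TimeoutError(f"resource not ready after {timeout}s") from last_error
```

This keeps the fixture body down to a single call while still tolerating slow reconciliation.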
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
# first disable MR
resource_editor.update(backup_resources=True)
# Poll for DSC readiness to avoid flakes
for _ in range(36):  # ~3 minutes @5s
    try:
        wait_for_dsc_status_ready(dsc_resource=dsc_resource)
        break
    except Exception:
        import time; time.sleep(5)
else:
    # Final raise with context
    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
# now delete the original namespace:
```
🤖 Prompt for AI Agents
In tests/model_registry/conftest.py around lines 191-194, the call to
wait_for_dsc_status_ready is a single-shot check and causes flakes because DSC
readiness can take time; replace this single immediate call with a polling wait:
after resource_editor.update(backup_resources=True) loop until the DSC reports
Ready (or raise on timeout), checking status every 1–2 seconds and using a
sensible overall timeout (e.g., 60–120s); alternatively, modify
utilities.infra.wait_for_dsc_status_ready to perform the same
polling-with-timeout behavior and keep the single call here.
/build-push-pr-image |
Status of building tag pr-571: success. |
Status of building tag latest: success. |
Depends on RHOAIENG-25130
Description
How Has This Been Tested?
Merge criteria:
Summary by CodeRabbit
Tests
Chores