
Updates for GA of Model Registry#571

Merged
dbasunag merged 6 commits into opendatahub-io:main from dbasunag:custom_default_ns
Sep 15, 2025

Conversation

@dbasunag
Collaborator

@dbasunag dbasunag commented Aug 30, 2025

  1. Changed updated_dsc_component_state_scope_session to only update the DSC for the custom-namespace scenario.
  2. Removed the fixtures post_upgrade_dsc_patch and pre_upgrade_dsc_patch.

Depends on RHOAIENG-25130

Description

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Tests

    • Improved Model Registry test reliability with dynamic namespace handling, readiness waits, and robust cleanup to prevent side effects.
    • Simplified test setup by removing upgrade-specific fixtures and decorators.
    • Streamlined fixture interface for clearer, more predictable behavior.
  • Chores

    • Removed obsolete upgrade test scaffolding to reduce maintenance.
    • Consolidated test flows to decrease flakiness and improve stability.

@coderabbitai
Contributor

coderabbitai bot commented Aug 30, 2025

📝 Walkthrough

Walkthrough

Introduces conditional model registry namespace reconfiguration in a session-scoped fixture with readiness waits and guaranteed cleanup. Removes upgrade-specific fixtures and updates upgrade tests to no longer depend on them.

Changes

Cohort / File(s) Summary of changes
Session fixture: conditional MR namespace rebind
tests/model_registry/conftest.py
Updated updated_dsc_component_state_scope_session: removed teardown_resources param; added conditional flow using pytestconfig.option.custom_namespace to disable MR, delete old namespace, re-enable MR in a custom namespace, wait for DSC ready and pods running; ensured restoration and cleanup in finally; imported wait_for_dsc_status_ready.
Upgrade path cleanup
tests/model_registry/upgrade/conftest.py, tests/model_registry/upgrade/test_model_registry_upgrade.py
Deleted upgrade conftest.py removing pre_upgrade_dsc_patch and post_upgrade_dsc_patch fixtures. Updated upgrade tests to drop usefixtures references to these fixtures, retaining other fixtures only.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check ✅ Passed The title "Updates for GA of Model Registry" is concise and directly related to the primary intent of the changeset—preparing Model Registry behavior for GA (including custom-namespace handling and removal of upgrade fixtures), so it accurately summarizes the main change.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, comment /wip on the PR. To remove the WIP state, comment /wip cancel.
  • To block merging of a PR, comment /hold. To unblock merging, comment /hold cancel.
  • To mark a PR as approved, comment /lgtm. To remove approval, comment /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name>. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, comment /build-push-pr-image. This creates an image tagged
    pr-<pr_number> in the quay repository; the tag is deleted when the PR is merged or closed.
Supported labels

{'/hold', '/cherry-pick', '/wip', '/lgtm', '/verified', '/build-push-pr-image'}

Contributor

@fege fege left a comment


/lgtm

for component_name in component_patch:
    dsc_resource.wait_for_condition(
        condition=DscComponents.COMPONENT_MAPPING[component_name], status="True"
    )

original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
Contributor


this can be moved into the if, I believe

Comment on lines +195 to +196
original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
original_namespace.delete(wait=True)
Contributor


do we actually need to delete it? It could be left there, since we are creating it again a few lines later.

Collaborator Author


I cleaned it up so that there is no confusion about leftover resources.

Contributor

@sheltoncyril sheltoncyril left a comment


/lgtm

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (3)
tests/model_registry/conftest.py (3)

174-176: Scope original_namespace_name to the custom-namespace branch

Avoid computing it when --custom-namespace is not set.

-    original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
-    if pytestconfig.option.custom_namespace:
+    if pytestconfig.option.custom_namespace:
+        original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace

195-197: Do we need to delete the original namespace up front?

Deleting and later recreating the original MR namespace adds churn and RBAC risk. If the goal is only to move MR to a custom namespace, consider keeping the original namespace intact. At minimum, guard against accidental deletion when the custom and original namespaces match.

-            original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
-            original_namespace.delete(wait=True)
+            if original_namespace_name != py_config["model_registry_namespace"]:
+                original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
+                original_namespace.delete(wait=True)

175-239: Safety guard: don't delete the model-registry namespace when custom == original

If py_config["model_registry_namespace"] == original_namespace_name the finally block will delete the active namespace; skip that delete (or assert/raise) when they are equal.
Location: tests/model_registry/conftest.py — updated_dsc_component_state_scope_session fixture (≈ lines 175–239).

🧹 Nitpick comments (1)
tests/model_registry/conftest.py (1)

198-214: After re-enabling MR in the custom namespace, also wait for DSC to be READY

You already wait for pods; add a DSC readiness wait to catch operator-level reconciliation issues earlier.

                 namespace = Namespace(name=py_config["model_registry_namespace"], wait_for_resource=True)
                 namespace.wait_for_status(status=Namespace.Status.ACTIVE)
+                # Wait for DSC to reconcile with MR re-enabled and new namespace
+                for _ in range(60):  # ~5 minutes
+                    try:
+                        wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+                        break
+                    except Exception:
+                        time.sleep(5)
+                else:
+                    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c5ca79f and 849762e.

📒 Files selected for processing (3)
  • tests/model_registry/conftest.py (2 hunks)
  • tests/model_registry/upgrade/conftest.py (0 hunks)
  • tests/model_registry/upgrade/test_model_registry_upgrade.py (1 hunks)
💤 Files with no reviewable changes (1)
  • tests/model_registry/upgrade/conftest.py
🧰 Additional context used
🧠 Learnings (8)
📓 Common learnings
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#487
File: tests/model_registry/async_job/constants.py:23-31
Timestamp: 2025-08-08T15:58:03.524Z
Learning: In opendatahub-io/opendatahub-tests, for tests under tests/model_registry/async_job (e.g., constants.py and related fixtures), runs start from a clean, known Model Registry state. Therefore, using static MODEL_ID, MODEL_VERSION_ID, and MODEL_ARTIFACT_ID values in MODEL_SYNC_CONFIG is intentional and acceptable; dynamic ID injection is not required for these async job tests (per guidance from user lugi0).
Learnt from: fege
PR: opendatahub-io/opendatahub-tests#409
File: tests/model_registry/negative_tests/test_db_migration.py:47-57
Timestamp: 2025-07-09T08:50:22.172Z
Learning: In negative tests for model registry database migration failures (like test_db_migration_negative), the model registry pod is expected to be in a failed state (not Running) when the database is dirty. The test fetches logs from the failed pod to verify the expected error message about dirty database version, so waiting for Running status would be inappropriate.
📚 Learning: 2025-07-17T15:42:23.880Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:733-770
Timestamp: 2025-07-17T15:42:23.880Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 and model_registry_instance_2 fixtures do not need explicit database dependency fixtures (like db_deployment_1, db_secret_1, etc.) in their function signatures. Pytest's dependency injection automatically handles the fixture dependencies when they reference db_name_1 and db_name_2 parameters. This is the correct pattern for these Model Registry instance fixtures.

Applied to files:

  • tests/model_registry/upgrade/test_model_registry_upgrade.py
  • tests/model_registry/conftest.py
📚 Learning: 2025-08-11T12:14:34.102Z
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#504
File: tests/model_registry/upgrade/conftest.py:20-30
Timestamp: 2025-08-11T12:14:34.102Z
Learning: In tests/model_registry/upgrade/conftest.py, when using ResourceEditor to patch DataScienceCluster resources with a partial components dictionary like `{DscComponents.MODELREGISTRY: {"managementState": ...}}`, the patch is applied correctly without overwriting other components in spec.components. ResourceEditor performs the appropriate merge operation for this resource type.

Applied to files:

  • tests/model_registry/upgrade/test_model_registry_upgrade.py
  • tests/model_registry/conftest.py
📚 Learning: 2025-07-17T15:42:26.275Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:0-0
Timestamp: 2025-07-17T15:42:26.275Z
Learning: In tests/model_registry/conftest.py, the model_registry_instance_1 fixture (and similar duplicated Model Registry instance fixtures) do not require admin_client, db_deployment_1, or db_secret_1 parameters as explicit dependencies, even though these dependencies exist implicitly through the fixture dependency chain.

Applied to files:

  • tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-08-11T12:14:12.803Z
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#504
File: tests/model_registry/rest_api/test_model_registry_secure_db.py:16-16
Timestamp: 2025-08-11T12:14:12.803Z
Learning: In tests/model_registry/rest_api/test_model_registry_secure_db.py, both the class-level `model_registry_instance` fixture and the method-level `deploy_secure_mysql_and_mr` fixture are intentionally required together for the secure database test setup. These fixtures serve different purposes and are not redundant.

Applied to files:

  • tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-07-17T15:42:04.167Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#446
File: tests/model_registry/conftest.py:595-612
Timestamp: 2025-07-17T15:42:04.167Z
Learning: In tests/model_registry/conftest.py, the db_deployment_1 fixture (and similar duplicated resource fixtures) do not require the admin_client parameter or explicit dependencies on related fixtures like db_secret_1, db_pvc_1, and db_service_1, even though the original model_registry_db_deployment fixture includes these parameters.

Applied to files:

  • tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-08-08T15:58:03.524Z
Learnt from: lugi0
PR: opendatahub-io/opendatahub-tests#487
File: tests/model_registry/async_job/constants.py:23-31
Timestamp: 2025-08-08T15:58:03.524Z
Learning: In opendatahub-io/opendatahub-tests, for tests under tests/model_registry/async_job (e.g., constants.py and related fixtures), runs start from a clean, known Model Registry state. Therefore, using static MODEL_ID, MODEL_VERSION_ID, and MODEL_ARTIFACT_ID values in MODEL_SYNC_CONFIG is intentional and acceptable; dynamic ID injection is not required for these async job tests (per guidance from user lugi0).

Applied to files:

  • tests/model_registry/upgrade/test_model_registry_upgrade.py
📚 Learning: 2025-07-30T14:15:25.605Z
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#429
File: tests/model_registry/rbac/test_mr_rbac_sa.py:45-45
Timestamp: 2025-07-30T14:15:25.605Z
Learning: In tests/model_registry/rbac/test_mr_rbac_sa.py, bounds checking for model_registry_instance_rest_endpoint list access is not needed because upstream fixture validation already ensures endpoints exist before the tests execute. The Model Registry setup process validates endpoint availability, making additional bounds checks redundant.

Applied to files:

  • tests/model_registry/upgrade/test_model_registry_upgrade.py
🧬 Code graph analysis (1)
tests/model_registry/conftest.py (3)
utilities/infra.py (2)
  • get_data_science_cluster (865-866)
  • wait_for_dsc_status_ready (1045-1051)
utilities/constants.py (2)
  • DscComponents (157-178)
  • ManagementState (163-165)
tests/model_registry/utils.py (1)
  • wait_for_pods_running (248-281)
🔇 Additional comments (2)
tests/model_registry/upgrade/test_model_registry_upgrade.py (1)

31-31: LGTM — no residual fixture references found

Searched repository for "pre_upgrade_dsc_patch" and "post_upgrade_dsc_patch" (ripgrep --hidden --no-ignore-vcs and git grep); no occurrences found.

tests/model_registry/conftest.py (1)

55-55: Incorrect — wait_for_dsc_status_ready already retries (not one-shot)

utilities/infra.py::wait_for_dsc_status_ready is decorated with @Retry(wait_timeout=120, sleep=5, exceptions_dict={ResourceNotReadyError: []}) and raises ResourceNotReadyError to trigger retries, so it will block/retry (up to 120s) rather than performing a single immediate check. The import/call in tests/model_registry/conftest.py is therefore valid.

Likely an incorrect or invalid review comment.
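For context, a retry decorator in the style the comment describes for wait_for_dsc_status_ready could look roughly like the sketch below. This is a hypothetical illustration; the actual @Retry in utilities/infra.py may differ in signature, exception mapping, and error handling.

```python
# Hypothetical sketch of a timeout-bounded retry decorator; not the repo's API.
import functools
import time


def retry(wait_timeout=120, sleep=5, retry_on=(Exception,)):
    """Re-invoke the wrapped function until it succeeds or the timeout elapses."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            deadline = time.monotonic() + wait_timeout
            while True:
                try:
                    return func(*args, **kwargs)
                except retry_on:
                    if time.monotonic() >= deadline:
                        raise  # out of time: surface the last failure
                    time.sleep(sleep)
        return wrapper
    return decorator


# Usage: a check that fails twice, then succeeds, well within the timeout.
calls = {"n": 0}


@retry(wait_timeout=5, sleep=0.01, retry_on=(RuntimeError,))
def flaky_readiness_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("not ready yet")
    return "ready"


result = flaky_readiness_check()
```

With a decorator like this in place, a single call site already blocks and retries, which is why wrapping it in an outer retry loop would be redundant.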

        try:
            # first disable MR
            resource_editor.update(backup_resources=True)
            wait_for_dsc_status_ready(dsc_resource=dsc_resource)
Contributor


⚠️ Potential issue

Replace single-shot DSC readiness check with retry loop

Prevents transient failures after patching DSC.

-            wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+            # Wait for DSC to reconcile after disabling MR (retry to avoid flakes)
+            for _ in range(60):  # ~5 minutes
+                try:
+                    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+                    break
+                except Exception:
+                    time.sleep(5)
+            else:
+                # Final raise with the same helper for context
+                wait_for_dsc_status_ready(dsc_resource=dsc_resource)

Add once near the imports:

import time
🤖 Prompt for AI Agents
In tests/model_registry/conftest.py around line 193, the single-shot call to
wait_for_dsc_status_ready(dsc_resource=dsc_resource) can fail transiently after
patching the DSC; replace it with a retry loop that attempts the readiness check
repeatedly until success or a timeout. Add import time near the top imports,
then implement a loop that records a start time, calls
wait_for_dsc_status_ready(dsc_resource=dsc_resource) inside a try/except (or
checks the return), sleeps a short interval (e.g., 1–2s) between attempts, and
breaks on success; if the timeout is exceeded, raise an informative exception so
the test fails deterministically.
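The retry-with-timeout loop this prompt describes can be sketched as a small standalone helper. The name wait_until and its defaults are illustrative assumptions, not the repository's API.

```python
# Hypothetical helper: poll a check until it stops raising, or fail after a timeout.
import time


def wait_until(check, timeout=300.0, interval=2.0, description="condition"):
    """Poll check() until it returns without raising, or raise after `timeout` seconds."""
    start = time.monotonic()
    last_error = None
    while time.monotonic() - start < timeout:
        try:
            return check()
        except Exception as exc:  # keep the last error for the timeout message
            last_error = exc
            time.sleep(interval)
    raise TimeoutError(f"{description} not met within {timeout}s") from last_error


# Usage: a check that starts failing and then passes on the third attempt.
state = {"ready_at": 3, "calls": 0}


def dsc_ready():
    state["calls"] += 1
    if state["calls"] < state["ready_at"]:
        raise RuntimeError("DSC not ready")
    return True


ok = wait_until(dsc_ready, timeout=5.0, interval=0.01, description="DSC ready")
```

Raising TimeoutError with the last underlying error chained keeps the failure deterministic and diagnosable, as the prompt asks.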

Comment on lines +225 to 235
        finally:
            resource_editor.restore()
            Namespace(name=py_config["model_registry_namespace"]).delete(wait=True)
            # create the original namespace object again, so that we can wait for it to be created first
            original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
            original_namespace.wait_for_status(status=Namespace.Status.ACTIVE)
            wait_for_pods_running(
                admin_client=admin_client,
                namespace_name=py_config["applications_namespace"],
                number_of_consecutive_checks=6,
            )
Contributor


⚠️ Potential issue

Restore order: recreate original NS before restoring DSC; add readiness + MR-NS pod waits

Current flow restores DSC while the original namespace may not exist, causing noisy reconciliation. Also, you only wait for pods in the applications namespace on restore.

-                yield dsc_resource
-        finally:
-            resource_editor.restore()
-            Namespace(name=py_config["model_registry_namespace"]).delete(wait=True)
-            # create the original namespace object again, so that we can wait for it to be created first
-            original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
-            original_namespace.wait_for_status(status=Namespace.Status.ACTIVE)
-            wait_for_pods_running(
-                admin_client=admin_client,
-                namespace_name=py_config["applications_namespace"],
-                number_of_consecutive_checks=6,
-            )
+                yield dsc_resource
+        finally:
+            # First, delete the custom namespace while MR is still Removed (after exiting the inner editor)
+            if py_config["model_registry_namespace"] != original_namespace_name:
+                Namespace(name=py_config["model_registry_namespace"]).delete(wait=True)
+            # Ensure the original namespace exists before pointing DSC back to it
+            original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
+            original_namespace.wait_for_status(status=Namespace.Status.ACTIVE)
+            # Restore DSC to its original spec (likely Managed in the original namespace)
+            resource_editor.restore()
+            # Wait for DSC to become READY again (retry to avoid flakes)
+            for _ in range(60):
+                try:
+                    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+                    break
+                except Exception:
+                    time.sleep(5)
+            else:
+                wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+            # Wait for pods to settle in both app and MR namespaces
+            wait_for_pods_running(
+                admin_client=admin_client,
+                namespace_name=py_config["applications_namespace"],
+                number_of_consecutive_checks=6,
+            )
+            wait_for_pods_running(
+                admin_client=admin_client,
+                namespace_name=original_namespace_name,
+                number_of_consecutive_checks=6,
+            )

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In tests/model_registry/conftest.py around lines 225 to 235, the teardown
currently restores the DSC before ensuring the original namespace exists and
only waits for pods in the applications namespace; change the order to recreate
the original Namespace object (with wait_for_resource=True) and wait for it to
reach ACTIVE before calling resource_editor.restore(), and after restore add
wait_for_pods_running checks for both the model-registry namespace
(py_config["model_registry_namespace"]) and the applications namespace
(py_config["applications_namespace"]) with appropriate
number_of_consecutive_checks (e.g., 6) to ensure readiness of MR-NS pods as well
as app pods.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/model_registry/conftest.py (1)

174-176: Scope the original namespace lookup inside the conditional.

The variable is only used in the custom-namespace path. Move it into the if-block.

-    original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
-    if pytestconfig.option.custom_namespace:
+    if pytestconfig.option.custom_namespace:
+        original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
🧹 Nitpick comments (2)
tests/model_registry/conftest.py (2)

175-211: Guard against no-op/unsafe rebind when custom namespace equals the original.

If py_config["model_registry_namespace"] matches the original, we’ll delete and recreate the same namespace unnecessarily.

-    if pytestconfig.option.custom_namespace:
+    if pytestconfig.option.custom_namespace:
+        target_ns = py_config["model_registry_namespace"]
+        original_namespace_name = dsc_resource.instance.spec.components.modelregistry.registriesNamespace
+        if target_ns == original_namespace_name:
+            LOGGER.warning("custom_namespace requested equals original MR namespace; skipping rebind.")
+            yield dsc_resource
+            return

237-239: Optionally assert DSC readiness in the default path.

For symmetry and early failure, check DSC Ready before yielding when no custom namespace is used.

-        LOGGER.info("Model Registry is enabled by default and does not require any setup.")
-        yield dsc_resource
+        LOGGER.info("Model Registry is enabled by default and does not require any setup.")
+        try:
+            wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+        finally:
+            yield dsc_resource
🔇 Additional comments (3)
tests/model_registry/conftest.py (2)

55-55: Import looks good and is used.


195-197: Pass admin_client to Namespace to avoid context drift.

Namespace operations here omit the client; this can hit the wrong cluster/context. Be consistent with other places that pass client=admin_client.

-            original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
+            original_namespace = Namespace(client=admin_client, name=original_namespace_name, wait_for_resource=True)
             original_namespace.delete(wait=True)
@@
-                namespace = Namespace(name=py_config["model_registry_namespace"], wait_for_resource=True)
+                namespace = Namespace(client=admin_client, name=py_config["model_registry_namespace"], wait_for_resource=True)
                 namespace.wait_for_status(status=Namespace.Status.ACTIVE)
@@
-            Namespace(name=py_config["model_registry_namespace"]).delete(wait=True)
+            Namespace(client=admin_client, name=py_config["model_registry_namespace"]).delete(wait=True)
@@
-            original_namespace = Namespace(name=original_namespace_name, wait_for_resource=True)
+            original_namespace = Namespace(client=admin_client, name=original_namespace_name, wait_for_resource=True)

Also applies to: 212-214, 227-227, 229-231

⛔ Skipped due to learnings
Learnt from: fege
PR: opendatahub-io/opendatahub-tests#574
File: tests/model_registry/cluster_health/test_mr_default.py:28-29
Timestamp: 2025-09-02T07:29:44.473Z
Learning: In tests/model_registry/cluster_health/test_mr_default.py, passing admin_client to Namespace constructor is not necessary according to fege, indicating the current approach without explicit client parameter is acceptable for this test.
tests/model_registry/upgrade/test_model_registry_upgrade.py (1)

31-31: Approve — upgrade patch fixtures removed; no stale references remain. Repo-wide search (rg -uu and fallback grep) found no occurrences of pre_upgrade_dsc_patch or post_upgrade_dsc_patch.

Comment on lines +191 to +194
            # first disable MR
            resource_editor.update(backup_resources=True)
            wait_for_dsc_status_ready(dsc_resource=dsc_resource)
            # now delete the original namespace:
Contributor
⚠️ Potential issue

“wait_for_dsc_status_ready” is a single-shot check, not a wait — high flake risk.

This helper returns immediately and raises if not yet Ready. After patching DSC, readiness can take time; a single check will intermittently fail.

Recommended: replace this call with a real poll here, or update utilities.infra.wait_for_dsc_status_ready to poll with a timeout. If changing here, for example:

-            resource_editor.update(backup_resources=True)
-            wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+            resource_editor.update(backup_resources=True)
+            # Poll for DSC readiness to avoid flakes
+            for _ in range(36):  # ~3 minutes @5s
+                try:
+                    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+                    break
+                except Exception:
+                    import time; time.sleep(5)
+            else:
+                # Final raise with context
+                wait_for_dsc_status_ready(dsc_resource=dsc_resource)

Alternatively, upgrade the util to actually wait and keep the single call here.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-            # first disable MR
-            resource_editor.update(backup_resources=True)
-            wait_for_dsc_status_ready(dsc_resource=dsc_resource)
-            # now delete the original namespace:
+            # first disable MR
+            resource_editor.update(backup_resources=True)
+            # Poll for DSC readiness to avoid flakes
+            for _ in range(36):  # ~3 minutes @5s
+                try:
+                    wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+                    break
+                except Exception:
+                    import time; time.sleep(5)
+            else:
+                # Final raise with context
+                wait_for_dsc_status_ready(dsc_resource=dsc_resource)
+            # now delete the original namespace:
🤖 Prompt for AI Agents
In tests/model_registry/conftest.py around lines 191-194, the call to
wait_for_dsc_status_ready is a single-shot check and causes flakes because DSC
readiness can take time; replace this single immediate call with a polling wait:
after resource_editor.update(backup_resources=True) loop until the DSC reports
Ready (or raise on timeout), checking status every 1–2 seconds and using a
sensible overall timeout (e.g., 60–120s); alternatively, modify
utilities.infra.wait_for_dsc_status_ready to perform the same
polling-with-timeout behavior and keep the single call here.
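The poll-with-timeout behavior recommended above can also be factored into a small reusable helper rather than an inline loop, which keeps the fixture body at a single call and makes the timeout and interval explicit. A minimal sketch, assuming names of my own invention: `wait_until_ready` and the simulated `fake_dsc_ready` check are illustrative, not part of `utilities.infra`.

```python
import time


def wait_until_ready(check, timeout: float = 180, interval: float = 5):
    """Poll `check` until it stops raising; re-raise its last error on timeout.

    `check` is any zero-argument callable that raises while the resource is
    not yet Ready, e.g. lambda: wait_for_dsc_status_ready(dsc_resource=dsc).
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            return check()
        except Exception:
            if time.monotonic() >= deadline:
                raise  # final attempt failed; surface the original error
            time.sleep(interval)


# Simulated readiness check: raises twice, then succeeds.
attempts = {"count": 0}

def fake_dsc_ready():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("DSC not Ready yet")
    return "Ready"

print(wait_until_ready(fake_dsc_ready, timeout=10, interval=0.01))  # → Ready
```

On timeout the helper re-raises the last readiness error, so the failure carries the same context as the single-shot call would have.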

@dbasunag
Collaborator Author

/build-push-pr-image

@github-actions

Status of building tag pr-571: success.
Status of pushing tag pr-571 to image registry: success.

@dbasunag dbasunag merged commit a853ac0 into opendatahub-io:main Sep 15, 2025
14 checks passed
@dbasunag dbasunag deleted the custom_default_ns branch September 15, 2025 15:48
@github-actions

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.

sheltoncyril pushed a commit to sheltoncyril/opendatahub-tests that referenced this pull request Oct 2, 2025
mwaykole pushed a commit to mwaykole/opendatahub-tests that referenced this pull request Jan 23, 2026