test: New tests to validate postgres password autogeneration#1200
dbasunag merged 3 commits into opendatahub-io:main from
Conversation
The following are automatically added/executed:
Available user actions:
Supported labels: `/lgtm`, `/verified`, `/cherry-pick`, `/hold`, `/build-push-pr-image`, `/wip`
📝 Walkthrough
Adds tests and fixtures that fetch, decode, delete, and poll for recreation of the model-catalog-postgres Secret.
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 1
❌ Failed checks (1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/model_registry/model_catalog/db_check/conftest.py`:
- Around line 14-20: The fixture model_catalog_postgres_secret currently uses
py_config["model_registry_namespace"] which bypasses dynamic namespace
resolution; change the fixture to accept the existing pytest fixture
model_registry_namespace (add it as a parameter to
model_catalog_postgres_secret) and use that value for the Secret namespace
instead of py_config["model_registry_namespace"]; apply the same change to the
other fixtures in this module that reference py_config for the model registry
namespace so they all use the model_registry_namespace fixture (keep the Secret
creation and ensure_exists behavior unchanged).
- Around line 38-56: The loop currently treats any Secret with exists==True as
the recreated secret but delete() is asynchronous and the first sampled Secret
may be the old instance; capture the original secret identity from
model_catalog_postgres_secret (e.g., metadata.uid or metadata.resource_version)
before calling delete(), then in the TimeoutSampler loop for Secret (the Secret
class used with admin_client and resource_name) wait until secret.exists is True
AND the secret's metadata.uid (or resource_version) differs from the original
value to ensure you observed a newly-created Secret instance; update the
condition that sets recreated_secret to check both existence and a changed
uid/resource_version.
In `@tests/model_registry/model_catalog/db_check/test_model_catalog_secrets.py`:
- Around line 57-60: The test currently calls the namespace-wide
wait_for_pods_running which can be tripped by unrelated pod restarts; replace
that call with the targeted helper
wait_for_model_catalog_pod_ready_after_deletion and import it from
tests.model_registry.model_catalog.db_check.utils, keeping the existing
assertion wait_for_model_catalog_pod_created(client=admin_client,
model_registry_namespace=model_registry_namespace) and then invoking
wait_for_model_catalog_pod_ready_after_deletion(...) with the same admin_client
and model_registry_namespace (and any required pod identifier params) so the
check is scoped to the model-catalog pod under test rather than the whole
namespace.
- Line 46: The call to get_postgres_pod_in_namespace in the test is using the
default namespace; update the invocation to pass the test's
model_registry_namespace variable so it targets the correct namespace (e.g.,
replace get_postgres_pod_in_namespace(admin_client=admin_client) with
get_postgres_pod_in_namespace(admin_client=admin_client,
namespace=model_registry_namespace)); keep the admin_client argument and ensure
any downstream references use the returned postgres_pod as before.
In `@tests/model_registry/model_catalog/db_check/utils.py`:
- Around line 59-68: The helper currently only checks pods[0] (catalog_pod) for
readiness which is order-dependent; update the logic to iterate over the pods
list and consider the helper successful if any pod meets the readiness criteria.
Replace the single-use of catalog_pod with a loop over pods, and for each pod
check pod.status == Pod.Status.RUNNING and pod.instance.status.conditions, then
inspect condition.type == "Ready" and condition.status == "True" and return when
found; keep LOGGER.info(f"Model catalog pod {pod.name} is ready") for the pod
that passes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 99f90098-5f4e-41eb-9b18-7c05ab52e60a
📒 Files selected for processing (4)
- tests/model_registry/model_catalog/db_check/__init__.py
- tests/model_registry/model_catalog/db_check/conftest.py
- tests/model_registry/model_catalog/db_check/test_model_catalog_secrets.py
- tests/model_registry/model_catalog/db_check/utils.py
♻️ Duplicate comments (2)
tests/model_registry/model_catalog/db_check/conftest.py (2)
31-58: ⚠️ Potential issue | 🟠 Major
Wait for a new Secret instance, not just `exists == True`.
`delete()` is asynchronous. The first sampled object can still be the original Secret, so this loop can return stale data before recreation happens.
Suggested fix

```diff
 def recreated_model_catalog_postgres_secret(
-    admin_client: DynamicClient, model_catalog_postgres_secret: Secret
+    admin_client: DynamicClient,
+    model_catalog_postgres_secret: Secret,
+    model_registry_namespace: str,
 ) -> dict[str, str]:
     """Delete model-catalog-postgres secret and wait for it to be recreated"""
-    model_registry_namespace = py_config["model_registry_namespace"]
     resource_name = "model-catalog-postgres"
     LOGGER.info(f"Deleting secret {resource_name} in namespace {model_registry_namespace}")
+    original_uid = model_catalog_postgres_secret.instance.metadata.uid
     model_catalog_postgres_secret.delete()
@@
     ):
-        if secret.exists:
-            LOGGER.info(f"Secret {resource_name} has been recreated")
-            recreated_secret = secret
-            break
+        if not secret.exists:
+            continue
+
+        if secret.instance.metadata.uid == original_uid:
+            LOGGER.info(
+                f"Secret {resource_name} still has the original UID; waiting for recreation"
+            )
+            continue
+
+        LOGGER.info(f"Secret {resource_name} has been recreated")
+        recreated_secret = secret
+        break
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_registry/model_catalog/db_check/conftest.py` around lines 31-58: the current recreated_model_catalog_postgres_secret function may return the original secret because delete() is async; change the wait logic to detect a new Secret instance by comparing a stable identifier from the original secret (e.g., metadata.uid or resource_version) against the sampled Secret returned by TimeoutSampler (the sampled Secret must have a different uid/resource_version before being accepted), and only call extract_secret_values once you confirm the sampled secret is a new instance (not the pre-deletion object).
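Stripped of the fixture machinery, the wait-for-recreation pattern behind this comment can be sketched in plain Python. This is a sketch under assumptions: `fetch_secret` is a hypothetical callable standing in for the `Secret`/`TimeoutSampler` sampling used in the tests, and secrets are modeled as plain dicts.

```python
import time
from typing import Callable, Optional


def wait_for_recreated_secret(
    fetch_secret: Callable[[], Optional[dict]],
    original_uid: str,
    timeout: float = 60.0,
    interval: float = 1.0,
) -> dict:
    """Poll until a secret exists whose metadata.uid differs from original_uid.

    Merely seeing an existing secret is not enough: delete() is asynchronous,
    so the first fetch may still return the pre-deletion object.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        secret = fetch_secret()
        if secret is not None and secret["metadata"]["uid"] != original_uid:
            return secret  # a genuinely new instance, not the stale one
        time.sleep(interval)
    raise TimeoutError(f"Secret was not recreated within {timeout}s")
```

The UID comparison is what makes the wait deterministic: `resource_version` would also work, but a UID changes only when the object is actually recreated.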
14-20: ⚠️ Potential issue | 🟠 Major
Use the live `model_registry_namespace` fixture here.
This fixture hard-codes the namespace from `py_config`, so it can read or delete the wrong Secret when the registry namespace is overridden by cluster state.
Suggested fix

```diff
 @pytest.fixture(scope="class")
-def model_catalog_postgres_secret(admin_client: DynamicClient) -> Secret:
+def model_catalog_postgres_secret(
+    admin_client: DynamicClient, model_registry_namespace: str
+) -> Secret:
     """Get the model-catalog-postgres secret from model registry namespace"""
     return Secret(
         client=admin_client,
         name="model-catalog-postgres",
-        namespace=py_config["model_registry_namespace"],
+        namespace=model_registry_namespace,
         ensure_exists=True,
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_registry/model_catalog/db_check/conftest.py` around lines 14-20: the model_catalog_postgres_secret fixture currently hard-codes the namespace from py_config; change it to accept and use the live model_registry_namespace fixture instead so tests use the actual cluster namespace. Update the model_catalog_postgres_secret signature to accept model_registry_namespace and replace namespace=py_config["model_registry_namespace"] with namespace=model_registry_namespace (retain ensure_exists=True and the Secret construction). Ensure any callers of model_catalog_postgres_secret in tests are compatible with the fixture change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/model_registry/model_catalog/db_check/conftest.py`:
- Around line 31-58: The current recreated_model_catalog_postgres_secret
function may return the original secret because delete() is async; change the
wait logic to detect a new Secret instance by comparing a stable identifier from
the original secret (e.g., model_catalog_postgres_secret.metadata.uid or
.resource_version) against the sampled Secret returned by TimeoutSampler (the
sampled Secret must have a different uid/resource_version before accepting it),
and only call extract_secret_values when you confirm the sampled secret is a new
instance (not the pre-deletion object).
- Around line 14-20: The model_catalog_postgres_secret fixture currently
hard-codes the namespace from py_config; change it to accept and use the live
model_registry_namespace fixture instead so tests use the actual cluster
namespace. Update the model_catalog_postgres_secret signature to accept
model_registry_namespace and replace
namespace=py_config["model_registry_namespace"] with
namespace=model_registry_namespace (retain ensure_exists=True and the Secret
construction). Ensure any callers of model_catalog_postgres_secret in tests are
compatible with the fixture change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 883df9fb-256d-449e-8f54-ac0f62e6c514
📒 Files selected for processing (4)
- tests/model_registry/model_catalog/db_check/__init__.py
- tests/model_registry/model_catalog/db_check/conftest.py
- tests/model_registry/model_catalog/db_check/test_model_catalog_secrets.py
- tests/model_registry/model_catalog/db_check/utils.py
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/model_registry/model_catalog/db_check/utils.py
- tests/model_registry/model_catalog/db_check/test_model_catalog_secrets.py
- tests/model_registry/model_catalog/db_check/__init__.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/model_registry/model_catalog/db_check/utils.py`:
- Line 29: The current LOGGER.info call logs secret_values.keys() which exposes
sensitive secret key names; change the logging to avoid listing keys by either
logging only the count (e.g., len(secret_values)) or moving the message to DEBUG
level and omit the key names; update the reference in this file to replace the
LOGGER.info(f"Captured secret with keys: {list(secret_values.keys())}") usage
with a safer message that references LOGGER and secret_values (e.g., log the
number of secrets or a debug-only message) so no secret key names are emitted.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 9d8f9997-3f81-4256-bd6e-49f204aa88ef
📒 Files selected for processing (4)
- tests/model_registry/model_catalog/db_check/__init__.py
- tests/model_registry/model_catalog/db_check/conftest.py
- tests/model_registry/model_catalog/db_check/test_model_catalog_secrets.py
- tests/model_registry/model_catalog/db_check/utils.py
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/model_registry/model_catalog/db_check/__init__.py
- tests/model_registry/model_catalog/db_check/test_model_catalog_secrets.py
- tests/model_registry/model_catalog/db_check/conftest.py
♻️ Duplicate comments (2)
tests/model_registry/model_catalog/db_check/conftest.py (2)
31-35: ⚠️ Potential issue | 🟠 Major
Use the live namespace fixture here as well.
Line 35 still bypasses `model_registry_namespace`, so this fixture can delete or poll the wrong Secret when the registry namespace is overridden by cluster state.
Suggested fix

```diff
 @pytest.fixture(scope="class")
 def recreated_model_catalog_postgres_secret(
-    admin_client: DynamicClient, model_catalog_postgres_secret: Secret
+    admin_client: DynamicClient,
+    model_catalog_postgres_secret: Secret,
+    model_registry_namespace: str,
 ) -> dict[str, str]:
     """Delete model-catalog-postgres secret and wait for it to be recreated"""
-    model_registry_namespace = py_config["model_registry_namespace"]
     resource_name = "model-catalog-postgres"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_registry/model_catalog/db_check/conftest.py` around lines 31-35: the fixture recreated_model_catalog_postgres_secret currently reads model_registry_namespace from py_config (bypassing the live namespace); update the fixture signature to accept the model_registry_namespace fixture (e.g., add a model_registry_namespace: str parameter) and replace the local py_config["model_registry_namespace"] usage with that parameter so the secret deletion and polling use the actual live namespace when the registry namespace is overridden.
38-56: ⚠️ Potential issue | 🟠 Major
Wait for a new Secret instance, not just `exists == True`.
Line 39 deletes asynchronously. Lines 53-56 can still capture the pre-delete Secret, so the rotation check becomes nondeterministic and may read stale credentials.
Suggested fix

```diff
     LOGGER.info(f"Deleting secret {resource_name} in namespace {model_registry_namespace}")
+    original_uid = model_catalog_postgres_secret.instance.metadata.uid
     model_catalog_postgres_secret.delete()
@@
     ):
-        if secret.exists:
-            LOGGER.info(f"Secret {resource_name} has been recreated")
-            recreated_secret = secret
-            break
+        if not secret.exists:
+            continue
+
+        if secret.instance.metadata.uid == original_uid:
+            LOGGER.info(f"Secret {resource_name} still has the original UID; waiting for recreation")
+            continue
+
+        LOGGER.info(f"Secret {resource_name} has been recreated")
+        recreated_secret = secret
+        break
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/model_registry/model_catalog/db_check/conftest.py` around lines 38-56: capture the original Secret identity before deleting (e.g., read metadata.uid or resource_version from model_catalog_postgres_secret) and, after deletion, change the TimeoutSampler loop (which currently checks only secret.exists) to wait for a Secret instance whose uid/resource_version differs from the original; specifically, in the loop that samples Secret(client=admin_client, name=resource_name, namespace=model_registry_namespace), replace the exists-only check with a check that secret.exists is True AND the secret's metadata.uid (or resource_version) != the original value, so you only accept a recreated Secret, not the pre-delete instance.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/model_registry/model_catalog/db_check/conftest.py`:
- Around line 31-35: The fixture recreated_model_catalog_postgres_secret
currently reads model_registry_namespace from py_config (bypassing the live
namespace); update the fixture signature to accept the model_registry_namespace
fixture (e.g., add model_registry_namespace: str parameter) and replace the
local py_config["model_registry_namespace"] usage with that parameter so the
secret deletion and polling use the actual live namespace when the registry
namespace is overridden.
- Around line 38-56: Capture the original Secret identity before deleting (e.g.,
read model_catalog_postgres_secret or Secret(...).metadata.uid/resource_version)
and after deletion, change the TimeoutSampler loop (which currently checks only
secret.exists) to wait for a Secret instance whose uid/resource_version differs
from the original; specifically, in the loop that iterates
Secret(client=admin_client, name=resource_name,
namespace=model_registry_namespace) replace the exists-only check with a check
that secret.exists is True AND secret.metadata.uid (or
metadata.resource_version) != original_uid so you only accept a recreated
Secret, not the pre-delete instance.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: a20dcf9a-3ba4-42a8-8297-7046f2bbd88d
📒 Files selected for processing (3)
- tests/model_registry/model_catalog/db_check/conftest.py
- tests/model_registry/model_catalog/db_check/test_model_catalog_secrets.py
- tests/model_registry/model_catalog/db_check/utils.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/model_registry/model_catalog/db_check/utils.py
- tests/model_registry/model_catalog/db_check/test_model_catalog_secrets.py
Status of building tag latest: success.
Pull Request
Summary
Related Issues
How it has been tested
Additional Requirements
Summary by CodeRabbit
Tests
Test Helpers