
Conversation

@kaikaila
Contributor

@kaikaila kaikaila commented Oct 19, 2025

Summary

This PR adds full PostgreSQL (pgx driver) support to the Kubeflow Pipelines backend, enabling users to choose between MySQL and PostgreSQL as the metadata database. The implementation introduces a clean dialect abstraction layer and includes a major query optimization that benefits both database backends.

Key achievements
✅ Complete PostgreSQL integration for the API Server and Cache Server, addressing #7512 and #9813.
✅ All CI tests passing (MySQL + PostgreSQL).
✅ Significant performance improvement for ListRuns queries; this PR is expected to address the root causes behind #10778, #10230, #9780, and #9701.
✅ Zero breaking changes: backward compatible with existing MySQL deployments.

What Changed

  1. Storage Layer Refactoring - Dialect Abstraction (backend/src/apiserver/common/sql/dialect)
  • Problem
    SQL syntax was tightly coupled to MySQL.

  • Solution
    Introduced a DBDialect interface that encapsulates database-specific behavior:
    • Identifier quoting (MySQL backticks vs PostgreSQL double quotes)
    • Placeholder styles (? vs $1, $2, ...)
    • Aggregation functions (GROUP_CONCAT vs string_agg)
    • Concatenation syntax (CONCAT() vs ||)

  • Files

    • Core dialect implementation → backend/src/apiserver/common/sql/dialect/dialect.go
    • Dialect-aware utility functions → backend/src/apiserver/storage/sql_dialect_util.go
    • Reusable filter builders with proper quoting → backend/src/apiserver/storage/list_filters.go

All storage layer code now uses:

q := s.dbDialect.QuoteIdentifier
qb := s.dbDialect.QueryBuilder()

This ensures queries work correctly across MySQL, PostgreSQL, and SQLite (for tests).
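
For illustration, here is a minimal sketch of what such a dialect abstraction can look like; the method names and implementations below are illustrative and may not match the actual interface in dialect.go:

package dialect

import (
	"fmt"
	"strings"
)

// DBDialect sketches the kind of interface described above; the real
// implementation lives in backend/src/apiserver/common/sql/dialect.
type DBDialect interface {
	QuoteIdentifier(name string) string  // `runs` vs "runs"
	Placeholder(n int) string            // ? vs $1, $2, ...
	GroupConcat(expr, sep string) string // GROUP_CONCAT vs string_agg
	Concat(exprs ...string) string       // CONCAT() vs ||
}

type postgresDialect struct{}

func (postgresDialect) QuoteIdentifier(name string) string { return `"` + name + `"` }
func (postgresDialect) Placeholder(n int) string           { return fmt.Sprintf("$%d", n) }
func (postgresDialect) GroupConcat(expr, sep string) string {
	return fmt.Sprintf("string_agg(%s, '%s')", expr, sep)
}
func (postgresDialect) Concat(exprs ...string) string { return strings.Join(exprs, " || ") }

type mysqlDialect struct{}

func (mysqlDialect) QuoteIdentifier(name string) string { return "`" + name + "`" }
func (mysqlDialect) Placeholder(int) string             { return "?" }
func (mysqlDialect) GroupConcat(expr, sep string) string {
	return fmt.Sprintf("GROUP_CONCAT(%s SEPARATOR '%s')", expr, sep)
}
func (mysqlDialect) Concat(exprs ...string) string {
	return fmt.Sprintf("CONCAT(%s)", strings.Join(exprs, ", "))
}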

  2. ListRuns Query Performance Optimization
  • Problem
    The original ListRuns query called addMetricsResourceReferencesAndTasks, which performed a 3-layer LEFT JOIN with GROUP BY on all columns, including LONGTEXT fields such as PipelineSpecManifest and WorkflowSpecManifest. This caused slow response times for large datasets.
  • Solution (see the simplified query sketch after this item)
    Layers 1-3: LEFT JOIN only on PrimaryKeyUUID plus the aggregated columns (refs, tasks, metrics)
    Final layer: INNER JOIN back to run_details to fetch the LONGTEXT columns
  • Performance impact
    Eliminates GROUP BY on LONGTEXT columns entirely. Expected substantial performance improvements for deployments with large pipeline specifications, though formal load testing has not yet been conducted.
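
To make the join shape concrete, here is a simplified sketch of the optimized query expressed as a Go raw-string constant; the actual query built in the run store differs in column lists, filters, and pagination, and the table and column names below are illustrative only:

package storage

// listRunsShapeSketch illustrates the optimization described above: the
// GROUP BY runs only over the primary-key UUID and the aggregated columns,
// and the wide LONGTEXT columns are fetched by a final INNER JOIN instead of
// being dragged through the aggregation.
const listRunsShapeSketch = `
SELECT rd.*, agg.refs, agg.tasks, agg.metrics
FROM (
    SELECT r.UUID,
           string_agg(rr.Payload, ',') AS refs,    -- GROUP_CONCAT(...) on MySQL
           string_agg(t.Payload, ',')  AS tasks,
           string_agg(m.Payload, ',')  AS metrics
    FROM run_details r
    LEFT JOIN resource_references rr ON rr.ResourceUUID = r.UUID
    LEFT JOIN tasks t                ON t.RunUUID = r.UUID
    LEFT JOIN run_metrics m          ON m.RunUUID = r.UUID
    GROUP BY r.UUID
) agg
INNER JOIN run_details rd ON rd.UUID = agg.UUID`
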
  3. Deployment Configurations
  • Production-ready PostgreSQL kustomization → manifests/kustomize/env/platform-agnostic-postgresql/
  • Local development setup → manifests/kustomize/env/dev-kind-postgresql/
  • PostgreSQL StatefulSet → manifests/kustomize/third-party/postgresql/

Configuration is symmetric to existing MySQL manifests for consistency.

  4. CI Manifest Overlays

Created CI-specific Kustomize overlays to ensure tests use locally built images from the Kind registry instead of pulling official images from ghcr.io:

  • Added PostgreSQL CI overlay → .github/resources/manifests/standalone/postgresql/
  • Added kfp-cache-server image override → .github/resources/manifests/standalone/base/kustomization.yaml
  5. Added two PostgreSQL-specific CI workflows
  • V2 API and integration tests (cache enabled/disabled matrix) → api-server-test-Postgres.yml
  • V1 integration tests → integration-tests-v1-postgres.yml

PostgreSQL tests cover the core cache enabled/disabled matrix.

  6. Local development support
  • make dev-kind-cluster-pg - Provisions a Kind cluster with PostgreSQL
  • Updated the README for PostgreSQL setup and debugging, achieving parity with the MySQL documentation.

Testing

Unit Tests

23 test files modified/added
New test coverage: dialect_test.go, list_filters_test.go, sql_dialect_util_test.go
All existing tests updated to use dialect abstraction

Integration Tests

✅ V1 API integration tests (PostgreSQL)
✅ V2 API integration tests (PostgreSQL, cache enabled/disabled)
✅ Existing MySQL tests remain green

Migration Guide

  • For new deployments:
    kubectl apply -k manifests/kustomize/env/platform-agnostic-postgresql
  • For existing MySQL deployments:
    No action required. This PR is fully backward compatible.
  • For local development, to set up the Kind cluster with PostgreSQL:
    make -C backend dev-kind-cluster-pg

This PR continues from #12063.

@google-oss-prow

Hi @kaikaila. Thanks for your PR.

I'm waiting for a kubeflow member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@github-actions

🚫 This command cannot be processed. Only organization members or owners can use the commands.

@kaikaila kaikaila force-pushed the feature/postgres-integration branch 7 times, most recently from cd1d08b to 85498ed Compare October 22, 2025 05:03
@kaikaila
Contributor Author

Currently, both the MySQL and pgx setups use the DB superuser for all KFP operations, which is why client_manager.go contains a “create database if not exists” step here.

From a security standpoint, would it be preferable to:

  1. Move DB creation out of the client manager and into the deployment/init phase (i.e. add a manifests/kustomize/third-party/postgresql/base/pg-init-configmap.yaml) and
  2. Introduce a dedicated restricted user for KFP components, limited to the mlpipeline database?

If the team agrees, I can propose a follow-up PR to refactor accordingly.
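
For reference, a hypothetical sketch of the kind of bootstrap step being discussed (not the actual client_manager.go code); PostgreSQL has no CREATE DATABASE IF NOT EXISTS, so existence is checked first:

package bootstrap

import (
	"context"
	"database/sql"
	"fmt"
)

// ensureDatabase is an illustrative version of the "create database if it
// does not exist" step: connect to the default "postgres" database with a
// privileged user and create the KFP database only when it is missing.
func ensureDatabase(ctx context.Context, db *sql.DB, name string) error {
	var exists bool
	err := db.QueryRowContext(ctx,
		"SELECT EXISTS (SELECT 1 FROM pg_database WHERE datname = $1)", name).Scan(&exists)
	if err != nil {
		return err
	}
	if exists {
		return nil
	}
	// CREATE DATABASE cannot take bind parameters, so the name must come from
	// trusted configuration, not user input.
	_, err = db.ExecContext(ctx, fmt.Sprintf("CREATE DATABASE %q", name))
	return err
}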

@HumairAK
Collaborator

I'm fine with this; I don't think it's great that KFP tries to create a database (or a bucket, frankly)

fyi @mprahl / @droctothorpe

@kaikaila
Contributor Author

Thanks, @HumairAK — totally agree on the security point.
Since this PR is already getting quite heavy, would you be okay if I leave the user permission changes for a separate follow-up PR?

@kaikaila kaikaila force-pushed the feature/postgres-integration branch 3 times, most recently from 09fd370 to 1e0caa8 Compare October 23, 2025 07:10
@HumairAK
Collaborator

yes that is fine

@kaikaila kaikaila force-pushed the feature/postgres-integration branch 6 times, most recently from 4d33821 to e6c943c Compare October 24, 2025 02:47
@kaikaila
Contributor Author

Question about the PostgreSQL test workflow organization

Current situation

The V2 integration tests for PostgreSQL logically belong in a "PostgreSQL counterpart" to legacy-v2-api-integration-tests.yml.
However, I didn't want to create a new workflow with "legacy" in the name from day one.
As a temporary solution, I merged them into api-server-test-Postgres.yml.
This causes asymmetry with api-server-tests.yml, and the workflow has mixed responsibilities.

Question: What's the recommended workflow organization for PostgreSQL tests?

Should I:

  • a. Create legacy-v2-api-integration-tests-postgres.yml for consistency (even though it's new)?
  • b. Keep current structure and accept the asymmetry?
  • c. Refactor both MySQL and PostgreSQL to a unified structure?

Would love guidance on the long-term vision for test workflow organization, especially from @nsingla.

@kaikaila kaikaila force-pushed the feature/postgres-integration branch 4 times, most recently from 6c3ca2a to 6ae921f Compare December 4, 2025 08:45
@kaikaila
Contributor Author

kaikaila commented Dec 8, 2025

Quoting @nsingla's reply to the earlier question about PostgreSQL test workflow organization:

I actually would like to get rid of these legacy tests asap, but there are still a few tests that need to be migrated first, so my suggestion is to not add more work to the legacy workflows. Rather, can we add "database" as a workflow parameter, similar to "pipeline_store", and run tests against MySQL as well as PostgreSQL in the same workflow?

Hi @nsingla,

Thanks for the context. I'd like to clarify the reasoning behind the current structure of api-server-test-Postgres.yml (which includes both API tests and integration tests), as it addresses the constraints we discussed:

Integration tests: You mentioned we should avoid adding more workload to the existing legacy-v2-api-integration-tests.yml. Since the main api-server-tests.yml only runs Ginkgo tests and misses these legacy integration tests (which are critical for verifying PostgreSQL support), I have placed them here. This ensures we have PostgreSQL coverage without modifying the legacy workflow.

API tests: While I have added PostgreSQL to the api-server-tests.yml matrix, I am keeping this separate workflow as a control group for now. It allows us to prove that the PostgreSQL business logic is correct in isolation, ensuring we have a green signal while we work on stabilizing the main workflow matrix.

My plan is to treat api-server-test-Postgres.yml as a temporary bridge. Once the legacy integration tests are migrated to Ginkgo and the main workflow is stable, we can consolidate everything there and remove this file.

Does this approach sound reasonable to you as an interim solution?

@kaikaila kaikaila force-pushed the feature/postgres-integration branch 11 times, most recently from 5181644 to 1594f9c Compare December 13, 2025 04:12
@kaikaila kaikaila force-pushed the feature/postgres-integration branch 3 times, most recently from 34559fb to c7fdd08 Compare December 18, 2025 06:26
…iveExperiment

- Extract repeated subquery SQL into resourceReferenceSubquery variable
- Unify code style: consistently use SetMap() throughout
- Add detailed comments explaining PostgreSQL $N placeholder handling
- Simplify error messages

optimization according to sanchesoon's suggestion

Signed-off-by: kaikaila <[email protected]>
@kaikaila kaikaila force-pushed the feature/postgres-integration branch from c7fdd08 to 8531316 Compare December 21, 2025 02:58
1. dialect.go: fmt.Sprintf for sql string

2. merge review diff from Humair

*_store.go: replace sql string concatenating with fmt.Sprintf

reuse escapeSQLString from dialect.go

replace == with errors.Is()

replace t.Errorf with require or assert; replace t.Fatalf with require.FailNow in integration test

hardcode expectations

rename QuoteFunction

QualifyIdentifier in storage package

unit tests for qualifyIdentifier

parameterized timeout as a constant

make target dev-kind-cluster with DB parameter

revert dev-kind-cluster bridge to 172.17.0.1 for linux

cleaning redundant postgres config in configmap

add database parameter to api-server-test standalone only

add CI job for postgres with argo 3.6.7

database "" to "mysql"

agents.md and readme.md

add unit test for placeholder numbering

update api-server-test to include postgres

document for kind-cluster-agnostic

test(integration): use consistent naming for test files to ensure database-agnostic sorting

fix(test): handle binary files in pipeline spec replacement and use first yaml file for smoke tests

fix branch path

update compile golden files

fix: Add error handling and retry logic when listing runs and recurring runs during cleanup.

fix: Use ReferenceKey ID for namespace filter when querying pipelines by namespace.

refactor: move table prefix dot appending logic from model methods to list options.

differentiate report_name

test: Remove UUID generation from experiment names in integration tests.

change 127.0.0.1 to localhost

feat: Use shared `test-and-report` action for running v2 API tests in the Postgres workflow.

skip upgrade tests

refactor: extract `default-allow-same-namespace` NetworkPolicy into dedicated files for `dev-kind` and `dev-kind-postgresql` environments.

refactor: remove quoteIdentifier from list.Options

update dev-kind to be OS-aware

Signed-off-by: kaikaila <[email protected]>
backend/Makefile: update k8s 1.34.0
api-server-test-Postgres: k8s back to 1.30.2,
trigger ci in my repo

Signed-off-by: kaikaila <[email protected]>
@kaikaila kaikaila force-pushed the feature/postgres-integration branch from 8531316 to fe697f3 Compare December 22, 2025 00:27
…environment variables from CI workflow.

Signed-off-by: kaikaila <[email protected]>
@kaikaila kaikaila force-pushed the feature/postgres-integration branch from fe697f3 to 4946472 Compare December 22, 2025 06:00
Signed-off-by: kaikaila <[email protected]>
@kaikaila kaikaila force-pushed the feature/postgres-integration branch from 6c48105 to eb946bd Compare December 22, 2025 08:10