Conversation

@kimwnasptd
Member

This is a follow-up to #135 and #123.

Its goal is to also update the module directives in our Go projects to reflect the updated GH repo.
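For context, the change in each affected go.mod boils down to renaming the module directive; a sketch (the full set of touched files is in the diff):

# components/profile-controller/go.mod (diff sketch of the directive rename)
-module github.com/kubeflow/kubeflow/components/profile-controller
+module github.com/kubeflow/dashboard/components/profile-controller

along with the matching rewrite of import paths in the Go sources.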

@kimwnasptd kimwnasptd force-pushed the fix-kimwnasptd-correct-module-paths branch 2 times, most recently from 78d405f to d811482 Compare October 7, 2025 14:34
@google-oss-prow google-oss-prow bot added size/XL and removed size/XS labels Oct 7, 2025
@kimwnasptd kimwnasptd changed the title (WIP )chore: update the go module paths for updated repo chore: update the go module paths for updated repo Oct 7, 2025
@kimwnasptd kimwnasptd changed the title chore: update the go module paths for updated repo chore: update the go module paths for updated repo (WIP) Oct 7, 2025
@kimwnasptd
Member Author

PodDefaults were easy. The more involved one will be KFAM, since it imports the Profile Controller. I'm getting some errors on imports, but I'll try to resolve and update everything at once.

I'll also thoroughly test in a running cluster to ensure the components will be functioning as expected.

@kimwnasptd kimwnasptd force-pushed the fix-kimwnasptd-correct-module-paths branch from 36d1678 to d8662d3 Compare October 7, 2025 14:59
@kimwnasptd
Member Author

I can't update the paths for KFAM in this PR, as it relies on the module github.com/kubeflow/kubeflow/components/profile-controller.

This is problematic: if I change the import path to github.com/kubeflow/dashboard/components/profile-controller, it'll resolve to the latest commit from this repo. But at that commit the profile-controller still declares its module path as github.com/kubeflow/kubeflow/components/profile-controller.

# components/access-management
$ go mod tidy
go: finding module for package github.com/kubeflow/dashboard/components/profile-controller/api/v1beta1
go: found github.com/kubeflow/dashboard/components/profile-controller/api/v1beta1 in github.com/kubeflow/dashboard/components/profile-controller v0.0.0-20250925122946-4fa8bb9e5ec4
go: github.com/kubeflow/dashboard/components/access-management/kfam imports
        github.com/kubeflow/dashboard/components/profile-controller/api/v1beta1: github.com/kubeflow/dashboard/components/[email protected]: parsing go.mod:
        module declares its path as: github.com/kubeflow/kubeflow/components/profile-controller
                but was required as: github.com/kubeflow/dashboard/components/profile-controller
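(As an aside: in principle a replace directive in KFAM's go.mod could paper over this mismatch until the rename lands, since Go only requires a replacement module to declare the path on the right-hand side of the arrow. A hypothetical sketch, not what this PR does, with the pseudo-version left as a placeholder:)

# components/access-management/go.mod (hypothetical workaround, not applied here)
replace github.com/kubeflow/dashboard/components/profile-controller => github.com/kubeflow/kubeflow/components/profile-controller <pseudo-version>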

In this PR I'll only touch the module paths of:

  1. PodDefaults, which are self-contained
  2. Profile Controller, which doesn't import any other code

I'll then create a separate PR to update KFAM to point to the correct path of profile-controller in this repo, since by then the committed profile-controller will correctly declare itself as github.com/kubeflow/dashboard/components/profile-controller.
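Once this lands, the follow-up KFAM change should reduce to bumping the requirement and the imports; a sketch, with the pseudo-version left as a placeholder for whatever go mod tidy resolves:

# components/access-management/go.mod (follow-up sketch)
require github.com/kubeflow/dashboard/components/profile-controller <pseudo-version>

# corresponding import in the KFAM sources (the profilev1 alias is illustrative)
import profilev1 "github.com/kubeflow/dashboard/components/profile-controller/api/v1beta1"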

@kimwnasptd kimwnasptd force-pushed the fix-kimwnasptd-correct-module-paths branch from d8662d3 to 0ff8b72 Compare October 7, 2025 15:00
@kimwnasptd kimwnasptd changed the title chore: update the go module paths for updated repo (WIP) chore: update the go module paths for updated repo Oct 7, 2025
@kimwnasptd
Member Author

I ran some local tests on a kind cluster as well. Both the Profile Controller and the PodDefaults Webhook are working as expected:

# poddefaults
{"level":"info","ts":"2025-10-07T20:23:29Z","logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate"}
I1007 20:23:29.743043       1 main.go:771] About to start serving webhooks: &http.Server{Addr:":4443", Handler:http.Handler(nil), DisableGeneralOptionsHandler:false, TLSConfig:(*tls.Config)(0xc00022a4e0), ReadTimeout:0, ReadHeaderTimeout:0, WriteTimeout:0, IdleTimeout:0, MaxHeaderBytes:0, TLSNextProto:map[string]func(*http.Server, *tls.Conn, http.Handler)(nil), ConnState:(func(net.Conn, http.ConnState))(nil), ErrorLog:(*log.Logger)(nil), BaseContext:(func(net.Listener) context.Context)(nil), ConnContext:(func(context.Context, net.Conn) context.Context)(nil), inShutdown:atomic.Bool{_:atomic.noCopy{}, v:0x0}, disableKeepAlives:atomic.Bool{_:atomic.noCopy{}, v:0x0}, nextProtoOnce:sync.Once{done:0x0, m:sync.Mutex{state:0, sema:0x0}}, nextProtoErr:error(nil), mu:sync.Mutex{state:0, sema:0x0}, listeners:map[*net.Listener]struct {}(nil), activeConn:map[*http.conn]struct {}(nil), onShutdown:[]func()(nil), listenerGroup:sync.WaitGroup{noCopy:sync.noCopy{}, state:atomic.Uint64{_:atomic.noCopy{}, _:atomic.align64{}, v:0x0}, sema:0x0}}
{"level":"info","ts":"2025-10-07T20:23:29Z","logger":"controller-runtime.certwatcher","msg":"Starting certificate watcher"}
I1007 20:24:15.091249       1 main.go:600] Entering mutatePods in mutating webhook
I1007 20:24:15.095709       1 main.go:646] fetched 1 poddefault(s) in namespace kubeflow-user-example-com
I1007 20:24:15.095752       1 main.go:662] 1 matching pod defaults, for pod test-0
I1007 20:24:15.095760       1 main.go:668] Matching PD detected of count 1, patching spec
I1007 20:24:15.095777       1 main.go:481] mutating pod: test-0
I1007 20:24:15.095787       1 main.go:683] applied poddefaults: access-ml-pipeline successfully on Pod: test-0
I1007 20:24:26.484450       1 main.go:600] Entering mutatePods in mutating webhook
I1007 20:24:26.486664       1 main.go:646] fetched 1 poddefault(s) in namespace kubeflow-user-example-com
I1007 20:24:26.686863       1 main.go:600] Entering mutatePods in mutating webhook
I1007 20:24:26.688741       1 main.go:646] fetched 1 poddefault(s) in namespace kubeflow-user-example-com

and

# profile-controller
time="2025-10-07T20:13:25Z" level=info msg="Server started"
2025/10/07 20:15:49 GET /metrics PrometheusMetrics 1.321514ms
2025/10/07 20:16:19 GET /metrics PrometheusMetrics 1.71305ms
2025/10/07 20:16:49 GET /metrics PrometheusMetrics 1.337685ms
2025/10/07 20:17:19 GET /metrics PrometheusMetrics 1.218922ms
2025/10/07 20:17:49 GET /metrics PrometheusMetrics 1.256022ms
2025/10/07 20:18:19 GET /metrics PrometheusMetrics 1.163588ms
2025/10/07 20:18:49 GET /metrics PrometheusMetrics 1.009868ms
2025/10/07 20:19:19 GET /metrics PrometheusMetrics 1.344573ms
2025/10/07 20:19:49 GET /metrics PrometheusMetrics 1.11658ms
2025/10/07 20:20:19 GET /metrics PrometheusMetrics 1.132262ms
2025/10/07 20:20:49 GET /metrics PrometheusMetrics 1.109772ms
2025/10/07 20:21:19 GET /metrics PrometheusMetrics 1.034993ms
2025/10/07 20:21:49 GET /metrics PrometheusMetrics 1.031613ms
2025/10/07 20:22:19 GET /metrics PrometheusMetrics 1.157279ms
2025/10/07 20:22:49 GET /metrics PrometheusMetrics 990.811µs
2025/10/07 20:23:19 GET /metrics PrometheusMetrics 1.332356ms
2025/10/07 20:23:38 http: superfluous response.WriteHeader call from github.com/kubeflow/kubeflow/components/access-management/kfam.(*KfamV1Alpha1Client).QueryClusterAdmin (api_default.go:272)
2025/10/07 20:23:38 GET /kfam/v1/role/clusteradmin?user=user%40example.com QueryClusterAdmin 46.107µs
2025/10/07 20:23:38 http: superfluous response.WriteHeader call from github.com/kubeflow/kubeflow/components/access-management/kfam.(*KfamV1Alpha1Client).ReadBinding (api_default.go:252)
2025/10/07 20:23:38 GET /kfam/v1/bindings?user=user%40example.com ReadBinding 2.458779ms
2025/10/07 20:23:38 http: superfluous response.WriteHeader call from github.com/kubeflow/kubeflow/components/access-management/kfam.(*KfamV1Alpha1Client).QueryClusterAdmin (api_default.go:272)
2025/10/07 20:23:38 GET /kfam/v1/role/clusteradmin?user=user%40example.com QueryClusterAdmin 43.312µs
2025/10/07 20:23:38 http: superfluous response.WriteHeader call from github.com/kubeflow/kubeflow/components/access-management/kfam.(*KfamV1Alpha1Client).ReadBinding (api_default.go:252)
2025/10/07 20:23:38 GET /kfam/v1/bindings?user=user%40example.com ReadBinding 1.289363ms
2025/10/07 20:23:38 http: superfluous response.WriteHeader call from github.com/kubeflow/kubeflow/components/access-management/kfam.(*KfamV1Alpha1Client).ReadBinding (api_default.go:252)
2025/10/07 20:23:38 GET /kfam/v1/bindings?namespace=kubeflow-user-example-com ReadBinding 56.156µs
2025/10/07 20:23:49 GET /metrics PrometheusMetrics 1.192108ms

@kimwnasptd
Member Author

@andyatmiami this should now be fully ready for a pass, whenever you get a chance

@andyatmiami
Contributor

/ok-to-test

@andyatmiami andyatmiami left a comment
Contributor

Thanks for these changes @kimwnasptd! From a static and quasi-dynamic review, I think these changes are good!

I feel comfortable with the (significant) go.sum changes, as outlined in a review comment. I also searched the components/profile-controller and components/poddefaults-webhooks directories and can confirm that any remaining references to the string kubeflow/kubeflow are wholly unrelated to the changes you have applied here.

Obviously the GHA checks are fairly robust in verifying a change such as this, but I also updated a running kind cluster with a (partial) Kubeflow installation and verified that the workloads modified in this PR enter a healthy Running state within the cluster.

Testing Methodology

  • create a minimal Kubeflow deployment using a custom script of mine
    ./setup-kubeflow-notebooks.sh --force

  • build poddefaults-webhooks image from PR and load into kind

    $ DOCKER_BUILDKIT=0 gmake docker-build IMG=quay.io/rh-ee-astonebe/kubeflow-dashboard TAG=poddefaults-gomod-verify
    $ kind load docker-image quay.io/rh-ee-astonebe/kubeflow-dashboard:poddefaults-gomod-verify --name kubeflow
    
  • build profile-controller image from PR and load into kind

    $ DOCKER_BUILDKIT=0 gmake docker-build IMG=quay.io/rh-ee-astonebe/kubeflow-dashboard TAG=profcon-gomod-verify
    $ kind load docker-image quay.io/rh-ee-astonebe/kubeflow-dashboard:profcon-gomod-verify --name kubeflow
    
  • edit profiles-deployment to use custom image (a non-interactive alternative to these edit steps is sketched after this list)
    $ kubectl -n kubeflow edit deployment/profiles-deployment

  • edit poddefaults-webhook deployment to use custom image
    $ kubectl -n kubeflow edit deployment/admission-webhook-deployment

    • note this deployment was originally set up from the v1.10 branch, hence the "old" name
  • confirm workloads stay in healthy state
    $ kubectl get deployments,replicasets,pods -n kubeflow
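As a non-interactive alternative to the two edit steps above, the images could also be patched directly; a hypothetical sketch, assuming the container in each deployment is named manager (not verified here):

# hypothetical: swap the images without opening an editor
$ kubectl -n kubeflow set image deployment/profiles-deployment manager=quay.io/rh-ee-astonebe/kubeflow-dashboard:profcon-gomod-verify
$ kubectl -n kubeflow set image deployment/admission-webhook-deployment manager=quay.io/rh-ee-astonebe/kubeflow-dashboard:poddefaults-gomod-verify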

Contributor

ℹ️ While the number of edits in this file initially concerned me, I can confirm that if I simply check out upstream/main and run go mod tidy in components/defaults, I see this very same change set introduced...

So it looks like this was the result of go mod tidy not having been run at some point(s) in the past, but this delta is unrelated to the changes @kimwnasptd is introducing in this PR...
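For anyone wanting to reproduce the observation, the steps described above are just:

# reproduce the pre-existing go.sum drift on an unmodified checkout
$ git checkout upstream/main
$ cd components/defaults
$ go mod tidy
$ git diff --stat go.sum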

@kimwnasptd
Copy link
Member Author

I was notified that it's OK for maintainers to approve their own PRs. I'm personally not in favour of this, but if this is how we're operating for now then I'll go ahead. cc @juliusvonkohout

/approve

@google-oss-prow

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kimwnasptd

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@google-oss-prow google-oss-prow bot merged commit dc123d0 into kubeflow:main Oct 8, 2025
20 checks passed
@kimwnasptd kimwnasptd deleted the fix-kimwnasptd-correct-module-paths branch October 8, 2025 10:41