chore: update the go module paths for updated repo #152
Conversation
Force-pushed from 78d405f to d811482.
PodDefaults were easy. The more involved one will be KFAM, since it imports the Profile Controller. I'm getting some errors on imports, but I'll try to resolve and update everything at once. I'll also thoroughly test in a running cluster to ensure the components function as expected.
Signed-off-by: Kimonas Sotirchos <[email protected]>
Force-pushed from 36d1678 to d8662d3.
I can't update the paths for KFAM in this PR, as it relies on the profile-controller module. This is problematic: if I change the import path to the new repo, `go mod tidy` fails:

# components/access-management
$ go mod tidy
go: finding module for package github.com/kubeflow/dashboard/components/profile-controller/api/v1beta1
go: found github.com/kubeflow/dashboard/components/profile-controller/api/v1beta1 in github.com/kubeflow/dashboard/components/profile-controller v0.0.0-20250925122946-4fa8bb9e5ec4
go: github.com/kubeflow/dashboard/components/access-management/kfam imports
	github.com/kubeflow/dashboard/components/profile-controller/api/v1beta1: github.com/kubeflow/dashboard/components/[email protected]: parsing go.mod:
	module declares its path as: github.com/kubeflow/kubeflow/components/profile-controller
	        but was required as: github.com/kubeflow/dashboard/components/profile-controller

In this PR I'll only touch the module paths of:

I'll then create a separate PR to update KFAM to point to the correct path of profile-controller in this repo, since by then the committed profile-controller will be correctly pointing to the updated module path.
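For illustration, the ordering constraint described above can be sketched as two go.mod fragments. This is a sketch, not the actual diff; the pseudo-version below is a placeholder for whatever `go mod tidy` resolves once the renamed module is committed:

```
// components/profile-controller/go.mod: must first land with the new path
// (this PR), so that the committed go.mod declares:
module github.com/kubeflow/dashboard/components/profile-controller

// components/access-management/go.mod: can then require it (follow-up PR).
// <pseudo-version> is a placeholder, e.g. v0.0.0-<timestamp>-<commit>:
require github.com/kubeflow/dashboard/components/profile-controller v0.0.0-<pseudo-version>
```

Until the first fragment is committed, any require of the new path fails with the "module declares its path as ... but was required as ..." error shown above, which is why KFAM has to wait for a follow-up PR.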
Signed-off-by: Kimonas Sotirchos <[email protected]>
Force-pushed from d8662d3 to 0ff8b72.
Ran some local tests on a kind cluster as well. Both the Profile Controller and the PodDefaults webhook are working as expected:

# poddefaults
{"level":"info","ts":"2025-10-07T20:23:29Z","logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate"}
I1007 20:23:29.743043 1 main.go:771] About to start serving webhooks: &http.Server{Addr:":4443", Handler:http.Handler(nil), DisableGeneralOptionsHandler:false, TLSConfig:(*tls.Config)(0xc00022a4e0), ReadTimeout:0, ReadHeaderTimeout:0, WriteTimeout:0, IdleTimeout:0, MaxHeaderBytes:0, TLSNextProto:map[string]func(*http.Server, *tls.Conn, http.Handler)(nil), ConnState:(func(net.Conn, http.ConnState))(nil), ErrorLog:(*log.Logger)(nil), BaseContext:(func(net.Listener) context.Context)(nil), ConnContext:(func(context.Context, net.Conn) context.Context)(nil), inShutdown:atomic.Bool{_:atomic.noCopy{}, v:0x0}, disableKeepAlives:atomic.Bool{_:atomic.noCopy{}, v:0x0}, nextProtoOnce:sync.Once{done:0x0, m:sync.Mutex{state:0, sema:0x0}}, nextProtoErr:error(nil), mu:sync.Mutex{state:0, sema:0x0}, listeners:map[*net.Listener]struct {}(nil), activeConn:map[*http.conn]struct {}(nil), onShutdown:[]func()(nil), listenerGroup:sync.WaitGroup{noCopy:sync.noCopy{}, state:atomic.Uint64{_:atomic.noCopy{}, _:atomic.align64{}, v:0x0}, sema:0x0}}
{"level":"info","ts":"2025-10-07T20:23:29Z","logger":"controller-runtime.certwatcher","msg":"Starting certificate watcher"}
I1007 20:24:15.091249 1 main.go:600] Entering mutatePods in mutating webhook
I1007 20:24:15.095709 1 main.go:646] fetched 1 poddefault(s) in namespace kubeflow-user-example-com
I1007 20:24:15.095752 1 main.go:662] 1 matching pod defaults, for pod test-0
I1007 20:24:15.095760 1 main.go:668] Matching PD detected of count 1, patching spec
I1007 20:24:15.095777 1 main.go:481] mutating pod: test-0
I1007 20:24:15.095787 1 main.go:683] applied poddefaults: access-ml-pipeline successfully on Pod: test-0
I1007 20:24:26.484450 1 main.go:600] Entering mutatePods in mutating webhook
I1007 20:24:26.486664 1 main.go:646] fetched 1 poddefault(s) in namespace kubeflow-user-example-com
I1007 20:24:26.686863 1 main.go:600] Entering mutatePods in mutating webhook
I1007 20:24:26.688741 1 main.go:646] fetched 1 poddefault(s) in namespace kubeflow-user-example-comand # profile-controller
time="2025-10-07T20:13:25Z" level=info msg="Server started"
2025/10/07 20:15:49 GET /metrics PrometheusMetrics 1.321514ms
2025/10/07 20:16:19 GET /metrics PrometheusMetrics 1.71305ms
2025/10/07 20:16:49 GET /metrics PrometheusMetrics 1.337685ms
2025/10/07 20:17:19 GET /metrics PrometheusMetrics 1.218922ms
2025/10/07 20:17:49 GET /metrics PrometheusMetrics 1.256022ms
2025/10/07 20:18:19 GET /metrics PrometheusMetrics 1.163588ms
2025/10/07 20:18:49 GET /metrics PrometheusMetrics 1.009868ms
2025/10/07 20:19:19 GET /metrics PrometheusMetrics 1.344573ms
2025/10/07 20:19:49 GET /metrics PrometheusMetrics 1.11658ms
2025/10/07 20:20:19 GET /metrics PrometheusMetrics 1.132262ms
2025/10/07 20:20:49 GET /metrics PrometheusMetrics 1.109772ms
2025/10/07 20:21:19 GET /metrics PrometheusMetrics 1.034993ms
2025/10/07 20:21:49 GET /metrics PrometheusMetrics 1.031613ms
2025/10/07 20:22:19 GET /metrics PrometheusMetrics 1.157279ms
2025/10/07 20:22:49 GET /metrics PrometheusMetrics 990.811µs
2025/10/07 20:23:19 GET /metrics PrometheusMetrics 1.332356ms
2025/10/07 20:23:38 http: superfluous response.WriteHeader call from github.com/kubeflow/kubeflow/components/access-management/kfam.(*KfamV1Alpha1Client).QueryClusterAdmin (api_default.go:272)
2025/10/07 20:23:38 GET /kfam/v1/role/clusteradmin?user=user%40example.com QueryClusterAdmin 46.107µs
2025/10/07 20:23:38 http: superfluous response.WriteHeader call from github.com/kubeflow/kubeflow/components/access-management/kfam.(*KfamV1Alpha1Client).ReadBinding (api_default.go:252)
2025/10/07 20:23:38 GET /kfam/v1/bindings?user=user%40example.com ReadBinding 2.458779ms
2025/10/07 20:23:38 http: superfluous response.WriteHeader call from github.com/kubeflow/kubeflow/components/access-management/kfam.(*KfamV1Alpha1Client).QueryClusterAdmin (api_default.go:272)
2025/10/07 20:23:38 GET /kfam/v1/role/clusteradmin?user=user%40example.com QueryClusterAdmin 43.312µs
2025/10/07 20:23:38 http: superfluous response.WriteHeader call from github.com/kubeflow/kubeflow/components/access-management/kfam.(*KfamV1Alpha1Client).ReadBinding (api_default.go:252)
2025/10/07 20:23:38 GET /kfam/v1/bindings?user=user%40example.com ReadBinding 1.289363ms
2025/10/07 20:23:38 http: superfluous response.WriteHeader call from github.com/kubeflow/kubeflow/components/access-management/kfam.(*KfamV1Alpha1Client).ReadBinding (api_default.go:252)
2025/10/07 20:23:38 GET /kfam/v1/bindings?namespace=kubeflow-user-example-com ReadBinding 56.156µs
2025/10/07 20:23:49 GET /metrics PrometheusMetrics 1.192108ms
@andyatmiami this should now be fully ready for a pass, when you get a chance.
/ok-to-test
andyatmiami left a comment:
Thanks for these changes @kimwnasptd! From a static and quasi-dynamic review, I think these changes are good!
I feel comfortable with the (significant) go.sum changes, as outlined in a review comment. I also searched the components/profile-controller and components/poddefaults-webhooks directories and can confirm that any remaining reference to the string kubeflow/kubeflow is wholly unrelated to the valid changes you have applied here.
Obviously the GHA checks are fairly robust in verifying a change such as this, but I also updated a running kind cluster with a (partial) Kubeflow installation and verified that the workloads modified in this PR reach a healthy RUNNING state within the cluster.
Testing Methodology
- create a minimal Kubeflow deployment using a custom script of mine
  $ ./setup-kubeflow-notebooks.sh --force
- build poddefaults-webhooks image from PR and load into kind
  $ DOCKER_BUILDKIT=0 gmake docker-build IMG=quay.io/rh-ee-astonebe/kubeflow-dashboard TAG=poddefaults-gomod-verify
  $ kind load docker-image quay.io/rh-ee-astonebe/kubeflow-dashboard:poddefaults-gomod-verify --name kubeflow
- build profile-controller image from PR and load into kind
  $ DOCKER_BUILDKIT=0 gmake docker-build IMG=quay.io/rh-ee-astonebe/kubeflow-dashboard TAG=profcon-gomod-verify
  $ kind load docker-image quay.io/rh-ee-astonebe/kubeflow-dashboard:profcon-gomod-verify --name kubeflow
- edit profiles-deployment to use custom image
  $ kubectl -n kubeflow edit deployment/profiles-deployment
- edit poddefaults-webhook deployment to use custom image
  $ kubectl -n kubeflow edit deployment/admission-webhook-deployment
  (note: this deployment was originally set up from the v1.10 branch, hence the "old" name)
- confirm workloads stay in a healthy state
  $ kubectl get deployments,replicasets,pods -n kubeflow
ℹ️ While the number of edits in this file initially concerned me, I can confirm that if I simply check out upstream/main and run go mod tidy in components/defaults, I see this very same change set introduced.
So it looks like this was the result of go mod tidy not having been run at some point(s) in the past, but this delta is unrelated to the changes @kimwnasptd is introducing in this PR.
I got notified that it's OK for maintainers to approve their own PRs. I personally am not in favour of this, but if this is something we go with for now, then I'll go forth. cc @juliusvonkohout

/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kimwnasptd

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
This is a follow-up of #135 and #123.
Its goal is to also update the `module` directives in our Go projects to reflect the updated GH repo.
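Concretely, for each touched component the change boils down to a one-line edit of the `module` directive. A sketch, using the profile-controller path that appears in the error output above:

```
// components/profile-controller/go.mod

// before:
module github.com/kubeflow/kubeflow/components/profile-controller

// after (this PR):
module github.com/kubeflow/dashboard/components/profile-controller
```

Import paths inside the component's .go files, and the require lines of any dependent modules (such as KFAM), then have to be updated to match the new declared path.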