[WIP] [Feat] Re-do mk admission after eviction in worker cluster #8477
base: main
Conversation
Skipping CI for Draft Pull Request.
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it.
✅ Deploy Preview for kubernetes-sigs-kueue canceled.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: mszadkow. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
/ok-to-test
Force-pushed from 7d879a3 to 16cb9d9.
/retest
@mszadkow: The following test failed. Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
olekzabl left a comment:
Looks good! Just a few minor comments.
}

// workload eviction on worker cluster
log.V(5).Info("Workload gets evicted in the remote cluster", "cluster", evictedRemote)
Nit: this present tense feels slightly confusing; IIUC the workload already got evicted.
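A minimal sketch of the past-tense wording this nit seems to ask for (the exact phrasing is an assumption, not the author's confirmed change):

```go
// Possible rewording: the eviction has already happened by the time this line runs,
// so log it in the past tense.
log.V(5).Info("Workload was evicted in the remote cluster", "cluster", evictedRemote)
```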
acs.LastTransitionTime = metav1.NewTime(w.clock.Now())
workload.SetAdmissionCheckState(&wl.Status.AdmissionChecks, *acs, w.clock)
wl.Status.ClusterName = nil
wl.Status.NominatedClusterNames = nil
Out of curiosity - can this have an effect, given this rule?
(or is this intended to replace an empty list with nil in some edge cases?)
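As a side note on the parenthetical: in Go, a nil slice and an empty slice are distinct values even though both have length zero, which is the kind of edge case the question hints at. A tiny standalone illustration (not taken from the PR):

```go
package main

import "fmt"

func main() {
	var a []string  // nil slice
	b := []string{} // empty, non-nil slice

	fmt.Println(a == nil, b == nil) // true false
	fmt.Println(len(a), len(b))     // 0 0
}
```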
}

for cluster := range group.remotes {
	if err := client.IgnoreNotFound(group.RemoveRemoteObjects(ctx, cluster)); err != nil {
In most cases, other calls to RemoveRemoteObjects in this file are followed by group.remotes[cluster] = nil.
Should it be done also here?
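A sketch of the pattern the comment describes, with the extra reset added; whether it belongs here is exactly the open question, the names come from the diff context above, and the error handling shown is an assumption:

```go
for cluster := range group.remotes {
	if err := client.IgnoreNotFound(group.RemoveRemoteObjects(ctx, cluster)); err != nil {
		return err // hypothetical handling; the PR may propagate errors differently
	}
	// Reset the cached remote copy, as done after other RemoveRemoteObjects calls in this file.
	group.remotes[cluster] = nil
}
```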
createdAtWorker := ""

ginkgo.By("Checking that the workload is created in one of the workers", func() {
Below, you'll modify the worker's CQ limits to control what it can fit.
If so, how about using that trick once more, to get control over where the wl lands initially?
AFAICS the initial CPU quotas are: 2 at worker1 and 1 at worker2.
So if you request 1.5 for your workload, it'll certainly land on worker1.
Then, you could swap the quotas (say, first set 2 at w2, then set 1 at w1) and verify that the workload moved to w2.
Compared to what you have now, that's +1 CQ update but -4 if blocks (if I'm not mistaken).
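A rough sketch of that idea, in the style of the surrounding e2e test; the fixture names (worker1ClusterQueue, worker2ClusterQueue, k8sWorker1Client, k8sWorker2Client) and the util.Timeout/util.Interval constants are assumptions about the test suite, not verified code:

```go
ginkgo.By("Raising worker2's CPU quota so the 1.5-CPU workload can fit there", func() {
	gomega.Eventually(func(g gomega.Gomega) {
		cq := &kueue.ClusterQueue{}
		g.Expect(k8sWorker2Client.Get(ctx, client.ObjectKeyFromObject(worker2ClusterQueue), cq)).To(gomega.Succeed())
		cq.Spec.ResourceGroups[0].Flavors[0].Resources[0].NominalQuota = resource.MustParse("2")
		g.Expect(k8sWorker2Client.Update(ctx, cq)).To(gomega.Succeed())
	}, util.Timeout, util.Interval).Should(gomega.Succeed())
})

ginkgo.By("Lowering worker1's CPU quota so the workload no longer fits there", func() {
	gomega.Eventually(func(g gomega.Gomega) {
		cq := &kueue.ClusterQueue{}
		g.Expect(k8sWorker1Client.Get(ctx, client.ObjectKeyFromObject(worker1ClusterQueue), cq)).To(gomega.Succeed())
		cq.Spec.ResourceGroups[0].Flavors[0].Resources[0].NominalQuota = resource.MustParse("1")
		g.Expect(k8sWorker1Client.Update(ctx, cq)).To(gomega.Succeed())
	}, util.Timeout, util.Interval).Should(gomega.Succeed())
})
```

With quotas arranged like this, the workload initially has to land on worker1, and after the swap only worker2 can re-admit it, so the test no longer needs to branch on where the workload landed first.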
| ginkgo.By("Checking that the workload is re-admitted in the other worker cluster", func() { | ||
| gomega.Eventually(func(g gomega.Gomega) { | ||
| g.Expect(k8sManagerClient.Get(ctx, wlKey, managerWl)).To(gomega.Succeed()) | ||
| g.Expect(managerWl.Status.ClusterName).NotTo(gomega.HaveValue(gomega.Equal(createdAtWorker))) |
You could also check that it's not empty.
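For instance (a sketch reusing the assertion quoted above):

```go
// Also assert that ClusterName is set at all, not just different from the original worker.
g.Expect(managerWl.Status.ClusterName).NotTo(gomega.BeNil())
g.Expect(managerWl.Status.ClusterName).NotTo(gomega.HaveValue(gomega.BeEmpty()))
g.Expect(managerWl.Status.ClusterName).NotTo(gomega.HaveValue(gomega.Equal(createdAtWorker)))
```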
	}, gomega.Equal(completedJobCondition))))
	})
})

ginkgo.It("Should redo the admission process once the workload looses Admission in the worker cluster", func() {
Suggested change:
- ginkgo.It("Should redo the admission process once the workload looses Admission in the worker cluster", func() {
+ ginkgo.It("Should redo the admission process once the workload loses Admission in the worker cluster", func() {
| ginkgo.By("check manager's workload ClusterName reset", func() { | ||
| gomega.Eventually(func(g gomega.Gomega) { | ||
| managerWl := &kueue.Workload{} | ||
| g.Expect(managerTestCluster.client.Get(worker1TestCluster.ctx, wlLookupKey, managerWl)).To(gomega.Succeed()) |
Suggested change:
- g.Expect(managerTestCluster.client.Get(worker1TestCluster.ctx, wlLookupKey, managerWl)).To(gomega.Succeed())
+ g.Expect(managerTestCluster.client.Get(managerTestCluster.ctx, wlLookupKey, managerWl)).To(gomega.Succeed())
(or are there reasons to use the other context here?)
// workload eviction on worker cluster
log.V(5).Info("Workload gets evicted in the remote cluster", "cluster", evictedRemote)
needsACUpdate := acs.State == kueue.CheckStateReady
if err := workload.PatchAdmissionStatus(ctx, w.client, group.local, w.clock, func(wl *kueue.Workload) (bool, error) {
Why would we do it when needsACUpdate is false?
(Currently, the update func does nothing in this case, but it still returns true - looks like we'd send an empty patch request to the apiserver?)
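A minimal sketch of one way to avoid that, assuming (as the comment implies) that returning false from the update func makes PatchAdmissionStatus skip the API call; this is not the author's confirmed fix:

```go
if err := workload.PatchAdmissionStatus(ctx, w.client, group.local, w.clock, func(wl *kueue.Workload) (bool, error) {
	if !needsACUpdate {
		// Nothing to change: report false so no (empty) patch request is sent.
		return false, nil
	}
	// ... update the admission check state and reset the cluster fields as in the PR ...
	return true, nil
}); err != nil {
	return err // hypothetical handling; the actual code path may differ
}
```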
What type of PR is this?
/kind feature
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #8302
Special notes for your reviewer:
Still working on the E2E, as I want to set up a situation where the workload cannot be re-admitted on the worker that evicted it in the first place, but we don't know in advance which worker that will be.
Does this PR introduce a user-facing change?