perf: respect multinode consolidation timeout in all cases #2025
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: rschalo. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Pull Request Test Coverage Report for Build 13745582402
💛 - Coveralls
@@ -236,6 +236,9 @@ func (p *Provisioner) NewScheduler(ctx context.Context, pods []*corev1.Pod, stat
	instanceTypes := map[string][]*cloudprovider.InstanceType{}
	for _, np := range nodePools {
		if ctx.Err() != nil {
Does this section of code take so much time that we think we need to handle this error at this level? I get that there's a trade-off between the number of times we write this check and how quickly we can respond.
We probably don't need to check here. It probably makes the most sense to just check the timeout between pods in scheduling.
Except that this context is timed out and continues on to cloudProvider.GetInstanceTypes. It's less that we're handling the error and more that we're silencing spurious logging.
That doesn't completely solve it, right? I think we just have to handle it generally because we can still race past this check and fire spurious errors.
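For illustration, here is a minimal, self-contained sketch of the pattern under discussion (expensiveLookup, the pool names, and the one-second deadline are placeholders, not Karpenter code): check ctx.Err() between iterations so an already-expired context is not handed to further cloud provider calls, while still handling the error from the call itself, since the deadline can fire after the check passes (the race noted above).

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// expensiveLookup stands in for a call like cloudProvider.GetInstanceTypes:
// it blocks for a while and returns an error once the context has expired.
func expensiveLookup(ctx context.Context, name string) error {
	select {
	case <-time.After(300 * time.Millisecond):
		return nil
	case <-ctx.Done():
		return fmt.Errorf("looking up %s: %w", name, ctx.Err())
	}
}

func main() {
	// Overall deadline, standing in for the multinode consolidation timeout.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	for _, np := range []string{"pool-a", "pool-b", "pool-c", "pool-d", "pool-e"} {
		// Check between iterations: once the deadline has passed, stop issuing
		// calls that can only fail and emit spurious errors.
		if ctx.Err() != nil {
			fmt.Println("timeout reached, stopping early:", ctx.Err())
			return
		}
		// The deadline can still fire mid-call (the race above), so the error
		// from the call itself is handled as well.
		if err := expensiveLookup(ctx, np); err != nil {
			fmt.Println("lookup interrupted:", err)
			return
		}
		fmt.Println("looked up", np)
	}
}
```

The early check mostly suppresses spurious downstream errors; correctness still relies on handling the context error wherever the callee returns it.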
Fixes #N/A
Description
Scheduling for 50 nodes in multinode consolidation can take a long time, especially in large clusters where a scheduling decision for a single node can take 20 seconds or longer. This can cause multinode consolidation to block drift, emptiness, and single-node consolidation for longer than intended.
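To make this concrete, below is a rough, self-contained sketch of the idea (tryMultiNode, trySingleNode, and the two-second budget are assumptions for illustration, not Karpenter's actual functions or default timeout): the multinode attempt runs under its own deadline, and once that deadline fires the disruption loop moves on to the cheaper methods instead of blocking them.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// tryMultiNode stands in for the multinode consolidation attempt; here it is
// deliberately slower than its budget so the timeout path is exercised.
func tryMultiNode(ctx context.Context) error {
	select {
	case <-time.After(5 * time.Second):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// trySingleNode stands in for a cheaper follow-up pass (single-node
// consolidation, drift, emptiness).
func trySingleNode(ctx context.Context) error {
	select {
	case <-time.After(200 * time.Millisecond):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	parent := context.Background()

	// Give the multinode attempt its own bounded budget (the 2s value is
	// purely illustrative) so a slow attempt cannot hold up everything else.
	mnCtx, cancel := context.WithTimeout(parent, 2*time.Second)
	err := tryMultiNode(mnCtx)
	cancel()

	switch {
	case errors.Is(err, context.DeadlineExceeded):
		// Bail out of the expensive attempt and move on rather than blocking
		// the remaining disruption methods.
		fmt.Println("multinode consolidation timed out, moving on:", err)
		if err := trySingleNode(parent); err != nil {
			fmt.Println("single-node consolidation failed:", err)
			return
		}
		fmt.Println("single-node consolidation succeeded")
	case err != nil:
		fmt.Println("multinode consolidation failed:", err)
	default:
		fmt.Println("multinode consolidation succeeded")
	}
}
```

The point is only that the expensive attempt is bounded, so the methods queued behind it are not starved.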
How was this change tested?
Deployed with a 5-second timeout and saw multinode consolidation bail before exhausting the list of candidates.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.