Make HighNodeUtilization select 1 node if all nodes are underutilized #1616
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has not yet been approved. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Welcome @zoonage!
Hi @zoonage. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This is the first time I've tried to use the descheduler. How are the *NodeUtilization plugins preventing pods from being re-scheduled onto the node they've just been evicted from?
I've added PreferNoSchedule tainting to nodes now to avoid scheduling on nodes we're trying to remove.
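For illustration only (not the PR's actual code), here is a minimal client-go sketch of that kind of tainting; the taint key, package name, and the lack of retry/conflict handling are assumptions:

```go
// Hypothetical sketch: taint a node with PreferNoSchedule so the scheduler
// avoids (but is not strictly forbidden from) placing new pods on a node the
// descheduler intends to drain.
package example

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// taintPreferNoSchedule appends a PreferNoSchedule taint to the named node,
// skipping the update if an equivalent taint is already present.
// The taint key "descheduler.example.io/draining" is illustrative only.
func taintPreferNoSchedule(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}

	taint := v1.Taint{
		Key:    "descheduler.example.io/draining",
		Effect: v1.TaintEffectPreferNoSchedule,
	}
	for _, existing := range node.Spec.Taints {
		if existing.Key == taint.Key && existing.Effect == taint.Effect {
			return nil // already tainted
		}
	}

	node.Spec.Taints = append(node.Spec.Taints, taint)
	_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}
```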
Just realised the tainting has an unintended effect on LowNodeUtilization; will give this a rethink.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the PR is closed. You can mark this PR as fresh with /remove-lifecycle stale, close this PR with /close, or offer to help out with Issue Triage. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the PR is closed. You can mark this PR as fresh with /remove-lifecycle rotten, close this PR with /close, or offer to help out with Issue Triage. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
Hi @zoonage, do you have any plans to move this forward? I think this is a very nice addition to the plugin.

Heya, yeah I do, I've just been time-constrained a lot this year. Hoping to get some time for it in August to wrap this up.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the PR is closed. You can reopen this PR with /reopen, mark this PR as fresh with /remove-lifecycle rotten, or offer to help out with Issue Triage. Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closed this PR in response to the /close above. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/remove-lifecycle rotten

Skimming your change description, it looks like what this needs is a combination of the two to work. My approach was flawed because it assumes you can treat each node as identical, but there are plenty of cases where that isn't true (e.g. node groups, pods with AZ-bound PVs, etc.). The place I'm working at currently isn't using k8s, so unfortunately I don't have the time (or infrastructure) to test these changes right now; however, if you want to take any of my changes, please feel free to use them.
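To make one of those cases concrete, here is a small hedged sketch (not from this PR) of detecting whether a pod is tied to a node-bound PersistentVolume, such as an AZ-pinned disk, which is one reason its node cannot be treated as interchangeable with the others. Function and package names are illustrative.

```go
// Hypothetical sketch: a pod backed by a PersistentVolume with required node
// affinity (e.g. an AZ-bound disk) can only be rescheduled onto nodes that
// satisfy that affinity, so its node is not interchangeable with the rest.
package example

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podHasNodeBoundVolume reports whether any of the pod's bound PVCs point at
// a PersistentVolume that restricts which nodes it can be mounted on.
func podHasNodeBoundVolume(ctx context.Context, client kubernetes.Interface, pod *v1.Pod) (bool, error) {
	for _, vol := range pod.Spec.Volumes {
		if vol.PersistentVolumeClaim == nil {
			continue
		}
		pvc, err := client.CoreV1().PersistentVolumeClaims(pod.Namespace).
			Get(ctx, vol.PersistentVolumeClaim.ClaimName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Spec.VolumeName == "" {
			continue // claim not bound to a volume yet
		}
		pv, err := client.CoreV1().PersistentVolumes().Get(ctx, pvc.Spec.VolumeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pv.Spec.NodeAffinity != nil && pv.Spec.NodeAffinity.Required != nil {
			return true, nil // volume is pinned to a subset of nodes (e.g. one AZ)
		}
	}
	return false, nil
}
```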
Eventually reconcile the issue in #725 by tainting and removing 1 node at a time on each run.

There's definitely a more efficient way: work out how many nodes could be removed to achieve a certain resource utilisation density on the remaining nodes. However, this is a quick fix that eventually reconciles the cluster into the desired state.
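As a rough illustration of that "more efficient way" (again, not code from this PR), the sketch below estimates how many of the least-utilised nodes could in principle be drained while keeping the remaining nodes under a target utilisation. It deliberately reduces the problem to CPU requests, and every name in it is assumed:

```go
// Hypothetical sketch: estimate how many underutilised nodes could be removed
// while keeping aggregate CPU requests within a target fraction of the
// remaining nodes' allocatable CPU. It ignores real scheduling constraints
// such as node groups, zone-bound volumes, and per-pod fit, which is exactly
// why treating nodes as interchangeable breaks down in practice.
package example

import "sort"

// nodeUsage pairs a node's requested CPU with its allocatable CPU, in millicores.
type nodeUsage struct {
	Name             string
	RequestedMilli   int64
	AllocatableMilli int64
}

// removableNodeCount returns how many of the emptiest nodes could be removed
// so that the cluster's total requests still fit within targetUtilisation
// (e.g. 0.8 for 80%) of the remaining allocatable CPU.
func removableNodeCount(nodes []nodeUsage, targetUtilisation float64) int {
	// Consider the least-loaded nodes first; they are the cheapest to drain.
	sort.Slice(nodes, func(i, j int) bool {
		return nodes[i].RequestedMilli < nodes[j].RequestedMilli
	})

	var totalRequested, totalAllocatable int64
	for _, n := range nodes {
		totalRequested += n.RequestedMilli
		totalAllocatable += n.AllocatableMilli
	}

	removed := 0
	for _, n := range nodes {
		remaining := totalAllocatable - n.AllocatableMilli
		// Stop once removing another node would push the remaining nodes
		// above the target utilisation (or leave no capacity at all).
		if remaining <= 0 || float64(totalRequested) > targetUtilisation*float64(remaining) {
			break
		}
		totalAllocatable = remaining
		removed++
	}
	return removed
}
```

The one-node-per-run approach in the PR sidesteps having to model those constraints at all, at the cost of needing several descheduling cycles to converge on the desired state.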