VPA: Implement in-place updates support #6652
Conversation
Welcome @jkyros!

Hi @jkyros. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
(force-pushed from 0211fd9 to db872fc)
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
(force-pushed from db872fc to 2a1040e)

(force-pushed from 2a1040e to 6be1549)
Rebased, and I had to adjust a couple of the newly-added tests to account for in-place. This isn't abandoned, but I was kind of hoping to resolve kubernetes/kubernetes#124712 first, so we could use in-place generally without having to test/document a bunch of corners where it doesn't work.
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
(force-pushed from 6be1549 to 6d16d41)
This just adds the UpdateModeInPlaceOrRecreate mode to the types so we can use it. I did not add InPlaceOnly, as that seemed contentious and it didn't seem like we had a good use case for it yet.
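For context, here is a minimal sketch of what that mode addition could look like, mirroring the shape of the existing VPA `UpdateMode` constants; the exact string value is an assumption, not necessarily what this commit uses:

```go
package example

// Sketch only: mirrors the shape of the VPA API's UpdateMode constants.
type UpdateMode string

const (
	// UpdateModeRecreate applies recommendations by evicting pods (existing behavior).
	UpdateModeRecreate UpdateMode = "Recreate"
	// UpdateModeInPlaceOrRecreate (new) tries an in-place resize first and
	// falls back to eviction when that is not possible. The string value here
	// is an assumption.
	UpdateModeInPlaceOrRecreate UpdateMode = "InPlaceOrRecreate"
)
```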
With the InPlacePodVerticalScaling feature, we can now update the resource requests and limits of pods in place rather than having to evict the pod to rewrite it. We do have to make sure, though (because apps could react badly to an update or require container restarts), that we limit the amount of disruption we introduce at once, so we only apply the updates that the updater has okayed. (And then, over in the updater, we meter them so they don't all get sent to the admission-controller at once.) This commit:
- allows the admission-controller to monitor update operations
- adds the new UpdateOrRecreate update mode to the list of possible update modes
- makes sure the admission-controller only patches pod update requests that were triggered by the updater, by using a special annotation (a hedged sketch of this gating follows below)
- makes sure the admission-controller removes the annotation upon patching, to signify that it is done
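As a rough illustration of that gating (the annotation key and function names here are hypothetical, not necessarily what this PR uses), the admission-controller would only patch pods that carry the updater's marker annotation, and would strip the marker as part of the patch:

```go
package example

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// Hypothetical annotation key; the PR may use a different one.
const inPlaceUpdateAnnotation = "vpa.example.com/in-place-update-requested"

// shouldPatchInPlace reports whether the pod was explicitly marked by the
// updater, so the admission-controller never resizes pods on its own.
func shouldPatchInPlace(pod *corev1.Pod) bool {
	_, ok := pod.Annotations[inPlaceUpdateAnnotation]
	return ok
}

// removeMarkerPatch returns a JSON-patch operation that deletes the marker
// annotation, signalling that the update request has been handled.
func removeMarkerPatch() string {
	// "/" and "~" must be escaped per RFC 6901 inside a JSON-patch path.
	key := strings.NewReplacer("~", "~0", "/", "~1").Replace(inPlaceUpdateAnnotation)
	return fmt.Sprintf(`{"op":"remove","path":"/metadata/annotations/%s"}`, key)
}
```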
Because of InPlacePodVerticalScaling, we can have a pod object whose resource spec is correct but whose status is not, because the pod may have been updated in place after the original admission. Until now this would have been ignored because "the spec looks correct", but we need to take the status into account as well if a resize is in progress. This commit:
- takes status resources into account for pods/containers that are being resized in place (sketched below)
- makes sure that any pods that are "stuck" in-place updating (i.e. the node doesn't have enough resources, either temporarily or permanently) still show up in the list as having "wrong" resources, so they can still get queued for eviction and be re-assigned to nodes that do have enough resources
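A sketch of a status-aware lookup, assuming the Kubernetes 1.27+ `ContainerStatus.Resources` field that InPlacePodVerticalScaling populates (the function name is illustrative, not this PR's):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// currentRequest returns the request the container is actually running with:
// during an in-place resize the spec already shows the desired value, so
// prefer the kubelet-reported status and fall back to the spec.
func currentRequest(pod *corev1.Pod, container string, res corev1.ResourceName) *resource.Quantity {
	for i := range pod.Status.ContainerStatuses {
		cs := &pod.Status.ContainerStatuses[i]
		if cs.Name == container && cs.Resources != nil {
			if q, ok := cs.Resources.Requests[res]; ok {
				return &q
			}
		}
	}
	for i := range pod.Spec.Containers {
		c := &pod.Spec.Containers[i]
		if c.Name == container {
			if q, ok := c.Resources.Requests[res]; ok {
				return &q
			}
		}
	}
	return nil
}
```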
This commit makes the eviction restrictor aware of in-place updates. While this could arguably be a separate restrictor, or be refactored into a shared "disruption restrictor", I chose not to do that at this time. I don't think eviction and in-place updates can be kept completely separate: they can both cause disruption (in-place less so), and both need to factor into the total disruption. So I just hacked the in-place update functions into the existing evictor and added some additional counters for disruption tracking. While we have the pod lifecycle to look at to figure out "where we are" in an eviction, we don't have that luxury with in-place updates, which is why we need the additional "IsInPlaceUpdating" helper.
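That helper could be as simple as checking the pod-level resize status that the kubelet reports; this is only a guess at its shape (the actual IsInPlaceUpdating in this PR may look at more signals):

```go
package example

import corev1 "k8s.io/api/core/v1"

// isInPlaceUpdating reports whether a resize is currently in flight for the
// pod. With InPlacePodVerticalScaling enabled, Status.Resize holds values such
// as "Proposed", "InProgress", "Deferred", or "Infeasible" while a resize is
// pending, and is cleared once it completes.
func isInPlaceUpdating(pod *corev1.Pod) bool {
	return pod.Status.Resize != ""
}
```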
The updater logic wasn't in-place aware, so I tried to make it so. The thought here is that we try to update in place if we can; if we can't, or if it gets stuck or can't satisfy the recommendation, we fall back to eviction. I tried to keep the "blast radius" small by stuffing the in-place logic into its own function and then falling back to eviction if it's not possible. It would be nice if we had some sort of "can the node support an in-place resize with the current recommendation?" check, but that seemed like a whole other can of worms and math.
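The intended flow, sketched with a hypothetical interface (none of these names come from the PR): try the in-place path first and evict only when it isn't allowed, fails, or has been stuck for too long.

```go
package example

import corev1 "k8s.io/api/core/v1"

// disruptor abstracts the two ways the updater can act on a recommendation.
type disruptor interface {
	InPlaceUpdate(pod *corev1.Pod) error // request an in-place resize
	Evict(pod *corev1.Pod) error         // classic eviction fallback
}

// applyRecommendation prefers an in-place resize and falls back to eviction.
// "stuck" would be derived from how long the pod has been mid-resize.
func applyRecommendation(d disruptor, pod *corev1.Pod, canInPlace, stuck bool) error {
	if canInPlace && !stuck {
		if err := d.InPlaceUpdate(pod); err == nil {
			return nil
		}
		// The in-place attempt failed; fall through to eviction.
	}
	return d.Evict(pod)
}
```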
We might want to add a few more combined disruption counters later, e.g. in-place + eviction totals, but for now this just adds some separate counters to keep track of what in-place updates are doing.
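For example (the metric name and labels are made up for illustration, not this PR's actual metrics), a separate counter for in-place attempts by outcome could look like:

```go
package example

import "github.com/prometheus/client_golang/prometheus"

// inPlaceUpdateAttempts counts in-place resize attempts by outcome
// (e.g. "applied", "fallback_eviction", "error"); the label values are illustrative.
var inPlaceUpdateAttempts = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "vpa",
		Subsystem: "updater",
		Name:      "in_place_update_attempts_total",
		Help:      "Number of in-place resize attempts made by the updater, by result.",
	},
	[]string{"result"},
)

func init() {
	prometheus.MustRegister(inPlaceUpdateAttempts)
}
```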
For now, this just updates the mock with the new functions I added to the eviction interface. We need some in-place test cases.
TODO(jkyros): come back here and look at this after you get it working
The updater now needs to be able to update pods. In the current approach, that's so it can add an annotation marking the pod as needing an in-place update. The admission controller is still doing the resource updating as part of patching; the updater is not updating resources directly. I wonder if it should?
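A minimal sketch of that updater-side marking, reusing the hypothetical annotation key from the earlier sketch: the updater only annotates the pod and leaves the actual resource patch to the admission-controller.

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// markForInPlaceUpdate annotates the pod so the admission-controller knows the
// updater has approved an in-place resize for it.
func markForInPlaceUpdate(ctx context.Context, c kubernetes.Interface, namespace, name string) error {
	patch := []byte(`{"metadata":{"annotations":{"vpa.example.com/in-place-update-requested":"true"}}}`)
	_, err := c.CoreV1().Pods(namespace).Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```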
So far this is just:
- Make sure it scales when it can

But we still need a bunch of other ones, like:
- Test fallback to eviction
- Test timeout/eviction when it gets stuck, etc.
In the event that we can't perform the whole update, this calculates a set of updates that should be disruptionless and only queues that partial set, omitting the parts that would cause disruption.
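One way to compute that partial set, assuming "disruptionless" means no container restart and using the container-level resize policy from the 1.27+ API (the helper names are illustrative, not this PR's):

```go
package example

import corev1 "k8s.io/api/core/v1"

// disruptionFreeSubset keeps only the per-container resource changes that the
// container's resize policy says can be applied without a restart.
func disruptionFreeSubset(pod *corev1.Pod, wanted map[string]corev1.ResourceList) map[string]corev1.ResourceList {
	out := map[string]corev1.ResourceList{}
	for _, c := range pod.Spec.Containers {
		requests, ok := wanted[c.Name]
		if !ok {
			continue
		}
		if requiresRestart(c, requests) {
			continue // disruptive: leave this container for a later update or eviction
		}
		out[c.Name] = requests
	}
	return out
}

// requiresRestart reports whether changing any of the given resources would
// trigger a container restart according to the container's ResizePolicy.
func requiresRestart(c corev1.Container, changed corev1.ResourceList) bool {
	for _, p := range c.ResizePolicy {
		if _, isChanged := changed[p.ResourceName]; isChanged && p.RestartPolicy == corev1.RestartContainer {
			return true
		}
	}
	return false
}
```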
Signed-off-by: Max Cao <[email protected]>
Signed-off-by: Max Cao <[email protected]>
(force-pushed from a35fb73 to b9b9217)
VPA: remove resolved comments and clean up stale code

Signed-off-by: Max Cao <[email protected]>
Update on this: this is mostly finished, to the point that I need review and feedback to guide the rest of the implementation. I have some local commits I haven't pushed yet which refactor a few of the TODOs. I'd really like to get this in front of the community for initial review, but I also want to clean up the commit history first so it doesn't look so messy, so I will spend some time on that. I may open a new PR in case I mess up the history here.
Signed-off-by: Max Cao <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jkyros

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/close

Work has moved over to #7673
@adrianmoisey: Closed this PR.

In response to this:

> /close Work has moved over to #7673

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What type of PR is this?
/kind feature
What this PR does / why we need it:
This is a "hack and slash" attempt at getting in-place scaling working according to AEP-4016. It mostly works, but it's still a mess and missing some details, so don't take it too seriously just yet.
The TL;DR is that it seems to work okay as long as the pod specifies limits, but if no limits are specified, there seems to be a high likelihood that the InPlacePodVerticalScaling feature gets stuck in `InProgress` seemingly indefinitely (well, or until we fall back and evict -- but I hadn't implemented that yet).

Which issue(s) this PR fixes:
Fixes #4016
Special notes for your reviewer:
Don't spend a bunch of time on actual review yet; it's a mess, it's littered with TODOs, parts of it are...hmm...questionable, and it needs tests. I just wanted to have something tangible to reference conceptually when I bring this up in the sig-autoscaling meeting.
Notable general areas of concern:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: