
[ca] fix: undo Scale operation when a scaleUpRequest times out#9132

Open
thatmidwesterncoder wants to merge 2 commits into kubernetes:master from thatmidwesterncoder:undo_scaleup_on_timeout

Conversation


@thatmidwesterncoder commented Jan 27, 2026

What type of PR is this?

/kind bug

What this PR does / why we need it:

This PR modifies the cluster-autoscaler to automatically revert (decrease) the target size of a node group when a scale-up request times out.

Currently, when a scale-up fails due to infrastructure provider capacity limits (e.g., CAPI infrastructure provider cannot provision new nodes), the autoscaler removes the scale-up request from tracking and puts the node group in backoff, but it does not decrease the target size back to its original value. This causes the infrastructure provider to indefinitely retry provisioning the failed nodes, even after the workload that triggered the scale-up has disappeared. Manual intervention is required to scale the cluster back down.

With this fix, when a scale-up request times out:

  1. The autoscaler calls DecreaseTargetSize() on the node group to revert the increase
  2. If the decrease fails, a warning is logged and an event is emitted
  3. The scale-up request is then removed from tracking as before

This allows clusters running close to infrastructure capacity to automatically recover from failed scale-ups without manual intervention.
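
For readers skimming the change, here is a minimal, self-contained sketch of the flow described above. The type names, the revertExpiredScaleUp helper, and the use of the standard log package are illustrative stand-ins (the real change lives in the clusterstate package and uses klog plus an event recorder); only the shape of the logic follows the PR description.

    package sketch

    import "log"

    // NodeGroup is a simplified stand-in for the cloud provider node group
    // interface; only the method this PR touches is included.
    type NodeGroup interface {
        Id() string
        DecreaseTargetSize(delta int) error
    }

    // ScaleUpRequest mirrors the fields referenced in the description above.
    type ScaleUpRequest struct {
        NodeGroup NodeGroup
        Increase  int
    }

    // revertExpiredScaleUp reverts a timed-out scale-up: decrease the target
    // size by the original increase, and if that fails, log a warning and emit
    // an event. The caller removes the request and applies backoff either way.
    func revertExpiredScaleUp(req *ScaleUpRequest, recordEvent func(reason, msg string)) {
        if req.Increase <= 0 {
            return
        }
        // DecreaseTargetSize takes a negative delta equal to the original increase.
        if err := req.NodeGroup.DecreaseTargetSize(-req.Increase); err != nil {
            log.Printf("Warning: failed to revert scale-up for node group %s: %v", req.NodeGroup.Id(), err)
            recordEvent("FailedToRevertScaleUp", err.Error())
        }
    }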

Which issue(s) this PR fixes:

Fixes #9120

Special notes for your reviewer:

  • The DecreaseTargetSize method is called with a negative value equal to the original Increase from the scale-up request
  • Error handling is in place - if the decrease fails, we log a warning and emit an event, but still proceed with removing the scale-up request and applying backoff
  • Existing tests were updated to include WithOnScaleUp handler since the test provider now needs to handle the decrease operation
  • Three new test cases were added:
    • TestExpiredScaleUpRevertsTargetSize - verifies target size is decreased on timeout (condensed sketch after this list)
    • TestExpiredScaleUpRevertsTargetSizeHandlesError - verifies graceful handling when decrease fails
    • TestExpiredScaleUpRevertsPartialIncrease - verifies correct decrease amount for partial increases
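
As a rough illustration of the first test above, here is a condensed sketch building on the stand-in types from the earlier sketch. The mockNodeGroup type is hypothetical; the real tests use the autoscaler's test cloud provider and its clusterstate registry.

    package sketch

    import (
        "testing"

        "github.com/stretchr/testify/mock"
    )

    // mockNodeGroup is a hypothetical testify mock satisfying the NodeGroup
    // stand-in above; only DecreaseTargetSize is interesting here.
    type mockNodeGroup struct {
        mock.Mock
    }

    func (m *mockNodeGroup) Id() string { return "ng1" }

    func (m *mockNodeGroup) DecreaseTargetSize(delta int) error {
        args := m.Called(delta)
        return args.Error(0)
    }

    func TestExpiredScaleUpRevertsTargetSizeSketch(t *testing.T) {
        ng := &mockNodeGroup{}
        ng.On("DecreaseTargetSize", -4).Return(nil)

        // Pretend a scale-up of 4 nodes was registered and max-node-provision-time
        // elapsed without the nodes becoming ready; the timeout path should revert it.
        req := &ScaleUpRequest{NodeGroup: ng, Increase: 4}
        revertExpiredScaleUp(req, func(reason, msg string) {})

        // The decrease is called with a negative value equal to the original increase.
        ng.AssertCalled(t, "DecreaseTargetSize", -4)
    }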

Does this PR introduce a user-facing change?

Cluster-autoscaler now automatically reverts the target size of a node group when a scale-up request times out. This prevents cloud providers from indefinitely retrying failed node provisioning attempts after infrastructure capacity limits are hit.

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Jan 27, 2026

linux-foundation-easycla bot commented Jan 27, 2026

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: thatmidwesterncoder / name: Jacob Lindgren (517b0eb, ae55033)

@k8s-ci-robot
Contributor

Welcome @thatmidwesterncoder!

It looks like this is your first PR to kubernetes/autoscaler 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/autoscaler has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. and removed do-not-merge/needs-area labels Jan 27, 2026
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: thatmidwesterncoder
Once this PR has been reviewed and has the lgtm label, please assign x13n for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Hi @thatmidwesterncoder. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Jan 27, 2026
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Jan 27, 2026
@thatmidwesterncoder force-pushed the undo_scaleup_on_timeout branch 2 times, most recently from 90291a6 to c06cd0a on February 10, 2026 at 16:10
@snasovich

@elmiko / @jackfrancis, hello. Seeing how you've reviewed another community PR, I'm wondering if you could look into this one as well (or please redirect to someone else). Thank you in advance, much appreciated.

Contributor

elmiko commented Feb 12, 2026

this seems like a nice fix, thank you @thatmidwesterncoder

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Feb 12, 2026
Contributor

@elmiko left a comment

this patch makes sense to me, and i appreciate the addition of the tests too. i have a couple questions and we definitely need to get another reviewer to examine this.

Review thread on the new revert hunk:

    // Attempt to revert the failed scale-up by decreasing target size.
    // This prevents cloud providers from indefinitely retrying failed provisioning attempts.
    if scaleUpRequest.Increase > 0 {
        klog.V(2).Infof("Reverting timed-out scale-up for node group %v by decreasing target size by %d",

@elmiko (Contributor):

i wonder if this log should also be warning level?

how frequently does this get emitted during a failure scenario?

@thatmidwesterncoder replied:

> i wonder if this log should also be warning level?

That is a good point - I missed that the log on line 302 is a warn as well so it makes sense to use the same level.

> how frequently does this get emitted during a failure scenario?

It essentially gets emitted every N minutes equivalent to the max-node-provision-time flag. This is per node group though, so if the autoscaler is attempting to scale M node groups it would appear N*M times. All in all not very often, and it won't show up unless something bad is going on with the infrastructure provider.

@elmiko (Contributor):

> It essentially gets emitted every N minutes equivalent to the max-node-provision-time flag.

this doesn't seem overly chatty to me, i was concerned that if we made this a warning level log then perhaps it would create too much spam in the logs. but i think it's slow enough that even with multiple node groups in failure it would not be too noisy.
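
For context, the level change discussed in this thread amounts to swapping which klog helper is used; a minimal illustration (the function and parameter names are placeholders, not the actual diff):

    package sketch

    import "k8s.io/klog/v2"

    // logRevert shows the same message at the two levels discussed: V(2) info,
    // which is only emitted when the autoscaler runs with -v=2 or higher, versus
    // a warning, matching the level of the existing log emitted when the revert
    // itself fails.
    func logRevert(id string, increase int) {
        klog.V(2).Infof("Reverting timed-out scale-up for node group %v by decreasing target size by %d", id, increase)
        klog.Warningf("Reverting timed-out scale-up for node group %v by decreasing target size by %d", id, increase)
    }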

Review thread on the error-handling test:

    assert.NoError(t, err)

    // Verify DecreaseTargetSize was called even though it returned an error
    mockedNodeGroup.AssertCalled(t, "DecreaseTargetSize", -4)

@elmiko (Contributor):

do we have any way to detect that the error path was utilized?

@thatmidwesterncoder replied:

yes, I updated the test to monitor for the events that would only be triggered during the error condition (FailedToRevertScaleUp).
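
One way such an assertion can look, assuming the test wires in client-go's fake event recorder (record.NewFakeRecorder); the helper below sketches the pattern and is not the code in the PR's tests.

    package sketch

    import (
        "strings"
        "testing"
        "time"

        "k8s.io/client-go/tools/record"
    )

    // assertFailedToRevertEvent drains the fake recorder and fails the test
    // unless a FailedToRevertScaleUp event shows up, which only happens when
    // DecreaseTargetSize returned an error.
    func assertFailedToRevertEvent(t *testing.T, recorder *record.FakeRecorder) {
        select {
        case ev := <-recorder.Events:
            if !strings.Contains(ev, "FailedToRevertScaleUp") {
                t.Fatalf("expected FailedToRevertScaleUp event, got %q", ev)
            }
        case <-time.After(time.Second):
            t.Fatal("no event recorded; the error path was not exercised")
        }
    }

The recorder would be created with record.NewFakeRecorder(10) and passed wherever the cluster-state registry expects an event recorder.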

Review thread on the partial-increase test:

    assert.Nil(t, clusterstate.scaleUpRequests["ng1"])
    }

    func TestExpiredScaleUpRevertsPartialIncrease(t *testing.T) {

@elmiko (Contributor):

given the title of this test, i would have expected that one of the two nodes requested for scale up has failed. could you highlight the differences from the TestExpiredScaleUpRevertsTargetSize, i'm having trouble seeing them.

@thatmidwesterncoder replied:

Ah - I should have added some nodes to the mocked node group to make it a little more clear. Essentially I wanted to prove that when some nodes come online as ready, but not enough to satisfy the scale request, the autoscaler would still time out waiting for the remaining nodes, resulting in the entire scale-up operation getting undone. I updated it to demonstrate that a little better, hopefully.

@elmiko (Contributor):

thanks! i figured that is what you wanted to test, i was having difficulty seeing the difference.

Contributor

@elmiko left a comment

thanks for the updates!

/lgtm

given the nature of this change i definitely think we should get another set of eyes on the review. cc @jackfrancis @towca

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 12, 2026

Labels

  • area/cluster-autoscaler
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • kind/bug - Categorizes issue or PR as related to a bug.
  • lgtm - "Looks good to me", indicates that a PR is ready to be merged.
  • ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
  • release-note - Denotes a PR that will be considered when it comes time to generate release notes.
  • size/L - Denotes a PR that changes 100-499 lines, ignoring generated files.


Development

Successfully merging this pull request may close these issues.

[cluster-autoscaler] Autoscaler does not scale down after failed scale-up and workload disappears

4 participants