Fix nil pointer dereference in atomicScaleDownNode #9453
bpineau wants to merge 1 commit into kubernetes:master
Conversation
NodeGroupForNode can return (nil, nil) when a node's instance is no
longer part of any registered node group. This can happen when the
GCE MIG instance cache is regenerated (by the background hourly
goroutine or during a concurrent fillMigInstances call) between the
pre-filtering step and the atomicScaleDownNode call: the node passes
the pre-filter with stale cache data, but by the time
atomicScaleDownNode queries NodeGroupForNode again, the refreshed
cache no longer maps the instance to a MIG.
In any case, the CloudProvider interface spec allows providers to return nil:
```go
// NodeGroupForNode returns the node group for the given node, nil if the node
// should not be processed by cluster autoscaler, or non-nil error if such
// occurred. Must be implemented.
NodeGroupForNode(*apiv1.Node) (NodeGroup, error)
```
The crash manifests as:
```
panic: runtime error: invalid memory address or nil pointer dereference
goroutine 11809 [running]:
...scaledown/planner.(*Planner).atomicScaleDownNode(0xc00183c000, 0xc0f066c5c0)
planner.go:322 +0x8b
```
Add a nil check for the returned nodeGroup, consistent with all other
call sites of NodeGroupForNode in the codebase (pre_filtering_processor,
eligibility checker).
Makes sense. Thanks @bpineau! /lgtm |
/retest |
@bpineau: The following test failed.
Thanks @Choraden, the failing test seems to be an unrelated known issue?
What type of PR is this?
/kind bug