
Commit b932d9b

📖 docs: add delete-machine info to Scaling Nodes page (#13195)
* docs: add delete-machine info to Scaling Nodes page

  The change adds a brief description of using `cluster.x-k8s.io/delete-machine` labels to control Machine scaling on the [Scaling Nodes](https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/scaling) page. The label is mentioned in the [supported labels](https://main.cluster-api.sigs.k8s.io/reference/api/labels-and-annotations.html?highlight=labels#supported-labels) reference, but I think it's relevant to include it on the Scaling Nodes page too.

* Add caveat when using delete-machine label

* Add line spacing
1 parent a2bbc01 commit b932d9b

File tree

1 file changed: +4, -0 lines

  • docs/book/src/tasks/automated-machine-management/scaling.md


docs/book/src/tasks/automated-machine-management/scaling.md

Lines changed: 4 additions & 0 deletions
@@ -6,6 +6,10 @@ Machines can be owned by scalable resources i.e. MachineSet and MachineDeploymen
You can scale MachineSets and MachineDeployments in or out by expressing intent via `.spec.replicas` or updating the scale subresource e.g `kubectl scale machinedeployment foo --replicas=5`.

+If you need to prioritize which Machines get deleted during scale-down, add the `cluster.x-k8s.io/delete-machine` label to the Machine. KCP or a MachineSet will delete labeled control plane or worker Machines first, and this label has top priority over all delete policies.
+
+**Note**: The label only affects MachineSet scale-down; in a MachineDeployment, the choice of MachineSet to scale-down may bypass labeled Machines.
+
When you delete a Machine directly or by scaling down, the same process takes place in the same order:
- The Node backed by that Machine will try to be drained indefinitely and will wait for any volume to be detached from the Node unless you specify a `.spec.nodeDrainTimeout`.
- CAPI uses default [kubectl draining implementation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) with `--ignore-daemonsets=true`. If you needed to ensure DaemonSets eviction you'd need to do so manually by also adding proper taints to avoid rescheduling.
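As an editorial illustration (not part of this commit's diff), here is a minimal sketch of the workflow the added lines describe, assuming a Machine named `md-0-abc12` owned by a MachineDeployment `foo` that currently has 5 replicas; the resource names and the empty label value are assumptions, since the docs only specify the label key:

```sh
# Mark the Machine that should be deleted first on the next scale-down.
# The label key is the one documented above; the Machine name and the
# empty value are placeholders for this sketch.
kubectl label machine md-0-abc12 cluster.x-k8s.io/delete-machine=""

# Scale the owning MachineDeployment down by one replica; the labeled
# Machine is prioritized, subject to the MachineSet-selection caveat
# noted in the added docs.
kubectl scale machinedeployment foo --replicas=4
```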

0 commit comments
