🐛 Wait for CAPD Cluster dependencies deletion #13329
alexander-demicev wants to merge 1 commit into kubernetes-sigs:main
Conversation
Signed-off-by: Alexandr Demicev <alexandr.demicev@suse.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here. Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
salasberryfin left a comment
Thanks @alexander-demicev.
/lgtm
LGTM label has been added. Details: Git tree hash: fc4f364df87758c2da1ef0ef44b59a251b0bc364
@@ -117,10 +117,27 @@ func (r *ClusterBackEndReconciler) ReconcileNormal(ctx context.Context, cluster

// ReconcileDelete handle docker backend for delete DevMachines.
> We've noticed an issue in the CAPRKE2 tests with the deletion of a CAPD cluster. When not done in the correct order, the CAPD cluster can be deleted before the machines, which blocks machine deletion.
This should be impossible. I wonder if there is an issue elsewhere.
Please see:
(and all the checks above before we delete the InfraCluster)

/hold
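For context, here is a rough paraphrase of the ordering the core Cluster controller enforces on deletion. This is a simplified sketch, not the actual cluster-api source: it only checks Machines rather than all descendants, the 10-second requeue is arbitrary, and the label/finalizer constants assume a recent cluster-api release.

```go
package sketch

import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// reconcileClusterDelete paraphrases the ordering: descendant Machines first,
// the InfrastructureCluster afterwards, the Cluster finalizer last.
func reconcileClusterDelete(ctx context.Context, c client.Client, cluster *clusterv1.Cluster) (ctrl.Result, error) {
	// Requeue while any Machine belonging to this Cluster still exists.
	machines := &clusterv1.MachineList{}
	if err := c.List(ctx, machines,
		client.InNamespace(cluster.Namespace),
		client.MatchingLabels{clusterv1.ClusterNameLabel: cluster.Name},
	); err != nil {
		return ctrl.Result{}, err
	}
	if len(machines.Items) > 0 {
		return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
	}

	// Only now is the InfrastructureCluster (e.g. the DockerCluster) deleted
	// via cluster.Spec.InfrastructureRef (elided here), and once it is gone
	// the Cluster finalizer is removed.
	controllerutil.RemoveFinalizer(cluster, clusterv1.ClusterFinalizer)
	return ctrl.Result{}, nil
}
```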
@sbueringer Thanks for the review. I wonder if the problem we're seeing now is the same one we saw in CAPA a while ago.
Let's say I run kubectl delete -f cluster.yaml, where the yaml contains all manifests for the cluster. The Cluster gets a deletion timestamp, and the timestamp is propagated to the DockerCluster through owner references. The Cluster controller includes the dependency check, but the DockerCluster starts reconciling anyway. ReconcileDelete removes the finalizer before the DockerMachines are gone and before the Cluster controller's checks pass. This causes the DockerCluster to be deleted while the DockerMachines remain stuck.
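To make the described flow concrete, a minimal sketch of an unguarded delete path, assuming the imports from the sketch above plus the CAPD API package imported as infrav1. The receiver type, the deleteLoadBalancer helper, and the finalizer name are illustrative, not the exact CAPD source.

```go
// Illustrative only: an unguarded ReconcileDelete that tears down the
// cluster-level infrastructure and drops the finalizer straight away.
func (r *ClusterBackEndReconciler) ReconcileDelete(ctx context.Context, dockerCluster *infrav1.DockerCluster) (ctrl.Result, error) {
	// Hypothetical helper standing in for the real backend cleanup
	// (e.g. removing the load balancer container).
	if err := r.deleteLoadBalancer(ctx, dockerCluster); err != nil {
		return ctrl.Result{}, err
	}

	// The finalizer is removed without checking for remaining DockerMachines,
	// so the DockerCluster can disappear while machines that still need the
	// cluster infrastructure are stuck in deletion.
	controllerutil.RemoveFinalizer(dockerCluster, infrav1.ClusterFinalizer)
	return ctrl.Result{}, nil
}
```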
Could be.
To clarify: the manifest contains the DockerCluster?
Deleting all objects of a Cluster at the same time is entirely unsupported, to be honest (similar to how deleting the entire namespace, which also puts deletionTimestamps on all objects at the same time, is unsupported).
> Cluster gets a deletion timestamp, and the timestamp is propagated to the DockerCluster through owner references
I think in your case the deletionTimestamp comes directly from kubectl delete. ownerRefs should only propagate the deletionTimestamp after the Cluster object is gone from etcd.
The reason deleting everything at the same time is unsupported is that it would be a huge effort to support correctly.
Every single controller would need safeguards that check other resources to figure out whether it is actually allowed to go through reconcileDelete yet.
E.g. the DockerMachine controller would have to figure out whether it's already time to delete worker Machines (vs. CP Machines, which should be deleted later), etc.
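As a purely hypothetical illustration of such a safeguard (nothing like this exists in CAPD today): a controller reconciling a control-plane machine would first have to confirm that no worker Machines remain. Imports match the first sketch, and the label constants assume a recent cluster-api version.

```go
// workersStillExist reports whether any non-control-plane Machine of the
// cluster still exists; a control-plane DockerMachine would have to wait for
// this to become false before going through its own reconcileDelete.
func workersStillExist(ctx context.Context, c client.Client, clusterName, namespace string) (bool, error) {
	machines := &clusterv1.MachineList{}
	if err := c.List(ctx, machines,
		client.InNamespace(namespace),
		client.MatchingLabels{clusterv1.ClusterNameLabel: clusterName},
	); err != nil {
		return false, err
	}
	for _, m := range machines.Items {
		// Machines without the control-plane label are workers.
		if _, isControlPlane := m.Labels[clusterv1.MachineControlPlaneLabel]; !isControlPlane {
			return true, nil
		}
	}
	return false, nil
}
```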
What this PR does / why we need it:
We've noticed an issue in the CAPRKE2 tests with the deletion of a CAPD cluster. When not done in the correct order, the CAPD cluster can be deleted before the machines, which blocks machine deletion. This PR introduces logic similar to what already exists in CAPA, waiting for dependent objects to be deleted before removing the infrastructure cluster.
CAPA PR for reference: kubernetes-sigs/cluster-api-provider-aws#5365
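Below is a sketch of the kind of dependency wait this adds, modeled on the CAPA approach referenced above rather than copied from the actual diff. Imports are as in the first sketch plus the CAPD API package as infrav1; DockerMachineList and the cluster-name label are assumptions about the relevant types.

```go
// dependentDockerMachinesRemain lists the DockerMachines still labeled with
// the cluster's name. ReconcileDelete can requeue while this returns true and
// only remove the DockerCluster finalizer once the list is empty.
func dependentDockerMachinesRemain(ctx context.Context, c client.Client, cluster *clusterv1.Cluster) (bool, error) {
	dockerMachines := &infrav1.DockerMachineList{}
	if err := c.List(ctx, dockerMachines,
		client.InNamespace(cluster.Namespace),
		client.MatchingLabels{clusterv1.ClusterNameLabel: cluster.Name},
	); err != nil {
		return false, err
	}
	return len(dockerMachines.Items) > 0, nil
}
```

With a check like this in place, the DockerCluster only loses its finalizer after every DockerMachine has been cleaned up, matching the ordering the core Cluster controller already expects.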
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
/area provider/infrastructure-docker