@@ -117,10 +117,27 @@ func (r *ClusterBackEndReconciler) ReconcileNormal(ctx context.Context, cluster

// ReconcileDelete handles the docker backend for deleting DevMachines.
Member:

We've noticed an issue in the CAPRKE2 tests with the deletion of a CAPD cluster. When the deletion is not done in the correct order, the CAPD cluster can be deleted before the machines, which blocks machine deletion.

This should be impossible. I wonder if there is an issue elsewhere.

Please see:

if cluster.Spec.InfrastructureRef.IsDefined() {
(and all the checks above before we delete the InfraCluster)

/hold

Contributor Author:

@sbueringer Thanks for the review. I wonder if the problem we're seeing now is the same one we saw in CAPA a while ago.
Let's say I run kubectl delete -f cluster.yaml, where the yaml contains all manifests for the cluster. The Cluster gets a deletion timestamp, and the timestamp is propagated to the DockerCluster through owner references. The Cluster controller includes the dependency check, but the DockerCluster starts reconciling anyway. ReconcileDelete removes the finalizer before the DockerMachines are gone and before the Cluster controller's checks pass. This causes the DockerCluster to be deleted while the DockerMachines remain stuck.

sbueringer (Member), Feb 12, 2026:

Could be.

To clarify: the manifest contains the DockerCluster?

Deleting all objects of a Cluster at the same time is entirely unsupported, to be honest (similar to how just deleting the entire namespace, which also puts deletionTimestamps on all objects at the same time, is unsupported).

Cluster gets a deletion timestamp, and the timestamp is propagated to the DockerCluster through owner references

I think in your case the deletionTimestamp comes directly from kubectl delete. ownerRefs should only propagate the deletionTimestamp after the Cluster object is gone from etcd.
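
One way to confirm that in a repro is to check whether the owning Cluster can still be fetched while the DevCluster already carries a deletionTimestamp; if it can, the timestamp came from the direct kubectl delete rather than from garbage collection via ownerRefs. A purely illustrative diagnostic helper (not part of this PR, assuming the clusterv1, infrav1, apierrors, and client aliases used elsewhere in this file):

// ownerStillExists is an illustrative diagnostic helper, not part of the PR.
func ownerStillExists(ctx context.Context, c client.Client, devCluster *infrav1.DevCluster) (bool, error) {
	for _, ref := range devCluster.OwnerReferences {
		if ref.Kind != "Cluster" {
			continue
		}
		cluster := &clusterv1.Cluster{}
		err := c.Get(ctx, client.ObjectKey{Namespace: devCluster.Namespace, Name: ref.Name}, cluster)
		if err == nil {
			// Owner still exists in etcd, so the deletionTimestamp on the DevCluster
			// was set directly (e.g. by kubectl delete), not by the garbage collector.
			return true, nil
		}
		if !apierrors.IsNotFound(err) {
			return false, err
		}
	}
	return false, nil
}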

sbueringer (Member), Feb 12, 2026:

Deleting everything at the same time is unsupported because it's a huge effort to support this correctly.

Every single controller would need safeguards to check other resources to figure out if it is already allowed to go through reconcileDelete.

E.g. the DockerMachine controller would have to figure out if it's already time to delete worker Machines (vs. CP Machines that should be deleted later), etc...
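
A rough sketch of the kind of check that would be needed (illustrative only, not actual CAPD code; it only assumes the standard clusterv1 labels): a reconciler handling a control-plane DockerMachine could hold back its deletion while worker Machines for the same Cluster still exist.

// workerMachinesRemaining is an illustrative safeguard, not part of this PR:
// a DockerMachine reconciler for a control-plane Machine could use it to wait
// until all worker Machines of the Cluster are gone before tearing itself down.
func workerMachinesRemaining(ctx context.Context, c client.Client, cluster *clusterv1.Cluster) (bool, error) {
	machines := &clusterv1.MachineList{}
	if err := c.List(ctx, machines,
		client.InNamespace(cluster.Namespace),
		client.MatchingLabels{clusterv1.ClusterNameLabel: cluster.Name},
	); err != nil {
		return false, err
	}
	for i := range machines.Items {
		// Machines without the cluster.x-k8s.io/control-plane label are workers.
		if _, isControlPlane := machines.Items[i].Labels[clusterv1.MachineControlPlaneLabel]; !isControlPlane {
			return true, nil
		}
	}
	return false, nil
}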

func (r *ClusterBackEndReconciler) ReconcileDelete(ctx context.Context, cluster *clusterv1.Cluster, dockerCluster *infrav1.DevCluster) (ctrl.Result, error) {
log := ctrl.LoggerFrom(ctx)

if dockerCluster.Spec.Backend.Docker == nil {
return ctrl.Result{}, errors.New("DockerBackendReconciler can't be called for DevClusters without a Docker backend")
}

// Check if there are any dependent DockerMachines still being deleted.
// We need to wait for all machines to be gone before deleting the cluster infrastructure.
numDependencies, err := r.dependencyCount(ctx, cluster)
if err != nil {
log.Error(err, "error getting DockerCluster dependencies")
return ctrl.Result{}, err
}

if numDependencies > 0 {
log.Info("DockerCluster still has dependent DockerMachines - requeuing", "dependencyCount", numDependencies)
return ctrl.Result{RequeueAfter: 20 * time.Second}, nil
}

log.Info("DockerCluster has no dependent DockerMachines, proceeding with deletion")

// Create a helper for managing a docker container hosting the loadbalancer.
externalLoadBalancer, err := docker.NewLoadBalancer(ctx, cluster,
dockerCluster.Spec.Backend.Docker.LoadBalancer.ImageRepository,
@@ -209,3 +226,23 @@ func (r *ClusterBackEndReconciler) PatchDevCluster(ctx context.Context, patchHel
}},
)
}

// dependencyCount returns the count of DockerMachines that are dependent on this cluster.
func (r *ClusterBackEndReconciler) dependencyCount(ctx context.Context, cluster *clusterv1.Cluster) (int, error) {
log := ctrl.LoggerFrom(ctx)
log.V(4).Info("Looking for DockerCluster dependencies")

listOptions := []client.ListOption{
client.InNamespace(cluster.Namespace),
client.MatchingLabels(map[string]string{clusterv1.ClusterNameLabel: cluster.Name}),
}

machines := &infrav1.DockerMachineList{}
if err := r.Client.List(ctx, machines, listOptions...); err != nil {
return 0, errors.Wrapf(err, "failed to list DockerMachines for cluster %s/%s", cluster.Namespace, cluster.Name)
}

log.V(4).Info("Found dependent DockerMachines", "count", len(machines.Items))

return len(machines.Items), nil
}
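
A regression test for this check could look roughly like the following sketch; the gomega assertions, the testScheme, the fake-client alias, and the way the reconciler is constructed are assumptions for illustration, not taken from the PR.

func TestDependencyCountWaitsForDockerMachines(t *testing.T) {
	g := gomega.NewWithT(t)

	cluster := &clusterv1.Cluster{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cluster", Namespace: "default"},
	}
	machine := &infrav1.DockerMachine{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-machine",
			Namespace: "default",
			// The label dependencyCount selects on.
			Labels: map[string]string{clusterv1.ClusterNameLabel: cluster.Name},
		},
	}

	// testScheme is assumed to have the clusterv1 and infrav1 types registered.
	c := fake.NewClientBuilder().WithScheme(testScheme).WithObjects(machine).Build()
	r := &ClusterBackEndReconciler{Client: c}

	// One DockerMachine still carries the cluster-name label, so the count is 1
	// and ReconcileDelete would requeue instead of tearing down the infrastructure.
	count, err := r.dependencyCount(context.Background(), cluster)
	g.Expect(err).ToNot(gomega.HaveOccurred())
	g.Expect(count).To(gomega.Equal(1))
}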
@@ -25,6 +25,7 @@ import (

"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/ptr"
@@ -566,6 +567,9 @@ func (r *MachineBackendReconciler) getUnsafeLoadBalancerConfigTemplate(ctx conte
Namespace: dockerCluster.Namespace,
}
if err := r.Get(ctx, key, cm); err != nil {
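// If the cluster is being deleted, the custom HAProxy ConfigMap may already be
// gone; treat NotFound as "no custom template" instead of failing the delete.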
if apierrors.IsNotFound(err) && !dockerCluster.DeletionTimestamp.IsZero() {
return "", nil
}
return "", errors.Wrapf(err, "failed to retrieve custom HAProxy configuration ConfigMap %s", key)
}
template, ok := cm.Data["value"]