
Cluster fails to start with single or multiple nodes, timing out with high CPU usage #4036


Description

@ckf-ateme

What happened:

The cluster failed to start, timing out after 4 minutes. CPU usage was at 100% while the container was running.
Error extract (full logs attached):

[control-plane-check] kube-apiserver is not healthy after 4m0.000261894s

What you expected to happen:

The cluster should start and should not eat my CPU.

How to reproduce it (as minimally and precisely as possible):
My config is the following, and I created the cluster with no further options (the exact command is sketched after the config):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # Make it accessible from the runner
  apiServerAddress: "0.0.0.0"
  # For simplicity, we use the default port
  apiServerPort: 6443
nodes:
  - role: control-plane
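
A minimal sketch of the reproduction command, assuming the config above is saved as kind-config.yaml (the filename is my own choice):

  kind create cluster --config kind-config.yaml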

Anything else we need to know?:
Here are the attached logs:
kind-logs.zip
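
(For reference, logs like these can be collected with kind export logs; the output directory below is just an illustrative choice:)

  kind export logs ./kind-logs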

Environment:

  • kind version: (use kind version): kind v0.30.0 go1.24.6 linux/amd64
  • Runtime info: (use docker info, podman info or nerdctl info): Docker v28.5.1
  • OS (e.g. from /etc/os-release): Ubuntu 22.04.5 LTS
  • Kubernetes version: (use kubectl version): Client Version: v1.33.1
  • Any proxies or other special environment settings?: No

Thank you for your help :)
