
EKS Anywhere setup on bare metal: waiting for control plane to be ready while creating management cluster #10430

@maluniji

Description


Hi Team,

I am facing an issue creating an EKS Anywhere cluster on bare metal. The bootstrap cluster is created successfully, but creation then hangs waiting for the control plane to be ready while creating the workload cluster.
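For context, the cluster is being created with the standard bare metal flow, roughly like this (file names are the ones referenced below; exact flags may differ from what I ran):

# create the management cluster from the cluster spec and the hardware inventory CSV
eksctl anywhere create cluster \
  --filename management.yml \
  --hardware-csv hardware.csv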

Setup:
5 machines including the admin machine (all machines are running Ubuntu 24.04)
eksctl anywhere: 24.04
Kubernetes version: 1.31
We have built the Ubuntu node image and hosted it on a web server on the admin machine
The admin machine is able to reach the nodes and the iLO of each node
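As a quick sanity check, the hosted node image (the osImageURL used in the machine configs below) can be fetched from the admin machine; it should also be reachable from the node subnet:

# expect HTTP 200 and a non-zero Content-Length from the web server on the admin machine
curl -I http://192.168.6.60:8080/ubuntu-2204-kube-1-31.gz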

Below is my management.yml:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: management-cluster
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
        - 10.244.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 3
    endpoint:
      host: 192.168.6.39 # VIP recommended for HA control-plane
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: mgmt-cp
  workerNodeGroupConfigurations:
    - name: mgmt-workers
      count: 0
      machineGroupRef:
        kind: TinkerbellMachineConfig
        name: mgmt-wk
  datacenterRef:
    kind: TinkerbellDatacenterConfig
    name: baremetal-dc
  kubernetesVersion: "1.31"
  managementCluster:
    name: management-cluster
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
  name: baremetal-dc
spec:
  tinkerbellIP: "192.168.6.61"
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: mgmt-cp
spec:
  osFamily: ubuntu
  osImageURL: http://192.168.6.60:8080/ubuntu-2204-kube-1-31.gz
  hardwareSelector:
    role: mgmt-cp
  users:
    - name: admin-ltim
      sshAuthorizedKeys:
        - "ssh-rsa
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: mgmt-wk
spec:
  osFamily: ubuntu
  osImageURL: http://192.168.6.60:8080/ubuntu-2204-kube-1-31.gz
  hardwareSelector:
    role: mgmt-wk
  users:
    - name: admin-ltim
      sshAuthorizedKeys:
        - "ssh-rsa xxxxxxxxxx

Hardware.csv
hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
comostest-cp1,192.168.5.46,admin,xxxx,mac:75:7b:35:28:mac,192.168.6.31,255.255.255.0,192.168.6.1,8.8.8.8|8.8.4.4,role=mgmt-cp,/dev/sda
comostest-cp2,192.168.5.53,admin,xxxx,mac:d3:44:7d:ba:mac,192.168.6.32,255.255.255.0,192.168.6.1,8.8.8.8|8.8.4.4,role=mgmt-cp,/dev/sda
comostest-cp3,192.168.5.60,admin,xxxx,:mac:77:1d:6f:mac,192.168.6.34,255.255.255.0,192.168.6.1,8.8.8.8|8.8.4.4,role=mgmt-cp,/dev/sda

Note: MAC addresses and passwords are masked here.
Please find the logs for the issue below:

root@comostest:/home/admin-ltim# kubectl get hardware -A --show-labels
NAMESPACE NAME STATE LABELS
eksa-system comostest-cp1 role=mgmt-cp,v1alpha1.tinkerbell.org/ownerName=management-cluster-qscpj,v1alpha1.tinkerbell.org/ownerNamespace=eksa-system
eksa-system comostest-cp2 role=mgmt-cp
eksa-system comostest-cp3 role=mgmt-cp
root@comostest:/home/admin-ltim# kubectl get workflows -A
NAMESPACE NAME TEMPLATE STATE CURRENT-ACTION TEMPLATE-RENDERING
eksa-system management-cluster-qscpj management-cluster-qscpj STATE_PENDING successful
root@comostest:/home/admin-ltim#
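The workflow for comostest-cp1 stays in STATE_PENDING, so the node never starts executing the provisioning actions. For reference, these are the objects I am inspecting to see why it does not progress (names taken from the output above; Rufio drives power/boot via its own CRDs):

# events and template details for the stuck workflow
kubectl describe workflow management-cluster-qscpj -n eksa-system

# Rufio BMC machine state and any power/boot jobs created for the node
kubectl get machines.bmc.tinkerbell.org -n eksa-system
kubectl get jobs.bmc.tinkerbell.org -n eksa-system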

root@comostest:/home/admin-ltim# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-5d65f7f465-m7qv2 1/1 Running 0 27m
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-fdfbb9698-nqwld 1/1 Running 0 27m
capi-system capi-controller-manager-cc49654f6-qvnk9 1/1 Running 0 27m
capt-system capt-controller-manager-85f69bf658-jrk7p 1/1 Running 0 27m
cert-manager cert-manager-6d7b58bd85-csmdh 1/1 Running 0 27m
cert-manager cert-manager-cainjector-6569f47589-vzmdn 1/1 Running 0 27m
cert-manager cert-manager-webhook-d79b9d6cd-5jmqj 1/1 Running 0 27m
eksa-system eksa-controller-manager-5c6d6484fd-h4xlz 1/1 Running 0 26m
eksa-system hegel-fbfb7d5c7-gzq6x 1/1 Running 0 27m
eksa-system rufio-5454bcd45b-lnvx7 1/1 Running 0 27m
eksa-system tink-controller-7dcfd467c7-z2xkf 1/1 Running 0 27m
eksa-system tink-server-8956bfb57-4l6vb 1/1 Running 0 27m
eksa-system tink-stack-978f4bf95-jr6bh 1/1 Running 0 27m
etcdadm-bootstrap-provider-system etcdadm-bootstrap-provider-controller-manager-5798bb484-hdv7b 1/1 Running 0 27m
etcdadm-controller-system etcdadm-controller-controller-manager-6cc65ff477-9hs9g 1/1 Running 0 27m
kube-system coredns-865959b65f-qw4jj 1/1 Running 0 28m
kube-system coredns-865959b65f-sb2sg 1/1 Running 0 28m
kube-system etcd-management-cluster-eks-a-cluster-control-plane 1/1 Running 0 28m
kube-system kindnet-wvdz7 1/1 Running 0 28m
kube-system kube-apiserver-management-cluster-eks-a-cluster-control-plane 1/1 Running 0 28m
kube-system kube-controller-manager-management-cluster-eks-a-cluster-control-plane 1/1 Running 0 28m
kube-system kube-proxy-chx69 1/1 Running 0 28m
kube-system kube-scheduler-management-cluster-eks-a-cluster-control-plane 1/1 Running 0 28m
local-path-storage local-path-provisioner-84bf446b5b-7zk47 1/1 Running 0 28m

root@comostest:/home/admin-ltim# kubectl logs hegel-fbfb7d5c7-gzq6x -n eksa-system
{"level":"info","controller":"machine","controllerGroup":"bmc.tinkerbell.org","controllerKind":"Machine","Machine":{"name":"bmc-comostest-cp1","namespace":"eksa-system"},"namespace":"eksa-system","name":"bmc-comostest-cp1","reconcileID":"f9cf7a0f-b1fb-4ec5-a570-53e4f881dc5d","host":"192.168.5.46","username":"admin","v":0,"logger":"controllers/Machine","providersAttempted":["gofish","ipmitool","asrockrack","IntelAMT","dell","supermicro","openbmc"],"successfulProvider":["ipmitool","gofish"],"caller":"github.com/tinkerbell/rufio/controller/client.go:48","time":"2025-12-11T06:58:41Z","message":"Connected to BMC"}
{"level":"info","controller":"machine","controllerGroup":"bmc.tinkerbell.org","controllerKind":"Machine","Machine":{"name":"bmc-comostest-cp1","namespace":"eksa-system"},"namespace":"eksa-system","name":"bmc-comostest-cp1","reconcileID":"f9cf7a0f-b1fb-4ec5-a570-53e4f881dc5d","v":0,"logger":"controllers/Machine","host":"192.168.5.46","successfulCloseConns":["gofish","ipmitool"],"providersAttempted":["gofish","ipmitool"],"successfulProvider":"","caller":"github.com/tinkerbell/rufio/controller/machine.go:143","time":"2025-12-11T06:58:43Z","message":"BMC connection closed"}
root@comostest:/home/admin-ltim#
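The log above (which appears to be Rufio controller output) shows the BMC connection to 192.168.5.46 succeeding via ipmitool/gofish. The BMC can also be queried directly from the admin machine to confirm power state and the PXE boot device (credentials masked, same as in Hardware.csv):

# chassis power state of the first control plane node's BMC
ipmitool -I lanplus -H 192.168.5.46 -U admin -P 'xxxx' chassis status
# current boot flags (boot parameter 5), to confirm the node is set to network boot
ipmitool -I lanplus -H 192.168.5.46 -U admin -P 'xxxx' chassis bootparam get 5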

root@comostest:/home/admin-ltim# kubectl logs eksa-controller-manager-5c6d6484fd-h4xlz -n eksa-system
":"tinkerbell","phase":"checkControlPlaneReady","v":0}
{"ts":1765436608795.7869,"caller":"controllers/cluster_controller.go:536","msg":"Updating cluster status","controller":"cluster","controllerGroup":"anywhere.eks.amazonaws.com","controllerKind":"Cluster","Cluster":{"name":"management-cluster","namespace":"default"},"namespace":"default","name":"management-cluster","reconcileID":"dd4f6539-e450-4d99-896b-da81e1c59734","v":0}
{"ts":1765436609619.9866,"caller":"clustercache/cluster_accessor.go:262","msg":"Connect failed","controller":"clustercache","controllerGroup":"cluster.x-k8s.io","controllerKind":"Cluster","Cluster":{"name":"management-cluster","namespace":"eksa-system"},"namespace":"eksa-system","name":"management-cluster","reconcileID":"df77e7ea-24f2-4542-9099-742563aed839","err":"error creating HTTP client and mapper: cluster is not reachable: Get "https://192.168.6.39:6443/?timeout=5s\": dial tcp 192.168.6.39:6443: connect: no route to host","errVerbose":"Get "https://192.168.6.39:6443/?timeout=5s\": dial tcp 192.168.6.39:6443: connect: no route to host\ncluster is not reachable\nsigs.k8s.io/cluster-api/controllers/clustercache.createHTTPClientAndMapper\n\tsigs.k8s.io/[email protected]/controllers/clustercache/cluster_accessor_client.go:187\nsigs.k8s.io/cluster-api/controllers/clustercache.(*clusterAccessor).createConnection\n\tsigs.k8s.io/[email protected]/controllers/clustercache/cluster_accessor_client.go:62\nsigs.k8s.io/cluster-api/controllers/clustercache.(*clusterAccessor).Connect\n\tsigs.k8s.io/[email protected]/controllers/clustercache/cluster_accessor.go:255\nsigs.k8s.io/cluster-api/controllers/clustercache.(*clusterCache).Reconcile\n\tsigs.k8s.io/[email protected]/controllers/clustercache/cluster_cache.go:478\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\tsigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\tsigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:340\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\tsigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:300\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.1\n\tsigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:202\nruntime.goexit\n\truntime/asm_amd64.s:1700\nerror creating HTTP client and mapper\nsigs.k8s.io/cluster-api/controllers/clustercache.(*clusterAccessor).createConnection\n\tsigs.k8s.io/[email protected]/controllers/clustercache/cluster_accessor_client.go:64\nsigs.k8s.io/cluster-api/controllers/clustercache.(*clusterAccessor).Connect\n\tsigs.k8s.io/[email protected]/controllers/clustercache/cluster_accessor.go:255\nsigs.k8s.io/cluster-api/controllers/clustercache.(*clusterCache).Reconcile\n\tsigs.k8s.io/[email protected]/controllers/clustercache/cluster_cache.go:478\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\tsigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\tsigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:340\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\tsigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:300\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.1\n\tsigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:202\nruntime.goexit\n\truntime/asm_amd64.s:1700"}
{"ts":1765436613797.7302,"caller":"controllers/cluster_controller.go:293","msg":"Reconciling cluster","controller":"cluster","controllerGroup":"anywhere.eks.amazonaws.com","controllerKind":"Cluster","Cluster":{"name":"management-cluster","namespace":"default"},"namespace":"default","name":"management-cluster","reconcileID":"b5ff1c49-2c8e-41b5-81ab-2d6efb851288","v":0}
{"ts":1765436613898.8662,"caller":"clusters/ipvalidator.go:43","msg":"CAPI cluster already exists, skipping control plane IP validation","controller":"cluster","controllerGroup":"anywhere.eks.amazonaws.com","controllerKind":"Cluster","Cluster":{"name":"management-cluster","namespace":"default"},"namespace":"default","name":"management-cluster","reconcileID":"b5ff1c49-2c8e-41b5-81ab-2d6efb851288","provider":"tinkerbell","v":0}
