Description
What were you trying to accomplish?
Attempt to create a cluster using a node group with a non-amd64 instance type that DescribeInstanceTypes does not mark as a "current generation" instance type. If the instance type needs a non-default AMI (for example, because it is an ARM instance), eksctl does not select the correct AMI type and instead falls back to the standard x86_64 AMI.
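For context, the fields involved can be inspected with `aws ec2 describe-instance-types --instance-types a1.large`. The snippet below parses a sample of that response, abbreviated to the two fields relevant here (the sample JSON is illustrative, not captured live output; the real call requires AWS credentials):

```shell
# Illustrative sample of a DescribeInstanceTypes response for a1.large,
# reduced to the architecture and generation fields. A live query would be:
#   aws ec2 describe-instance-types --instance-types a1.large
sample='{"InstanceTypes":[{"InstanceType":"a1.large","CurrentGeneration":false,"ProcessorInfo":{"SupportedArchitectures":["arm64"]}}]}'

# Print instance type, current-generation flag, and architecture.
echo "$sample" | python3 -c 'import json,sys; d=json.load(sys.stdin)["InstanceTypes"][0]; print(d["InstanceType"], d["CurrentGeneration"], d["ProcessorInfo"]["SupportedArchitectures"][0])'
# prints: a1.large False arm64
```

So the instance is arm64 but not "current generation", which is the combination that trips up the AMI selection described above.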
What happened?
I tried to create a cluster using a node group with a1.large instances, to test the product I work on against this instance type. The latest version of eksctl fails to create the cluster, while older versions succeed.
How to reproduce it?
Attempt to create a cluster with an affected instance type. For this example I use a1.large, the first instance type shown in eksctl's ARM documentation:
$ eksctl create cluster --node-type=a1.large
2025-03-21 18:56:35 [ℹ] eksctl version 0.205.0
...
2025-03-21 18:56:35 [ℹ] nodegroup "ng-811a19e3" will use "" [AmazonLinux2/1.30]
...
2025-03-21 19:17:15 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-nodegroup-ng-811a19e3"
2025-03-21 19:17:15 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2025-03-21 19:17:15 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-west-2 --name=wonderful-creature-1742583949'
2025-03-21 19:17:15 [✖] waiter state transitioned to Failure
Error: failed to create cluster "wonderful-creature-1742583949"
$ aws cloudformation describe-stack-events --stack-name eksctl-wonderful-creature-1742583949-nodegroup-ng-811a19e3 | jq '.StackEvents[] | select(.ResourceStatus=="CREATE_FAILED") | .ResourceStatusReason'
"Resource handler returned message: \"[a1.large] is not a valid instance type for requested amiType AL2_x86_64 (Service: Eks, Status Code: 400, Request ID: 0a0b6cf0-930a-4c41-a2c1-00a2eb4370f1) (SDK Attempt Count: 1)\" (RequestToken: 2cca1041-5c10-6ca9-d11f-37255fb5f4f6, HandlerErrorCode: InvalidRequest)"
Logs
$ eksctl create cluster --node-type=a1.large
2025-03-21 19:05:49 [ℹ] eksctl version 0.205.0
2025-03-21 19:05:49 [ℹ] using region us-west-2
2025-03-21 19:05:50 [ℹ] skipping us-west-2b from selection because it doesn't support the following instance type(s): a1.large
2025-03-21 19:05:50 [ℹ] skipping us-west-2d from selection because it doesn't support the following instance type(s): a1.large
2025-03-21 19:05:50 [ℹ] setting availability zones to [us-west-2a us-west-2c]
2025-03-21 19:05:50 [ℹ] subnets for us-west-2a - public:192.168.0.0/19 private:192.168.64.0/19
2025-03-21 19:05:50 [ℹ] subnets for us-west-2c - public:192.168.32.0/19 private:192.168.96.0/19
2025-03-21 19:05:50 [ℹ] nodegroup "ng-811a19e3" will use "" [AmazonLinux2/1.30]
2025-03-21 19:05:50 [ℹ] using Kubernetes version 1.30
2025-03-21 19:05:50 [ℹ] creating EKS cluster "wonderful-creature-1742583949" in "us-west-2" region with managed nodes
2025-03-21 19:05:50 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2025-03-21 19:05:50 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=wonderful-creature-1742583949'
2025-03-21 19:05:50 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "wonderful-creature-1742583949" in "us-west-2"
2025-03-21 19:05:50 [ℹ] CloudWatch logging will not be enabled for cluster "wonderful-creature-1742583949" in "us-west-2"
2025-03-21 19:05:50 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=wonderful-creature-1742583949'
2025-03-21 19:05:50 [ℹ] default addons coredns, metrics-server, vpc-cni, kube-proxy were not specified, will install them as EKS addons
2025-03-21 19:05:50 [ℹ]
2 sequential tasks: { create cluster control plane "wonderful-creature-1742583949",
2 sequential sub-tasks: {
2 sequential sub-tasks: {
1 task: { create addons },
wait for control plane to become ready,
},
create managed nodegroup "ng-811a19e3",
}
}
2025-03-21 19:05:50 [ℹ] building cluster stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:05:50 [ℹ] deploying stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:06:20 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:06:51 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:07:51 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:08:51 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:09:51 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:10:52 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:11:52 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:12:52 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:13:53 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-cluster"
2025-03-21 19:13:54 [ℹ] creating addon: coredns
2025-03-21 19:13:55 [ℹ] successfully created addon: coredns
2025-03-21 19:13:55 [ℹ] creating addon: metrics-server
2025-03-21 19:13:56 [ℹ] successfully created addon: metrics-server
2025-03-21 19:13:56 [!] recommended policies were found for "vpc-cni" addon, but since OIDC is disabled on the cluster, eksctl cannot configure the requested permissions; the recommended way to provide IAM permissions for "vpc-cni" addon is via pod identity associations; after addon creation is completed, add all recommended policies to the config file, under `addon.PodIdentityAssociations`, and run `eksctl update addon`
2025-03-21 19:13:56 [ℹ] creating addon: vpc-cni
2025-03-21 19:13:56 [ℹ] successfully created addon: vpc-cni
2025-03-21 19:13:57 [ℹ] creating addon: kube-proxy
2025-03-21 19:13:57 [ℹ] successfully created addon: kube-proxy
2025-03-21 19:15:59 [ℹ] building managed nodegroup stack "eksctl-wonderful-creature-1742583949-nodegroup-ng-811a19e3"
2025-03-21 19:15:59 [ℹ] deploying stack "eksctl-wonderful-creature-1742583949-nodegroup-ng-811a19e3"
2025-03-21 19:15:59 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-nodegroup-ng-811a19e3"
2025-03-21 19:16:30 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-nodegroup-ng-811a19e3"
2025-03-21 19:17:15 [ℹ] waiting for CloudFormation stack "eksctl-wonderful-creature-1742583949-nodegroup-ng-811a19e3"
2025-03-21 19:17:15 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2025-03-21 19:17:15 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-west-2 --name=wonderful-creature-1742583949'
2025-03-21 19:17:15 [✖] waiter state transitioned to Failure
Error: failed to create cluster "wonderful-creature-1742583949"
Anything else we need to know?
N/A
Versions
$ eksctl info
eksctl version: 0.205.0
kubectl version: v1.32.2
OS: linux