Description
What problem are you trying to solve?
I use Cilium with IPAM and kube-proxy replacement, and it happens that pods get scheduled on nodes that have no more IPs to hand out.
I tried using the amazon-eks-ami NodeConfig's maxPodsExpression in my EC2NodeClass:
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: al2023-default
spec:
  # Required, resolves a default ami and userdata
  amiFamily: AL2023
  amiSelectorTerms:
    - alias: al2023@latest
  # Required, discovers subnets to attach to instances
  subnetSelectorTerms:
    - tags:
        "kubernetes.io/cluster/eks-acme-dev-euw1-01": "owned"
        "kubernetes.io/role/internal-elb": "1"
  # Optional, overrides autogenerated userdata with a merge semantic
  userData: |
    Content-Type: multipart/mixed; boundary="MIMEBOUNDARY"
    MIME-Version: 1.0

    --MIMEBOUNDARY
    Content-Transfer-Encoding: 7bit
    Content-Type: application/node.eks.aws
    Mime-Version: 1.0

    ---
    apiVersion: node.eks.aws/v1alpha1
    kind: NodeConfig
    spec:
      cluster:
        cidr: 172.20.0.0/16
        name: eks-acme-dev-euw1-01
      kubelet:
        maxPodsExpression: "default_enis * (ips_per_eni - 1)"
        config:
          clusterDNS:
            - 172.20.0.10
    --MIMEBOUNDARY--
  # Optional, configures detailed monitoring for the instance
  detailedMonitoring: true

The expression produces the expected maxPods in /etc/kubernetes/kubelet/config.json, but not in /etc/kubernetes/kubelet/config.json.d/40-nodeadm.conf:
[ec2-user@ip-10-100-155-89 ~]$ cat /etc/kubernetes/kubelet/config.json | jq .maxPods
56
[ec2-user@ip-10-100-155-89 ~]$ cat /etc/kubernetes/kubelet/config.json.d/40-nodeadm.conf | jq .maxPods
58
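The difference of 2 looks consistent with Karpenter deriving its value from the usual EKS max-pods formula, ENIs * (IPs per ENI - 1) + 2, while the expression above intentionally drops the + 2 (for example, an instance type with 4 ENIs and 15 IPs per ENI would give 4 * (15 - 1) = 56 versus 58); the exact instance type here is an assumption on my part. Since the drop-in directory is merged on top of the base config, the node presumably ends up registering with 58. A quick way to confirm which value the kubelet actually applied (a sketch; NODE is a placeholder for the node name, and reading the nodes/proxy subresource requires the corresponding RBAC permission):

NODE=ip-10-100-155-89.eu-west-1.compute.internal
# Effective kubelet configuration as the kubelet itself reports it:
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/configz" | jq .kubeletconfig.maxPods
# Pod capacity as registered with the API server:
kubectl get node "${NODE}" -o jsonpath='{.status.capacity.pods}'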
The resulting userData for this EC2NodeClass (irrelevant parts elided):
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="//"

--//
Content-Type: application/node.eks.aws

---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    cidr: 172.20.0.0/16
    name: eks-acme-staging-euc1-01
  kubelet:
    maxPodsExpression: "default_enis * (ips_per_eni - 1)"
    config:
      clusterDNS:
        - 172.20.0.10
--//
Content-Type: application/node.eks.aws

# Karpenter Generated NodeConfig
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
metadata: {}
spec:
  cluster:
    ....
    cidr: 172.20.0.0/16
    name: eks-acme-staging-euc1-01
  containerd: {}
  instance:
    localStorage: {}
  kubelet:
    config:
      clusterDNS:
        - 172.20.0.10
      maxPods: 58
      registerWithTaints:
        - effect: NoExecute
          key: karpenter.sh/unregistered
        - effect: NoExecute
          key: node.cilium.io/agent-not-ready
          value: "true"
        - effect: NoExecute
          key: ebs.csi.aws.com/agent-not-ready
    flags:
      - --node-labels="karpenter.k8s.aws/ec2nodeclass=al2023-default,karpenter.sh/capacity-type=spot,karpenter.sh/do-not-sync-taints=true,karpenter.sh/nodepool=default-arm64"
--//--
I guess I would need the "# Karpenter Generated NodeConfig" part to not set maxPods in .spec.kubelet.config when the user-supplied NodeConfig already provides maxPodsExpression, something along the lines of the sketch below.
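Purely as an illustration of the desired outcome (not something Karpenter emits today), the generated part would then omit maxPods and leave it to be derived from maxPodsExpression in the base config; elided fields are placeholders:

# Hypothetical Karpenter Generated NodeConfig (sketch, maxPods omitted)
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
metadata: {}
spec:
  cluster:
    cidr: 172.20.0.0/16
    name: eks-acme-staging-euc1-01
  kubelet:
    config:
      clusterDNS:
        - 172.20.0.10
      registerWithTaints:
        - effect: NoExecute
          key: karpenter.sh/unregistered
    flags:
      - --node-labels="..."   # same labels as above, elided here

Presumably Karpenter would still need to evaluate an equivalent expression internally so that its scheduling and bin-packing assumptions match the pod capacity the node actually registers with.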
How important is this feature to you?
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment