Description
I am using EKS Auto Mode with the general-purpose and system node pools. I created a copy of the general-purpose node pool with a different set of values for the topology.kubernetes.io/zone key and assigned it a weight of 100.
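For reference, the copied node pool looks roughly like the sketch below. This is not the exact manifest; the NodePool name, the zone values, and the nodeClassRef are placeholders based on the EKS Auto Mode defaults.

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose-copy        # placeholder name for the copied pool
spec:
  weight: 100                       # higher weight than the built-in general-purpose pool
  template:
    spec:
      nodeClassRef:                 # assumed Auto Mode default NodeClass
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["us-west-2b"]    # placeholder; a different zone set than general-purpose
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
```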
I have a Deployment with the following Spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: inflate
      terminationGracePeriodSeconds: 0
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: "1000m"
              memory: "128Mi"

I tainted the nodes provisioned by the general-purpose node pool in a particular AZ with a NoExecute taint, which automatically evicts the pods running on those nodes. Instead of using the higher-weighted NodePool, the pods are scheduled onto another node provisioned by the general-purpose NodePool. How do Karpenter and/or the Kubernetes scheduler determine which node pool should be used in this scenario? It seems like the higher-weighted node pool should be used, especially since the topologySpreadConstraint is configured to "schedule anyway" when the skew requirements can't be met. Interestingly, if I remove the topologySpreadConstraint from the Deployment, Karpenter provisions a node from the general-purpose copy node pool, and the pods are scheduled onto that node.
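For reference, the taint on the general-purpose nodes in that AZ was along these lines. The key and value below are placeholders; the relevant part is the NoExecute effect, which evicts any pods that don't tolerate it.

```yaml
# Node spec fragment (placeholder key/value); NoExecute evicts non-tolerating pods
spec:
  taints:
    - key: drain-test
      value: "true"
      effect: NoExecute
```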