tl;dr: See the blog post that accompanies this project: 10,000 Kubernetes Pods for 10,000 Subscribers.
This repository contains automation to build a large Kubernetes cluster in AWS and run 10,000 Pods on that cluster. It does so two ways, because I realized the first way wouldn't work without running over 1,000 vCPUs in my brand new AWS account (and AWS support usually doesn't look kindly on people who open a new account and immediately ask for insane capacity increases!).
- `attempt-one-eks` is the first attempt: build an EKS cluster with 100 `t3.micro` nodes; later updated to use 14 `m5.16xlarge` nodes, which worked but required an insane amount of computing power that would cost over $30,000/month!
- `attempt-two-k3s` is the second attempt: build a K3s cluster with one `c5.2xlarge` master and 100 `c5.large` nodes. (I almost got it working with `t3.micro` nodes, but they started dying when I deployed 100 Pods to each node...)
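For illustration, getting 10,000 Pods onto a cluster like the ones above boils down to a single Deployment with a huge replica count and tiny resource requests, so roughly 100 Pods can be scheduled per node. This manifest is a hypothetical sketch (names, image, and requests are mine, not files from this repo):

```yaml
# Hypothetical sketch: ask the scheduler for 10,000 lightweight Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-swarm
spec:
  replicas: 10000
  selector:
    matchLabels:
      app: pod-swarm
  template:
    metadata:
      labels:
        app: pod-swarm
    spec:
      containers:
        - name: pause
          # The pause image does nothing, which keeps CPU/memory use minimal.
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: 5m      # tiny requests so ~100 Pods fit on each node
              memory: 8Mi
```

With requests this small, the scheduler will pack Pods onto nodes until it hits other per-node limits (kubelet `max-pods`, CNI IP limits), which is exactly where both attempts ran into trouble.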
In the end, I found out that burstable `t3` instances just can't handle massive numbers of Pods, no matter what; I ran into networking and burst CPU limits. And EKS has some annoying limitations with its current VPC CNI networking, though those could be overcome if you take the time to swap in a different networking solution.
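The VPC CNI limitation comes from the fact that every Pod gets its own VPC IP address, so the maximum Pods per node is capped by the instance type's ENI count and IPs-per-ENI limit. A quick sketch of the math behind AWS's published `eni-max-pods` values:

```python
def max_pods(enis: int, ips_per_eni: int) -> int:
    """Max Pods per node with the default AWS VPC CNI.

    Each ENI's first IP is reserved for the node itself, and two
    extra slots cover host-networked Pods (e.g. kube-proxy, aws-node).
    """
    return enis * (ips_per_eni - 1) + 2

# t3.micro: 2 ENIs x 2 IPs each -> only 4 Pods per node
print(max_pods(2, 2))    # 4
# m5.16xlarge: 15 ENIs x 50 IPs each -> 737 Pods per node
print(max_pods(15, 50))  # 737
```

That's why 100 `t3.micro` nodes couldn't come close to 10,000 Pods on EKS, and why the first attempt was updated to huge `m5.16xlarge` nodes instead.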
If you're interested in automating Kubernetes with Ansible, I have the perfect book for you: Ansible for Kubernetes.
