# Node Pool

Node Pool allows you to bring up additional pools of worker nodes, each with a separate configuration, including:

* Instance Type
* Storage Type/Size/IOPS
* Instance Profile
* Additional, User-Provided Security Group(s)
* Spot Price
* AWS service to manage your EC2 instances: [Auto Scaling](http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html) or [Spot Fleet](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html)
* [Node labels](http://kubernetes.io/docs/user-guide/node-selection/)
* [Taints](https://github.com/kubernetes/kubernetes/issues/17190)

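Node labels in particular let you steer workloads onto a specific pool. As a minimal sketch (the `pool: high-memory` label is hypothetical; substitute whatever labels you assign to your pool), a pod can target a labeled pool via `nodeSelector`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labeled-pool-example
spec:
  # Schedule this pod only onto nodes carrying the (hypothetical)
  # label configured for the node pool.
  nodeSelector:
    pool: high-memory
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```
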
## Deploying a Multi-AZ cluster with cluster-autoscaler support with Node Pools

Edit the `cluster.yaml` file to decrease `workerCount`, the number of worker nodes in the "main" cluster, down to zero:

```yaml
workerCount: 0
subnets:
  - availabilityZone: us-west-1a
    instanceCIDR: "10.0.0.0/24"
```

Update the main cluster to apply the changes made in `cluster.yaml`:

```
$ kube-aws update \
  --s3-uri s3://<my-bucket>/<optional-prefix>
```

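If you'd like to sanity-check your changes before applying them, kube-aws also provides a `validate` command; the exact flags may vary between kube-aws versions:

```
$ kube-aws validate \
  --s3-uri s3://<my-bucket>/<optional-prefix>
```
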
Create two node pools, each in a different subnet and availability zone:

```
$ kube-aws node-pools init --node-pool-name first-pool-in-1a \
  --availability-zone us-west-1a \
  --key-name ${KUBE_AWS_KEY_NAME} \
  --kms-key-arn ${KUBE_AWS_KMS_KEY_ARN}

$ kube-aws node-pools init --node-pool-name second-pool-in-1b \
  --availability-zone us-west-1b \
  --key-name ${KUBE_AWS_KEY_NAME} \
  --kms-key-arn ${KUBE_AWS_KMS_KEY_ARN}
```

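At this point, your working directory should contain one subdirectory per node pool, each with its own `cluster.yaml` (layout shown for illustration; kube-aws may generate additional files):

```
$ tree node-pools
node-pools
├── first-pool-in-1a
│   └── cluster.yaml
└── second-pool-in-1b
    └── cluster.yaml
```
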
Edit the `cluster.yaml` for the first zone:

```
$ $EDITOR node-pools/first-pool-in-1a/cluster.yaml
```

```yaml
workerCount: 1
subnets:
  - availabilityZone: us-west-1a
    instanceCIDR: "10.0.1.0/24"
```

Edit the `cluster.yaml` for the second zone:

```
$ $EDITOR node-pools/second-pool-in-1b/cluster.yaml
```

```yaml
workerCount: 1
subnets:
  - availabilityZone: us-west-1b
    instanceCIDR: "10.0.2.0/24"
```

Launch the node pools:

```
$ kube-aws node-pools up --node-pool-name first-pool-in-1a \
  --s3-uri s3://<my-bucket>/<optional-prefix>

$ kube-aws node-pools up --node-pool-name second-pool-in-1b \
  --s3-uri s3://<my-bucket>/<optional-prefix>
```

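Once both stacks are up, you can confirm the new workers registered in the expected zones; `failure-domain.beta.kubernetes.io/zone` is the standard zone label Kubernetes applies to nodes on AWS:

```
$ kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
```
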
Deployment of cluster-autoscaler is currently out of the scope of this documentation.
Please read [cluster-autoscaler's documentation](https://github.com/kubernetes/contrib/blob/master/cluster-autoscaler/cloudprovider/aws/README.md) for instructions.

## Customizing min/max size of the auto scaling group

If you've chosen to power the worker nodes in a node pool with an auto scaling group, you can customize `MinSize`, `MaxSize`, and `MinInstancesInService` in `cluster.yaml`.

Please read [the AWS documentation](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html#aws-properties-as-group-prop) for more information on these settings for ASGs:

```yaml
worker:
  # Auto Scaling Group definition for workers. If only `workerCount` is specified,
  # min and max will be set to that value and `rollingUpdateMinInstancesInService`
  # will be one less.
  autoScalingGroup:
    minSize: 1
    maxSize: 3
    rollingUpdateMinInstancesInService: 2
```

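Under the hood, these settings are rendered into the CloudFormation definition of the worker Auto Scaling Group. Conceptually, the values above translate into something like the following (an illustrative sketch, not the exact template kube-aws generates):

```yaml
Resources:
  Workers:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: 1
      MaxSize: 3
    UpdatePolicy:
      AutoScalingRollingUpdate:
        # Keep at least this many workers serving during a rolling update.
        MinInstancesInService: 2
```
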
See [the detailed comments in `cluster.yaml`](https://github.com/coreos/kube-aws/blob/master/nodepool/config/templates/cluster.yaml) for further information.

## Deploying a node pool powered by Spot Fleet

Utilizing Spot Fleet can dramatically reduce the cost of the EC2 instances powering your Kubernetes worker nodes while maintaining reasonable availability.
AWS advertises savings of up to 90%, though actual savings vary by instance type and by other users' bids.

Spot Fleet support may change in backward-incompatible ways, as it is still an experimental feature.
Please use this feature at your own risk.
However, we'd greatly appreciate your feedback, because it does accelerate improvements in this area!

This feature assumes you already have an IAM role with an ARN like "arn:aws:iam::youraccountid:role/aws-ec2-spot-fleet-role" in your AWS account.
This role typically exists once you've visited "Spot Requests" in the EC2 Dashboard in the AWS console at least once.
See [the AWS documentation describing prerequisites for Spot Fleet](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) for details.

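You can check whether the role already exists with the AWS CLI (the role name below is taken from the example ARN above; yours may differ):

```
$ aws iam get-role --role-name aws-ec2-spot-fleet-role
```
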
To add a node pool powered by Spot Fleet, edit the node pool's `cluster.yaml`:

```yaml
worker:
  spotFleet:
    targetCapacity: 3
```

To customize your launch specifications to diversify your pool among instance types other than the defaults, edit `cluster.yaml`:

```yaml
worker:
  spotFleet:
    targetCapacity: 5
    launchSpecifications:
    - weightedCapacity: 1
      instanceType: m3.medium
    - weightedCapacity: 2
      instanceType: m3.large
    - weightedCapacity: 2
      instanceType: m4.large
```

This configuration would normally result in Spot Fleet bringing up 3 instances to meet your target capacity of 5 units:

* 1x m3.medium = 1 unit
* 1x m3.large = 2 units
* 1x m4.large = 2 units

Together these provide 1 + 2 + 2 = 5 units of capacity. This spread across instance types is achieved by Spot Fleet's `diversified` allocation strategy.
Please read [the AWS documentation describing the Spot Fleet allocation strategy](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html#spot-fleet-allocation-strategy) for more details.

Please also see [the detailed comments in `cluster.yaml`](https://github.com/coreos/kube-aws/blob/master/nodepool/config/templates/cluster.yaml) and [the GitHub issue summarizing the initial implementation](https://github.com/coreos/kube-aws/issues/112) of this feature for further information.

When you are done with your cluster, [destroy your cluster][aws-step-6].

[aws-step-1]: kubernetes-on-aws.md
[aws-step-2]: kubernetes-on-aws-render.md
[aws-step-3]: kubernetes-on-aws-launch.md
[aws-step-4]: kube-aws-cluster-updates.md
[aws-step-5]: kubernetes-on-aws-node-pool.md
[aws-step-6]: kubernetes-on-aws-destroy.md