Commit 14c4033

Merge pull request #162 from mumoshu/0.9.2-doc
Documentation for v0.9.2
2 parents: 711fb97 + e43c5c2

10 files changed: +454 -191 lines

Documentation/kube-aws-cluster-updates.md (+8 -4)

@@ -1,4 +1,4 @@
-# kube-aws cluster updates
+# Updating the Kubernetes cluster
 
 ## Types of cluster update
 There are two distinct categories of cluster update.
@@ -35,7 +35,11 @@ Fortunately, CoreOS update engine will take care of keeping the members of the e
 
 In the (near) future, etcd will be hosted on Kubernetes and this problem will no longer be relevant. Rather than concocting an overly complex band-aid, we've decided to "punt" on this issue for the time being.
 
+Once you have successfully updated your cluster, you are ready to [add node pools to your cluster][aws-step-5].
 
-
-
-
+[aws-step-1]: kubernetes-on-aws.md
+[aws-step-2]: kubernetes-on-aws-render.md
+[aws-step-3]: kubernetes-on-aws-launch.md
+[aws-step-4]: kube-aws-cluster-updates.md
+[aws-step-5]: kubernetes-on-aws-node-pool.md
+[aws-step-6]: kubernetes-on-aws-destroy.md
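For reference, applying changes made to `cluster.yaml` to the running cluster is done with `kube-aws update`; a minimal sketch, assuming the assets were uploaded to S3 as in the node pool walkthrough later in this commit:

```sh
# Re-render the stack assets and apply cluster.yaml changes to the existing CloudFormation stack.
$ kube-aws update \
  --s3-uri s3://<my-bucket>/<optional-prefix>
```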
New file (+8 lines)

## Destroy the cluster

When you are done with your cluster, run `kube-aws node-pools destroy` followed by `kube-aws destroy`, and all cluster components will be destroyed.

If you created any node pools, you must delete them first by running `kube-aws node-pools destroy`; otherwise `kube-aws destroy` will fail because the node pools still reference AWS resources managed by the main cluster.

If you created any Kubernetes Services of type `LoadBalancer`, you must delete these first, as the CloudFormation stack cannot be fully destroyed if any externally-managed resources still exist.
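For reference, a minimal sketch of the tear-down order; it assumes a single node pool named `first-pool-in-1a`, a Service name that is only an example, and that `kube-aws node-pools destroy` accepts `--node-pool-name` the same way `init` and `up` do:

```sh
# Delete any Services of type LoadBalancer first (the name below is only an example).
$ kubectl --kubeconfig=kubeconfig delete service my-loadbalanced-service

# Destroy node pools next so they no longer reference resources owned by the main cluster.
$ kube-aws node-pools destroy --node-pool-name first-pool-in-1a

# Finally, destroy the main cluster.
$ kube-aws destroy
```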

Documentation/kubernetes-on-aws-launch.md (+6 -6)

@@ -7,7 +7,7 @@ This is the [third step of running Kubernetes on AWS][aws-step-1]. We're ready t
 Now for the exciting part, creating your cluster:
 
 ```sh
-$ kube-aws up
+$ kube-aws up --s3-uri s3://<your-bucket-name>/<prefix>
 ```
 
 **NOTE**: It can take some time after `kube-aws up` completes before the cluster is available. When the cluster is first being launched, it must download all container images for the cluster components (Kubernetes, dns, heapster, etc). Depending on the speed of your connection, it can take a few minutes before the Kubernetes api-server is available.
@@ -18,7 +18,7 @@ If you configured Route 53 settings in your configuration above via `createRecor
 
 Otherwise, navigate to the DNS registrar hosting the zone for the provided external DNS name. Ensure a single A record exists, routing the value of `externalDNSName` defined in `cluster.yaml` to the externally-accessible IP of the master node instance.
 
-You can invoke `kube-aws status` to get the cluster API IP address after cluster creation, if necessary. This command can take a while.
+You can invoke `kube-aws status` to get the cluster API endpoint after cluster creation, if necessary. This command can take a while.
 
 ## Access the cluster
 
@@ -59,11 +59,11 @@ If you want to share, audit or back up your stack, use the export flag:
 $ kube-aws up --export
 ```
 
-## Destroy the cluster
-
-When you are done with your cluster, simply run `kube-aws destroy` and all cluster components will be destroyed.
-If you created any Kubernetes Services of type `LoadBalancer`, you must delete these first, as the CloudFormation cannot be fully destroyed if any externally-managed resources still exist.
+Once you have successfully launched your cluster, you are ready to [update your cluster][aws-step-4].
 
 [aws-step-1]: kubernetes-on-aws.md
 [aws-step-2]: kubernetes-on-aws-render.md
 [aws-step-3]: kubernetes-on-aws-launch.md
+[aws-step-4]: kube-aws-cluster-updates.md
+[aws-step-5]: kubernetes-on-aws-node-pool.md
+[aws-step-6]: kubernetes-on-aws-destroy.md
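As a quick post-launch sanity check, a minimal sketch that assumes you run it from the asset directory containing the `kubeconfig` written by `kube-aws render`:

```sh
# Print cluster status, including the controller endpoint; this can take a while.
$ kube-aws status

# Confirm that the worker nodes have registered with the API server.
$ kubectl --kubeconfig=kubeconfig get nodes
```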
New file (+20 lines)

# Known Limitations

## hostPort doesn't work

This isn't really a kube-aws issue but rather a Kubernetes and/or CNI issue.
In any case, `hostPort` doesn't work when `hostNetwork: false`.

If you want to deploy `nginx-ingress-controller`, which requires `hostPort`, just set `hostNetwork: true`:

```yaml
spec:
  hostNetwork: true
  containers:
  - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
    name: nginx-ingress-lb
```

Relevant kube-aws issue: [does hostPort not work on kube-aws/CoreOS?](https://github.com/coreos/kube-aws/issues/91)

See [the upstream issue](https://github.com/kubernetes/kubernetes/issues/23920#issuecomment-254918942) for more information.
New file (+156 lines)

# Node Pool

Node Pool allows you to bring up additional pools of worker nodes, each with a separate configuration including:

* Instance Type
* Storage Type/Size/IOPS
* Instance Profile
* Additional, User-Provided Security Group(s)
* Spot Price
* AWS service to manage your EC2 instances: [Auto Scaling](http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html) or [Spot Fleet](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html)
* [Node labels](http://kubernetes.io/docs/user-guide/node-selection/)
* [Taints](https://github.com/kubernetes/kubernetes/issues/17190)

## Deploying a Multi-AZ cluster with cluster-autoscaler support using Node Pools

Edit the `cluster.yaml` file to decrease `workerCount`, which is meant to be the number of worker nodes in the "main" cluster, down to zero:

```yaml
workerCount: 0
subnets:
  - availabilityZone: us-west-1a
    instanceCIDR: "10.0.0.0/24"
```

Update the main cluster to catch up with the changes made in `cluster.yaml`:

```sh
$ kube-aws update \
  --s3-uri s3://<my-bucket>/<optional-prefix>
```

Create two node pools, each in a different subnet and availability zone:

```sh
$ kube-aws node-pools init --node-pool-name first-pool-in-1a \
  --availability-zone us-west-1a \
  --key-name ${KUBE_AWS_KEY_NAME} \
  --kms-key-arn ${KUBE_AWS_KMS_KEY_ARN}

$ kube-aws node-pools init --node-pool-name second-pool-in-1b \
  --availability-zone us-west-1b \
  --key-name ${KUBE_AWS_KEY_NAME} \
  --kms-key-arn ${KUBE_AWS_KMS_KEY_ARN}
```

Edit the `cluster.yaml` for the first zone:

```sh
$ $EDITOR node-pools/first-pool-in-1a/cluster.yaml
```

```yaml
workerCount: 1
subnets:
  - availabilityZone: us-west-1a
    instanceCIDR: "10.0.1.0/24"
```

Edit the `cluster.yaml` for the second zone:

```sh
$ $EDITOR node-pools/second-pool-in-1b/cluster.yaml
```

```yaml
workerCount: 1
subnets:
  - availabilityZone: us-west-1b
    instanceCIDR: "10.0.2.0/24"
```

Launch the node pools:

```sh
$ kube-aws node-pools up --node-pool-name first-pool-in-1a \
  --s3-uri s3://<my-bucket>/<optional-prefix>

$ kube-aws node-pools up --node-pool-name second-pool-in-1b \
  --s3-uri s3://<my-bucket>/<optional-prefix>
```

Deployment of cluster-autoscaler is currently out of scope of this documentation.
Please read [cluster-autoscaler's documentation](https://github.com/kubernetes/contrib/blob/master/cluster-autoscaler/cloudprovider/aws/README.md) for instructions on it.
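Once both pools are up, a quick way to confirm they joined the cluster is to list the nodes along with their zone labels; this is a sketch that assumes the `kubeconfig` generated for the main cluster and the `failure-domain.beta.kubernetes.io/zone` label that Kubernetes of this era applies to AWS nodes:

```sh
# Show worker nodes and the availability zone each one landed in.
$ kubectl --kubeconfig=kubeconfig get nodes \
    -L failure-domain.beta.kubernetes.io/zone
```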
## Customizing min/max size of the auto scaling group

If you've chosen to power the worker nodes in a node pool with an auto scaling group, you can customize `MinSize`, `MaxSize` and `MinInstancesInService` in `cluster.yaml`.

Please read [the AWS documentation](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html#aws-properties-as-group-prop) for more information on `MinSize`, `MaxSize`, `MinInstancesInService` for ASGs.

```yaml
worker:
  # Auto Scaling Group definition for workers. If only `workerCount` is specified, min and max will be set to that value and `rollingUpdateMinInstancesInService` will be one less.
  autoScalingGroup:
    minSize: 1
    maxSize: 3
    rollingUpdateMinInstancesInService: 2
```

See [the detailed comments in `cluster.yaml`](https://github.com/coreos/kube-aws/blob/master/nodepool/config/templates/cluster.yaml) for further information.

## Deploying a node pool powered by Spot Fleet

Utilizing Spot Fleet gives you a chance to dramatically reduce the cost of the EC2 instances powering your Kubernetes worker nodes while achieving reasonable availability.
AWS says the cost reduction can be up to 90%, but the actual savings vary somewhat across instance types and with other users' bids.

Spot Fleet support may change in backward-incompatible ways as it is still an experimental feature,
so please use this feature at your own risk.
However, we'd greatly appreciate your feedback because it does accelerate improvements in this area!

This feature assumes you already have an IAM role with an ARN like "arn:aws:iam::youraccountid:role/aws-ec2-spot-fleet-role" in your own AWS account.
This implies that you've visited the "Spot Requests" page of the EC2 Dashboard in the AWS console at least once.
See [the AWS documentation describing pre-requisites for Spot Fleet](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) for details.
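To double-check that the role already exists, a minimal sketch with the AWS CLI; the role name is taken from the example ARN above and yours may differ:

```sh
# Prints the role if it exists; fails with a NoSuchEntity error otherwise.
$ aws iam get-role --role-name aws-ec2-spot-fleet-role
```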
To add a node pool powered by Spot Fleet, edit the node pool's `cluster.yaml`:

```yaml
worker:
  spotFleet:
    targetCapacity: 3
```

To customize your launch specifications to diversify your pool among instance types other than the defaults, edit `cluster.yaml`:

```yaml
worker:
  spotFleet:
    targetCapacity: 5
    launchSpecifications:
    - weightedCapacity: 1
      instanceType: m3.medium
    - weightedCapacity: 2
      instanceType: m3.large
    - weightedCapacity: 2
      instanceType: m4.large
```

This configuration would normally result in Spot Fleet bringing up 3 instances to meet your target capacity of 5:

* 1x m3.medium = 1 capacity
* 1x m3.large = 2 capacity
* 1x m4.large = 2 capacity

This is achieved by the `diversified` strategy of Spot Fleet.
Please read [the AWS documentation describing Spot Fleet Allocation Strategy](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html#spot-fleet-allocation-strategy) for more details.
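If you want to inspect the resulting fleet, a minimal sketch with the AWS CLI; the spot fleet request ID below is a placeholder you would copy from the first command's output:

```sh
# List spot fleet requests in the region, including the one backing the node pool.
$ aws ec2 describe-spot-fleet-requests

# Show the instances launched for a given spot fleet request.
$ aws ec2 describe-spot-fleet-instances --spot-fleet-request-id sfr-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```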
Please also see [the detailed comments in `cluster.yaml`](https://github.com/coreos/kube-aws/blob/master/nodepool/config/templates/cluster.yaml) and [the GitHub issue summarizing the initial implementation](https://github.com/coreos/kube-aws/issues/112) of this feature for further information.

When you are done with your cluster, [destroy your cluster][aws-step-6].

[aws-step-1]: kubernetes-on-aws.md
[aws-step-2]: kubernetes-on-aws-render.md
[aws-step-3]: kubernetes-on-aws-launch.md
[aws-step-4]: kube-aws-cluster-updates.md
[aws-step-5]: kubernetes-on-aws-node-pool.md
[aws-step-6]: kubernetes-on-aws-destroy.md
New file (+26 lines)

# Pre-requisites

If you're deploying a cluster with kube-aws:

* [EC2 instances whose types are larger than or equal to `m3.medium` should be chosen for the cluster to work reliably](https://github.com/coreos/kube-aws/issues/138)
* [At least 3 etcd, 2 controller and 2 worker nodes are required to achieve high availability](https://github.com/coreos/kube-aws/issues/138#issuecomment-266432162)

## Deploying to an existing VPC

If you're deploying a cluster to an existing VPC:

* An Internet Gateway needs to be added to the VPC before the cluster can be created
  * Otherwise [all the nodes will fail to launch because they can't pull docker images or ACIs required to run essential processes like fleet, hyperkube, etcd, awscli, cfn-signal, cfn-init](https://github.com/coreos/kube-aws/issues/120)
* Existing route tables to be reused by kube-aws must be tagged with the key `KubernetesCluster` and your cluster's name as the value (see the example after this list)
  * Otherwise [Kubernetes will fail to create ELBs corresponding to Kubernetes services with `type=LoadBalancer`](https://github.com/coreos/kube-aws/issues/135)
* ["DNS Hostnames" must be turned on before the cluster can be created](https://github.com/coreos/kube-aws/issues/119)
  * Otherwise etcd nodes are unable to communicate with each other and thus the cluster doesn't work at all
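A minimal sketch of satisfying the route-table tagging and "DNS Hostnames" requirements with the AWS CLI; the route table ID, VPC ID and cluster name below are placeholders:

```sh
# Tag the existing route table so Kubernetes can associate it with this cluster.
$ aws ec2 create-tags --resources rtb-xxxxxxxx \
    --tags Key=KubernetesCluster,Value=<your-cluster-name>

# Turn on "DNS Hostnames" for the existing VPC.
$ aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxx --enable-dns-hostnames '{"Value":true}'
```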
Once you understand the pre-requisites, you are [ready to launch your first Kubernetes cluster][aws-step-1].

[aws-step-1]: kubernetes-on-aws.md
[aws-step-2]: kubernetes-on-aws-render.md
[aws-step-3]: kubernetes-on-aws-launch.md
[aws-step-4]: kube-aws-cluster-updates.md
[aws-step-5]: kubernetes-on-aws-node-pool.md
[aws-step-6]: kubernetes-on-aws-destroy.md
