Commit aae5e44: feat: Add provider documentation (#45)
Co-authored-by: Josh Stevens <[email protected]>

# AWS

## Prerequisites

Ensure that you have the following installed and configured:

- **[AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)**: Configured with necessary permissions.
- **[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)**: Installed and configured.
- **[Helm](https://helm.sh/docs/intro/install/)**: Installed.
- **[eksctl](https://eksctl.io/installation/)**: Installed.

## 1. Create an EKS Cluster

This command creates a new EKS cluster with a managed node group. Adjust the `--region`, `--node-type`, and node count options as needed.

```bash
eksctl create cluster --name my-cluster --region us-west-2 --nodegroup-name standard-workers --node-type t3.medium --nodes 1 --nodes-min 1 --nodes-max 2 --managed
```

Output:

```bash
2024-08-20 18:21:15 [ℹ] eksctl version 0.189.0-dev+c9afc4260.2024-08-19T12:43:03Z
2024-08-20 18:21:15 [ℹ] using region us-west-2
2024-08-20 18:21:16 [ℹ] setting availability zones to [us-west-2c us-west-2d us-west-2b]
2024-08-20 18:21:16 [ℹ] subnets for us-west-2c - public:192.168.0.0/19 private:192.168.96.0/19
2024-08-20 18:21:16 [ℹ] subnets for us-west-2d - public:192.168.32.0/19 private:192.168.128.0/19
2024-08-20 18:21:16 [ℹ] subnets for us-west-2b - public:192.168.64.0/19 private:192.168.160.0/19
2024-08-20 18:21:16 [ℹ] nodegroup "standard-workers" will use "" [AmazonLinux2/1.30]
2024-08-20 18:21:16 [ℹ] using Kubernetes version 1.30
2024-08-20 18:21:16 [ℹ] creating EKS cluster "my-cluster" in "us-west-2" region with managed nodes
2024-08-20 18:21:16 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2024-08-20 18:21:16 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=my-cluster'
2024-08-20 18:21:16 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "my-cluster" in "us-west-2"
2024-08-20 18:21:16 [ℹ] CloudWatch logging will not be enabled for cluster "my-cluster" in "us-west-2"
2024-08-20 18:21:16 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=my-cluster'
2024-08-20 18:21:16 [ℹ] default addons coredns, vpc-cni, kube-proxy were not specified, will install them as EKS addons
2024-08-20 18:21:16 [ℹ]
2 sequential tasks: { create cluster control plane "my-cluster",
    2 sequential sub-tasks: {
        2 sequential sub-tasks: {
            1 task: { create addons },
            wait for control plane to become ready,
        },
        create managed nodegroup "standard-workers",
    }
}
2024-08-20 18:21:16 [ℹ] building cluster stack "eksctl-my-cluster-cluster"
2024-08-20 18:21:18 [ℹ] deploying stack "eksctl-my-cluster-cluster"
2024-08-20 18:21:48 [ℹ] waiting for CloudFormation stack "eksctl-my-cluster-cluster"
...
2024-08-20 18:30:29 [ℹ] creating addon
2024-08-20 18:30:29 [ℹ] successfully created addon
2024-08-20 18:30:30 [!] recommended policies were found for "vpc-cni" addon, but since OIDC is disabled on the cluster, eksctl cannot configure the requested permissions; the recommended way to provide IAM permissions for "vpc-cni" addon is via pod identity associations; after addon creation is completed, add all recommended policies to the config file, under `addon.PodIdentityAssociations`, and run `eksctl update addon`
2024-08-20 18:30:30 [ℹ] creating addon
2024-08-20 18:30:31 [ℹ] successfully created addon
2024-08-20 18:30:32 [ℹ] creating addon
2024-08-20 18:30:32 [ℹ] successfully created addon
2024-08-20 18:32:35 [ℹ] building managed nodegroup stack "eksctl-my-cluster-nodegroup-standard-workers"
2024-08-20 18:32:37 [ℹ] deploying stack "eksctl-my-cluster-nodegroup-standard-workers"
2024-08-20 18:32:37 [ℹ] waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-standard-workers"
...
2024-08-20 18:37:39 [✔] saved kubeconfig as "/Users/rrelayer/.kube/config"
2024-08-20 18:37:39 [ℹ] no tasks
2024-08-20 18:37:39 [✔] all EKS cluster resources for "my-cluster" have been created
2024-08-20 18:37:39 [✔] created 0 nodegroup(s) in cluster "my-cluster"
2024-08-20 18:37:40 [ℹ] nodegroup "standard-workers" has 1 node(s)
2024-08-20 18:37:40 [ℹ] node "ip-192-168-22-89.us-west-2.compute.internal" is ready
2024-08-20 18:37:40 [ℹ] waiting for at least 1 node(s) to become ready in "standard-workers"
2024-08-20 18:37:40 [ℹ] nodegroup "standard-workers" has 1 node(s)
2024-08-20 18:37:40 [ℹ] node "ip-192-168-22-89.us-west-2.compute.internal" is ready
2024-08-20 18:37:40 [✔] created 1 managed nodegroup(s) in cluster "my-cluster"
2024-08-20 18:37:41 [ℹ] kubectl command should work with "/Users/rrelayer/.kube/config", try 'kubectl get nodes'
2024-08-20 18:37:41 [✔] EKS cluster "my-cluster" in "us-west-2" region is ready
```

Confirm the cluster is active:

```bash
eksctl get cluster --name my-cluster --region us-west-2
```

Output:

```bash
NAME        VERSION  STATUS  CREATED               VPC                    SUBNETS                                                                                                                                                          SECURITYGROUPS        PROVIDER
my-cluster  1.30     ACTIVE  2024-08-20T16:21:42Z  vpc-090d3761130933be4  subnet-00f479ddeb9bc51f7,subnet-0123eaaf4d9fb037a,subnet-09256a39c7e39ad7c,subnet-0df075e1795076648,subnet-0ed78cc4efed47b11,subnet-0f64d1e62abe83d4d  sg-0939a7fb80a664be9  EKS
```

`eksctl` automatically configures your `kubeconfig` file. To check your nodes:

```bash
kubectl get nodes
```

Output:

```bash
NAME                                          STATUS   ROLES    AGE     VERSION
ip-192-168-22-89.us-west-2.compute.internal   Ready    <none>   6m33s   v1.30.2-eks-1552ad0
```

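Equivalently, eksctl accepts a declarative config file. The sketch below mirrors the flags used above; field names follow the eksctl `ClusterConfig` schema (verify against your eksctl version):

```yaml
# cluster.yaml: declarative equivalent of the `eksctl create cluster` flags above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster
  region: us-west-2

managedNodeGroups:
  - name: standard-workers
    instanceType: t3.medium
    desiredCapacity: 1
    minSize: 1
    maxSize: 2
```

You would then run `eksctl create cluster -f cluster.yaml`, which makes the cluster definition easy to review and version-control.
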
## 2. Deploy the Helm Chart

### 2.1. Clone the rrelayer repository

```bash
git clone https://github.com/joshstevens19/rrelayer.git
```

### 2.2. Configure the `values.yaml` File

Customize the `values.yaml` for your deployment:

```yaml
replicaCount: 2

image:
  repository: ghcr.io/joshstevens19/rrelayer
  tag: "latest"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 3000

ingress:
  enabled: false

postgresql:
  enabled: false
```

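The values above leave ingress disabled, which is fine when you reach the service via `kubectl port-forward`. If you want external access, an override along these lines may apply. This is only a sketch: the hostname is a placeholder, the exact field names depend on the chart's ingress template, and `alb` assumes the AWS Load Balancer Controller is installed in the cluster:

```yaml
ingress:
  enabled: true
  className: alb                  # assumption: AWS Load Balancer Controller is installed
  hosts:
    - host: rrelayer.example.com  # placeholder hostname
      paths:
        - path: /
          pathType: Prefix
```
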
### 2.3. Install the Helm Chart

```bash
helm install rrelayer ./helm/rrelayer -f helm/rrelayer/values.yaml
```

Output:

```bash
NAME: rrelayer
LAST DEPLOYED: Tue Aug 20 18:43:58 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=rrelayer,app.kubernetes.io/instance=rrelayer" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
```

### 2.4. Verify the Deployment

```bash
kubectl get pods
```

Output:

```bash
NAME                                READY   STATUS    RESTARTS   AGE
rrelayer-rrelayer-94dd58475-p8g5d   1/1     Running   0          17s
```

## 3. Monitor and Manage the Deployment

### 3.1. Health Monitoring

rrelayer exposes a lightweight health endpoint that reports whether the service is ready to accept traffic. The endpoint lives on the same port as the main API.

#### 3.1.1. Accessing the Health Endpoint

The health endpoint is automatically available when you run rrelayer:

- `GET /health` - Returns the current health status as JSON

Example response:

```json
{
  "status": "healthy"
}
```

#### 3.1.2. Health Status Types

The response contains a single `status` field:

- `healthy` - rrelayer is running normally and can process requests
- `unhealthy` - returned with a non-200 HTTP status when rrelayer fails to initialise or encounters a fatal error

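A simple check can key off that field. The sketch below parses a canned response with `python3` to avoid a `jq` dependency; in a live cluster you would populate `RESPONSE` from `curl -s http://localhost:3000/health` (for example after a `kubectl port-forward`):

```shell
# Extract the "status" field from a /health response.
# RESPONSE is hard-coded here for illustration; in practice:
#   RESPONSE=$(curl -s http://localhost:3000/health)
RESPONSE='{"status":"healthy"}'
STATUS=$(echo "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["status"])')

if [ "$STATUS" = "healthy" ]; then
  echo "rrelayer OK"
else
  echo "rrelayer unhealthy" >&2
  exit 1
fi
```

The non-zero exit on failure makes this directly usable from cron jobs or CI smoke tests.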
#### 3.1.3. Monitoring in Production

For production deployments, you can:

1. **Configure load balancer health checks** to target `/health`
2. **Set up alerting** on HTTP status codes (`200 OK` is healthy; any non-200 indicates an issue)
3. **Forward metrics to observability stacks** such as Prometheus, Grafana, or DataDog
4. **Automate incident response** when the health endpoint reports failures

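Inside the cluster, the same endpoint can back Kubernetes liveness and readiness probes. A sketch for the container spec in the chart's deployment template follows; the port and timing values are assumptions, and the port should match `service.port` / `api_config.port`:

```yaml
# Probe sketch for the rrelayer container spec (port and delays are assumptions).
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
```

With these in place, Kubernetes restarts a pod whose liveness probe fails and withholds traffic from one whose readiness probe fails.
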
#### 3.1.4. Custom Health Port

The health endpoint listens on whatever port you configure for the API. Update `api_config.port` in `rrelayer.yaml` if you need to expose the service on a different port:

```yaml
api_config:
  port: 3000
```

### 3.2. View Logs

```bash
kubectl logs -l app.kubernetes.io/name=rrelayer
```

Output:

```bash
2024-08-20T16:44:17.710908Z INFO rrelayer is up on http://localhost:3000
2024-08-20T16:44:17.779423Z INFO Applied database schema
```

### 3.3. Upgrade the Helm Chart

```bash
helm upgrade rrelayer ./helm/rrelayer -f helm/rrelayer/values.yaml
```

## 4. Clean Up

### 4.1. Uninstall the Helm Chart

```bash
helm uninstall rrelayer
```

Output:

```bash
release "rrelayer" uninstalled
```

### 4.2. Delete the EKS Cluster

```bash
eksctl delete cluster --name my-cluster --region us-west-2
```

Output:

```bash
2024-08-20 18:49:04 [ℹ] deleting EKS cluster "my-cluster"
2024-08-20 18:49:05 [ℹ] will drain 0 unmanaged nodegroup(s) in cluster "my-cluster"
2024-08-20 18:49:05 [ℹ] starting parallel draining, max in-flight of 1
2024-08-20 18:49:05 [✖] failed to acquire semaphore while waiting for all routines to finish: context canceled
2024-08-20 18:49:07 [ℹ] deleted 0 Fargate profile(s)
2024-08-20 18:49:09 [✔] kubeconfig has been updated
2024-08-20 18:49:09 [ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2024-08-20 18:49:12 [ℹ]
2 sequential tasks: { delete nodegroup "standard-workers", delete cluster control plane "my-cluster" [async]
}
2024-08-20 18:49:12 [ℹ] will delete stack "eksctl-my-cluster-nodegroup-standard-workers"
2024-08-20 18:49:12 [ℹ] waiting for stack "eksctl-my-cluster-nodegroup-standard-workers" to get deleted
2024-08-20 18:49:13 [ℹ] waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-standard-workers"
....
2024-08-20 18:58:09 [ℹ] will delete stack "eksctl-my-cluster-cluster"
2024-08-20 18:58:10 [✔] all cluster resources were deleted
```

This guide provides the necessary steps to deploy the `rrelayer` Helm chart on AWS EKS using `eksctl`.
