---
Title: 'How to set up a local Knative environment with KinD and without DNS headaches'
Author: Leon Stigter
Author handle: https://twitter.com/retgits
Date: ''
Description: A how-to guide to deploy Knative, Kourier, and your first app on top of a Kubernetes cluster.
Folder with media files: 'N/A'
Blog URL: ''
Labels: Articles
Reviewers: ''

---
| Reviewer | Date | Approval |
| ------------- | ------------- | ------------- |
| @retgits | 2020-06-03 |:+1:|
| <!-- Your Github handle here --> | | |

Knative builds on Kubernetes to abstract away complexity for developers, and enables them to focus on delivering value to their business. The complex (and sometimes boring) parts of building apps to run on Kubernetes are managed by Knative. In this post, we will focus on setting up a lightweight environment to help you to develop modern apps faster using Knative.

## Step 1: Setting up your Kubernetes deployment using KinD
There are many options for creating a Kubernetes cluster on your local machine. However, since we are running containers in the Kubernetes cluster anyway, let’s also use containers for the cluster itself. Kubernetes IN Docker, or _KinD_ for short, enables developers to spin up a Kubernetes cluster where each cluster node is a container.

You can install KinD on your machine by running the following commands:

```bash
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.8.1/kind-$(uname)-amd64
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind
```

Next, create a Kubernetes cluster using KinD, and expose the ports that the ingress gateway will listen on to the host. To do this, you can pass in a file with the following cluster configuration parameters:

```bash
cat > clusterconfig.yaml <<EOF
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
  extraPortMappings:
  ## expose port 31080 of the node to port 80 on the host
  - containerPort: 31080
    hostPort: 80
  ## expose port 31443 of the node to port 443 on the host
  - containerPort: 31443
    hostPort: 443
EOF
```

The values for the container ports are arbitrarily chosen, and are used later on to configure a NodePort service with the same values.
The values for the host ports are where you'll send cURL requests to as you deploy applications to the cluster.
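
As a quick sanity check, you can read the mapping back out of the file. This is an illustrative one-liner (assuming the `clusterconfig.yaml` created above) that prints each containerPort/hostPort pair:

```bash
# print each "containerPort -> hostPort" pair from the cluster config
awk '/containerPort:/ {cp=$NF} /hostPort:/ {print cp " -> " $NF}' clusterconfig.yaml
# 31080 -> 80
# 31443 -> 443
```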

After the cluster configuration file has been created, you can create a cluster. Your `kubeconfig` will automatically be updated, and the default cluster will be set to your new cluster.

```bash
$ kind create cluster --name knative --config clusterconfig.yaml
```

```bash
Creating cluster "knative" ...
 ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-knative"
You can now use your cluster with:

kubectl cluster-info --context kind-knative

Have a nice day! 👋
```
## Step 2: Install Knative Serving
Now that the cluster is running, you can add Knative components using the Knative CRDs. At the time of writing, the latest available version is 0.15.

```bash
$ kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/serving-crds.yaml
```

```bash
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created
```

After the CRDs are installed, the core components are next to be installed on your cluster. For brevity, the unchanged components have been removed from the output below.

```bash
$ kubectl apply --filename https://github.com/knative/serving/releases/download/v0.15.0/serving-core.yaml
```

```bash
namespace/knative-serving created
serviceaccount/controller created
clusterrole.rbac.authorization.k8s.io/knative-serving-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created
image.caching.internal.knative.dev/queue-proxy created
configmap/config-autoscaler created
configmap/config-defaults created
configmap/config-deployment created
configmap/config-domain created
configmap/config-gc created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-network created
configmap/config-observability created
configmap/config-tracing created
horizontalpodautoscaler.autoscaling/activator created
deployment.apps/activator created
service/activator-service created
deployment.apps/autoscaler created
service/autoscaler created
deployment.apps/controller created
service/controller created
deployment.apps/webhook created
service/webhook created
clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-serving-core created
clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created
```

## Step 3: Set up networking using Kourier
Next, choose a networking layer. This example uses Kourier. Kourier is the option with the lowest resource requirements, and connects to Envoy and the Knative Ingress CRDs directly.

To install Kourier and make it available as a service leveraging the node ports, you’ll need to download the YAML file first and make a few changes.

```bash
curl -Lo kourier.yaml https://github.com/knative/net-kourier/releases/download/v0.15.0/kourier.yaml
```

By default, the Kourier service is set to be of type `LoadBalancer`. On local machines, this type doesn’t work, so you’ll have to change the type to `NodePort` and add `nodePort` elements to the two listed ports.
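
The exact line numbers can drift between releases, so rather than counting lines you can locate the Service definition directly. An illustrative search (assuming the `kourier.yaml` downloaded above):

```bash
# print the line number of every Service definition in the downloaded manifest
grep -n "kind: Service" kourier.yaml
```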

The complete Service portion (which runs from line 75 to line 94 in the document) should be replaced with:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kourier
  namespace: kourier-system
  labels:
    networking.knative.dev/ingress-provider: kourier
spec:
  ports:
  - name: http2
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 31080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
    nodePort: 31443
  selector:
    app: 3scale-kourier-gateway
  type: NodePort
```

To install the Kourier controller, enter the command:

```bash
$ kubectl apply --filename kourier.yaml
```

```bash
namespace/kourier-system created
configmap/config-logging created
configmap/config-observability created
configmap/config-leader-election created
service/kourier created
deployment.apps/3scale-kourier-gateway created
deployment.apps/3scale-kourier-control created
clusterrole.rbac.authorization.k8s.io/3scale-kourier created
serviceaccount/3scale-kourier created
clusterrolebinding.rbac.authorization.k8s.io/3scale-kourier created
service/kourier-internal created
service/kourier-control created
configmap/kourier-bootstrap created
```

Now you will need to set Kourier as the default networking layer for Knative Serving. You can do this by entering the command:

```bash
$ kubectl patch configmap/config-network \
    --namespace knative-serving \
    --type merge \
    --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
```

If you want to validate that the patch command was successful, run the command:

```bash
$ kubectl describe configmap/config-network --namespace knative-serving
```

```bash
... (abbreviated for readability)
ingress.class:
----
kourier.ingress.networking.knative.dev
...
```

To get the same experience that you would when using a cluster that has DNS names set up, you can add a “magic” DNS provider.

_nip.io_ provides a wildcard DNS service: any hostname that ends in an IP address followed by _nip.io_ automatically resolves to that IP address.
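
In other words, the IP address is embedded in the hostname itself and no DNS records need to be managed. As a small illustration, this snippet (using the hostname this post’s sample app ends up with) recovers the embedded address:

```bash
host="helloworld-go.default.127.0.0.1.nip.io"
# strip the .nip.io suffix, then keep the last four dot-separated fields: the embedded IP
echo "${host%.nip.io}" | awk -F. '{print $(NF-3)"."$(NF-2)"."$(NF-1)"."$NF}'
# 127.0.0.1
```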

To patch the domain configuration for Knative Serving using nip.io, enter the command:

```bash
$ kubectl patch configmap/config-domain \
    --namespace knative-serving \
    --type merge \
    --patch '{"data":{"127.0.0.1.nip.io":""}}'
```

If you want to validate that the patch command was successful, run the command:

```bash
$ kubectl describe configmap/config-domain --namespace knative-serving
```

```bash
... (abbreviated for readability)
Data
====
127.0.0.1.nip.io:
----
...
```

By now, all pods in the knative-serving and kourier-system namespaces should be running.
You can check this by entering the commands:

```bash
$ kubectl get pods --namespace knative-serving
```

```bash
NAME                          READY   STATUS    RESTARTS   AGE
activator-6d9f95b7f8-w6m68    1/1     Running   0          12m
autoscaler-597fd8d69d-gmh9s   1/1     Running   0          12m
controller-7479cc984d-492fm   1/1     Running   0          12m
webhook-bf465f954-4c7wq       1/1     Running   0          12m
```

```bash
$ kubectl get pods --namespace kourier-system
```

```bash
NAME                                      READY   STATUS    RESTARTS   AGE
3scale-kourier-control-699cbc695-ztswk    1/1     Running   0          10m
3scale-kourier-gateway-7df98bb5db-5bw79   1/1     Running   0          10m
```
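
If you are scripting this setup rather than eyeballing the output, you can count the pods that are not yet running. This is an illustrative sketch (the helper name is made up); it reads a `kubectl get pods` listing on stdin:

```bash
# count pods whose STATUS column is not "Running" (skipping the header row)
not_running() {
  awk 'NR > 1 && $3 != "Running" {n++} END {print n+0}'
}

# with a live cluster: kubectl get pods --namespace knative-serving | not_running
```

A result of 0 for both namespaces means you are ready to move on.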

To validate that your cluster gateway is in the right state and using the right ports, enter the commands:

```bash
$ kubectl --namespace kourier-system get service kourier
```

```bash
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kourier   NodePort   10.98.179.178   <none>        80:31080/TCP,443:31443/TCP   87m
```

```bash
$ docker ps -a
```

```bash
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                                                                       NAMES
d53c275d7461   kindest/node:v1.18.2   "/usr/local/bin/entr…"   4 hours ago   Up 4 hours   127.0.0.1:49350->6443/tcp, 0.0.0.0:80->31080/tcp, 0.0.0.0:443->31443/tcp   knative-control-plane
```

The ports, and how they’re tied to the host, should be the same as you’ve defined in the clusterconfig file. For example, port 31080 in the cluster is exposed as port 80 on the host.

## Step 4: Deploying your first app
Now that the cluster, Knative, and the networking components are ready, you can deploy an app.
The straightforward [Go app](https://knative.dev/docs/eventing/samples/helloworld/helloworld-go/) is an excellent example app to deploy.
The first step is to create a yaml file with the hello world service definition:

```bash
cat > service.yaml <<EOF
apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
  name: helloworld-go # The name of the app
  namespace: default # The namespace the app will use
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
        env:
        - name: TARGET # The environment variable printed out by the sample app
          value: "Hello Knative Serving is up and running with Kourier!!"
EOF
```

To deploy your app to Knative, enter the command:

```bash
$ kubectl apply --filename service.yaml
```

To validate your deployment, you can use `kubectl get ksvc`.
**NOTE:** While your cluster is still configuring the components that make up the service, the output of the `kubectl get ksvc` command will show that the revision is missing. The **READY** status eventually changes to **True**.

```bash
$ kubectl get ksvc
```

```bash
NAME            URL                                             LATESTCREATED         LATESTREADY           READY     REASON
helloworld-go   http://helloworld-go.default.127.0.0.1.nip.io   helloworld-go-fqqs6                         Unknown   RevisionMissing
```

```bash
NAME            URL                                             LATESTCREATED         LATESTREADY           READY     REASON
helloworld-go   http://helloworld-go.default.127.0.0.1.nip.io   helloworld-go-fqqs6   helloworld-go-fqqs6   True
```

The final step is to test your application by checking that the code returns what you expect. You can do this by sending a cURL request to the URL listed above.

Because this example mapped port 80 of the host to be forwarded to the cluster and set up the DNS, you can use the URL exactly as listed.

```bash
$ curl -v http://helloworld-go.default.127.0.0.1.nip.io
```

```bash
Hello Knative Serving is up and running with Kourier!!
```

## Step 5: Cleaning up
You can stop your cluster and remove all the resources you’ve created by entering the command:

```bash
kind delete cluster --name knative
```

## About the author
As a Product Manager, Leon is very passionate and outspoken when it comes to serverless and container technologies. He believes that "devs wanna dev" and that drives his passion to help build better products. He enjoys writing code, speaking at conferences and meetups, and blogging about that.
In his personal life, he’s on a mission to taste cheesecake in every city he visits (suggestions are welcome @retgits).
