kubernetes/README.md
@@ -27,6 +27,7 @@ Table of Contents:
- [3.10. Ingress](#310-ingress)
- [3.11. ConfigMap](#311-configmap)
- [3.12. Liveness and Readiness](#312-liveness-and-readiness)
- [3.13. Gateway](#313-gateway)
- [4. Security](#4-security)
- [4.1. Kubernetes API Server Control access](#41-kubernetes-api-server-control-access)
- [4.2. Secure Pod and Container](#42-secure-pod-and-container)
@@ -70,6 +71,7 @@ Table of Contents:
- Read and store the configuration
- Command line interface
- `kube-api-server`: exposes the Kubernetes API.
- The API is the front end for the Kubernetes control plane.
- Kubernetes system components communicate only with the API server. The API server is the only component that communicates with etcd.
- Flow:
@@ -79,6 +81,7 @@ Table of Contents:
```
- `etcd`: Consistent and highly-available key-value store used as Kubernetes's backing store for all cluster data.
- Explore the Kubernetes configuration and status in etcd:
```bash
@@ -212,6 +215,7 @@ kubectl get ev --field-selector type=Warning
| Declarative object configuration | Directories of files | Production projects | 1+ | Highest |
- **Imperative commands**:
- User operates directly on live objects in a cluster.
216
220
- The recommended way to get started or to run a one-off task in a cluster.
217
221
- Example:
@@ -230,6 +234,7 @@ kubectl get ev --field-selector type=Warning
- Commands do not provide a template for creating new objects.
- **Imperative object configuration**:
- The `kubectl` command specifies the operation (create, replace, etc.), optional flags, and at least one file name.
- Example:
@@ -253,6 +258,7 @@ kubectl get ev --field-selector type=Warning
- Updates to live objects must be reflected in configuration files, or they will be lost during the next replacement.
- **Declarative object configuration**:
- User operates on object configuration files stored locally; however, the user doesn't define the operations to be taken on the files. Create, update, and delete operations are automatically detected per-object by `kubectl`. This enables working on directories, where different operations might be needed for different objects.
- Example:
258
264
@@ -291,6 +297,7 @@ kubectl get ev --field-selector type=Warning
- Each pod gets its own unique IP address and can communicate with all other pods through a flat, NAT-less network.
- The network is set up by the system administrator or by a Container Network Interface (CNI) plugin, not by Kubernetes itself.
- For example, the Flannel CNI plugin:
- Read more [here](https://chunqi.li/2015/10/10/Flannel-for-Docker-Overlay-Network/)
- Provides multi-host container networking.
- Flannel also uses etcd to store its configuration and status.
@@ -416,26 +423,32 @@ spec:
```
- Headless Service:
- Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what is termed a _headless Service_ by explicitly specifying `"None"` for the cluster IP address (`.spec.clusterIP`).
- For headless Services, a cluster IP is not allocated. kube-proxy does not handle these Services, and there is no load balancing or proxying done by the platform for them. The cluster DNS returns not just a single `A` record pointing to the service's cluster IP, but multiple `A` records, one for each pod that's part of the service. Clients can therefore query the DNS to get the IPs of all the pods in the service.
- A headless Service allows a client to connect to whichever Pod it prefers, directly.
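- For illustration, a minimal headless Service manifest might look like this (the name and selector labels are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # illustrative name
spec:
  clusterIP: None             # "None" makes the Service headless
  selector:
    app: my-app               # illustrative pod label
  ports:
    - port: 80
      targetPort: 8080
```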

- Expose services externally:
- ClusterIP services are only accessible within the cluster.
- If you need to make a service available to the outside world, you can do one of the following:
- ~~Assign an additional IP to a node and set it as one of the service's `externalIPs`~~.
- Set the service's type to `NodePort` and access the service through the node's port(s).
- Kubernetes makes the service available on a network port on all cluster nodes. Because the port is open on the nodes, it's called a node port.
- Expose pods through a NodePort service:
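- A sketch of a NodePort Service (the service name, labels, and node port value are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app         # illustrative pod label
  ports:
    - port: 80          # port the Service is reachable on inside the cluster
      targetPort: 8080  # port the application in the pod listens on
      nodePort: 30080   # port opened on every node (default range 30000-32767)
```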

- Expose multiple ports with a NodePort service:
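- For example (values are illustrative), each port must be named when a service exposes more than one port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-multiport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 8443
      nodePort: 30443
```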

- Ask Kubernetes to provision a LoadBalancer by setting the type to `LoadBalancer`.
- The LoadBalancer stands in front of the nodes and handles the connections coming from the clients. It routes each connection to the service by forwarding it to the node port on one of the nodes.
- The `LoadBalancer` service type is an extension of the `NodePort` type, which makes the service accessible through these node ports.
- Expose a LoadBalancer service.
@@ -504,13 +517,15 @@ spec:
- downwardAPI
- A **PersistentVolume** object represents a storage volume available in the cluster that can be used to persist application data.
- A pod transitively references a persistent volume and its underlying storage by referring to a **PersistentVolumeClaim** object that references the **PersistentVolume** object, which then references the underlying storage. This allows the ownership of the persistent volume to be decoupled from the lifecycle of the pod.
- A **PersistentVolumeClaim** represents a user's claim on the persistent volume.
- Benefits of using persistent volumes and claims:
- The infrastructure-specific details are now decoupled from the application represented by the pod.
- Example:
- Create PersistentVolume:
515
530
516
531
```yaml
@@ -612,6 +627,7 @@ volumeBindingMode: WaitForFirstConsumer # How volumes of this class are provisio
- As a file in a pod (via volumes).
- As image pull secrets, used by the kubelet when pulling container images from a private registry.
- Generate secrets:
- Using files.
```bash
@@ -640,7 +656,9 @@ volumeBindingMode: WaitForFirstConsumer # How volumes of this class are provisio
```
- Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.
- [Enable Encryption at Rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) for Secrets.
- Generate encryption keys: Create strong encryption keys using a secure method. Algorithms like AES-GCM are recommended for both confidentiality and integrity.
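- As a sketch, the key can then be referenced from an `EncryptionConfiguration` file passed to the API server via `--encryption-provider-config` (the key name and value below are placeholders; a random 32-byte key can be generated with `head -c 32 /dev/urandom | base64`):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aesgcm:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback so previously unencrypted Secrets remain readable
```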
- Understanding the (lack of) isolation between namespaces:
- When two pods created in different namespaces are scheduled to the same cluster node, they both run in the same OS kernel -> an application that breaks out of its container or consumes too much of the node's resources can affect the operation of the other application.
- Kubernetes doesn't provide network isolation between applications running in pods in different namespaces (by default) -> Can use the NetworkPolicy object to configure which applications in which namespaces can connect to which applications in other namespaces.
- Should not use namespaces to split a single physical cluster into production, staging, and development environments.
@@ -833,6 +853,19 @@ spec:
- The kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. This can be used to adopt liveness checks on slow-starting containers, avoiding them getting killed by the kubelet before they are up and running.
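- For instance, a slow-starting container might be given a startup probe alongside its liveness probe (the image, endpoint, and thresholds are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-starting-app
spec:
  containers:
    - name: app
      image: my-app:1.0        # illustrative image
      startupProbe:
        httpGet:
          path: /healthz
          port: 8080
        failureThreshold: 30   # allow up to 30 * 10s = 300s for the app to start
        periodSeconds: 10
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10      # evaluated only after the startup probe has succeeded
```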
- The [Gateway API](https://gateway-api.sigs.k8s.io/) is an official Kubernetes project, representing the next generation of APIs for ingress, load balancing and potentially even service mesh interactions. It provides a unified specification for configuring network traffic routing, aiming to be expressive, extensible and aligned with the different roles involved in managing Kubernetes infrastructure and applications.
- The limitations of Ingress:
- Kubernetes Ingress defines rules for exposing HTTP and HTTPS services running within the cluster to external clients. It typically relies on an Ingress controller, a separate piece of software running in the cluster that watches Ingress resources and configures an underlying load balancer or proxy accordingly. While simple path- and host-based routing work well with Ingress, its core specification is limited: more advanced features such as header-based routing or traffic splitting for canary releases usually require controller-specific annotations.
- The single Ingress resource often blurs the lines of responsibility between infrastructure operators and application developers, potentially leading to configuration conflicts or overly broad permissions.
- Resource model:
- The primary resources are GatewayClass (cluster-scoped), Gateway, and various Route types (HTTPRoute, TCPRoute or GRPCRoute).
- GatewayClass defines a class of Gateways managed by a common controller.
- Gateway requests a specific traffic entrypoint, like a load balancer IP address, based on a GatewayClass.
- Route resources contain the specific rules for how traffic arriving at a Gateway listener should be mapped to backend Kubernetes services.
- This layered approach allows platform teams to manage the underlying infrastructure (GatewayClass, Gateway) and set policies, while application teams can independently manage the routing logic specific to their services (Route resources) within the established boundaries.
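- A hedged sketch of how the pieces fit together (the class name, hostname, and backend Service are assumptions):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class   # GatewayClass provided by the platform team
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
    - name: example-gateway         # attach this route to the Gateway above
  hostnames:
    - "app.example.com"             # illustrative hostname
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: my-service          # illustrative backend Service
          port: 80
```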
## 4. Security
### 4.1. Kubernetes API Server Control access
@@ -851,6 +884,7 @@ spec:
- _system:serviceaccounts_: all ServiceAccounts in the system.
- _system:serviceaccounts:\<namespace\>_: includes all ServiceAccounts in a specific namespace.
- ServiceAccounts:
- Every Pod is associated with a ServiceAccount, which represents the identity of the app running in the pod.
- RBAC uses user roles as the key factor in determining whether the user may perform the action or not. A subject is associated with one or more roles, and each role is allowed to perform certain verbs on certain resources.
- RBAC authorization rules are configured through four resources, which fall into two groups:
- **Roles** and **ClusterRoles**, which specify which verbs can be performed on which resources.
@@ -876,6 +911,7 @@ spec:
- RoleBindings and Roles are namespaced; ClusterRoles and ClusterRoleBindings aren't.
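- A minimal illustration (names and namespace are placeholders): a Role that allows reading Pods, bound to a ServiceAccount:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]                # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: my-app                   # illustrative ServiceAccount
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```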
- Binding to a host port without using the host's network namespace.
- **NodePort** vs **hostPort**: with **hostPort**, a connection to the node's port is forwarded directly to the pod running on that node, whereas with a **NodePort** service, a connection to the node's port is forwarded to a randomly selected pod.
- Only one instance of the pod can be scheduled to each node.
- Container's security context.
@@ -974,8 +1012,10 @@ spec:
```
- Security-related features in pods:
- **PodSecurityPolicy** is a cluster-level resource, which defines what security-related features users can or can't use in their pods.
- The **PodSecurityPolicy** resource defines things such as:
- Whether a pod can use the host's IPC, PID, or Network namespaces.
- Which host ports a pod can bind to.
- What user IDs a container can run as.
@@ -998,6 +1038,7 @@ spec:
```
- Isolate the pod network:
- How the network between pods can be secured by limiting which pods can talk to which pods -> this depends on which container networking plugin is used in the cluster -> if the plugin supports it, configure network isolation with **NetworkPolicy** resources.
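- For illustration (pod labels are placeholders), a NetworkPolicy that only allows ingress traffic to `app: backend` pods from pods labelled `app: frontend` in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend          # pods this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only these pods may connect
```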