description: A comprehensive guide to using the Tailscale Kubernetes Operator for secure networking in Kubernetes clusters
slug: tailscale-operator
date: 2025-01-06 00:00:00+0000
image: tailscale.png
tags:
- kubernetes
- tailscale
- networking
- security
weight: 1
---

The **Tailscale Kubernetes Operator** enables seamless integration between Kubernetes clusters and Tailscale's secure networking capabilities. In this deep dive, I'll explore how to use the operator to manage Tailscale connectivity in a Kubernetes environment.

Live Kubernetes manifests for this setup can be found in my [GitHub repository](https://github.com/rajsinghtech/kubernetes-manifests/tree/main/clusters/talos-robbinsdale/apps/tailscale).

## How Tailscale Works

Before diving into the operator specifics, it's helpful to understand how Tailscale works. Tailscale creates a secure mesh network, using WireGuard for encrypted tunnels between nodes. Instead of a traditional hub-and-spoke VPN architecture, Tailscale establishes direct peer-to-peer connections between nodes where possible, falling back to DERP (Designated Encrypted Relay for Packets) servers when a direct path can't be established.
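
You can check how any given machine fits into this picture using Tailscale's built-in diagnostics:

```sh
# Report this node's NAT type, UDP reachability, and nearest DERP relays.
tailscale netcheck
```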

## API Server Proxy

One of the most powerful features of the Tailscale Kubernetes Operator is the **API Server Proxy**. This allows you to securely expose your Kubernetes control plane (`kube-apiserver`) over Tailscale, eliminating the need for external management tools like Rancher.

In my case, I have the API Server Proxy enabled and configured in `auth` mode, which lets me grant granular Kubernetes RBAC permissions to individual tailnet users or groups.
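
As a sketch, with the official `tailscale-operator` Helm chart this is a single values entry (shown per Tailscale's documented chart values; adjust if you install the operator another way):

```yaml
# Helm values for the tailscale-operator chart.
# "true" runs the API Server Proxy in auth mode;
# "noauth" would expose it without RBAC impersonation.
apiServerProxyConfig:
  mode: "true"
```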

1. **Configure Tailscale ACLs**

First, we need to set up appropriate access controls in the Tailscale ACLs. The grants below follow Tailscale's documented syntax and assume the operator carries its default `tag:k8s-operator` tag:

```json
{
  "grants": [
    {
      // Tailnet admins may impersonate system:masters for full access.
      "src": ["autogroup:admin"],
      "dst": ["tag:k8s-operator"],
      "app": {
        "tailscale.com/cap/kubernetes": [{
          "impersonate": {
            "groups": ["system:masters"]
          }
        }]
      }
    },
    {
      // Regular members are mapped to the read-only tailnet-readers group.
      "src": ["autogroup:member"],
      "dst": ["tag:k8s-operator"],
      "app": {
        "tailscale.com/cap/kubernetes": [{
          "impersonate": {
            "groups": ["tailnet-readers"]
          }
        }]
      }
    }
  ]
}
```

This configuration:

- Grants admin users (`autogroup:admin`) full cluster access via the `system:masters` group
- Gives regular users (`autogroup:member`) read-only access through the `tailnet-readers` group
- Uses Tailscale's built-in group impersonation for RBAC integration

2. **Set up RBAC for Read-only Access**

Create a `ClusterRoleBinding` that binds the `tailnet-readers` group to the built-in read-only `view` role (the binding name here is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tailnet-readers-view
roleRef:
  # The built-in "view" ClusterRole allows reading most namespaced resources.
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    name: tailnet-readers
    apiGroup: rbac.authorization.k8s.io
```
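
Once the binding is applied, the effective permissions can be sanity-checked with impersonation (`dev-user` is a placeholder identity):

```sh
# Confirm the tailnet-readers group can read but not mutate resources.
kubectl auth can-i list pods --as=dev-user --as-group=tailnet-readers    # expect: yes
kubectl auth can-i delete pods --as=dev-user --as-group=tailnet-readers  # expect: no
```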

3. **Configure your kubeconfig to use the Tailscale API Server Proxy:**
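
Tailscale's CLI can generate this kubeconfig entry for you; the argument is whatever hostname the operator registered in your tailnet (`tailscale-operator` is the default):

```sh
# Add a kubeconfig context pointing kubectl at the API Server Proxy.
tailscale configure kubeconfig tailscale-operator

# Verify: what you see now depends on your tailnet user's RBAC mapping.
kubectl get pods -A
```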

### Pod Security and Privileges

The Tailscale operator requires privileged access to configure networking. Within my cluster, I noticed the operator's pods were being denied the ability to run privileged containers by Pod Security admission:

```
{"level":"info","ts":"2025-01-06T21:27:36Z","logger":"KubeAPIWarningLogger","msg":"would violate PodSecurity \"restricted:latest\": privileged (containers \"sysctler\", \"tailscale\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \"sysctler\", \"tailscale\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"sysctler\", \"tailscale\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or containers \"sysctler\", \"tailscale\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \"sysctler\", \"tailscale\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"}
```

To resolve this, I updated the namespace to allow privileged containers:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  # The namespace name is from my setup; use wherever your operator runs.
  name: tailscale
  labels:
    # Pod Security Admission label permitting privileged pods.
    pod-security.kubernetes.io/enforce: privileged
  annotations:
    argocd.argoproj.io/sync-options: Prune=false
```
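
The same result can be applied imperatively, assuming the operator runs in the `tailscale` namespace:

```sh
# Label the namespace so Pod Security admission allows privileged pods.
kubectl label namespace tailscale \
  pod-security.kubernetes.io/enforce=privileged --overwrite
```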
### Network Analysis

Using Hubble, we can observe the Tailscale traffic patterns: UDP flows out to the wider internet, as well as HTTPS connections to the Tailscale coordination servers.

![Hubble flows showing Tailscale traffic](hubble.png)

### NAT Behavior

The Connector pod appears to operate in EasyNAT mode, enabling direct UDP connections when possible. From the ping output, UDP connectivity is reported as available, and pings to the Connector from outside the cluster never traverse a DERP server. This means connections are made directly rather than relayed, so the pod is likely not behind HardNAT.
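
This can be verified from any other device on the tailnet (the Connector's hostname here is a placeholder):

```sh
# A reply "via <ip>:<port>" indicates a direct WireGuard path;
# "via DERP(<region>)" would indicate relayed traffic, typical of HardNAT.
tailscale ping k8s-connector
```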