---
title: Tailscale Operator Deep Dive
description: A comprehensive guide to using the Tailscale Kubernetes Operator for secure networking in Kubernetes clusters
slug: tailscale-operator
date: 2025-01-06 00:00:00+0000
image: tailscale.png
categories:
tags:
- kubernetes
- tailscale
- networking
- security
weight: 1
---

The **Tailscale Kubernetes Operator** enables seamless integration between Kubernetes clusters and Tailscale's secure networking capabilities. In this deep dive, I'll explore how to use the operator to manage Tailscale connectivity in a Kubernetes environment.

Live Kubernetes manifests for this setup can be found in my [GitHub repository](https://github.com/rajsinghtech/kubernetes-manifests/tree/main/clusters/talos-robbinsdale/apps/tailscale).

## How Tailscale Works

Before diving into the operator specifics, it's helpful to understand how Tailscale works. Tailscale creates a secure mesh network that uses WireGuard for encrypted tunnels between nodes. Instead of a traditional hub-and-spoke VPN architecture, Tailscale establishes direct peer-to-peer connections between nodes wherever it can, falling back to DERP (Designated Encrypted Relay for Packets) servers when a direct connection isn't possible.
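
Once nodes are joined, you can see which peers have a direct path and which are being relayed. The stock `tailscale status` command shows this for every peer:

```bash
# Lists peers along with the connection path in use:
# "direct <ip:port>" is a peer-to-peer WireGuard tunnel, while
# "relay <region>" means traffic is bouncing through a DERP server.
tailscale status
```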

## API Server Proxy

One of the most powerful features of the Tailscale Kubernetes Operator is the **API Server Proxy**. This allows you to securely expose your Kubernetes control plane (`kube-apiserver`) over Tailscale, eliminating the need for external management tools like Rancher.

### Setting up API Server Access

In my setup, the API Server Proxy runs in `auth` mode, which allows granular Kubernetes RBAC permissions to be assigned to individual tailnet users or groups.

1. **Configure Tailscale ACLs**

First, we need to set up appropriate access controls in the Tailscale ACLs. The grants below follow the pattern from Tailscale's API server proxy documentation; adjust the `dst` tag to match the tag your operator is deployed with:

```json
{
  "grants": [
    {
      "src": ["autogroup:admin"],
      "dst": ["tag:k8s-operator"],
      "app": {
        "tailscale.com/cap/kubernetes": [{
          "impersonate": {
            "groups": ["system:masters"]
          }
        }]
      }
    },
    {
      "src": ["autogroup:member"],
      "dst": ["tag:k8s-operator"],
      "app": {
        "tailscale.com/cap/kubernetes": [{
          "impersonate": {
            "groups": ["tailnet-readers"]
          }
        }]
      }
    }
  ]
}
```

This configuration:

- Grants admin users (`autogroup:admin`) full cluster access via the `system:masters` group
- Gives regular users (`autogroup:member`) read-only access through the `tailnet-readers` group
- Uses Tailscale's built-in group impersonation for RBAC integration

2. **Set up RBAC for Read-only Access**

Create a `ClusterRoleBinding` for the read-only group:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tailnet-readers
subjects:
- kind: Group
  name: tailnet-readers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view # binding to the built-in read-only "view" ClusterRole
  apiGroup: rbac.authorization.k8s.io
```

3. **Configure kubectl**

Set up your local kubectl configuration, replacing `tailscale-operator.your-tailnet.ts.net` with the MagicDNS name of your Tailscale operator node:

```bash
tailscale configure kubeconfig tailscale-operator.your-tailnet.ts.net
```

Once configured, you can use `kubectl` to securely access your cluster from anywhere in your tailnet:

![Operator Running in K9s](k9s.png)
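
To confirm that the ACL grants map to the intended permissions, `kubectl auth can-i` (standard kubectl) is a quick sanity check:

```bash
# As a tailnet admin (impersonating system:masters), writes are allowed
kubectl auth can-i create deployments --all-namespaces

# As a regular member (mapped to tailnet-readers), reads succeed...
kubectl auth can-i list pods --all-namespaces
# ...but writes should be denied
kubectl auth can-i delete deployments
```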

## Egress Configuration

The operator also supports **egress**, letting pods inside the cluster reach other nodes in your tailnet through Tailscale's secure mesh network.

Here's an example of exposing the tailnet node `robbinsdale.your-tailnet.ts.net` to the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    # Points the proxy at the target node's MagicDNS name
    tailscale.com/tailnet-fqdn: robbinsdale.your-tailnet.ts.net
  name: tailscale-robbinsdale
  namespace: home
spec:
  externalName: placeholder # any value; the operator will overwrite it
  type: ExternalName
```

The operator creates:

1. A StatefulSet running a Tailscale proxy pod
2. A Service that routes traffic through the proxy
3. A new node in your tailnet
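
You can list these objects in the operator's namespace (here `tailscale`, which matches the proxy pod names in the ping output below):

```bash
kubectl get statefulsets,services -n tailscale
```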

![StatefulSet Pod](egress-pod.png)
![Service Updated with the ExternalName Set to the Pod's Name](egress-service.png)
![Tailscale Node Added to the Tailnet as a Proxy](egress-tailscale.png)

From a pod in the cluster, you can verify the connectivity:

```bash
# Test SSH access
root@code-server-5fb56db484-f7wg5:/# ssh root@tailscale-robbinsdale.home

# Test network connectivity
root@code-server-5fb56db484-f7wg5:/# ping tailscale-robbinsdale.home -c 2
PING ts-tailscale-robbinsdale-p4cks.tailscale.svc.cluster.local (10.1.0.3) 56(84) bytes of data.
64 bytes from ts-tailscale-robbinsdale-p4cks-0.ts-tailscale-robbinsdale-p4cks.tailscale.svc.cluster.local (10.1.0.3): icmp_seq=1 ttl=61 time=0.522 ms
64 bytes from ts-tailscale-robbinsdale-p4cks-0.ts-tailscale-robbinsdale-p4cks.tailscale.svc.cluster.local (10.1.0.3): icmp_seq=2 ttl=61 time=0.461 ms

--- ts-tailscale-robbinsdale-p4cks.tailscale.svc.cluster.local ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.461/0.491/0.522/0.030 ms
```

## Subnet Routing

The Tailscale `Connector` resource allows you to advertise cluster subnets to your tailnet, enabling seamless access to cluster resources. Here, the Connector advertises the local, Pod, Service, and LoadBalancer subnets:

```yaml
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: robbinsdale-connector # assumed to match the hostname below
spec:
  hostname: robbinsdale-connector
  subnetRouter:
    advertiseRoutes:
    - "192.168.50.0/24" # Local network
    - "10.0.0.0/16" # Pod CIDR
    - "10.1.0.0/16" # Service CIDR
    - "10.69.0.0/16" # LoadBalancer CIDR
  exitNode: false
```

The Connector creates a Tailscale node that routes traffic between your tailnet and the advertised subnets:

![Connector Pod](connector-pod.png)
![Connector in Tailscale](connector-tailscale.png)

Now you can reach resources in the local network and the Kubernetes cluster from anywhere in the tailnet:

```bash
rajs@macbook:/# ping 192.168.50.1 -c 2
PING 192.168.50.1 (192.168.50.1) 56(84) bytes of data.
64 bytes from 192.168.50.1: icmp_seq=1 ttl=64 time=0.022 ms
64 bytes from 192.168.50.1: icmp_seq=2 ttl=64 time=0.022 ms

--- 192.168.50.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
```

### Auto-approving Routes

To automatically approve routes advertised by nodes tagged `tag:k8s`, add an `autoApprovers` section to your Tailscale ACLs:

```json
"autoApprovers": {
  "routes": {
    "192.168.50.0/24": ["tag:k8s", "autogroup:admin"],
    "10.43.0.0/16": ["tag:k8s", "autogroup:admin"],
    "10.42.0.0/16": ["tag:k8s", "autogroup:admin"],
    "10.96.0.0/16": ["tag:k8s", "autogroup:admin"]
  }
}
```

## Advanced Topics

### Pod Security and Privileges

The Tailscale proxy containers need privileged access to configure networking, so in a namespace enforcing the restricted Pod Security Standard they are rejected by the admission controller:

```
{"level":"info","ts":"2025-01-06T21:27:36Z","logger":"KubeAPIWarningLogger","msg":"would violate PodSecurity \"restricted:latest\": privileged (containers \"sysctler\", \"tailscale\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \"sysctler\", \"tailscale\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"sysctler\", \"tailscale\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or containers \"sysctler\", \"tailscale\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \"sysctler\", \"tailscale\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"}
```

To resolve this, label the namespace to allow privileged workloads:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tailscale
  labels:
    # Relaxes Pod Security admission so the proxy pods can run privileged
    pod-security.kubernetes.io/enforce: privileged
  annotations:
    argocd.argoproj.io/sync-options: Prune=false
```
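
Equivalently, you can apply the standard Pod Security admission label to an existing namespace directly:

```bash
kubectl label namespace tailscale pod-security.kubernetes.io/enforce=privileged
```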

### Network Analysis

Using Hubble, we can observe the Tailscale traffic patterns: UDP flows out to peers on the internet, alongside HTTPS connections to the Tailscale coordination servers:

![Hubble Network Flows](hubble.png)

### NAT Behavior

The Connector pod appears to sit behind an "easy" NAT: UDP connectivity is reported as available, and pinging the connector from outside the cluster never touches a DERP server, so peers connect to it directly:

![NAT Type](nat.png)

```bash
rajs@macbook % tailscale ping robbinsdale-connector
pong from robbinsdale-connector (100.107.45.57) via 67.4.239.75:56786 in 33ms
```

The `pong ... via <ip>:<port>` response confirms successful NAT traversal without a DERP relay in the path.
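
You can gather the same evidence from any client with Tailscale's built-in diagnostics; `tailscale netcheck` reports UDP reachability, whether your NAT mapping varies by destination (the "hard NAT" case), and latency to nearby DERP regions:

```bash
tailscale netcheck
```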

## References

- [How Tailscale Works](https://tailscale.com/blog/how-tailscale-works)
- [Kubernetes Operator Documentation](https://tailscale.com/kb/1236/kubernetes-operator)
- [Kubernetes Operator API Server Proxy](https://tailscale.com/kb/1437/kubernetes-operator-api-server-proxy)
