@@ -126,8 +126,8 @@ Here we use UIDs, but the same applies for GIDs.
inside the container to different IDs in the host. In particular, mapping root
inside the container to unprivileged user and group IDs in the node.
- Increase pod to pod isolation by allowing the use of non-overlapping mappings
- (UIDs/GIDs) whenever possible. IOW, if two containers runs as user X, they run
- as different UIDs in the node and therefore are more isolated than today.
+ (UIDs/GIDs) whenever possible. In other words: if two containers run as user
+ X, they run as different UIDs in the node and therefore are more isolated than today.
- Allow pods to have capabilities (e.g. `CAP_SYS_ADMIN`) that are only valid in
the pod (not valid in the host).
- Benefit from the security hardening that user namespaces provide against some
@@ -288,10 +288,47 @@ message Mount {
}
```

+ The CRI runtime reports which runtime handlers support user
+ namespaces through the `StatusResponse` message, which gains a new
+ field `runtime_handlers`:
+
+ ```
+ message StatusResponse {
+   // Status of the Runtime.
+   RuntimeStatus status = 1;
+   // Info is extra information of the Runtime. The key could be arbitrary string, and
+   // value should be in json format. The information could include anything useful for
+   // debug, e.g. plugins used by the container runtime.
+   // It should only be returned non-empty when Verbose is true.
+   map<string, string> info = 2;
+
+   // Runtime handlers.
+   repeated RuntimeHandler runtime_handlers = 3;
+ }
+ ```
+
+ Where `RuntimeHandler` and `RuntimeHandlerFeatures` are defined as follows:
+
+ ```
+ message RuntimeHandlerFeatures {
+   // supports_user_namespaces is set to true if the runtime handler supports
+   // user namespaces.
+   bool supports_user_namespaces = 1;
+ }
+
+ message RuntimeHandler {
+   // Name must be unique in StatusResponse.
+   // An empty string denotes the default handler.
+   string name = 1;
+   // Supported features.
+   RuntimeHandlerFeatures features = 2;
+ }
+ ```
+
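For illustration, a kubelet-side check against these messages could look roughly like the sketch below. The Go types here are hand-written stand-ins mirroring the protobuf above, not the generated CRI Go bindings, and the helper name is a placeholder.

```
// Sketch only: hand-written stand-ins for the messages above; the real CRI
// Go bindings are generated from the protobuf and may differ in naming.
package main

import "fmt"

type RuntimeHandlerFeatures struct {
	SupportsUserNamespaces bool
}

type RuntimeHandler struct {
	Name     string
	Features *RuntimeHandlerFeatures
}

type StatusResponse struct {
	RuntimeHandlers []*RuntimeHandler
}

// handlerSupportsUserNamespaces reports whether the handler named `handler`
// (empty string means the default handler) advertised user namespace support.
func handlerSupportsUserNamespaces(resp *StatusResponse, handler string) bool {
	for _, h := range resp.RuntimeHandlers {
		if h.Name == handler && h.Features != nil {
			return h.Features.SupportsUserNamespaces
		}
	}
	// Old runtimes never populate runtime_handlers, so the kubelet treats the
	// feature as unsupported and can reject pods that set hostUsers=false.
	return false
}

func main() {
	resp := &StatusResponse{RuntimeHandlers: []*RuntimeHandler{
		{Name: "", Features: &RuntimeHandlerFeatures{SupportsUserNamespaces: true}},
	}}
	fmt.Println(handlerSupportsUserNamespaces(resp, "")) // true
}
```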
### Support for pods

Make pods work with user namespaces. This is activated via the
- bool ` pod.spec.HostUsers ` .
+ bool `pod.spec.hostUsers`.
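For illustration only (this is not part of the proposed change), a client could request a user namespace for a pod by setting this field to false. A minimal sketch using the Go API types (v1.25+); the pod name and image are arbitrary example values:

```
// Minimal sketch: request a user namespace by setting spec.hostUsers=false.
// Pod name and image are arbitrary example values.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func userNamespacedPod() *corev1.Pod {
	hostUsers := false // false = give the pod its own user namespace
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "userns-demo"},
		Spec: corev1.PodSpec{
			HostUsers: &hostUsers,
			Containers: []corev1.Container{
				{Name: "app", Image: "registry.k8s.io/pause:3.9"},
			},
		},
	}
}

func main() {
	p := userNamespacedPod()
	fmt.Printf("pod %q hostUsers=%v\n", p.Name, *p.Spec.HostUsers)
}
```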

The mapping length will be 65536, mapping the range 0-65535 to the pod. This wide
range makes sure most workloads will work fine. Additionally, we don't need to
@@ -403,7 +440,7 @@ If the pod wants to read who is the owner of file `/vol/configmap/foo`, now it
will see the owner is root inside the container. This is due to the ID
transformations that the idmap mount does for us.

- In other words, we can make sure the pod can read files instead of chowning them
+ In other words: we can make sure the pod can read files instead of chowning them
all using the host IDs the pod is mapped to, by just using an idmap mount that
has the same mapping that we use for the pod user namespace.
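To make the effect concrete, below is a purely illustrative sketch of the translation such a 64k mapping defines. The host base value is hypothetical (the kubelet assigns it per pod), and the idmap mount applies the same mapping to file ownership:

```
// Illustration only: the ID translation implied by a 64k mapping such as
// "0 8388608 65536" (container ID, host ID, length). Container root (0) shows
// up on the host as 8388608, and files idmapped with the same mapping keep
// their container-side owner when read from inside the pod.
package main

import "fmt"

const mappingLen = 65536

// toHost translates a container UID/GID to its host counterpart, given the
// host base assigned to the pod (8388608 here is an arbitrary example).
func toHost(containerID, hostBase uint32) (uint32, bool) {
	if containerID >= mappingLen {
		return 0, false // outside the pod's mapping: no host ID exists for it
	}
	return hostBase + containerID, true
}

func main() {
	const hostBase = 8388608
	hostID, _ := toHost(0, hostBase)
	fmt.Printf("container uid 0 -> host uid %d\n", hostID)
}
```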
@@ -469,7 +506,7 @@ something else to this list:
- What about Windows or VM container runtimes, which don't use Linux namespaces?
We need a review from Windows maintainers once we have a clearer proposal.
We can then adjust the needed details; we don't expect the changes (if any) to be big.
- IOW, in my head this looks like this: we merge this KEP in provisional state if
+ In my head this looks like this: we merge this KEP in provisional state if
we agree on the high-level idea, with @giuseppe we do a PoC so we can fill in
more details to the KEP (like CRI changes, changes to container runtimes, how to
configure kubelet ranges, etc.), and then the Windows folks can review and we
@@ -593,6 +630,7 @@ use container runtime versions that have the needed changes.

- Gather and address feedback from the community
- Be able to configure UID/GID ranges to use for pods
+ - This feature is not supported on Windows.
- Get review from VM container runtime maintainers (not blocker, as VM runtimes should just ignore
the field, but nice to have)

@@ -603,6 +641,15 @@ use container runtime versions that have the needed changes.

### Upgrade / Downgrade Strategy

+ Existing pods will keep working as intended, as the new field is not set on them.
+
+ Upgrades will not change any current behavior.
+
+ If the new functionality hasn't been used yet, downgrades are not affected.
+
+ Versions of Kubernetes that don't have this feature implemented will
+ ignore and strip out the new field `pod.spec.hostUsers`.
+

### Version Skew Strategy

<!--
@@ -635,11 +682,12 @@ doesn't create them. The runtime can detect this situation as the `user` field
in the `NamespaceOption` will be seen as nil, [thanks to
protobuf][proto3-defaults]. We already tested this with real code.

- Old runtime and new kubelet: containers are created without userns. As the
- ` user ` field of the ` NamespaceOption ` message is not part of the runtime
- protofiles, that part is ignored by the runtime and pods are created using the
- host userns.
+ Old runtime and new kubelet: the runtime won't report that it supports
+ user namespaces through the `StatusResponse` message, so the kubelet
+ will detect that and fail when such a request is made.

+ We added unit tests with the feature gate disabled, and integration
+ tests with the feature gate both enabled and disabled.

[proto3-defaults]: https://developers.google.com/protocol-buffers/docs/proto3#default

@@ -686,7 +734,7 @@ well as the [existing list] of feature gates.
-->

- [x] Feature gate (also fill in values in `kep.yaml`)
- - Feature gate name: UserNamespacesPodsSupport
+ - Feature gate name: UserNamespacesSupport
- Components depending on the feature gate: kubelet, kube-apiserver

###### Does enabling the feature change any default behavior?
@@ -733,7 +781,7 @@ Pods will have to be re-created to use the feature.

We will add.

- We will test for when the field pod.spec.HostUsers is set to true, false
+ We will test for when the field pod.spec.hostUsers is set to true, false
and not set. All of this with and without the feature gate enabled.

We will also unit test that, if pods were created with the new field
@@ -766,7 +814,7 @@ The rollout is just a feature flag on the kubelet and the kube-apiserver.
If one API server is upgraded while others aren't, the pod will be accepted (if the apiserver is >=
1.25). If it is scheduled to a node whose kubelet has the feature flag activated and the node
meets the requirements to use user namespaces, then the pod will be created with the namespace. If
- it is scheduled to a node that has the feature disabled, it will be scheduled without the user
+ it is scheduled to a node that has the feature disabled, it will be created without the user
namespace.

On a rollback, pods created while the feature was active (created with user namespaces) will have to
@@ -787,7 +835,7 @@ will rollout across nodes.

On the Kubernetes side, the kubelet should start correctly.

- On the node runtime side, a pod created with pod.spec.HostUsers =false should be on RUNNING state if
+ On the node runtime side, a pod created with pod.spec.hostUsers=false should be in RUNNING state if
all node requirements are met.
<!--
What signals should users be paying attention to when the feature is young
@@ -798,7 +846,7 @@ that might indicate a serious problem?

Yes.

- We tested to enable the feature flag, create a deployment with pod.spec.HostUsers =false, and then disable
+ We tested enabling the feature flag, creating a deployment with pod.spec.hostUsers=false, and then disabling
the feature flag and restarting the kubelet and kube-apiserver.

After that, we deleted the deployment pods (not the deployment object); the pods were re-created
@@ -830,7 +878,7 @@ previous answers based on experience in the field.

###### How can an operator determine if the feature is in use by workloads?

- Check if any pod has the pod.spec.HostUsers field set to false.
+ Check if any pod has the pod.spec.hostUsers field set to false.
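For example (not mandated by this KEP), an operator could audit this with client-go. The sketch below assumes a reachable cluster, a kubeconfig in the default location, and API types from v1.25 or newer:

```
// Sketch: list pods in all namespaces and report the ones that opted into
// user namespaces (spec.hostUsers=false).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Spec.HostUsers != nil && !*p.Spec.HostUsers {
			fmt.Printf("%s/%s uses a user namespace\n", p.Namespace, p.Name)
		}
	}
}
```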
<!--
Ideally, this should be a metric. Operations against the Kubernetes API (e.g.,
checking if there are objects with field X set) may be a last resort. Avoid
@@ -839,7 +887,7 @@ logs or events for this purpose.

###### How can someone using this feature know that it is working for their instance?

- Check if any pod has the pod.spec.HostUsers field set to false and is on RUNNING state on a node
+ Check if any pod has the pod.spec.hostUsers field set to false and is in RUNNING state on a node
that meets all the requirements.

There are step-by-step examples in the Kubernetes documentation too.
@@ -859,7 +907,7 @@ Recall that end users cannot usually observe component logs or access metrics.
- Condition name:
- Other field:
- [x] Other (treat as last resort)
- - Details: check pods with pod.spec.HostUsers field set to false, and see if they are in RUNNING
+ - Details: check pods with pod.spec.hostUsers field set to false, and see if they are in RUNNING
state.

###### What are the reasonable SLOs (Service Level Objectives) for the enhancement?
@@ -1135,7 +1183,7 @@ No changes to current kubelet behaviors. The feature only uses kubelet-local inf
- Mitigations: What can be done to stop the bleeding, especially for already
running user workloads?

- Remove the pod.spec.HostUsers field or disable the feature gate.
+ Remove the pod.spec.hostUsers field or disable the feature gate.

- Diagnostics: What are the useful log messages and their required logging
levels that could help debug the issue?
@@ -1183,7 +1231,7 @@ No changes to current kubelet behaviors. The feature only uses kubelet-local inf
- Mitigations: What can be done to stop the bleeding, especially for already
running user workloads?

- Remove the pod.spec.HostUsers field or disable the feature gate.
+ Remove the pod.spec.hostUsers field or disable the feature gate.

- Diagnostics: What are the useful log messages and their required logging
levels that could help debug the issue?
@@ -1217,7 +1265,7 @@ writing to this file.
- Mitigations: What can be done to stop the bleeding, especially for already
running user workloads?

- Remove the pod.spec.HostUsers field or disable the feature gate.
+ Remove the pod.spec.hostUsers field or disable the feature gate.

- Diagnostics: What are the useful log messages and their required logging
levels that could help debug the issue?
@@ -1233,12 +1281,11 @@ writing to this file.
There are no tests for failures to read or write the file; the code paths just return the errors
in those cases.

-
- Error getting the kubelet IDs range configuration
- Detection: How can it be detected via metrics? Stated another way:
how can an operator troubleshoot without logging into a master or worker node?

- In this case the Kubelet will fail to start with a clear error message.
+ In this case the kubelet will fail to start with a clear error message.

- Mitigations: What can be done to stop the bleeding, especially for already
running user workloads?
@@ -1369,21 +1416,23 @@ The issues without idmap mounts in previous iterations of this KEP, is that the
pod had to be unique for every pod in the cluster, easily reaching a limit when the cluster is "big
enough" and the UID space runs out. However, with idmap mounts the IDs assigned to a pod just need
to be unique within the node (and with 64k ranges we have 64k pods possible in the node, so not
- really an issue). IOW, by using idmap mounts, we changed the IDs limit to be node-scoped instead of
- cluster-wide/cluster-scoped.
+ really an issue). In other words: by using idmap mounts, we changed the IDs limit to be node-scoped
+ instead of cluster-wide/cluster-scoped.
+
+ Some use cases for longer mappings include:

There are no use cases for longer mappings that we know of. The 16-bit range (0-65535) is what
is assumed by all POSIX tools that we are aware of. If the need arises, longer mappings can be
considered in a future KEP.

- ### Allow runtimes to pick the mapping?
+ ### Allow runtimes to pick the mapping

Tim suggested that we might want to allow the container runtimes to choose the
mapping and have different runtimes pick different mappings. While KEP authors
disagree on this, we still need to discuss it and settle on something. This was
[raised here](https://github.com/kubernetes/enhancements/pull/3065#discussion_r798760382).

- Furthermore, the reasons mentioned by Tim (some nodes having CRIO, some others having containerd,
+ Furthermore, the reasons mentioned by Tim Hockin (some nodes having CRI-O, some others having containerd,
etc.) are handled correctly now. Different nodes can use different container runtimes; if a custom
range needs to be used by the kubelet, that can be configured per-node.
