
Commit 8b8f5d5

Merge pull request #52613 from kubernetes/dev-1.35
Add official 1.35 release docs
2 parents: 2fd1937 + 74d011f

File tree

114 files changed: +2940 additions, -458 deletions


content/en/docs/concepts/architecture/cgroups.md

Lines changed: 10 additions & 0 deletions
@@ -124,6 +124,16 @@ For cgroup v2, the output is `cgroup2fs`.
 
 For cgroup v1, the output is `tmpfs`.
 
+## Deprecation of cgroup v1
+
+{{< feature-state for_k8s_version="v1.35" state="deprecated" >}}
+
+Kubernetes has deprecated cgroup v1.
+Removal will follow the [Kubernetes deprecation policy](/docs/reference/using-api/deprecation-policy/).
+
+The kubelet will no longer start on a cgroup v1 node by default.
+To disable this behavior, a cluster admin should set `failCgroupV1` to `false` in the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
+
 ## {{% heading "whatsnext" %}}
 
 - Learn more about [cgroups](https://man7.org/linux/man-pages/man7/cgroups.7.html)
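The new `failCgroupV1` setting lives in the kubelet configuration file; as a minimal sketch (only `failCgroupV1` comes from this change, the surrounding fields are standard KubeletConfiguration boilerplate):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Allow the kubelet to keep starting on cgroup v1 nodes,
# even though cgroup v1 is deprecated as of v1.35.
failCgroupV1: false
```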

content/en/docs/concepts/architecture/garbage-collection.md

Lines changed: 2 additions & 4 deletions
@@ -144,9 +144,7 @@ until disk usage reaches the `LowThresholdPercent` value.
 
 #### Garbage collection for unused container images {#image-maximum-age-gc}
 
-{{< feature-state feature_gate_name="ImageMaximumGCAge" >}}
-
-As a beta feature, you can specify the maximum time a local image can be unused for,
+You can specify the maximum time a local image can be unused for,
 regardless of disk usage. This is a kubelet setting that you configure for each node.
 
 To configure the setting, you need to set a value for the `imageMaximumGCAge`
@@ -207,4 +205,4 @@ configure garbage collection:
 
 * Learn more about [ownership of Kubernetes objects](/docs/concepts/overview/working-with-objects/owners-dependents/).
 * Learn more about Kubernetes [finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
-* Learn about the [TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) that cleans up finished Jobs.
+* Learn about the [TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) that cleans up finished Jobs.
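The `imageMaximumGCAge` field named in this hunk is set in the kubelet configuration file; a minimal sketch (the 24h value is illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Garbage-collect a local image once it has been unused for this long,
# regardless of disk usage. The duration shown here is illustrative.
imageMaximumGCAge: 24h
```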

content/en/docs/concepts/architecture/mixed-version-proxy.md

Lines changed: 44 additions & 33 deletions
@@ -12,20 +12,25 @@ weight: 220
 
 Kubernetes {{< skew currentVersion >}} includes an alpha feature that lets an
 {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}
-proxy a resource requests to other _peer_ API servers. This is useful when there are multiple
+proxy resource requests to other _peer_ API servers. It also lets clients get
+a holistic view of resources served across the entire cluster through discovery.
+This is useful when there are multiple
 API servers running different versions of Kubernetes in one cluster
 (for example, during a long-lived rollout to a new release of Kubernetes).
 
 This enables cluster administrators to configure highly available clusters that can be upgraded
-more safely, by directing resource requests (made during the upgrade) to the correct kube-apiserver.
-That proxying prevents users from seeing unexpected 404 Not Found errors that stem
-from the upgrade process.
+more safely, by:
 
-This mechanism is called the _Mixed Version Proxy_.
+1. ensuring that controllers relying on discovery to show a comprehensive list of resources
+   for important tasks always get the complete view of all resources. This complete, cluster-wide
+   discovery is called _peer-aggregated discovery_.
+1. directing resource requests (made during the upgrade) to the correct kube-apiserver.
+   This proxying prevents users from seeing unexpected 404 Not Found errors that stem
+   from the upgrade process. This mechanism is called the _Mixed Version Proxy_.
 
-## Enabling the Mixed Version Proxy
+## Enabling Peer-aggregated Discovery and Mixed Version Proxy
 
-Ensure that the `UnknownVersionInteroperabilityProxy` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+Ensure that the `UnknownVersionInteroperabilityProxy` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/#UnknownVersionInteroperabilityProxy)
 is enabled when you start the {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}:
 
 ```shell
@@ -67,6 +72,25 @@ If these flags are unspecified, peers will use the value from either `--advertis
 `--bind-address` command line argument to the kube-apiserver.
 If those, too, are unset, the host's default interface is used.
 
+## Peer-aggregated discovery
+
+When you enable the feature, discovery requests serve
+a comprehensive discovery document (listing all resources served by any API server in the cluster)
+by default.
+
+If you would like to request
+a non-peer-aggregated discovery document, indicate so by adding the following Accept header to the discovery request:
+
+```
+application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList;profile=nopeer
+```
+
+{{< note >}}
+Peer-aggregated discovery is only supported
+for [Aggregated Discovery](/docs/concepts/overview/kubernetes-api/#aggregated-discovery) requests
+to the `/apis` endpoint and not for [Unaggregated (Legacy) Discovery](/docs/concepts/overview/kubernetes-api/#unaggregated-discovery) requests.
+{{< /note >}}
+
 ## Mixed version proxying
 
 When you enable mixed version proxying, the [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
@@ -82,29 +106,16 @@ loads a special filter that does the following:
 ### How it works under the hood
 
 When an API Server receives a resource request, it first checks which API servers can
-serve the requested resource. This check happens using the internal
-[`StorageVersion` API](/docs/reference/generated/kubernetes-api/v{{< skew currentVersion >}}/#storageversioncondition-v1alpha1-internal-apiserver-k8s-io).
-
-* If the resource is known to the API server that received the request
-  (for example, `GET /api/v1/pods/some-pod`), the request is handled locally.
-
-* If there is no internal `StorageVersion` object found for the requested resource
-  (for example, `GET /my-api/v1/my-resource`) and the configured APIService specifies proxying
-  to an extension API server, that proxying happens following the usual
-  [flow](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) for extension APIs.
-
-* If a valid internal `StorageVersion` object is found for the requested resource
-  (for example, `GET /batch/v1/jobs`) and the API server trying to handle the request
-  (the _handling API server_) has the `batch` API disabled, then the _handling API server_
-  fetches the peer API servers that do serve the relevant API group / version / resource
-  (`api/v1/batch` in this case) using the information in the fetched `StorageVersion` object.
-  The _handling API server_ then proxies the request to one of the matching peer kube-apiservers
-  that are aware of the requested resource.
-
-* If there is no peer known for that API group / version / resource, the handling API server
-  passes the request to its own handler chain which should eventually return a 404 ("Not Found") response.
-
-* If the handling API server has identified and selected a peer API server, but that peer fails
-  to respond (for reasons such as network connectivity issues, or a data race between the request
-  being received and a controller registering the peer's info into the control plane), then the handling
-  API server responds with a 503 ("Service Unavailable") error.
+serve the requested resource. This check happens using the non-peer-aggregated discovery document.
+
+* If the resource is listed in the non-peer-aggregated discovery document retrieved from the API server
+  that received the request (for example, `GET /api/v1/pods/some-pod`), the request is handled locally.
+
+* If the resource in a request (for example, `GET /apis/resource.k8s.io/v1beta1/resourceclaims`) is not found
+  in the non-peer-aggregated discovery document retrieved from the API server trying to handle the request
+  (the _handling API server_), likely because the `resource.k8s.io/v1beta1` API was introduced in a newer
+  Kubernetes version and the _handling API server_ is running an older version that does not support it,
+  then the _handling API server_ checks the non-peer-aggregated discovery documents from all peer API servers
+  to find the peers that do serve the relevant API group / version / resource
+  (`resource.k8s.io/v1beta1/resourceclaims` in this case). The _handling API server_ then proxies the request
+  to one of the matching peer kube-apiservers that are aware of the requested resource.
+
+* If there is no peer known for that API group / version / resource, the handling API server
+  passes the request to its own handler chain which should eventually return a 404 ("Not Found") response.
+
+* If the handling API server has identified and selected a peer API server, but that peer fails
+  to respond (for reasons such as network connectivity issues, or a data race between the request
+  being received and a controller registering the peer's info into the control plane), then the handling
+  API server responds with a 503 ("Service Unavailable") error.

content/en/docs/concepts/containers/images.md

Lines changed: 20 additions & 4 deletions
@@ -221,7 +221,7 @@ the kubelet will pull the images in parallel on behalf of the two different Pods
 
 ### Maximum parallel image pulls
 
-{{< feature-state for_k8s_version="v1.32" state="beta" >}}
+{{< feature-state for_k8s_version="v1.35" state="stable" >}}
 
 When `serializeImagePulls` is set to false, the kubelet defaults to no limit on
 the maximum number of images being pulled at the same time. If you would like to
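The behavior described in this hunk is set in the kubelet configuration file; a minimal sketch, assuming the standard `maxParallelImagePulls` KubeletConfiguration field (not shown in the truncated hunk above; the value 5 is illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Pull images in parallel rather than one at a time.
serializeImagePulls: false
# Cap the number of simultaneous image pulls; illustrative value.
maxParallelImagePulls: 5
```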
@@ -409,7 +409,7 @@ on images hosted in a private registry.
 Access to pre-pulled images may be authorized according to [image pull credential verification](#ensureimagepullcredentialverification).
 {{< /note >}}
 
-#### Ensure image pull credential verification {#ensureimagepullcredentialverification}
+### Ensure image pull credential verification {#ensureimagepullcredentialverification}
 
 {{< feature-state feature_gate_name="KubeletEnsureSecretPulledImages" >}}

@@ -446,7 +446,23 @@ will continue to verify without the need to access the registry. New or rotated
 will require the image to be re-pulled from the registry.
 {{< /note >}}
 
-#### Creating a Secret with a Docker config
+#### Enabling `KubeletEnsureSecretPulledImages` for the first time
+
+When `KubeletEnsureSecretPulledImages` is enabled for the first time, either
+by a kubelet upgrade or by explicitly enabling the feature, any images the kubelet
+can access at that time will all be considered pre-pulled. This happens
+because in that case the kubelet has no records of the images being pulled.
+The kubelet can only start making image pull records as each image is
+pulled for the first time.
+
+If this is a concern, it is advised to remove all images that should not
+be considered pre-pulled from a node before enabling the feature.
+
+Note that removing the directory holding the image pull records has the same
+effect on kubelet restart: in particular, the images currently cached on the node by
+the container runtime will all be considered pre-pulled.
+
+### Creating a Secret with a Docker config
 
 You need to know the username, registry password and client email address for authenticating
 to the registry, as well as its hostname.
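As a sketch, the Secret produced for a Docker config has the following shape (the name and the base64 payload below are placeholders; the `kubernetes.io/dockerconfigjson` type and `.dockerconfigjson` key are the standard form for registry credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-registry-credentials   # placeholder name
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config JSON; this placeholder encodes {"auths":{}}
  .dockerconfigjson: eyJhdXRocyI6e319
```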
@@ -514,7 +530,7 @@ for detailed instructions.
 You can use this in conjunction with a per-node `.docker/config.json`. The credentials
 will be merged.
 
-## Use cases
+### Use cases
 
 There are a number of solutions for configuring private registries. Here are some
 common use cases and suggested solutions.

content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md

Lines changed: 6 additions & 0 deletions
@@ -266,6 +266,12 @@ Installing an Aggregated API server always involves running a new Deployment.
 Custom resources consume storage space in the same way that ConfigMaps do. Creating too many
 custom resources may overload your API server's storage space.
 
+Custom resources are placed into storage based upon the current storage
+version of the resource, defined in the CRD spec. Any update to a custom
+resource will use the currently defined storage version to store the resource.
+All other versions either need to have all the fields of that version or define
+conversions to work properly.
+
 Aggregated API servers may use the same storage as the main API server, in which case the same
 warning applies.

Lines changed: 167 additions & 0 deletions
@@ -0,0 +1,167 @@ (new file)
---
title: Storage Versions
content_type: concept
weight: 110
---

<!-- overview -->
The Kubernetes API server stores objects, relying on an etcd-compatible backing
store (often, the backing storage is etcd itself). Each object is serialized
using a particular version of that API type; for example, the v1 representation
of a ConfigMap. Kubernetes uses the term _storage version_ to describe how an
object is stored in your cluster.

The Kubernetes API also relies on automatic conversion; for example, if you have
a HorizontalPodAutoscaler, then you can interact with that
HorizontalPodAutoscaler using any mix of the v1 and v2 versions of the
HorizontalPodAutoscaler API. Kubernetes is responsible for converting each API
call so that clients do not need to know which version is actually serialized.

For cluster administrators, object storage version is an important concept to
understand, since it is what links the API representation of the object to the
actual encoding in the storage backend. This matters whenever the
underlying binary encoding of the object is significant, such as for encryption at
rest or API deprecation.

The same API may have multiple storage versions, each of which the API server can
convert to the object schema. A single object that is part of that resource must
only have one storage version at any time. This means that the API server is
aware of the binary encodings of the objects and is able to convert between all
the stored versions and the API representation of the object dynamically.

The version of an object is entirely separate from its storage version. For
example, a `v1alpha1` and a `v1beta1` API object for the same resource will be
encoded the same way in storage, as long as the storage version has not been updated
between the two objects.

<!-- body -->

## Storage version to resource mapping

Every resource has one active storage version at any point in time, meaning
that any write to an object will store the object at that storage version.
However, the storage version can be updated, so different objects may be stored
at differing versions. One object will only be stored at one storage version at
any time.

Reads from the API server convert the stored data to the API representation
of the object. This means that data at an old storage version can sit indefinitely,
as long as no updates occur to the object. Writes, on the other hand,
convert the stored object to the new representation upon update.

## Storage versions for custom resources {#CustomResourceDefinition-storage-version}

[Custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/#storage) are
defined dynamically, and as such differ from built-in Kubernetes types with
respect to their storage version. Built-in objects generally have their storage encoding
defined separately from their API types, where the stored object acts as a hub
and the specific version of the resource does not matter apart from being a
field in the object schema.

However, for custom resources, a certain version of the resource must be set as
the storage version. The schema defined by that specific version of the custom
resource is used as the encoding of the resource in the storage layer. See
the [advanced CRD featureset](/docs/concepts/extend-kubernetes/api-extension/custom-resources/#advanced-features-and-flexibility)
for more detailed information on the API setup and versioning.

For example, see this CustomResourceDefinition for _crontabs_:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com
spec:
  group: example.com
  # list of versions supported by this CustomResourceDefinition
  versions:
  - name: v1beta1
    # Each version can be enabled/disabled by Served flag.
    served: true
    # One and only one version must be marked as the storage version.
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          host:
            type: string
          port:
            type: string
  - name: v1
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          host:
            type: string
          port:
            type: string
          time:
            type: string
  conversion:
    strategy: None
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
```

The `v1beta1` API definition is used as the storage version, meaning that any
update or creation of `crontabs` will be stored with the object schema of the
`v1beta1` API. In this case, that means the `v1` API object
would never be able to store the `time` field, since it is not part of the
storage definition. This schema is used in the storage layer as the binary
encoding of the object itself. Trying to set two versions as the storage version
at the same time is considered invalid, since that would mean that two data
schemas would be considered valid ways to store the objects at the same time.
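One way to see which storage versions have actually been written to the backing store is the CRD's status. As a sketch, the relevant fragment for the example above might look like this (the `storedVersions` field is part of the CustomResourceDefinition status; its exact contents depend on the cluster's history):

```yaml
# Illustrative fragment of a CustomResourceDefinition's status
status:
  storedVersions:
  - v1beta1   # objects were written while v1beta1 was the storage version
```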
Upon modification of the version that is used for storage, the new version of the
API is used to store any newly created or updated custom resources. Watching or
getting an object just converts the object from the old storage version on the
fly, and does not affect what is stored. Only updating or creating an object
re-stores it at the newly defined storage version.

## How storage versions are relevant to encryption at rest

There are tools to [encrypt the at-rest storage](/docs/tasks/administer-cluster/kms-provider/) of a cluster, especially
for cluster secrets. This adds an additional layer of protection against data
exfiltration, since the actual stored data in the cluster is encrypted. This
means that the API server decrypts the data as it retrieves it
from storage. The API server must have the key for that
storage version in order to decode the object properly.

The storage version in this case is more than just the binary encoding of the
object. As long as what is stored can be somehow converted into the API object,
it can be used as a storage version.

## Migrating to a different storage version

Multiple storage versions for a single resource can pose problems for cluster
administrators. A cluster administrator may not remove old, possibly unsupported
versions of an API for CRDs until they are sure that no objects still use the
storage version associated with them. With a large number of
objects and no clear view into which ones are new and which ones are still
backed by old storage versions, it is difficult to tell when a version can
be safely removed. If a version is removed prematurely, it can mean being unable
to read the objects entirely.

Another important issue is the use of encryption keys as described in the section
above. Since an object must be written to in order to update its storage version,
when a key rotation is done, both the old encryption key and the new encryption
key must remain in use until the administrator is sure all objects have been
written at least once. This poses both security risks and usability issues,
since a key cannot be fully removed from use until then.

See [storage version migration](/docs/tasks/manage-kubernetes-objects/storage-version-migration) for
examples of how to run a migration to ensure that all objects are using a newer
storage version without manual intervention.
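A migration can be requested declaratively. As a sketch, assuming the `storagemigration.k8s.io` API described on the linked task page, a migration request for the crontabs example might look like this (the object name is a placeholder):

```yaml
apiVersion: storagemigration.k8s.io/v1alpha1
kind: StorageVersionMigration
metadata:
  name: crontab-storage-migration   # placeholder name
spec:
  resource:
    # the resource whose stored objects should be rewritten
    # at the current storage version
    group: example.com
    version: v1
    resource: crontabs
```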

content/en/docs/concepts/policy/node-resource-managers.md

Lines changed: 2 additions & 2 deletions
@@ -203,9 +203,9 @@ listed in alphabetical order:
 `full-pcpus-only` (GA, visible by default)
 : Always allocate full physical cores (available since Kubernetes v1.22, GA since Kubernetes v1.33)
 
-`strict-cpu-reservation` (beta, visible by default)
+`strict-cpu-reservation` (GA, visible by default)
 : Prevent all pods, regardless of their Quality of Service class, from running on reserved CPUs
-  (available since Kubernetes v1.32)
+  (available since Kubernetes v1.32, GA since Kubernetes v1.35)
 
 `prefer-align-cpus-by-uncorecache` (beta, visible by default)
 : Align CPUs by uncore (Last-Level) cache boundary on a best-effort way
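As a sketch, assuming the standard CPU manager fields of KubeletConfiguration, enabling the `strict-cpu-reservation` policy option might look like this (CPU values are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  strict-cpu-reservation: "true"
# CPUs reserved for system daemons; with strict-cpu-reservation
# enabled, all pods are kept off these CPUs. Illustrative value.
reservedSystemCPUs: "0,1"
```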
