1 change: 1 addition & 0 deletions modules/rn-ocp-release-notes-fixed-issues.adoc
@@ -114,6 +114,7 @@ To maintain compatibility with Kubernetes 1.34, CoreDNS has been updated to vers
[id="rn-ocp-release-note-node-fixed-issues_{context}"]
== Node

* Before this update, the maximum open files soft limit was reduced in {product-title} builds. As a consequence, containers ran with a lower maximum open files limit, causing application failures. With this release, the CRI-O configuration has been updated to restore the previous limit. As a result, containers again run with the higher maximum open files limit, and applications that require it function as expected. (link:https://issues.redhat.com/browse/OCPBUGS-62095[OCPBUGS-62095])
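To confirm the fix, you can compare the limits a container actually receives. A minimal sketch, assuming a POSIX shell is available in the container image; when run inside a container, the reported values reflect what the runtime (CRI-O) configured:

```shell
# Report the soft and hard open-files (nofile) limits for the current
# process; inside a container these are the limits CRI-O applied.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=${soft} hard=${hard}"
```

For example, `oc exec <pod> -- sh -c 'ulimit -Sn'` (pod name is a placeholder) runs the same check against a running pod.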

[id="rn-ocp-release-note-node-tuning-operator-fixed-issues_{context}"]
== Node Tuning Operator
2 changes: 0 additions & 2 deletions modules/rn-ocp-release-notes-known-issues.adoc
@@ -24,8 +24,6 @@ where:
+
For information on the oc-mirror v2 plugin, see _Mirroring images for a disconnected installation by using the oc-mirror plugin v2_.

* Starting with {product-title} 4.21, there is a decrease in the default maximum open files soft limit for containers. As a consequence, end users might experience application failures. To work around this problem, increase the container runtime's (CRI-O) `ulimit` configuration by using a method of your choice, such as the `ulimit` command. (link:https://issues.redhat.com/browse/OCPBUGS-62095[OCPBUGS-62095])
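One way to apply the workaround at the runtime level is CRI-O's `default_ulimits` option, which sets per-container ulimits at create time. A minimal sketch, assuming the standard `/etc/crio/crio.conf.d/` drop-in layout; the file name and limit values are illustrative, and on {product-title} such settings are normally delivered through a `ContainerRuntimeConfig` or `MachineConfig` object rather than by editing node files directly:

```toml
# Illustrative CRI-O drop-in, e.g. /etc/crio/crio.conf.d/99-nofile.conf
# (path and values are examples, not recommendations).
[crio.runtime]
default_ulimits = [
  "nofile=65535:65535",
]
```

After changing the runtime configuration, CRI-O must be restarted for new containers to pick up the limit.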

* Currently, on clusters with SR-IOV network virtual functions configured, a race condition might occur between system services responsible for network device renaming and the TuneD service managed by the Node Tuning Operator. As a consequence, the TuneD profile might become degraded after the node restarts, leading to performance degradation. As a workaround, restart the TuneD pod to restore the profile state. (link:https://issues.redhat.com/browse/OCPBUGS-41934[OCPBUGS-41934])
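The workaround above can be sketched as a single command. The namespace and label selector are assumptions based on the Node Tuning Operator's usual layout; verify them on your cluster (for example, with `oc get pods -A | grep tuned`) before running the deletion:

```shell
# Build the command that deletes the TuneD daemon pods so the DaemonSet
# re-creates them and the TuneD profile is reapplied.
ns="openshift-cluster-node-tuning-operator"   # assumed NTO namespace
cmd="oc delete pod -n ${ns} -l openshift-app=tuned"  # assumed pod label
echo "${cmd}"
# Execute only against the affected cluster:
# eval "${cmd}"
```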

* Currently, pods that use a `Guaranteed` QoS class and request whole CPUs might not restart automatically after a node reboot or kubelet restart. The issue might occur on nodes configured with a static CPU Manager policy and the `full-pcpus-only` specification, when most or all CPUs on the node are already allocated by such workloads. As a workaround, manually delete and re-create the affected pods. (link:https://issues.redhat.com/browse/OCPBUGS-43280[OCPBUGS-43280])
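A pod falls into the affected category when its requests equal its limits and the CPU value is a whole number, which yields the `Guaranteed` QoS class and makes the static CPU Manager pin exclusive CPUs to it. An illustrative spec (all names and values are hypothetical):

```yaml
# Hypothetical pod that receives the Guaranteed QoS class and whole-CPU
# pinning: requests == limits, and cpu is an integer.
apiVersion: v1
kind: Pod
metadata:
  name: example-guaranteed
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    resources:
      requests:
        cpu: "2"
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 1Gi
```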