3 changes: 3 additions & 0 deletions _topic_maps/_topic_map.yml
@@ -2895,6 +2895,9 @@ Topics:
- Name: Creating a performance profile
File: cnf-create-performance-profiles
Distros: openshift-origin,openshift-enterprise
- Name: Shared CPUs for lightweight workload tasks
File: cnf-shared-cpu-for-workloads
Distros: openshift-origin,openshift-enterprise
- Name: Workload partitioning
File: enabling-workload-partitioning
Distros: openshift-origin,openshift-enterprise
129 changes: 129 additions & 0 deletions modules/configuring-a-workload-to-use-shared-cpus.adoc
@@ -0,0 +1,129 @@
:_mod-docs-content-type: PROCEDURE

[id="configuring-a-workload-to-use-shared-cpus_{context}"]
= Configuring a workload to use shared CPUs

You can pin lightweight workload tasks to shared CPUs to improve CPU resource efficiency. When you enable shared CPUs for a node by using a performance profile, and request shared CPUs in a workload's pod specification, containers deployed in the pod receive the `OPENSHIFT_SHARED_CPUS` and `OPENSHIFT_ISOLATED_CPUS` environment variables. These variables list the IDs of the shared and isolated CPUs.

You can use the `OPENSHIFT_SHARED_CPUS` environment variable to pin latency-tolerant application threads to shared CPUs.

For latency-sensitive application threads, use the `OPENSHIFT_ISOLATED_CPUS` environment variable.

.Prerequisites

* Log in as a user with `cluster-admin` privileges.

* You defined and enabled shared CPUs by using a performance profile.

//* You enabled the `TechPreviewNoUpgrade` feature set on the cluster. For more information about enabling a feature set, see the _Additional resources_ section.
// +
// [WARNING]
// ====
// Enabling the `TechPreviewNoUpgrade` feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
// ====

.Procedure

. Create a namespace by running the following command:
+
[source,terminal]
----
$ oc create namespace <namespace_name>
----
+
[NOTE]
====
To use shared CPUs, you must use a namespace with the `workload.mixedcpus.openshift.io/allowed` annotation.
====

. Add the required annotation to the namespace by running the following command:
+
[source,terminal]
----
$ oc annotate namespace <namespace_name> workload.mixedcpus.openshift.io/allowed=''
----
+
.Example output
[source,terminal]
----
namespace/<namespace_name> annotated
----

. Create a `Pod` resource that requests shared CPUs:

.. Create a YAML file that defines the `Pod` resource:
+
--
.Example `shared-cpu-pod.yaml` file
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-pod
  namespace: <namespace_name> <1>
spec:
  runtimeClassName: performance-example-shared-cpu-pp <2>
  containers:
  - name: dpdk-cnt
    image: image-address
    resources:
      requests:
        cpu: "2" <3>
        memory: "100m" <4>
        workload.openshift.io/enable-shared-cpus: "1" <5>
      limits:
        cpu: "2"
        memory: "100m"
        workload.openshift.io/enable-shared-cpus: "1"
----
<1> Specify the namespace with the `workload.mixedcpus.openshift.io/allowed=''` annotation.
<2> Specify the performance profile with the shared CPU configuration in the format `performance-<performance_profile_name>`.
<3> Specify the number of cores required.
+
[IMPORTANT]
====
To use shared CPUs, you must create a pod with a QoS class of `Guaranteed`. To specify a pod with a QoS class of `Guaranteed`, the memory limit must equal the memory request, and the CPU limit must equal the CPU request. The CPU limit and CPU request values must be integers.
====
<4> Specify the memory required.
<5> Enter `1` to enable shared CPUs for the pod. The shared CPUs feature is disabled by default.
--

.. Create the `Pod` resource by running the following command:
+
[source,terminal]
----
$ oc create -f shared-cpu-pod.yaml
----
+
.Example output
[source,terminal]
----
pod/dpdk-pod created
----

.Verification

. Verify that you can access the `OPENSHIFT_SHARED_CPUS` and `OPENSHIFT_ISOLATED_CPUS` environment variables from within the pod. You can use this information to pin lightweight tasks from the pod's container processes to shared CPUs:

.. Start an interactive shell within the target pod by running the following command:
+
[source,terminal]
----
$ oc exec -it dpdk-pod -- /bin/sh
----

.. Verify that you can access the environment variables by running the following command:
+
[source,terminal]
----
# env | grep OPENSHIFT
----
+
.Example output
[source,terminal]
----
OPENSHIFT_SHARED_CPUS=<CPU_ID> <1>
OPENSHIFT_ISOLATED_CPUS=<CPU_ID>
----
<1> You can view the CPU IDs for the shared and isolated CPUs in the `OPENSHIFT_SHARED_CPUS` and `OPENSHIFT_ISOLATED_CPUS` environment variables.
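For example, a container entrypoint script can read these variables to place helper processes on the shared CPUs. The following sketch is illustrative only: the `log-helper` and `main-workload` binaries are hypothetical, and the default CPU values assume the example performance profile.

[source,bash]
----
#!/bin/sh
# Illustrative sketch only: log-helper and main-workload are hypothetical
# binaries. The defaults assume the example profile's CPU sets.
SHARED="${OPENSHIFT_SHARED_CPUS:-2-3}"
ISOLATED="${OPENSHIFT_ISOLATED_CPUS:-4-8}"

# taskset -c accepts the same list syntax as the environment variables,
# for example:
#   taskset -c "$SHARED" ./log-helper &
#   taskset -c "$ISOLATED" ./main-workload
echo "helper CPUs: ${SHARED}, workload CPUs: ${ISOLATED}"
----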
110 changes: 110 additions & 0 deletions modules/configuring-shared-cpus-for-a-workload.adoc
@@ -0,0 +1,110 @@
:_mod-docs-content-type: PROCEDURE

[id="configuring-shared-cpus-for-a-workload_{context}"]
= Configuring shared CPUs in a performance profile

You can define and enable shared CPUs by using a performance profile. You can use these shared CPUs for lightweight application tasks.

.Prerequisites

* Log in as a user with `cluster-admin` privileges.

//* You enabled the `TechPreviewNoUpgrade` feature set on the cluster. For more information about //enabling a feature set, see the _Additional resources_ section.
// +
// [WARNING]
// ====
// Enabling the `TechPreviewNoUpgrade` feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
// ====

.Procedure

. Create a `PerformanceProfile` resource:

.. Create a YAML file that defines the `PerformanceProfile` resource:
+
.Example `shared-cpu-pp.yaml` file
[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-shared-cpu-pp
spec:
  cpu:
    reserved: "0-1"
    isolated: "4-8"
    shared: "2-3" <1>
  workloadHints:
    mixedCpus: true <2>
  nodeSelector:
    node-role.kubernetes.io/performance: "test" <3>
----
<1> Define the shared CPU cores.
<2> Enable the shared CPUs feature by enabling the `mixedCpus` workload hint.
<3> Select the node that you want to use for shared CPUs.

.. Create the `PerformanceProfile` resource by running the following command:
+
[source,terminal]
----
$ oc create -f shared-cpu-pp.yaml
----
+
.Example output
[source,terminal]
----
performanceprofile.performance.openshift.io/example-shared-cpu-pp created
----
+
[NOTE]
====
After you apply a performance profile, the nodes must reboot and return to the `Ready` status. Wait for the nodes to complete these tasks before you continue to the verification steps.
====

.Verification

. Verify that the performance profile is created by running the following command:
+
[source,terminal]
----
$ oc describe performanceprofile example-shared-cpu-pp
----
+
.Example output
[source,terminal]
----
...
Events:
  Type    Reason              Age                 From                            Message
  ----    ------              ----                ----                            -------
  Normal  Creation succeeded  27m (x17 over 35m)  performance-profile-controller  Succeeded to create all components
----

. Verify that the shared CPUs are present under the `reservedSystemCPUs` pool on the target node:

.. Start a debug session on the target node by running the following command:
+
[source,terminal]
----
$ oc debug node/<node_name>
----

.. Set `/host` as the root directory within the debug shell by running the following command:
+
[source,terminal]
----
sh-4.4# chroot /host
----

.. Check that the `reservedSystemCPUs` pool includes the `shared` and `reserved` CPUs by running the following command:
+
[source,terminal]
----
sh-5.1# cat /etc/kubernetes/kubelet.conf | grep "reservedSystemCPUs"
----
+
.Example output
[source,terminal]
----
"reservedSystemCPUs": "0-3",
----
43 changes: 43 additions & 0 deletions scalability_and_performance/cnf-shared-cpu-for-workloads.adoc
@@ -0,0 +1,43 @@
:_mod-docs-content-type: ASSEMBLY
[id="cnf-shared-cpu-for-workloads"]
= Shared CPUs for lightweight workload tasks
include::_attributes/common-attributes.adoc[]
:context: cnf-shared-cpu-for-workloads

toc::[]

Within high-performance workloads, some lightweight application tasks can occupy scarce CPU resources. For example, log printing or configuration processing can occupy an isolated CPU that could be used for more critical workload demands. You can increase workload performance, and more efficiently use CPU resources, by moving lightweight application tasks to a set of shared CPUs.

[NOTE]
====
The shared CPUs feature is not available on HyperShift-hosted clusters.
====

//:FeatureName: Configuring shared CPUs
//include::snippets/technology-preview.adoc[]

You can define and enable a set of shared CPUs by using a performance profile. When you enable shared CPUs, the kubelet's `reservedSystemCPUs` pool, which is typically used only for system housekeeping tasks, is internally partitioned into a `shared` CPU pool and a `reserved` CPU pool.

.Overview of shared CPUs in a node
image::555_OpenShift_shared_CPUs_overview_0324.png[Overview of shared CPUs for pods in a node]

The `shared` CPU pool can process system housekeeping tasks and lightweight application tasks. The `reserved` CPU pool continues to process only system housekeeping tasks. Because the `shared` CPU pool also processes system housekeeping tasks, there might be some latency in processing application tasks. However, this is acceptable given the peripheral nature of the application tasks and the overall improvement in CPU resource efficiency.

[NOTE]
====
System housekeeping tasks can only run on shared CPUs when workload partitioning is disabled.
====

A typical use case for shared CPUs is Data Plane Development Kit (DPDK) applications. DPDK applications require intensive data processing, such as packet forwarding, routing, and network function virtualization (NFV). Moving peripheral DPDK application tasks to a set of shared CPUs prevents these tasks from monopolizing isolated cores meant for critical workload processing.
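As a sketch of this split, a launch wrapper might hand the isolated CPUs to the DPDK fast path and keep peripheral helpers on the shared set. The `dpdk-testpmd` and `log-helper` commands and the default CPU values below are illustrative assumptions, not part of any procedure.

[source,bash]
----
#!/bin/sh
# Illustrative only: the defaults assume the example profile's CPU sets.
ISOLATED="${OPENSHIFT_ISOLATED_CPUS:-4-8}"
SHARED="${OPENSHIFT_SHARED_CPUS:-2-3}"

# Fast-path threads run on isolated CPUs; DPDK's -l flag accepts the
# same list syntax as the environment variable.
echo "fast path:    dpdk-testpmd -l ${ISOLATED}"
# Peripheral helpers (logging, config reloads) stay on shared CPUs.
echo "housekeeping: taskset -c ${SHARED} ./log-helper"
----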

include::modules/configuring-shared-cpus-for-a-workload.adoc[leveloffset=+1]

include::modules/configuring-a-workload-to-use-shared-cpus.adoc[leveloffset=+1]


// [id="{context}-additional-resources"]
// [role="_additional-resources"]
// == Additional resources

// * xref:../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling[Enabling features using feature gates]