Commit 4060f05

OCPBUGS-55174: Updated the changing-cluster-network-mtu file with NIC info
1 parent a8f1d57 commit 4060f05

5 files changed: +68 -98 lines

Diff for: modules/aws-outposts-machine-set.adoc (-4)

@@ -59,7 +59,6 @@ $ oc get machinesets.machine.openshift.io <original_machine_set_name_1> \
 -n openshift-machine-api -o yaml
 ----
 +
---
 .Example output
 [source,yaml]
 ----
@@ -90,11 +89,9 @@ spec:
 <1> The cluster infrastructure ID.
 <2> A default node label. For AWS Outposts, you use the `outposts` role.
 <3> The omitted `providerSpec` section includes values that must be configured for your Outpost.
---
 
 . Configure the new compute machine set to create edge compute machines in the Outpost by editing the `<new_machine_set_name_1>.yaml` file:
 +
---
 .Example compute machine set for AWS Outposts
 [source,yaml]
 ----
@@ -166,7 +163,6 @@ spec:
 <6> Specifies the AWS region in which the Outpost availability zone exists.
 <7> Specifies the dedicated subnet for your Outpost.
 <8> Specifies a taint to prevent workloads from being scheduled on nodes that have the `node-role.kubernetes.io/outposts` label. To schedule user workloads in the Outpost, you must specify a corresponding toleration in the `Deployment` resource for your application.
---
 
 . Save your changes.

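Callout <8> above notes that user workloads need a toleration that matches the `node-role.kubernetes.io/outposts` taint. The following is a minimal sketch of such a toleration in a `Deployment`, assuming the taint uses the `NoSchedule` effect; the application name, labels, and image are placeholders rather than values from this commit:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: outposts-app # placeholder application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: outposts-app
  template:
    metadata:
      labels:
        app: outposts-app
    spec:
      nodeSelector:
        node-role.kubernetes.io/outposts: "" # schedule onto Outpost edge nodes
      tolerations:
      - key: node-role.kubernetes.io/outposts # must match the taint key set in the machine set
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: registry.example.com/outposts-app:latest # placeholder image
----
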
Diff for: modules/machineset-creating.adoc (+1 -8)

@@ -103,7 +103,6 @@ $ oc get machineset <machineset_name> \
 -n openshift-machine-api -o yaml
 ----
 +
---
 .Example output
 [source,yaml]
 ----
@@ -132,14 +131,8 @@ spec:
 ...
 ----
 <1> The cluster infrastructure ID.
-<2> A default node label.
-+
-[NOTE]
-====
-For clusters that have user-provisioned infrastructure, a compute machine set can only create `worker` and `infra` type machines.
-====
+<2> A default node label. For clusters that have user-provisioned infrastructure, a compute machine set can only create `worker` and `infra` type machines.
 <3> The values in the `<providerSpec>` section of the compute machine set CR are platform-specific. For more information about `<providerSpec>` parameters in the CR, see the sample compute machine set CR configuration for your provider.
---
 
 ifdef::vsphere[]
 .. If you are creating a compute machine set for a cluster that has user-provisioned infrastructure, note the following important values:

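The merged callout <2> states that on user-provisioned infrastructure a compute machine set can only create `worker` and `infra` machines. As a hedged illustration of where that role appears, the following fragment sketches a compute machine set that would create `infra` machines; the name and zone are placeholders and the `providerSpec` is omitted:

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructure_id>-infra-<zone> # placeholder machine set name
  namespace: openshift-machine-api
spec:
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machine-role: infra # must be worker or infra on user-provisioned infrastructure
        machine.openshift.io/cluster-api-machine-type: infra
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: "" # node label applied to the resulting machines
----
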
Diff for: modules/nw-aws-load-balancer-with-outposts.adoc (+2 -5)

@@ -27,7 +27,6 @@ You must annotate Ingress resources with the Outpost subnet or the VPC subnet, b
 
 * Configure the `Ingress` resource to use a specified subnet:
 +
---
 .Example `Ingress` resource configuration
 [source,yaml]
 ----
@@ -50,7 +49,5 @@ spec:
 port:
 number: 80
 ----
-<1> Specifies the subnet to use.
-* To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID.
-* To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
---
+<1> Specifies the subnet to use. To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID. To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
+

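The hunk above elides the `metadata` of the example `Ingress`, so the annotation that callout <1> refers to is not visible in the diff. The following is a hedged sketch of how a subnet is typically specified for the AWS Load Balancer Controller using its standard `alb.ingress.kubernetes.io/subnets` annotation; the resource names, IngressClass, and subnet ID are placeholders:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress # placeholder name
  annotations:
    alb.ingress.kubernetes.io/subnets: subnet-0123456789abcdef0 # Outpost subnet ID, or two or more cloud subnets in different availability zones
spec:
  ingressClassName: alb # placeholder IngressClass for the controller
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service # placeholder backend service
            port:
              number: 80
----
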
Diff for: modules/nw-cluster-mtu-change.adoc (+62 -81)

@@ -23,8 +23,7 @@ ifndef::outposts[= Changing the cluster network MTU]
 ifdef::outposts[= Changing the cluster network MTU to support AWS Outposts]
 
 ifdef::outposts[]
-During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster.
-You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
+During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
 endif::outposts[]
 
 ifndef::outposts[As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster.]
@@ -71,61 +70,65 @@ Status:
 ----
 
 ifndef::local-zone,wavelength-zone,post-aws-zones,outposts[]
-. Prepare your configuration for the hardware MTU:
-
-** If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration:
+. Prepare your configuration for the hardware MTU by selecting one of the following methods:
++
+.. If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration:
 +
 [source,text]
 ----
-dhcp-option-force=26,<mtu>
+dhcp-option-force=26,<mtu> <1>
 ----
+<1> Where `<mtu>` specifies the hardware MTU for the DHCP server to advertise.
++
+.. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
++
+.. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This method is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
 +
---
-where:
-
-`<mtu>`:: Specifies the hardware MTU for the DHCP server to advertise.
---
-
-** If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
-
-** If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
-
 ... Find the primary network interface by entering the following command:
-
 +
 [source,terminal]
 ----
-$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0
+$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 <1> <2>
 ----
+<1> Where `<node_name>` specifies the name of a node in your cluster.
+<2> Where `ovs-if-phys0` is the primary network interface. For nodes that use multiple NIC bonds, append `bond-sub0` for the primary NIC bond interface and `bond-sub1` for the secondary NIC bond interface.
 +
---
-where:
-
-`<node_name>`:: Specifies the name of a node in your cluster.
---
-
-... Create the following NetworkManager configuration in the `<interface>-mtu.conf` file:
+... Create the following NetworkManager configuration in the `<interface>-mtu.conf` file.
 +
 .Example NetworkManager connection configuration
 [source,ini]
 ----
 [connection-<interface>-mtu]
-match-device=interface-name:<interface>
-ethernet.mtu=<mtu>
+match-device=interface-name:<interface> <1>
+ethernet.mtu=<mtu> <2>
 ----
+<1> Where `<interface>` specifies the primary network interface name.
+<2> Where `<mtu>` specifies the new hardware MTU value.
 +
---
-where:
+[NOTE]
+====
+For nodes that use a network interface controller (NIC) bond interface, list the bond interface and any sub-interfaces in the `<bond-interface>-mtu.conf` file.
 
-`<mtu>`:: Specifies the new hardware MTU value.
-`<interface>`:: Specifies the primary network interface name.
---
+.Example NetworkManager connection configuration
+[source,ini]
+----
+[bond0-mtu]
+match-device=interface-name:bond0
+ethernet.mtu=9000
 
-... Create two `MachineConfig` objects, one for the control plane nodes and another for the worker nodes in your cluster:
+[connection-eth0-mtu]
+match-device=interface-name:eth0
+ethernet.mtu=9000
 
-.... Create the following Butane config in the `control-plane-interface.bu` file:
+[connection-eth1-mtu]
+match-device=interface-name:eth1
+ethernet.mtu=9000
+----
+====
++
+... Create the following Butane config in the `control-plane-interface.bu` file, which is the `MachineConfig` object for the control plane nodes:
 +
-[source,yaml, subs="attributes+"]
+[source,yaml,subs="attributes+"]
 ----
 variant: openshift
 version: {product-version}.0
@@ -141,11 +144,11 @@ storage:
 mode: 0600
 ----
 <1> Specify the NetworkManager connection name for the primary network interface.
-<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
-
-.... Create the following Butane config in the `worker-interface.bu` file:
+<2> Specify the local filename for the updated NetworkManager configuration file from the previous step. For NIC bonds, specify the name for the `<bond-interface>-mtu.conf` file.
++
+... Create the following Butane config in the `worker-interface.bu` file, which is the `MachineConfig` object for the compute nodes:
 +
-[source,yaml, subs="attributes+"]
+[source,yaml,subs="attributes+"]
 ----
 variant: openshift
 version: {product-version}.0
@@ -161,9 +164,9 @@ storage:
 mode: 0600
 ----
 <1> Specify the NetworkManager connection name for the primary network interface.
-<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
-
-.... Create `MachineConfig` objects from the Butane configs by running the following command:
+<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
++
+... Create `MachineConfig` objects from the Butane configs by running the following command:
 +
 [source,terminal]
 ----
@@ -183,16 +186,11 @@ endif::local-zone,wavelength-zone,post-aws-zones,outposts[]
 [source,terminal]
 ----
 $ oc patch Network.operator.openshift.io cluster --type=merge --patch \
-'{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }'
+'{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' <1> <2> <3>
 ----
-+
---
-where:
-
-`<overlay_from>`:: Specifies the current cluster network MTU value.
-`<overlay_to>`:: Specifies the target MTU for the cluster network. This value is set relative to the value of `<machine_to>`. For OVN-Kubernetes, this value must be `100` less than the value of `<machine_to>`.
-`<machine_to>`:: Specifies the MTU for the primary network interface on the underlying host network.
---
+<1> Where `<overlay_from>` specifies the current cluster network MTU value.
+<2> Where `<overlay_to>` specifies the target MTU for the cluster network. This value is set relative to the value of `<machine_to>`. For OVN-Kubernetes, this value must be `100` less than the value of `<machine_to>`.
+<3> Where `<machine_to>` specifies the MTU for the primary network interface on the underlying host network.
 +
 ifndef::outposts[]
 .Example that increases the cluster MTU
@@ -246,19 +244,16 @@ machineconfiguration.openshift.io/state: Done
 
 .. Verify that the following statements are true:
 +
---
 * The value of `machineconfiguration.openshift.io/state` field is `Done`.
 * The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
---
 
 .. To confirm that the machine config is correct, enter the following command:
 +
 [source,terminal]
 ----
-$ oc get machineconfig <config_name> -o yaml | grep ExecStart
+$ oc get machineconfig <config_name> -o yaml | grep ExecStart <1>
 ----
-+
-where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
+<1> Where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
 +
 The machine config must include the following update to the systemd configuration:
 +
@@ -269,7 +264,7 @@ ExecStart=/usr/local/bin/mtu-migration.sh
 
 ifndef::local-zone,wavelength-zone,post-aws-zones,outposts[]
 . Update the underlying network interface MTU value:
-
++
 ** If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster.
 +
 [source,terminal]
@@ -278,7 +273,7 @@ $ for manifest in control-plane-interface worker-interface; do
 oc create -f $manifest.yaml
 done
 ----
-
++
 ** If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure.
 
 . As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
@@ -316,37 +311,28 @@ machineconfiguration.openshift.io/state: Done
 +
 Verify that the following statements are true:
 +
---
-* The value of `machineconfiguration.openshift.io/state` field is `Done`.
-* The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
---
+* The value of `machineconfiguration.openshift.io/state` field is `Done`.
+* The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
 
 .. To confirm that the machine config is correct, enter the following command:
 +
 [source,terminal]
 ----
-$ oc get machineconfig <config_name> -o yaml | grep path:
+$ oc get machineconfig <config_name> -o yaml | grep path: <1>
 ----
-+
-where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
+<1> Where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
 +
 If the machine config is successfully deployed, the previous output contains the `/etc/NetworkManager/conf.d/99-<interface>-mtu.conf` file path and the `ExecStart=/usr/local/bin/mtu-migration.sh` line.
 endif::local-zone,wavelength-zone,post-aws-zones,outposts[]
 
-. To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin:
+. To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin. Replace `<mtu>` with the new cluster network MTU that you specified with `<overlay_to>`.
 +
 [source,terminal]
-+
 ----
 $ oc patch Network.operator.openshift.io cluster --type=merge --patch \
 '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}'
 ----
-+
---
-where:
-
-`<mtu>`:: Specifies the new cluster network MTU that you specified with `<overlay_to>`.
---
+Where `<mtu>` specifies the new cluster network MTU that you specified with `<overlay_to>`.
 
 . After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
 +
@@ -389,15 +375,10 @@ $ oc get nodes
 +
 [source,terminal]
 ----
-$ oc debug node/<node> -- chroot /host ip address show <interface>
+$ oc debug node/<node> -- chroot /host ip address show <interface> <1> <2>
 ----
-+
-where:
-+
---
-`<node>`:: Specifies a node from the output from the previous step.
-`<interface>`:: Specifies the primary network interface name for the node.
---
+<1> Where `<node>` specifies a node from the output from the previous step.
+<2> Where `<interface>` specifies the primary network interface name for the node.
 +
 .Example output
 [source,text]

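The hunks above show only the first and last lines of the Butane configs (`variant`, `version`, `mode: 0600`) plus the two callouts. For orientation, this is a minimal sketch of what a complete `control-plane-interface.bu` might look like, assembled from the fields visible in this diff; the `metadata` name and role label are assumptions, not content of this commit:

[source,yaml]
----
variant: openshift
version: {product-version}.0 # AsciiDoc attribute; resolves to the cluster version
metadata:
  name: 01-control-plane-interface # assumed MachineConfig name
  labels:
    machineconfiguration.openshift.io/role: master # worker-interface.bu would target the worker role
storage:
  files:
    - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf # target path on the node; checked later in the verification step
      contents:
        local: <interface>-mtu.conf # local NetworkManager file created earlier; for NIC bonds, the <bond-interface>-mtu.conf file
      mode: 0600
----
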
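The new callouts on the `oc patch` command describe `<overlay_from>`, `<overlay_to>`, and `<machine_to>` abstractly. As a worked illustration of the OVN-Kubernetes constraint that the cluster network MTU must be `100` less than the machine MTU, with 1400 and 9000 chosen only as example values:

[source,terminal]
----
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
'{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 8900 } , "machine": { "to" : 9000 } } } } }'
----

Here the overlay network moves from 1400 to 8900 while the host interface moves to 9000, preserving the 100-byte headroom that OVN-Kubernetes requires.
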
Diff for: networking/changing-cluster-network-mtu.adoc (+3)

@@ -9,7 +9,10 @@ toc::[]
 [role="_abstract"]
 As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change.
 
+// About the cluster MTU
 include::modules/nw-cluster-mtu-change-about.adoc[leveloffset=+1]
+
+// Changing the cluster network MTU or Changing the cluster network MTU to support AWS Outposts
 include::modules/nw-cluster-mtu-change.adoc[leveloffset=+1]
 
 [role="_additional-resources"]