File: modules/nw-cluster-mtu-change.adoc
ifndef::outposts[= Changing the cluster network MTU]
ifdef::outposts[= Changing the cluster network MTU to support AWS Outposts]
ifdef::outposts[]
During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
endif::outposts[]
ifndef::outposts[As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster.]
. Prepare your configuration for the hardware MTU:
.. If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration:
[source,text]
----
dhcp-option-force=26,<mtu> <1>
----
<1> Where `<mtu>` specifies the hardware MTU for the DHCP server to advertise.
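For example, to advertise a jumbo-frame MTU of `9000` (the value here is illustrative, not a recommendation; use the MTU that your network supports), the dnsmasq option would read:

[source,text]
----
dhcp-option-force=26,9000
----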
.. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
.. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
... Find the primary network interface by entering the following command:
[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 <1> <2>
----
<1> Where `<node_name>` specifies the name of a node in your cluster.
<2> Where `ovs-if-phys0` is the primary network interface. For nodes that use multiple NIC bonds, specify `bond-sub0` for the primary NIC bond interface and `bond-sub1` for the secondary NIC bond interface.
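The command prints only the interface name for the matching connection. For example (the interface name `ens3` here is hypothetical; the actual name varies by platform):

[source,terminal]
----
ens3
----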
... Create the following NetworkManager configuration in the `<interface>-mtu.conf` file.
.Example NetworkManager connection configuration
[source,ini]
----
[connection-<interface>-mtu]
match-device=interface-name:<interface> <1>
103
+
ethernet.mtu=<mtu> <2>
----
<1> Where `<interface>` specifies the primary network interface name.
<2> Where `<mtu>` specifies the new hardware MTU value.
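As a filled-in illustration (the interface name `enp1s0` and the MTU `9000` are example values only), a file named `enp1s0-mtu.conf` would contain:

[source,ini]
----
[connection-enp1s0-mtu]
match-device=interface-name:enp1s0
ethernet.mtu=9000
----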
[NOTE]
====
For nodes that use network interface controller (NIC) bonds, specify the primary NIC bond and any secondary NIC bond in a `<bond-interface>-mtu.conf` file.
.Example NetworkManager connection configuration
[source,ini]
----
[connection-<bond-interface>-mtu]
match-device=interface-name:<bond-interface>
ethernet.mtu=9000
----
====
... Create the following Butane config in the `control-plane-interface.bu` file, which defines the `MachineConfig` object for the control plane nodes:
[source,yaml,subs="attributes+"]
----
variant: openshift
version: {product-version}.0
# ...
storage:
# ...
mode: 0600
----
<1> Specify the NetworkManager connection name for the primary network interface.
<2> Specify the local filename for the updated NetworkManager configuration file from the previous step. For NIC bonds, specify the name for the `<bond-interface>-mtu.conf` file.
... Create the following Butane config in the `worker-interface.bu` file, which defines the `MachineConfig` object for the compute nodes:
[source,yaml,subs="attributes+"]
----
variant: openshift
version: {product-version}.0
# ...
storage:
# ...
mode: 0600
----
<1> Specify the NetworkManager connection name for the primary network interface.
<2> Specify the local filename for the updated NetworkManager configuration file from the previous step. For NIC bonds, specify the name for the `<bond-interface>-mtu.conf` file.
... Create `MachineConfig` objects from the Butane configs by running the following command:
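A typical Butane invocation for these files (assuming the `butane` CLI is installed locally; the output filenames are conventional choices) looks like this:

[source,terminal]
----
$ butane control-plane-interface.bu -o control-plane-interface.yaml
$ butane worker-interface.bu -o worker-interface.yaml
----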
<1> Where `<overlay_from>` specifies the current cluster network MTU value.
<2> Where `<overlay_to>` specifies the target MTU for the cluster network. This value is set relative to the value of `<machine_to>`. For OVN-Kubernetes, this value must be `100` less than the value of `<machine_to>`.
<3> Where `<machine_to>` specifies the MTU for the primary network interface on the underlying host network.
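The relationship between the two values is simple arithmetic. This shell sketch (the MTU values are examples only) assumes OVN-Kubernetes, which reserves 100 bytes of overlay overhead:

[source,terminal]
----
machine_to=9000                   # MTU of the primary host network interface
overlay_to=$((machine_to - 100))  # OVN-Kubernetes overlay MTU: 100 bytes less
echo "$overlay_to"                # prints 8900
----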
. Update the underlying network interface MTU value:
** If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The Machine Config Operator automatically performs a rolling reboot of the nodes in your cluster.
[source,terminal]
----
$ for manifest in control-plane-interface worker-interface; do
oc create -f $manifest.yaml
done
----
** If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure.
. As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
Verify that the following statements are true:
[source,terminal]
----
$ oc get machineconfig <config_name> -o yaml | grep path: <1>
----
<1> Where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
If the machine config is successfully deployed, the previous output contains the `/etc/NetworkManager/conf.d/99-<interface>-mtu.conf` file path and the `ExecStart=/usr/local/bin/mtu-migration.sh` line.
<1> Where `<mtu>` specifies the new cluster network MTU that you specified with `<overlay_to>`.
. After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
File: networking/changing-cluster-network-mtu.adoc
toc::[]
[role="_abstract"]
As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change.