modules/aws-outposts-machine-set.adoc (-4)

@@ -59,7 +59,6 @@ $ oc get machinesets.machine.openshift.io <original_machine_set_name_1> \
    -n openshift-machine-api -o yaml
 ----
 +
---
 .Example output
 [source,yaml]
 ----
@@ -90,11 +89,9 @@ spec:
 <1> The cluster infrastructure ID.
 <2> A default node label. For AWS Outposts, you use the `outposts` role.
 <3> The omitted `providerSpec` section includes values that must be configured for your Outpost.
---
 
 . Configure the new compute machine set to create edge compute machines in the Outpost by editing the `<new_machine_set_name_1>.yaml` file:
 +
---
 .Example compute machine set for AWS Outposts
 [source,yaml]
 ----
@@ -166,7 +163,6 @@ spec:
 <6> Specifies the AWS region in which the Outpost availability zone exists.
 <7> Specifies the dedicated subnet for your Outpost.
 <8> Specifies a taint to prevent workloads from being scheduled on nodes that have the `node-role.kubernetes.io/outposts` label. To schedule user workloads in the Outpost, you must specify a corresponding toleration in the `Deployment` resource for your application.
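An aside, not part of the diff: callout <8> pairs a node taint with a `Deployment` toleration. A minimal sketch of the toleration side, assuming the taint uses the `NoSchedule` effect (the effect is not visible in these hunks) — the application name and image are placeholders:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-outpost-app                            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-outpost-app
  template:
    metadata:
      labels:
        app: my-outpost-app
    spec:
      tolerations:
      - key: node-role.kubernetes.io/outposts     # matches the taint from callout <8>
        operator: Exists
        effect: NoSchedule                        # assumed effect; mirror the effect set on your taint
      containers:
      - name: app
        image: registry.example.com/my-app:latest # placeholder image
----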
modules/machineset-creating.adoc (+1 -8)

@@ -103,7 +103,6 @@ $ oc get machineset <machineset_name> \
    -n openshift-machine-api -o yaml
 ----
 +
---
 .Example output
 [source,yaml]
 ----
@@ -132,14 +131,8 @@ spec:
 ...
 ----
 <1> The cluster infrastructure ID.
-<2> A default node label.
-+
-[NOTE]
-====
-For clusters that have user-provisioned infrastructure, a compute machine set can only create `worker` and `infra` type machines.
-====
+<2> A default node label. For clusters that have user-provisioned infrastructure, a compute machine set can only create `worker` and `infra` type machines.
 <3> The values in the `<providerSpec>` section of the compute machine set CR are platform-specific. For more information about `<providerSpec>` parameters in the CR, see the sample compute machine set CR configuration for your provider.
---
 
 ifdef::vsphere[]
 .. If you are creating a compute machine set for a cluster that has user-provisioned infrastructure, note the following important values:
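An aside, not part of the diff: the copy-and-edit workflow this module documents can be sketched with standard `oc` commands; the machine set name and file name are placeholders:

[source,terminal]
----
# Export an existing compute machine set as a template
$ oc get machineset <machineset_name> -n openshift-machine-api -o yaml > new-machineset.yaml

# Edit metadata.name and the matching selector/template labels, then create the new machine set
$ oc create -f new-machineset.yaml
----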
modules/nw-aws-load-balancer-with-outposts.adoc (+2 -5)

@@ -27,7 +27,6 @@ You must annotate Ingress resources with the Outpost subnet or the VPC subnet, b
 
 * Configure the `Ingress` resource to use a specified subnet:
 +
---
 .Example `Ingress` resource configuration
 [source,yaml]
 ----
@@ -50,7 +49,5 @@ spec:
         port:
           number: 80
 ----
-<1> Specifies the subnet to use.
-* To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID.
-* To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
---
+<1> Specifies the subnet to use. To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID. To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
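An aside, not part of the diff: a minimal `Ingress` sketch matching the merged callout. The `alb.ingress.kubernetes.io/subnets` annotation is the AWS Load Balancer Controller convention and the subnet ID is a placeholder; the module's full example, elided from these hunks, is authoritative:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    alb.ingress.kubernetes.io/subnets: subnet-0123456789abcdef0  # Outpost subnet ID placeholder <1>
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
----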
modules/nw-cluster-mtu-change.adoc (+62 -81)

@@ -23,8 +23,7 @@ ifndef::outposts[= Changing the cluster network MTU]
 ifdef::outposts[= Changing the cluster network MTU to support AWS Outposts]
 
 ifdef::outposts[]
-During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster.
-You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
+During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
 endif::outposts[]
 
 ifndef::outposts[As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster.]
@@ … @@
-. Prepare your configuration for the hardware MTU:
-
-** If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration:
+. Prepare your configuration for the hardware MTU by selecting one of the following methods:
++
+.. If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration:
 +
 [source,text]
 ----
-dhcp-option-force=26,<mtu>
+dhcp-option-force=26,<mtu> <1>
 ----
+<1> Where `<mtu>` specifies the hardware MTU for the DHCP server to advertise.
++
+.. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
++
+.. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This method is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
 +
---
-where:
-
-`<mtu>`:: Specifies the hardware MTU for the DHCP server to advertise.
---
-
-** If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
-
-** If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
-
 ... Find the primary network interface by entering the following command:
-
 +
 [source,terminal]
 ----
-$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0
+$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 <1> <2>
 ----
+<1> Where `<node_name>` specifies the name of a node in your cluster.
+<2> Where `ovs-if-phys0` is the primary network interface. For nodes that use multiple NIC bonds, append `bond-sub0` for the primary NIC bond interface and `bond-sub1` for the secondary NIC bond interface.
 +
---
-where:
-
-`<node_name>`:: Specifies the name of a node in your cluster.
---
-
-... Create the following NetworkManager configuration in the `<interface>-mtu.conf` file:
+... Create the following NetworkManager configuration in the `<interface>-mtu.conf` file.
 +
 .Example NetworkManager connection configuration
 [source,ini]
 ----
 [connection-<interface>-mtu]
-match-device=interface-name:<interface>
-ethernet.mtu=<mtu>
+match-device=interface-name:<interface> <1>
+ethernet.mtu=<mtu> <2>
 ----
+<1> Where `<interface>` specifies the primary network interface name.
+<2> Where `<mtu>` specifies the new hardware MTU value.
 +
---
-where:
+[NOTE]
+====
+For nodes that use a network interface controller (NIC) bond interface, list the bond interface and any sub-interfaces in the `<bond-interface>-mtu.conf` file.
 
-`<mtu>`:: Specifies the new hardware MTU value.
-`<interface>`:: Specifies the primary network interface name.
---
+.Example NetworkManager connection configuration
+[source,ini]
+----
+[bond0-mtu]
+match-device=interface-name:bond0
+ethernet.mtu=9000
 
-... Create two `MachineConfig` objects, one for the control plane nodes and another for the worker nodes in your cluster:
+[connection-eth0-mtu]
+match-device=interface-name:eth0
+ethernet.mtu=9000
 
-.... Create the following Butane config in the `control-plane-interface.bu` file:
+[connection-eth1-mtu]
+match-device=interface-name:eth1
+ethernet.mtu=9000
+----
+====
++
+... Create the following Butane config in the `control-plane-interface.bu` file, which is the `MachineConfig` object for the control plane nodes:
 +
-[source,yaml,subs="attributes+"]
+[source,yaml,subs="attributes+"]
 ----
 variant: openshift
 version: {product-version}.0
@@ -141,11 +144,11 @@ storage:
       mode: 0600
 ----
 <1> Specify the NetworkManager connection name for the primary network interface.
-<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
-
-.... Create the following Butane config in the `worker-interface.bu` file:
+<2> Specify the local filename for the updated NetworkManager configuration file from the previous step. For NIC bonds, specify the name for the `<bond-interface>-mtu.conf` file.
++
+... Create the following Butane config in the `worker-interface.bu` file, which is the `MachineConfig` object for the compute nodes:
 +
-[source,yaml,subs="attributes+"]
+[source,yaml,subs="attributes+"]
 ----
 variant: openshift
 version: {product-version}.0
@@ -161,9 +164,9 @@ storage:
       mode: 0600
 ----
 <1> Specify the NetworkManager connection name for the primary network interface.
-<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
-
-.... Create `MachineConfig` objects from the Butane configs by running the following command:
+<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
++
+... Create `MachineConfig` objects from the Butane configs by running the following command:
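An aside, not part of the diff: the Butane-to-MachineConfig command referenced here sits outside the hunks shown. A sketch of its conventional form, assuming the `butane` binary is installed and the `.bu` files and referenced local files are in the working directory:

[source,terminal]
----
# Render each Butane config to a MachineConfig manifest (file names assumed)
$ for manifest in control-plane-interface worker-interface; do
    butane --files-dir . $manifest.bu > $manifest.yaml
  done
----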
@@ … @@
-`<overlay_from>`:: Specifies the current cluster network MTU value.
-`<overlay_to>`:: Specifies the target MTU for the cluster network. This value is set relative to the value of `<machine_to>`. For OVN-Kubernetes, this value must be `100` less than the value of `<machine_to>`.
-`<machine_to>`:: Specifies the MTU for the primary network interface on the underlying host network.
---
+<1> Where `<overlay_from>` specifies the current cluster network MTU value.
+<2> Where `<overlay_to>` specifies the target MTU for the cluster network. This value is set relative to the value of `<machine_to>`. For OVN-Kubernetes, this value must be `100` less than the value of `<machine_to>`.
+<3> Where `<machine_to>` specifies the MTU for the primary network interface on the underlying host network.
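An aside, not part of the diff: the `oc patch` command that these callouts annotate is elided from the hunks shown. Its conventional shape in the MTU-migration procedure is sketched below; treat the exact payload as an assumption to verify against the full module:

[source,terminal]
----
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
  '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> }, "machine": { "to": <machine_to> } } } } }'
----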
@@ … @@
 * The value of `machineconfiguration.openshift.io/state` field is `Done`.
 * The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
---
 
 .. To confirm that the machine config is correct, enter the following command:
 +
 [source,terminal]
 ----
-$ oc get machineconfig <config_name> -o yaml | grep ExecStart
@@ … @@
 . Update the underlying network interface MTU value:
-
++
 ** If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster.
 +
 [source,terminal]
@@ -278,7 +273,7 @@ $ for manifest in control-plane-interface worker-interface; do
     oc create -f $manifest.yaml
   done
 ----
-
++
 ** If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure.
 
 . As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
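An aside, not part of the diff: the status command referenced here falls outside the hunks shown. The standard check is `oc get machineconfigpool` (often abbreviated `oc get mcp`), for example:

[source,terminal]
----
# Wait until UPDATED reports True and UPDATING reports False for every pool
$ oc get machineconfigpool
----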
@@ … @@
-* The value of `machineconfiguration.openshift.io/state` field is `Done`.
-* The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
---
+* The value of `machineconfiguration.openshift.io/state` field is `Done`.
+* The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
 
 .. To confirm that the machine config is correct, enter the following command:
 +
 [source,terminal]
 ----
-$ oc get machineconfig <config_name> -o yaml | grep path:
@@ … @@
-where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
+<1> Where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
 +
 If the machine config is successfully deployed, the previous output contains the `/etc/NetworkManager/conf.d/99-<interface>-mtu.conf` file path and the `ExecStart=/usr/local/bin/mtu-migration.sh` line.
@@ … @@
-. To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin:
+. To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin. Replace `<mtu>` with the new cluster network MTU that you specified with `<overlay_to>`.
@@ … @@
-`<mtu>`:: Specifies the new cluster network MTU that you specified with `<overlay_to>`.
---
+Where `<mtu>` specifies the new cluster network MTU that you specified with `<overlay_to>`.
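An aside, not part of the diff: the finalize command itself is elided from the hunks shown. For OVN-Kubernetes it conventionally clears the migration field and sets the new MTU; a sketch to verify against the full module:

[source,terminal]
----
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
  '{"spec": { "migration": null, "defaultNetwork": { "ovnKubernetesConfig": { "mtu": <mtu> } } } }'
----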
@@ … @@
 
 . After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
 +
@@ -389,15 +375,10 @@ $ oc get nodes
 +
 [source,terminal]
 ----
-$ oc debug node/<node> -- chroot /host ip address show <interface>
+$ oc debug node/<node> -- chroot /host ip address show <interface> <1> <2>
 ----
-
-where:
-
---
-`<node>`:: Specifies a node from the output from the previous step.
-`<interface>`:: Specifies the primary network interface name for the node.
---
++
+<1> Where `<node>` specifies a node from the output from the previous step.
++
+<2> Where `<interface>` specifies the primary network interface name for the node.
networking/changing-cluster-network-mtu.adoc (+3)

@@ -9,7 +9,10 @@ toc::[]
 [role="_abstract"]
 As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change.